Sequential multi-hypothesis testing for compound Poisson processes
This article was downloaded by: [Stony Brook University] on 24 October 2014, at 21:33. Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales, Registered Number 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

Stochastics: An International Journal of Probability and Stochastic Processes (formerly Stochastics and Stochastics Reports). Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/gssr20

Sequential multi-hypothesis testing for compound Poisson processes
Savas Dayanik (a), H. Vincent Poor (b) & Semih O. Sezer (c)
(a) Department of Operations Research and Financial Engineering, Bendheim Center for Finance, Princeton University, Princeton, NJ 08544, USA
(b) School of Engineering and Applied Science, Princeton University, Princeton, NJ 08544, USA
(c) Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA
Published online: 05 Nov 2008.

To cite this article: Savas Dayanik, H. Vincent Poor & Semih O. Sezer (2008) Sequential multi-hypothesis testing for compound Poisson processes, Stochastics: An International Journal of Probability and Stochastic Processes, 80:1, 19-50, DOI: 10.1080/17442500701594490

To link to this article: http://dx.doi.org/10.1080/17442500701594490
Sequential multi-hypothesis testing for compound Poissonprocesses
SAVAS DAYANIK†*, H. VINCENT POOR‡§ and SEMIH O. SEZER{k
†Department of Operations Research and Financial Engineering, Bendheim Center for Finance,Princeton University, Princeton, NJ 08544, USA
‡School of Engineering and Applied Science, Princeton University, Princeton, NJ 08544, USA{Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA
(Received 1 November 2006; in final form 23 June 2007)
Suppose that there are finitely many simple hypotheses about the unknown arrival rate and mark distribution of a compound Poisson process, and that exactly one of them is correct. The objective is to determine the correct hypothesis with minimal error probability and as soon as possible after the observation of the process starts. This problem is formulated in a Bayesian framework, and its solution is presented. Provably convergent numerical methods and practical near-optimal strategies are described and illustrated on various examples.

Keywords: Sequential hypothesis testing; Sequential analysis; Compound Poisson processes; Optimal stopping

AMS Subject Classification: 62L10; 62L15; 62C10; 60G40
1. Introduction
Let $X$ be a compound Poisson process defined as

$$X_t = X_0 + \sum_{i=1}^{N_t} Y_i, \qquad t \ge 0, \tag{1}$$

where $N$ is a simple Poisson process with some arrival rate $\lambda$, and $Y_1, Y_2, \ldots$ are independent $\mathbb{R}^d$-valued random variables with some common distribution $\nu(\cdot)$ such that $\nu(\{0\}) = 0$. Suppose that the characteristics $(\lambda, \nu)$ of the process $X$ are unknown, and exactly one of $M$ distinct hypotheses,

$$H_1: (\lambda,\nu) = (\lambda_1,\nu_1), \quad H_2: (\lambda,\nu) = (\lambda_2,\nu_2), \quad \ldots, \quad H_M: (\lambda,\nu) = (\lambda_M,\nu_M), \tag{2}$$

is correct. Let $\Theta$ be the index of the correct hypothesis, and $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_M$ without loss of generality.
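To fix ideas, the observation model in (1) is easy to simulate: draw exponential inter-arrival times for $N$ and attach an i.i.d. mark at each arrival. The following minimal Python sketch uses illustrative parameter choices (a rate-2 process with exponential marks) that are not taken from the paper.

```python
import random

def simulate_compound_poisson(lam, mark_sampler, horizon, rng):
    """Simulate the jump times sigma_1, sigma_2, ... and marks Y_1, Y_2, ...
    of the process X in (1) on [0, horizon]: exponential(lam) inter-arrival
    times for N, plus one i.i.d. mark per arrival."""
    t, times, marks = 0.0, [], []
    while True:
        t += rng.expovariate(lam)          # next arrival of the Poisson process N
        if t > horizon:
            return times, marks
        times.append(t)
        marks.append(mark_sampler(rng))    # Y_k ~ nu, drawn independently

# Illustrative hypothesis (not from the paper): rate 2 with exponential(1) marks.
rng = random.Random(7)
times, marks = simulate_compound_poisson(2.0, lambda r: r.expovariate(1.0), 10.0, rng)
```

Under hypothesis $H_i$ one would call this with `lam = lambda_i` and a sampler for $\nu_i$; the testing problem below is to recover the index $i$ from one observed path.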
Stochastics: An International Journal of Probability and Stochastic Processes, Vol. 80, No. 1, February 2008, 19-50
ISSN 1744-2508 print / ISSN 1744-2516 online © 2008 Taylor & Francis
http://www.tandf.co.uk/journals
DOI: 10.1080/17442500701594490

*Corresponding author. Email: [email protected]
§Email: [email protected]
‖Email: [email protected]
At time $t = 0$, the hypothesis $H_i$, $i \in \mathcal{I} := \{1, \ldots, M\}$ is correct with prior probability $\mathbb{P}_\pi\{\Theta = i\} = \pi_i$ for some $\pi \in E := \{(\pi_1, \ldots, \pi_M) \in [0,1]^M : \sum_{k=1}^M \pi_k = 1\}$, and we start observing the process $X$. The objective is to determine the correct hypothesis as quickly as possible with minimal probability of making a wrong decision. We are allowed to observe the process $X$ before the final decision for as long as we want. Postponing the final decision improves the odds of its correctness but increases the sampling and lost-opportunity costs. Therefore, a good strategy must optimally resolve the trade-off between the cost of waiting and the cost of a wrong terminal decision.
A strategy $(\tau, d)$ consists of a sampling termination time $\tau$ and a terminal decision rule $d$. At time $\tau$, we stop observing the process $X$ and select the hypothesis $H_i$ on the event $\{d = i\}$ for $i \in \mathcal{I}$. A strategy $(\tau, d)$ is admissible if $\tau$ is a stopping time of the process $X$, and if the value of $d$ is determined completely by the observations of $X$ until time $\tau$.

Suppose that $a_{ij} \ge 0$, $i, j \in \mathcal{I}$ is the cost of deciding on $H_i$ when $H_j$ is correct, with $a_{ii} = 0$ for every $i \in \mathcal{I}$. Moreover, $\rho > 0$ is a discount rate, and $f: \mathbb{R}^d \mapsto \mathbb{R}$ is a Borel function whose negative part $f^-(\cdot)$ is $\nu_i$-integrable for every $i \in \mathcal{I}$; i.e. $\int_{\mathbb{R}^d} f^-(y)\, \nu_i(dy) < \infty$, $i \in \mathcal{I}$.
The compromise achieved by an admissible strategy $(\tau, d)$ between the sampling cost and the cost of a wrong terminal decision may be captured by one of three alternative Bayes risks,

$$R(\pi; \tau, d) := \mathbb{E}_\pi\Big[\tau + 1_{\{\tau < \infty\}} \sum_{i=1}^M \sum_{j=1}^M a_{ij}\, 1_{\{d = i,\, \Theta = j\}}\Big], \tag{3}$$

$$R'(\pi; \tau, d) := \mathbb{E}_\pi\Big[N_\tau + 1_{\{\tau < \infty\}} \sum_{i=1}^M \sum_{j=1}^M a_{ij}\, 1_{\{d = i,\, \Theta = j\}}\Big], \tag{4}$$

$$R''(\pi; \tau, d) := \mathbb{E}_\pi\Big[\sum_{k=1}^{N_\tau} e^{-\rho \sigma_k} f(Y_k) + e^{-\rho \tau} \sum_{i=1}^M \sum_{j=1}^M a_{ij}\, 1_{\{d = i,\, \Theta = j\}}\Big], \tag{5}$$

where $\sigma_i$, $i \ge 1$ are the arrival times of the process $N$, and our objective will be (i) to calculate the minimum Bayes risk

$$V(\pi) := \inf_{(\tau, d) \in \mathcal{A}} R(\pi; \tau, d), \qquad \pi \in E \quad (\text{similarly, } V'(\cdot) \text{ and } V''(\cdot)) \tag{6}$$

over the collection $\mathcal{A}$ of all admissible strategies and (ii) to find an admissible decision rule that attains the infimum for every $\pi \in E$, if such a rule exists.
The Bayes risk $R$ penalizes the waiting time at a constant rate regardless of the true identity of the system and is suitable if the major running costs are wages and rents paid at the same rates per unit time throughout the study. The Bayes risks $R'$ and $R''$ reflect the impact of the unknown system identity on the waiting costs and account better for lost opportunity costs in a business setting, for example, due to an unknown volume of customer arrivals or an unknown amount of (discounted) cash flows brought by those customers.
Sequential Bayesian hypothesis testing problems under the Bayes risk $R$ of (3) were studied by Wald and Wolfowitz [1], Blackwell and Girshick [2], Zacks [3] and Shiryaev [4]. Peskir and Shiryaev [5] solved this problem for testing two simple hypotheses about the arrival rate of a simple Poisson process. For a compound Poisson process whose marks are exponentially distributed with mean equal to their arrival rate, Gapeev [6] derived, under the same Bayes risk, optimal sequential tests for two simple hypotheses about both the mark distribution and the arrival rate. The solution for general mark distributions has been obtained by Dayanik and
Sezer [7], who formulated the problem under an auxiliary probability measure as the optimal stopping of an $\mathbb{R}_+$-valued likelihood-ratio process. Unfortunately, that approach for $M = 2$ fails when $M \ge 3$, since the optimal continuation region in the latter case is not always bounded, and good convergence results no longer exist.
In this paper, we solve the Bayesian sequential multi-hypothesis testing problems in (6) for every $M \ge 2$ without restrictions on the arrival rate and mark distribution of the compound Poisson process. We describe an optimal admissible strategy $(U_0, d(U_0))$ in terms of an $E$-valued Markov process $\Pi(t) := (\Pi_1(t), \ldots, \Pi_M(t))$, $t \ge 0$, whose $i$th coordinate is the posterior probability $\Pi_i(t) = \mathbb{P}_\pi\{\Theta = i \mid \mathcal{F}_t\}$ that hypothesis $H_i$ is correct given the past observations $\mathcal{F}_t = \sigma\{X_s,\, 0 \le s \le t\}$ of the process $X$. The stopping time $U_0$ of the observable filtration $\mathbb{F} = \{\mathcal{F}_t\}_{t \ge 0}$ is the hitting time of the process $\Pi(t)$ to some closed subset $\Gamma_1$ of $E$. The stopping region $\Gamma_1$ can be partitioned into $M$ closed convex sets $\Gamma_{1,1}, \ldots, \Gamma_{1,M}$, and $d(U_0)$ equals $i \in \mathcal{I}$ on the event $\{U_0 < \infty,\, \Pi(U_0) \in \Gamma_{1,i}\}$. Namely, the optimal admissible strategy $(U_0, d(U_0))$ is fully determined by the collection of subsets $\Gamma_{1,1}, \ldots, \Gamma_{1,M}$, and these will of course depend on the choice of Bayes risk $R$, $R'$ or $R''$.

In plain words, one observes the process $X$ until the uncertainty about the true system identity is reduced to a level so low that waiting costs are no longer justified. At this time, the observer stops collecting new information and selects the least costly hypothesis.
The remainder of the paper is organized as follows. In Sections 2-5, we discuss exclusively the problem with the first Bayes risk, $R$. We start with a precise problem description in Section 2 and show that (6) is equivalent to the optimal stopping of the posterior probability process $\Pi$. After the successive approximations of the latter problem are studied in Section 3, the solution is described in Section 4. A provably convergent numerical algorithm that gives nearly optimal admissible strategies is also developed. The algorithm is illustrated on several examples in Section 5. Finally, we mention in Section 6 the changes to the previous arguments needed to obtain similar results for the alternative Bayes risks $R'$ and $R''$ in (4) and (5). Long derivations and proofs are deferred to the appendix.
2. Problem description
Let $(\Omega, \mathcal{F})$ be a measurable space hosting a counting process $N$, a sequence of $\mathbb{R}^d$-valued random variables $(Y_n)_{n \ge 1}$, and a random variable $\Theta$ taking values in $\mathcal{I} := \{1, \ldots, M\}$. Suppose that $\mathbb{P}_1, \ldots, \mathbb{P}_M$ are probability measures on this space such that the process $X$ in (1) is a compound Poisson process with the characteristics $(\lambda_i, \nu_i(\cdot))$ under $\mathbb{P}_i$, and

$$\mathbb{P}_i\{\Theta = j\} = \begin{cases} 1, & \text{if } j = i \\ 0, & \text{if } j \ne i \end{cases} \qquad \text{for every } i \in \mathcal{I}.$$

Then for every $\pi \in E := \{(\pi_1, \ldots, \pi_M) \in [0,1]^M : \sum_{i=1}^M \pi_i = 1\}$,

$$\mathbb{P}_\pi(A) := \sum_{i=1}^M \pi_i\, \mathbb{P}_i(A), \qquad A \in \mathcal{F} \tag{7}$$

is a probability measure on $(\Omega, \mathcal{F})$, and for $A = \{\Theta = i\}$ we obtain $\mathbb{P}_\pi\{\Theta = i\} = \pi_i$. Namely, $\pi = (\pi_1, \ldots, \pi_M)$ gives the prior probabilities of the hypotheses in (2), and $\mathbb{P}_i(A)$
becomes the conditional probability $\mathbb{P}_\pi(A \mid \Theta = i)$ of the event $A \in \mathcal{F}$ given that the correct hypothesis is $\Theta = i$, for every $i \in \mathcal{I}$.

Let $\mathbb{F} = \{\mathcal{F}_t\}_{t \ge 0}$ be the natural filtration of the process $X$, and $\mathcal{A}$ be the collection of the pairs $(\tau, d)$ of an $\mathbb{F}$-stopping time $\tau$ and an $\mathcal{I}$-valued $\mathcal{F}_\tau$-measurable random variable $d$. We would like to calculate the minimum Bayes risk $V(\cdot)$ in (6) and find a strategy $(\tau, d) \in \mathcal{A}$ that has the smallest Bayes risk $R(\pi; \tau, d)$ of (3) for every $\pi \in E$ (the Bayes risks $R'$ and $R''$ in (4) and (5) are discussed in Section 6). The next proposition, whose proof is in the appendix, identifies the form of the optimal terminal decision rule $d$ and reduces the original problem to the optimal stopping of the posterior probability process

$$\Pi(t) = (\Pi_1(t), \ldots, \Pi_M(t)), \quad t \ge 0, \quad \text{where } \Pi_i(t) = \mathbb{P}_\pi\{\Theta = i \mid \mathcal{F}_t\}, \; i \in \mathcal{I}. \tag{8}$$
Proposition 2.1. The smallest Bayes risk $V(\cdot)$ of (6) becomes

$$V(\pi) = \inf_{\tau \in \mathbb{F}} \mathbb{E}_\pi\big[\tau + 1_{\{\tau < \infty\}}\, h(\Pi(\tau))\big], \qquad \pi = (\pi_1, \ldots, \pi_M) \in E, \tag{9}$$

where $h: E \mapsto \mathbb{R}_+$ is the optimal terminal decision cost function given by

$$h(\pi) := \min_{i \in \mathcal{I}} h_i(\pi) \quad \text{and} \quad h_i(\pi) := \sum_{j=1}^M a_{ij}\, \pi_j. \tag{10}$$

Let $d(t) \in \arg\min_{i \in \mathcal{I}} \sum_{j=1}^M a_{ij}\, \Pi_j(t)$, $t \ge 0$. If an $\mathbb{F}$-stopping time $\tau^*$ solves the problem in (9), then $(\tau^*, d(\tau^*)) \in \mathcal{A}$ is an optimal admissible strategy for $V(\cdot)$ in (6).
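The terminal cost $h$ of (10) and a minimizing index are elementary to evaluate. The following Python sketch, with an illustrative 0-1 cost matrix not taken from the paper, computes both.

```python
def terminal_cost(pi, a):
    """h(pi) = min_i h_i(pi), with h_i(pi) = sum_j a[i][j] * pi[j] -- eq. (10)."""
    return min(sum(aij * pj for aij, pj in zip(row, pi)) for row in a)

def terminal_decision(pi, a):
    """An index attaining argmin_i h_i(pi): the optimal terminal decision rule."""
    costs = [sum(aij * pj for aij, pj in zip(row, pi)) for row in a]
    return min(range(len(costs)), key=costs.__getitem__)

# With 0-1 costs (a_ij = 1 for i != j), h(pi) = 1 - max_j pi_j and the rule
# simply selects the a posteriori most likely hypothesis.
a = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
pi = (0.5, 0.3, 0.2)
print(terminal_decision(pi, a))   # -> 0
```

Applied at $\Pi(\tau^*)$, this is exactly the rule $d(\tau^*)$ of Proposition 2.1.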
Thus, the original Bayesian sequential multi-hypothesis testing problem simplifies to the optimal stopping problem in (9). To solve it, we shall first study the sample paths of the process $\Pi$. For every $i \in \mathcal{I}$, let $p_i(\cdot)$ be the density of the probability distribution $\nu_i$ with respect to some $\sigma$-finite measure $m$ on $\mathbb{R}^d$ with Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R}^d)$. For example, one can take $m = \nu_1 + \cdots + \nu_M$. In the appendix, we show that

$$\Pi_i(t) = \frac{\pi_i\, e^{-\lambda_i t} \prod_{k=1}^{N_t} \lambda_i\, p_i(Y_k)}{\sum_{j=1}^M \pi_j\, e^{-\lambda_j t} \prod_{k=1}^{N_t} \lambda_j\, p_j(Y_k)}, \qquad t \ge 0, \; \mathbb{P}_\pi\text{-almost surely}, \; \pi \in E, \; i \in \mathcal{I}. \tag{11}$$
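For concreteness, (11) can be evaluated directly from an observed path. The sketch below assumes strictly positive prior weights and works in log-space for numerical stability; the two exponential mark densities in the example are illustrative choices, not parameters from the paper.

```python
import math

def posterior(prior, lams, densities, t, marks):
    """Posterior (Pi_1(t), ..., Pi_M(t)) from eq. (11), given the N_t = len(marks)
    marks observed on [0, t].  densities[i] is the density p_i of nu_i with
    respect to the common dominating measure m; priors must be positive."""
    logw = [math.log(pi) - lam * t + sum(math.log(lam * p(y)) for y in marks)
            for pi, lam, p in zip(prior, lams, densities)]
    mx = max(logw)                           # normalize in log-space
    u = [math.exp(x - mx) for x in logw]
    s = sum(u)
    return [x / s for x in u]

# Illustrative example: two hypotheses with exponential mark densities.
p1 = lambda y: math.exp(-y)                  # nu_1: exponential with mean 1
p2 = lambda y: 2.0 * math.exp(-2.0 * y)      # nu_2: exponential with mean 1/2
pis = posterior([0.5, 0.5], [1.0, 2.0], [p1, p2], t=1.0, marks=[])
```

With no arrivals by $t = 1$ (empty `marks`), the posterior tilts toward the hypothesis with the smaller arrival rate, in line with Remark 1 below.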
If we denote the jump times of the compound Poisson process $X$ by

$$\sigma_n := \inf\{t > \sigma_{n-1} : X_t \ne X_{t-}\}, \qquad n \ge 1 \; (\sigma_0 \equiv 0), \tag{12}$$

then $\mathbb{P}_i\{\sigma_1 \in dt_1, \ldots, \sigma_n \in dt_n,\, \sigma_{n+1} > t,\, Y_1 \in dy_1, \ldots, Y_n \in dy_n\}$ equals

$$\big(\lambda_i e^{-\lambda_i t_1}\, dt_1\big) \cdots \big(\lambda_i e^{-\lambda_i (t_n - t_{n-1})}\, dt_n\big) \big(e^{-\lambda_i (t - t_n)}\big) \prod_{k=1}^n p_i(y_k)\, m(dy_k) = e^{-\lambda_i t} \prod_{k=1}^n \lambda_i\, dt_k\, p_i(y_k)\, m(dy_k)$$

for every $n \ge 0$ and $0 < t_1 \le \cdots \le t_n \le t$. Therefore, $\Pi_i(t)$ of (11) is the relative likelihood

$$\frac{\pi_i \big(\lambda_i e^{-\lambda_i t_1}\, dt_1\big) \cdots \big(\lambda_i e^{-\lambda_i (t_n - t_{n-1})}\, dt_n\big) \big(e^{-\lambda_i (t - t_n)}\big) \prod_{k=1}^n p_i(y_k)\, m(dy_k)}{\sum_{j=1}^M \pi_j \big(\lambda_j e^{-\lambda_j t_1}\, dt_1\big) \cdots \big(\lambda_j e^{-\lambda_j (t_n - t_{n-1})}\, dt_n\big) \big(e^{-\lambda_j (t - t_n)}\big) \prod_{k=1}^n p_j(y_k)\, m(dy_k)} \Bigg|_{\substack{n = N_t,\; t_1 = \sigma_1, \ldots, t_n = \sigma_n,\\ y_1 = Y_1, \ldots, y_n = Y_n}}$$

of the path $X(s)$, $0 \le s \le t$ under hypothesis $H_i$, as a result of Bayes' rule. The explicit form in (11) also gives the recursive dynamics of the process $\Pi$ as in
$$\begin{aligned} \Pi(t) &= x(t - \sigma_{n-1}, \Pi(\sigma_{n-1})), \qquad t \in [\sigma_{n-1}, \sigma_n), \\ \Pi(\sigma_n) &= \left(\frac{\lambda_1\, p_1(Y_n)\, \Pi_1(\sigma_n-)}{\sum_{j=1}^M \lambda_j\, p_j(Y_n)\, \Pi_j(\sigma_n-)}, \ldots, \frac{\lambda_M\, p_M(Y_n)\, \Pi_M(\sigma_n-)}{\sum_{j=1}^M \lambda_j\, p_j(Y_n)\, \Pi_j(\sigma_n-)}\right), \qquad n \ge 1, \end{aligned} \tag{13}$$

in terms of the deterministic mapping $x: \mathbb{R} \times E \mapsto E$ defined by

$$x(t, \pi) \equiv \big(x_1(t, \pi), \ldots, x_M(t, \pi)\big) := \left(\frac{e^{-\lambda_1 t}\, \pi_1}{\sum_{j=1}^M e^{-\lambda_j t}\, \pi_j}, \ldots, \frac{e^{-\lambda_M t}\, \pi_M}{\sum_{j=1}^M e^{-\lambda_j t}\, \pi_j}\right). \tag{14}$$

This mapping satisfies the semigroup property $x(t + s, \pi) = x(t, x(s, \pi))$ for $s, t \ge 0$. Moreover, $\Pi$ is a piecewise-deterministic process by (13). The process $\Pi$ follows the deterministic curves $t \mapsto x(t, \pi)$, $\pi \in E$ between the arrival times of $X$ and jumps from one curve to another at the arrival times $(\sigma_n)_{n \ge 1}$ of $X$; see figure 1.
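The two pieces of the dynamics (13)-(14) are a few lines of code each; the sketch below (illustrative rates and prior, not from the paper) also checks the semigroup property stated after (14).

```python
import math

def x_map(t, pi, lams):
    """The deterministic flow x(t, pi) of eq. (14) followed between arrivals."""
    w = [p * math.exp(-lam * t) for p, lam in zip(pi, lams)]
    s = sum(w)
    return [v / s for v in w]

def jump_update(pi, y, lams, densities):
    """The posterior jump at an arrival with mark y: second line of eq. (13)."""
    w = [lam * p(y) * p_i for lam, p, p_i in zip(lams, densities, pi)]
    s = sum(w)
    return [v / s for v in w]

# Semigroup check x(t + s, pi) = x(t, x(s, pi)) on illustrative parameters.
lams = [1.0, 2.0, 3.0]
pi = [0.2, 0.3, 0.5]
lhs = x_map(1.1, pi, lams)
rhs = x_map(0.7, x_map(0.4, pi, lams), lams)
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

Iterating `x_map` between arrivals and `jump_update` at each arrival reproduces the piecewise-deterministic paths sketched in figure 1.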
Figure 1. Sample paths $t \mapsto \Pi(t)$ when $M = 2$ in (a), $M = 3$ in (b) and (c), and $M = 4$ in (d). Between the arrival times $\sigma_1, \sigma_2, \ldots$ of $X$, the process $\Pi$ follows the deterministic curves $t \mapsto x(t, \pi)$, $\pi \in E$. At the arrival times, it switches from one curve to another. In (a), (b) and (d), the arrival rates differ across hypotheses. In (c), $\lambda_1 = \lambda_2 < \lambda_3$, and (14) implies $\Pi_1(t)/\Pi_2(t) = \Pi_1(\sigma_n)/\Pi_2(\sigma_n)$ for every $t \in [\sigma_n, \sigma_{n+1})$, $n \ge 1$.

Since under every $\mathbb{P}_i$, $i \in \mathcal{I}$ the marks and interarrival times of the process $X$ are i.i.d., and the latter are exponentially distributed, the process $\Pi$ is a piecewise-deterministic $(\mathbb{P}_i, \mathbb{F})$-Markov process for every $i \in \mathcal{I}$. Therefore, as shown in the appendix,

$$\mathbb{E}_\pi[g(\Pi(t+s)) \mid \mathcal{F}_t] = \sum_{i=1}^M \Pi_i(t)\, \mathbb{E}_i[g(\Pi(t+s)) \mid \mathcal{F}_t] = \sum_{i=1}^M \Pi_i(t)\, \mathbb{E}_i[g(\Pi(t+s)) \mid \Pi(t)] \tag{15}$$

for every bounded Borel function $g: E \mapsto \mathbb{R}$, numbers $s, t \ge 0$ and $\pi \in E$, and $\Pi$ is also a piecewise-deterministic $(\mathbb{P}_\pi, \mathbb{F})$-Markov process for every $\pi \in E$. By (14), we have

$$\frac{\partial x_i(t, \pi)}{\partial t} = x_i(t, \pi) \left(-\lambda_i + \sum_{j=1}^M \lambda_j\, x_j(t, \pi)\right), \qquad i \in \mathcal{I}, \tag{16}$$
and Jensen's inequality gives

$$\sum_{i=1}^M \lambda_i\, \frac{\partial x_i(t, \pi)}{\partial t} = -\sum_{i=1}^M \lambda_i^2\, x_i(t, \pi) + \left(\sum_{i=1}^M \lambda_i\, x_i(t, \pi)\right)^2 \le 0.$$

Therefore, the sum on the right-hand side of (16) decreases in $t$, and the next remark follows.

Remark 1. For every $i \in \mathcal{I}$, there exists $t_i(\pi) \in [0, \infty]$ such that the function $x_i(t, \pi)$ of (14) is increasing in $t$ on $[0, t_i(\pi)]$ and decreasing on $[t_i(\pi), \infty)$. Since $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_M$, (16) implies that $t_i(\cdot) = \infty$ for every $i \in \mathcal{I}$ such that $\lambda_i = \lambda_1$. Similarly, $t_i(\cdot) = 0$ for every $i$ such that $\lambda_i = \lambda_M$. In other words, as long as no arrivals are observed, the process $\Pi$ puts more and more weight on the hypotheses under which the arrival rate is the smallest.

On the other hand, if $\lambda_1 = \cdots = \lambda_M$, then $x(t, \pi) = \pi$ for every $t \ge 0$ and $\pi \in E$ by (14). Hence by (13), $\Pi$ does not change between arrivals, and the relative likelihoods of the hypotheses in (2) are updated only at arrival times according to the observed marks.
The following lemma will be useful in describing a numerical algorithm in Section 4.2; see the appendix for its proof.

Lemma 2.2. Denote a "neighborhood of the corners" in $E$ by

$$E_{(q_1, \ldots, q_M)} := \bigcup_{j=1}^M \{\pi \in E : \pi_j \ge q_j\} \qquad \text{for every } (q_1, \ldots, q_M) \in (0,1)^M. \tag{17}$$

If $\lambda_1 < \lambda_2 < \cdots < \lambda_M$ in (2), then for every $(q_1, \ldots, q_M) \in (0,1)^M$, the hitting time $\inf\{t \ge 0 : x(t, \pi) \in E_{(q_1, \ldots, q_M)}\}$ of the path $t \mapsto x(t, \pi)$ to the set $E_{(q_1, \ldots, q_M)}$ is bounded from above, uniformly in $\pi \in E$, by some finite number $s(q_1, \ldots, q_M)$.
Standard dynamic programming arguments suggest that the value function $V(\cdot)$ solves the variational inequalities

$$\min\{\mathcal{A}V(\pi) + 1,\; h(\pi) - V(\pi)\} = 0 \qquad \text{for every } \pi \in E, \tag{18}$$

where

$$\begin{aligned} \mathcal{A}V(\pi) = {} & \sum_{i=1}^M \pi_i \left(-\lambda_i + \sum_{j=1}^M \lambda_j\, \pi_j\right) \left(\frac{\partial V}{\partial \pi_i}\right)(\pi) \\ & + \sum_{i=1}^M \int_{\mathbb{R}^d} \left[V\!\left(\frac{\lambda_1\, p_1(y)\, \pi_1}{\sum_{j=1}^M \lambda_j\, p_j(y)\, \pi_j}, \ldots, \frac{\lambda_M\, p_M(y)\, \pi_M}{\sum_{j=1}^M \lambda_j\, p_j(y)\, \pi_j}\right) - V(\pi)\right] \lambda_i\, p_i(y)\, \pi_i\, m(dy) \end{aligned} \tag{19}$$

is the $(\mathbb{F}, \mathbb{P}_\pi)$-infinitesimal generator of the process $\Pi$ (see, for example, Øksendal [8]), and that $\tau = \inf\{t \ge 0 : h(\Pi(t)) - V(\Pi(t)) = 0\}$ is an optimal stopping time for the problem in (9). Solving (18) for the function $V(\cdot)$ requires the solution of the partial differential-difference equation $\mathcal{A}V + 1 = 0$ on an unidentified "continuation" region, which must be determined simultaneously with $V(\cdot)$ as part of the solution.

Solving high-dimensional free-boundary partial differential-difference equations is difficult. Instead, following Gugerli [9] and Davis [10], we construct in the next section accurate successive approximations of the value function $V(\cdot)$, by means of which we can later identify precisely the structure of the optimal stopping strategy and the solution.
The same approach has recently been taken by Bayraktar et al. [11] to solve the adaptive Poisson disorder problem, and by Dayanik and Sezer [7,12] to detect quickly a sudden unobserved change in the local characteristics of a compound Poisson process and to test two simple hypotheses about those local characteristics.

In the sequential testing of two simple hypotheses, $H_1: (\lambda, \nu) = (\lambda_1, \nu_1)$ and $H_2: (\lambda, \nu) = (\lambda_2, \nu_2)$, Dayanik and Sezer [7] formulated the problem in terms of the likelihood-ratio process $\Pi_1(t)/\Pi_2(t) \equiv \Pi_1(t)/(1 - \Pi_1(t))$, $t \ge 0$, after a suitable change of measure, in order to take advantage of (i) the linear dynamics of the likelihood-ratio process and (ii) the statistical independence between several stochastic elements under a suitable auxiliary probability measure. This approach turns out to be futile for sequential multi-hypothesis testing: the optimal continuation region of the optimal stopping problem for the likelihood process fails to be bounded for a certain range of parameters under a similar auxiliary probability measure. The bounded continuation region was the key to the explicit and uniform error bounds derived by Dayanik and Sezer [7] for the successive approximations of the value function. In order to prove, in this paper, explicit uniform error bounds of the same strength on the successive approximations for the multi-hypothesis testing problem, we formulate the optimal stopping problem in terms of the posterior probability process $\Pi$ under the physical probability measure, as described in (9). Hence, the convenience of working with likelihood-ratio processes under suitable auxiliary probability measures is sacrificed in favor of a new setup that results in bounded continuation regions. This required a major rework of most results, as described in the remainder of the paper.
3. Successive approximations
By limiting the horizon of the problem in (9) to the $n$th arrival time $\sigma_n$ of the process $X$, we obtain the family of optimal stopping problems

$$V_n(\pi_1, \ldots, \pi_M) := \inf_{\tau \in \mathbb{F}} \mathbb{E}_\pi\big[\tau \wedge \sigma_n + h(\Pi(\tau \wedge \sigma_n))\big], \qquad \pi \in E, \; n \ge 0, \tag{20}$$

where $h(\cdot)$ is the optimal terminal decision cost in (10). The functions $V_n(\cdot)$, $n \ge 1$ and $V(\cdot)$ are nonnegative and bounded since $h(\cdot)$ is. The sequence $\{V_n(\cdot)\}_{n \ge 1}$ is decreasing and has a pointwise limit. The next proposition shows that it converges to $V(\cdot)$ uniformly on $E$.

Remark 1. In the proof of Proposition 3.1 and elsewhere, it will be important to remember that the infimum in (9) and (20) can be taken over stopping times whose $\mathbb{P}_\pi$-expectation is bounded from above by $\sum_{i,j} a_{ij}$ for every $\pi \in E$, since "immediate stopping" already gives the upper bound $h(\cdot) \le \sum_{i,j} a_{ij} < \infty$ on the value functions $V(\cdot)$ and $V_n(\cdot)$, $n \ge 0$.
Proposition 3.1. The sequence $\{V_n(\cdot)\}_{n \ge 1}$ converges to $V(\cdot)$ uniformly on $E$. In fact,

$$0 \le V_n(\pi) - V(\pi) \le \left(\sum_{i,j} a_{ij}\right)^{3/2} \sqrt{\frac{\lambda_M}{n - 1}} \qquad \text{for every } \pi \in E \text{ and } n \ge 1. \tag{21}$$
Proof. By definition, we have $V_n(\cdot) \ge V(\cdot)$. For every $\mathbb{F}$-stopping time $\tau$, the expectation $\mathbb{E}_\pi[\tau + 1_{\{\tau < \infty\}} h(\Pi(\tau))]$ in (9) can be written as

$$\mathbb{E}_\pi\big[\tau \wedge \sigma_n + h(\Pi(\tau \wedge \sigma_n))\big] + \mathbb{E}_\pi\big[1_{\{\tau > \sigma_n\}}\big(\tau - \sigma_n + h(\Pi(\tau)) - h(\Pi(\sigma_n))\big)\big] \ge \mathbb{E}_\pi\big[\tau \wedge \sigma_n + h(\Pi(\tau \wedge \sigma_n))\big] - \sum_{i,j} a_{ij}\, \mathbb{E}_\pi 1_{\{\tau > \sigma_n\}}.$$

We have $\mathbb{E}_\pi 1_{\{\tau > \sigma_n\}} \le \mathbb{E}_\pi\big[1_{\{\tau > \sigma_n\}} (\tau/\sigma_n)^{1/2}\big] \le \sqrt{\mathbb{E}_\pi \tau\, \mathbb{E}_\pi(1/\sigma_n)}$ by the Cauchy-Schwarz inequality, and $\mathbb{E}_\pi(1/\sigma_n) \le \lambda_M/(n - 1)$ by (7). If $\mathbb{E}_\pi \tau \le \sum_{i,j} a_{ij}$, then we obtain

$$\mathbb{E}_\pi\big[\tau + 1_{\{\tau < \infty\}} h(\Pi(\tau))\big] \ge \mathbb{E}_\pi\big[\tau \wedge \sigma_n + h(\Pi(\tau \wedge \sigma_n))\big] - \left(\sum_{i,j} a_{ij}\right)^{3/2} \sqrt{\frac{\lambda_M}{n - 1}},$$

and taking the infimum of both sides over the $\mathbb{F}$-stopping times whose $\mathbb{P}_\pi$-expectation is bounded by $\sum_{i,j} a_{ij}$ for every $\pi \in E$ completes the proof by Remark 1. ∎
The dynamic programming principle suggests that the functions in (20) satisfy the relation $V_{n+1} = J_0 V_n$, $n \ge 0$, where $J_0$ is an operator acting on bounded functions $w: E \mapsto \mathbb{R}$ by

$$J_0 w(\pi) := \inf_{\tau \in \mathbb{F}} \mathbb{E}_\pi\big[\tau \wedge \sigma_1 + 1_{\{\tau < \sigma_1\}} h(\Pi(\tau)) + 1_{\{\tau \ge \sigma_1\}} w(\Pi(\sigma_1))\big], \qquad \pi \in E. \tag{22}$$

We show that (22) is in fact a minimization problem over deterministic times, by using the next result about the characterization of stopping times of piecewise-deterministic Markov processes; see Bremaud ([13], Theorem T33, p. 308) and Davis ([10], Lemma A2.3, p. 261) for its proof.

Lemma 3.2. For every $\mathbb{F}$-stopping time $\tau$ and every $n \ge 0$, there is an $\mathcal{F}_{\sigma_n}$-measurable random variable $R_n: \Omega \mapsto [0, \infty]$ such that $\tau \wedge \sigma_{n+1} = (\sigma_n + R_n) \wedge \sigma_{n+1}$ $\mathbb{P}_\pi$-a.s. on $\{\tau \ge \sigma_n\}$.

In particular, if $\tau$ is an $\mathbb{F}$-stopping time, then there exists some constant $t = t(\pi) \in [0, \infty]$ such that $\tau \wedge \sigma_1 = t \wedge \sigma_1$ holds $\mathbb{P}_\pi$-a.s. Therefore, (22) becomes

$$J_0 w(\pi) = \inf_{t \in [0, \infty]} J w(t, \pi) := \mathbb{E}_\pi\big[t \wedge \sigma_1 + 1_{\{t < \sigma_1\}} h(\Pi(t)) + 1_{\{t \ge \sigma_1\}} w(\Pi(\sigma_1))\big]. \tag{23}$$
Since the first arrival time $\sigma_1$ of the process $X$ is distributed under $\mathbb{P}_\pi$ according to a mixture of exponential distributions with rates $\lambda_i$, $i \in \mathcal{I}$ and weights $\pi_i$, $i \in \mathcal{I}$, the explicit dynamics in (13) of the process $\Pi$ give

$$J w(t, \pi) = \int_0^t \sum_{i=1}^M \pi_i\, e^{-\lambda_i u}\big[1 + \lambda_i \cdot (S_i w)(x(u, \pi))\big]\, du + \left(\sum_{i=1}^M \pi_i\, e^{-\lambda_i t}\right) h(x(t, \pi)), \tag{24}$$

where $S_i$ is an operator acting on bounded functions $w: E \mapsto \mathbb{R}$ and is defined as

$$S_i w(\pi) := \int_{\mathbb{R}^d} w\!\left(\frac{\lambda_1\, p_1(y)\, \pi_1}{\sum_{j=1}^M \lambda_j\, p_j(y)\, \pi_j}, \ldots, \frac{\lambda_M\, p_M(y)\, \pi_M}{\sum_{j=1}^M \lambda_j\, p_j(y)\, \pi_j}\right) p_i(y)\, m(dy), \qquad \pi \in E, \; i \in \mathcal{I}. \tag{25}$$

Hence, by (23) and (24) we conclude that the optimal stopping problem in (22) is essentially a deterministic minimization problem. Using the operator $J_0$, let us now define the sequence

$$v_0(\cdot) := h(\cdot) \quad \text{and} \quad v_{n+1}(\cdot) := J_0 v_n(\cdot), \qquad n \ge 0. \tag{26}$$
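Since (23)-(24) reduce $J_0$ to a one-dimensional minimization over $t$, the iteration (26) can be sketched numerically. The Python code below is an illustrative discretization for $M = 2$ with marks on a finite set (so the integral in (25) becomes a finite sum); the rates, mark distributions and costs are illustrative choices, not from the paper, and the $t$- and $\pi$-grids are crude.

```python
import math

# Illustrative two-hypothesis example: lambda_1 = 1, lambda_2 = 2, marks on
# {1, 2} (densities w.r.t. counting measure m), and costs a_12 = a_21 = 5.
LAMS = [1.0, 2.0]
PMFS = [{1: 0.7, 2: 0.3}, {1: 0.3, 2: 0.7}]

def h(p1):                                    # h(pi) = 5 * min(pi_1, pi_2), eq. (10)
    return 5.0 * min(p1, 1.0 - p1)

def x_flow(u, p1):                            # first coordinate of x(u, pi), eq. (14)
    w1 = p1 * math.exp(-LAMS[0] * u)
    w2 = (1.0 - p1) * math.exp(-LAMS[1] * u)
    return w1 / (w1 + w2)

def jump(p1, y):                              # posterior after an arrival with mark y, eq. (13)
    w1 = LAMS[0] * PMFS[0][y] * p1
    w2 = LAMS[1] * PMFS[1][y] * (1.0 - p1)
    return w1 / (w1 + w2)

GRID = [k / 100 for k in range(101)]          # grid on pi_1; pi = (pi_1, 1 - pi_1)

def interp(vals, p1):                         # piecewise-linear interpolation on GRID
    k = min(int(p1 * 100), 99)
    frac = p1 * 100 - k
    return vals[k] * (1.0 - frac) + vals[k + 1] * frac

def S(vals, i, p1):                           # (S_i w)(pi) as a finite sum, eq. (25)
    return sum(q * interp(vals, jump(p1, y)) for y, q in PMFS[i].items())

def J0(vals, t_max=6.0, nt=200):
    """One application of the operator J_0 of (23)-(24), minimizing over a t-grid."""
    dt = t_max / nt
    out = []
    for p1 in GRID:
        pis = [p1, 1.0 - p1]
        best, integral = h(p1), 0.0           # the candidate t = 0 gives h(pi)
        for k in range(1, nt + 1):
            u = (k - 0.5) * dt                # midpoint rule for the du-integral in (24)
            xu = x_flow(u, p1)
            integral += dt * sum(pis[i] * math.exp(-LAMS[i] * u)
                                 * (1.0 + LAMS[i] * S(vals, i, xu)) for i in range(2))
            t = k * dt
            tail = sum(pis[i] * math.exp(-LAMS[i] * t) for i in range(2))
            best = min(best, integral + tail * h(x_flow(t, p1)))
        out.append(best)
    return out

v0 = [h(p) for p in GRID]                     # v_0 = h
v1 = J0(v0)                                   # v_1 = J_0 v_0
v2 = J0(v1)                                   # v_2 = J_0 v_1, and so on
```

Consistently with Corollary 3.4 below, the computed iterates decrease pointwise: $h = v_0 \ge v_1 \ge v_2 \ge 0$.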
One of the main results of this section is Proposition 3.5 below, which shows that $V_n = v_n$, $n \ge 0$. Therefore, the solutions $(v_n)_{n \ge 1}$ of the deterministic minimization problems in (26) give the successive approximations $(V_n)_{n \ge 1}$ in (20) of the function $V$ in (9) by Proposition 3.1. That result hinges on certain properties of the operator $J_0$ and the sequence $(v_n)_{n \ge 1}$ summarized by the next proposition, whose proof is in the appendix.

Proposition 3.3. For two bounded functions $w_1(\cdot) \le w_2(\cdot)$ on $E$, we have $J w_1(t, \cdot) \le J w_2(t, \cdot)$ for every $t \ge 0$, and $J_0 w_1(\cdot) \le J_0 w_2(\cdot) \le h(\cdot)$. If $w(\cdot)$ is concave, then so are $J w(t, \cdot)$, $t \ge 0$ and $J_0 w(\cdot)$. Finally, if $w: E \mapsto \mathbb{R}_+$ is bounded and continuous, then so are $(t, \pi) \mapsto J w(t, \pi)$ and $\pi \mapsto J_0 w(\pi)$.

Corollary 3.4. Each function $v_n(\cdot)$, $n \ge 0$ of (26) is continuous and concave on $E$. We have $h(\cdot) \equiv v_0(\cdot) \ge v_1(\cdot) \ge \cdots \ge 0$, and the pointwise limit $v(\cdot) := \lim_{n \to \infty} v_n(\cdot)$ exists and is concave on $E$.

Proof. Since $v_0(\cdot) \equiv h(\cdot)$ and $v_n(\cdot) = J_0 v_{n-1}(\cdot)$, the claim on concavity and continuity follows directly from Proposition 3.3 by induction. To show the monotonicity, note that by construction we have $0 \le v_0(\cdot) \equiv h(\cdot)$; therefore $0 \le v_1(\cdot) \le v_0(\cdot)$, again by Proposition 3.3. Assume now that $0 \le v_n(\cdot) \le v_{n-1}(\cdot)$. Applying the operator $J_0$ to both sides, we get $0 \le v_{n+1}(\cdot) = J_0 v_n(\cdot) \le J_0 v_{n-1}(\cdot) = v_n(\cdot)$. Hence the sequence is decreasing, and the pointwise limit $\lim_{n \to \infty} v_n(\cdot)$ exists on $E$. Finally, since it is the lower envelope of the concave functions $v_n$, the function $v$ is also concave. ∎
Remark 2. For $J w(t, \cdot)$ given in (24), let us define

$$J_t w(\pi) := \inf_{s \in [t, \infty]} J w(s, \pi), \tag{27}$$

which agrees with (23) for $t = 0$. For every bounded continuous function $w: E \mapsto \mathbb{R}_+$, the mapping $t \mapsto J w(t, \pi)$ is again bounded and continuous on $[0, \infty]$, and the infimum in (27) is attained for every $t \in [0, \infty]$.
Proposition 3.5. For every $n \ge 0$, we have $v_n(\cdot) = V_n(\cdot)$ in (20) and (26). If for every $\varepsilon \ge 0$ we define

$$r_n^\varepsilon(\pi) := \inf\{s \in (0, \infty] : J v_n(s, \pi) \le J_0 v_n(\pi) + \varepsilon\}, \qquad n \ge 0, \; \pi \in E,$$

$$S_1^\varepsilon := r_0^\varepsilon(\Pi(0)) \wedge \sigma_1, \quad \text{and} \quad S_{n+1}^\varepsilon := \begin{cases} r_n^{\varepsilon/2}(\Pi(0)), & \text{if } \sigma_1 > r_n^{\varepsilon/2}(\Pi(0)), \\ \sigma_1 + S_n^{\varepsilon/2} \circ \theta_{\sigma_1}, & \text{if } \sigma_1 \le r_n^{\varepsilon/2}(\Pi(0)), \end{cases} \qquad n \ge 1,$$

where $\theta_s$ is the shift operator on $\Omega$, i.e. $X_t \circ \theta_s = X_{s+t}$, then

$$\mathbb{E}_\pi\big[S_n^\varepsilon + h(\Pi(S_n^\varepsilon))\big] \le v_n(\pi) + \varepsilon, \qquad \forall\, \varepsilon \ge 0, \; n \ge 1, \; \pi \in E. \tag{28}$$
Corollary 3.6. Since $v_n(\cdot) = V_n(\cdot)$, $n \ge 0$, we have $v(\cdot) = V(\cdot)$ by Proposition 3.1. The function $V(\cdot)$ is continuous and concave on $E$ by Proposition 3.1 and Corollary 3.4.
Proposition 3.7. The value function $V$ of (9) satisfies $V = J_0 V$, and it is the largest bounded solution of $U = J_0 U$ smaller than or equal to $h$.

Remark 3. If the arrival rates in (2) are identical, then $x(t, \pi) = \pi$ for every $t \ge 0$ and $\pi \in E$; see Remark 1. If, moreover, $\lambda_1 = \cdots = \lambda_M = 1$, then (23)-(24) reduce to

$$J_0 w(\pi) = \min\left\{h(\pi),\; 1 + \sum_{i=1}^M \pi_i \cdot (S_i w)(\pi)\right\}, \qquad \pi \in E \tag{29}$$

for every bounded function $w: E \mapsto \mathbb{R}$. Blackwell and Girshick ([2], Chapter 9) show that $\lim_{n \to \infty} J_0^n h \equiv \lim_{n \to \infty} J_0(J_0^{n-1} h) \equiv \lim_{n \to \infty} J_0 v_n = v \equiv V$ gives the minimum Bayes risk of (6) in the discrete-time sequential Bayesian hypothesis testing problem, where (i) the expected sum of the number of samples $Y_1, Y_2, \ldots$ and the terminal decision cost is minimized, and (ii) the unknown probability density function $p$ of the $Y_k$'s must be identified among the alternatives $p_1, \ldots, p_M$.
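In this identical-rates case, (29) is a plain Bellman fixed-point iteration on the simplex. The sketch below iterates it for $M = 2$ with marks on a finite set; all parameters (mark pmfs, costs $a_{12} = a_{21} = 10$) and the grid are illustrative choices, not from the paper.

```python
# Illustrative setup: identical unit arrival rates, marks on {1, 2} with
# candidate pmfs p_1, p_2, and terminal costs a_12 = a_21 = 10.
PMFS = [{1: 0.8, 2: 0.2}, {1: 0.2, 2: 0.8}]
A12, A21 = 10.0, 10.0

def h(p1):                                    # h(pi) = min(a_12 pi_2, a_21 pi_1)
    return min(A12 * (1.0 - p1), A21 * p1)

def jump(p1, y):                              # Bayes update on one mark (rates cancel)
    w1 = PMFS[0][y] * p1
    w2 = PMFS[1][y] * (1.0 - p1)
    return w1 / (w1 + w2)

GRID = [k / 100 for k in range(101)]

def interp(vals, p1):                         # piecewise-linear interpolation on GRID
    k = min(int(p1 * 100), 99)
    frac = p1 * 100 - k
    return vals[k] * (1.0 - frac) + vals[k + 1] * frac

def J0(vals):
    """J_0 w(pi) = min{ h(pi), 1 + sum_i pi_i (S_i w)(pi) }, eq. (29)."""
    out = []
    for p1 in GRID:
        cont = 1.0                            # one unit sampling cost per observation
        for pi_i, pmf in ((p1, PMFS[0]), (1.0 - p1, PMFS[1])):
            cont += pi_i * sum(q * interp(vals, jump(p1, y)) for y, q in pmf.items())
        out.append(min(h(p1), cont))
    return out

v = [h(p) for p in GRID]                      # v_0 = h
for _ in range(100):                          # iterate v_{n+1} = J_0 v_n toward V
    new = J0(v)
    if max(abs(a - b) for a, b in zip(new, v)) < 1e-10:
        v = new
        break
    v = new
```

The limit approximates the minimum Bayes risk of the discrete-time problem described in Remark 3; near the corners of the simplex, stopping immediately is optimal and $v$ coincides with $h$.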
Proposition 3.8. For every bounded function $w: E \mapsto \mathbb{R}_+$, we have

$$J_t w(\pi) = J w(t, \pi) + \left(\sum_{i=1}^M \pi_i\, e^{-\lambda_i t}\right)\big[J_0 w(x(t, \pi)) - h(x(t, \pi))\big], \tag{30}$$

where the operators $J$ and $J_t$ are defined in (24) and (27), respectively.

Proof. Let us fix a constant $s \ge t$ and $\pi \in E$. Using the operator $J$ in (24) and the semigroup property of $x(t, \pi)$ in (14), $J w(s, \pi)$ can be written as

$$\begin{aligned} J w(s, \pi) &= J w(t, \pi) + \left(\sum_{i=1}^M \pi_i\, e^{-\lambda_i t}\right) \int_0^{s-t} \sum_{i=1}^M x_i(t, \pi)\, e^{-\lambda_i u}\big[1 + \lambda_i \cdot S_i w(x(u, x(t, \pi)))\big]\, du \\ &\quad - \left(\sum_{i=1}^M \pi_i\, e^{-\lambda_i t}\right) h(x(t, \pi)) + \left(\sum_{i=1}^M \pi_i\, e^{-\lambda_i (s - t)}\, e^{-\lambda_i t}\right) h(x(s - t, x(t, \pi))) \\ &= J w(t, \pi) + \left(\sum_{i=1}^M \pi_i\, e^{-\lambda_i t}\right)\big[J w(s - t, x(t, \pi)) - h(x(t, \pi))\big]. \end{aligned}$$

Taking the infimum above over $s \in [t, \infty]$ concludes the proof. ∎
Corollary 3.9. If

$$r_n(\pi) \equiv r_n^{(0)}(\pi) = \inf\{s \in (0, \infty] : J v_n(s, \pi) = J_0 v_n(\pi)\} \tag{31}$$

(with $\inf \emptyset = \infty$) is $r_n^{(\varepsilon)}(\pi)$ in Proposition 3.5 with $\varepsilon = 0$, then

$$r_n(\pi) = \inf\{t > 0 : v_{n+1}(x(t, \pi)) = h(x(t, \pi))\}. \tag{32}$$
Remark 4. Substituting $w = v_n$ in (30) gives the "dynamic programming equation" for the sequence $\{v_n(\cdot)\}_{n \ge 0}$:

$$v_{n+1}(\pi) = J v_n(t, \pi) + \left(\sum_{i=1}^M \pi_i\, e^{-\lambda_i t}\right)\big[v_{n+1}(x(t, \pi)) - h(x(t, \pi))\big], \qquad t \in [0, r_n(\pi)].$$

Moreover, Corollary 3.6 and Proposition 3.8 give

$$J_t V(\pi) = J V(t, \pi) + \left(\sum_{i=1}^M \pi_i\, e^{-\lambda_i t}\right)\big[V(x(t, \pi)) - h(x(t, \pi))\big], \qquad t \in \mathbb{R}_+. \tag{33}$$

Similarly to (31), let us define

$$r(\pi) := \inf\{t > 0 : J V(t, \pi) = J_0 V(\pi)\}. \tag{34}$$

Then arguments similar to those in Corollary 3.9 lead to

$$r(\pi) = \inf\{t > 0 : V(x(t, \pi)) = h(x(t, \pi))\}, \tag{35}$$

and

$$V(\pi) = J V(t, \pi) + \left(\sum_{i=1}^M \pi_i\, e^{-\lambda_i t}\right)\big[V(x(t, \pi)) - h(x(t, \pi))\big], \qquad t \in [0, r(\pi)]. \tag{36}$$
Remark 5. By Corollary 3.6, the function $V(\cdot)$ in (9) is continuous on $E$. Since $t \mapsto x(t, \pi)$ of (14) is continuous, the mapping $t \mapsto V(x(t, \pi))$ is continuous. Moreover, the path $t \mapsto \Pi(t)$ follows the deterministic curves $t \mapsto x(t, \cdot)$ between two jumps. Hence the process $V(\Pi(t))$ has right-continuous paths with left limits.

Let us define the $\mathbb{F}$-stopping times

$$U_\varepsilon := \inf\{t \ge 0 : V(\Pi(t)) - h(\Pi(t)) \ge -\varepsilon\}, \qquad \varepsilon \ge 0. \tag{37}$$

Then the regularity of the paths $t \mapsto \Pi_i(t)$ implies that

$$V(\Pi(U_\varepsilon)) - h(\Pi(U_\varepsilon)) \ge -\varepsilon \quad \text{on the event } \{U_\varepsilon < \infty\}. \tag{38}$$
Proposition 3.10. Let $M_t := t + V(\Pi(t))$, $t \ge 0$. For every $n \ge 0$ and $\varepsilon \ge 0$, we have $\mathbb{E}_\pi[M_0] = \mathbb{E}_\pi[M_{U_\varepsilon \wedge \sigma_n}]$; i.e.

$$V(\pi) = \mathbb{E}_\pi\big[U_\varepsilon \wedge \sigma_n + V(\Pi(U_\varepsilon \wedge \sigma_n))\big]. \tag{39}$$

Proposition 3.11. The stopping time $U_\varepsilon$ in (37) and $N_{U_\varepsilon}$ have bounded $\mathbb{P}_\pi$-expectations for every $\varepsilon \ge 0$ and $\pi \in E$. Moreover, $U_\varepsilon$ is $\varepsilon$-optimal for the problem in (9); i.e.

$$\mathbb{E}_\pi\big[U_\varepsilon + 1_{\{U_\varepsilon < \infty\}} h(\Pi(U_\varepsilon))\big] \le V(\pi) + \varepsilon \quad \text{for every } \pi \in E.$$

Proof. To show the first claim, note that by Corollaries 3.4 and 3.6 we have $\sum_{i,j} a_{ij} \ge h(\pi) \ge V(\pi) \ge 0$. Using Proposition 3.10 above, we have

$$\sum_{i,j} a_{ij} \ge V(\pi) = \mathbb{E}_\pi\big[U_\varepsilon \wedge \sigma_n + V(\Pi(U_\varepsilon \wedge \sigma_n))\big] \ge \mathbb{E}_\pi[U_\varepsilon \wedge \sigma_n],$$

and by the monotone convergence theorem it follows that $\sum_{i,j} a_{ij} \ge \mathbb{E}_\pi[U_\varepsilon]$.
Stochastics 29
Also note that the process $N_t - \sum_{i=1}^M \Pi_i(t)\,\lambda_i t$, $t \ge 0$, is a $P^{\vec{\pi}}$-martingale. Then using the stopped martingale we obtain
$$E^{\vec{\pi}}\big[N_{U_\varepsilon \wedge t}\big] = E^{\vec{\pi}}\Bigg[\sum_{i=1}^M \Pi_i(U_\varepsilon \wedge t)\cdot\lambda_i\cdot(U_\varepsilon \wedge t)\Bigg] \le \lambda_M \cdot E^{\vec{\pi}} U_\varepsilon \le \lambda_M \sum_{i,j} a_{ij}.$$
Letting $t \to \infty$ and applying the monotone convergence theorem, we get $E^{\vec{\pi}} N_{U_\varepsilon} \le \lambda_M \sum_{i,j} a_{ij}$.

Next, the almost-sure finiteness of $U_\varepsilon$ implies
$$V(\vec{\pi}) = \lim_{n \to \infty} E^{\vec{\pi}}\big[U_\varepsilon \wedge \sigma_n + V(\vec{\Pi}(U_\varepsilon \wedge \sigma_n))\big] = E^{\vec{\pi}}\big[U_\varepsilon + V(\vec{\Pi}(U_\varepsilon))\big]$$
by the monotone and bounded convergence theorems, and by Proposition 3.10. Since $V(\vec{\Pi}(U_\varepsilon)) - h(\vec{\Pi}(U_\varepsilon)) \ge -\varepsilon$ by (38), we have
$$V(\vec{\pi}) \ge E^{\vec{\pi}}\big[U_\varepsilon + V(\vec{\Pi}(U_\varepsilon)) - h(\vec{\Pi}(U_\varepsilon)) + h(\vec{\Pi}(U_\varepsilon))\big] \ge E^{\vec{\pi}}\big[U_\varepsilon + h(\vec{\Pi}(U_\varepsilon))\big] - \varepsilon,$$
and the proof is complete. $\square$

Corollary 3.12. Taking $\varepsilon = 0$ in Proposition 3.11 implies that $U_0$ is an optimal stopping time for the problem in (9).
4. Solution

Let us introduce the stopping region $\Gamma_\infty := \{\vec{\pi} \in E : V(\vec{\pi}) = h(\vec{\pi})\}$ and the continuation region $C_\infty := E \setminus \Gamma_\infty$ for the problem in (9). Because $h(\vec{\pi}) = \min_{i \in I} h_i(\vec{\pi})$, we have
$$\Gamma_\infty = \bigcup_{i \in I} \Gamma_{\infty,i}, \quad \text{where } \Gamma_{\infty,i} := \{\vec{\pi} \in E : V(\vec{\pi}) = h_i(\vec{\pi})\}, \; i \in I. \quad (40)$$
According to Proposition 2.1 and Corollary 3.12, an admissible strategy $(U_0, d(U_0))$ that attains the minimum Bayes risk in (6) is (i) to observe $X$ until the process $\vec{\Pi}$ of (13) and (14) enters the stopping region $\Gamma_\infty$, and then (ii) to stop the observation and select hypothesis $H_i$ if $d(U_0) = i$; equivalently, if $\vec{\Pi}(U_0)$ is in $\Gamma_{\infty,i}$ for some $i \in I$.

In order to implement this strategy, one needs to calculate, at least approximately, the subsets $\Gamma_{\infty,i}$, $i \in I$, or the function $V(\cdot)$. In Sections 4.1 and 4.2 we describe numerical methods that give strategies whose Bayes risks are within any given positive margin of the minimum; the following structural results and observations are needed along the way.

Because the functions $V(\cdot)$ and $h_i(\cdot)$, $i \in I$, are continuous by Corollary 3.6 and (10), the sets $\Gamma_{\infty,i}$, $i \in I$, are closed. They are also convex, since $V(\cdot)$ is concave by the same corollary and every $h_i(\cdot)$ is affine. Indeed, if we fix $i \in I$, $\vec{\pi}_1, \vec{\pi}_2 \in \Gamma_{\infty,i}$ and $\alpha \in [0,1]$, then
$$V(\alpha\vec{\pi}_1 + (1-\alpha)\vec{\pi}_2) \ge \alpha V(\vec{\pi}_1) + (1-\alpha)V(\vec{\pi}_2) = \alpha h_i(\vec{\pi}_1) + (1-\alpha)h_i(\vec{\pi}_2) = h_i(\alpha\vec{\pi}_1 + (1-\alpha)\vec{\pi}_2) \ge V(\alpha\vec{\pi}_1 + (1-\alpha)\vec{\pi}_2).$$
Hence $V(\alpha\vec{\pi}_1 + (1-\alpha)\vec{\pi}_2) = h_i(\alpha\vec{\pi}_1 + (1-\alpha)\vec{\pi}_2)$, and $\alpha\vec{\pi}_1 + (1-\alpha)\vec{\pi}_2 \in \Gamma_{\infty,i}$.

Proposition 3.7 and the next proposition, whose proof is in the appendix, show that the stopping region $\Gamma_\infty$ always includes a nonempty open neighborhood of every corner of the $(M-1)$-simplex $E$. However, some of the sets $\Gamma_{\infty,i}$, $i \in I$, may be empty in general, unless $a_{ij} > 0$ for every $1 \le i \ne j \le M$, in which case for some $\bar{p}_i < 1$, $i \in I$, we have $\Gamma_{\infty,i} \supseteq \{\vec{\pi} \in E : \pi_i \ge \bar{p}_i\} \ne \emptyset$ for every $i \in I$.
Proposition 4.1. Let $w : E \mapsto \mathbb{R}_+$ be a bounded function. For every $i \in I$, define
$$\bar{p}_i := \Big[1 - \frac{1}{2\lambda_M \max_k a_{ik}}\Big]^+, \quad (41)$$
and, whenever $\lambda_i > \lambda_1$,
$$p_i^* := \inf\Bigg\{p \in [\bar{p}_i, 1) : \frac{p}{\lambda_i(1-p)}\Bigg[1 - \Big(\frac{1-\bar{p}_i}{\bar{p}_i}\cdot\frac{p}{1-p}\Big)^{-\lambda_i/(\lambda_i - \lambda_1)}\Bigg] \ge \max_k a_{ik}\Bigg\}. \quad (42)$$
Then $\bar{p} := (\max_{i : \lambda_i = \lambda_1} \bar{p}_i) \vee (\max_{i : \lambda_i > \lambda_1} p_i^*) < 1$, and
$$\{\vec{\pi} \in E : J_0 w(\vec{\pi}) = h(\vec{\pi})\} \supseteq \bigcup_{i=1}^M \{\vec{\pi} \in E : \pi_i \ge \bar{p}\}, \quad (43)$$
and if $a_{ij} > 0$ for every $1 \le i \ne j \le M$, then
$$\{\vec{\pi} \in E : J_0 w(\vec{\pi}) = h_i(\vec{\pi})\} \supseteq \Bigg\{\vec{\pi} \in E : \pi_i \ge \bar{p} \vee \frac{\max_k a_{ik}}{(\min_{j \ne i} a_{ji}) + (\max_k a_{ik})}\Bigg\}, \quad i \in I. \quad (44)$$
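The thresholds (41)-(42) are easy to evaluate numerically. The following sketch assumes the formulas as reconstructed above and locates $p_i^*$ by a crude grid search; the rates and costs are illustrative.

```python
import math

def p_bar(i, lam, a):
    """Threshold (41): [1 - 1/(2 lam_M max_k a_ik)]^+ ."""
    amax = max(c for k, c in enumerate(a[i]) if k != i)
    return max(0.0, 1.0 - 1.0 / (2.0 * max(lam) * amax))

def p_star(i, lam, a, steps=100000):
    """Threshold (42), for lam_i > lam_1: smallest p in [p_bar_i, 1)
    satisfying the inequality, found by grid search (illustrative only)."""
    pb = p_bar(i, lam, a)
    amax = max(c for k, c in enumerate(a[i]) if k != i)
    li, l1 = lam[i], min(lam)
    expo = -li / (li - l1)
    for n in range(1, steps):
        p = pb + (1.0 - pb) * n / steps
        ratio = (1.0 - pb) / pb * p / (1.0 - p)
        if p / (li * (1.0 - p)) * (1.0 - ratio ** expo) >= amax:
            return p
    return 1.0

lam = [5.0, 10.0, 15.0]
a = [[0, 2, 2], [2, 0, 2], [2, 2, 0]]   # a_ij = 2 for i != j, as in Section 5
print(round(p_bar(1, lam, a), 4), round(p_star(1, lam, a), 4))
```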
For the auxiliary problems in (20), let us define the stopping and continuation regions $\Gamma_n := \{\vec{\pi} \in E : V_n(\vec{\pi}) = h(\vec{\pi})\}$ and $C_n := E \setminus \Gamma_n$, respectively, for every $n \ge 0$. Similar arguments as above imply that
$$\Gamma_n = \bigcup_{i=1}^M \Gamma_{n,i}, \quad \text{where } \Gamma_{n,i} := \{\vec{\pi} \in E : V_n(\vec{\pi}) = h_i(\vec{\pi})\}, \; i \in I,$$
are closed and convex subsets of $E$. Since $V(\cdot) \le \cdots \le V_1(\cdot) \le V_0(\cdot) \equiv h(\cdot)$, we have
$$\bigcup_{i=1}^M \{\vec{\pi} \in E : \pi_i \ge p_i^*\} \subseteq \Gamma_\infty \subseteq \cdots \subseteq \Gamma_n \subseteq \Gamma_{n-1} \subseteq \cdots \subseteq \Gamma_1 \subseteq \Gamma_0 \equiv E,$$
$$\{\vec{\pi} \in E : \pi_1 < p_1^*, \ldots, \pi_M < p_M^*\} \supseteq C_\infty \supseteq \cdots \supseteq C_n \supseteq C_{n-1} \supseteq \cdots \supseteq C_1 \supseteq C_0 \equiv \emptyset. \quad (45)$$
Remark 6. If $J_0 h(\cdot) = h(\cdot)$, then (26) and Corollary 3.6 imply $V(\cdot) = h(\cdot)$, so that it is optimal to stop immediately and select the hypothesis that gives the smallest terminal cost. A sufficient condition for this is that $1/\lambda_i \ge \max_{k \in I} a_{ki}$ for every $i \in I$; i.e., the expected cost of (or waiting time for) an additional observation is higher than the maximum cost of a wrong decision. Note that $h(\cdot) \ge J_0 h(\cdot)$ by the definition of the operator $J_0$ in (23), and since $h(\cdot) \ge 0$, we have
$$J_0 h(\vec{\pi}) \ge \inf_{t \in [0,\infty]} \Bigg[\int_0^t \sum_{i=1}^M \pi_i e^{-\lambda_i s}\,ds + \Big(\sum_{i=1}^M \pi_i e^{-\lambda_i t}\Big)\, h(x(t,\vec{\pi}))\Bigg]$$
$$= \inf_{k,t} \Bigg[\int_0^t \sum_{i=1}^M \pi_i e^{-\lambda_i s}\,ds + \Big(\sum_{i=1}^M \pi_i e^{-\lambda_i t}\Big)\, h_k(x(t,\vec{\pi}))\Bigg] = \inf_{k,t} \sum_{i=1}^M \pi_i \Bigg[\frac{1}{\lambda_i} + \Big(a_{k,i} - \frac{1}{\lambda_i}\Big) e^{-\lambda_i t}\Bigg].$$
If $1/\lambda_i \ge \max_k a_{ki}$ for every $i \in I$, then the last double minimization is attained at $t = 0$ and equals $h(\vec{\pi})$; therefore, we obtain $h(\cdot) \ge J_0 h(\cdot) \ge h(\cdot)$; i.e., immediate stopping is optimal.
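The double minimization at the end of the remark can be checked numerically. The sketch below evaluates $\inf_{k,t} \sum_i \pi_i[1/\lambda_i + (a_{ki} - 1/\lambda_i)e^{-\lambda_i t}]$ on a grid under the sufficient condition $1/\lambda_i \ge \max_k a_{ki}$, with illustrative rates and costs, and confirms that $t = 0$ attains the minimum value $h(\vec{\pi})$.

```python
import math

def phi(k, t, pi, lam, a):
    """Objective of the double minimization in the remark above."""
    return sum(p * (1.0 / l + (a[k][i] - 1.0 / l) * math.exp(-l * t))
               for i, (p, l) in enumerate(zip(pi, lam)))

def h(pi, a):
    """Minimum expected terminal decision cost h(pi) = min_k sum_i a_ki pi_i."""
    return min(sum(a[k][i] * pi[i] for i in range(len(pi))) for k in range(len(pi)))

lam = [0.2, 0.25, 0.4]                 # 1/lam_i >= 2 = max_k a_ki for every i
a = [[0, 2, 2], [2, 0, 2], [2, 2, 0]]
pi = [0.5, 0.3, 0.2]

best = min(phi(k, j * 0.05, pi, lam, a) for k in range(3) for j in range(200))
print(best, h(pi, a))                  # the grid minimum is attained at t = 0
```

Each summand is nondecreasing in $t$ when $a_{ki} \le 1/\lambda_i$, which is why waiting cannot beat immediate stopping here.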
4.1 Nearly optimal strategies

In Section 3, the value function $V(\cdot)$ of (6) is approximated by the sequence $\{V_n(\cdot)\}_{n \ge 1}$, whose elements can be obtained by applying the operator $J_0$ successively to the function $h(\cdot)$ in (10); see (26) and Proposition 3.5. According to Proposition 3.1, $V(\cdot)$ can be approximated this way within any desired positive level of precision after a finite number of iterations. More precisely, (21) implies that
$$N \ge 1 + \frac{\lambda_M}{\varepsilon^2}\Big(\sum_{i,j} a_{ij}\Big)^3 \implies \|V_N - V\| := \sup_{\vec{\pi} \in E} |V_N(\vec{\pi}) - V(\vec{\pi})| \le \varepsilon, \quad \forall \varepsilon > 0. \quad (46)$$
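As a quick numerical illustration of the sufficient condition (46) (assuming the bound as reconstructed above), the iteration count can be computed directly; the rates and costs are illustrative.

```python
import math

def iterations_needed(eps, lam_M, a_sum):
    """Smallest integer N with N >= 1 + (lam_M / eps**2) * a_sum**3,
    which by (46) guarantees ||V_N - V|| <= eps."""
    return math.ceil(1.0 + lam_M * a_sum ** 3 / eps ** 2)

# Three hypotheses with a_ij = 2 off the diagonal: sum_{i,j} a_ij = 12.
N = iterations_needed(0.5, 15.0, 12.0)
print(N)
# Sanity check against the error bound of (21): (sum a)^{3/2} sqrt(lam_M/(N-1))
print(12.0 ** 1.5 * math.sqrt(15.0 / (N - 1)))
```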
In Section 4.2, we give a numerical algorithm to calculate the functions $V_1, V_2, \ldots$ successively. By using those functions, we describe here two $\varepsilon$-optimal strategies.

Recall from Proposition 3.5 the $\varepsilon$-optimal stopping times $S_n^{\varepsilon}$ for the truncated problems $V_n(\cdot)$, $n \ge 1$, in (20). For a given $\varepsilon > 0$, if we fix $N$ by (46) such that $\|V_N - V\| \le \varepsilon/2$, then $S_N^{\varepsilon/2}$ is $\varepsilon$-optimal for $V(\cdot)$ in the sense that
$$E^{\vec{\pi}}\Big[S_N^{\varepsilon/2} + h\big(\vec{\Pi}(S_N^{\varepsilon/2})\big)\Big] \le V_N(\vec{\pi}) + \varepsilon/2 \le V(\vec{\pi}) + \varepsilon \quad \text{for every } \vec{\pi} \in E,$$
and $(S_N^{\varepsilon/2}, d(S_N^{\varepsilon/2}))$ is an $\varepsilon$-optimal Bayes strategy for the sequential hypothesis testing problem of (6); recall the definition of $d(\cdot)$ from Proposition 2.1.

As described by Proposition 3.5, the stopping rule $S_N^{\varepsilon/2}$ requires that we wait until the lesser of $r_{N-1}^{\varepsilon/4}(\vec{\pi})$ and the first jump time $\sigma_1$ of $X$. If $r_{N-1}^{\varepsilon/4}(\vec{\pi})$ comes first, then we stop and select the hypothesis $H_i$, $i \in I$, that gives the smallest terminal cost $h_i\big(x(r_{N-1}^{\varepsilon/4}(\vec{\pi}), \vec{\pi})\big)$. Otherwise, we update the posterior probabilities at the first jump time to $\vec{\Pi}(\sigma_1)$ and wait until the lesser of $r_{N-2}^{\varepsilon/4}(\vec{\Pi}(\sigma_1))$ and the next jump time $\sigma_1 + \sigma_1 \circ \theta_{\sigma_1}$ of the process $X$. If $r_{N-2}^{\varepsilon/4}(\vec{\Pi}(\sigma_1))$ comes first, then we stop and select a hypothesis; otherwise, we wait as before. We stop at the $N$th jump time of the process $X$ and select the best hypothesis if we have not stopped earlier.
Let $N$ again be an integer as in (46) with $\varepsilon/2$ instead of $\varepsilon$. Then $(U_{\varepsilon/2}^{(N)}, d(U_{\varepsilon/2}^{(N)}))$ is another $\varepsilon$-optimal strategy, if we define
$$U_{\varepsilon/2}^{(N)} := \inf\{t \ge 0 : h(\vec{\Pi}(t)) \le V_N(\vec{\Pi}(t)) + \varepsilon/2\}. \quad (47)$$
Indeed, $E^{\vec{\pi}}[U_{\varepsilon/2}^{(N)} + h(\vec{\Pi}(U_{\varepsilon/2}^{(N)}))] \le V_N(\vec{\pi}) + \varepsilon/2 \le V(\vec{\pi}) + \varepsilon$ by arguments similar to those in the proof of Proposition 3.11. After we calculate $V_N(\cdot)$ as in Section 4.2, this strategy requires that we observe $X$ until the process $\vec{\Pi}$ enters the region $\{\vec{\pi} \in E : V_N(\vec{\pi}) - h(\vec{\pi}) \ge -\varepsilon/2\}$ and then select the hypothesis with the smallest terminal cost as before.
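The stopping rule (47) only requires a pointwise comparison of $h$ with a precomputed approximation of the value function. The following sketch shows the region test; `V_stub` is a hypothetical stand-in for $V_N$, not the algorithm's actual output.

```python
def h(pi, a):
    """Minimum terminal decision cost h(pi) = min_i sum_j a_ij pi_j."""
    return min(sum(row[j] * pi[j] for j in range(len(pi))) for row in a)

def should_stop(pi, a, V_N, eps):
    """Stopping test of (47): stop as soon as h(pi) <= V_N(pi) + eps/2."""
    return h(pi, a) <= V_N(pi) + eps / 2.0

a = [[0, 2, 2], [2, 0, 2], [2, 2, 0]]
V_stub = lambda pi: 0.9 * h(pi, a)     # hypothetical stand-in for V_N

print(should_stop([0.90, 0.06, 0.04], a, V_stub, eps=0.1))  # near a corner: stop
print(should_stop([0.40, 0.30, 0.30], a, V_stub, eps=0.1))  # uncertain: continue
```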
4.2 Computing the successive approximations

For the implementation of the nearly optimal strategies described above, we need to compute $V_1(\cdot), V_2(\cdot), \ldots, V_N(\cdot)$ in (20) for any given integer $N \ge 1$.

If the arrival rates in (2) are distinct, i.e. $\lambda_1 < \lambda_2 < \cdots < \lambda_M$, then by Lemma 2.2 the entrance time $t_{\vec{\pi}}(p_1^*, \ldots, p_M^*)$ of the path $t \mapsto x(t, \vec{\pi})$ in (14) into the region $\bigcup_{i=1}^M \{\vec{\pi} \in E : \pi_i \ge p_i^*\}$ defined in Proposition 4.1 is bounded from above, uniformly in $\vec{\pi} \in E$, by some finite number $s(p_1^*, \ldots, p_M^*)$. Therefore, the minimization in the problem $V_{n+1}(\vec{\pi}) = J_0 V_n(\vec{\pi}) = \inf_{t \in [0,\infty]} J V_n(t, \vec{\pi})$ can be restricted to the compact interval $[0, t_{\vec{\pi}}(p_1^*, \ldots, p_M^*)] \subset [0, s(p_1^*, \ldots, p_M^*)]$ for every $n \ge 1$ and $\vec{\pi} \in E$, thanks to Corollary 3.9. On the other hand, if the arrival rates $\lambda_1 = \cdots = \lambda_M$ are all the same, then this minimization problem over $t \in [0, \infty]$
reduces to a simple comparison of two numbers as in (29), because the operator $J_0$ degenerates; see Remark 3.
If some of the arrival rates are equal, then $t_{\vec{\pi}}(p_1^*, \ldots, p_M^*)$ is no longer uniformly bounded in $\vec{\pi} \in E$; in fact, it can be infinite for some $\vec{\pi} \in E$. Excluding the simple case of identical arrival rates, one can still restrict the minimization in $\inf_{t \in [0,\infty]} J V_n(t, \vec{\pi})$ to a compact interval independent of $\vec{\pi} \in E$ and control the difference arising from this truncation. Note that the operator $J$ defined in (24) satisfies
$$\sup_{\vec{\pi} \in E} |Jw(t, \vec{\pi}) - Jw(\infty, \vec{\pi})| \le \frac{1}{\lambda_1}\, e^{-\lambda_1 t}\Bigg[1 + (\lambda_1 + \lambda_M) \sum_{i,j} a_{ij}\Bigg] \quad \text{for every } t \ge 0$$
and every $w : E \mapsto \mathbb{R}_+$ such that $\sup_{\vec{\pi} \in E} w(\vec{\pi}) \le \sum_{i,j} a_{ij}$; recall Remark 1. If we define
$$t(\delta) := -\frac{1}{\lambda_1} \ln\Bigg(\frac{(\delta/2)\,\lambda_1}{1 + (\lambda_1 + \lambda_M)\sum_{i,j} a_{ij}}\Bigg) \quad \text{for every } \delta > 0, \quad (48)$$
$$J_{0,t}\,w(\vec{\pi}) := \inf_{s \in [0,t]} Jw(s, \vec{\pi}) \quad \text{for every bounded } w : E \mapsto \mathbb{R},\; t \ge 0,\; \vec{\pi} \in E, \quad (49)$$
then for every $\delta > 0$, $t_1, t_2 \ge t(\delta)$ and $\vec{\pi} \in E$, we have
$$|Jw(t_1, \vec{\pi}) - Jw(t_2, \vec{\pi})| \le |Jw(t_1, \vec{\pi}) - Jw(\infty, \vec{\pi})| + |Jw(\infty, \vec{\pi}) - Jw(t_2, \vec{\pi})| \le \frac{\delta}{2} + \frac{\delta}{2} = \delta,$$
and therefore $\sup_{\vec{\pi} \in E} |J_{0,t(\delta)}\, w(\vec{\pi}) - J_0 w(\vec{\pi})| \le \delta$. Let us now define
$$V_{\delta,0}(\cdot) := h(\cdot) \quad \text{and} \quad V_{\delta,n+1}(\cdot) := J_{0,t(\delta)}\, V_{\delta,n}(\cdot), \quad n \ge 0,\; \delta > 0. \quad (50)$$
Lemma 4.2. For every $\delta > 0$ and $n \ge 0$, we have $V_n(\cdot) \le V_{\delta,n}(\cdot) \le n\delta + V_n(\cdot)$.

Proof. The inequalities hold for $n = 0$, since $V_{\delta,0}(\cdot) = V_0(\cdot) = h(\cdot)$ by definition. Suppose that they hold for some $n \ge 0$. Then $V_{n+1}(\cdot) = J_0 V_n(\cdot) \le J_0 V_{\delta,n}(\cdot) \le J_{0,t(\delta)} V_{\delta,n}(\cdot) = V_{\delta,n+1}(\cdot)$, which proves the first inequality for $n+1$. To prove the second inequality, note that $V_{\delta,n}(\cdot)$ is bounded from above by $h(\cdot) \le \sum_{i,j} a_{ij}$. Then by (49) we have, for every $\vec{\pi} \in E$, that
$$V_{\delta,n+1}(\vec{\pi}) = \inf_{t \in [0,\, t(\delta)]} J V_{\delta,n}(t, \vec{\pi}) \le \inf_{t \in [0,\infty]} J V_{\delta,n}(t, \vec{\pi}) + \delta$$
$$\le \inf_{t \in [0,\infty]} J V_n(t, \vec{\pi}) + \int_0^\infty \sum_{i=1}^M \pi_i e^{-\lambda_i u} \lambda_i\, n\delta\, du + \delta = V_{n+1}(\vec{\pi}) + (n+1)\delta,$$
and the proof is complete by induction on $n \ge 0$. $\square$
If some (but not all) of the arrival rates are equal, then Lemma 4.2 lets us approximate the function $V(\cdot)$ of (9) by the functions $\{V_{\delta,n}(\cdot)\}_{\delta>0,\, n\ge1}$ in (50). In this case, there is an additional loss of precision due to the truncation at $t(\delta)$, and this is compensated by increasing the number
of iterations. For example, for every $\varepsilon > 0$, the choices of $N$ and $\delta > 0$ such that
$$N \ge 1 + \frac{1}{\varepsilon^2}\Bigg[1 + \sqrt{\lambda_M}\Big(\sum_{i,j} a_{ij}\Big)^{3/2}\Bigg]^2 \quad \text{and} \quad \delta \le \frac{1}{N\sqrt{N-1}}$$
imply that
$$\|V_{\delta,N} - V\| \le \|V_{\delta,N} - V_N\| + \|V_N - V\| \le N\delta + \Big(\sum_{i,j} a_{ij}\Big)^{3/2}\sqrt{\frac{\lambda_M}{N-1}} \le \varepsilon \quad (51)$$
by (21) and Lemma 4.2. Hence, the function $V(\cdot)$ in (9) can be calculated at any desired precision level $\varepsilon > 0$ after some finite number $N = N(\varepsilon)$ of applications of the operator $J_{0,t(\delta)}$ in (49), for some $\delta = \delta(N) > 0$, to the function $h(\cdot)$ of (10), and one can choose
$$N(\varepsilon) := \text{smallest integer greater than or equal to } 1 + \frac{1}{\varepsilon^2}\Bigg[1 + \sqrt{\lambda_M}\Big(\sum_{i,j} a_{ij}\Big)^{3/2}\Bigg]^2, \quad (52)$$
$$\delta(N) := \frac{1}{N\sqrt{N-1}} \quad \text{for every } \varepsilon > 0 \text{ and } N > 1. \quad (53)$$
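The prescriptions (52)-(53) and the bound (51) can be exercised directly (assuming the constants as reconstructed above; the rates and costs are illustrative):

```python
import math

def N_of(eps, lam_M, a_sum):
    """(52): smallest integer >= 1 + (1/eps^2)(1 + sqrt(lam_M) a_sum^{3/2})^2."""
    return math.ceil(1.0 + (1.0 + math.sqrt(lam_M) * a_sum ** 1.5) ** 2 / eps ** 2)

def delta_of(N):
    """(53): delta(N) = 1 / (N sqrt(N - 1))."""
    return 1.0 / (N * math.sqrt(N - 1.0))

eps, lam_M, a_sum = 0.5, 15.0, 12.0
N = N_of(eps / 2, lam_M, a_sum)        # N(eps/2), as used for the strategy below
d = delta_of(N)
err = N * d + a_sum ** 1.5 * math.sqrt(lam_M / (N - 1))   # bound (51)
print(N, d, err)                        # err stays below eps/2
```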
If we set $N = N(\varepsilon/2)$ and $\delta = \delta(N)$, and define $U_{\varepsilon/2}^{(\delta,N)} := \inf\{t \ge 0 : h(\vec{\Pi}(t)) \le V_{\delta,N}(\vec{\Pi}(t)) + \varepsilon/2\}$, then for every $\vec{\pi} \in E$ we have $E^{\vec{\pi}}[U_{\varepsilon/2}^{(\delta,N)} + h(\vec{\Pi}(U_{\varepsilon/2}^{(\delta,N)}))] \le V_{\delta,N}(\vec{\pi}) + \varepsilon/2 \le V(\vec{\pi}) + \varepsilon$, as in the proof of Proposition 3.11. As in the case of (47), observing the process $X$ until the stopping time $U_{\varepsilon/2}^{(\delta,N)}$ and then selecting a hypothesis with the smallest expected terminal cost gives an $\varepsilon$-optimal strategy for $V(\cdot)$ in (6).
Figure 2 summarizes our discussion in the form of an algorithm that computes the elements of the sequence $\{V_n(\cdot)\}_{n\ge1}$ or $\{V_{\delta,n}(\cdot)\}_{n\ge1}$ and approximates the function $V(\cdot)$ in (9) numerically. This algorithm is used in the numerical examples of Section 5.
Figure 2. Numerical algorithm that approximates the value function $V(\cdot)$ in (9).
5. Examples

Here we solve several examples by using the numerical methods of the previous section. We restrict ourselves to problems with $M = 3$ and $M = 4$ alternative hypotheses, since the value functions and stopping regions can then be drawn, which helps to check numerically the theoretical conclusions of Section 4 on their structure; compare them also to Dayanik and Sezer's [7] examples with $M = 2$ alternative hypotheses. In each example, an approximate value function is calculated by using the numerical algorithm in Figure 2 with a negligible value of $\varepsilon$.

Figure 3 displays the results of two examples on testing the unknown arrival rate of a simple Poisson process. In both examples, the costs of wrong terminal decisions are the same, $a_{ij} = 2$ for $i \ne j$.
In Figure 3(a), three alternative hypotheses $H_1 : \lambda = 5$, $H_2 : \lambda = 10$, $H_3 : \lambda = 15$ are tested sequentially. The value function and the boundaries between stopping and continuation regions are drawn from two perspectives. As noted in Section 4, the stopping region consists of convex and closed neighborhoods of the corners of the 2-simplex $E$ at the bottom of the figures on the left. The value function above $E$ looks concave and continuous, in agreement with Corollary 3.6. In Figure 3(b), four hypotheses $H_1 : \lambda = 5$, $H_2 : \lambda = 10$, $H_3 : \lambda = 15$ and $H_4 : \lambda = 20$ are tested sequentially, and the stopping regions $\Gamma_{\infty,1}, \ldots, \Gamma_{\infty,4}$ are displayed on the 3-simplex $E$. Note that the first three hypotheses are the same as those of the previous example. Therefore, this problem reduces to the previous one if $\pi_4 = 0$, and the intersection
Figure 3. Testing the hypotheses $H_1 : \lambda = 5$, $H_2 : \lambda = 10$, $H_3 : \lambda = 15$ in (a), and $H_1 : \lambda = 5$, $H_2 : \lambda = 10$, $H_3 : \lambda = 15$, $H_4 : \lambda = 20$ in (b), about the unknown arrival rate of a simple Poisson process. Wrong terminal decision costs are $a_{ij} = 2$ for $i \ne j$ in both examples. In (a), the approximations of the function $V(\cdot)$ in (6)-(9) and the stopping regions $\Gamma_{\infty,1}, \ldots, \Gamma_{\infty,3}$ in (40) are displayed from two perspectives. In (b), the stopping regions $\Gamma_{\infty,1}, \ldots, \Gamma_{\infty,4}$ are displayed as the darker regions in the 3-simplex $E$. Observe that in (b) $\pi_4$ equals zero on the upper-left $H_1H_2H_3$-face of the tetrahedron, and the stopping regions on that face coincide with those of the example in (a), as expected.
of its stopping regions with the $H_1H_2H_3$-face of the simplex $E$ coincides with the stopping regions of the previous problem.
Next we suppose that the marks of a compound Poisson process $X$ take values in $\{1,2,3,4,5\}$ and that their unknown common distribution $\nu(\cdot)$ is one of the alternatives
$$\nu_1 = \Big(\frac{1}{15}, \frac{2}{15}, \frac{3}{15}, \frac{4}{15}, \frac{5}{15}\Big), \quad \nu_2 = \Big(\frac{3}{15}, \frac{3}{15}, \frac{3}{15}, \frac{3}{15}, \frac{3}{15}\Big), \quad \nu_3 = \Big(\frac{5}{15}, \frac{4}{15}, \frac{3}{15}, \frac{2}{15}, \frac{1}{15}\Big). \quad (54)$$
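At an arrival with mark $y$, the posterior weight of hypothesis $i$ is updated in proportion to $\pi_i \lambda_i \nu_i(y)$, consistent with the likelihood $L_i$ appearing in Appendix A. A sketch with the mark distributions of (54); the rate triple is one of those used below.

```python
def jump_update(pi, lam, nus, y):
    """Posterior at an arrival with mark y in {1,...,5}: the new weight of
    hypothesis i is proportional to pi_i * lam_i * nu_i(y)."""
    w = [p * l * nu[y - 1] for p, l, nu in zip(pi, lam, nus)]
    s = sum(w)
    return [v / s for v in w]

nu1 = [1/15, 2/15, 3/15, 4/15, 5/15]
nu2 = [3/15] * 5
nu3 = [5/15, 4/15, 3/15, 2/15, 1/15]
lam = [1.0, 3.0, 5.0]

# Mark y = 5 has likelihood 5/15 under nu1, 3/15 under nu2, 1/15 under nu3,
# but the arrival itself also favors the larger rates.
print(jump_update([1/3, 1/3, 1/3], lam, [nu1, nu2, nu3], 5))
```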
For three different sets of alternative arrival rates $(\lambda_1, \lambda_2, \lambda_3)$ in (2), namely $(1,3,5)$, $(1,3,10)$ and $(0.1,3,10)$, we solve the problem (6) and calculate the corresponding minimum Bayes risk functions $V^{(1)}(\cdot), V^{(2)}(\cdot), V^{(3)}(\cdot)$, respectively. The terminal decision costs are the same, $a_{ij} = 2$ for every $1 \le i \ne j \le 3$. In Figure 4(a), the functions $V^{(1)}, V^{(2)}, V^{(3)}$ and the boundaries of the stopping regions are plotted. The figure suggests that $V^{(1)}(\cdot) \ge V^{(2)}(\cdot) \ge V^{(3)}(\cdot)$, as expected: the wider the alternative arrival rates are stretched from each other, the easier it is to identify the correct rate.

Figure 4(b) displays the minimum Bayes risks $V^{(4)}, V^{(5)}, V^{(6)}$ and the boundaries of the stopping regions of three more sequential hypothesis-testing problems with $M = 3$ alternative hypotheses. For each of those three problems, the alternative arrival rates $(\lambda_1, \lambda_2, \lambda_3) = (3,3,3)$ are the same, and the common mark distribution $\nu(\cdot)$ is Gamma$(2, \beta)$ with scale parameter 2 and unknown shape parameter $\beta$. The alternative $\beta$-values under the three hypotheses are $(3,6,9)$, $(1,6,9)$ and $(1,6,15)$ for the three problems $V^{(4)}, V^{(5)}, V^{(6)}$, respectively. Figure 4(b) suggests that $V^{(4)}(\cdot) \ge V^{(5)}(\cdot) \ge V^{(6)}(\cdot)$, as expected, since the alternative Gamma distributions are easier to distinguish as the shape parameters are stretched apart from each other.
Figure 4. Impact of different alternative hypotheses in (2) about the unknown characteristics of a compound Poisson process on the minimum Bayes risk in (6) and on its stopping regions. On the left, $V^{(1)}, V^{(2)}, V^{(3)}$ are the minimum Bayes risks when (i) the alternative arrival rates $(\lambda_1, \lambda_2, \lambda_3)$ are $(1,3,5)$, $(1,3,10)$, $(0.1,3,10)$, respectively, and (ii) the alternative mark distributions on $\{1,2,3,4,5\}$ are given by (54). On the right, $V^{(4)}, V^{(5)}, V^{(6)}$ are the minimum Bayes risks when (i) the alternative arrival rates $(\lambda_1, \lambda_2, \lambda_3) \equiv (3,3,3)$ are the same, and (ii) the marks are Gamma-distributed random variables with common scale parameter 2 and unknown shape parameter alternatives $(3,6,9)$, $(1,6,9)$, $(1,6,15)$, respectively.
Furthermore, the kinks of the functions $V^{(4)}(\cdot), V^{(5)}(\cdot), V^{(6)}(\cdot)$ suggest that these functions are not smooth; compare with $V^{(1)}(\cdot), V^{(2)}(\cdot), V^{(3)}(\cdot)$ in Figure 4(a). Dayanik and Sezer [7] show for $M = 2$ that the function $V(\cdot)$ in (6) may not be differentiable, even in the interior of the continuation region, if the alternative arrival rates are equal. This result applies to the above example with $M = 3$, at least on the regions where one of the components of $\vec{\pi}$ equals zero and the example reduces to a two-hypothesis testing problem.
6. Alternative Bayes risks

The key facts behind the analysis of the problem (6) were that (i) the Bayes risk $R$ in (3) is the expectation of a special functional of the piecewise-deterministic Markov process $\vec{\Pi}$ in (8), (13)-(16), (ii) the stopping times of such processes are "predictable between jump times" in the sense of Lemma 3.2, and (iii) the minimum Bayes risk $V(\cdot)$ in (6) and (9) is approximated uniformly by the truncations $V_n(\cdot)$, $n \ge 1$, in (20) of the same problem at the arrival times of the observation process $X$. Fortunately, all of these facts remain valid for the other two problems in (6),
$$V'(\vec{\pi}) := \inf_{(\tau,d) \in \mathcal{A}} R'(\vec{\pi}; \tau, d) \quad \text{and} \quad V''(\vec{\pi}) := \inf_{(\tau,d) \in \mathcal{A}} R''(\vec{\pi}; \tau, d), \quad (55)$$
of the alternative Bayes risks $R'$ and $R''$ in (4) and (5), respectively. Minor changes to the proof of Proposition 2.1 give
$$V'(\vec{\pi}) = \inf_{\tau \in \mathbb{F}} E^{\vec{\pi}}\big[N_\tau + 1_{\{\tau < \infty\}}\, h(\vec{\Pi}(\tau))\big], \quad \vec{\pi} \in E,$$
$$V''(\vec{\pi}) = \inf_{\tau \in \mathbb{F}} E^{\vec{\pi}}\Bigg[\sum_{k=1}^{N_\tau} e^{-\rho\sigma_k} f(Y_k) + e^{-\rho\tau}\, h(\vec{\Pi}(\tau))\Bigg], \quad \vec{\pi} \in E, \quad (56)$$
in terms of the minimum terminal decision cost function $h(\cdot)$ in (10). Moreover, for all a.s.-finite optimal $\mathbb{F}$-stopping times $\tau'$ and $\tau''$ of the problems in (56), the admissible strategies $(\tau', d(\tau'))$ and $(\tau'', d(\tau''))$ attain the minimum Bayes risks $V'(\cdot)$ and $V''(\cdot)$ in (55), respectively, where the random variable $d(\tau) \in \arg\min_{i \in I} \sum_{j \in I} a_{ij}\, \Pi_j(\tau)$, $\tau \ge 0$, is the same as in Proposition 2.1. The sequence of functions
$$V'_n(\vec{\pi}) := \inf_{\tau \in \mathbb{F}} E^{\vec{\pi}}\big[n \wedge N_\tau + 1_{\{\tau < \infty\}}\, h(\vec{\Pi}(\tau \wedge \sigma_n))\big], \quad n \ge 0, \quad \text{and}$$
$$V''_n(\vec{\pi}) := \inf_{\tau \in \mathbb{F}} E^{\vec{\pi}}\Bigg[\sum_{k=1}^{n \wedge N_\tau} e^{-\rho\sigma_k} f(Y_k) + e^{-\rho(\tau \wedge \sigma_n)}\, h(\vec{\Pi}(\tau \wedge \sigma_n))\Bigg], \quad n \ge 0, \quad (57)$$
obtained from the problems in (56) by truncation at the $n$th arrival time $\sigma_n$ ($\sigma_0 \equiv 0$) of the observation process $X$, approximate the functions $V'(\vec{\pi})$ and $V''(\vec{\pi})$, respectively, uniformly in $\vec{\pi} \in E$, and the error bounds are similar to (21); i.e.,
$$0 \le \|V'_n - V'\|,\; \|V''_n - V''\| \le \Big(\sum_{i,j} a_{ij}\Big)^{3/2} \sqrt{\frac{\lambda_M}{n-1}} \quad \text{for every } n \ge 1.$$
Moreover, the sequences $(V'_n(\cdot))_{n\ge1}$ and $(V''_n(\cdot))_{n\ge1}$ can be calculated successively as
$$V'_{n+1}(\vec{\pi}) = \inf_{t \in [0,\infty]} J' V'_n(t, \vec{\pi}), \quad n \ge 0, \quad \text{and} \quad V''_{n+1}(\vec{\pi}) = \inf_{t \in [0,\infty]} J'' V''_n(t, \vec{\pi}), \quad n \ge 0,$$
by using the operators $J'$ and $J''$, which act on bounded functions $w : E \mapsto \mathbb{R}$ according to
$$J'w(t, \vec{\pi}) := \int_0^t \sum_{i=1}^M \pi_i e^{-\lambda_i u} \lambda_i \big[1 + (S_i w)(x(u, \vec{\pi}))\big]\,du + \sum_{i=1}^M \pi_i e^{-\lambda_i t}\, h(x(t, \vec{\pi})),$$
$$J''w(t, \vec{\pi}) := \int_0^t \sum_{i=1}^M \pi_i e^{-(\rho+\lambda_i)u} \lambda_i \big[m_i + (S_i w)(x(u, \vec{\pi}))\big]\,du + \sum_{i=1}^M \pi_i e^{-(\rho+\lambda_i)t}\, h(x(t, \vec{\pi})) \quad (58)$$
for every $t \ge 0$; here $S_i$, $i \in I$, is the same operator as in (25), and
$$m_i := E\big[f(Y_1) \mid \Theta = i\big] = \int_{\mathbb{R}^d} f(y)\,\nu_i(dy) \equiv \int_{\mathbb{R}^d} f(y)\,p_i(y)\,\mu(dy), \quad i \in I,$$
is the conditional expectation of the random variable $f(Y_1)$ given that the correct hypothesis is $\Theta = i$. Recall from Section 1 that the negative part $f^-(\cdot)$ of the Borel function $f : \mathbb{R}^d \mapsto \mathbb{R}$ is $\nu_i$-integrable for every $i \in I$; therefore, the expectation $m_i$, $i \in I$, exists.
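For discrete mark distributions such as those in (54), the constants $m_i$ are finite sums; here is a small sketch with an illustrative mark cost $f(y) = y$ (the choice of $f$ is ours, not the paper's):

```python
def m(f, nu, support):
    """Conditional mark cost m_i = E[f(Y_1) | Theta = i] = sum_y f(y) nu_i(y)."""
    return sum(f(y) * p for y, p in zip(support, nu))

support = [1, 2, 3, 4, 5]
nu1 = [1/15, 2/15, 3/15, 4/15, 5/15]
nu2 = [3/15] * 5
nu3 = [5/15, 4/15, 3/15, 2/15, 1/15]

f = lambda y: y                        # illustrative cost per mark
print([m(f, nu, support) for nu in (nu1, nu2, nu3)])
```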
As in Proposition 3.11 and Corollary 3.12, the $\mathbb{F}$-stopping times
$$U'_0 := \inf\{t \ge 0 : V'(\vec{\Pi}(t)) = h(\vec{\Pi}(t))\} \quad \text{and} \quad U''_0 := \inf\{t \ge 0 : V''(\vec{\Pi}(t)) = h(\vec{\Pi}(t))\}$$
are optimal for the problems $V'(\cdot)$ and $V''(\cdot)$ in (56), respectively. The functions $V'(\cdot)$ and $V''(\cdot)$ are concave and continuous, and the nonempty stopping regions
$$\Gamma'_\infty := \{\vec{\pi} \in E : V'(\vec{\pi}) = h(\vec{\pi})\} \quad \text{and} \quad \Gamma''_\infty := \{\vec{\pi} \in E : V''(\vec{\pi}) = h(\vec{\pi})\}$$
are the unions of the closed and convex neighborhoods
$$\Gamma'_{\infty,i} := \Gamma'_\infty \cap \{\vec{\pi} \in E : h(\vec{\pi}) = h_i(\vec{\pi})\} \quad \text{and} \quad \Gamma''_{\infty,i} := \Gamma''_\infty \cap \{\vec{\pi} \in E : h(\vec{\pi}) = h_i(\vec{\pi})\}, \quad i \in I,$$
of the $M$ corners of the $(M-1)$-simplex $E$. It is optimal to stop at time $U'_0$ (respectively, $U''_0$) and select the hypothesis $H_i$, $i \in I$, if $\vec{\Pi}(U'_0) \in \Gamma'_{\infty,i}$ (respectively, $\vec{\Pi}(U''_0) \in \Gamma''_{\infty,i}$).
The numerical algorithm in Figure 2 approximates $V'(\cdot)$ in (56) with its outcome $V'_N(\cdot)$ or $V'_{\delta,N}(\cdot)$ if $J$, $\bar{p}_i$, $p_i^*$ are replaced with $J'$ in (58),
$$\bar{p}'_i := \Bigg[1 - \frac{\lambda_1}{2\lambda_M(\max_k a_{ik})}\Bigg] \vee \frac{\max_k a_{ik}}{(\min_{j\ne i} a_{ji}) + (\max_k a_{ik})},$$
$$p'_i := \inf\Bigg\{p_i \in [\bar{p}'_i, 1] : \frac{p_i}{1-p_i}\Bigg[1 - \Big(\frac{1-\bar{p}'_i}{\bar{p}'_i}\cdot\frac{p_i}{1-p_i}\Big)^{-\lambda_i/(\lambda_i-\lambda_1)}\Bigg] \ge \max_k a_{ik}\Bigg\},$$
respectively, for every $i \in I$. Similarly, its outcome $V''_N(\cdot)$ or $V''_{\delta,N}(\cdot)$ approximates the function $V''(\cdot)$ in (56) if in Figure 2 $J$, $\bar{p}_i$, $p_i^*$ are replaced with $J''$ in (58),
$$\bar{p}''_i := \Bigg[1 - \frac{\lambda_1}{2(\lambda_M+\rho)(\max_k a_{ik})}\Bigg] \vee \frac{\max_k a_{ik}}{(\min_{j\ne i} a_{ji}) + (\max_k a_{ik})},$$
$$p''_i := \inf\Bigg\{p_i \in [\bar{p}''_i, 1] : \frac{p_i \lambda_i}{(\lambda_i+\rho)(1-p_i)}\Bigg[1 - \Big(\frac{1-\bar{p}''_i}{\bar{p}''_i}\cdot\frac{p_i}{1-p_i}\Big)^{-(\lambda_i+\rho)/(\lambda_i-\lambda_1)}\Bigg] \ge \max_k a_{ik}\Bigg\},$$
respectively, for every $i \in I$. As in Section 4.1, these numerical solutions can be used to obtain nearly optimal strategies. For every $\varepsilon > 0$, if we fix $N$ such that $\|V'_N - V'\| \le \varepsilon/2$ and define $U'_{\varepsilon/2,N} := \inf\{t \ge 0 : h(\vec{\Pi}(t)) \le V'_N(\vec{\Pi}(t)) + \varepsilon/2\}$, then $(U'_{\varepsilon/2,N}, d(U'_{\varepsilon/2,N}))$ is an $\varepsilon$-optimal admissible strategy for the minimum Bayes risk $V'(\cdot)$ in (55). Similarly, if we fix $N$ such that $\|V''_N - V''\| \le \varepsilon/2$ and define $U''_{\varepsilon/2,N} := \inf\{t \ge 0 : h(\vec{\Pi}(t)) \le V''_N(\vec{\Pi}(t)) + \varepsilon/2\}$, then $(U''_{\varepsilon/2,N}, d(U''_{\varepsilon/2,N}))$ is an $\varepsilon$-optimal admissible strategy for the minimum Bayes risk $V''(\cdot)$ in (55). The proofs are omitted since they are similar to those discussed in detail for $V(\cdot)$ of (6). Beware, however, that the Bayes risk $R'$ in (4) does not immediately reduce to $R$ of (3), since the $(P^{\vec{\pi}}, \mathbb{F})$-compensator of the process $N_t$, $t \ge 0$, is $\int_0^t \sum_{i=1}^M \Pi_i(s)\lambda_i\,ds$, $t \ge 0$.

Figure 5 compares the minimum Bayes risk functions and the optimal stopping regions for a three-hypothesis testing problem. Comparison of (a) and (b) confirms that the Bayes risks $R$ and $R'$ are different. Since the stopping regions in (b) are larger than those in (a), optimal decision strategies arrive at a conclusion earlier under $R'$ than under $R$. As the discount rate $\rho$ decreases to zero, the minimum Bayes risk $V''(\cdot)$ and its stopping regions become larger and apparently converge to $V'(\cdot)$ and its stopping regions, respectively.
Acknowledgements
The authors thank the anonymous referee for careful reading and suggestions, which
improved the presentation of the manuscript. The research of Savas Dayanik was supported
by the Air Force Office of Scientific Research, under grant AFOSR-FA9550-06-1-0496.
The research of H. Vincent Poor and Semih O. Sezer was supported by the US Army
Pantheon Project.
Figure 5. The minimum Bayes risk functions $V(\cdot)$, $V'(\cdot)$ and $V''(\cdot)$ corresponding to the Bayes risks $R$ in (3), $R'$ in (4) and $R''$ in (5) with discount rates $\rho = 0.2, 0.4, 0.6, 0.8$, respectively, when three alternative hypotheses $H_1 : \lambda = 1$, $H_2 : \lambda = 2$ and $H_3 : \lambda = 3$ about the arrival rate $\lambda$ of a simple Poisson process are tested. In each figure, the stopping regions $\Gamma_{\infty,1}$, $\Gamma_{\infty,2}$ and $\Gamma_{\infty,3}$ in the neighborhoods of the corners at $H_1$, $H_2$ and $H_3$ are also displayed. The terminal decision costs are $a_{ij} = 10$ for every $1 \le i \ne j \le 3$, and in (c) through (f) it is assumed that $f(\cdot) \equiv 1$.
References

[1] Wald, A. and Wolfowitz, J., 1950, Bayes solutions of sequential decision problems, Annals of Mathematical Statistics, 21, 82-99.
[2] Blackwell, D. and Girshick, M., 1954, Theory of Games and Statistical Decisions (New York: John Wiley & Sons).
[3] Zacks, S., 1971, The Theory of Statistical Inference (New York: John Wiley & Sons).
[4] Shiryaev, A., 1978, Optimal Stopping Rules (New York: Springer-Verlag).
[5] Peskir, G. and Shiryaev, A., 2000, Sequential testing problems for Poisson processes, Annals of Statistics, 28(3), 837-859.
[6] Gapeev, P., 2002, Problems of the sequential discrimination of hypotheses for a compound Poisson process with exponential jumps, Uspekhi Matematicheskikh Nauk, 57(6(348)), 171-172.
[7] Dayanik, S. and Sezer, S.O., 2006, Sequential testing of simple hypotheses about compound Poisson processes, Stochastic Processes and their Applications, 116, 1892-1919.
[8] Øksendal, B., 2003, Stochastic Differential Equations: An Introduction with Applications, 6th ed., Universitext (Berlin: Springer-Verlag).
[9] Gugerli, U.S., 1986, Optimal stopping of a piecewise-deterministic Markov process, Stochastics, 19(4), 221-236.
[10] Davis, M.H.A., 1993, Markov Models and Optimization, Monographs on Statistics and Applied Probability, Vol. 49 (London: Chapman & Hall).
[11] Bayraktar, E., Dayanik, S. and Karatzas, I., 2006, Adaptive Poisson disorder problem, The Annals of Applied Probability, 16(3), 1190-1261.
[12] Dayanik, S. and Sezer, S.O., 2006, Compound Poisson disorder problem, Mathematics of Operations Research, 31(4), 649-672.
[13] Brémaud, P., 1981, Point Processes and Queues: Martingale Dynamics, Springer Series in Statistics (New York: Springer-Verlag).
Appendix A. Proofs of selected results

Proof of (11). The right-hand side of (11) is $\mathcal{F}_t$-measurable, and it is sufficient to show that
$$E^{\vec{\pi}}\big[1_A \cdot P^{\vec{\pi}}(\Theta = i \mid \mathcal{F}_t)\big] = E^{\vec{\pi}}\Bigg[1_A\, \frac{\pi_i e^{-\lambda_i t} \prod_{k=1}^{N_t} \lambda_i\, p_i(Y_k)}{\sum_{j=1}^M \pi_j e^{-\lambda_j t} \prod_{k=1}^{N_t} \lambda_j\, p_j(Y_k)}\Bigg]$$
for sets of the form $A := \{N_{t_1} = m_1, \ldots, N_{t_k} = m_k, (Y_1, \ldots, Y_{m_k}) \in B_{m_k}\}$ for every $k \ge 1$, $0 \equiv t_0 \le t_1 \le \cdots \le t_k = t$, $0 \equiv m_0 \le m_1 \le \cdots \le m_k$, and every Borel subset $B_{m_k}$ of $\mathbb{R}^{m_k \times d}$. If $A$ is as above, then $E^{\vec{\pi}}[1_A\, P^{\vec{\pi}}(\Theta = i \mid \mathcal{F}_t)] = E^{\vec{\pi}}[E^{\vec{\pi}}(1_{A \cap \{\Theta=i\}} \mid \mathcal{F}_t)] = E^{\vec{\pi}}[1_{A \cap \{\Theta=i\}}] = \pi_i P_i(A)$ becomes
$$\pi_i e^{-\lambda_i t_k} \lambda_i^{m_k} \prod_{n=1}^k \frac{(t_n - t_{n-1})^{m_n - m_{n-1}}}{(m_n - m_{n-1})!} \int_{B_{m_k}} \prod_{l=1}^{m_k} p_i(y_l)\,\mu(dy_l)$$
$$= \prod_{n=1}^k \frac{(t_n - t_{n-1})^{m_n - m_{n-1}}}{(m_n - m_{n-1})!} \int_{B_{m_k}} L_i(t_k, m_k; y_1, \ldots, y_{m_k}) \prod_{l=1}^{m_k} \mu(dy_l)$$
in terms of $L_i(t, m; y_1, \ldots, y_m) := \pi_i e^{-\lambda_i t} \prod_{l=1}^m \lambda_i\, p_i(y_l)$, $i \in I$. If $L(t, m; y_1, \ldots, y_m) := \sum_{i=1}^M L_i(t, m; y_1, \ldots, y_m)$, then $E^{\vec{\pi}}[1_A\, P^{\vec{\pi}}(\Theta = i \mid \mathcal{F}_t)]$ equals
$$\prod_{n=1}^k \frac{(t_n - t_{n-1})^{m_n - m_{n-1}}}{(m_n - m_{n-1})!} \int_{B_{m_k}} L(t_k, m_k; y_1, \ldots, y_{m_k})\, \frac{L_i(t_k, m_k; y_1, \ldots, y_{m_k})}{L(t_k, m_k; y_1, \ldots, y_{m_k})} \prod_{l=1}^{m_k} \mu(dy_l)$$
$$= \sum_{j=1}^M \pi_j e^{-\lambda_j t_k} \lambda_j^{m_k} \prod_{n=1}^k \frac{(t_n - t_{n-1})^{m_n - m_{n-1}}}{(m_n - m_{n-1})!} \int_{B_{m_k}} \frac{L_i(t_k, m_k; y_1, \ldots, y_{m_k})}{L(t_k, m_k; y_1, \ldots, y_{m_k})} \prod_{l=1}^{m_k} p_j(y_l)\,\mu(dy_l)$$
$$= \sum_{j=1}^M \pi_j\, E^{\vec{\pi}}\Bigg[1_A\, \frac{L_i(t_k, N_{t_k}; Y_1, \ldots, Y_{N_{t_k}})}{L(t_k, N_{t_k}; Y_1, \ldots, Y_{N_{t_k}})} \,\Bigg|\, \Theta = j\Bigg] = E^{\vec{\pi}}\Bigg[1_A\, \frac{\pi_i e^{-\lambda_i t_k} \prod_{l=1}^{N_{t_k}} \lambda_i\, p_i(Y_l)}{\sum_{j=1}^M \pi_j e^{-\lambda_j t_k} \prod_{l=1}^{N_{t_k}} \lambda_j\, p_j(Y_l)}\Bigg]. \quad \square$$
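For a simple Poisson process (identical mark densities), the mark factors cancel and (11) reduces to $\Pi_i(t) \propto \pi_i e^{-\lambda_i t}\lambda_i^{N_t}$, which can be cross-checked against a direct application of Bayes' rule with the Poisson probability mass function; the parameter values below are illustrative.

```python
import math

def posterior_formula(pi, lam, t, n):
    """Formula (11) with marks suppressed: weight_i = pi_i e^{-lam_i t} lam_i^n."""
    w = [p * math.exp(-l * t) * l ** n for p, l in zip(pi, lam)]
    s = sum(w)
    return [v / s for v in w]

def posterior_bayes(pi, lam, t, n):
    """Direct Bayes rule with the Poisson(lam_i t) pmf of N_t = n; the common
    factor t^n / n! cancels after normalization."""
    w = [p * math.exp(-l * t) * (l * t) ** n / math.factorial(n)
         for p, l in zip(pi, lam)]
    s = sum(w)
    return [v / s for v in w]

pi, lam, t, n = [0.2, 0.5, 0.3], [1.0, 2.0, 3.0], 1.5, 4
print(posterior_formula(pi, lam, t, n))
print(posterior_bayes(pi, lam, t, n))   # the two agree
```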
Proof of (15). For every bounded $\mathcal{F}_t$-measurable random variable $Z$, we have
$$E^{\vec{\pi}}[Z\, g(\vec{\Pi}(t+s))] = \sum_{i=1}^M \pi_i\, E_i[Z\, g(\vec{\Pi}(t+s))] = \sum_{i=1}^M \pi_i\, E_i\big[Z\, E_i[g(\vec{\Pi}(t+s)) \mid \mathcal{F}_t]\big]$$
$$= \sum_{i=1}^M P^{\vec{\pi}}\{\Theta = i\}\, E^{\vec{\pi}}\big[Z\, E_i[g(\vec{\Pi}(t+s)) \mid \mathcal{F}_t] \,\big|\, \Theta = i\big] = \sum_{i=1}^M E^{\vec{\pi}}\big[Z\, 1_{\{\Theta=i\}}\, E_i[g(\vec{\Pi}(t+s)) \mid \mathcal{F}_t]\big]$$
$$= \sum_{i=1}^M E^{\vec{\pi}}\big[Z\, \Pi_i(t)\, E_i[g(\vec{\Pi}(t+s)) \mid \mathcal{F}_t]\big] = E^{\vec{\pi}}\Bigg[Z \sum_{i=1}^M \Pi_i(t)\, E_i[g(\vec{\Pi}(t+s)) \mid \mathcal{F}_t]\Bigg]. \quad \square$$
Proof of Proposition 2.1. Let $\tau$ be an $\mathbb{F}$-stopping time taking countably many real values $(t_n)_{n \ge 1}$. Then by the monotone convergence theorem,
$$E^{\vec{\pi}}\Bigg[\tau + \sum_{i=1}^M \sum_{j=1}^M a_{ij}\cdot 1_{\{d=i,\,\Theta=j\}}\Bigg] = \sum_n E^{\vec{\pi}}\, 1_{\{\tau = t_n\}}\Bigg[t_n + \sum_{i=1}^M \sum_{j=1}^M a_{ij}\cdot 1_{\{d=i,\,\Theta=j\}}\Bigg]$$
$$= \sum_n E^{\vec{\pi}}\, 1_{\{\tau = t_n\}}\Bigg[t_n + \sum_{i=1}^M \sum_{j=1}^M a_{ij}\cdot 1_{\{d=i\}}\, P^{\vec{\pi}}\{\Theta = j \mid \mathcal{F}_{t_n}\}\Bigg]$$
$$= \sum_n E^{\vec{\pi}}\, 1_{\{\tau = t_n\}}\Bigg[t_n + \sum_{i=1}^M \sum_{j=1}^M a_{ij}\cdot 1_{\{d=i\}}\, \Pi_j(t_n)\Bigg] = E^{\vec{\pi}}\Bigg[\tau + \sum_{i=1}^M \sum_{j=1}^M a_{ij}\cdot 1_{\{d=i\}}\, \Pi_j(\tau)\Bigg]. \quad (A1)$$
Next take an arbitrary $\mathbb{F}$-stopping time $\tau$ with finite expectation $E^{\vec{\pi}}\tau$. There is a decreasing sequence $(\tau_k)_{k \ge 1}$ of $\mathbb{F}$-stopping times converging to $\tau$ such that (i) every $\tau_k$, $k \ge 1$, takes countably many values, and (ii) $E^{\vec{\pi}}\tau_1$ is finite. (For example, if $w_k(t) := j 2^{-k}$ whenever $t \in [(j-1)2^{-k}, j 2^{-k})$ for some $j \ge 1$, then $\tau_k := w_k(\tau)$, $k \ge 1$, are $\mathbb{F}$-stopping times taking countably many real values, and $\tau \le \tau_k \le \tau + 2^{-k}$ for every $k \ge 1$.) Therefore, $d \in \mathcal{F}_\tau \subseteq \mathcal{F}_{\tau_k}$ for every $k \ge 1$. Then the right-continuity of the sample paths of the process $\Pi_i$ in (8),
the dominated convergence theorem, and (A1) lead to
$$E^{\vec{\pi}}\Bigg[\tau + \sum_{i=1}^M \sum_{j=1}^M a_{ij}\cdot 1_{\{d=i,\,\Theta=j\}}\Bigg] = \lim_{k \to \infty} E^{\vec{\pi}}\Bigg[\tau_k + \sum_{i=1}^M \sum_{j=1}^M a_{ij}\cdot 1_{\{d=i,\,\Theta=j\}}\Bigg]$$
$$= \lim_{k \to \infty} E^{\vec{\pi}}\Bigg[\tau_k + \sum_{i=1}^M \sum_{j=1}^M a_{ij}\cdot 1_{\{d=i\}}\, \Pi_j(\tau_k)\Bigg] = E^{\vec{\pi}}\Bigg[\tau + \sum_{i=1}^M \sum_{j=1}^M a_{ij}\cdot 1_{\{d=i\}}\, \Pi_j(\tau)\Bigg].$$
The last expectation is minimized if we take $d(\tau) \in \arg\min_{i \in I} \sum_{j=1}^M a_{ij}\cdot \Pi_j(\tau)$. Hence,
$$\inf_{(\tau,d) \in \mathcal{A}} R(\vec{\pi}; \tau, d) \ge \inf_{\tau \in \mathbb{F}} E^{\vec{\pi}}\Bigg[\tau + \min_{i \in I} \sum_{j=1}^M a_{ij}\cdot \Pi_j(\tau)\Bigg]$$
by Remark 1. Since $d(\tau)$ is $\mathcal{F}_\tau$-measurable, we also have the reverse inequality. $\square$
The following result will be useful in establishing Lemma 2.2.

Lemma A.1. Suppose $\lambda_1 < \lambda_2 < \cdots < \lambda_M$. Let $E_{(q_1, \ldots, q_M)}$ be as in (17) for some $0 < q_1, \ldots, q_M < 1$. For every fixed $k \in \{1, \ldots, M-1\}$ and arbitrary constant $0 < d_k < 1$, there exists some $\varepsilon_k \in (0, d_k]$ such that, starting at time $t = 0$ in the region
$$\big\{\vec{\pi} \in E \setminus E_{(q_1,\ldots,q_M)} : \pi_j < \varepsilon_k\; \forall j < k \;\text{ and }\; \pi_k \ge d_k\big\},$$
the entrance time of the path $t \mapsto x(t, \vec{\pi})$, $t \ge 0$, into $E_{(q_1,\ldots,q_M)}$ is bounded from above. More precisely, there exists some finite $t_k(d_k, q_k)$ such that
$$x_k(t_k(d_k, q_k), \vec{\pi}) \ge q_k \quad \text{for every } \vec{\pi} \text{ in the region above.}$$
Proof. Let $k=1$ and $d_1\in(0,1)$ be fixed. The mapping $t\mapsto x_1(t,\tilde\pi)$ is increasing, and $\lim_{t\to\infty}x_1(t,\tilde\pi)=1$ on $\{\tilde\pi\in E:\pi_1\ge d_1\}$. Hence the given level $0<q_1<1$ will eventually be exceeded. Then the explicit form of $x(t,\tilde\pi)$ in (14) implies that we can set
$$
t_1(d_1,q_1)=\frac{1}{\lambda_2-\lambda_1}\ln\Big(\frac{1-d_1}{d_1}\cdot\frac{q_1}{1-q_1}\Big).
$$
For $1<k\le M-1$, let $d_k\in(0,1)$ be a given constant. By (16), the mapping $t\mapsto x_k(t,\tilde\pi)$ is increasing on the region $\{\tilde\pi\in E:\pi_j\le\bar\varepsilon_k\text{ for all }j<k\}$, where
$$
\bar\varepsilon_k:=\min\Big(\frac{(1-q_k)(\lambda_{k+1}-\lambda_k)}{(k-1)(\lambda_{k+1}-\lambda_{k-1})},\ \frac{1-d_k}{k-1}\Big).
$$
Let us fix
$$
\varepsilon_k:=\sup\Bigg\{0\le p_k\le d_k\wedge\bar\varepsilon_k\wedge\frac{\lambda_M-\lambda_k}{\lambda_M-\lambda_1}:\ d_k\,\frac{1-q_k}{q_k}\ \ge\ p_k(k-1)\frac{\bar\varepsilon_k}{1-\bar\varepsilon_k}\Big(\frac{1-p_k}{p_k}\Big)^{(\lambda_k-\lambda_1)/(\lambda_M-\lambda_1)}+\frac{\bar\varepsilon_k}{1-\bar\varepsilon_k}\Big(\frac{1-p_k}{p_k}\Big)^{(\lambda_k-\lambda_{k+1})/(\lambda_M-\lambda_1)}\Bigg\}.
$$
The right-hand side of the inequality above is $0$ at $p_k=0$, and it is increasing on $p_k\in[0,(\lambda_M-\lambda_k)/(\lambda_M-\lambda_1)]$. Therefore, $\varepsilon_k$ is well defined. Moreover, since $\varepsilon_k\le\bar\varepsilon_k$, for a point $\tilde\pi$ in the subset $\{\tilde\pi\in E:\pi_j<\varepsilon_k\text{ for all }j<k\}$ we have by (14) for every $j<k$ that
$$
x_j(t,\tilde\pi)\le\bar\varepsilon_k\quad\text{for every }t\le\frac{1}{\lambda_M-\lambda_1}\ln\Big(\frac{\bar\varepsilon_k}{1-\bar\varepsilon_k}\cdot\frac{1-\varepsilon_k}{\varepsilon_k}\Big).
$$
In other words, $t\mapsto x_k(t,\tilde\pi)$ is increasing on $t\in\big[0,\frac{1}{\lambda_M-\lambda_1}\ln\big(\frac{\bar\varepsilon_k}{1-\bar\varepsilon_k}\cdot\frac{1-\varepsilon_k}{\varepsilon_k}\big)\big]$ for every $\tilde\pi$ in the smaller set $\{\tilde\pi\in E:\pi_j<\varepsilon_k\text{ for all }j<k\}$. Moreover,
$$
d_k\,\frac{1-q_k}{q_k}\ \ge\ \sum_{j<k}\varepsilon_k\,\frac{\bar\varepsilon_k}{1-\bar\varepsilon_k}\Big(\frac{1-\varepsilon_k}{\varepsilon_k}\Big)^{(\lambda_k-\lambda_j)/(\lambda_M-\lambda_1)}+\sum_{j>k}\pi_j\,\frac{\bar\varepsilon_k}{1-\bar\varepsilon_k}\Big(\frac{1-\varepsilon_k}{\varepsilon_k}\Big)^{(\lambda_k-\lambda_j)/(\lambda_M-\lambda_1)}.
$$
For every $\tilde\pi\in E$ such that $\pi_k\ge d_k$ and $\pi_j<\varepsilon_k$ for all $j<k$, rearranging this inequality and using (14) give
$$
x_k\Big(\frac{1}{\lambda_M-\lambda_1}\ln\Big(\frac{\bar\varepsilon_k}{1-\bar\varepsilon_k}\cdot\frac{1-\varepsilon_k}{\varepsilon_k}\Big),\ \tilde\pi\Big)\ \ge\ q_k,
$$
which completes the proof for $1<k\le M-1$. □
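The $k=1$ bound admits a quick numerical sanity check against the explicit flow $x_i(t,\tilde\pi)=\pi_ie^{-\lambda_it}/\sum_j\pi_je^{-\lambda_jt}$ of (14). The rates and levels below are illustrative, not from the paper:

```python
import math, random

lam = [1.0, 2.5, 4.0]          # lambda_1 < lambda_2 < lambda_3 (illustrative)

def x(t, pi):
    """Deterministic inter-jump flow x(t, pi) from (14)."""
    w = [p * math.exp(-l * t) for p, l in zip(pi, lam)]
    s = sum(w)
    return [v / s for v in w]

d1, q1 = 0.2, 0.9
# t_1(d_1, q_1) from the proof of Lemma A.1
t1 = math.log(((1 - d1) / d1) * (q1 / (1 - q1))) / (lam[1] - lam[0])

random.seed(0)
for _ in range(1000):
    # random point of the simplex with pi_1 >= d1
    p1 = random.uniform(d1, 1.0)
    r = random.random()
    pi = [p1, (1 - p1) * r, (1 - p1) * (1 - r)]
    assert x(t1, pi)[0] >= q1 - 1e-12   # level q1 is reached by time t1
```

The worst case is $\pi_1=d_1$ with all remaining mass on the slowest-decaying competitor, for which $x_1(t_1,\tilde\pi)=q_1$ exactly.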
Proof of Lemma 2.2. The claim is immediate if $\tilde\pi\in E_{(q_1,\ldots,q_M)}$. To prove it on $E\setminus E_{(q_1,\ldots,q_M)}$, with the notation in Lemma A.1, fix
$$
d_{M-1}=\frac{1-q_M}{M-1},\qquad\text{and}\qquad d_i=\varepsilon_{i+1}\ \text{ for }\ i=M-2,\ldots,1.
$$
Then by Lemma A.1 we have $d_1\le\cdots\le d_{M-1}$, and our choice of $d_{M-1}$ implies that $\{\tilde\pi\in E\setminus E_{(q_1,\ldots,q_M)}:\pi_i<d_i,\ i\le M-1\}=\emptyset$. By the same lemma, for every starting point $\tilde\pi$ at time $t=0$ in the set
$$
\{\tilde\pi\in E\setminus E_{(q_1,\ldots,q_M)}:\pi_i<d_i\ \forall i\le M-2,\ \text{and}\ \pi_{M-1}\ge d_{M-1}\}\subseteq\{\tilde\pi\in E\setminus E_{(q_1,\ldots,q_M)}:\pi_i<d_{M-2}\ \forall i\le M-2,\ \text{and}\ \pi_{M-1}\ge d_{M-1}\},
$$
there exists $t_{M-1}(d_{M-1},q_{M-1})$ such that the exit time of the path $t\mapsto x(t,\tilde\pi)$ from $E\setminus E_{(q_1,\ldots,q_M)}$ is bounded from above by $t_{M-1}(d_{M-1},q_{M-1})$. Then the last two statements imply that the same bound holds on $\{\tilde\pi\in E\setminus E_{(q_1,\ldots,q_M)}:\pi_i<d_i\ \text{for}\ i\le M-2\}$.
Now assume that for some $1\le n<M-1$ and for every $\tilde\pi$ in the set
$$
\{\tilde\pi\in E\setminus E_{(q_1,\ldots,q_M)}:\pi_i<d_i\ \text{for}\ i\le n\},
$$
we have $\inf\{t\ge0:x(t,\tilde\pi)\in E_{(q_1,\ldots,q_M)}\}\le\max_{k>n}t_k(d_k,q_k)$. Again by Lemma A.1 there exists $t_n(d_n,q_n)<\infty$ as the upper bound on the hitting time to $E_{(q_1,\ldots,q_M)}$ for all the points in $\{\tilde\pi\in E\setminus E_{(q_1,\ldots,q_M)}:\pi_i<d_i\ \text{for}\ i\le n-1,\ \text{and}\ \pi_n\ge d_n\}$. Hence on $\{\tilde\pi\in E\setminus E_{(q_1,\ldots,q_M)}:\pi_i<d_i\ \text{for}\ i\le n-1\}$ the new upper bound is $\max_{k>n-1}t_k(d_k,q_k)$. By induction, it follows that $s(q_1,\ldots,q_M)$ can be taken as $\max_{k\ge1}t_k(d_k,q_k)$. □
A.1 Derivation of the $(\mathbb F,P)$-infinitesimal generator $\mathcal A$ of the process $\Pi$

Let $f:\mathbb R^M\mapsto\mathbb R$ be a continuously differentiable function. On the event $\{\sigma_n\le t<\sigma_{n+1}\}$ we have
$$
f(\Pi(t))=f(\Pi(0))+\big[f(\Pi(\sigma_1))-f(\Pi(\sigma_1-))\big]+\big[f(\Pi(\sigma_1-))-f(\Pi(0))\big]
$$
$$
+\big[f(\Pi(\sigma_2))-f(\Pi(\sigma_2-))\big]+\big[f(\Pi(\sigma_2-))-f(\Pi(\sigma_1))\big]
$$
$$
\vdots
$$
$$
+\big[f(\Pi(\sigma_n))-f(\Pi(\sigma_n-))\big]+\big[f(\Pi(\sigma_n-))-f(\Pi(\sigma_{n-1}))\big]+f(\Pi(t))-f(\Pi(\sigma_n)).
$$
The point process $p(A\times(0,t]):=\sum_{k=1}^{\infty}1_A(Y_k)1_{\{\sigma_k\le t\}}$ ($A\in\mathcal B(\mathbb R^d)$, $t\ge0$) admits the $(\mathbb F,P)$-compensator $\tilde p(A\times(0,t]):=\sum_{i=1}^{M}\int_0^t\int_A\lambda_ip_i(y)\Pi_i(s-)\,m(dy)\,ds$; namely, $q(A\times(0,t]):=p(A\times(0,t])-\tilde p(A\times(0,t])$, $t\ge0$, is an $(\mathbb F,P)$-martingale for every $A\in\mathcal B(\mathbb R^d)$. Then by (13), $\sum_{k=1}^{n}\big[f(\Pi(\sigma_k))-f(\Pi(\sigma_k-))\big]$ equals
$$
\sum_{k=1}^{n}\Bigg[f\Bigg(\frac{\lambda_1p_1(Y_k)\Pi_1(\sigma_k-)}{\sum_{j=1}^{M}\lambda_jp_j(Y_k)\Pi_j(\sigma_k-)},\ldots,\frac{\lambda_Mp_M(Y_k)\Pi_M(\sigma_k-)}{\sum_{j=1}^{M}\lambda_jp_j(Y_k)\Pi_j(\sigma_k-)}\Bigg)-f(\Pi(\sigma_k-))\Bigg]
$$
$$
=\int_0^t\int_{\mathbb R^d}\Bigg[f\Bigg(\frac{\lambda_1p_1(y)\Pi_1(s-)}{\sum_{j=1}^{M}\lambda_jp_j(y)\Pi_j(s-)},\ldots,\frac{\lambda_Mp_M(y)\Pi_M(s-)}{\sum_{j=1}^{M}\lambda_jp_j(y)\Pi_j(s-)}\Bigg)-f(\Pi(s-))\Bigg]p(dy\times ds)
$$
$$
=\int_0^t\int_{\mathbb R^d}[\ldots]\,q(dy\times ds)+\int_0^t\int_{\mathbb R^d}[\ldots]\,\tilde p(dy\times ds)
$$
$$
=\text{a local martingale}+\int_0^t\int_{\mathbb R^d}\Bigg[f\Bigg(\frac{\lambda_1p_1(y)\Pi_1(s-)}{\sum_{j=1}^{M}\lambda_jp_j(y)\Pi_j(s-)},\ldots,\frac{\lambda_Mp_M(y)\Pi_M(s-)}{\sum_{j=1}^{M}\lambda_jp_j(y)\Pi_j(s-)}\Bigg)-f(\Pi(s-))\Bigg]\Bigg(\sum_{i=1}^{M}\lambda_ip_i(y)\Pi_i(s-)\Bigg)m(dy)\,ds.
$$
On the other hand, for every $1\le k\le n+1$, $\sigma_{k-1}\le t<\sigma_k$, and $i=1,\ldots,M$ we have
$$
\Pi_i(t)-\Pi_i(\sigma_{k-1})=x_i\big(t-\sigma_{k-1},\Pi(\sigma_{k-1})\big)-x_i\big(0,\Pi(\sigma_{k-1})\big)
$$
$$
=\int_0^{t-\sigma_{k-1}}x_i\big(u,\Pi(\sigma_{k-1})\big)\Big(-\lambda_i+\sum_{j=1}^{M}\lambda_jx_j\big(u,\Pi(\sigma_{k-1})\big)\Big)du
$$
$$
=\int_0^{t-\sigma_{k-1}}\Pi_i(\sigma_{k-1}+u)\Big(-\lambda_i+\sum_{j=1}^{M}\lambda_j\Pi_j(\sigma_{k-1}+u)\Big)du
=\int_{\sigma_{k-1}}^{t}\Pi_i(u)\Big(-\lambda_i+\sum_{j=1}^{M}\lambda_j\Pi_j(u)\Big)du;
$$
hence, $d\Pi_i(t)=\Pi_i(t)\big(-\lambda_i+\sum_{j=1}^{M}\lambda_j\Pi_j(t)\big)\,dt$, and the chain rule gives that
$$
f(\Pi(\sigma_k-))-f(\Pi(\sigma_{k-1}))=\int_{\sigma_{k-1}}^{\sigma_k}\sum_{i=1}^{M}\Pi_i(u)\Big(-\lambda_i+\sum_{j=1}^{M}\lambda_j\Pi_j(u)\Big)\Big(\frac{\partial f}{\partial\pi_i}\Big)(\Pi(u))\,du,
$$
and
$$
f(\Pi(t))=f(\Pi(0))+\sum_{k=1}^{n}\big[f(\Pi(\sigma_k))-f(\Pi(\sigma_k-))\big]+\sum_{k=1}^{n}\big[f(\Pi(\sigma_k-))-f(\Pi(\sigma_{k-1}))\big]+f(\Pi(t))-f(\Pi(\sigma_n))
$$
equals a local martingale plus $\int_0^t(\mathcal Af)(\Pi_s)\,ds$, where $\mathcal Af(\tilde\pi)$ is defined by (19).
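The inter-jump dynamics $d\Pi_i(t)=\Pi_i(t)(-\lambda_i+\sum_j\lambda_j\Pi_j(t))\,dt$ can be cross-checked against the explicit flow (14) by finite differences. A small illustrative script (rates are ours, not from the paper):

```python
import math

lam = [1.0, 2.0, 3.5]  # illustrative rates

def x(t, pi):
    """Explicit flow (14): x_i(t, pi) = pi_i e^{-lam_i t} / sum_j pi_j e^{-lam_j t}."""
    w = [p * math.exp(-l * t) for p, l in zip(pi, lam)]
    s = sum(w)
    return [v / s for v in w]

def drift(p):
    """Right-hand side pi_i * (-lambda_i + sum_j lambda_j pi_j) of the ODE."""
    mix = sum(l * q for l, q in zip(lam, p))
    return [q * (-l + mix) for q, l in zip(p, lam)]

pi0, t, h = [0.5, 0.3, 0.2], 0.8, 1e-6
num = [(a - b) / (2 * h) for a, b in zip(x(t + h, pi0), x(t - h, pi0))]
ana = drift(x(t, pi0))
assert max(abs(a - b) for a, b in zip(num, ana)) < 1e-6  # ODE matches the flow
```

Note that the drift components sum to zero, so the flow stays on the probability simplex, as it must.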
Proof of Proposition 3.3. Monotonicity of the operators $Jw$ and $J_0w$, and boundedness of $J_0w(\cdot)$ from above by $h(\cdot)$, follow from (23) and (24). To check the concavity, let $w(\cdot)$ be a concave function on $E$. Then $w(\tilde\pi)=\inf_{k\in K}b_0^k+\sum_{j=1}^{M}b_j^k\pi_j$ for some index set $K$ and some constants $b_j^k$, $k\in K$, $j\in I\cup\{0\}$. Then $\sum_{i=1}^{M}\pi_ie^{-\lambda_iu}\lambda_i\,(S_iw)(x(u,\tilde\pi))$ equals
$$
\sum_{i=1}^{M}\lambda_i\pi_ie^{-\lambda_iu}\int_{\mathbb R^d}\inf_{k\in K}\Bigg[b_0^k+\frac{\sum_{j=1}^{M}b_j^k\lambda_jx_j(u,\tilde\pi)p_j(y)}{\sum_{j=1}^{M}\lambda_jx_j(u,\tilde\pi)p_j(y)}\Bigg]p_i(y)\,m(dy)
$$
$$
=\int_{\mathbb R^d}\inf_{k\in K}\Bigg[b_0^k\sum_{i=1}^{M}\lambda_i\pi_ie^{-\lambda_iu}p_i(y)+\sum_{i=1}^{M}b_i^k\lambda_i\pi_ie^{-\lambda_iu}p_i(y)\Bigg]m(dy),
$$
where the last equality follows from the explicit form of $x(u,\tilde\pi)$ given in (14). Hence, the integrand and the integral in (24) are concave functions of $\tilde\pi$. Similarly, using (14) and (10) we have $\big(\sum_{i=1}^{M}\pi_ie^{-\lambda_it}\big)h(x(t,\tilde\pi))=\inf_{k\in I}\sum_{j=1}^{M}a_{k,j}\pi_je^{-\lambda_jt}$, and the second term in (24) is concave. Being the sum of two concave functions, $\tilde\pi\mapsto Jw(t,\tilde\pi)$ is also concave. Finally, since $\tilde\pi\mapsto J_0w(\tilde\pi)$ is the infimum of concave functions, it is concave.
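The concavity of the second term rests on it being a minimum of functions linear in $\tilde\pi$. This can be spot-checked numerically via midpoint concavity; the costs and rates below are illustrative placeholders:

```python
import math, random

lam = [1.0, 2.0, 3.0]
a = [[0.0, 2.0, 3.0],   # terminal-cost matrix (a_ij) with zero diagonal; illustrative
     [1.5, 0.0, 2.5],
     [2.0, 1.0, 0.0]]

def g(t, pi):
    """(sum_i pi_i e^{-lam_i t}) * h(x(t, pi)), written as min of linear maps of pi."""
    return min(sum(a[k][j] * pi[j] * math.exp(-lam[j] * t) for j in range(3))
               for k in range(3))

def rand_simplex(rng):
    u = sorted([0.0, rng.random(), rng.random(), 1.0])
    return [u[i + 1] - u[i] for i in range(3)]

rng = random.Random(1)
for _ in range(500):
    p, q, t = rand_simplex(rng), rand_simplex(rng), rng.random()
    mid = [(x + y) / 2 for x, y in zip(p, q)]
    # midpoint concavity: g(mid) >= (g(p) + g(q)) / 2
    assert g(t, mid) >= (g(t, p) + g(t, q)) / 2 - 1e-12
```

A pointwise minimum of linear (hence concave) maps is concave, which is exactly what the check exercises.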
Next, we take $w:E\mapsto\mathbb R_+$ bounded and continuous, and we verify the continuity of the mappings $(t,\tilde\pi)\mapsto Jw(t,\tilde\pi)$ and $\tilde\pi\mapsto J_0w(\tilde\pi)$. To show that $(t,\tilde\pi)\mapsto Jw(t,\tilde\pi)$ of (24) is continuous, we first note that the mappings
$$
(t,\tilde\pi)\mapsto\int_0^t\sum_{i=1}^{M}\pi_ie^{-\lambda_iu}\,du\qquad\text{and}\qquad(t,\tilde\pi)\mapsto\Big(\sum_{i=1}^{M}\pi_ie^{-\lambda_it}\Big)h(x(t,\tilde\pi))
$$
are jointly continuous. For a bounded and continuous function $w(\cdot)$ on $E$, the mapping $\tilde\pi\mapsto(S_iw)(\tilde\pi)$ is also bounded and continuous by the bounded convergence theorem. Let
$(t^{(k)},\tilde\pi^{(k)})_{k\ge1}$ be a sequence in $\mathbb R_+\times E$ converging to $(t,\tilde\pi)$. Then we have
$$
\Bigg|\int_0^t\sum_{i=1}^{M}e^{-\lambda_iu}\lambda_i\pi_i\,S_iw(x(u,\tilde\pi))\,du-\int_0^{t^{(k)}}\sum_{i=1}^{M}e^{-\lambda_iu}\lambda_i\pi_i^{(k)}S_iw(x(u,\tilde\pi^{(k)}))\,du\Bigg|
$$
$$
\le\Bigg|\int_0^t\sum_{i=1}^{M}e^{-\lambda_iu}\lambda_i\pi_i\,S_iw(x(u,\tilde\pi))\,du-\int_0^{t^{(k)}}\sum_{i=1}^{M}e^{-\lambda_iu}\lambda_i\pi_i\,S_iw(x(u,\tilde\pi))\,du\Bigg|
$$
$$
+\Bigg|\int_0^{t^{(k)}}\sum_{i=1}^{M}e^{-\lambda_iu}\lambda_i\pi_i\,S_iw(x(u,\tilde\pi))\,du-\int_0^{t^{(k)}}\sum_{i=1}^{M}e^{-\lambda_iu}\lambda_i\pi_i^{(k)}S_iw(x(u,\tilde\pi^{(k)}))\,du\Bigg|
$$
$$
\le\lambda_M\|w\|\cdot|t-t^{(k)}|+\int_0^{t^{(k)}}\sum_{i=1}^{M}e^{-\lambda_iu}\lambda_i\big|\pi_i\,S_iw(x(u,\tilde\pi))-\pi_i^{(k)}S_iw(x(u,\tilde\pi^{(k)}))\big|\,du
$$
$$
\le\lambda_M\|w\|\cdot|t-t^{(k)}|+\lambda_M\int_0^{\infty}e^{-\lambda_1u}\sum_{i=1}^{M}\big|\pi_i\,S_iw(x(u,\tilde\pi))-\pi_i^{(k)}S_iw(x(u,\tilde\pi^{(k)}))\big|\,du,
$$
where $\|w\|:=\sup_{\tilde\pi\in E}|w(\tilde\pi)|<\infty$. Letting $k\to\infty$, the first term of the last line above goes to $0$. The second term also goes to $0$ by the continuity of $\tilde\pi\mapsto S_iw(x(u,\tilde\pi))$ and the dominated convergence theorem. Therefore, $Jw(t,\tilde\pi)$ is jointly continuous in $(t,\tilde\pi)$.
To see the continuity of $\tilde\pi\mapsto J_0w(\tilde\pi)$, let us define the "truncated" operators
$$
J_0^kw(\tilde\pi):=\inf_{t\in[0,k]}Jw(t,\tilde\pi),\qquad\tilde\pi\in E,\ k\ge1,
$$
on bounded functions $w:E\mapsto\mathbb R$. Since $(t,\tilde\pi)\mapsto Jw(t,\tilde\pi)$ is uniformly continuous on $[0,k]\times E$, the mapping $\tilde\pi\mapsto J_0^kw(\tilde\pi)$ is also continuous on $E$. Also note that
$$
Jw(t,\tilde\pi)=Jw(t\wedge k,\tilde\pi)+\int_{t\wedge k}^{t}\sum_{i=1}^{M}e^{-\lambda_iu}\pi_i\big[1+\lambda_iS_iw(x(u,\tilde\pi))\big]du+\sum_{i=1}^{M}e^{-\lambda_it}\pi_i\,h(x(t,\tilde\pi))-\sum_{i=1}^{M}e^{-\lambda_i(t\wedge k)}\pi_i\,h(x(t\wedge k,\tilde\pi))
$$
$$
\ge Jw(t\wedge k,\tilde\pi)-1_{\{t>k\}}\sum_{i=1}^{M}e^{-\lambda_ik}\pi_i\,h(x(k,\tilde\pi))
\ \ge\ Jw(t\wedge k,\tilde\pi)-\Big(\sum_{i,j=1}^{M}a_{ij}\Big)\sum_{i=1}^{M}e^{-\lambda_ik}\pi_i\ \ge\ Jw(t\wedge k,\tilde\pi)-\Big(\sum_{i,j=1}^{M}a_{ij}\Big)e^{-\lambda_1k},
$$
since $0\le w(\cdot)$ and $0\le h(\cdot)\le\sum_{i,j=1}^{M}a_{ij}$. Taking the infimum over $t\ge0$ of both sides gives
$$
J_0^kw(\tilde\pi)\ \ge\ J_0w(\tilde\pi)\ \ge\ J_0^kw(\tilde\pi)-\Big(\sum_{i,j=1}^{M}a_{ij}\Big)e^{-\lambda_1k}.
$$
Hence, $J_0^kw(\tilde\pi)\to J_0w(\tilde\pi)$ as $k\to\infty$ uniformly in $\tilde\pi\in E$, and $\tilde\pi\mapsto J_0w(\tilde\pi)$ is continuous on the compact set $E$. □
Proof of Proposition 3.5. Here, the dependence of $r_n^{\varepsilon/2}$ on $\tilde\pi$ is omitted for typographical reasons. Let us first establish the inequality
$$
E_{\tilde\pi}\big[\tau\wedge\sigma_n+h(\Pi(\tau\wedge\sigma_n))\big]\ \ge\ v_n(\tilde\pi),\qquad\tau\in\mathbb F,\ \tilde\pi\in E,\tag{A2}
$$
by proving inductively on $k=1,\ldots,n+1$ that
$$
E_{\tilde\pi}\big[\tau\wedge\sigma_n+h(\Pi(\tau\wedge\sigma_n))\big]\ \ge\ E_{\tilde\pi}\big[\tau\wedge\sigma_{n-k+1}+1_{\{\tau\ge\sigma_{n-k+1}\}}v_{k-1}(\Pi(\sigma_{n-k+1}))+1_{\{\tau<\sigma_{n-k+1}\}}h(\Pi(\tau))\big]=:\mathrm{RHS}_{k-1}.\tag{A3}
$$
Note that (A2) will follow from (A3) when we set $k=n+1$.

For $k=1$, (A3) is immediate since $v_0(\cdot)\equiv h(\cdot)$. Assume that it holds for some $1\le k<n+1$ and prove it for $k+1$. Note that $\mathrm{RHS}_{k-1}$ defined in (A3) can be decomposed as
$$
\mathrm{RHS}_{k-1}=\mathrm{RHS}_{k-1}^{(1)}+\mathrm{RHS}_{k-1}^{(2)}\tag{A4}
$$
where $\mathrm{RHS}_{k-1}^{(1)}:=E_{\tilde\pi}\big[\tau\wedge\sigma_{n-k}+1_{\{\tau<\sigma_{n-k}\}}h(\Pi(\tau))\big]$ and $\mathrm{RHS}_{k-1}^{(2)}$ is defined by
$$
E_{\tilde\pi}\big[1_{\{\tau\ge\sigma_{n-k}\}}\big(\tau\wedge\sigma_{n-k+1}-\sigma_{n-k}+1_{\{\tau\ge\sigma_{n-k+1}\}}v_{k-1}(\Pi(\sigma_{n-k+1}))+1_{\{\tau<\sigma_{n-k+1}\}}h(\Pi(\tau))\big)\big].
$$
By Lemma 3.2, there is an $\mathcal F_{\sigma_{n-k}}$-measurable random variable $R_{n-k}$ such that $\tau\wedge\sigma_{n-k+1}=(\sigma_{n-k}+R_{n-k})\wedge\sigma_{n-k+1}$ on $\{\tau\ge\sigma_{n-k}\}$. Therefore, $\mathrm{RHS}_{k-1}^{(2)}$ becomes
$$
E_{\tilde\pi}\big[1_{\{\tau\ge\sigma_{n-k}\}}\big((\sigma_{n-k}+R_{n-k})\wedge\sigma_{n-k+1}-\sigma_{n-k}+1_{\{\sigma_{n-k}+R_{n-k}\ge\sigma_{n-k+1}\}}v_{k-1}(\Pi(\sigma_{n-k+1}))+1_{\{\sigma_{n-k}+R_{n-k}<\sigma_{n-k+1}\}}h(\Pi(\sigma_{n-k}+R_{n-k}))\big)\big]
$$
$$
=E_{\tilde\pi}\Big[1_{\{\tau\ge\sigma_{n-k}\}}E_{\tilde\pi}\big[(r\wedge\sigma_1)\circ\theta_{\sigma_{n-k}}\big|_{r=R_{n-k}}+1_{\{R_{n-k}\ge\sigma_{n-k+1}-\sigma_{n-k}\}}v_{k-1}(\Pi(\sigma_{n-k+1}))+1_{\{R_{n-k}<\sigma_{n-k+1}-\sigma_{n-k}\}}h(\Pi(\sigma_{n-k}+R_{n-k}))\,\big|\,\mathcal F_{\sigma_{n-k}}\big]\Big].
$$
Since $\Pi$ has the strong Markov property and the same jump times $\sigma_k$, $k\ge1$, as the process $X$, the last expression for $\mathrm{RHS}_{k-1}^{(2)}$ can be rewritten as
$$
\mathrm{RHS}_{k-1}^{(2)}=E_{\tilde\pi}\big[1_{\{\tau\ge\sigma_{n-k}\}}f_{n-k}(R_{n-k},\Pi(\sigma_{n-k}))\big],
$$
where $f_{n-k}(r,\tilde\pi):=E_{\tilde\pi}\big[r\wedge\sigma_1+1_{\{r>\sigma_1\}}v_{k-1}(\Pi(\sigma_1))+1_{\{r\le\sigma_1\}}h(\Pi(r))\big]\equiv Jv_{k-1}(r,\tilde\pi)$. Thus, $f_{n-k}(r,\tilde\pi)\ge J_0v_{k-1}(\tilde\pi)=v_k(\tilde\pi)$, and $\mathrm{RHS}_{k-1}^{(2)}\ge E_{\tilde\pi}\big[1_{\{\tau\ge\sigma_{n-k}\}}v_k(\Pi(\sigma_{n-k}))\big]$. Using this inequality with (A3) and (A4) we get
$$
E_{\tilde\pi}\big[\tau\wedge\sigma_n+h(\Pi(\tau\wedge\sigma_n))\big]\ \ge\ \mathrm{RHS}_{k-1}\ \ge\ E_{\tilde\pi}\big[\tau\wedge\sigma_{n-k}+1_{\{\tau<\sigma_{n-k}\}}h(\Pi(\tau))+1_{\{\tau\ge\sigma_{n-k}\}}v_k(\Pi(\sigma_{n-k}))\big]=\mathrm{RHS}_k.
$$
This proves (A3) for $k+1$ instead of $k$, and (A2) follows after induction for $k=n+1$. Taking the infimum on the left-hand side of (A2) over all $\mathbb F$-stopping times $\tau$ gives $V_n(\cdot)\ge v_n(\cdot)$.

To prove the reverse inequality $V_n(\cdot)\le v_n(\cdot)$, it is enough to show (28) since by construction $S_n^{\varepsilon}$ is an $\mathbb F$-stopping time. We will prove (28) also inductively. For $n=1$, the left-hand side of (28) reduces to
$$
E_{\tilde\pi}\big[r_0^{\varepsilon}\wedge\sigma_1+h\big(\Pi(r_0^{\varepsilon}\wedge\sigma_1)\big)\big]=Jv_0(r_0^{\varepsilon},\tilde\pi)\ \le\ J_0v_0(\tilde\pi)+\varepsilon,
$$
where the inequality follows from Remark 2, and the inequality in (28) holds with $n=1$.
Suppose now that (28) holds for some $n\ge1$. By definition, $S_{n+1}^{\varepsilon}\wedge\sigma_1=r_n^{\varepsilon/2}\wedge\sigma_1$, and
$$
E_{\tilde\pi}\big[S_{n+1}^{\varepsilon}+h\big(\Pi(S_{n+1}^{\varepsilon})\big)\big]=E_{\tilde\pi}\big[S_{n+1}^{\varepsilon}\wedge\sigma_1+(S_{n+1}^{\varepsilon}-\sigma_1)1_{\{S_{n+1}^{\varepsilon}\ge\sigma_1\}}+h\big(\Pi(S_{n+1}^{\varepsilon})\big)\big]
$$
$$
=E_{\tilde\pi}\Big[r_n^{\varepsilon/2}\wedge\sigma_1+\Big(S_n^{\varepsilon/2}\circ\theta_{\sigma_1}+h\big(\Pi(\sigma_1+S_n^{\varepsilon/2}\circ\theta_{\sigma_1})\big)\Big)1_{\{r_n^{\varepsilon/2}\ge\sigma_1\}}+1_{\{r_n^{\varepsilon/2}<\sigma_1\}}h\big(\Pi(r_n^{\varepsilon/2})\big)\Big]
$$
$$
=E_{\tilde\pi}\Big[r_n^{\varepsilon/2}\wedge\sigma_1+1_{\{r_n^{\varepsilon/2}<\sigma_1\}}h\big(\Pi(r_n^{\varepsilon/2})\big)+1_{\{r_n^{\varepsilon/2}\ge\sigma_1\}}E_{\tilde\pi}\big[\big(S_n^{\varepsilon/2}+h(\Pi(S_n^{\varepsilon/2}))\big)\circ\theta_{\sigma_1}\,\big|\,\mathcal F_{\sigma_1}\big]\Big]
$$
$$
\le E_{\tilde\pi}\Big[r_n^{\varepsilon/2}\wedge\sigma_1+1_{\{r_n^{\varepsilon/2}<\sigma_1\}}h\big(\Pi(r_n^{\varepsilon/2})\big)+1_{\{r_n^{\varepsilon/2}\ge\sigma_1\}}\Big(v_n(\Pi(\sigma_1))+\frac{\varepsilon}{2}\Big)\Big]=Jv_n\big(r_n^{\varepsilon/2},\tilde\pi\big)+\frac{\varepsilon}{2},
$$
where the inequality follows from the induction hypothesis and the strong Markov property. Hence, $E_{\tilde\pi}\big[S_{n+1}^{\varepsilon}+h(\Pi(S_{n+1}^{\varepsilon}))\big]\le v_{n+1}(\tilde\pi)+\varepsilon$ by Remark 2, and (28) holds for $n+1$. □
Proof of Proposition 3.7. Note that the sequence $\{v_n\}_{n\ge1}$ is decreasing. Therefore,
$$
V(\tilde\pi)=v(\tilde\pi)=\inf_{n\ge1}v_n(\tilde\pi)=\inf_{n\ge1}J_0v_{n-1}(\tilde\pi)=\inf_{n\ge1}\inf_{t\ge0}Jv_{n-1}(t,\tilde\pi)=\inf_{t\ge0}\inf_{n\ge1}Jv_{n-1}(t,\tilde\pi)
$$
$$
=\inf_{t\ge0}\inf_{n\ge1}\Bigg[\int_0^t\sum_{i=1}^{M}\pi_ie^{-\lambda_iu}\big[1+\lambda_iS_iv_{n-1}(x(u,\tilde\pi))\big]du+\sum_{i=1}^{M}e^{-\lambda_it}\pi_i\,h(x(t,\tilde\pi))\Bigg]
$$
$$
=\inf_{t\ge0}\Bigg[\int_0^t\sum_{i=1}^{M}\pi_ie^{-\lambda_iu}\big[1+\lambda_iS_iv(x(u,\tilde\pi))\big]du+\sum_{i=1}^{M}e^{-\lambda_it}\pi_i\,h(x(t,\tilde\pi))\Bigg]=J_0v(\tilde\pi)=J_0V(\tilde\pi),
$$
where the seventh equality follows from the bounded convergence theorem since $0\le v(\cdot)\le v_n(\cdot)\le h(\cdot)\le\sum_{i,j}a_{ij}$. Thus, $V$ satisfies $V=J_0V$.

Next, let $U(\cdot)\le h(\cdot)$ be a bounded solution of $U=J_0U$. Proposition 3.3 implies that $U=J_0U\le J_0h=v_1$. Suppose that $U\le v_n$ for some $n\ge0$. Then $U=J_0U\le J_0v_n=v_{n+1}$, and by induction we have $U\le v_n$ for all $n\ge1$. This implies $U\le\lim_{n\to\infty}v_n=V$. □
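The successive-approximation scheme $v_{n+1}=J_0v_n$ behind this proof can be sketched numerically. The following is a crude illustrative discretization, not the paper's algorithm: two hypotheses, binary marks, a $\pi_1$-grid with linear interpolation, a truncated time horizon, and the trapezoid rule for the integral in (24); all parameters are assumptions of ours.

```python
import math

lam = [1.0, 2.0]                      # illustrative rates
a = [[0.0, 5.0], [5.0, 0.0]]          # illustrative misidentification costs
pmf = [[0.8, 0.2], [0.3, 0.7]]        # mark pmfs on {0, 1} under each hypothesis

def h(p):                             # terminal cost h(pi) = min_i sum_j a_ij pi_j
    return min(a[0][1] * p[1], a[1][0] * p[0])

def x(t, p):                          # inter-jump flow (14)
    w = [p[i] * math.exp(-lam[i] * t) for i in range(2)]
    s = w[0] + w[1]
    return [w[0] / s, w[1] / s]

def jump(p, y):                       # posterior after a jump with mark y
    w = [lam[i] * p[i] * pmf[i][y] for i in range(2)]
    s = w[0] + w[1]
    return [w[0] / s, w[1] / s]

GRID = [i / 200 for i in range(201)]  # grid in pi_1

def interp(vals, p):                  # piecewise-linear interpolation at pi_1 = p[0]
    z = min(p[0], 1.0) * 200
    i = min(int(z), 199)
    return vals[i] + (z - i) * (vals[i + 1] - vals[i])

def S(vals, i, p):                    # (S_i w)(pi): expected w after one jump under i
    return sum(pmf[i][y] * interp(vals, jump(p, y)) for y in (0, 1))

def J0(vals, p, tmax=3.0, n=60):      # J_0 w(pi) = inf_t Jw(t, pi), truncated horizon
    def f(u):
        return sum(p[i] * math.exp(-lam[i] * u) * (1.0 + lam[i] * S(vals, i, x(u, p)))
                   for i in range(2))
    best, acc, dt = h(p), 0.0, tmax / n
    for k in range(1, n + 1):
        acc += dt * (f((k - 1) * dt) + f(k * dt)) / 2.0   # trapezoid rule
        decay = sum(p[i] * math.exp(-lam[i] * k * dt) for i in range(2))
        best = min(best, acc + decay * h(x(k * dt, p)))
    return best

v = [h([p1, 1.0 - p1]) for p1 in GRID]                    # v_0 = h
for _ in range(5):                                        # v_{n+1} = J_0 v_n
    v_new = [J0(v, [p1, 1.0 - p1]) for p1 in GRID]
    assert all(nv <= ov + 1e-9 for nv, ov in zip(v_new, v))  # {v_n} decreases
    assert all(nv >= 0.0 for nv in v_new)                    # 0 <= v_n <= h
    v = v_new
```

The assertions mirror the monotonicity $0\le v\le v_n\le h$ used in the proof; $J_0$ is monotone in its function argument, so the iterates decrease toward the (discretized) value function.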
Proof of Corollary 3.9. For typographical reasons, we will again omit the dependence of $r_n(\tilde\pi)$ on $\tilde\pi$. By Remark 2, $Jv_n(r_n,\tilde\pi)=J_0v_n(\tilde\pi)=J_{r_n}v_n(\tilde\pi)$. Let us first assume that $r_n<\infty$. Taking $t=r_n$ and $w=v_n$ in (30) gives
$$
Jv_n(r_n,\tilde\pi)=J_{r_n}v_n(\tilde\pi)=Jv_n(r_n,\tilde\pi)+\sum_{i=1}^{M}\pi_ie^{-\lambda_ir_n}\big[v_{n+1}(x(r_n,\tilde\pi))-h(x(r_n,\tilde\pi))\big].
$$
Therefore, $v_{n+1}(x(r_n,\tilde\pi))=h(x(r_n,\tilde\pi))$. If $0<t<r_n$, then $Jv_n(t,\tilde\pi)>J_0v_n(\tilde\pi)=J_{r_n}v_n(\tilde\pi)=J_tv_n(\tilde\pi)$. Using (30) once again with $w=v_n$ and $t\in(0,r_n)$ yields
$$
J_0v_n(\tilde\pi)=J_tv_n(\tilde\pi)=Jv_n(t,\tilde\pi)+\sum_{i=1}^{M}\pi_ie^{-\lambda_it}\big[v_{n+1}(x(t,\tilde\pi))-h(x(t,\tilde\pi))\big].
$$
Hence $v_{n+1}(x(t,\tilde\pi))<h(x(t,\tilde\pi))$ for $t\in(0,r_n)$. If $r_n=\infty$, then the same argument as above implies that $v_{n+1}(x(t,\tilde\pi))<h(x(t,\tilde\pi))$ for $t\in[0,\infty)$, and (32) still holds since $\inf\emptyset\equiv\infty$. □
Proof of Proposition 3.10. We will prove (39) inductively. For $n=1$, by Lemma 3.2 there exists a constant $u\in[0,\infty]$ such that $U_{\varepsilon}\wedge\sigma_1=u\wedge\sigma_1$. Then
$$
E_{\tilde\pi}\big[M_{U_{\varepsilon}\wedge\sigma_1}\big]=E_{\tilde\pi}\big[u\wedge\sigma_1+V(\Pi(u\wedge\sigma_1))\big]
=E_{\tilde\pi}\big[u\wedge\sigma_1+1_{\{u\ge\sigma_1\}}V(\Pi(\sigma_1))+1_{\{u<\sigma_1\}}h(\Pi(u))\big]+E_{\tilde\pi}\big[1_{\{u<\sigma_1\}}\big[V(\Pi(u))-h(\Pi(u))\big]\big]
$$
$$
=JV(u,\tilde\pi)+\sum_{i=1}^{M}\pi_ie^{-\lambda_iu}\big[V(x(u,\tilde\pi))-h(x(u,\tilde\pi))\big]=J_uV(\tilde\pi)\tag{A5}
$$
because of the delay equation in (33). Fix any $t\in[0,u)$. The same equation implies that $JV(t,\tilde\pi)=J_tV(\tilde\pi)-\sum_{i=1}^{M}\pi_ie^{-\lambda_it}\big[V(x(t,\tilde\pi))-h(x(t,\tilde\pi))\big]$ is greater than or equal to
$$
J_0V(\tilde\pi)-\sum_{i=1}^{M}\pi_ie^{-\lambda_it}\big[V(x(t,\tilde\pi))-h(x(t,\tilde\pi))\big]=J_0V(\tilde\pi)-E_{\tilde\pi}\big[1_{\{\sigma_1>t\}}\big[V(\Pi(t))-h(\Pi(t))\big]\big].
$$
On $\{\sigma_1>t\}$, we have $U_{\varepsilon}>t$ (otherwise, $U_{\varepsilon}\le t<\sigma_1$ would imply $U_{\varepsilon}=u\le t$, which contradicts our initial choice of $t<u$). Thus, $V(\Pi(t))-h(\Pi(t))<-\varepsilon$ on $\{\sigma_1>t\}$, and
$$
JV(t,\tilde\pi)\ \ge\ J_0V(\tilde\pi)+\varepsilon\,E_{\tilde\pi}\big[1_{\{\sigma_1>t\}}\big]\ \ge\ J_0V(\tilde\pi)+\varepsilon\sum_{i=1}^{M}\pi_ie^{-\lambda_it}\quad\text{for every }t\in[0,u).
$$
Therefore, $J_0V(\tilde\pi)=J_uV(\tilde\pi)$, and (A5) implies $E_{\tilde\pi}[M_{U_{\varepsilon}\wedge\sigma_1}]=J_uV(\tilde\pi)=J_0V(\tilde\pi)=V(\tilde\pi)=E_{\tilde\pi}[M_0]$. This completes the proof of (39) for $n=1$. Now assume that (39) holds for some $n\ge1$. Note that $E_{\tilde\pi}[M_{U_{\varepsilon}\wedge\sigma_{n+1}}]=E_{\tilde\pi}[1_{\{U_{\varepsilon}<\sigma_1\}}M_{U_{\varepsilon}}]+E_{\tilde\pi}[1_{\{U_{\varepsilon}\ge\sigma_1\}}M_{U_{\varepsilon}\wedge\sigma_{n+1}}]$ equals
$$
E_{\tilde\pi}\big[1_{\{U_{\varepsilon}<\sigma_1\}}M_{U_{\varepsilon}}\big]+E_{\tilde\pi}\big[1_{\{U_{\varepsilon}\ge\sigma_1\}}\big\{\sigma_1+U_{\varepsilon}\wedge\sigma_{n+1}-\sigma_1+V(\Pi(U_{\varepsilon}\wedge\sigma_{n+1}))\big\}\big].
$$
Since $U_{\varepsilon}\wedge\sigma_{n+1}=\sigma_1+\big[(U_{\varepsilon}\wedge\sigma_n)\circ\theta_{\sigma_1}\big]$ on the event $\{U_{\varepsilon}\ge\sigma_1\}$, the strong Markov property of $\Pi$ implies that $E_{\tilde\pi}[M_{U_{\varepsilon}\wedge\sigma_{n+1}}]$ equals
$$
E_{\tilde\pi}\big[1_{\{U_{\varepsilon}<\sigma_1\}}M_{U_{\varepsilon}}\big]+E_{\tilde\pi}\big[1_{\{U_{\varepsilon}\ge\sigma_1\}}\sigma_1\big]+E_{\tilde\pi}\big[1_{\{U_{\varepsilon}\ge\sigma_1\}}E^{\Pi(\sigma_1)}\big[U_{\varepsilon}\wedge\sigma_n+V(\Pi(U_{\varepsilon}\wedge\sigma_n))\big]\big].
$$
By the induction hypothesis the inner expectation equals $V(\Pi(\sigma_1))$, and
$$
E_{\tilde\pi}\big[M_{U_{\varepsilon}\wedge\sigma_{n+1}}\big]=E_{\tilde\pi}\big[1_{\{U_{\varepsilon}<\sigma_1\}}M_{U_{\varepsilon}}\big]+E_{\tilde\pi}\big[1_{\{U_{\varepsilon}\ge\sigma_1\}}(\sigma_1+V(\Pi(\sigma_1)))\big]=E_{\tilde\pi}\big[M_{U_{\varepsilon}\wedge\sigma_1}\big]=E_{\tilde\pi}[M_0],
$$
where the last equality follows from the above proof for $n=1$. Hence, $E_{\tilde\pi}[M_{U_{\varepsilon}\wedge\sigma_{n+1}}]=E_{\tilde\pi}[M_0]$, and the proof is complete by induction. □
Proof of Proposition 4.1. By Proposition 3.3 we have $0\le J_0w(\cdot)\le h(\cdot)$ for every bounded and positive function $w(\cdot)$. To prove (43), it is therefore enough to show that $h(\cdot)\le J_0w(\cdot)$ on $\{\tilde\pi\in E:\pi_i\ge p_i^*\}$ for $p_i^*$ defined in (42). Since $w(\cdot)\ge0$, we have $J_0w(\tilde\pi)=\inf_{t\ge0}Jw(t,\tilde\pi)\ge\inf_{t\ge0}\inf_{i\in I}K_i(t,\tilde\pi)$, where
$$
K_i(t,\tilde\pi):=\int_0^t\sum_{j=1}^{M}\pi_je^{-\lambda_ju}\,du+\sum_{j=1}^{M}\pi_je^{-\lambda_jt}\,h_i(x(t,\tilde\pi)),
$$
with the partial derivative
$$
\frac{\partial K_i(t,\tilde\pi)}{\partial t}=\sum_{j=1}^{M}\pi_je^{-\lambda_jt}\Bigg\{1-\sum_{k=1}^{M}(\lambda_j+\lambda_k)a_{ik}x_k(t,\tilde\pi)+\Bigg(\sum_{k=1}^{M}a_{ik}x_k(t,\tilde\pi)\Bigg)\Bigg(\sum_{k=1}^{M}\lambda_kx_k(t,\tilde\pi)\Bigg)\Bigg\}
$$
$$
\ge\sum_{j=1}^{M}\pi_je^{-\lambda_jt}\Big\{1-2\lambda_M\Big(\max_ka_{ik}\Big)\big(1-x_i(t,\tilde\pi)\big)\Big\}.\tag{A6}
$$
Note that if $\lambda_i=\lambda_1$, then $t\mapsto x_i(t,\tilde\pi)$ is non-decreasing. Hence $\partial K_i(t,\tilde\pi)/\partial t\ge0$ for all $t\ge0$ and $\pi_i\ge\bar p_i:=\big[1-1/\big(2\lambda_M\max_ka_{ik}\big)\big]^+$. Hence on $\{\tilde\pi\in E:\pi_i\ge\bar p_i\}$, we get $h(\tilde\pi)\ge J_0w(\tilde\pi)\ge\inf_{t\ge0}\inf_iK_i(t,\tilde\pi)=\inf_i\inf_{t\ge0}K_i(t,\tilde\pi)=\inf_iK_i(0,\tilde\pi)=\inf_ih_i(\tilde\pi)=h(\tilde\pi)$.

Now assume $\lambda_i>\lambda_1$, and let $T_i(\tilde\pi,m):=\inf\{t\ge0:x_i(t,\tilde\pi)\le m\}$ be the first time $t\mapsto x_i(t,\tilde\pi)$ reaches $[0,m]$. For $t\ge T_i(\tilde\pi,\bar p_i)$, the definition of $K_i(t,\tilde\pi)$ implies
$$
K_i(t,\tilde\pi)\ \ge\ \pi_i\int_0^{T_i(\tilde\pi,\bar p_i)}e^{-\lambda_iu}\,du\ \ge\ \frac{\pi_i}{\lambda_i}\Bigg[1-\Bigg(\frac{1-\bar p_i}{\bar p_i}\cdot\frac{\pi_i}{1-\pi_i}\Bigg)^{-\lambda_i/(\lambda_i-\lambda_1)}\Bigg],
$$
where the last inequality follows from the explicit form of $x(t,\tilde\pi)$ given in (14). The last expression above is $0$ at $\pi_i=\bar p_i$, it is increasing on $\pi_i\in[\bar p_i,1)$, and it increases to the limit $1/\lambda_i$ as $\pi_i\to1$. For $\pi_i\ge p_i^*$ in (42), we have
$$
\frac{\pi_i}{\lambda_i}\Bigg[1-\Bigg(\frac{1-\bar p_i}{\bar p_i}\cdot\frac{\pi_i}{1-\pi_i}\Bigg)^{-\lambda_i/(\lambda_i-\lambda_1)}\Bigg]\ \ge\ \Big(\max_ka_{ik}\Big)(1-\pi_i),
$$
and the inequality holds with an equality at $\pi_i=p_i^*$. Hence, we have $K_i(t,\tilde\pi)\ge h_i(\tilde\pi)$ for $t\ge T_i(\tilde\pi,\bar p_i)$ and for $\pi_i\ge p_i^*$. Since $t\mapsto K_i(t,\tilde\pi)$ is increasing on $[0,T_i(\tilde\pi,\bar p_i)]$, we get $h(\tilde\pi)\ge J_0w(\tilde\pi)\ge\inf_i\inf_{t\ge0}K_i(t,\tilde\pi)=\inf_ih_i(\tilde\pi)=h(\tilde\pi)$ on $\{\tilde\pi\in E:\pi_i\ge p_i^*\}$, and (43) follows. Finally, if $a_{ij}>0$ for every $i\ne j$, then (44) follows from (43) since $h(\tilde\pi)=h_i(\tilde\pi)$ on $\{\tilde\pi\in E:\pi_i\ge\max_ka_{ik}/[(\min_{j\ne i}a_{ji})+(\max_ka_{ik})]\}$ for every $i\in I$. □
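The closed-form derivative in (A6) can be checked against a finite difference of $K_i$ itself. The script below is an illustrative verification with made-up rates and costs (zero diagonal, since a correct decision incurs no terminal cost):

```python
import math

lam = [1.0, 2.0, 3.0]
a = [[0.0, 4.0, 1.0], [2.0, 0.0, 3.0], [1.0, 2.0, 0.0]]  # illustrative costs

def x(t, pi):
    w = [p * math.exp(-l * t) for p, l in zip(pi, lam)]
    s = sum(w)
    return [v / s for v in w]

def K(i, t, pi):
    """K_i(t,pi) = int_0^t sum_j pi_j e^{-lam_j u} du + sum_j pi_j e^{-lam_j t} h_i(x(t,pi))."""
    integral = sum(p * (1 - math.exp(-l * t)) / l for p, l in zip(pi, lam))
    d = sum(p * math.exp(-l * t) for p, l in zip(pi, lam))
    hi = sum(a[i][k] * xk for k, xk in enumerate(x(t, pi)))
    return integral + d * hi

def dK(i, t, pi):
    """Closed form of dK_i/dt as in (A6)."""
    xs = x(t, pi)
    s_ax = sum(a[i][k] * xs[k] for k in range(3))
    s_lx = sum(lam[k] * xs[k] for k in range(3))
    return sum(p * math.exp(-lj * t) *
               (1 - sum((lj + lam[k]) * a[i][k] * xs[k] for k in range(3)) + s_ax * s_lx)
               for p, lj in zip(pi, lam))

pi0, t0, hstep = [0.5, 0.3, 0.2], 0.6, 1e-6
for i in range(3):
    num = (K(i, t0 + hstep, pi0) - K(i, t0 - hstep, pi0)) / (2 * hstep)
    assert abs(num - dK(i, t0, pi0)) < 1e-5
```

Algebraically the bracket collapses to $1-\sum_k\lambda_ka_{ik}x_k(t,\tilde\pi)$ after the cross terms cancel, which is what the finite-difference check confirms.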