ORNSTEIN-UHLENBECK PROCESSES: limit theorems and Markov modulation
Michel Mandjes 1,2,3
1 Korteweg-de Vries Institute for Mathematics, University of Amsterdam; 2 CWI, Amsterdam; 3 Eurandom, Eindhoven
Imperial College, London, April 2013
Workshop on Large Deviations & Asymptotic Methods in Finance
OVERVIEW
- Introduction to Ornstein-Uhlenbeck processes, and their relation with infinite-server queues
- Part I: Reflection (joint work with Gang Huang and Peter Spreij)
- Part II: CLT under Markov modulation (joint work with Dave Anderson, Joke Blom, Offer Kella, Peter Spreij, Halldora Thorsdottir, Koen de Turck)
- Part III: Large deviations under Markov modulation (joint work with Joke Blom, Koen de Turck)
OVERVIEW
Hence I'll (slightly) deviate from the abstract I submitted; there I announced I was also going to cover work with Kosinski on multidimensional ruin probabilities.
That doesn't mean that paper is not interesting — see arXiv.
It combines a multidimensional aspect (as in Collamore's work) with non-standard scaling (as in Duffield/O'Connell and Duffie et al.).
PAPER WITH KOSINSKI
- Covering strong correlation structures, e.g. fractional Brownian motion.
- One way is to use a (generalized version of) 'Schilder/Azencott'. This is needed for 'involved' probabilities as arise in queueing: long busy period (M./Mannersalo/Norros/van Uitert, Stoch. Proc. Appl. 2006), downstream queue in tandem (M./van Uitert, Ann. Appl. Probab. 2005), convergence to stationarity (M./Norros/Glynn, Ann. Appl. Probab. 2009). Difficulty: no explicit expression for the rate function of a given path.
PAPER WITH KOSINSKI
- This is not needed for ruin or overflow in a single queue: 'principle of the largest term'. See also the predecessor of this paper for the Gaussian case (Debicki/Kosinski/M./Rolski, Stoch. Proc. Appl., 2011).
- As in Duffield/O'Connell (single dimension): the most likely epoch of ruin determines the large deviations. Non-linear scale functions, so as to cover long-range dependence.
- Paper with Kosinski: extension to multidimensional risk.
ORNSTEIN AND UHLENBECK
INTRODUCTION
- Stochastic differential equation
  dM_t = (α − γM_t)dt + σ dB_t,  M_0 = x > 0,
where α, γ, σ > 0 and B_t is a standard Brownian motion.
- Similarity with the infinite-server queue: there jobs are generated according to a Poisson process of rate λ; they remain in the system an exp(µ) time; they don't "see" each other, so the departure rate is µ multiplied by the number of jobs present.
INTRODUCTION
- The stochastic differential equation
  dM_t = (α − γM_t)dt + σ dB_t,  M_0 = x > 0,
can be solved explicitly, for instance as follows.
- Put F(M_t, t) := M_t e^{γt}. Ito's lemma:
  dF(M_t, t) = e^{γt}(γM_t dt + dM_t) = e^{γt}(α dt + σ dB_t).
- Integrating:
  M_t e^{γt} = M_0 + ∫_0^t α e^{γs} ds + ∫_0^t σ e^{γs} dB_s.
- Hence
  M_t = M_0 e^{−γt} + (α/γ)(1 − e^{−γt}) + σ ∫_0^t e^{−γ(t−s)} dB_s.
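The explicit solution above can be checked numerically; a minimal sketch (parameter values are illustrative, and the Euler discretization is an assumption of the sketch, not part of the talk), comparing an Euler-Maruyama step of the SDE with the closed-form solution driven by the same Brownian increments:

```python
import numpy as np

# OU parameters (illustrative values)
alpha, gamma, sigma = 1.0, 2.0, 0.5
x0, T, n = 3.0, 1.0, 50_000
dt = T / n

rng = np.random.default_rng(0)
dB = rng.normal(0.0, np.sqrt(dt), n)

# Euler-Maruyama discretization of dM_t = (alpha - gamma M_t)dt + sigma dB_t
m_euler = x0
for k in range(n):
    m_euler += (alpha - gamma * m_euler) * dt + sigma * dB[k]

# Closed-form: M_T = x0 e^{-gT} + (a/g)(1 - e^{-gT}) + s * int_0^T e^{-g(T-s)} dB_s,
# with the stochastic integral approximated on the same grid
t_grid = (np.arange(n) + 1) * dt
stoch = np.sum(np.exp(-gamma * (T - t_grid)) * dB)
m_exact = (x0 * np.exp(-gamma * T)
           + (alpha / gamma) * (1 - np.exp(-gamma * T)) + sigma * stoch)

print(m_euler, m_exact)  # the two agree up to discretization error
```

Both discretizations use the same noise, so their difference is of order dt rather than of the size of the fluctuations.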
INTRODUCTION
- From
  M_t = M_0 e^{−γt} + (α/γ)(1 − e^{−γt}) + σ ∫_0^t e^{−γ(t−s)} dB_s
we observe
- mean: EM_t = EM_0 e^{−γt} + (α/γ)(1 − e^{−γt}),
- variance: Var M_t = Var(σ ∫_0^t e^{−γ(t−s)} dB_s) = (σ²/2γ)(1 − e^{−2γt}).
- In fact, M_t has a Normal distribution with this mean and variance.
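The transient mean and variance formulas can be sanity-checked by Monte Carlo; a hedged sketch (illustrative parameters, Euler discretization assumed) comparing empirical moments of simulated paths at time T against the closed forms:

```python
import numpy as np

# OU parameters (illustrative values)
alpha, gamma, sigma = 1.0, 2.0, 0.5
x0, T = 3.0, 1.0
n, paths = 1_000, 20_000
dt = T / n

# Euler-Maruyama over many independent paths, all started at x0
rng = np.random.default_rng(1)
M = np.full(paths, x0)
for _ in range(n):
    M += (alpha - gamma * M) * dt + sigma * np.sqrt(dt) * rng.normal(size=paths)

# Closed-form transient moments from the slide
mean_T = x0 * np.exp(-gamma * T) + (alpha / gamma) * (1 - np.exp(-gamma * T))
var_T = sigma**2 / (2 * gamma) * (1 - np.exp(-2 * gamma * T))
print(M.mean(), mean_T)  # empirical vs. closed-form mean
print(M.var(), var_T)    # empirical vs. closed-form variance
```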
INTRODUCTION
- OU is a Markovian, Gaussian process that is mean-reverting (towards the limiting mean α/γ).
INTRODUCTION
In this talk we consider two complications:
- Reflection at 0 (from above), or double reflection at 0 (from above) and d > 0 (from below).
- Markov modulation: the parameters α, γ, σ > 0 have values α_i, γ_i, σ_i > 0 when an independent background Markov chain is in state i.
Part I: REFLECTED ORNSTEIN-UHLENBECK (ROU)
REFLECTED ORNSTEIN-UHLENBECK (ROU)
- The reflected Ornstein-Uhlenbeck process (ROU) with reflection at 0 is defined as the unique strong solution to the SDE
  dY_t = (α − γY_t)dt + σ dB_t + dL_t,  Y_0 = x > 0,
where α, γ, σ > 0, B_t is a standard Brownian motion, and L_t is the minimal nondecreasing process which makes Y_t ≥ 0. We have
  ∫_{[0,T]} 1(Y_t > 0) dL_t = 0, ∀T > 0.
- Idea (in queueing lingo): L_t can be interpreted as 'cumulative idle time' (by the condition above, it can only increase while Y_t is at 0).
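A discrete-time sketch of the reflected dynamics (illustrative parameters; the clipping scheme is a standard approximation of the Skorokhod reflection, assumed here rather than taken from the talk): the path is clipped at 0 and the clipped amount is recorded as the increment of L_t, which by construction only grows at the boundary:

```python
import numpy as np

# ROU parameters (illustrative); small alpha so the boundary is visited
alpha, gamma, sigma = 0.2, 1.0, 0.8
y0, T, n = 0.5, 5.0, 50_000
dt = T / n

rng = np.random.default_rng(2)
y, L = y0, 0.0
for _ in range(n):
    # free (unreflected) Euler step
    y_free = y + (alpha - gamma * y) * dt + sigma * np.sqrt(dt) * rng.normal()
    dL = max(0.0, -y_free)   # push needed to keep the path nonnegative
    y = y_free + dL
    L += dL

print(y, L)  # y stays >= 0; L accumulates the pushes at the boundary
```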
REFLECTED ORNSTEIN-UHLENBECK (ROU)
- Existence and uniqueness of the solution are guaranteed since the drift b(x) = α − γx is uniformly Lipschitz continuous and grows no faster than linearly, and σ is a bounded constant.
- In Ward and Glynn's papers [3, 5], ROU is used to approximate the number-in-system processes in M/M/1 and GI/GI/1 queues, both with reneging.
- The number-in-system process in a GI/M/n loss model can be approximated by ROU as well; see e.g. Srikant and Whitt [2].
- Useful in finance as well, if a certain quantity cannot cross specific boundaries (for instance if it has to stay positive).
- The scaled ROU (with small perturbation):
  dY^ε_t = (α − γY^ε_t)dt + √ε σ dB_t + dL^ε_t,  Y^ε_0 = x > 0.
- The first question to be addressed: given T > 0 and b > E(Y^ε_T | Y^ε_0 = x), what is the probability that the process Y^ε_t exceeds b at time T? I.e., compute
  lim_{ε→0} ε log P(Y^ε_T > b | Y^ε_0 = x).  (1)
- Shwartz and Weiss [1] calculate this blocking probability of the M/M/n queue (or overflow probability in the corresponding M/M/∞) analytically by large deviations techniques.
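The rare-event character of (1) can be illustrated by crude Monte Carlo for moderate ε; a sketch with illustrative parameters (Euler scheme with clipping at 0 assumed), showing the exceedance probability shrinking as ε decreases:

```python
import numpy as np

# Scaled-ROU parameters (illustrative)
alpha, gamma, sigma = 1.0, 1.0, 1.0
x0, T, b = 0.0, 1.0, 1.2
n, paths = 1_000, 20_000
dt = T / n
rng = np.random.default_rng(3)

def prob_exceed(eps):
    """Monte Carlo estimate of P(Y^eps_T > b | Y^eps_0 = x0)."""
    y = np.full(paths, x0)
    for _ in range(n):
        y += (alpha - gamma * y) * dt \
             + np.sqrt(eps) * sigma * np.sqrt(dt) * rng.normal(size=paths)
        np.maximum(y, 0.0, out=y)   # reflection at 0
    return np.mean(y > b)

p_big, p_small = prob_exceed(0.3), prob_exceed(0.15)
print(p_big, p_small)                               # rarer as eps shrinks
print(0.3 * np.log(p_big), 0.15 * np.log(p_small))  # eps * log P
```

The printed values of ε·log P give a rough impression of the decay rate that the limit (1) formalizes; for genuinely small ε one would need importance sampling instead of crude Monte Carlo.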
STRATEGY:
- Step 1: Derive a Large Deviations Principle for Y^ε_t with an explicit rate function. Observe that the rate function has the same expression as the one of OU, but operates on a smaller function space.
- Step 2: Minimize the rate function of OU over all continuous paths f such that f(0) = x and f(T) > b. Observe that the optimizing path f* does not hit level 0 between 0 and T.
- Step 3: Prove that the optimizing path f* also solves the variational problem of minimizing the rate function of ROU over all positive continuous paths f such that f(0) = x and f(T) > b. Hence the decay rate (1) is obtained.
STEP 1: Identification of ROU's rate function
Proposition 1. Consider
  dY^ε_t = b(Y^ε_t)dt + √ε σ dB_t + dL^ε_t,  Y^ε_0 = x > 0;
recall that in our case b(x) = α − γx. Define:
  H_x := { f : f(t) = x + ∫_0^t φ(s)ds, φ ∈ L²([0,T]) }.
- When b(0) > 0, Y^ε_t satisfies an LDP with rate function
  I⁺(h) = (1/2σ²) ∫_0^T (h′_t − b(h_t))² dt
if h ∈ H_x, and ∞ else.
- When b(0) < 0, Y^ε_t satisfies an LDP with rate function
  I⁺(h) = (1/2σ²) ∫_0^T (h′_t − b(h_t))² dt − (1/2σ²) ∫_0^T 1_{{0}}(h_t) (b(0))² dt
if h ∈ H_x, and ∞ else.
Observe: this coincides with the LDP for OU, adapted at the boundary 0.
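The rate function can be evaluated numerically for any absolutely continuous path; a sketch for the b(0) > 0 case (illustrative parameters, finite differences and trapezoidal quadrature assumed), checking that the noiseless flow h′ = b(h) has zero cost while other paths pay a positive price:

```python
import numpy as np

# OU parameters (illustrative); b(x) = alpha - gamma x, b(0) = alpha > 0
alpha, gamma, sigma = 1.0, 2.0, 0.5
x0, T, n = 3.0, 1.0, 20_000
t = np.linspace(0.0, T, n + 1)
dt = T / n

def rate(h):
    """I^+(h) = 1/(2 sigma^2) * int_0^T (h' - b(h))^2 dt, trapezoid rule."""
    dh = np.gradient(h, t)
    integrand = (dh - (alpha - gamma * h)) ** 2
    return np.sum((integrand[:-1] + integrand[1:]) / 2) * dt / (2 * sigma**2)

# Noiseless ODE solution h' = b(h): zero cost (up to quadrature error)
h_free = x0 * np.exp(-gamma * t) + (alpha / gamma) * (1 - np.exp(-gamma * t))
# Any other path, e.g. a straight line from x0 to 1, has strictly positive cost
h_linear = x0 + (1.0 - x0) * t / T
print(rate(h_free), rate(h_linear))
```

For the linear path the integral can be done by hand, ∫_0^1 (3 − 4t)² dt = 7/3, so I⁺ = (7/3)/(2σ²) = 14/3, which the quadrature reproduces.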
STEP 2: Solving a variational problem
- Observe that we can write P(Y^ε_T > b | Y^ε_0 = x) = P(Y^ε ∈ S), with
  S := ∪_{a>b} S_a,  S_a := { f ∈ C_{[0,T]}(R) : f(0) = x, f(T) = a }.
- Hence,
  lim_{ε→0} ε log P(Y^ε_T > b | Y^ε_0 = x) = − inf_{f∈S} I⁺(f).
STEP 2: Solving a variational problem
- M^ε_t: scaled OU. E(M^ε_T | M^ε_0 = x) = α/γ + (x − α/γ)e^{−γT}, which is greater than 0 when the starting point x > 0. Take the crossing level b > α/γ + (x − α/γ)e^{−γT} for both OU and ROU.
- Proposition 2. Let a > b. We have, with I_x(f) the rate function for OU,
  inf_{f∈S_a} I_x(f) = [a − (α/γ + (x − α/γ)e^{−γT})]² / ([1 − e^{−2γT}](σ²/γ)).
The optimizing path is given by
  f*(t) = [(x − α/γ)(e^{γT−γt} − e^{−γT+γt}) + (a − α/γ)(e^{γt} − e^{−γt})] / (e^{γT} − e^{−γT}) + α/γ.
- Moreover, f*(t) > 0 on t ∈ [0,∞) when the starting point x > 0.
- f*(t) ∈ [0,d] on t ∈ [0,T] when the starting point x ∈ [0,d], a ∈ [0,d] and α/γ < d.
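Proposition 2 can be verified numerically: the explicit path f* should satisfy the boundary conditions, stay positive, and its OU rate-function cost should equal the closed-form infimum. A sketch with illustrative parameters:

```python
import numpy as np

# Illustrative parameters with x0 > 0 and a > b > mean at time T
alpha, gamma, sigma = 1.0, 1.0, 1.0
x0, T, a = 0.2, 1.0, 1.5
n = 20_000
t = np.linspace(0.0, T, n + 1)
dt = T / n
mu = alpha / gamma

# Optimizing path from Proposition 2
f_star = ((x0 - mu) * (np.exp(gamma * (T - t)) - np.exp(-gamma * (T - t)))
          + (a - mu) * (np.exp(gamma * t) - np.exp(-gamma * t))) \
         / (np.exp(gamma * T) - np.exp(-gamma * T)) + mu

# Its cost under the OU rate function, by quadrature
df = np.gradient(f_star, t)
integrand = (df - (alpha - gamma * f_star)) ** 2 / (2 * sigma**2)
cost = np.sum((integrand[:-1] + integrand[1:]) / 2) * dt

# Closed-form infimum from Proposition 2
closed_form = (a - (mu + (x0 - mu) * np.exp(-gamma * T))) ** 2 \
              / ((1 - np.exp(-2 * gamma * T)) * sigma**2 / gamma)

print(f_star[0], f_star[-1])  # boundary values x0 and a
print(cost, closed_form)      # agree up to quadrature error
```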
STEP 3: Computation of the decay rate
- Theorem 1. Let b > α/γ + (x − α/γ)e^{−γT}. Then
  lim_{ε→0} ε log P(Y^ε_T > b | Y^ε_0 = x) = −[b − (α/γ + (x − α/γ)e^{−γT})]² / ([1 − e^{−2γT}](σ²/γ)).
Moreover, the optimizing path is the one given in Prop. 2 (with a replaced by b).
TRANSIENT ASYMPTOTICS FOR DROU
- The doubly reflected Ornstein-Uhlenbeck process (DROU) on [0,d] is defined as the strong solution to
  dZ_t = (α − γZ_t)dt + σ dB_t + dL_t − dU_t,  Z_0 = x ∈ [0,d],
where U_t is the minimal nondecreasing process which makes Z_t ≤ d; i.e., we have ∫_{[0,T]} 1(Z_t > 0) dL_t = 0 as well as ∫_{[0,T]} 1(Z_t < d) dU_t = 0.
- In the context of finite-capacity queues, U_t is the continuous analog of the cumulative amount of loss over [0,t], often referred to (queueing lingo!) as the 'loss process'.
- In addition, we assume that the upper boundary d > α/γ. This means that the asymptotic mean α/γ lies between the two reflecting boundaries.
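A discretized sketch of the doubly reflected dynamics (illustrative parameters, two-sided clipping scheme assumed): clip at both boundaries and record the clipped amounts as the increments of L_t (at 0) and U_t (at d):

```python
import numpy as np

# DROU parameters (illustrative); asymptotic mean alpha/gamma = 1 < d
alpha, gamma, sigma = 1.0, 1.0, 1.0
d = 1.5
z0, T, n = 0.5, 10.0, 100_000
dt = T / n

rng = np.random.default_rng(4)
z, L, U = z0, 0.0, 0.0
for _ in range(n):
    z_free = z + (alpha - gamma * z) * dt + sigma * np.sqrt(dt) * rng.normal()
    dL = max(0.0, -z_free)           # push up at the lower boundary 0
    dU = max(0.0, z_free + dL - d)   # push down at the upper boundary d
    z = z_free + dL - dU
    L += dL
    U += dU

print(z, L, U)  # 0 <= z <= d throughout; U accumulates the 'loss'
```

Over this horizon the process repeatedly visits the upper boundary, so U_t is strictly positive, consistent with its interpretation as a loss process.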
- Recall from Prop. 2: let f*(t) be the solution to the variational problem inf_{f∈S_a} I_x(f). Then, importantly, f*(t) ∈ [0,d] on t ∈ [0,T] when the starting point x ∈ [0,d], a ∈ [0,d] and α/γ < d.
- Theorem 2. Let d > b > α/γ + (x − α/γ)e^{−γT}. Then
  lim_{ε→0} ε log P(Z^ε_T > b | Z^ε_0 = x) = −[b − (α/γ + (x − α/γ)e^{−γT})]² / ([1 − e^{−2γT}](σ²/γ)).
Moreover, the optimizing path is the one given in Prop. 2 (with a replaced by b).
CENTRAL LIMIT THEOREM OF LOSS PROCESSES
- CLT of U_t (L_t) of the DROU
  dZ_t = (α − γZ_t)dt + σ dB_t + dL_t − dU_t,  Z_0 = x ∈ [0,d].
- Zhang & Glynn [6] solve the same problem for doubly reflected Brownian motion by a martingale approach.
- Let h be a twice continuously differentiable function on R, and Z_t the DROU process. By Ito's formula, we have:
  dh(Z_t) = ((α − γZ_t)h′(Z_t) + (σ²/2)h″(Z_t))dt + σh′(Z_t)dB_t + h′(Z_t)dL_t − h′(Z_t)dU_t.
- Since L_t, U_t are local times at 0 and d respectively,
  dh(Z_t) = (Lh)(Z_t)dt + σh′(Z_t)dB_t + h′(0)dL_t − h′(d)dU_t,  (2)
where the operator L is defined through
  L := (α − γx) d/dx + (σ²/2) d²/dx².
- The ODE
  (Lh)(x) = q, 0 ≤ x ≤ d,
such that h(0) = 0, h′(0) = 0, and h′(d) = 1, has a unique solution pair (h(x), q_U).
- With h(0) = 0, h′(0) = −1, and h′(d) = 0, it has a unique solution pair (h₁(x), q_L).
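The pair (h, q_U) can be computed numerically: writing g = h′, the ODE (Lh)(x) = q becomes the first-order linear ODE (σ²/2)g′ + (α − γx)g = q with g(0) = 0 and g(d) = 1. By linearity, g = q·g₁ where g₁ solves the unit-forcing problem, so q_U = 1/g₁(d); h itself then follows by integrating g with h(0) = 0. A sketch using a plain RK4 integrator (illustrative parameters; the numerical scheme is an assumption of the sketch):

```python
import numpy as np

# DROU parameters (illustrative), upper boundary d > alpha/gamma
alpha, gamma, sigma = 1.0, 1.0, 1.0
d = 1.5
n = 20_000
dx = d / n

def rhs(x, g):
    # (sigma^2/2) g' + (alpha - gamma x) g = 1  =>  solve for g'
    return (1.0 - (alpha - gamma * x) * g) * 2.0 / sigma**2

# RK4 integration of the unit-forcing problem, g1(0) = 0
g1, x = 0.0, 0.0
for _ in range(n):
    k1 = rhs(x, g1)
    k2 = rhs(x + dx / 2, g1 + dx / 2 * k1)
    k3 = rhs(x + dx / 2, g1 + dx / 2 * k2)
    k4 = rhs(x + dx, g1 + dx * k3)
    g1 += dx / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    x += dx

q_U = 1.0 / g1   # enforces h'(d) = q_U * g1(d) = 1
print(q_U)       # long-run loss rate: E U_t / t -> q_U
```

The same routine with boundary data g(0) = −1, g(d) = 0 (an inhomogeneous initial value rather than pure forcing) yields the pair (h₁, q_L) for the lower boundary.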
- Theorem 3. The loss process U_t satisfies the central limit theorem, with η²_U defined below in (3):
  (U_t − q_U t)/√t ⇒ N(0, η²_U), as t → ∞.
- Proof: Insert the unique solution h(x) into (2). Since h′(0) = 0, h′(d) = 1, and (Lh)(Z_t) = q_U,
  U_t − q_U t + h(Z_t) − h(Z_0) = σ ∫_0^t h′(Z_s) dB_s.
- M_t := U_t − q_U t + h(Z_t) − h(Z_0) is a zero-mean square integrable martingale; ⟨M⟩_t: quadratic variation process of M_t.
- By the ergodic theorem,
  ⟨M⟩_t / t = (σ²/t) ∫_0^t h′(Z_s)² ds →^P σ² ∫_0^d h′(x)² π(dx) =: η²_U,  (3)
where π is the stationary distribution corresponding to Z_t [4].
- By the martingale central limit theorem,
  M_t/√t = (U_t − q_U t + h(Z_t) − h(Z_0))/√t ⇒ N(0, η²_U)
as t → ∞.
- Since Z_t ∈ [0,d] and h is continuous, h(Z_t) is bounded. So,
  (h(Z_t) − h(Z_0))/√t → 0
a.s. as t → ∞.
- Hence,
  (U_t − q_U t)/√t ⇒ N(0, η²_U), as t → ∞.
So far:
- Tail probabilities for DROU;
- CLT for U_t and L_t of DROU;
- To be done: LD for U_t and L_t of DROU. Tricky!
Bibliography of Part I
[1] Adam Shwartz and Alan Weiss. Large Deviations for Performance Analysis: Queues, Communications, and Computing. Stochastic Modeling Series. Chapman & Hall, London, 1995. With an appendix by Robert J. Vanderbei.
[2] Rayadurgam Srikant and Ward Whitt. Simulation run lengths to estimate blocking probabilities. ACM Trans. Model. Comput. Simul., 6(1):7–52, January 1996.
[3] Amy R. Ward and Peter W. Glynn. A diffusion approximation for a Markovian queue with reneging. Queueing Syst., 43(1–2):103–128, 2003.
[4] Amy R. Ward and Peter W. Glynn. Properties of the reflected Ornstein-Uhlenbeck process. Queueing Syst., 44(2):109–123, 2003.
[5] Amy R. Ward and Peter W. Glynn. A diffusion approximation for a GI/GI/1 queue with balking or reneging. Queueing Syst., 50(4):371–400, 2005.
[6] Xiaowei Zhang and Peter W. Glynn. On the dynamics of a finite buffer queue conditioned on the amount of loss. Queueing Syst., 67(2):91–110, 2011.
Part IIMARKOV MODULATED ORNSTEIN-UHLENBECK (MMOU)
Central Limit Theorems
MODEL: MMOU
I (X(t))_{t≥0}: an irreducible Markov process on {1, …, d}.
I Transition rates: Q = (q_ij)_{i,j=1}^d; (unique) invariant distribution: π.
I Now we suppose that the process X(·) modulates an Ornstein-Uhlenbeck process: while X(·) is in state i, the process (M(t))_{t≥0} behaves as an Ornstein-Uhlenbeck process U_i(·) with parameters α_i, γ_i, and σ_i, independently of the 'background process' X(·).
I Hence, M(·) obeys the following SDE:
dM(t) = (α_{X(t)} − γ_{X(t)} M(t)) dt + σ_{X(t)} dB(t),
with (B(t))_{t≥0} a standard Brownian motion independent of (X(t))_{t≥0}.
I Queueing: Markov modulation; finance: regime switching.
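As a concrete illustration of the SDE above, here is a minimal Euler-Maruyama sketch of one MMOU path; the 2-state generator Q and the parameters α, γ, σ are hypothetical, chosen only for the example.

```python
import numpy as np

# Euler-Maruyama sketch of the MMOU SDE dM = (alpha_X - gamma_X M) dt + sigma_X dB;
# the 2-state background chain X jumps with probability ~ -Q[X,X]*dt per step.
# All parameter values are illustrative, not from the slides.
rng = np.random.default_rng(0)
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])   # transition rate matrix of X
alpha = np.array([1.0, 3.0])
gamma = np.array([0.5, 1.0])
sigma = np.array([0.4, 0.8])

def simulate_mmou(T=5.0, dt=5e-3, m0=0.0, x0=0):
    M, X = m0, x0
    for _ in range(int(T / dt)):
        if rng.random() < -Q[X, X] * dt:   # background jump (two states: flip)
            X = 1 - X
        M += (alpha[X] - gamma[X] * M) * dt + sigma[X] * np.sqrt(dt) * rng.normal()
    return M

samples = np.array([simulate_mmou() for _ in range(200)])
print(samples.mean(), samples.std())
```

For moderately large T the sample mean should settle between the per-state levels α_i/γ_i (here 2 and 3).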
First: deterministic modulation
I First consider OU with deterministically changing parameters:
dM(t) = (α(t) − γ(t)M(t)) dt + σ(t) dB(t),
with B(t) a standard Brownian motion, and α(t), γ(t), and σ(t) arbitrary positive, deterministic functions.
I Solve the SDE. Define F(M(t), t) := M(t) e^{Γ(t)}, with Γ(t) := ∫_0^t γ(s) ds. Then, by virtue of Itô's lemma,
dF(M(t), t) = e^{Γ(t)} (γ(t)M(t) dt + dM(t)) = e^{Γ(t)} (α(t) dt + σ(t) dB(t)).
I Now integrating yields
M(t) e^{Γ(t)} = M(0) + ∫_0^t e^{Γ(s)} α(s) ds + ∫_0^t e^{Γ(s)} σ(s) dB(s),
so that, trivially, M(t) equals
M(0) e^{−Γ(t)} + ∫_0^t e^{−(Γ(t)−Γ(s))} α(s) ds + ∫_0^t e^{−(Γ(t)−Γ(s))} σ(s) dB(s).
Deterministic modulation, ctd.
I From now on: M(0) equals the constant m_0.
I Now the random variable M(t) necessarily has a Normal distribution, with mean
μ_t = m_0 e^{−Γ(t)} + ∫_0^t e^{−(Γ(t)−Γ(s))} α(s) ds,
and variance
v_t = Var(∫_0^t e^{−(Γ(t)−Γ(s))} σ(s) dB(s)) = ∫_0^t e^{−2(Γ(t)−Γ(s))} σ²(s) ds.
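These mean and variance formulas can be sanity-checked numerically: for constant α, γ, σ, quadrature of the general integral expressions must reproduce the classical constant-parameter OU answers. A small sketch with illustrative values:

```python
import numpy as np

# Check mu_t and v_t from the slide's integral formulas against the classical
# constant-parameter OU expressions; trapezoidal quadrature, illustrative values.
m0, alpha, gamma, sigma, t = 1.0, 2.0, 0.7, 0.5, 3.0
s = np.linspace(0.0, t, 20001)
ds = s[1] - s[0]

def trap(f):                       # simple trapezoidal rule on the grid
    return np.sum((f[1:] + f[:-1]) / 2) * ds

Gamma_t, Gamma_s = gamma * t, gamma * s          # Gamma(u) = int_0^u gamma(r) dr
mu_t = m0 * np.exp(-Gamma_t) + trap(np.exp(-(Gamma_t - Gamma_s)) * alpha)
v_t = trap(np.exp(-2 * (Gamma_t - Gamma_s)) * sigma**2)

mu_closed = m0 * np.exp(-gamma * t) + (alpha / gamma) * (1 - np.exp(-gamma * t))
v_closed = sigma**2 * (1 - np.exp(-2 * gamma * t)) / (2 * gamma)
print(mu_t, mu_closed, v_t, v_closed)
```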
MMOU: conditional mean and variance
I In MMOU, α(t) = α_i, γ(t) = γ_i, and σ²(t) = σ_i² if X(t) = i. We have found the following result.
I Denote by X the path (X(s), s ∈ [0, t]). (M(t) | X) has a Normal distribution with random parameters m and s given by
m := E(M(t) | X) = m_0 exp(−∫_0^t γ_{X(s)} ds) + ∫_0^t exp(−∫_s^t γ_{X(r)} dr) α_{X(s)} ds
and
s := Var(M(t) | X) = ∫_0^t exp(−2 ∫_s^t γ_{X(r)} dr) σ²_{X(s)} ds.
I Similarity with the corresponding result for the Markov-modulated infinite-server queue by D'Auria: there the number of jobs in the system has a Poisson distribution with random parameter.
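For a fixed (here piecewise-constant) background path, the conditional moments m and s can be evaluated on a grid; the two-segment path and all parameter values below are hypothetical, and the result can be checked against chaining two constant-parameter OU steps.

```python
import numpy as np

# Conditional mean m and variance s of M(t) given a fixed background path,
# via trapezoidal quadrature of the slide's formulas; parameters illustrative.
alpha = np.array([1.0, 3.0]); gamma = np.array([0.5, 1.0]); sigma = np.array([0.4, 0.8])
m0, t = 0.0, 4.0
grid = np.linspace(0.0, t, 40001); dt = grid[1] - grid[0]
X = np.where(grid < 2.0, 0, 1)               # example path: state 0, then state 1
g, a, s2 = gamma[X], alpha[X], sigma[X]**2
# Gamma(u) = int_0^u gamma_{X(r)} dr on the grid (cumulative trapezoid)
G = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2) * dt))
w = np.exp(-(G[-1] - G))                     # exp(-int_s^t gamma_{X(r)} dr)
m = m0 * np.exp(-G[-1]) + np.sum(((w * a)[1:] + (w * a)[:-1]) / 2) * dt
s = np.sum(((w**2 * s2)[1:] + (w**2 * s2)[:-1]) / 2) * dt
print(m, s)
```

For this path the exact two-step OU recursion gives m ≈ 2.765 and s ≈ 0.317, which the quadrature reproduces.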
MMOU: unconditional mean and variance
I Now: expressions for the unconditional mean and variance.
I Special cases: all γ_i equal, t → ∞, certain scalings.
I Let Z(t) ∈ {0,1}^d be such that Z_i(t) = 1 if X(t) = i and 0 otherwise.
MMOU: unconditional mean and variance
I Directly from the definitions,
dμ_t = E(α^T Z(t) − γ^T Y(t)) dt,
with Y(t) := Z(t)M(t). Let ν_t := E Y(t) and
p_t := (P(X(t) = 1), …, P(X(t) = d))^T.
I Conclude that
μ'_t = α^T p_t − γ^T ν_t.
I Clearly, dZ(t) = Q^T Z(t) dt + dK(t), for a d-dimensional martingale K(t). With Itô's rule,
dY(t) = M(t)(Q^T Z(t) dt + dK(t)) + Z(t)(α^T Z(t) − γ^T Y(t)) dt + σ^T Z(t) dB(t).
MMOU: unconditional mean and variance
I Taking expectations of both sides, we obtain, with Q_γ := Q^T − diag{γ},
ν'_t = Q_γ ν_t + diag{α} p_t.
I From this non-homogeneous linear system of differential equations, we know that ν_t is given by
ν_t = e^{Q_γ t} ν_0 + ∫_0^t e^{Q_γ(t−s)} diag{α} p_s ds;
and then μ_t = 1^T ν_t.
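The ODE for ν_t can also be integrated directly; below is a sketch for a hypothetical 2-state model started in equilibrium (p_t = π), checking convergence to the stationary solution ν_∞ = −Q_γ^{−1} diag{α} π and the identity γ^T ν_∞ = π^T α.

```python
import numpy as np

# Integrate nu'_t = Q_gamma nu_t + diag(alpha) pi by explicit Euler and compare
# with the stationary solution; 2-state example, illustrative parameters.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
alpha = np.array([1.0, 3.0]); gamma = np.array([0.5, 1.0])
pi = np.array([2/3, 1/3])                   # solves pi Q = 0, sums to 1
Qg = Q.T - np.diag(gamma)                   # Q_gamma = Q^T - diag(gamma)
nu = np.zeros(2)                            # nu_0 = pi * m0 with m0 = 0
dt, T = 1e-3, 30.0
for _ in range(int(T / dt)):
    nu = nu + (Qg @ nu + alpha * pi) * dt   # diag(alpha) pi == alpha * pi
nu_inf = -np.linalg.solve(Qg, alpha * pi)   # -Q_gamma^{-1} diag(alpha) pi
print(nu, nu_inf, nu_inf.sum())
```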
MMOU: unconditional mean and variance
I The equations simplify drastically if the background process starts off in equilibrium at time 0; then p_t = π for all t ≥ 0. As a result,
ν_t = e^{Q_γ t} ν_0 − Q_γ^{−1}(I − e^{Q_γ t}) diag{α} π.
I It follows that ν_∞ = −Q_γ^{−1} diag{α} π, and
μ_∞ = 1^T ν_∞ = −1^T Q_γ^{−1} diag{α} π.
I We further note that γ = −(Q − diag{γ})1, and hence
γ^T Q_γ^{−1} = −1^T,
so that γ^T ν_∞ = π^T α.
MMOU: unconditional mean and variance
I The variance can be found similarly. Define Ȳ(t) := Z(t)M²(t), and w_t := E Ȳ(t).
I The starting point is the relation
d(M(t) − μ_t) = (α^T(Z(t) − p_t) − γ^T(Y(t) − ν_t)) dt + σ^T Z(t) dB(t),
so that
d(M(t) − μ_t)² = 2(M(t) − μ_t)(α^T(Z(t) − p_t) − γ^T(Y(t) − ν_t)) dt + 2(M(t) − μ_t) σ^T Z(t) dB(t) + σ^T diag{Z(t)} σ dt.
I Taking expectations of both sides,
v'_t = 2α^T ν_t − 2μ_t α^T p_t − 2γ^T w_t + 2μ_t γ^T ν_t + σ^T diag{p_t} σ.
MMOU: unconditional mean and variance
I Clearly, to evaluate this expression, we first need to identify w_t. To this end, we set up an equation for dȲ(t) as before and take expectations, so as to obtain
w'_t = Q_{2γ} w_t + 2 diag{α} ν_t + diag{σ²} p_t.
I This leads to
w_t = e^{Q_{2γ} t} w_0 + ∫_0^t e^{Q_{2γ}(t−s)} (2 diag{α} ν_s + diag{σ²} p_s) ds,
so that v_t = 1^T w_t − μ_t².
MMOU: unconditional mean and variance
I Again simplifications can be made if p_0 = π (and hence p_t = π for all t). In that case we have already found an expression for ν_s above, and as a result w_t can be evaluated explicitly.
I For the stationary situation (t → ∞, that is) we obtain
w_∞ = −Q_{2γ}^{−1}(2 diag{α} ν_∞ + diag{σ²} π),
and v_∞ = 1^T w_∞ − μ_∞².
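The stationary mean and variance thus follow from two linear solves; a sketch with hypothetical 2-state parameters (π solving πQ = 0):

```python
import numpy as np

# Stationary mean and variance of MMOU via the linear systems on the slides:
# nu_inf = -Q_gamma^{-1} diag(alpha) pi,
# w_inf  = -Q_{2gamma}^{-1} (2 diag(alpha) nu_inf + diag(sigma^2) pi),
# v_inf  = 1^T w_inf - mu_inf^2.  All parameter values illustrative.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
alpha = np.array([1.0, 3.0]); gamma = np.array([0.5, 1.0]); sigma2 = np.array([0.16, 0.64])
pi = np.array([2/3, 1/3])
Qg = Q.T - np.diag(gamma)
Q2g = Q.T - np.diag(2 * gamma)
nu_inf = -np.linalg.solve(Qg, alpha * pi)
mu_inf = nu_inf.sum()
w_inf = -np.linalg.solve(Q2g, 2 * alpha * nu_inf + sigma2 * pi)
v_inf = w_inf.sum() - mu_inf**2
print(mu_inf, v_inf)
```

The stationary mean lands between the per-state levels α_i/γ_i, and the variance is positive, as it must be.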
MMOU: unconditional mean and variance, γ_i equal
I It is directly seen that μ_∞ = π^T α / γ.
I Now γ^T Q_γ^{−1} = −1^T implies 1^T Q_{δ1}^{−1} = −δ^{−1} 1^T for any δ > 0, so that
v_∞ = 1^T w_∞ − μ_∞² = (1^T diag{α} ν_∞)/γ + (π^T σ²)/(2γ) − (π^T α / γ)²
= −(1^T diag{α} Q_{γ1}^{−1} diag{α} π)/γ + (π^T σ²)/(2γ) − (π^T α / γ)².
MMOU: unconditional mean and variance, γ_i equal
I Let D̄_ij(γ) := ∫_0^∞ p_ij(v) e^{−γv} dv. Integration by parts:
Q D̄(γ) = ∫_0^∞ Q P(v) e^{−γv} dv = ∫_0^∞ P'(v) e^{−γv} dv = −I + ∫_0^∞ γ P(v) e^{−γv} dv = −I + γ D̄(γ).
I As a consequence, −(Q − γI) D̄(γ) = I, so that
v_∞ = (π^T σ²)/(2γ) + (1/γ) α^T diag{π} D̄(γ) α − (π^T α / γ)²,
where D_ij(γ) := ∫_0^∞ (p_ij(v) − π_j) e^{−γv} dv = D̄_ij(γ) − π_j/γ.
I We find:
v_∞ = (π^T σ²)/(2γ) + (1/γ) α^T diag{π} D(γ) α.
I Mean and variance can also be found by elementary, insightful arguments, however!
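With all γ_i equal, the D(γ)-based expression for v_∞ can be checked numerically against the linear-system route of the previous slides; here D̄(γ) = (γI − Q)^{−1} follows from −(Q − γI)D̄(γ) = I. The parameters are again illustrative.

```python
import numpy as np

# Two routes to v_inf for equal gamma_i, which must agree:
# (1) the nu_inf / w_inf linear systems; (2) the D(gamma)-based formula,
# with Dbar(gamma) = (gamma I - Q)^{-1} and D = Dbar - (1 pi^T)/gamma.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
alpha = np.array([1.0, 3.0]); sigma2 = np.array([0.16, 0.64]); g = 0.7
pi = np.array([2/3, 1/3]); d = 2

# route 1: stationary linear systems
nu_inf = -np.linalg.solve(Q.T - g * np.eye(d), alpha * pi)
w_inf = -np.linalg.solve(Q.T - 2 * g * np.eye(d), 2 * alpha * nu_inf + sigma2 * pi)
v1 = w_inf.sum() - nu_inf.sum()**2

# route 2: D(gamma) formula
Dbar = np.linalg.inv(g * np.eye(d) - Q)
D = Dbar - np.outer(np.ones(d), pi) / g
v2 = pi @ sigma2 / (2 * g) + alpha @ (np.diag(pi) @ D @ alpha) / g
print(v1, v2)
```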
MMOU: unconditional mean and variance, γ_i equal
I First: the mean μ_t. It is immediate that μ_t is a convex mixture of m_0 and π^T α / γ:
μ_t = m_0 e^{−γt} + e^{−γt} ∫_0^t e^{γs} ds (Σ_{i=1}^d π_i α_i) = m_0 e^{−γt} + (π^T α / γ)(1 − e^{−γt}),
which converges, as t → ∞, to π^T α / γ, as expected.
MMOU: unconditional mean and variance, γ_i equal
I Then: the variance v_t. Use the law of total variance. Denote again by X the path (X(s), s ∈ [0, t]). We now compute
Var M(t) = E(Var(M(t) | X)) + Var(E(M(t) | X)).
I First term: use the result for the conditional variance. We get:
E(Var(M(t) | X)) = E(∫_0^t e^{−2γ(t−s)} σ²_{X(s)} ds) = ∫_0^t e^{−2γ(t−s)} E(σ²_{X(s)}) ds = Σ_{i=1}^d π_i σ_i² (1 − e^{−2γt})/(2γ).
MMOU: unconditional mean and variance, γ_i equal
I Also,
Var(E(M(t) | X)) = Var(∫_0^t e^{−γ(t−s)} α_{X(s)} ds) = ∫_0^t ∫_0^t Cov(e^{−γ(t−s)} α_{X(s)}, e^{−γ(t−u)} α_{X(u)}) du ds = e^{−2γt} ∫_0^t ∫_0^t e^{γ(s+u)} Cov(α_{X(s)}, α_{X(u)}) du ds.
I The latter integral expression can be evaluated as
2 e^{−2γt} ∫_0^t ∫_0^s e^{γ(s+u)} Cov(α_{X(s)}, α_{X(u)}) du ds = … = (1/γ) Σ_{i=1}^d Σ_{j=1}^d α_i α_j ∫_0^t (e^{−γv} − e^{−γ(2t−v)}) π_i (p_ij(v) − π_j) dv
(change the order of integration and use elementary calculus).
MMOU: unconditional mean and variance, γ_i equal
I Specializing to the situation that t → ∞, we obtain
Var M(∞) = (π^T σ²)/(2γ) + (1/γ) Σ_{i=1}^d Σ_{j=1}^d α_i α_j π_i D_ij(γ) = (π^T σ²)/(2γ) + (1/γ) α^T diag{π} D(γ) α,
in accordance with the expression we found before.
MMOU: unconditional mean and variance, γ_i equal
I Scale α ↦ Nα, σ² ↦ Nσ², and Q ↦ N^f Q for some f > 0. Then
Var M^{(N)}(t) = N Σ_{i=1}^d π_i σ_i² (1 − e^{−2γt})/(2γ) + N² Σ_{i=1}^d Σ_{j=1}^d α_i α_j ∫_0^t ((e^{−γv} − e^{−γ(2t−v)})/γ) π_i (p_ij(vN^f) − π_j) dv,
which for N large behaves as, with D := D(0),
((1 − e^{−2γt})/(2γ)) (N Σ_{i=1}^d π_i σ_i² + 2N^{2−f} Σ_{i=1}^d Σ_{j=1}^d α_i α_j π_i D_ij) = ((1 − e^{−2γt})/(2γ)) (N π^T σ² + 2N^{2−f} α^T diag{π} D α).
I Dichotomy: for f > 1 the variance is essentially linear in N, while for f < 1 it behaves superlinearly (more specifically, proportionally to N^{2−f}).
I The matrix D is referred to as the deviation matrix.
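The deviation matrix D = D(0) can be computed without any integration: a standard fact (not derived on the slides) is D = (Π − Q)^{−1} − Π with Π = 1π^T, which can be verified through the defining identities.

```python
import numpy as np

# Deviation matrix D = int_0^inf (P(v) - 1 pi^T) dv, via the standard closed
# form D = (Pi - Q)^{-1} - Pi with Pi = 1 pi^T (a known fact, not from the
# slides); we verify Q D = Pi - I, D 1 = 0 and pi D = 0. Example generator.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
pi = np.array([2/3, 1/3])
Pi = np.outer(np.ones(2), pi)
D = np.linalg.inv(Pi - Q) - Pi
print(D)
```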
MMOU: Central Limit Theorem
I Result:
N^{β/2} (M^{(N)}(t)/N − m_t)
converges to a Normal distribution with zero mean and variance σ²(t).
I Here β := min{f, 1}, and
σ²(t) := ((1 − e^{−2γt})/(2γ)) (π^T σ² 1{f≥1} + 2 α^T diag{π} D α 1{f≤1}).
I Future work: weak convergence to an OU process with appropriate parameters.
Part IIIMARKOV MODULATED ORNSTEIN-UHLENBECK (MMOU)
Large Deviations
MMOU: Large deviations
Two regimes!
I First regime: α ↦ Nα, σ² ↦ Nσ², Q ↦ N^f Q with f > 1.
I Idea: the Markov chain moves faster than the OU processes. Hence: we effectively see OU with parameters Nα_∞ := N π^T α, Nσ²_∞ := N π^T σ², γ_∞ := π^T γ.
I
lim_{N→∞} (1/N) log P(M^{(N)}(t) ≥ Na) = −(1/2) (a − m_∞(t))² / s_∞(t),
where
m_∞(t) = (α_∞/γ_∞)(1 − e^{−γ_∞ t}),
s_∞(t) = (σ²_∞/(2γ_∞))(1 − e^{−2γ_∞ t}).
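The rate in this fast regime is simply the Gaussian rate of the limiting OU marginal, which can be checked against the exact Normal tail for a single large N; the averaged parameters below (π^Tα = 5/3, π^Tγ = 2/3, π^Tσ² = 0.32) belong to a hypothetical 2-state model.

```python
import math

# Large-deviations rate (1/2)(a - m_inf(t))^2 / s_inf(t) for the fast regime,
# checked against the Gaussian tail of M^(N)(t) ~ N(N m, N s) via the standard
# Mills-ratio asymptotics; parameter values are illustrative.
alpha_inf, gamma_inf, sigma2_inf = 5/3, 2/3, 0.32   # pi^T alpha, pi^T gamma, pi^T sigma^2
t, a = 2.0, 3.0
m = alpha_inf / gamma_inf * (1 - math.exp(-gamma_inf * t))
s = sigma2_inf / (2 * gamma_inf) * (1 - math.exp(-2 * gamma_inf * t))
rate = 0.5 * (a - m) ** 2 / s

N = 10_000
z = math.sqrt(N) * (a - m) / math.sqrt(s)            # standardized threshold
log_tail = -z * z / 2 - math.log(z * math.sqrt(2 * math.pi))  # log P(Z >= z), z large
print(rate, -log_tail / N)
```

The two printed numbers agree up to the O((log N)/N) correction hidden in the prefactor.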
MMOU: Large deviations
Proof technique:
I Construct a lower bound by considering a specific scenario.
I Split the interval into subintervals of length t/√N.
I Within each subinterval, consider the scenario that the background process is close to π, viz. in a δ-environment.
I Find a lower bound on the mean and an upper bound on the variance of the Normally distributed M^{(N)}(t).
I Let δ ↓ 0.
MMOU: Large deviations
Proof technique, ctd.:
I Construct an upper bound by showing that all other scenarios are less likely, as follows.
I Split the interval into N^{ε/2} subintervals of length t/N^{ε/2}.
I For any event E,
P(M^{(N)}(t) ≥ Na) ≤ P(M^{(N)}(t) ≥ Na, E) + P(E^c).
Let E be the event of being close to π (i.e., in a δ-environment).
I The second term decays superexponentially.
I Find an upper bound on the mean and a lower bound on the variance of the Normally distributed M^{(N)}(t) on E.
I Let δ ↓ 0.
MMOU: Large deviations
Two regimes!
I Second regime: α ↦ Nα, σ² ↦ Nσ², Q unchanged.
I A single path f(t) of X(t) determines the asymptotics.
I With m_f(t) = E(M(t) | X = f) and s_f(t) = Var(M(t) | X = f), the decay rate involves
min_{f: f(t) ∈ {1,…,d}} (a − m_f(t))² / s_f(t).
MMOU: Large deviations
Goal: estimate P(M(t) ≥ a) for large a (rare event).
A few thoughts on rare-event simulation by importance sampling:
I Two sources of randomness: in the background process X(·) and in the individual OU processes U_i(·).
I Can a change of measure be constructed? (cf. work of Pham)
I 'Hybrid' idea: sample the background process, and then compute the probability.
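The 'hybrid' idea can be sketched directly: sample background paths, evaluate the conditional Gaussian probability P(M(t) ≥ a | X) in closed form via the Part II conditional moments, and average. The chain, parameters, and threshold below are hypothetical, and the path is discretized on a grid.

```python
import math
import numpy as np

# 'Hybrid' rare-event estimator: sample a discretized path of the background
# chain, compute P(M(t) >= a | X) for the conditional N(m, s) law, and average.
# All parameter values are illustrative.
rng = np.random.default_rng(1)
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
alpha = np.array([1.0, 3.0]); gamma = np.array([0.5, 1.0]); sigma = np.array([0.4, 0.8])
m0, t, a = 0.0, 2.0, 6.0
grid = np.linspace(0.0, t, 1001); dt = grid[1] - grid[0]

def cond_tail_prob():
    X = np.empty(len(grid), dtype=int); X[0] = 0
    for k in range(1, len(grid)):                 # Euler discretization of the chain
        x = X[k - 1]
        X[k] = (1 - x) if rng.random() < -Q[x, x] * dt else x
    g, al, s2 = gamma[X], alpha[X], sigma[X] ** 2
    G = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2) * dt))
    w = np.exp(-(G[-1] - G))                      # exp(-int_s^t gamma_{X(r)} dr)
    m = m0 * math.exp(-G[-1]) + np.sum(((w * al)[1:] + (w * al)[:-1]) / 2) * dt
    s = np.sum(((w * w * s2)[1:] + (w * w * s2)[:-1]) / 2) * dt
    return 0.5 * math.erfc((a - m) / math.sqrt(2 * s))   # P(N(m, s) >= a)

est = np.mean([cond_tail_prob() for _ in range(200)])
print(est)
```

Conditioning out X(·) removes the background randomness from the estimator; reducing the variance further would still require a change of measure for the paths of X(·).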
CONCLUSION
I OU models allow fairly explicit analysis;
I Features such as reflection and Markov modulation can bebrought in;
I ... and there is still a lot of work to be done.