Stochastics II
Lecture notes
Prof. Dr. Evgeny Spodarev
Ulm2015
Contents

1 General theory of random functions 1
  1.1 Random functions 1
  1.2 Elementary examples 5
  1.3 Regularity properties of trajectories 6
  1.4 Differentiability of trajectories 11
  1.5 Moments and covariance 12
  1.6 Stationarity and independence 14
  1.7 Processes with independent increments 15
  1.8 Additional exercises 16

2 Counting processes 19
  2.1 Renewal processes 19
  2.2 Poisson processes 28
    2.2.1 Poisson processes 28
    2.2.2 Compound Poisson process 33
    2.2.3 Cox process 35
  2.3 Additional exercises 35

3 Wiener process 38
  3.1 Elementary properties 38
  3.2 Explicit construction of the Wiener process 39
    3.2.1 Haar and Schauder functions 39
    3.2.2 Wiener process with a.s. continuous paths 42
  3.3 Distribution and path properties of Wiener processes 45
    3.3.1 Distribution of the maximum 45
    3.3.2 Invariance properties 47
  3.4 Additional exercises 50

4 Lévy processes 52
  4.1 Lévy processes 52
    4.1.1 Infinite divisibility 52
    4.1.2 Lévy-Khintchine representation 55
    4.1.3 Examples 59
    4.1.4 Subordinators 61
  4.2 Additional exercises 64

5 Martingales 67
  5.1 Basic ideas 67
  5.2 (Sub-, super-)martingales 69
  5.3 Uniform integrability 72
  5.4 Stopped martingales 74
  5.5 Lévy processes and martingales 78
  5.6 Martingales and Wiener process 80
  5.7 Additional exercises 83

Bibliography 85
1 General theory of random functions
1.1 Random functions

Let (Ω, A, P) be a probability space and (S, B) a measurable space, Ω, S ≠ ∅.

Definition 1.1.1
A random element X: Ω → S is an A|B-measurable mapping (notation: X ∈ A|B), i.e.,

  X⁻¹(B) = {ω ∈ Ω : X(ω) ∈ B} ∈ A for all B ∈ B.
If X is a random element, then X(ω) is a realization of X for arbitrary ω ∈ Ω.

We say that the σ-algebra B of subsets of S is induced by the set system M (the elements of M are also subsets of S) if

  B = ⋂ {F : F a σ-algebra on S with F ⊇ M}

(notation: B = σ(M)). If S is a topological or metric space, then M is often chosen as the class of all open sets of S, and σ(M) is called the Borel σ-algebra (notation: B = B(S)).

Example 1.1.1
1. If S = R, B = B(R), then a random element X is called a random variable.
2. If S = R^m, B = B(R^m), m > 1, then X is called a random vector. Random variables and random vectors are considered in the lectures „Elementare Wahrscheinlichkeitsrechnung und Statistik" and „Stochastik I".
3. Let S be the class of all closed sets of R^m, and let

  M = { {A ∈ S : A ∩ B ≠ ∅} : B an arbitrary compactum of R^m }.

Then X: Ω → S is called a random closed set.
As an example we consider n independent, uniformly distributed points Y_1, …, Y_n ∈ [0,1]^m and R_1, …, R_n > 0 (almost surely) independent random variables defined on the same probability space (Ω, A, P) as Y_1, …, Y_n. Consider X = ⋃_{i=1}^{n} B_{R_i}(Y_i), where B_r(x) = {y ∈ R^m : ‖y − x‖ ≤ r}. Obviously, this is a random closed set. An example of a realization is provided in Figure 1.1.

Exercise 1.1.1
Let (Ω, A) and (S, B) be measurable spaces, B = σ(M), where M is a class of subsets of S. Prove that X: Ω → S is A|B-measurable if and only if X⁻¹(C) ∈ A for all C ∈ M.

Definition 1.1.2
Let T be an arbitrary index set and (S_t, B_t), t ∈ T, a family of measurable spaces. A family X = {X(t), t ∈ T} of random elements X(t): Ω → S_t defined on (Ω, A, P) and A|B_t-measurable for all t ∈ T is called a random function (associated with (S_t, B_t), t ∈ T).
Fig. 1.1: Example of a random set X = ⋃_{i=1}^{6} B_{R_i}(Y_i)
Therefore X(·, t): Ω → S_t for every t ∈ T, i.e., X(ω, t) ∈ S_t for all ω ∈ Ω, t ∈ T, and X(·, t) ∈ A|B_t, t ∈ T. We often omit ω in the notation and write X(t) instead of X(ω, t). Sometimes (S_t, B_t) does not depend on t ∈ T: (S_t, B_t) = (S, B) for all t ∈ T.
Special cases of random functions:
1. T ⊆ Z: X is called a random sequence or a stochastic process in discrete time. Example: T = Z, N.

2. T ⊆ R: X is called a stochastic process in continuous time. Example: T = R₊, [a, b] with −∞ < a < b < ∞, R.

3. T ⊆ R^d, d ≥ 2: X is called a random field. Example: T = Z^d, R^d₊, R^d, [a, b]^d.

4. T ⊆ B(R^d): X is called a set-indexed process. If X is almost surely non-negative and σ-additive on the σ-algebra T, then X is called a random measure.
The tradition of denoting the index set by T comes from the interpretation of t ∈ T in cases 1 and 2 as a time parameter. For every ω ∈ Ω, {X(ω, t), t ∈ T} is called a trajectory or path of the random function X.

We would like to show that the random function X = {X(t), t ∈ T} is a random element in a corresponding function space, equipped with a σ-algebra that we now specify. Let S_T = ∏_{t∈T} S_t be the Cartesian product of the S_t, t ∈ T, i.e., x ∈ S_T if x(t) ∈ S_t, t ∈ T. An elementary cylinder set in S_T is defined as

  C_T(B, t) = {x ∈ S_T : x(t) ∈ B},

where t ∈ T is a selected point of T and B ∈ B_t a subset of S_t. C_T(B, t) therefore contains all trajectories x which pass through the „gate" B, see Figure 1.2.

Definition 1.1.3
The cylindric σ-algebra B_T is introduced as the σ-algebra induced in S_T by the family of all
Fig. 1.2: Trajectories which pass through a „gate" B ∈ B_t.
elementary cylinders. It is denoted by B_T = ⊗_{t∈T} B_t. If B_t = B for all t ∈ T, then one writes B^T instead of B_T.

Lemma 1.1.1
The family X = {X(t), t ∈ T} is a random function on (Ω, A, P) with phase spaces (S_t, B_t), t ∈ T, if and only if the mapping ω ↦ X(ω, ·) is A|B_T-measurable.

Exercise 1.1.2
Prove Lemma 1.1.1.

Definition 1.1.4
Let X be a random element X: Ω → S, i.e., let X be A|B-measurable. The distribution of X is the probability measure P_X on (S, B) such that P_X(B) = P(X⁻¹(B)), B ∈ B.

Lemma 1.1.2
An arbitrary probability measure µ on (S, B) can be considered as the distribution of a random element X.
Proof Take Ω = S, A = B, P = µ and X(ω) = ω, ω ∈ Ω.
When does a random function with given properties exist? A random function consisting of independent random elements always exists. This assertion is known as

Theorem 1.1.1 (Łomnicki, Ulam):
Let (S_t, B_t, µ_t), t ∈ T, be a family of probability spaces. There exists a random function X = {X(t), t ∈ T} on a probability space (Ω, A, P) (associated with (S_t, B_t), t ∈ T) such that

1. X(t), t ∈ T, are independent random elements;
2. P_{X(t)} = µ_t on (S_t, B_t), t ∈ T.

Many important classes of random processes are built on the basis of independent random elements; cf. the examples in Section 1.2.

Definition 1.1.5
Let X = {X(t), t ∈ T} be a random function on (Ω, A, P) with phase spaces (S_t, B_t), t ∈ T. The finite-dimensional distributions of X are defined as the distribution laws P_{t_1,…,t_n} of (X(t_1), …, X(t_n))^T on (S_{t_1,…,t_n}, B_{t_1,…,t_n}), for arbitrary n ∈ N, t_1, …, t_n ∈ T, where S_{t_1,…,t_n} = S_{t_1} × … × S_{t_n} and
B_{t_1,…,t_n} = B_{t_1} ⊗ … ⊗ B_{t_n} is the σ-algebra in S_{t_1,…,t_n} induced by all sets B_{t_1} × … × B_{t_n}, B_{t_i} ∈ B_{t_i}, i = 1, …, n, i.e., P_{t_1,…,t_n}(C) = P((X(t_1), …, X(t_n))^T ∈ C), C ∈ B_{t_1,…,t_n}. In particular, for C = B_1 × … × B_n, B_k ∈ B_{t_k}, one has

  P_{t_1,…,t_n}(B_1 × … × B_n) = P(X(t_1) ∈ B_1, …, X(t_n) ∈ B_n).

Exercise 1.1.3
Prove that X_{t_1,…,t_n} = (X(t_1), …, X(t_n))^T is an A|B_{t_1,…,t_n}-measurable random element.

Definition 1.1.6
Let S_t = R for all t ∈ T. The random function X = {X(t), t ∈ T} is called symmetric if all of its finite-dimensional distributions are symmetric probability measures, i.e., P_{t_1,…,t_n}(A) = P⁻_{t_1,…,t_n}(A) for A ∈ B_{t_1,…,t_n} and all n ∈ N, t_1, …, t_n ∈ T, whereby

  P⁻_{t_1,…,t_n}(A) = P(−(X(t_1), …, X(t_n))^T ∈ A).

Exercise 1.1.4
Prove that the finite-dimensional distributions of a random function X have the following properties: for arbitrary n ∈ N, n ≥ 2, t_1, …, t_n ∈ T, B_k ∈ B_{t_k}, k = 1, …, n, and an arbitrary permutation (i_1, …, i_n) of (1, …, n) it holds:
1. Symmetry: P_{t_1,…,t_n}(B_1 × … × B_n) = P_{t_{i_1},…,t_{i_n}}(B_{i_1} × … × B_{i_n});
2. Consistency: P_{t_1,…,t_n}(B_1 × … × B_{n−1} × S_{t_n}) = P_{t_1,…,t_{n−1}}(B_1 × … × B_{n−1}).

The following theorem shows that these properties are sufficient for the existence of a random function X with given finite-dimensional distributions.

Theorem 1.1.2 (Kolmogorov):
Let P_{t_1,…,t_n}, n ∈ N, t_1, …, t_n ∈ T, be a family of probability measures on

  (R^m × … × R^m, B(R^m) ⊗ … ⊗ B(R^m))

which fulfill conditions 1 and 2 of Exercise 1.1.4. Then there exists a random function X = {X(t), t ∈ T} defined on a probability space (Ω, A, P) with finite-dimensional distributions P_{t_1,…,t_n}.
Proof See [13], Section II.9.
This theorem also holds in spaces more general than R^m (however, not arbitrary ones!), the so-called Borel spaces, which are in a sense isomorphic to ([0,1], B([0,1])) or a subspace thereof.

Definition 1.1.7
Let X = {X(t), t ∈ T} be a random function with values in (S, B), i.e., X(t) ∈ S almost surely for arbitrary t ∈ T. Let (T, C) itself be a measurable space. X is called measurable if the mapping X: (ω, t) ↦ X(ω, t) ∈ S, (ω, t) ∈ Ω × T, is A ⊗ C|B-measurable.

Thus, Definition 1.1.7 provides not only the measurability of X with respect to ω ∈ Ω, i.e., X(·, t) ∈ A|B for all t ∈ T, but also X(·, ·) ∈ A ⊗ C|B as a function of (ω, t). The measurability of X is of significance if X(ω, t) is considered at random moments τ: Ω → T, i.e., X(ω, τ(ω)). This is in particular the case in the theory of martingales when τ is a so-called stopping time for X. The distribution of X(ω, τ(ω)) may differ considerably from the distributions of X(ω, t), t ∈ T.
1.2 Elementary examples

The theorem of Kolmogorov can be used directly for the explicit construction of random processes only in a few cases, since for many random functions the finite-dimensional distributions are not given explicitly. In these cases a new random function X = {X(t), t ∈ T} is built as X(t) = g(t, Y_1, Y_2, …), t ∈ T, where g is a measurable function and {Y_n} a sequence of random elements (or random functions) whose existence has already been ensured. We give several examples. Let X = {X(t), t ∈ T} be a real-valued random function on a probability space (Ω, A, P).

1. White noise:
Definition 1.2.1
The random function X = {X(t), t ∈ T} is called white noise if all X(t), t ∈ T, are independent and identically distributed (i.i.d.) random variables.
White noise exists according to Theorem 1.1.1. It is used to model noise in (electromagnetic or acoustic) signals. If X(t) ~ Bernoulli(p), p ∈ (0,1), t ∈ T, one speaks of salt-and-pepper noise, the binary noise which occurs in the transfer of binary data in computer networks. If X(t) ~ N(0, σ²), σ² > 0, t ∈ T, then X is called Gaussian white noise. It occurs e.g. in acoustic signals.
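As a quick illustration (a sketch added here, not part of the original notes), both kinds of white noise mentioned above can be sampled directly, since the values at distinct indices are simply i.i.d. draws:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # sample X(t) on the finite index set T = {1, ..., n}

# Gaussian white noise: X(t) ~ N(0, sigma2), i.i.d.
sigma2 = 0.5
gaussian_noise = rng.normal(0.0, np.sqrt(sigma2), size=n)

# Salt-and-pepper (binary) noise: X(t) ~ Bernoulli(p), i.i.d.
p = 0.1
salt_pepper = rng.binomial(1, p, size=n)
```

Independence across t is what makes this trivial to simulate; no joint structure beyond the marginals has to be respected.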
2. Gaussian random function:

Definition 1.2.2
The random function X = {X(t), t ∈ T} is called Gaussian if all of its finite-dimensional distributions are Gaussian, i.e., for all n ∈ N, t_1, …, t_n ∈ T it holds that

  X_{t_1,…,t_n} = (X(t_1), …, X(t_n))^T ~ N(µ_{t_1,…,t_n}, Σ_{t_1,…,t_n}),

where the mean is given by µ_{t_1,…,t_n} = (EX(t_1), …, EX(t_n))^T and the covariance matrix by Σ_{t_1,…,t_n} = (cov(X(t_i), X(t_j)))_{i,j=1}^{n}.

Exercise 1.2.1
Prove that the distribution of a Gaussian random function X is uniquely determined by its mean value function µ(t) = EX(t), t ∈ T, and its covariance function C(s,t) = E[X(s)X(t)], s, t ∈ T.
An example of a Gaussian process is the so-called Wiener process (or Brownian motion) X = {X(t), t ≥ 0}, which has expected value zero (µ(t) = 0, t ≥ 0) and covariance function C(s,t) = min{s,t}, s, t ≥ 0. Usually it is additionally required that the paths of X are continuous functions. We shall investigate the regularity properties of the paths of random functions in more detail in Section 1.3. For now we note that such a process (with almost surely continuous trajectories) exists.

Exercise 1.2.2
Prove that Gaussian white noise is a Gaussian random function.
3. Lognormal and χ²-functions:
The random function X = {X(t), t ∈ T} is called lognormal if X(t) = e^{Y(t)}, where Y =
{Y(t), t ∈ T} is a Gaussian random function. X is called a χ²-function if X(t) = ‖Y(t)‖², where Y = {Y(t), t ∈ T} is a Gaussian random function with values in R^n for which Y(t) ~ N(0, I), t ∈ T; here I is the (n × n) identity matrix. Then it holds that X(t) ~ χ²_n, t ∈ T.
4. Cosine wave:
X = {X(t), t ∈ R} is defined by X(t) = √2 cos(2πY t + Z), where Y ~ U[0,1] and Z is a random variable independent of Y.

Exercise 1.2.3
Let X_1, X_2, … be i.i.d. cosine waves. Determine the weak limit of the finite-dimensional distributions of the random function {(1/√n) Σ_{k=1}^{n} X_k(t), t ∈ R} as n → ∞.
5. Poisson process:
Let {Y_n}_{n∈N} be a sequence of i.i.d. random variables, Y_n ~ Exp(λ), λ > 0. The stochastic process X = {X(t), t ≥ 0} defined by X(t) = max{n ∈ N : Σ_{k=1}^{n} Y_k ≤ t} is called a Poisson process with intensity λ > 0. X(t) counts the number of certain events up to time t > 0, where the typical interval between two of these events is Exp(λ)-distributed. These events can be claim arrivals in an insurance portfolio, records of elementary particles in a Geiger counter, etc. Then X(t) represents the number of claims or particles within the time interval [0, t].
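The definition above translates directly into a simulation: sum i.i.d. Exp(λ) inter-arrival times and count how many partial sums fall below t. A minimal sketch (the function name is ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_process(lam, t_max, rng):
    """Sample one path: return the jump times S_n = Y_1 + ... + Y_n <= t_max,
    where Y_k ~ Exp(lam) are the i.i.d. inter-arrival times."""
    jumps = []
    s = 0.0
    while True:
        s += rng.exponential(1.0 / lam)  # numpy's Exp is parametrized by its mean 1/lam
        if s > t_max:
            return np.array(jumps)
        jumps.append(s)

jumps = poisson_process(lam=2.0, t_max=10.0, rng=rng)

# X(t) = max{n : S_n <= t} is the number of jump times not exceeding t.
X_t = lambda t: int(np.searchsorted(jumps, t, side="right"))
```

Averaging the terminal count over many paths should be close to λ·t_max, in line with X(t) ~ Poi(λt) (shown later for Poisson processes).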
1.3 Regularity properties of trajectories
The theorem of Kolmogorov provides the existence of the distribution of a random function with given finite-dimensional distributions. However, it makes no statement about the properties of the paths of X. This is understandable, since in probability theory all random objects are defined only almost surely (a.s.), i.e., up to a set A ⊂ Ω with P(A) = 0.

Example 1.3.1
Let (Ω, A, P) = ([0,1], B([0,1]), ν₁), where ν₁ is the Lebesgue measure on [0,1]. We define X = {X(t), t ∈ [0,1]} by X(t) ≡ 0, t ∈ [0,1], and Y = {Y(t), t ∈ [0,1]} by

  Y(t) = 1 if t = U, and Y(t) = 0 otherwise,

where U(ω) = ω, ω ∈ [0,1], is a U[0,1]-distributed random variable defined on (Ω, A, P). Since P(Y(t) = 0) = 1, t ∈ T, because of P(U = t) = 0, t ∈ T, it is clear that X =ᵈ Y. Nevertheless, X and Y have different path properties: X has continuous and Y has discontinuous trajectories, and P(X(t) = 0 ∀t ∈ T) = 1, whereas P(Y(t) = 0 ∀t ∈ T) = 0.

It may well be that the „set of exceptions" A (see above) is different for each X(t), t ∈ T. Therefore we require that all X(t), t ∈ T, are defined simultaneously on a subset Ω₀ ⊆ Ω with P(Ω₀) = 1. The so-defined random function X̃: Ω₀ × T → R is called a modification of X: Ω × T → R. X and X̃ differ on the set Ω∖Ω₀ of probability zero. Later, when stating that „the random function X possesses a property C", we mean that there exists a modification of X with this property C. Let us capture this in the following definitions.
Definition 1.3.1
The random functions X = {X(t), t ∈ T} and Y = {Y(t), t ∈ T}, defined on the same probability space (Ω, A, P) and associated with (S_t, B_t), t ∈ T, have equivalent trajectories (or are called stochastically indistinguishable) if

  A = {ω ∈ Ω : X(ω, t) ≠ Y(ω, t) for some t ∈ T} ∈ A and P(A) = 0.

This notion means that X and Y have paths which coincide with probability one.
Definition 1.3.2
The random functions X = {X(t), t ∈ T} and Y = {Y(t), t ∈ T}, defined on the same probability space (Ω, A, P), are called (stochastically) equivalent if

  B_t = {ω ∈ Ω : X(ω, t) ≠ Y(ω, t)} ∈ A and P(B_t) = 0 for all t ∈ T.

We also say that X and Y are versions or modifications of one and the same random function. If the space (Ω, A, P) is complete (i.e., A ∈ A with P(A) = 0 implies, for all B ⊆ A, that B ∈ A and then P(B) = 0), then indistinguishable processes are stochastically equivalent; the converse is not always true (it holds for so-called separable processes, in particular if T is countable).

Exercise 1.3.1
Prove that the random functions X and Y in Example 1.3.1 are stochastically equivalent.

Definition 1.3.3
The random functions X = {X(t), t ∈ T} and Y = {Y(t), t ∈ T} (not necessarily defined on the same probability space) are called equivalent in distribution if P_X = P_Y on (S_T, B_T). Notation: X =ᵈ Y.

According to Theorem 1.1.2 it is sufficient for the equivalence in distribution of X and Y that they possess the same finite-dimensional distributions. It is clear that stochastic equivalence implies equivalence in distribution, but not the other way around.

Now let T and S be Banach spaces with norms |·|_T and |·|_S, respectively. The random function X = {X(t), t ∈ T} is defined on (Ω, A, P) with values in (S, B).

Definition 1.3.4
The random function X = {X(t), t ∈ T} is called
a) stochastically continuous on T if X(s) →_P X(t) as s → t, for arbitrary t ∈ T, i.e.,
   P(|X(s) − X(t)|_S > ε) → 0 as s → t, for all ε > 0;

b) L^p-continuous on T, p ≥ 1, if X(s) →_{L^p} X(t) as s → t, t ∈ T, i.e., E|X(s) − X(t)|^p_S → 0 as s → t. For p = 2 the specific notation „continuity in the square mean" (mean-square continuity) is used;

c) a.s. continuous on T if X(s) → X(t) a.s. as s → t, t ∈ T, i.e., P(X(s) → X(t), s → t) = 1, t ∈ T;

d) continuous if all trajectories of X are continuous functions.
In applications one is interested in cases c) and d), although the weakest form of continuity is the stochastic continuity:

  continuity of all paths ⟹ a.s. continuity ⟹ stochastic continuity ⟸ L^p-continuity.
Why are cases c) and d) important? Let us consider an example.

Example 1.3.2
Let T = [0,1] and let (Ω, A, P) be the canonical probability space with Ω = R^{[0,1]}, i.e., Ω = ∏_{t∈[0,1]} R. Let X = {X(t), t ∈ [0,1]} be a stochastic process on (Ω, A, P). Not all events are elements of A, e.g.

  A = {ω ∈ Ω : X(ω, t) = 0 for all t ∈ [0,1]} = ⋂_{t∈[0,1]} {X(ω, t) = 0},

since this is an intersection of uncountably many measurable events from A. If, however, X is continuous, then all of its paths are continuous functions and one can write A = ⋂_{t∈D} {X(ω, t) = 0}, where D is a dense countable subset of [0,1], e.g., D = Q ∩ [0,1]. Then it holds that A ∈ A.

However, in many applications (e.g. in financial mathematics) it is not realistic to consider stochastic processes with continuous paths as models for real phenomena. Therefore a bigger class of possible trajectories of X is allowed: the so-called càdlàg class (càdlàg = continue à droite, limites à gauche (fr.), i.e., right-continuous with left limits).

Definition 1.3.5
A stochastic process X = {X(t), t ∈ R} is called càdlàg if all of its trajectories are right-continuous functions which have left-side limits.

Now we would like to consider the properties of the notions of continuity introduced above in more detail. One can note e.g. that stochastic continuity is a property of the two-dimensional distributions P_{s,t} of X. This is shown by the following lemma.

Lemma 1.3.1
Let X = {X(t), t ∈ T} be a random function associated with (S, B), where S and T are Banach spaces. The following statements are equivalent:
a) X(s) →_P Y as s → t₀;

b) P_{s,t} →_d P_{(Y,Y)} as (s,t) → (t₀,t₀),

where t₀ ∈ T and Y is an S-valued A|B-measurable random element. For the stochastic continuity of X one chooses t₀ ∈ T arbitrarily and Y = X(t₀).

Proof a) ⟹ b): X(s) →_P Y as s → t₀ implies (X(s), X(t)) →_P (Y, Y) as (s,t) → (t₀,t₀), since

  P(|(X(s), X(t)) − (Y, Y)| > ε) = P((|X(s) − Y|²_S + |X(t) − Y|²_S)^{1/2} > ε)
    ≤ P(|X(s) − Y|_S > ε/2) + P(|X(t) − Y|_S > ε/2) → 0, (s,t) → (t₀,t₀).

This yields P_{s,t} →_d P_{(Y,Y)}, since convergence in probability is stronger than convergence in distribution.

b) ⟹ a): For arbitrary ε > 0 we consider a continuous function g_ε: R₊ → [0,1] with g_ε(0) = 0 and g_ε(x) = 1 for x ∉ B_ε(0). It holds for all s, t ∈ T that

  E g_ε(|X(s) − X(t)|_S) = E[g_ε(|X(s) − X(t)|_S) 1(|X(s) − X(t)|_S > ε)] + E[g_ε(|X(s) − X(t)|_S) 1(|X(s) − X(t)|_S ≤ ε)],

hence

  P(|X(s) − X(t)|_S > ε) ≤ E g_ε(|X(s) − X(t)|_S) = ∫_S ∫_S g_ε(|x − y|_S) P_{s,t}(dx, dy)
    → ∫_S ∫_S g_ε(|x − y|_S) P_{(Y,Y)}(dx, dy) = 0, (s,t) → (t₀,t₀),

since P_{(Y,Y)} is concentrated on {(x,y) ∈ S² : x = y} and g_ε(0) = 0. Thus {X(s)}, s → t₀, is a fundamental (Cauchy) sequence in probability, and therefore X(s) →_P Y as s → t₀.
It may happen that X is stochastically continuous although all paths of X have jumps, i.e., X does not possess any a.s. continuous modification. The intuitive explanation is that such an X has a jump at a fixed t ∈ T with probability zero; the jumps of the paths of X therefore occur at different locations.

Exercise 1.3.2
Prove that the Poisson process is stochastically continuous, although it does not possess an a.s. continuous modification.

Exercise 1.3.3
Let T be compact. Prove that if X is stochastically continuous on T, then it is also uniformly stochastically continuous, i.e., for all ε, η > 0 there exists δ > 0 such that for all s, t ∈ T with |s − t|_T < δ it holds that P(|X(s) − X(t)|_S > ε) < η.

Now let S = R, EX²(t) < ∞, t ∈ T, EX(t) = 0, t ∈ T. Let C(s,t) = E[X(s)X(t)] be the covariance function of X.

Lemma 1.3.2
For all t₀ ∈ T and a random variable Y with EY² < ∞ the following assertions are equivalent:
a) X(s) →_{L²} Y as s → t₀;

b) C(s,t) → EY² as (s,t) → (t₀,t₀).

Proof a) ⟹ b): The assertion follows from the Cauchy-Schwarz inequality:

  |C(s,t) − EY²| = |E[X(s)X(t)] − EY²| = |E[(X(s) − Y + Y)(X(t) − Y + Y)] − EY²|
    ≤ E|(X(s) − Y)(X(t) − Y)| + E|(X(s) − Y)Y| + E|(X(t) − Y)Y|
    ≤ √(E(X(s) − Y)² · E(X(t) − Y)²) + √(EY² · E(X(s) − Y)²) + √(EY² · E(X(t) − Y)²)
    → 0, (s,t) → (t₀,t₀),

by assumption a), since E(X(s) − Y)² = ‖X(s) − Y‖²_{L²} → 0 as s → t₀.

b) ⟹ a):

  E(X(s) − X(t))² = EX²(s) − 2E[X(s)X(t)] + EX²(t) = C(s,s) + C(t,t) − 2C(s,t) → 2EY² − 2EY² = 0

as (s,t) → (t₀,t₀). Thus {X(s)}, s → t₀, is a fundamental sequence in the L²-sense, and we get X(s) →_{L²} Y as s → t₀.
A random function X which is continuous in the mean-square sense may still have discontinuous trajectories. In most practically relevant cases, however, X has an a.s. continuous modification. This will be made precise later by a corresponding theorem.

Corollary 1.3.1
A random function X satisfying the conditions of Lemma 1.3.2 is continuous on T in the mean-square sense if and only if its covariance function C: T² → R is continuous on the diagonal diag(T²) = {(s,t) ∈ T² : s = t}, i.e., lim_{(s,t)→(t₀,t₀)} C(s,t) = C(t₀,t₀) = Var X(t₀) for all t₀ ∈ T.
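As a worked illustration (added here, not part of the original text), the Wiener process with C(s,t) = min{s,t} satisfies the criterion of Corollary 1.3.1, since C is continuous everywhere; directly,

```latex
\mathbb{E}\,(X(s)-X(t))^{2} \;=\; C(s,s) + C(t,t) - 2C(s,t)
  \;=\; s + t - 2\min\{s,t\} \;=\; |s-t| \;\xrightarrow[s \to t]{}\; 0,
```

so the Wiener process is L²-continuous on [0, ∞), even though (as will be seen) its paths are nowhere differentiable.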
Proof Choose Y = X(t₀) in Lemma 1.3.2.
Remark 1.3.1
If X is not centered, then the continuity of µ together with the continuity of C on diag(T²) is required to ensure the L²-continuity of X on T.

Exercise 1.3.4
Give an example of a stochastic process with a.s. discontinuous trajectories which is L²-continuous.

Now we consider the property of (a.s.) continuity in more detail. As mentioned before, we can merely talk about a continuous modification or version of a process. The possibility of possessing such a version also depends on the properties of the two-dimensional distributions of the process. This is shown by the following theorem (originally proven by A. N. Kolmogorov).

Theorem 1.3.1
Let X = {X(t), t ∈ [a,b]}, −∞ < a < b ≤ ∞, be a real-valued stochastic process. X has a continuous version if there exist constants α, c, δ > 0 such that

  E|X(t + h) − X(t)|^α ≤ c|h|^{1+δ}, t ∈ [a,b], (1.3.1)

for sufficiently small |h|.

Proof See, e.g., [7], Theorem 2.23.
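For the Wiener process the criterion (1.3.1) is easy to verify (a standard check, added here for illustration): the increment X(t+h) − X(t) is N(0, |h|)-distributed, and for a centered Gaussian variable ξ with variance σ² one has E ξ⁴ = 3σ⁴, so

```latex
\mathbb{E}\,|X(t+h)-X(t)|^{4} \;=\; 3|h|^{2},
```

i.e., (1.3.1) holds with α = 4, c = 3, δ = 1, and the Wiener process has a continuous version.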
Now we turn to processes with càdlàg trajectories. Let (Ω, A, P) be a complete probability space.

Theorem 1.3.2
Let X = {X(t), t ≥ 0} be a real-valued stochastic process and D a countable dense subset of [0,∞). If

a) X is stochastically right-continuous, i.e., X(t + h) →_P X(t) as h ↓ 0, t ∈ [0,∞),
b) the trajectories of X at every t ∈ D have finite right- and left-hand limits, i.e., |lim_{h↓0} X(t ± h)| < ∞, t ∈ D, a.s.,

then X has a version with a.s. càdlàg paths.

Without proof.
Lemma 1.3.3
Let X = {X(t), t ≥ 0} and Y = {Y(t), t ≥ 0} be two versions of the same random function, both defined on the probability space (Ω, A, P), such that X and Y have a.s. right-continuous trajectories. Then X and Y are indistinguishable.

Proof Let Ω_X, Ω_Y be the „sets of exceptions" on which the trajectories of X and Y, respectively, are not right-continuous. It holds that P(Ω_X) = P(Ω_Y) = 0. Consider A_t = {ω ∈ Ω : X(ω, t) ≠ Y(ω, t)}, t ∈ [0,∞), and A = ⋃_{t∈Q₊} A_t, where Q₊ = Q ∩ [0,∞). Since X and Y are stochastically equivalent, it holds that P(A) = 0 and therefore

  P(Ā) ≤ P(A) + P(Ω_X) + P(Ω_Y) = 0,

where Ā = A ∪ Ω_X ∪ Ω_Y. Therefore X(ω, t) = Y(ω, t) holds for t ∈ Q₊ and ω ∈ Ω∖Ā. Now we prove this for all t ≥ 0. For arbitrary t ≥ 0 there exists a sequence {t_n} ⊂ Q₊ such that t_n ↓ t. Since X(ω, t_n) = Y(ω, t_n) for all n ∈ N and ω ∈ Ω∖Ā, it holds by right-continuity that X(ω, t) = lim_{n→∞} X(ω, t_n) = lim_{n→∞} Y(ω, t_n) = Y(ω, t) for t ≥ 0 and ω ∈ Ω∖Ā. Therefore X and Y are indistinguishable.
Corollary 1.3.2
If càdlàg processes X = {X(t), t ≥ 0} and Y = {Y(t), t ≥ 0} are versions of the same random function, then they are indistinguishable.
1.4 Differentiability of trajectories

Let T be a linear normed space.

Definition 1.4.1
A real-valued random function X = {X(t), t ∈ T} is differentiable on T in direction h ∈ T stochastically, in the L^p-sense (p ≥ 1), or a.s., if the limit

  X′_h(t) = lim_{l→0} (X(t + hl) − X(t))/l, t ∈ T,

exists in the corresponding sense, namely stochastically, in the L^p-sense, or a.s.

Lemmas 1.3.1 and 1.3.2 show that stochastic differentiability is a property determined by the three-dimensional distributions of X (because the joint distribution of (X(t + hl) − X(t))/l and (X(t + hl′) − X(t))/l′ should converge weakly), whereas differentiability in the mean-square sense is determined by the smoothness of the covariance function C(s,t).

Exercise 1.4.1
Show that

1. the Wiener process is not stochastically differentiable on [0,∞);
2. the Poisson process is stochastically differentiable on [0,∞), however not in the L^p-mean, p ≥ 1.
Lemma 1.4.1
A centered random function X = {X(t), t ∈ T} (i.e., EX(t) = 0, t ∈ T) with EX²(t) < ∞, t ∈ T, is L²-differentiable in t ∈ T in direction h ∈ T if its covariance function C is twice differentiable in (t,t) in direction h, i.e., if there exists

  C″_{hh}(t,t) = ∂²C(s,t)/(∂s_h ∂t_h) |_{s=t}.

X′_h(t) is L²-continuous in t ∈ T if C″_{hh}(s,t) = ∂²C(s,t)/(∂s_h ∂t_h) is continuous in s = t. Moreover, C″_{hh}(s,t) is the covariance function of X′_h = {X′_h(t), t ∈ T}.

Proof According to Lemma 1.3.2 it is enough to show that

  I = lim_{l,l′→0} E[ (X(t + lh) − X(t))/l · (X(s + l′h) − X(s))/l′ ]

exists for s = t. Indeed we get

  I = lim_{l,l′→0} (1/(l l′)) [ C(t + lh, s + l′h) − C(t + lh, s) − C(t, s + l′h) + C(t, s) ]
    = lim_{l,l′→0} (1/l) [ (C(t + lh, s + l′h) − C(t + lh, s))/l′ − (C(t, s + l′h) − C(t, s))/l′ ]
    → C″_{hh}(s,t) as l, l′ → 0.

All other statements of the lemma follow from this relation.
Remark 1.4.1
The properties of L²-differentiability and a.s. differentiability of random functions are unrelated in the following sense: there are stochastic processes that are L²-differentiable although their paths are a.s. discontinuous, and, vice versa, processes with a.s. differentiable paths are not always L²-differentiable, since e.g. the first derivative of their covariance function need not be continuous.

Exercise 1.4.2
Give appropriate examples!
1.5 Moments and covariance

Let X = {X(t), t ∈ T} be a real-valued random function, and let T be an arbitrary index space.

Definition 1.5.1
The mixed moment µ_{j_1,…,j_n}(t_1,…,t_n) of X of order (j_1,…,j_n) ∈ N^n, t_1,…,t_n ∈ T, is given by µ_{j_1,…,j_n}(t_1,…,t_n) = E[X^{j_1}(t_1) · … · X^{j_n}(t_n)], where it is required that the expected value exists and is finite. For this it is sufficient to assume that E|X(t)|^j < ∞ for all t ∈ T and j = j_1 + … + j_n.

Important special cases:
1. µ(t) = µ_1(t) = EX(t), t ∈ T: the mean value function of X.

2. µ_{1,1}(s,t) = E[X(s)X(t)] = C(s,t): the (non-centered) covariance function of X. The centered covariance function is K(s,t) = cov(X(s), X(t)) = µ_{1,1}(s,t) − µ(s)µ(t), s, t ∈ T.

Exercise 1.5.1
Show that the centered covariance function of a real-valued random function X

1. is symmetric, i.e., K(s,t) = K(t,s), s, t ∈ T;
2. is positive semidefinite, i.e., for all n ∈ N, t_1,…,t_n ∈ T, z_1,…,z_n ∈ R it holds that

  Σ_{i,j=1}^{n} K(t_i, t_j) z_i z_j ≥ 0;

3. satisfies K(t,t) = Var X(t), t ∈ T.

Property 2 also holds for the non-centered covariance function C(s,t).

The mean value function µ(t) describes a (non-random) trend. If µ(t) is known, the random function X can be centered by passing to the random function Y = {Y(t), t ∈ T} with Y(t) = X(t) − µ(t), t ∈ T.

The covariance function K(s,t) (respectively C(s,t)) contains information about the dependence structure of X. Sometimes the correlation function

  R(s,t) = K(s,t)/√(K(s,s) K(t,t)), for all s, t ∈ T with K(s,s) = Var X(s) > 0, K(t,t) = Var X(t) > 0,

is used instead of K or C. Because of the Cauchy-Schwarz inequality it holds that |R(s,t)| ≤ 1, s, t ∈ T. The set of all mixed moments in general does not (uniquely) determine the distribution of a random function.

Exercise 1.5.2
Give an example of two different random functions X = {X(t), t ∈ T} and Y = {Y(t), t ∈ T} for which EX(t) = EY(t), t ∈ T, and E[X(s)X(t)] = E[Y(s)Y(t)], s, t ∈ T.

Exercise 1.5.3
Let µ: T → R be a measurable function and K: T × T → R a positive semidefinite symmetric function. Prove that there exists a random function X = {X(t), t ∈ T} with EX(t) = µ(t) and cov(X(s), X(t)) = K(s,t), s, t ∈ T.

Now let X = {X(t), t ∈ T} be a real-valued random function with E|X(t)|^k < ∞, t ∈ T, for some
k ∈ N.

Definition 1.5.2
The mean increment of order k of X is given by γ_k(s,t) = E(X(s) − X(t))^k, s, t ∈ T.

Special attention is paid to the function γ(s,t) = ½ γ_2(s,t) = ½ E(X(s) − X(t))², s, t ∈ T, which is called the variogram of X. In geostatistics the variogram is often used instead of the covariance function. Frequently the condition EX²(t) < ∞, t ∈ T, is discarded; instead one assumes that γ(s,t) < ∞ for all s, t ∈ T.

Exercise 1.5.4
Prove that there exist random functions without finite second moments for which γ(s,t) < ∞, s, t ∈ T.

Exercise 1.5.5
Show that for a random function X = {X(t), t ∈ T} with mean value function µ and covariance function K it holds that:

  γ(s,t) = (K(s,s) + K(t,t))/2 − K(s,t) + ½ (µ(s) − µ(t))², s, t ∈ T.
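The identity in Exercise 1.5.5 can be checked numerically in a simple case (an illustration added here, not part of the notes): for centered Gaussian white noise with K(s,t) = σ²·1{s = t} and µ ≡ 0, the formula predicts γ(s,t) = σ² for s ≠ t.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 4.0

# 100_000 independent samples of (X(s), X(t)) for fixed indices s != t:
# for Gaussian white noise these are independent N(0, sigma2) variables.
xs = rng.normal(0.0, np.sqrt(sigma2), 100_000)
xt = rng.normal(0.0, np.sqrt(sigma2), 100_000)

# Empirical variogram gamma(s, t) = E (X(s) - X(t))^2 / 2.
gamma_emp = 0.5 * np.mean((xs - xt) ** 2)

# Formula of Exercise 1.5.5 with K(s,s) = K(t,t) = sigma2, K(s,t) = 0, mu = 0:
gamma_formula = (sigma2 + sigma2) / 2 - 0.0 + 0.0
```

With 10⁵ samples the empirical value agrees with the formula up to Monte Carlo error.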
If the random function X is complex-valued, i.e., X: Ω × T → C, with E|X(t)|² < ∞, t ∈ T, then the covariance function of X is introduced as K(s,t) = E[(X(s) − EX(s)) (X(t) − EX(t))*], s, t ∈ T, where z* denotes the complex conjugate of z ∈ C. Then it holds that K(s,t) = K(t,s)*, s, t ∈ T, and K is positive semidefinite, i.e., for all n ∈ N, t_1,…,t_n ∈ T, z_1,…,z_n ∈ C it holds that Σ_{i,j=1}^{n} K(t_i, t_j) z_i z_j* ≥ 0.
1.6 Stationarity and independence

Let T be a subset of a linear vector space over R with operations + and −.

Definition 1.6.1
The random function X = {X(t), t ∈ T} is called stationary (strict-sense stationary) if for all n ∈ N, h, t_1,…,t_n ∈ T with t_1 + h,…,t_n + h ∈ T it holds that

  P_{(X(t_1),…,X(t_n))} = P_{(X(t_1+h),…,X(t_n+h))},

i.e., all finite-dimensional distributions of X are invariant with respect to translations in T.

Definition 1.6.2
A (complex-valued) random function X = {X(t), t ∈ T} is called second-order stationary (or wide-sense stationary) if E|X(t)|² < ∞, t ∈ T, µ(t) = EX(t) ≡ µ, t ∈ T, and K(s,t) = cov(X(s), X(t)) = K(s + h, t + h) for all h, s, t ∈ T with s + h, t + h ∈ T.

If X is second-order stationary, it is convenient to introduce the function K(t) = K(0,t), t ∈ T, whereby 0 ∈ T.
Strict-sense stationarity and wide-sense stationarity do not imply each other. However, it is clear that if a complex-valued random function is strict-sense stationary and possesses finite second-order moments, then it is also second-order stationary.

Definition 1.6.3
A real-valued random function X = {X(t), t ∈ T} is intrinsically second-order stationary if γ_k(s,t), s, t ∈ T, exists for k ≤ 2 and for all s, t, h ∈ T with s + h, t + h ∈ T it holds that γ_1(s,t) = 0, γ_2(s,t) = γ_2(s + h, t + h).

For real-valued random functions, intrinsic second-order stationarity is more general than second-order stationarity, since the existence of E|X(t)|², t ∈ T, is not required.

The analogue of the stationarity of increments of X also exists in the strict sense.

Definition 1.6.4
Let X = {X(t), t ∈ T} be a real-valued stochastic process, T ⊆ R. It is said that X

1. possesses stationary increments if for all n ∈ N, h, t_0, t_1,…,t_n ∈ T with t_0 < t_1 < … < t_n and t_i + h ∈ T, i = 0,…,n, the distribution of

  (X(t_1 + h) − X(t_0 + h), …, X(t_n + h) − X(t_{n−1} + h))

does not depend on h;

2. possesses independent increments if for all n ∈ N, t_0, t_1,…,t_n ∈ T with t_0 < t_1 < … < t_n the random variables X(t_0), X(t_1) − X(t_0), …, X(t_n) − X(t_{n−1}) are independent.
Let S1,B1 and S2,B2 be measurable spaces. In general it is said that two randomelements X Ω S1 and X Ω S2 are independent on the same probability space Ω,A,Pif PX > A1, Y > A2 PX > A1PY > A2 for all A1 > B1, A2 > B2.This definition can be applied to the independence of random functions X and Y with phase
space ST ,BT , since they can be considered as random elements with S1 S2 ST , B1 B2 BT (cf. Lemma 1.1.1). The same holds for the independence of a random element (or a randomfunction) X and of a sub-σ-algebra G > A: this is the case if PX > A 9G PX > APG,for all A > B1, G > G (or A > BT , G > G).
1.7 Processes with independent increments
In this section we concentrate on the properties and existence of processes with independent increments.
Let $\{\varphi_{s,t},\ s,t\ge 0\}$ be a family of characteristic functions of probability measures $Q_{s,t}$, $s,t\ge 0$, on $\mathcal{B}(\mathbb{R})$, i.e., for $z\in\mathbb{R}$, $s,t\ge 0$ it holds that $\varphi_{s,t}(z)=\int_{\mathbb{R}}e^{izx}\,Q_{s,t}(dx)$.
Theorem 1.7.1
There exists a stochastic process $X=\{X(t),\,t\ge 0\}$ with independent increments with the property that for all $s,t\ge 0$ the characteristic function of $X(t)-X(s)$ is equal to $\varphi_{s,t}$ if and only if
$$\varphi_{s,t}=\varphi_{s,u}\,\varphi_{u,t} \tag{1.7.1}$$
for all $0\le s<u<t<\infty$. Thereby the distribution of $X(0)$ can be chosen arbitrarily.
Proof The necessity of condition (1.7.1) is clear, since for all $0\le s<u<t$ it holds that
$$X(t)-X(s)=\underbrace{X(t)-X(u)}_{Y_1}+\underbrace{X(u)-X(s)}_{Y_2},$$
and $X(t)-X(u)$ and $X(u)-X(s)$ are independent. Then it holds that $\varphi_{s,t}=\varphi_{Y_1+Y_2}=\varphi_{Y_1}\varphi_{Y_2}=\varphi_{u,t}\,\varphi_{s,u}$.
Now we prove the sufficiency. If the existence of a process $X$ with independent increments and property $\varphi_{X(t)-X(s)}=\varphi_{s,t}$ on a probability space $(\Omega,\mathcal{A},\mathbf{P})$ had already been proven, one could define the characteristic functions of all of its finite-dimensional distributions with the help of $\varphi_{s,t}$ as follows. Let $n\in\mathbb{N}$, $0=t_0<t_1<\dots<t_n<\infty$ and $Y=(X(t_0),\,X(t_1)-X(t_0),\dots,X(t_n)-X(t_{n-1}))^\top$. The independence of increments results in
$$\varphi_Y(\underbrace{z_0,z_1,\dots,z_n}_{z})=\mathbb{E}\,e^{i\langle z,Y\rangle}=\varphi_{X(t_0)}(z_0)\,\varphi_{t_0,t_1}(z_1)\cdots\varphi_{t_{n-1},t_n}(z_n),\qquad z\in\mathbb{R}^{n+1},$$
where the distribution of $X(t_0)$ is an arbitrary probability measure $Q_0$ on $\mathcal{B}(\mathbb{R})$. For $X_{t_0,\dots,t_n}=(X(t_0),X(t_1),\dots,X(t_n))^\top$, however, it holds that $X_{t_0,\dots,t_n}=AY$, where
$$A=\begin{pmatrix}1&0&0&\dots&0\\ 1&1&0&\dots&0\\ 1&1&1&\dots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&1&1&\dots&1\end{pmatrix}.$$
Then $\varphi_{X_{t_0,\dots,t_n}}(z)=\varphi_{AY}(z)=\mathbb{E}\,e^{i\langle z,AY\rangle}=\mathbb{E}\,e^{i\langle A^\top z,\,Y\rangle}=\varphi_Y(A^\top z)$ holds. Therefore the distribution of $X_{t_0,\dots,t_n}$ possesses the characteristic function $\varphi_{X_{t_0,\dots,t_n}}(z)=\varphi_{Q_0}(l_0)\,\varphi_{t_0,t_1}(l_1)\cdots\varphi_{t_{n-1},t_n}(l_n)$, where $l=(l_0,l_1,\dots,l_n)^\top=A^\top z$, thus
$$l_0=z_0+\dots+z_n,\qquad l_1=z_1+\dots+z_n,\qquad\dots,\qquad l_n=z_n.$$
Thereby $\varphi_{X(t_0)}=\varphi_{Q_0}$ and $\varphi_{(X(t_1),\dots,X(t_n))}(z_1,\dots,z_n)=\varphi_{(X(t_0),\dots,X(t_n))}(0,z_1,\dots,z_n)$ hold for all $z_i\in\mathbb{R}$.
Now we prove the existence of such a process $X$.
For that we construct the family of characteristic functions
$$\varphi_{t_0},\quad \varphi_{t_0,t_1,\dots,t_n},\quad \varphi_{t_1,\dots,t_n},\qquad 0=t_0<t_1<\dots<t_n<\infty,\ n\in\mathbb{N},$$
from $\varphi_{Q_0}$ and $\varphi_{s,t}$, $0\le s<t$, as above, thus
$$\varphi_{t_0}=\varphi_{Q_0},\qquad \varphi_{t_1,\dots,t_n}(z_1,\dots,z_n)=\varphi_{t_0,t_1,\dots,t_n}(0,z_1,\dots,z_n),\qquad z_i\in\mathbb{R},$$
$$\varphi_{t_0,\dots,t_n}(z)=\varphi_{t_0}(z_0+\dots+z_n)\,\varphi_{t_0,t_1}(z_1+\dots+z_n)\cdots\varphi_{t_{n-1},t_n}(z_n).$$
Now we have to check whether the corresponding probability measures of these characteristic functions fulfill the conditions of Theorem 1.1.2. We do that in an equivalent form, since by Exercise 1.8.1 the conditions of symmetry and consistency in Theorem 1.1.2 are equivalent to:
a) $\varphi_{t_{i_0},\dots,t_{i_n}}(z_{i_0},\dots,z_{i_n})=\varphi_{t_0,\dots,t_n}(z_0,\dots,z_n)$ for an arbitrary permutation $(0,1,\dots,n)\to(i_0,i_1,\dots,i_n)$,
b) $\varphi_{t_0,\dots,t_{m-1},t_{m+1},\dots,t_n}(z_0,\dots,z_{m-1},z_{m+1},\dots,z_n)=\varphi_{t_0,\dots,t_n}(z_0,\dots,0,\dots,z_n)$ for all $z_0,\dots,z_n\in\mathbb{R}$, $m\in\{1,\dots,n\}$.
Condition a) is obvious. Condition b) holds since, by (1.7.1),
$$\varphi_{t_{m-1},t_m}(z_{m+1}+\dots+z_n)\,\varphi_{t_m,t_{m+1}}(z_{m+1}+\dots+z_n)=\varphi_{t_{m-1},t_{m+1}}(z_{m+1}+\dots+z_n)$$
for all $m\in\{1,\dots,n\}$. Thus the existence of $X$ is proven.
Example 1.7.1
1. If $T=\mathbb{N}_0=\mathbb{N}\cup\{0\}$, then $X=\{X(t),\,t\in\mathbb{N}_0\}$ has independent increments if and only if $X(n)\overset{d}{=}\sum_{i=0}^{n}Y_i$, where the $Y_i$ are independent random variables and $Y_n\overset{d}{=}X(n)-X(n-1)$, $n\in\mathbb{N}$. Such a process $X$ is called a random walk. It may also be defined for $Y_i$ with values in $\mathbb{R}^m$.
2. The Poisson process with intensity $\lambda$ has independent increments (we will show that later).
3. The Wiener process possesses independent increments.
Exercise 1.7.1
Prove that!
Exercise 1.7.2
Let $X=\{X(t),\,t\ge 0\}$ be a process with independent increments and $g:[0,\infty)\to\mathbb{R}$ an arbitrary (deterministic) function. Show that the process $Y=\{Y(t),\,t\ge 0\}$ with $Y(t)=X(t)+g(t)$, $t\ge 0$, also possesses independent increments.
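Example 1.7.1(1) can be made concrete in a few lines: a process on $\mathbb{N}_0$ built from cumulative sums of independent summands automatically has independent increments. The sketch below (an illustration with assumed standard normal summands, not part of the notes) builds such a random walk and checks empirically that two increments over disjoint index ranges are uncorrelated.

```python
import numpy as np

# A random walk X(n) = Y_0 + ... + Y_n with independent N(0,1) summands
# has independent increments; here we check that the increments
# X(5)-X(2) and X(9)-X(5) over disjoint index ranges are uncorrelated.
rng = np.random.default_rng(1)

n_paths, horizon = 100_000, 10
Y = rng.standard_normal((n_paths, horizon + 1))   # Y_0, ..., Y_10 per path
X = np.cumsum(Y, axis=1)                          # X(n) = sum_{i<=n} Y_i

inc1 = X[:, 5] - X[:, 2]   # increment over (2, 5]
inc2 = X[:, 9] - X[:, 5]   # increment over (5, 9]
corr = np.corrcoef(inc1, inc2)[0, 1]
print(round(corr, 3))      # close to 0: disjoint increments are uncorrelated
```

For Gaussian summands, zero correlation is equivalent to independence of the increments, so this is a meaningful (if partial) numerical check.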
1.8 Additional exercises
Exercise 1.8.1
Prove the following assertion: the family of probability measures $\mathbf{P}_{t_1,\dots,t_n}$ on $(\mathbb{R}^n,\mathcal{B}(\mathbb{R}^n))$, $n\ge 1$, $t=(t_1,\dots,t_n)\in T^n$, fulfills the conditions of the theorem of Kolmogorov if and only if for $n\ge 2$ and for all $s=(s_1,\dots,s_n)\in\mathbb{R}^n$ the following conditions are fulfilled:
a) $\varphi_{\mathbf{P}_{t_1,\dots,t_n}}(s_1,\dots,s_n)=\varphi_{\mathbf{P}_{t_{\pi(1)},\dots,t_{\pi(n)}}}(s_{\pi(1)},\dots,s_{\pi(n)})$ for all $\pi\in S_n$.
b) $\varphi_{\mathbf{P}_{t_1,\dots,t_{n-1}}}(s_1,\dots,s_{n-1})=\varphi_{\mathbf{P}_{t_1,\dots,t_n}}(s_1,\dots,s_{n-1},0)$.
Remark: $\varphi$ denotes the characteristic function of the corresponding measure. $S_n$ denotes the group of all permutations $\pi:\{1,\dots,n\}\to\{1,\dots,n\}$.
Exercise 1.8.2
Show the existence of a random function whose finite-dimensional distributions are multivariate normal and explicitly give the measurable spaces $(E_{t_1,\dots,t_n},\mathcal{E}_{t_1,\dots,t_n})$.
Exercise 1.8.3
Give an example of a family of probability measures $\mathbf{P}_{t_1,\dots,t_n}$ which does not fulfill the conditions of the theorem of Kolmogorov.
Exercise 1.8.4
Let $X=\{X(t),\,t\in T\}$ and $Y=\{Y(t),\,t\in T\}$ be two stochastic processes which are defined on the same complete probability space $(\Omega,\mathcal{F},\mathbf{P})$ and which take values in the measurable space $(S,\mathcal{B})$.
a) Prove that: $X$ and $Y$ stochastically equivalent $\Rightarrow$ $\mathbf{P}_X=\mathbf{P}_Y$.
b) Give an example of two processes $X$ and $Y$ for which $\mathbf{P}_X=\mathbf{P}_Y$ holds, but $X$ and $Y$ are not stochastically equivalent.
c) Prove that: $X$ and $Y$ stochastically indistinguishable $\Rightarrow$ $X$ and $Y$ stochastically equivalent.
d) Prove, in the case of countable $T$: $X$ and $Y$ stochastically equivalent $\Rightarrow$ $X$ and $Y$ stochastically indistinguishable.
e) Give, in the case of uncountable $T$, an example of two processes $X$ and $Y$ which are stochastically equivalent but not stochastically indistinguishable.
Exercise 1.8.5
Let $W=\{W(t),\,t\ge 0\}$ be a Wiener process. Which of the following processes are Wiener processes as well?
a) $W_1=\{W_1(t)=-W(t),\,t\ge 0\}$,
b) $W_2=\{W_2(t)=\sqrt{t}\,W(1),\,t\ge 0\}$,
c) $W_3=\{W_3(t)=W(2t)-W(t),\,t\ge 0\}$.
Exercise 1.8.6
Given a stochastic process $X=\{X(t),\,t\in[0,1]\}$ which consists of independent and identically distributed random variables with density $f(x)$, $x\in\mathbb{R}$. Show that such a process cannot be continuous in $t\in[0,1]$.
Exercise 1.8.7
Give an example of a stochastic process $X=\{X(t),\,t\in T\}$ which is stochastically continuous on $T$, and prove why this is the case.
Exercise 1.8.8
In connection with the continuity of stochastic processes the so-called criterion of Kolmogorov
plays a central role (see also Theorem 1.3.1 in the lecture notes): let $X=\{X(t),\,t\in[a,b]\}$ be a real-valued stochastic process. If constants $\alpha,\varepsilon>0$ and $C=C(\alpha,\varepsilon)>0$ exist such that
$$\mathbb{E}\,|X(t+h)-X(t)|^{\alpha}\le C\,|h|^{1+\varepsilon} \tag{1.8.1}$$
for sufficiently small $h$, then the process $X$ possesses a continuous modification. Show that:
a) If you set $\varepsilon=0$ in condition (1.8.1), then in general the condition is not sufficient for the existence of a continuous modification. Hint: consider the Poisson process.
b) The Wiener process $W=\{W(t),\,t\in[0,\infty)\}$ possesses a continuous modification. Hint: consider the case $\alpha=4$.
Exercise 1.8.9
Show that the Wiener process $W$ is not stochastically differentiable at any point $t\in[0,\infty)$.
Exercise 1.8.10
Show that the covariance function $C(s,t)$ of a complex-valued stochastic process $X=\{X(t),\,t\in T\}$
a) is Hermitian, i.e., $C(s,t)=\overline{C(t,s)}$, $s,t\in T$,
b) fulfills the identity $C(t,t)=\mathrm{Var}\,X(t)$, $t\in T$,
c) is positive semidefinite, i.e., for all $n\in\mathbb{N}$, $t_1,\dots,t_n\in T$, $z_1,\dots,z_n\in\mathbb{C}$ it holds that
$$\sum_{i=1}^{n}\sum_{j=1}^{n}C(t_i,t_j)\,z_i\bar z_j\ge 0.$$
Exercise 1.8.11
Show that there exists a random function $X=\{X(t),\,t\in T\}$ which simultaneously fulfills the conditions:
• the second moment $\mathbb{E}X^2(t)$ does not exist,
• the variogram $\gamma(s,t)$ is finite for all $s,t\in T$.
Exercise 1.8.12
Give an example of a stochastic process $X=\{X(t),\,t\in T\}$ whose paths are $L^2$-differentiable but not almost surely differentiable, and prove why this is the case.
Exercise 1.8.13
Give an example of a stochastic process $X=\{X(t),\,t\in T\}$ whose paths are almost surely differentiable but not $L^1$-differentiable, and prove why this is the case.
Exercise 1.8.14
Prove that the Wiener process possesses independent increments.
Exercise 1.8.15
Prove: a (real-valued) stochastic process $X=\{X(t),\,t\ge 0\}$ with independent increments already has stationary increments if the distribution of the random variable $X(t+h)-X(h)$ does not depend on $h$.
2 Counting processes
In this chapter we consider several examples of stochastic processes which model the counting of events and thus possess piecewise constant paths.
Let $(\Omega,\mathcal{A},\mathbf{P})$ be a probability space and $(S_n)_{n\in\mathbb{N}}$ a non-decreasing sequence of a.s. non-negative random variables, i.e., $0\le S_1\le S_2\le\dots\le S_n\le\dots$
Definition 2.0.1
The stochastic process $N=\{N(t),\,t\ge 0\}$ is called a counting process if
$$N(t)=\sum_{n=1}^{\infty}\mathbf{1}(S_n\le t),$$
where $\mathbf{1}(A)$ is the indicator function of the event $A\in\mathcal{A}$.
$N(t)$ counts the events which occur at the times $S_n$ up to time $t$. For example, $S_n$ may be the time of occurrence of
1. the $n$-th elementary particle in a Geiger counter, or
2. a damage claim in property insurance, or
3. a data packet at a server within a computer network, etc.
A special case of counting processes are the so-called renewal processes.
2.1 Renewal processes
Definition 2.1.1
Let $(T_n)_{n\in\mathbb{N}}$ be a sequence of i.i.d. non-negative random variables with $\mathbf{P}(T_1>0)>0$. A counting process $N=\{N(t),\,t\ge 0\}$ with $N(0)=0$ a.s. and $S_n=\sum_{k=1}^{n}T_k$, $n\in\mathbb{N}$, is called a renewal process. Thereby $S_n$ is called the time of the $n$-th renewal, $n\in\mathbb{N}$.
The name "renewal process" is motivated by the following interpretation. The "interarrival times" $T_n$ are interpreted as the lifetimes of a technical spare part or mechanism within a system; thus $S_n$ is the time of the $n$-th breakdown of the system. The defective part is immediately replaced by a new one (comparable with the exchange of a light bulb). Thus $N(t)$ is the number of repairs (the so-called "renewals") of the system up to time $t$.
Remark 2.1.1
1. It is $N(t)=\infty$ if $S_n\le t$ for all $n\in\mathbb{N}$.
2. Often it is assumed that only $T_2,T_3,\dots$ are identically distributed, with $\mathbb{E}T_n<\infty$. The distribution of $T_1$ is freely selectable. Such a process $N=\{N(t),\,t\ge 0\}$ is called a delayed renewal process (with delay $T_1$).
3. Sometimes the requirement $T_n\ge 0$ is omitted.
Fig. 2.1: Construction and trajectories of a renewal process
4. It is clear that $(S_n)_{n\in\mathbb{N}_0}$ with $S_0=0$ a.s. and $S_n=\sum_{k=1}^{n}T_k$, $n\in\mathbb{N}$, is a random walk.
5. If one requires that the $n$-th exchange of a defective part in the system takes a time $T_n'$, then $\tilde T_n=T_n+T_n'$, $n\in\mathbb{N}$, defines a different renewal process. Its stochastic structure does not differ from that of the process given in Definition 2.1.1.
In the following we assume that $\mu=\mathbb{E}T_n\in(0,\infty)$, $n\in\mathbb{N}$.
Theorem 2.1.1 (Individual ergodic theorem):
Let $N=\{N(t),\,t\ge 0\}$ be a renewal process. Then it holds that
$$\lim_{t\to\infty}\frac{N(t)}{t}=\frac{1}{\mu}\qquad\text{a.s.}$$
Proof For all $t\ge 0$ and $n\in\mathbb{N}$ it holds that $\{N(t)=n\}=\{S_n\le t<S_{n+1}\}$, therefore $S_{N(t)}\le t<S_{N(t)+1}$ and
$$\frac{S_{N(t)}}{N(t)}\le\frac{t}{N(t)}<\frac{S_{N(t)+1}}{N(t)+1}\cdot\frac{N(t)+1}{N(t)}.$$
If we can show that $S_{N(t)}/N(t)\xrightarrow{\text{a.s.}}\mu$ and $N(t)\xrightarrow{\text{a.s.}}\infty$ as $t\to\infty$, then $t/N(t)\xrightarrow{\text{a.s.}}\mu$ as $t\to\infty$, and therefore the assertion of the theorem holds.
According to the strong law of large numbers of Kolmogorov (cf. lecture notes "Wahrscheinlichkeitsrechnung" (WR), Theorem 7.4) it holds that $S_n/n\xrightarrow{\text{a.s.}}\mu$ as $n\to\infty$, thus $S_n\xrightarrow{\text{a.s.}}\infty$ and therefore $\mathbf{P}(N(t)<\infty)=1$, since
$$\mathbf{P}(N(t)=\infty)=\mathbf{P}(S_n\le t\ \forall n)=1-\underbrace{\mathbf{P}(\exists n\ \forall m\in\mathbb{N}_0:\ S_{n+m}>t)}_{=1,\ \text{if } S_n\xrightarrow{\text{a.s.}}\infty}=1-1=0.$$
Then $N(t)$, $t\ge 0$, is a real-valued random variable.
We show that $N(t)\xrightarrow{\text{a.s.}}\infty$ as $t\to\infty$. All trajectories of $N$ are monotonically non-decreasing in $t\ge 0$, thus $\lim_{t\to\infty}N(\omega,t)$ exists for all $\omega\in\Omega$. Moreover it holds that
$$\mathbf{P}\Big(\lim_{t\to\infty}N(t)<\infty\Big)=\lim_{n\to\infty}\mathbf{P}\Big(\lim_{t\to\infty}N(t)<n\Big)\overset{(*)}{=}\lim_{n\to\infty}\lim_{t\to\infty}\mathbf{P}(N(t)<n)=\lim_{n\to\infty}\lim_{t\to\infty}\mathbf{P}(S_n>t)$$
$$=\lim_{n\to\infty}\lim_{t\to\infty}\mathbf{P}\Big(\sum_{k=1}^{n}T_k>t\Big)\le\lim_{n\to\infty}\lim_{t\to\infty}\underbrace{\sum_{k=1}^{n}\mathbf{P}\Big(T_k>\frac{t}{n}\Big)}_{\xrightarrow[t\to\infty]{}0}=0.$$
The equality $(*)$ holds since
$$\Big\{\lim_{t\to\infty}N(t)<n\Big\}=\{\exists t_0\in\mathbb{Q}^+\ \forall t\ge t_0:\ N(t)<n\}=\bigcup_{t_0\in\mathbb{Q}^+}\ \bigcap_{t\in\mathbb{Q}^+,\,t\ge t_0}\{N(t)<n\}=\liminf_{t\in\mathbb{Q}^+,\ t\to\infty}\{N(t)<n\},$$
and then the continuity of the probability measure is used, where $\mathbb{Q}^+=\mathbb{Q}\cap\mathbb{R}_+=\{q\in\mathbb{Q}:\ q\ge 0\}$. Since for every $\omega\in\Omega$ it holds that $\lim_{n\to\infty}S_n/n=\lim_{t\to\infty}S_{N(t)}/N(t)$ (the range of a realization of $N$ is a subsequence of $\mathbb{N}$), it holds that $\lim_{t\to\infty}S_{N(t)}/N(t)=\mu$ a.s. $\square$
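Theorem 2.1.1 is easy to observe numerically: simulate one long trajectory of a renewal process and compare $N(t)/t$ with $1/\mu$. The snippet below (an illustration with assumed Gamma-distributed interarrival times, not part of the notes) does exactly this.

```python
import numpy as np

# Illustrate Theorem 2.1.1: N(t)/t -> 1/mu a.s. for a renewal process.
# Assumed interarrival times T_n ~ Gamma(shape=2, scale=1), so mu = E T_n = 2.
rng = np.random.default_rng(2)

mu = 2.0
T = rng.gamma(shape=2.0, scale=1.0, size=2_000_000)   # interarrival times
S = np.cumsum(T)                                      # renewal times S_n

t = 1_000_000.0
N_t = np.searchsorted(S, t, side="right")             # N(t) = #{n: S_n <= t}
print(N_t / t, 1 / mu)    # the ratio is close to 1/mu = 0.5
```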
Remark 2.1.2
One can generalize the ergodic theorem to the case of non-identically distributed $T_n$. Thereby we require that $\mu_n=\mathbb{E}T_n$ exist, that $(T_n-\mu_n)_{n\in\mathbb{N}}$ are uniformly integrable and that $\frac{1}{n}\sum_{k=1}^{n}\mu_k\xrightarrow[n\to\infty]{}\mu>0$. Then we can prove that $\frac{N(t)}{t}\xrightarrow[t\to\infty]{\mathbf{P}}\frac{1}{\mu}$ (cf. [2], page 276).
Theorem 2.1.2 (Central limit theorem):
If $\mu\in(0,\infty)$ and $\sigma^2=\mathrm{Var}\,T_1\in(0,\infty)$, it holds that
$$\frac{\mu^{3/2}}{\sigma\sqrt{t}}\Big(N(t)-\frac{t}{\mu}\Big)\xrightarrow[t\to\infty]{d}Y,$$
where $Y\sim\mathcal{N}(0,1)$.
Proof According to the central limit theorem for sums of i.i.d. random variables (cf. Theorem 7.5, WR) it holds that
$$\frac{S_n-n\mu}{\sqrt{n\sigma^2}}\xrightarrow[n\to\infty]{d}Y. \tag{2.1.1}$$
Let $\lfloor x\rfloor$ be the integer part of $x\in\mathbb{R}$. For $a=\sigma^2/\mu^3$ it holds that
$$\mathbf{P}\bigg(\frac{N(t)-\frac{t}{\mu}}{\sqrt{at}}\le x\bigg)=\mathbf{P}\Big(N(t)\le\Big\lfloor x\sqrt{at}+\frac{t}{\mu}\Big\rfloor\Big)=\mathbf{P}\big(S_{m(t)}>t\big),$$
where $m(t)=\lfloor x\sqrt{at}+t/\mu\rfloor+1$, $t\ge 0$, and $\lim_{t\to\infty}m(t)=\infty$. Therefore we get that
$$\bigg|\mathbf{P}\bigg(\frac{N(t)-\frac{t}{\mu}}{\sqrt{at}}\le x\bigg)-\Phi(x)\bigg|=\big|\mathbf{P}(S_{m(t)}>t)-\Phi(x)\big|=\bigg|\mathbf{P}\bigg(\frac{S_{m(t)}-\mu m(t)}{\sigma\sqrt{m(t)}}>\frac{t-\mu m(t)}{\sigma\sqrt{m(t)}}\bigg)-\Phi(x)\bigg|=:I_t(x)$$
for arbitrary $t\ge 0$ and $x\in\mathbb{R}$, where $\Phi$ is the distribution function of the $\mathcal{N}(0,1)$-distribution. For fixed $x\in\mathbb{R}$ we introduce
$$Z(t)=\frac{t-\mu m(t)}{\sigma\sqrt{m(t)}}+x,\qquad t\ge 0.$$
Then it holds that
$$I_t(x)=\bigg|\mathbf{P}\bigg(\frac{S_{m(t)}-\mu m(t)}{\sigma\sqrt{m(t)}}-Z(t)>-x\bigg)-\Phi(x)\bigg|.$$
If we can prove that $Z(t)\to 0$ as $t\to\infty$, then applying (2.1.1) and the theorem of Slutsky (Theorem 6.4.1, WR) would result in $\frac{S_{m(t)}-\mu m(t)}{\sigma\sqrt{m(t)}}-Z(t)\xrightarrow[t\to\infty]{d}Y\sim\mathcal{N}(0,1)$, since $Z(t)\to 0$ implies $Z(t)\xrightarrow{d}0$. Therefore we could write
$$I_t(x)\xrightarrow[t\to\infty]{}\big|\bar\Phi(-x)-\Phi(x)\big|=\big|\Phi(x)-\Phi(x)\big|=0,$$
where $\bar\Phi(x)=1-\Phi(x)$ is the tail function of the $\mathcal{N}(0,1)$-distribution, and the symmetry property $\bar\Phi(-x)=\Phi(x)$, $x\in\mathbb{R}$, of $\mathcal{N}(0,1)$ was used.
Now we show that $Z(t)\to 0$, i.e., $\frac{t-\mu m(t)}{\sigma\sqrt{m(t)}}\to -x$ as $t\to\infty$. It holds that $m(t)=x\sqrt{at}+t/\mu+\varepsilon(t)$, where $\varepsilon(t)\in[0,1)$. Then it holds that
$$\frac{t-\mu m(t)}{\sigma\sqrt{m(t)}}=\frac{t-\mu x\sqrt{at}-t-\mu\varepsilon(t)}{\sigma\sqrt{m(t)}}=-\frac{x\mu}{\sigma}\sqrt{\frac{at}{m(t)}}-\underbrace{\frac{\mu\varepsilon(t)}{\sigma\sqrt{m(t)}}}_{\xrightarrow[t\to\infty]{}0}\xrightarrow[t\to\infty]{}-\frac{x\mu}{\sigma}\sqrt{a\mu}=-x,$$
since $m(t)/t\to 1/\mu$ and, with $a=\sigma^2/\mu^3$, $\sqrt{a\mu}=\sigma/\mu$. $\square$
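The convergence in Theorem 2.1.2 can be checked by simulating many independent replications of $N(t)$ for a large $t$ and standardizing as in the theorem. The sketch below (with assumed Gamma(2,1) interarrival times, so $\mu=2$, $\sigma^2=2$ and $\sigma^2/\mu^3=1/4$; not part of the notes) verifies that the standardized count has approximately mean 0 and variance 1.

```python
import numpy as np

# Illustrate Theorem 2.1.2: (N(t) - t/mu) / sqrt(sigma^2 t / mu^3) is
# approximately N(0,1) for large t.  Assumed: T_n ~ Gamma(2,1),
# so mu = 2, sigma^2 = 2, sigma^2/mu^3 = 1/4.
rng = np.random.default_rng(3)

t, reps, n_max = 400.0, 20_000, 360          # S_360 >> t with high probability
T = rng.gamma(2.0, 1.0, size=(reps, n_max))
S = np.cumsum(T, axis=1)
N_t = (S <= t).sum(axis=1)                   # N(t) for each replication

Z = (N_t - t / 2) / np.sqrt(t / 4)           # standardized renewal count
print(Z.mean(), Z.std())   # approximately 0 and 1
```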
Remark 2.1.3
In Lindeberg form, the central limit theorem can also be proven for non-identically distributed $T_n$, cf. [2], pages 276–277.
Definition 2.1.2
The function $H(t)=\mathbb{E}N(t)$, $t\ge 0$, is called the renewal function of the process $N$ (or of the sequence $(S_n)_{n\in\mathbb{N}}$).
Let $F_T(x)=\mathbf{P}(T_1\le x)$, $x\in\mathbb{R}$, be the distribution function of $T_1$. For arbitrary distribution functions $F,G:\mathbb{R}\to[0,1]$ the convolution $F*G$ is defined as $(F*G)(x)=\int_{-\infty}^{x}F(x-y)\,dG(y)$. The $k$-fold convolution $F^{*k}$ of the distribution function $F$ with itself, $k\in\mathbb{N}_0$, is defined inductively:
$$F^{*0}(x)=\mathbf{1}(x\in[0,\infty)),\qquad F^{*1}(x)=F(x),\qquad F^{*(k+1)}(x)=\big(F^{*k}*F\big)(x),\qquad x\in\mathbb{R}.$$
Lemma 2.1.1
The renewal function $H$ of a renewal process $N$ is monotonically non-decreasing and right-continuous on $\mathbb{R}_+$. Moreover it holds that
$$H(t)=\sum_{n=1}^{\infty}\mathbf{P}(S_n\le t)=\sum_{n=1}^{\infty}F_T^{*n}(t),\qquad t\ge 0. \tag{2.1.2}$$
Proof The monotonicity and right-continuity of $H$ are consequences of the almost sure monotonicity and right-continuity of the trajectories of $N$. Now we prove (2.1.2):
$$H(t)=\mathbb{E}N(t)=\mathbb{E}\sum_{n=1}^{\infty}\mathbf{1}(S_n\le t)\overset{(*)}{=}\sum_{n=1}^{\infty}\mathbb{E}\,\mathbf{1}(S_n\le t)=\sum_{n=1}^{\infty}\mathbf{P}(S_n\le t)=\sum_{n=1}^{\infty}F_T^{*n}(t),$$
since $\mathbf{P}(S_n\le t)=\mathbf{P}(T_1+\dots+T_n\le t)=F_T^{*n}(t)$, $t\ge 0$. The equality $(*)$ holds for all partial sums on both sides, therefore in the limit as well. $\square$
Except for special cases, it is impossible to calculate the renewal function $H$ by formula (2.1.2) analytically. Therefore the Laplace transform of $H$ is often used in calculations. For a monotone (e.g. monotonically non-decreasing) right-continuous function $G:[0,\infty)\to\mathbb{R}$ the Laplace transform is defined as $l_G(s)=\int_0^{\infty}e^{-sx}\,dG(x)$, $s\ge 0$. Here the integral is to be understood as the Lebesgue–Stieltjes integral, thus as a Lebesgue integral with respect to the measure $\mu_G$ on $\mathcal{B}(\mathbb{R}_+)$ defined by $\mu_G((x,y])=G(y)-G(x)$, $0\le x<y<\infty$, if $G$ is monotonically non-decreasing.
Just as a reminder: the Laplace transform $l_X$ of a random variable $X\ge 0$ is defined by $l_X(s)=\int_0^{\infty}e^{-sx}\,dF_X(x)$, $s\ge 0$.
Lemma 2.1.2
For $s>0$ it holds that
$$l_H(s)=\frac{l_{T_1}(s)}{1-l_{T_1}(s)}.$$
Proof It holds that
$$l_H(s)=\int_0^{\infty}e^{-sx}\,dH(x)\overset{(2.1.2)}{=}\int_0^{\infty}e^{-sx}\,d\Big(\sum_{n=1}^{\infty}F_T^{*n}(x)\Big)=\sum_{n=1}^{\infty}\int_0^{\infty}e^{-sx}\,dF_T^{*n}(x)$$
$$=\sum_{n=1}^{\infty}l_{T_1+\dots+T_n}(s)=\sum_{n=1}^{\infty}\big(l_{T_1}(s)\big)^n=\frac{l_{T_1}(s)}{1-l_{T_1}(s)},$$
where for $s>0$ it holds that $l_{T_1}(s)<1$ and thus the geometric series $\sum_{n=1}^{\infty}(l_{T_1}(s))^n$ converges. $\square$
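Lemma 2.1.2 admits a direct Monte Carlo sanity check: since $dH$ is the renewal measure, $l_H(s)=\mathbb{E}\sum_{n\ge 1}e^{-sS_n}$, so averaging $\sum_n e^{-sS_n}$ over simulated trajectories should reproduce $l_{T_1}(s)/(1-l_{T_1}(s))$. The sketch below uses assumed uniform interarrival times $T_n\sim U[0,1]$, for which $l_T(s)=(1-e^{-s})/s$ (an illustration, not part of the notes).

```python
import numpy as np

# Monte Carlo check of Lemma 2.1.2: l_H(s) = E sum_n exp(-s * S_n).
# Assumed example: T_n ~ U[0,1], so l_T(s) = (1 - exp(-s)) / s.
rng = np.random.default_rng(4)

s, reps, n_terms = 1.0, 50_000, 60      # e^{-s S_60} is negligible here
T = rng.uniform(0.0, 1.0, size=(reps, n_terms))
S = np.cumsum(T, axis=1)                # renewal times S_1, ..., S_60

lH_mc = np.exp(-s * S).sum(axis=1).mean()    # estimate of l_H(s)
lT = (1 - np.exp(-s)) / s                    # exact Laplace transform of T_1
lH_formula = lT / (1 - lT)
print(lH_mc, lH_formula)   # both approximately 1.72
```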
Remark 2.1.4
If $N=\{N(t),\,t\ge 0\}$ is a delayed renewal process (with delay $T_1$), the statements of Lemmas 2.1.1–2.1.2 hold in the following form:
1. $$H(t)=\sum_{n=0}^{\infty}\big(F_{T_1}*F_{T_2}^{*n}\big)(t),\qquad t\ge 0,$$
where $F_{T_1}$ and $F_{T_2}$ are the distribution functions of $T_1$ and of $T_n$, $n\ge 2$, respectively.
2. $$l_H(s)=\frac{l_{T_1}(s)}{1-l_{T_2}(s)},\qquad s>0, \tag{2.1.3}$$
where $l_{T_1}$ and $l_{T_2}$ are the Laplace transforms of the distributions of $T_1$ and of $T_n$, $n\ge 2$.
For further considerations we need a theorem (of Wald) about the expected value of a sum (with a random number of terms) of independent random variables.
Definition 2.1.3
Let $\nu$ be an $\mathbb{N}$-valued random variable and $(X_n)_{n\in\mathbb{N}}$ a sequence of random variables defined on the same probability space. $\nu$ is called independent of the future if for all $n\in\mathbb{N}$ the event $\{\nu\le n\}$ does not depend on the $\sigma$-algebra $\sigma(X_k,\ k>n)$.
Theorem 2.1.3 (Wald's identity):
Let $(X_n)_{n\in\mathbb{N}}$ be a sequence of random variables with $\sup_n\mathbb{E}|X_n|<\infty$, $\mathbb{E}X_n=a$, $n\in\mathbb{N}$, and let $\nu$ be an $\mathbb{N}$-valued random variable which is independent of the future, with $\mathbb{E}\nu<\infty$. Then it holds that
$$\mathbb{E}\sum_{n=1}^{\nu}X_n=a\,\mathbb{E}\nu.$$
Proof Set $S_n=\sum_{k=1}^{n}X_k$, $n\in\mathbb{N}$. Since $\mathbb{E}\nu=\sum_{n=1}^{\infty}\mathbf{P}(\nu\ge n)$, the theorem follows from Lemma 2.1.3. $\square$
Lemma 2.1.3 (Kolmogorov–Prokhorov):
Let $\nu$ be an $\mathbb{N}$-valued random variable which is independent of the future, and let
$$\sum_{n=1}^{\infty}\mathbf{P}(\nu\ge n)\,\mathbb{E}|X_n|<\infty. \tag{2.1.4}$$
Then $\mathbb{E}S_\nu=\sum_{n=1}^{\infty}\mathbf{P}(\nu\ge n)\,\mathbb{E}X_n$ holds. If $X_n\ge 0$ a.s., then condition (2.1.4) is not required.
Proof It holds that $S_\nu=\sum_{n=1}^{\nu}X_n=\sum_{n=1}^{\infty}X_n\mathbf{1}(\nu\ge n)$. We introduce the notation $S_{\nu,n}=\sum_{k=1}^{n}X_k\mathbf{1}(\nu\ge k)$, $n\in\mathbb{N}$. First, we prove the lemma for $X_n\ge 0$ a.s., $n\in\mathbb{N}$. It holds that $S_{\nu,n}\uparrow S_\nu$ as $n\to\infty$ for every $\omega\in\Omega$, and thus according to the monotone convergence theorem:
$$\mathbb{E}S_\nu=\lim_{n\to\infty}\mathbb{E}S_{\nu,n}=\lim_{n\to\infty}\sum_{k=1}^{n}\mathbb{E}X_k\mathbf{1}(\nu\ge k).$$
Since $\{\nu\ge k\}=\{\nu\le k-1\}^c$ does not depend on $\sigma(X_k)\subset\sigma(X_n,\ n\ge k)$, it holds that $\mathbb{E}X_k\mathbf{1}(\nu\ge k)=\mathbb{E}X_k\,\mathbf{P}(\nu\ge k)$, $k\in\mathbb{N}$, and thus $\mathbb{E}S_\nu=\sum_{n=1}^{\infty}\mathbf{P}(\nu\ge n)\,\mathbb{E}X_n$.
Now, let $X_n$ be arbitrary. Set $Y_n=|X_n|$, $Z_n=\sum_{k=1}^{n}Y_k$, $Z_{\nu,n}=\sum_{k=1}^{n}Y_k\mathbf{1}(\nu\ge k)$, $n\in\mathbb{N}$. Since $Y_n\ge 0$, $n\in\mathbb{N}$, it holds that $\mathbb{E}Z_\nu=\sum_{n=1}^{\infty}\mathbb{E}|X_n|\,\mathbf{P}(\nu\ge n)<\infty$ by (2.1.4). Since $|S_{\nu,n}|\le Z_{\nu,n}\le Z_\nu$, $n\in\mathbb{N}$, according to the dominated convergence theorem of Lebesgue it holds that $\mathbb{E}S_\nu=\lim_{n\to\infty}\mathbb{E}S_{\nu,n}=\sum_{n=1}^{\infty}\mathbb{E}X_n\,\mathbf{P}(\nu\ge n)$, where this series converges absolutely. $\square$
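Wald's identity is easy to check by simulation when $\nu$ is drawn independently of the whole sequence (then it is trivially independent of the future). The sketch below uses assumed Exp(1) summands (so $a=1$) and an independent geometric $\nu$ (an illustration, not part of the notes).

```python
import numpy as np

# Check Wald's identity E sum_{n<=nu} X_n = a * E(nu) by simulation,
# with X_n ~ Exp(1) (so a = 1) and nu ~ Geometric(p) drawn independently
# of the X_n (hence independent of the future).
rng = np.random.default_rng(5)

reps, p = 100_000, 0.2
nu = rng.geometric(p, size=reps)               # E(nu) = 1/p = 5
n_max = nu.max()
X = rng.exponential(1.0, size=(reps, n_max))   # a = E X_n = 1

# sum of the first nu_i summands in row i
mask = np.arange(1, n_max + 1)[None, :] <= nu[:, None]
S_nu = (X * mask).sum(axis=1)
print(S_nu.mean(), 1.0 * nu.mean())   # both approximately 5
```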
Conclusion 2.1.1
1. $H(t)<\infty$, $t\ge 0$.
2. For an arbitrary Borel-measurable function $g:\mathbb{R}\to\mathbb{R}$ and the renewal process $N=\{N(t),\,t\ge 0\}$ with i.i.d. interarrival times $T_n$, $\mu=\mathbb{E}T_n\in(0,\infty)$, it holds that
$$\mathbb{E}\sum_{k=1}^{N(t)+1}g(T_k)=\big(1+H(t)\big)\,\mathbb{E}g(T_1),\qquad t\ge 0.$$
Proof We first show assertion 2. For every $t\ge 0$ the random variable $\nu=N(t)+1$ obviously does not depend on the future of $(T_n)_{n\in\mathbb{N}}$, and $\mathbb{E}\nu=1+H(t)<\infty$ by assertion 1; the claim follows from Theorem 2.1.3 with $X_n=g(T_n)$, $n\in\mathbb{N}$.
Now we show assertion 1. For $s>0$ consider $T_n^{s}=\min\{T_n,s\}$, $n\in\mathbb{N}$. Choose $s>0$ such that for a freely selected (but fixed) $\varepsilon>0$ it holds that $\mu_s=\mathbb{E}T_1^{s}\ge\mu-\varepsilon>0$. Let $N^{s}$ be the renewal process which is based on the sequence $(T_n^{s})_{n\in\mathbb{N}}$ of interarrival times: $N^{s}(t)=\sum_{n=1}^{\infty}\mathbf{1}(S_n^{s}\le t)$, $t\ge 0$, where $S_n^{s}=T_1^{s}+\dots+T_n^{s}$, $n\in\mathbb{N}$. It holds that $N(t)\le N^{s}(t)$, $t\ge 0$, a.s., and by Wald's identity (Theorem 2.1.3):
$$(\mu-\varepsilon)\,\mathbb{E}\big(N^{s}(t)+1\big)\le\mu_s\,\mathbb{E}\big(N^{s}(t)+1\big)=\mathbb{E}S^{s}_{N^{s}(t)+1}=\mathbb{E}\big(\underbrace{S^{s}_{N^{s}(t)}}_{\le t}+\underbrace{T^{s}_{N^{s}(t)+1}}_{\le s}\big)\le t+s,$$
$t\ge 0$. Thus $H(t)=\mathbb{E}N(t)\le\mathbb{E}N^{s}(t)\le\frac{t+s}{\mu-\varepsilon}<\infty$, $t\ge 0$. Since $\varepsilon>0$ is arbitrary, it also holds that $\limsup_{t\to\infty}\frac{H(t)}{t}\le\frac{1}{\mu}$. $\square$
Conclusion 2.1.2 (Elementary renewal theorem):
For a renewal process $N$ as defined in Conclusion 2.1.1, 1) it holds that
$$\lim_{t\to\infty}\frac{H(t)}{t}=\frac{1}{\mu}.$$
Proof In the proof of Conclusion 2.1.1 we already showed that $\limsup_{t\to\infty}H(t)/t\le 1/\mu$. If we show $\liminf_{t\to\infty}H(t)/t\ge 1/\mu$, our assertion is proven. According to Theorem 2.1.1 it holds that $N(t)/t\to 1/\mu$ a.s. as $t\to\infty$, therefore, according to Fatou's lemma,
$$\frac{1}{\mu}=\mathbb{E}\liminf_{t\to\infty}\frac{N(t)}{t}\le\liminf_{t\to\infty}\frac{\mathbb{E}N(t)}{t}=\liminf_{t\to\infty}\frac{H(t)}{t}.\qquad\square$$
Remark 2.1.5
1. One can prove that in the case of a finite second moment of $T_n$ ($\mu_2=\mathbb{E}T_1^2<\infty$) a more exact asymptotics for $H(t)$, $t\to\infty$, holds:
$$H(t)=\frac{t}{\mu}+\frac{\mu_2}{2\mu^2}+o(1),\qquad t\to\infty.$$
2. The elementary renewal theorem also holds for delayed renewal processes, where $\mu=\mathbb{E}T_2$. We define the renewal measure $H$ on $\mathcal{B}(\mathbb{R}_+)$ by $H(B)=\sum_{n=1}^{\infty}\int_B dF_T^{*n}(x)$, $B\in\mathcal{B}(\mathbb{R}_+)$. It holds that $H([0,t])=H(t)$ and $H((s,t])=H(t)-H(s)$, $0\le s\le t$; thus $H$ denotes the renewal function as well as the renewal measure.
Theorem 2.1.4 (Fundamental theorem of renewal theory):
Let $N=\{N(t),\,t\ge 0\}$ be a (delayed) renewal process associated with the sequence $(T_n)_{n\in\mathbb{N}}$, where the $T_n$, $n\in\mathbb{N}$, are independent, $T_n$, $n\ge 2$, are identically distributed, and the distribution of $T_2$ is not arithmetic, thus not concentrated on a regular lattice with probability $1$. The distribution of $T_1$ is arbitrary. Let $\mathbb{E}T_2=\mu\in(0,\infty)$. Then it holds that
$$\int_0^{t}g(t-x)\,dH(x)\xrightarrow[t\to\infty]{}\frac{1}{\mu}\int_0^{\infty}g(x)\,dx,$$
where $g:\mathbb{R}_+\to\mathbb{R}$ is Riemann integrable on $[0,n]$ for all $n\in\mathbb{N}$, and $\sum_{n=0}^{\infty}\max_{n\le x\le n+1}|g(x)|<\infty$.
Without proof.
In particular, $H((t,t+u])\xrightarrow[t\to\infty]{}\frac{u}{\mu}$ holds for arbitrary $u>0$; thus $H$ asymptotically (for $t\to\infty$) behaves like the Lebesgue measure (up to the factor $1/\mu$).
Definition 2.1.4
The random variable $\chi(t)=S_{N(t)+1}-t$ is called the excess of $N$ at time $t\ge 0$.
Obviously $\chi(0)=T_1$ holds. We now give an example of a renewal process with stationary increments. Let $N=\{N(t),\,t\ge 0\}$ be a delayed renewal process associated with the sequence of interarrival times $(T_n)_{n\in\mathbb{N}}$. Let $F_{T_1}$ and $F_{T_2}$ be the distribution functions of the delay $T_1$ and of $T_n$, $n\ge 2$. We assume that $\mu=\mathbb{E}T_2\in(0,\infty)$, $F_{T_2}(0)=0$ (thus $T_2>0$ a.s.) and
$$F_{T_1}(x)=\frac{1}{\mu}\int_0^{x}\big(1-F_{T_2}(y)\big)\,dy,\qquad x\ge 0. \tag{2.1.5}$$
In this case $F_{T_1}$ is called the integrated tail distribution function of $T_2$.
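The effect of the integrated-tail delay (2.1.5) can be seen numerically: with it, $H(t)=\mathbb{E}N(t)=t/\mu$ exactly for every $t$, not just asymptotically. A sketch with assumed $T_2\sim U[0,1]$ (so $\mu=1/2$ and $F_{T_1}(x)=2x-x^2$ on $[0,1]$, which can be sampled by inversion as $x=1-\sqrt{1-u}$; an illustration, not part of the notes):

```python
import numpy as np

# Delayed renewal process with the integrated-tail delay (2.1.5):
# then H(t) = E N(t) = t/mu for every t.  Assumed example: T_n ~ U[0,1]
# for n >= 2, mu = 1/2, F_{T_1}(x) = 2x - x^2 inverted as x = 1 - sqrt(1-u).
rng = np.random.default_rng(6)

reps, n_max, t = 100_000, 60, 20.0
T1 = 1.0 - np.sqrt(1.0 - rng.uniform(size=(reps, 1)))   # integrated-tail delay
T_rest = rng.uniform(size=(reps, n_max - 1))            # T_2, T_3, ... ~ U[0,1]
S = np.cumsum(np.hstack([T1, T_rest]), axis=1)          # renewal times

N_t = (S <= t).sum(axis=1)
print(N_t.mean(), t / 0.5)   # E N(t) = t/mu = 40, exactly, for every t
```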
Theorem 2.1.5
Under the conditions mentioned above, $N$ is a process with stationary increments.
Proof Let $n\in\mathbb{N}$, $0\le t_0<t_1<\dots<t_n<\infty$. Since the increments of $N$ after time $t$ are determined by the excess $\chi(t)$ and the i.i.d. interarrival times following it, the joint distribution of $(N(t_1+t)-N(t_0+t),\dots,N(t_n+t)-N(t_{n-1}+t))$ does not depend on $t$ if the distribution of $\chi(t)$ does not depend on $t$, i.e., if $\chi(t)\overset{d}{=}\chi(0)=T_1$, $t\ge 0$ (see Fig. 2.3).
We show that $F_{T_1}=F_{\chi(t)}$, $t\ge 0$:
$$F_{\chi(t)}(x)=\mathbf{P}(\chi(t)\le x)=\sum_{n=0}^{\infty}\mathbf{P}(S_n\le t,\ t<S_{n+1}\le t+x)$$
$$=\mathbf{P}(S_0=0\le t,\ t<S_1=T_1\le t+x)+\sum_{n=1}^{\infty}\mathbb{E}\big[\mathbb{E}\big[\mathbf{1}(S_n\le t,\ t<S_n+T_{n+1}\le t+x)\,\big|\,S_n\big]\big]$$
$$=F_{T_1}(t+x)-F_{T_1}(t)+\sum_{n=1}^{\infty}\int_0^{t}\mathbf{P}(t-y<T_{n+1}\le t+x-y)\,dF_{S_n}(y)$$
$$=F_{T_1}(t+x)-F_{T_1}(t)+\int_0^{t}\mathbf{P}(t-y<T_2\le t+x-y)\,d\underbrace{\Big(\sum_{n=1}^{\infty}F_{S_n}(y)\Big)}_{=H(y)}.$$
If we can prove that $H(y)=\frac{y}{\mu}$, $y\ge 0$, then, substituting $z=t-y$, we would get
$$F_{\chi(t)}(x)=F_{T_1}(t+x)-F_{T_1}(t)+\frac{1}{\mu}\int_0^{t}\big(F_{T_2}(z+x)-F_{T_2}(z)\big)\,dz$$
$$=F_{T_1}(t+x)-F_{T_1}(t)+\frac{1}{\mu}\Big(\int_t^{t+x}F_{T_2}(y)\,dy-\int_0^{x}F_{T_2}(y)\,dy\Big)$$
$$=F_{T_1}(t+x)-\big(F_{T_1}(t+x)-F_{T_1}(x)\big)=F_{T_1}(x),\qquad x\ge 0,$$
according to the form (2.1.5) of the distribution of $T_1$.
Now we show that $H(t)=\frac{t}{\mu}$, $t\ge 0$. For that we use formula (2.1.3): it holds that
$$l_{T_1}(s)=\frac{1}{\mu}\int_0^{\infty}e^{-st}\big(1-F_{T_2}(t)\big)\,dt=\frac{1}{\mu}\underbrace{\int_0^{\infty}e^{-st}\,dt}_{=1/s}-\frac{1}{\mu}\int_0^{\infty}e^{-st}F_{T_2}(t)\,dt$$
$$=\frac{1}{\mu s}+\frac{1}{\mu s}\int_0^{\infty}F_{T_2}(t)\,d\big(e^{-st}\big)=\frac{1}{\mu s}+\frac{1}{\mu s}\bigg(\underbrace{\big[e^{-st}F_{T_2}(t)\big]_{0}^{\infty}}_{=-F_{T_2}(0)=0}-\underbrace{\int_0^{\infty}e^{-st}\,dF_{T_2}(t)}_{=l_{T_2}(s)}\bigg)=\frac{1}{\mu s}\big(1-l_{T_2}(s)\big),\qquad s>0.$$
Using formula (2.1.3) we get
$$l_H(s)=\frac{l_{T_1}(s)}{1-l_{T_2}(s)}=\frac{1}{\mu s}=\frac{1}{\mu}\int_0^{\infty}e^{-st}\,dt=l_{t/\mu}(s),\qquad s>0.$$
Since the Laplace transform of a function uniquely determines this function, it holds that $H(t)=\frac{t}{\mu}$, $t\ge 0$. $\square$
Remark 2.1.6
In the proof of Theorem 2.1.5 we showed that for the delayed renewal process whose delay possesses the distribution (2.1.5), $H(t)=\frac{t}{\mu}$ holds not only asymptotically for $t\to\infty$ (as in the elementary renewal theorem) but for all $t\ge 0$. This means that per unit time interval we get an average of $\frac{1}{\mu}$ renewals. For that reason such a process $N$ is called a homogeneous renewal process.
We can prove the following theorem:
Theorem 2.1.6
If $N=\{N(t),\,t\ge 0\}$ is a delayed renewal process with arbitrary delay $T_1$ and non-arithmetic distribution of $T_n$, $n\ge 2$, $\mu=\mathbb{E}T_2\in(0,\infty)$, then it holds that
$$\lim_{t\to\infty}F_{\chi(t)}(x)=\frac{1}{\mu}\int_0^{x}\big(1-F_{T_2}(y)\big)\,dy,\qquad x\ge 0.$$
This means that the limit distribution of the excess $\chi(t)$, $t\to\infty$, is taken as the distribution of $T_1$ when defining a homogeneous renewal process.
2.2 Poisson processes
2.2.1 Poisson processes
In this section we generalize the definition of the homogeneous Poisson process (see Section 1.2, Example 5).
Definition 2.2.1
The counting process $N=\{N(t),\,t\ge 0\}$ is called a Poisson process with intensity measure $\Lambda$ if
1. $N(0)=0$ a.s.,
2. $\Lambda$ is a locally finite measure on $\mathcal{B}(\mathbb{R}_+)$, i.e., $\Lambda(B)<\infty$ for every bounded set $B\in\mathcal{B}(\mathbb{R}_+)$,
3. $N$ possesses independent increments,
4. $N(t)-N(s)\sim\mathrm{Pois}(\Lambda((s,t]))$ for all $0\le s<t<\infty$.
Sometimes the Poisson process $N=\{N(t),\,t\ge 0\}$ is defined by the corresponding Poisson random counting measure $\{N(B),\,B\in\mathcal{B}(\mathbb{R}_+)\}$, i.e., $N(t)=N((0,t])$, $t\ge 0$, where a counting measure is a locally finite measure with values in $\mathbb{N}_0\cup\{\infty\}$.
Definition 2.2.2
A random counting measure $\{N(B),\,B\in\mathcal{B}(\mathbb{R}_+)\}$ is called Poisson with locally finite intensity measure $\Lambda$ if
1. for arbitrary $n\in\mathbb{N}$ and arbitrary pairwise disjoint bounded sets $B_1,B_2,\dots,B_n\in\mathcal{B}(\mathbb{R}_+)$ the random variables $N(B_1),N(B_2),\dots,N(B_n)$ are independent,
2. $N(B)\sim\mathrm{Pois}(\Lambda(B))$ for every bounded $B\in\mathcal{B}(\mathbb{R}_+)$.
It is obvious that properties 3 and 4 of Definition 2.2.1 follow from properties 1 and 2 of Definition 2.2.2. Property 1 of Definition 2.2.1, however, is an autonomous assumption. $N(B)$, $B\in\mathcal{B}(\mathbb{R}_+)$, is interpreted as the number of points of $N$ within the set $B$.
Remark 2.2.1
As in Definition 2.2.2, a Poisson counting measure can also be defined on an arbitrary topological space $E$ equipped with the Borel $\sigma$-algebra $\mathcal{B}(E)$. Very often $E=\mathbb{R}^d$, $d\ge 1$, is chosen in applications.
Lemma 2.2.1
For every locally finite measure $\Lambda$ on $\mathcal{B}(\mathbb{R}_+)$ there exists a Poisson process with intensity measure $\Lambda$.
Proof If such a Poisson process existed, the characteristic function $\varphi_{N(t)-N(s)}$ of the increment $N(t)-N(s)$, $0\le s<t<\infty$, would equal
$$\varphi_{s,t}(z)=\varphi_{\mathrm{Pois}(\Lambda((s,t]))}(z)=e^{\Lambda((s,t])(e^{iz}-1)},\qquad z\in\mathbb{R},$$
according to property 4 of Definition 2.2.1. We show that the family of characteristic functions $\{\varphi_{s,t},\ 0\le s<t<\infty\}$ possesses property (1.7.1): for all $0\le s<u<t$,
$$\varphi_{s,u}(z)\,\varphi_{u,t}(z)=e^{\Lambda((s,u])(e^{iz}-1)}\,e^{\Lambda((u,t])(e^{iz}-1)}=e^{(\Lambda((s,u])+\Lambda((u,t]))(e^{iz}-1)}=e^{\Lambda((s,t])(e^{iz}-1)}=\varphi_{s,t}(z),$$
$z\in\mathbb{R}$, since the measure $\Lambda$ is additive. Thus the existence of the Poisson process $N$ follows from Theorem 1.7.1. $\square$
Remark 2.2.2
The existence of a Poisson counting measure can be proven with the help of the theorem of Kolmogorov, yet in a more general form than in Theorem 1.1.2.
From the properties of the Poisson distribution it follows that $\mathbb{E}N(B)=\mathrm{Var}\,N(B)=\Lambda(B)$, $B\in\mathcal{B}(\mathbb{R}_+)$. Thus $\Lambda(B)$ is interpreted as the mean number of points of $N$ within the set $B$, $B\in\mathcal{B}(\mathbb{R}_+)$.
We get an important special case if $\Lambda(dx)=\lambda\,dx$ for $\lambda\in(0,\infty)$, i.e., $\Lambda$ is proportional to the Lebesgue measure $\nu_1$ on $\mathbb{R}_+$. Then we call $\lambda=\mathbb{E}N(1)$ the intensity of $N$.
Soon we will prove that in this case $N$ is a homogeneous Poisson process with intensity $\lambda$. As a reminder: in Section 1.2 the homogeneous Poisson process was defined as a renewal process with interarrival times $T_n\sim\mathrm{Exp}(\lambda)$: $N(t)=\sup\{n\in\mathbb{N}:\ S_n\le t\}$, $S_n=T_1+\dots+T_n$, $n\in\mathbb{N}$, $t\ge 0$.
Exercise 2.2.1
Show that the homogeneous Poisson process is a homogeneous renewal process with $T_1\overset{d}{=}T_2\sim\mathrm{Exp}(\lambda)$. Hint: you have to show that for an arbitrary exponentially distributed random variable $X$ the integrated tail distribution function of $X$ is equal to $F_X$.
Theorem 2.2.1
Let $N=\{N(t),\,t\ge 0\}$ be a counting process. The following statements are equivalent:
1. $N$ is a homogeneous Poisson process with intensity $\lambda>0$.
2. a) $N(t)\sim\mathrm{Pois}(\lambda t)$, $t\ge 0$;
b) for arbitrary $n\in\mathbb{N}$, $t\ge 0$, the random vector $(S_1,\dots,S_n)$, under the condition $\{N(t)=n\}$, possesses the same distribution as the order statistics of $n$ i.i.d. random variables $U_i\sim U[0,t]$, $i=1,\dots,n$.
3. a) $N$ has independent increments,
b) $\mathbb{E}N(1)=\lambda$, and
c) property 2b) holds.
4. a) $N$ has stationary and independent increments, and
b) $\mathbf{P}(N(t)=0)=1-\lambda t+o(t)$, $\mathbf{P}(N(t)=1)=\lambda t+o(t)$, $t\to 0$, holds.
5. a) $N$ has stationary and independent increments,
b) property 2a) holds.
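Property 2b) can be checked by simulation: build the jump times of a homogeneous Poisson process from Exp($\lambda$) interarrival times, condition on $N(t)=n$, and compare with sorted uniforms. For uniform order statistics on $[0,t]$ one has $\mathbb{E}[S_1\,|\,N(t)=n]=t/(n+1)$, which is what the sketch below tests (assumed parameters $\lambda=1$, $t=5$, $n=5$; an illustration, not part of the notes).

```python
import numpy as np

# Check property 2b) of Theorem 2.2.1: given N(t) = n, the jump times
# (S_1,...,S_n) of a homogeneous Poisson process are distributed like the
# order statistics of n i.i.d. U[0,t] variables.  In particular
# E[S_1 | N(t) = n] = t/(n+1).  Assumed: lambda = 1, t = 5, n = 5.
rng = np.random.default_rng(7)

lam, t, n, reps = 1.0, 5.0, 5, 200_000
T = rng.exponential(1 / lam, size=(reps, 30))   # interarrival times
S = np.cumsum(T, axis=1)                        # jump times S_1, S_2, ...
N_t = (S <= t).sum(axis=1)                      # N(t) per path

S1_given_n = S[N_t == n, 0]                     # S_1 on the event {N(t)=n}
print(S1_given_n.mean(), t / (n + 1))   # both approximately 0.833
```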
Remark 2.2.3
1. It is obvious that Definition 2.2.1 with $\Lambda(dx)=\lambda\,dx$, $\lambda\in(0,\infty)$, is an equivalent definition of the homogeneous Poisson process, according to Theorem 2.2.1.
2. The homogeneous Poisson process $N$ was introduced at the beginning of the 20th century by the physicists A. Einstein and M. Smoluchowski in order to model the counting process of elementary particles in the Geiger counter.
3. From 4b) it follows that $\mathbf{P}(N(t)>1)=o(t)$, $t\to 0$.
4. The intensity of $N$ has the following interpretation: $\lambda=\mathbb{E}N(1)=\frac{1}{\mathbb{E}T_n}$, thus the mean number of renewals of $N$ within a time interval of length $1$.
5. The renewal function of the homogeneous Poisson process is $H(t)=\lambda t$, $t\ge 0$. Thereby $H(t)=\Lambda((0,t])$, $t>0$, holds.
Proof Structure of the proof: $1\Rightarrow 2\Rightarrow 3\Rightarrow 4\Rightarrow 5\Rightarrow 1$.
$1\Rightarrow 2$: From 1) it follows that $S_n=\sum_{k=1}^{n}T_k\sim\mathrm{Erlang}(n,\lambda)$, since $T_k\sim\mathrm{Exp}(\lambda)$, $n\in\mathbb{N}$; thus $\mathbf{P}(N(t)=0)=\mathbf{P}(T_1>t)=e^{-\lambda t}$, $t\ge 0$, and for $n\in\mathbb{N}$
$$\mathbf{P}(N(t)=n)=\mathbf{P}(N(t)\ge n)-\mathbf{P}(N(t)\ge n+1)=\mathbf{P}(S_n\le t)-\mathbf{P}(S_{n+1}\le t)$$
$$=\int_0^{t}\frac{\lambda^n x^{n-1}}{(n-1)!}e^{-\lambda x}\,dx-\int_0^{t}\frac{\lambda^{n+1}x^{n}}{n!}e^{-\lambda x}\,dx=\int_0^{t}\frac{d}{dx}\Big(\frac{(\lambda x)^n}{n!}e^{-\lambda x}\Big)dx=\frac{(\lambda t)^n}{n!}e^{-\lambda t},\qquad t\ge 0.$$
Thus 2a) is proven.
Now let's prove 2b). According to the transformation theorem for random vectors (cf. Theorem 3.6.1, WR), it follows from
$$S_1=T_1,\quad S_2=T_1+T_2,\quad\dots,\quad S_{n+1}=T_1+\dots+T_{n+1}$$
that the density $f_{S_1,\dots,S_{n+1}}$ of $(S_1,\dots,S_{n+1})$ can be expressed by the density of $(T_1,\dots,T_{n+1})$, $T_i\sim\mathrm{Exp}(\lambda)$ i.i.d.:
$$f_{S_1,\dots,S_{n+1}}(t_1,\dots,t_{n+1})=\prod_{k=1}^{n+1}f_{T_k}(t_k-t_{k-1})=\prod_{k=1}^{n+1}\lambda e^{-\lambda(t_k-t_{k-1})}=\lambda^{n+1}e^{-\lambda t_{n+1}}$$
for arbitrary $0\le t_1\le\dots\le t_{n+1}$, $t_0=0$. For all other $(t_1,\dots,t_{n+1})$ it holds that $f_{S_1,\dots,S_{n+1}}(t_1,\dots,t_{n+1})=0$. Therefore
$$f_{S_1,\dots,S_n}(t_1,\dots,t_n\,|\,N(t)=n)=f_{S_1,\dots,S_n}(t_1,\dots,t_n\,|\,S_k\le t,\ k\le n,\ S_{n+1}>t)$$
$$=\frac{\int_t^{\infty}f_{S_1,\dots,S_{n+1}}(t_1,\dots,t_{n+1})\,dt_{n+1}}{\int_0^{t}\int_{t_1}^{t}\dots\int_{t_{n-1}}^{t}\int_t^{\infty}f_{S_1,\dots,S_{n+1}}(t_1,\dots,t_{n+1})\,dt_{n+1}\,dt_n\dots dt_1}\,\mathbf{1}(0\le t_1\le t_2\le\dots\le t_n\le t)$$
$$=\frac{\int_t^{\infty}\lambda^{n+1}e^{-\lambda t_{n+1}}\,dt_{n+1}}{\int_0^{t}\int_{t_1}^{t}\dots\int_{t_{n-1}}^{t}\int_t^{\infty}\lambda^{n+1}e^{-\lambda t_{n+1}}\,dt_{n+1}\,dt_n\dots dt_1}\,\mathbf{1}(0\le t_1\le\dots\le t_n\le t)=\frac{n!}{t^n}\,\mathbf{1}(0\le t_1\le t_2\le\dots\le t_n\le t).$$
This is exactly the density of the order statistics of $n$ i.i.d. $U[0,t]$ random variables.
Exercise 2.2.2
Prove this.
$2\Rightarrow 3$: From 2a), 3b) obviously follows. Now we just have to prove the independence of the increments of $N$. For arbitrary $n\in\mathbb{N}$, $x_1,\dots,x_n\in\mathbb{N}_0$, $t_0=0<t_1<\dots<t_n$ and $x=x_1+\dots+x_n$ it holds that
$$\mathbf{P}\Big(\bigcap_{k=1}^{n}\{N(t_k)-N(t_{k-1})=x_k\}\Big)=\underbrace{\mathbf{P}\Big(\bigcap_{k=1}^{n}\{N(t_k)-N(t_{k-1})=x_k\}\,\Big|\,N(t_n)=x\Big)}_{(*)\ =\ \frac{x!}{x_1!\cdots x_n!}\prod_{k=1}^{n}\big(\frac{t_k-t_{k-1}}{t_n}\big)^{x_k}\ \text{by 2b)}}\cdot\underbrace{\mathbf{P}(N(t_n)=x)}_{=\ e^{-\lambda t_n}\frac{(\lambda t_n)^x}{x!}\ \text{by 2a)}}$$
$$=\prod_{k=1}^{n}\frac{\big(\lambda(t_k-t_{k-1})\big)^{x_k}}{x_k!}\,e^{-\lambda(t_k-t_{k-1})},$$
since the conditional probability $(*)$ belongs to the multinomial distribution with parameters $x$ and $\big(\frac{t_k-t_{k-1}}{t_n}\big)_{k=1}^{n}$: the event $(*)$ is that, when $x$ points are thrown independently and uniformly on $[0,t_n]$, exactly $x_k$ points fall into the interval of length $t_k-t_{k-1}$, $k=1,\dots,n$ (Fig. 2.4).
Thus 3a) is proven, since $\mathbf{P}\big(\bigcap_{k=1}^{n}\{N(t_k)-N(t_{k-1})=x_k\}\big)=\prod_{k=1}^{n}\mathbf{P}(N(t_k)-N(t_{k-1})=x_k)$.
$3\Rightarrow 4$: We prove that $N$ possesses stationary increments. For arbitrary $n\in\mathbb{N}$, $x_1,\dots,x_n\in\mathbb{N}_0$, $t_0=0<t_1<\dots<t_n$ and $h>0$ we consider $I(h)=\mathbf{P}\big(\bigcap_{k=1}^{n}\{N(t_k+h)-N(t_{k-1}+h)=x_k\}\big)$ and show that $I(h)$ does not depend on $h$. With $x=x_1+\dots+x_n$, according to the formula of total probability it holds that
$$I(h)=\sum_{m=x}^{\infty}\mathbf{P}\Big(\bigcap_{k=1}^{n}\{N(t_k+h)-N(t_{k-1}+h)=x_k\}\,\Big|\,N(t_n+h)=m\Big)\,\mathbf{P}(N(t_n+h)=m)$$
$$=\sum_{m=x}^{\infty}\frac{m!}{x_1!\cdots x_n!\,(m-x)!}\prod_{k=1}^{n}\Big(\frac{t_k-t_{k-1}}{t_n+h}\Big)^{x_k}\Big(\frac{h}{t_n+h}\Big)^{m-x}\,\mathbf{P}(N(t_n+h)=m)$$
$$=\sum_{m=x}^{\infty}\mathbf{P}\Big(\bigcap_{k=1}^{n}\{N(t_k)-N(t_{k-1})=x_k\}\,\Big|\,N(t_n+h)=m\Big)\,\mathbf{P}(N(t_n+h)=m)=I(0),$$
since, by 2b), given $N(t_n+h)=m$ the $m$ points are thrown independently and uniformly on $[0,t_n+h]$, and the intervals $(t_{k-1}+h,t_k+h]$ have the same lengths as the intervals $(t_{k-1},t_k]$.
We now show property 4b) for $h\in(0,1)$:
$$\mathbf{P}(N(h)=0)=\sum_{k=0}^{\infty}\mathbf{P}(N(h)=0,\,N(1)=k)=\sum_{k=0}^{\infty}\mathbf{P}(N(1)-N(h)=k,\,N(1)=k)$$
$$=\sum_{k=0}^{\infty}\mathbf{P}(N(1)=k)\,\mathbf{P}(N(1)-N(h)=k\,|\,N(1)=k)=\sum_{k=0}^{\infty}\mathbf{P}(N(1)=k)\,(1-h)^k,$$
where the last step follows from 2b): given $N(1)=k$, all $k$ uniformly distributed points lie in $(h,1]$ with probability $(1-h)^k$. We have to show that $\mathbf{P}(N(h)=0)=1-\lambda h+o(h)$, i.e., $\lim_{h\to 0}\frac{1}{h}\big(1-\mathbf{P}(N(h)=0)\big)=\lambda$. Indeed, it holds that
$$\frac{1}{h}\big(1-\mathbf{P}(N(h)=0)\big)=\frac{1}{h}\Big(1-\sum_{k=0}^{\infty}\mathbf{P}(N(1)=k)(1-h)^k\Big)=\sum_{k=1}^{\infty}\mathbf{P}(N(1)=k)\,\frac{1-(1-h)^k}{h}$$
$$\xrightarrow[h\to 0]{}\sum_{k=1}^{\infty}\mathbf{P}(N(1)=k)\underbrace{\lim_{h\to 0}\frac{1-(1-h)^k}{h}}_{=k}=\sum_{k=0}^{\infty}\mathbf{P}(N(1)=k)\,k=\mathbb{E}N(1)=\lambda,$$
since the series converges uniformly in $h$, being dominated by $\sum_{k=0}^{\infty}\mathbf{P}(N(1)=k)\,k=\lambda<\infty$ because of the inequality $1-(1-h)^k\le kh$, $h\in(0,1)$, $k\in\mathbb{N}$.
Similarly one can show that $\lim_{h\to 0}\frac{\mathbf{P}(N(h)=1)}{h}=\lim_{h\to 0}\sum_{k=1}^{\infty}\mathbf{P}(N(1)=k)\,k\,(1-h)^{k-1}=\lambda$.
$4\Rightarrow 5$: We have to show that for arbitrary $n\in\mathbb{N}$ and $t\ge 0$
\[
p_n(t) = P(N(t) = n) = e^{-\lambda t}\, \frac{(\lambda t)^n}{n!} \tag{2.2.1}
\]
holds. We will prove this by induction with respect to $n$. First we show that $p_0(t) = e^{-\lambda t}$. For that we consider, as $h \downarrow 0$, $p_0(t + h) = P(N(t + h) = 0) = P(N(t) = 0,\, N(t + h) - N(t) = 0) = p_0(t)\, p_0(h) = p_0(t)(1 - \lambda h + o(h))$. Similarly one can show that $p_0(t) = p_0(t - h)(1 - \lambda h + o(h))$, $h \downarrow 0$.
Thus $p_0'(t) = \lim_{h \to 0} \frac{p_0(t + h) - p_0(t)}{h} = -\lambda p_0(t)$, $t > 0$, holds. Since $p_0(0) = P(N(0) = 0) = 1$, it follows from
\[
\begin{cases} p_0'(t) = -\lambda p_0(t), \\ p_0(0) = 1 \end{cases}
\]
that there exists a unique solution $p_0(t) = e^{-\lambda t}$, $t \ge 0$. Now let formula (2.2.1) be proven for $n$. Let us prove it for $n + 1$:
\[
p_{n+1}(t + h) = P(N(t + h) = n + 1) = P(N(t) = n,\, N(t + h) - N(t) = 1) + P(N(t) = n + 1,\, N(t + h) - N(t) = 0) + o(h)
\]
\[
= p_n(t)\, p_1(h) + p_{n+1}(t)\, p_0(h) + o(h) = p_n(t)(\lambda h + o(h)) + p_{n+1}(t)(1 - \lambda h + o(h)), \quad h \downarrow 0.
\]
Thus
\[
\begin{cases} p_{n+1}'(t) = -\lambda p_{n+1}(t) + \lambda p_n(t), \quad t > 0, \\ p_{n+1}(0) = 0. \end{cases} \tag{2.2.2}
\]
Since $p_n(t) = e^{-\lambda t} \frac{(\lambda t)^n}{n!}$, we obtain $p_{n+1}(t) = e^{-\lambda t} \frac{(\lambda t)^{n+1}}{(n+1)!}$ as the solution of (2.2.2). (Indeed, the ansatz $p_{n+1}(t) = C(t)\, e^{-\lambda t}$ gives $C'(t) = \frac{\lambda^{n+1} t^n}{n!}$, hence $C(t) = \frac{\lambda^{n+1} t^{n+1}}{(n+1)!}$, $C(0) = 0$.)
5) ⇒ 1): Let $N$ be a counting process $N(t) = \max\{n : S_n \le t\}$, $t \ge 0$, which fulfills conditions 5a) and 5b). We show that $S_n = \sum_{k=1}^n T_k$, where the $T_k$ are i.i.d. with $T_k \sim \mathrm{Exp}(\lambda)$, $k \in \mathbb{N}$. Since $T_k = S_k - S_{k-1}$, $k \in \mathbb{N}$, $S_0 = 0$, we consider for $b_0 = 0 \le a_1 < b_1 \le \ldots \le a_n < b_n$
\[
P\Big(\bigcap_{k=1}^n \{a_k < S_k \le b_k\}\Big)
= P\Big(\bigcap_{k=1}^{n-1} \{N(a_k) - N(b_{k-1}) = 0,\, N(b_k) - N(a_k) = 1\} \cap \{N(a_n) - N(b_{n-1}) = 0,\, N(b_n) - N(a_n) \ge 1\}\Big)
\]
\[
= \prod_{k=1}^{n-1} \underbrace{P(N(a_k) - N(b_{k-1}) = 0)}_{e^{-\lambda (a_k - b_{k-1})}}\, \underbrace{P(N(b_k) - N(a_k) = 1)}_{\lambda (b_k - a_k)\, e^{-\lambda (b_k - a_k)}}
\cdot \underbrace{P(N(a_n) - N(b_{n-1}) = 0)}_{e^{-\lambda (a_n - b_{n-1})}}\, \underbrace{P(N(b_n) - N(a_n) \ge 1)}_{1 - e^{-\lambda (b_n - a_n)}}
\]
\[
= \lambda^{n-1}\big(e^{-\lambda a_n} - e^{-\lambda b_n}\big) \prod_{k=1}^{n-1} (b_k - a_k)
= \int_{a_1}^{b_1} \!\!\ldots \int_{a_n}^{b_n} \lambda^n e^{-\lambda y_n}\, dy_n \ldots dy_1.
\]
The joint density of $(S_1, \ldots, S_n)$ is therefore given by $\lambda^n e^{-\lambda y_n}\, \mathbf{1}\{y_1 \le y_2 \le \ldots \le y_n\}$, which is exactly the joint density of the partial sums of $n$ i.i.d. $\mathrm{Exp}(\lambda)$-distributed random variables.

2.2.2 Compound Poisson process
Definition 2.2.3
Let $N = \{N(t),\, t \ge 0\}$ be a homogeneous Poisson process with intensity $\lambda > 0$, built by means of the sequence $\{T_n\}_{n \in \mathbb{N}}$ of interarrival times. Let $\{U_n\}_{n \in \mathbb{N}}$ be a sequence of i.i.d. random variables, independent of $\{T_n\}_{n \in \mathbb{N}}$, and let $F_U$ be the distribution function of $U_1$. For arbitrary
$t \ge 0$ let $X(t) = \sum_{k=1}^{N(t)} U_k$. The stochastic process $X = \{X(t),\, t \ge 0\}$ is called compound Poisson process with parameters $(\lambda, F_U)$. The distribution of $X(t)$ is called the compound Poisson distribution with parameters $(\lambda t, F_U)$.
The compound Poisson process $\{X(t),\, t \ge 0\}$ can be interpreted as the sum of the "marks" $U_n$ of a homogeneous marked Poisson process $(N, U)$ up to time $t$.
In queueing theory, $X(t)$ is interpreted as the overall workload of a server up to time $t$ if the requests arrive at the times $S_n = \sum_{k=1}^n T_k$, $n \in \mathbb{N}$, and require the amounts of work $U_n$, $n \in \mathbb{N}$.
In actuarial mathematics, $X(t)$, $t \ge 0$, is the total claim amount in a portfolio up to time $t \ge 0$ with claim number $N(t)$ and claim sizes $U_n$, $n \in \mathbb{N}$.
Theorem 2.2.2
Let $X = \{X(t),\, t \ge 0\}$ be a compound Poisson process with parameters $(\lambda, F_U)$. The following properties hold:
1. X has independent and stationary increments.
2. If $m_U(s) = E e^{s U_1}$, $s \in \mathbb{R}$, is the moment generating function of $U_1$, such that $m_U(s) < \infty$, $s \in \mathbb{R}$, then it holds that
\[
m_{X(t)}(s) = e^{\lambda t (m_U(s) - 1)}, \quad s \in \mathbb{R},\ t \ge 0, \qquad
E X(t) = \lambda t\, E U_1, \qquad \mathrm{Var}\, X(t) = \lambda t\, E U_1^2, \quad t \ge 0.
\]
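The moment formulas of Theorem 2.2.2 can be checked by simulation. A minimal Python sketch (not from the notes; the jump distribution $U \sim U[0, 2]$ is an arbitrary choice, so $E U_1 = 1$ and $E U_1^2 = 4/3$):

```python
import random

def compound_poisson(lam, t, rng):
    """One value X(t): sum of U[0, 2] jumps at Poisson(lam) arrival times."""
    x, s = 0.0, rng.expovariate(lam)
    while s <= t:
        x += rng.uniform(0.0, 2.0)
        s += rng.expovariate(lam)
    return x

rng = random.Random(1)
lam, t, sims = 1.5, 2.0, 20000
samples = [compound_poisson(lam, t, rng) for _ in range(sims)]
mean = sum(samples) / sims
var = sum((x - mean) ** 2 for x in samples) / sims
# theory: E X(t) = lam * t * E U = 3.0 ; Var X(t) = lam * t * E U^2 = 3 * 4/3 = 4.0
```

Note that the variance involves the second moment $E U_1^2$, not $\mathrm{Var}\, U_1$: the randomness of $N(t)$ contributes even when the jumps are deterministic.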
Proof 1. We have to show that for arbitrary $n \in \mathbb{N}$, $0 \le t_0 < t_1 < \ldots < t_n$ and $h \ge 0$
\[
P\Big(\bigcap_{k=1}^n \Big\{ \sum_{i_k = N(t_{k-1} + h) + 1}^{N(t_k + h)} U_{i_k} \le x_k \Big\}\Big)
= \prod_{k=1}^n P\Big( \sum_{i_k = N(t_{k-1}) + 1}^{N(t_k)} U_{i_k} \le x_k \Big)
\]
for arbitrary $x_1, \ldots, x_n \in \mathbb{R}$. Indeed, conditioning on the increments of $N$ and writing $F_U^{*k}$ for the $k$-fold convolution of $F_U$, it holds that
\[
P\Big(\bigcap_{k=1}^n \Big\{ \sum_{i_k = N(t_{k-1} + h) + 1}^{N(t_k + h)} U_{i_k} \le x_k \Big\}\Big)
= \sum_{k_1, \ldots, k_n = 0}^{\infty} \prod_{j=1}^n F_U^{*k_j}(x_j)\; P\Big(\bigcap_{m=1}^n \{N(t_m + h) - N(t_{m-1} + h) = k_m\}\Big)
\]
\[
= \sum_{k_1, \ldots, k_n = 0}^{\infty} \prod_{j=1}^n F_U^{*k_j}(x_j) \prod_{m=1}^n P(N(t_m) - N(t_{m-1}) = k_m)
= \prod_{m=1}^n \sum_{k_m = 0}^{\infty} F_U^{*k_m}(x_m)\, P(N(t_m) - N(t_{m-1}) = k_m)
\]
\[
= \prod_{m=1}^n P\Big( \sum_{k_m = N(t_{m-1}) + 1}^{N(t_m)} U_{k_m} \le x_m \Big),
\]
where we used that $N$ has independent and stationary increments and that $\{U_n\}$ is i.i.d. and independent of $N$.
2. Exercise 2.2.3.
2.2.3 Cox process
A Cox process is a (in general inhomogeneous) Poisson process whose intensity measure $\Lambda$ is itself a random measure. The intuitive idea is stated in the following definition.
Definition 2.2.4
Let $\Lambda = \{\Lambda(B),\, B \in \mathcal{B}(\mathbb{R})\}$ be a random, a.s. locally finite measure. The random counting measure $N = \{N(B),\, B \in \mathcal{B}(\mathbb{R})\}$ is called a Cox counting measure (or doubly stochastic Poisson measure) with random intensity measure $\Lambda$ if for arbitrary $n \in \mathbb{N}$, $k_1, \ldots, k_n \in \mathbb{N}_0$ and $0 \le a_1 < b_1 \le a_2 < b_2 \le \ldots \le a_n < b_n$ it holds that
\[
P\Big(\bigcap_{i=1}^n \{N((a_i, b_i]) = k_i\}\Big) = E\left[ \prod_{i=1}^n e^{-\Lambda((a_i, b_i])}\, \frac{\Lambda((a_i, b_i])^{k_i}}{k_i!} \right].
\]
The process $\{N(t),\, t \ge 0\}$ with $N(t) = N((0, t])$ is called a Cox process (or doubly stochastic Poisson process) with random intensity measure $\Lambda$.
Example 2.2.1
1. If the random measure $\Lambda$ is a.s. absolutely continuous with respect to the Lebesgue measure, i.e., $\Lambda(B) = \int_B \lambda(t)\, dt$ for bounded $B \in \mathcal{B}(\mathbb{R})$, where $\{\lambda(t),\, t \ge 0\}$ is a stochastic process with $\lambda(t) \ge 0$ a.s. and a.s. Borel-measurable, Borel-integrable trajectories, then $\{\lambda(t),\, t \ge 0\}$ is called the intensity process of $N$.
2. In particular, it can happen that $\lambda(t) \equiv Y$, where $Y$ is a non-negative random variable. Then it holds that $\Lambda(B) = Y \nu_1(B)$ (with $\nu_1$ the Lebesgue measure), thus $N$ has the random intensity $Y$. Such Cox processes are called mixed Poisson processes.
A Cox process $N = \{N(t),\, t \ge 0\}$ with intensity process $\{\lambda(t),\, t \ge 0\}$ can be built explicitly as follows. Let $\tilde{N} = \{\tilde{N}(t),\, t \ge 0\}$ be a homogeneous Poisson process with intensity $1$, which is independent of $\{\lambda(t),\, t \ge 0\}$. Then $N \stackrel{d}{=} N_1$, where the process $N_1 = \{N_1(t),\, t \ge 0\}$ is given by $N_1(t) = \tilde{N}\big(\int_0^t \lambda(y)\, dy\big)$, $t \ge 0$. The assertion $N \stackrel{d}{=} N_1$ of course has to be proven; however, we shall assume it without proof. It is also the basis for the simulation of the Cox process $N$.
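The time-change construction $N_1(t) = \tilde{N}\big(\int_0^t \lambda(y)\, dy\big)$ is easy to implement for a mixed Poisson process (Example 2.2.1, 2). A Python sketch (function names are ours); for $\lambda(t) \equiv Y$ with $P(Y = 1) = P(Y = 3) = \frac{1}{2}$ one has $E N(t) = t\, E Y$ and $\mathrm{Var}\, N(t) = t\, E Y + t^2\, \mathrm{Var}\, Y$, which the simulation reproduces:

```python
import random

def unit_poisson_count(total, rng):
    """Value of a unit-rate Poisson process at time `total`."""
    n, s = 0, rng.expovariate(1.0)
    while s <= total:
        n += 1
        s += rng.expovariate(1.0)
    return n

def mixed_poisson(t, rng):
    """Cox process value N(t) = N~(Y * t) with random intensity Y in {1, 3}."""
    y = 1.0 if rng.random() < 0.5 else 3.0
    return unit_poisson_count(y * t, rng)

rng = random.Random(2)
t, sims = 2.0, 10000
vals = [mixed_poisson(t, rng) for _ in range(sims)]
mean = sum(vals) / sims
var = sum((v - mean) ** 2 for v in vals) / sims
# theory: E N(t) = t E Y = 4 ; Var N(t) = t E Y + t^2 Var Y = 4 + 4 = 8
```

The variance exceeds the mean ("overdispersion"), the typical signature of a Cox process compared to a plain Poisson process, for which mean and variance coincide.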
2.3 Additional exercises
Exercise 2.3.1
Let $\{N(t)\}_{t \ge 0}$ be a renewal process with interarrival times $T_i$, which are exponentially distributed, i.e. $T_i \sim \mathrm{Exp}(\lambda)$.
a) Prove that $N(t)$ is Poisson distributed for every $t > 0$.
b) Determine the parameter of this Poisson distribution.
c) Determine the renewal function $H(t) = E N(t)$.
Exercise 2.3.2
Prove that a (real-valued) stochastic process $X = \{X(t),\, t \in [0, \infty)\}$ with independent increments already has stationary increments if the distribution of the random variable $X(t + h) - X(h)$ does not depend on $h$.
Exercise 2.3.3
Let $N = \{N(t),\, t \in [0, \infty)\}$ be a Poisson process with intensity $\lambda$. Calculate the probability that within the interval $[0, s]$ exactly $i$ events occur given that within the interval $[0, t]$ exactly $n$ events occur, i.e. $P(N(s) = i \mid N(t) = n)$ for $s < t$, $i = 0, 1, \ldots, n$.
Exercise 2.3.4
Let $N^{(1)} = \{N^{(1)}(t),\, t \in [0, \infty)\}$ and $N^{(2)} = \{N^{(2)}(t),\, t \in [0, \infty)\}$ be independent Poisson processes with intensities $\lambda_1$ and $\lambda_2$. Here independence means that the sequences $T_1^{(1)}, T_2^{(1)}, \ldots$ and $T_1^{(2)}, T_2^{(2)}, \ldots$ of interarrival times are independent. Show that $N = \{N(t) = N^{(1)}(t) + N^{(2)}(t),\, t \in [0, \infty)\}$ is a Poisson process with intensity $\lambda_1 + \lambda_2$.
Exercise 2.3.5 (Queueing paradox):
Let $N = \{N(t),\, t \in [0, \infty)\}$ be a renewal process. Then $T(t) = S_{N(t)+1} - t$ is called the excess time, $C(t) = t - S_{N(t)}$ the current lifetime and $D(t) = T(t) + C(t)$ the lifetime at time $t > 0$. Now let $N$ be a Poisson process with intensity $\lambda$.
a) Calculate the distribution of the excess time $T(t)$.
b) Show that the distribution of the current lifetime satisfies $P(C(t) = t) = e^{-\lambda t}$ and that its density on $\{N(t) > 0\}$ is given by $f_{C(t) \mid N(t) > 0}(s) = \lambda e^{-\lambda s}\, \mathbf{1}(s \le t)$.
c) Show that $P(D(t) \le x) = \big(1 - (1 + \lambda \min(t, x))\, e^{-\lambda x}\big)\, \mathbf{1}(x \ge 0)$.
d) To determine $E T(t)$, one could argue like this: on average, $t$ lies in the middle of the surrounding interarrival interval $[S_{N(t)}, S_{N(t)+1}]$, i.e. $E T(t) = \frac{1}{2} E(S_{N(t)+1} - S_{N(t)}) = \frac{1}{2} E T_{N(t)+1} = \frac{1}{2\lambda}$. Considering the result from part (a), this reasoning is false. Where is the mistake in the reasoning?
Exercise 2.3.6
Let $X = \{X(t) = \sum_{i=1}^{N(t)} U_i,\, t \ge 0\}$ be a compound Poisson process. Let $M_{N(t)}(s) = E s^{N(t)}$, $s \in (0, 1)$, be the generating function of the Poisson process $N(t)$, $L_U(s) = E e^{-s U}$ the Laplace transform of the $U_i$, $i \in \mathbb{N}$, and $L_{X(t)}(s)$ the Laplace transform of $X(t)$. Prove that
\[
L_{X(t)}(s) = M_{N(t)}(L_U(s)), \quad s \ge 0.
\]
Exercise 2.3.7
Let $X = \{X(t),\, t \in [0, \infty)\}$ be a compound Poisson process with $U_i$ i.i.d., $U_1 \sim \mathrm{Exp}(\gamma)$, where the intensity of $\{N(t)\}$ is $\lambda$. Show that for the Laplace transform $L_{X(t)}(s)$ of $X(t)$ it holds:
\[
L_{X(t)}(s) = \exp\Big\{ -\frac{\lambda t s}{\gamma + s} \Big\}.
\]
Exercise 2.3.8
Write a function in R (alternatively: Java) to which we pass the time $t$, the intensity $\lambda$ and a value $\gamma$ as parameters. The return value of the function is a random value of the compound Poisson process with characteristics $(\lambda, \mathrm{Exp}(\gamma))$ at time $t$.
Exercise 2.3.9
Let the stochastic process $N = \{N(t),\, t \in [0, \infty)\}$ be a Cox process with intensity function $\lambda(t) \equiv Z$, where $Z$ is a discrete random variable taking the values $\lambda_1$ and $\lambda_2$ with probability $1/2$ each. Determine the moment generating function as well as the expected value and the variance of $N(t)$.
Exercise 2.3.10
Let $N^{(1)} = \{N^{(1)}(t),\, t \in [0, \infty)\}$ and $N^{(2)} = \{N^{(2)}(t),\, t \ge 0\}$ be two independent homogeneous Poisson processes with intensities $\lambda_1$ and $\lambda_2$. Moreover, let $X \ge 0$ be an arbitrary non-negative random variable which is independent of $N^{(1)}$ and $N^{(2)}$. Show that the process $N = \{N(t),\, t \ge 0\}$ with
\[
N(t) = \begin{cases} N^{(1)}(t), & t \le X, \\ N^{(1)}(X) + N^{(2)}(t - X), & t > X \end{cases}
\]
is a Cox process whose intensity process $\lambda = \{\lambda(t),\, t \ge 0\}$ is given by
\[
\lambda(t) = \begin{cases} \lambda_1, & t \le X, \\ \lambda_2, & t > X. \end{cases}
\]
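Exercises 2.3.7 and 2.3.8 can be combined into one numerical check: a sampler for the compound Poisson value $X(t)$ with $\mathrm{Exp}(\gamma)$ jumps, whose empirical Laplace transform is compared with $\exp\{-\lambda t s / (\gamma + s)\}$. The exercise asks for R; here a Python sketch with hypothetical names:

```python
import math
import random

def compound_poisson_exp(lam, gamma, t, rng):
    """Value X(t) of a compound Poisson process with Exp(gamma) jumps."""
    x, s = 0.0, rng.expovariate(lam)
    while s <= t:
        x += rng.expovariate(gamma)
        s += rng.expovariate(lam)
    return x

rng = random.Random(3)
lam, gamma, t, s = 1.0, 1.0, 2.0, 1.0
sims = 20000
lap_mc = sum(math.exp(-s * compound_poisson_exp(lam, gamma, t, rng))
             for _ in range(sims)) / sims
lap_exact = math.exp(-lam * t * s / (gamma + s))   # Exercise 2.3.7; here e^{-1}
```

With these parameters the exact value is $e^{-1} \approx 0.368$, and the Monte Carlo estimate agrees to within sampling error.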
3 Wiener process
3.1 Elementary properties
In Example 2) of Section 1.2 we defined the Brownian motion (or Wiener process) $W = \{W(t),\, t \ge 0\}$ as a Gaussian process with $E W(t) = 0$ and $\mathrm{cov}(W(s), W(t)) = \min(s, t)$, $s, t \ge 0$. The Wiener process is named after the mathematician Norbert Wiener (1894–1964).
Why does the Brownian motion exist? According to the theorem of Kolmogorov (Theorem 1.1.2) there exists a real-valued Gaussian process $X = \{X(t),\, t \ge 0\}$ with mean value function $E X(t) = \mu(t)$, $t \ge 0$, and covariance function $\mathrm{cov}(X(s), X(t)) = C(s, t)$, $s, t \ge 0$, for every function $\mu : \mathbb{R} \to \mathbb{R}$ and every positive semidefinite function $C : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$. We just have to show that $C(s, t) = \min(s, t)$, $s, t \ge 0$, is positive semidefinite.
Exercise 3.1.1
Prove this!
We now give a new (equivalent) definition.
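Exercise 3.1.1 can also be verified constructively: for sorted times, $\min(t_i, t_j) = \sum_k d_k\, \mathbf{1}\{k \le i\}\, \mathbf{1}\{k \le j\}$ with the nonnegative gaps $d_k = t_k - t_{k-1}$, so the Gram matrix factorizes as $A\, \mathrm{diag}(d)\, A^\top$ with $A$ lower-triangular of ones, hence it is positive semidefinite. A small Python sketch of this decomposition (illustrative only, not part of the notes):

```python
times = [0.3, 0.7, 1.1, 2.5]          # arbitrary 0 < t_1 < ... < t_n
n = len(times)
diffs = [times[0]] + [times[k] - times[k - 1] for k in range(1, n)]  # gaps d_k >= 0
M = [[min(times[i], times[j]) for j in range(n)] for i in range(n)]
# min(t_i, t_j) = sum_k d_k * 1{k <= i} * 1{k <= j}
recon = [[sum(diffs[k] for k in range(min(i, j) + 1)) for j in range(n)]
         for i in range(n)]

def quad(x):
    """x^T M x, computed directly from the matrix M."""
    return sum(M[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def quad_factored(x):
    """Same quadratic form via the factorization: sum_k d_k (x_k + ... + x_n)^2."""
    return sum(diffs[k] * sum(x[i] for i in range(k, n)) ** 2 for k in range(n))
```

Since every summand in `quad_factored` is a nonnegative gap times a square, $x^\top M x \ge 0$ for every $x$, which is exactly positive semidefiniteness.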
Definition 3.1.1
A stochastic process $W = \{W(t),\, t \ge 0\}$ is called Wiener process (or Brownian motion) if
1. $W(0) = 0$ a.s.,
2. $W$ possesses independent increments,
3. $W(t) - W(s) \sim N(0, t - s)$, $0 \le s < t$.
The existence of $W$ according to Definition 3.1.1 follows from Theorem 1.7.1, since $\varphi_{s,t}(z) = E e^{i z (W(t) - W(s))} = e^{-\frac{(t - s) z^2}{2}}$, $z \in \mathbb{R}$, and $e^{-\frac{(t - u) z^2}{2}}\, e^{-\frac{(u - s) z^2}{2}} = e^{-\frac{(t - s) z^2}{2}}$ for $0 \le s < u < t$, thus $\varphi_{s,u}(z)\, \varphi_{u,t}(z) = \varphi_{s,t}(z)$, $z \in \mathbb{R}$. From Theorem 1.3.1 the existence of a version with continuous trajectories follows.
Exercise 3.1.2
Show that Theorem 1.3.1 holds for $\alpha = 3$, $\sigma = \frac{1}{2}$.
Therefore, it is often assumed that the Wiener process possesses continuous paths (just take its corresponding version).
Theorem 3.1.1
Both definitions of the Wiener process are equivalent.
Proof 1. The definition in Section 1.2 implies Definition 3.1.1.
$W(0) = 0$ a.s. follows from $\mathrm{Var}\, W(0) = \min(0, 0) = 0$. Now we prove that the increments of $W$ are independent. If $Y \sim N(\mu, K)$ is an $n$-dimensional Gaussian random vector and $A$ an $n \times n$-matrix, then $A Y \sim N(A \mu, A K A^\top)$ holds; this follows from the explicit form of the characteristic function of $Y$. Now let $n \in \mathbb{N}$, $0 = t_0 \le t_1 < \ldots < t_n$, $Y = (W(t_0), W(t_1), \ldots, W(t_n))^\top$. For $Z = (W(t_0), W(t_1) - W(t_0), \ldots, W(t_n) - W(t_{n-1}))^\top$ it holds that $Z = A Y$, where
\[
A = \begin{pmatrix}
1 & 0 & 0 & \ldots & \ldots & 0 \\
-1 & 1 & 0 & \ldots & \ldots & 0 \\
0 & -1 & 1 & 0 & \ldots & 0 \\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\
0 & 0 & 0 & \ldots & -1 & 1
\end{pmatrix}.
\]
Thus $Z$ is also Gaussian, with a covariance matrix which is diagonal. Indeed, it holds that
\[
\mathrm{cov}(W(t_{i+1}) - W(t_i),\, W(t_{j+1}) - W(t_j)) = \min(t_{i+1}, t_{j+1}) - \min(t_{i+1}, t_j) - \min(t_i, t_{j+1}) + \min(t_i, t_j) = 0
\]
for $i \ne j$. Thus the coordinates of $Z$ are uncorrelated, which means independence in the case of a multivariate Gaussian distribution. Hence the increments of $W$ are independent. Moreover, for arbitrary $0 \le s < t$ it holds that $W(t) - W(s) \sim N(0, t - s)$: the normality follows since $Z = A Y$ is Gaussian; obviously $E W(t) - E W(s) = 0$, and $\mathrm{Var}(W(t) - W(s)) = \mathrm{Var}\, W(t) - 2\, \mathrm{cov}(W(s), W(t)) + \mathrm{Var}\, W(s) = t - 2 \min(s, t) + s = t - s$.
2. Definition 3.1.1 implies the definition in Section 1.2.
Since $W(t) - W(s) \sim N(0, t - s)$ for $0 \le s < t$, it holds that
\[
\mathrm{cov}(W(s), W(t)) = E[W(s)(W(t) - W(s))] + E W(s)^2 = E W(s)\, E(W(t) - W(s)) + \mathrm{Var}\, W(s) = s,
\]
thus $\mathrm{cov}(W(s), W(t)) = \min(s, t)$. From $W(t) - W(s) \sim N(0, t - s)$ and $W(0) = 0$ it also follows that $E W(t) = 0$, $t \ge 0$. The fact that $W$ is a Gaussian process follows from point 1) of the proof via the relation $Y = A^{-1} Z$.
Definition 3.1.2
The process $\{W(t),\, t \ge 0\}$ with $W(t) = (W_1(t), \ldots, W_d(t))^\top$, $t \ge 0$, is called $d$-dimensional Brownian motion if $W_i = \{W_i(t),\, t \ge 0\}$, $i = 1, \ldots, d$, are independent Wiener processes.
The definitions above and Exercise 3.1.2 ensure the existence of a Wiener process with continuous paths. How do we find an explicit way of building these paths? We will show that in the next section.

3.2 Explicit construction of the Wiener process
First we construct the Wiener process on the interval $[0, 1]$. The main idea of the construction is to introduce a stochastic process $X = \{X(t),\, t \in [0, 1]\}$, defined on a probability subspace of $(\Omega, \mathcal{A}, P)$, with $X \stackrel{d}{=} W$, where $X(t) = \sum_{n=1}^{\infty} c_n(t)\, Y_n$, $t \in [0, 1]$, $\{Y_n\}_{n \in \mathbb{N}}$ is a sequence of i.i.d. $N(0, 1)$-random variables and $c_n(t) = \int_0^t H_n(s)\, ds$, $t \in [0, 1]$, $n \in \mathbb{N}$. Here $\{H_n\}_{n \in \mathbb{N}}$ is the orthonormal Haar basis in $L^2([0, 1])$, which is introduced now.

3.2.1 Haar and Schauder functions
Definition 3.2.1
The functions $H_n : [0, 1] \to \mathbb{R}$, $n \in \mathbb{N}$, are called Haar functions if $H_1(t) \equiv 1$, $t \in [0, 1]$,
\[
H_2(t) = \mathbf{1}_{[0, \frac{1}{2})}(t) - \mathbf{1}_{[\frac{1}{2}, 1]}(t), \qquad
H_k(t) = 2^{n/2}\big( \mathbf{1}_{I_{n,k}}(t) - \mathbf{1}_{J_{n,k}}(t) \big), \quad t \in [0, 1],\ 2^n < k \le 2^{n+1},
\]
where $I_{n,k} = [a_{n,k},\, a_{n,k} + 2^{-(n+1)})$, $J_{n,k} = [a_{n,k} + 2^{-(n+1)},\, a_{n,k} + 2^{-n})$, $a_{n,k} = 2^{-n}(k - 2^n - 1)$, $n \in \mathbb{N}$.
Abb. 3.1: Haar functions
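The orthonormality claimed in Lemma 3.2.1 below can be checked numerically. Since the Haar functions are piecewise constant on dyadic intervals, a midpoint Riemann sum on a fine dyadic grid evaluates the scalar products exactly. A Python sketch (function names are ours):

```python
def haar(k, t):
    """Haar function H_k(t) on [0, 1], k >= 1 (Definition 3.2.1)."""
    if k == 1:
        return 1.0
    if k == 2:
        return 1.0 if t < 0.5 else -1.0
    n = (k - 1).bit_length() - 1            # level n with 2^n < k <= 2^{n+1}
    a = (k - 2 ** n - 1) / 2 ** n           # a_{n,k}
    half = 2.0 ** -(n + 1)
    if a <= t < a + half:
        return 2.0 ** (n / 2)
    if a + half <= t < a + 2 * half:
        return -(2.0 ** (n / 2))
    return 0.0

# midpoint Riemann sum: exact here, since every H_k with k <= 256 is constant
# on each grid cell of width 1/256
N = 256
grid = [(i + 0.5) / N for i in range(N)]

def inner(k, m):
    return sum(haar(k, u) * haar(m, u) for u in grid) / N
```

On this grid, `inner(k, m)` reproduces $\langle H_k, H_m \rangle = \delta_{km}$ for all $k, m \le 8$ up to floating-point rounding.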
Lemma 3.2.1
The function system $\{H_n\}_{n \in \mathbb{N}}$ is an orthonormal basis in $L^2([0, 1])$ with scalar product $\langle f, g \rangle = \int_0^1 f(t)\, g(t)\, dt$, $f, g \in L^2([0, 1])$.
Proof The orthonormality $\langle H_k, H_n \rangle = \delta_{kn}$, $k, n \in \mathbb{N}$, follows directly from Definition 3.2.1. Now we prove the completeness of $\{H_n\}_{n \in \mathbb{N}}$. It is sufficient to show that for an arbitrary function $g \in L^2([0, 1])$ with $\langle g, H_n \rangle = 0$, $n \in \mathbb{N}$, it holds that $g = 0$ almost everywhere on $[0, 1]$. In fact, we can always write the indicator function of a dyadic interval $[a_{n,k},\, a_{n,k} + 2^{-(n+1)}]$ as a linear combination of the $H_n$, $n \in \mathbb{N}$:
\[
\mathbf{1}_{[0, \frac{1}{2}]} = \frac{H_1 + H_2}{2}, \qquad
\mathbf{1}_{[\frac{1}{2}, 1]} = \frac{H_1 - H_2}{2}, \qquad
\mathbf{1}_{[0, \frac{1}{4}]} = \frac{\mathbf{1}_{[0, \frac{1}{2}]} + \frac{1}{\sqrt{2}} H_3}{2}, \qquad
\mathbf{1}_{[\frac{1}{4}, \frac{1}{2}]} = \frac{\mathbf{1}_{[0, \frac{1}{2}]} - \frac{1}{\sqrt{2}} H_3}{2},
\]
and in general
\[
\mathbf{1}_{[a_{n,k},\, a_{n,k} + 2^{-(n+1)}]} = \frac{\mathbf{1}_{[a_{n,k},\, a_{n,k} + 2^{-n}]} + 2^{-n/2} H_k}{2}, \quad 2^n < k \le 2^{n+1}.
\]
Therefore it holds that $\int_{(k-1) 2^{-n}}^{k 2^{-n}} g(t)\, dt = 0$, $n \in \mathbb{N}_0$, $k = 1, \ldots, 2^n$, and thus $G(t) = \int_0^t g(s)\, ds = 0$ for $t = k 2^{-n}$, $n \in \mathbb{N}_0$, $k = 1, \ldots, 2^n$. Since $G$ is continuous on $[0, 1]$, it follows that $G(t) = 0$, $t \in [0, 1]$, and thus $g(s) = 0$ for almost every $s \in [0, 1]$.
From Lemma 3.2.1 it follows that two arbitrary functions $f, g \in L^2([0, 1])$ have the expansions $f = \sum_{n=1}^{\infty} \langle f, H_n \rangle H_n$ and $g = \sum_{n=1}^{\infty} \langle g, H_n \rangle H_n$ (these series converge in $L^2([0, 1])$) and $\langle f, g \rangle = \sum_{n=1}^{\infty} \langle f, H_n \rangle \langle g, H_n \rangle$ (Parseval identity).
Definition 3.2.2
The functions $S_n(t) = \int_0^t H_n(s)\, ds = \langle \mathbf{1}_{[0, t]}, H_n \rangle$, $t \in [0, 1]$, $n \in \mathbb{N}$, are called Schauder functions.
Abb. 3.2: Schauder functions
Lemma 3.2.2
It holds:
1. $S_n(t) \ge 0$, $t \in [0, 1]$, $n \in \mathbb{N}$,
2. $\sum_{k=1}^{2^n} S_{2^n + k}(t) \le \frac{1}{2}\, 2^{-n/2}$, $t \in [0, 1]$, $n \in \mathbb{N}$,
3. Let $\{a_n\}_{n \in \mathbb{N}}$ be a sequence of real numbers with $a_n = O(n^{\varepsilon})$, $\varepsilon < \frac{1}{2}$, $n \to \infty$. Then the series $\sum_{n=1}^{\infty} a_n S_n(t)$ converges absolutely and uniformly in $t \in [0, 1]$ and therefore is a continuous function on $[0, 1]$.
Proof 1. follows directly from Definition 3.2.2.
2. follows since the functions $S_{2^n + k}$, $k = 1, \ldots, 2^n$, have disjoint supports and
\[
S_{2^n + k}(t) \le S_{2^n + k}\big( (2k - 1)\, 2^{-(n+1)} \big) = 2^{-n/2 - 1}, \quad t \in [0, 1].
\]
3. It suffices to show that $R_n = \sup_{t \in [0, 1]} \sum_{k > 2^n} |a_k|\, S_k(t) \xrightarrow{n \to \infty} 0$. For every $k \in \mathbb{N}$ and some $c > 0$ it holds that $|a_k| \le c k^{\varepsilon}$. Therefore it holds for all $t \in [0, 1]$, $n \in \mathbb{N}$,
\[
\sum_{2^n < k \le 2^{n+1}} |a_k|\, S_k(t) \le c\, 2^{(n+1)\varepsilon} \sum_{2^n < k \le 2^{n+1}} S_k(t) \le c\, 2^{(n+1)\varepsilon}\, 2^{-n/2 - 1} = c\, 2^{\varepsilon - 1}\, 2^{-n(\frac{1}{2} - \varepsilon)}.
\]
Since $\varepsilon < \frac{1}{2}$, it holds that $R_m \le c\, 2^{\varepsilon - 1} \sum_{n \ge m} 2^{-n(\frac{1}{2} - \varepsilon)} \xrightarrow{m \to \infty} 0$.
Lemma 3.2.3
Let $\{Y_n\}_{n \in \mathbb{N}}$ be a sequence of (not necessarily independent) random variables defined on $(\Omega, \mathcal{A}, P)$ with $Y_n \sim N(0, 1)$, $n \in \mathbb{N}$. Then it holds that $|Y_n| = O\big( (\log n)^{1/2} \big)$, $n \to \infty$, a.s.
Proof We have to show that for $c > \sqrt{2}$ and almost all $\omega \in \Omega$ there exists an $n_0 = n_0(\omega, c) \in \mathbb{N}$ such that $|Y_n| \le c\, (\log n)^{1/2}$ for $n \ge n_0$. If $Y \sim N(0, 1)$ and $x > 0$, it holds that
\[
P(Y > x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-\frac{y^2}{2}}\, dy
= \frac{1}{\sqrt{2\pi}} \int_x^{\infty} \frac{1}{y}\, d\big( -e^{-\frac{y^2}{2}} \big)
= \frac{1}{\sqrt{2\pi}} \Big( \frac{1}{x}\, e^{-\frac{x^2}{2}} - \int_x^{\infty} e^{-\frac{y^2}{2}}\, \frac{1}{y^2}\, dy \Big)
\le \frac{1}{\sqrt{2\pi}}\, \frac{1}{x}\, e^{-\frac{x^2}{2}}.
\]
(One can also show that $1 - \Phi(x) \sim \frac{1}{\sqrt{2\pi}}\, \frac{1}{x}\, e^{-\frac{x^2}{2}}$, $x \to \infty$.) Thus for $c > \sqrt{2}$ it holds that
\[
\sum_{n \ge 2} P\big( |Y_n| > c\, (\log n)^{1/2} \big)
\le \frac{2}{\sqrt{2\pi}}\, c^{-1} \sum_{n \ge 2} (\log n)^{-1/2}\, e^{-\frac{c^2}{2} \log n}
= \frac{c^{-1} \sqrt{2}}{\sqrt{\pi}} \sum_{n \ge 2} (\log n)^{-1/2}\, n^{-\frac{c^2}{2}} < \infty,
\]
since $\frac{c^2}{2} > 1$. According to the Lemma of Borel–Cantelli (cf. WR, Lemma 2.2.1) it holds that $P(\bigcap_n \bigcup_{k \ge n} A_k) = 0$ if $\sum_k P(A_k) < \infty$, where $A_k = \{|Y_k| > c\, (\log k)^{1/2}\}$, $k \in \mathbb{N}$. Thus $A_k$ occurs infinitely often only with probability $0$, i.e., $|Y_n| \le c\, (\log n)^{1/2}$ for $n \ge n_0$.
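The Gaussian tail bound used in the proof, $P(Y > x) \le \frac{1}{\sqrt{2\pi}}\, \frac{1}{x}\, e^{-x^2/2}$, and its asymptotic sharpness can be checked numerically against the exact tail $\frac{1}{2}\, \mathrm{erfc}(x / \sqrt{2})$. A short Python sketch:

```python
import math

def gauss_tail(x):
    """Exact P(Y > x) for Y ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mills_bound(x):
    """Upper bound (1/sqrt(2 pi)) e^{-x^2/2} / x from the proof, x > 0."""
    return math.exp(-x * x / 2.0) / (x * math.sqrt(2.0 * math.pi))

checks = [(x, gauss_tail(x), mills_bound(x)) for x in (0.5, 1.0, 2.0, 3.0, 6.0)]
```

The bound holds for every $x > 0$, and the ratio of tail to bound approaches $1$ as $x$ grows, which is exactly the asymptotic relation quoted in parentheses above.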
3.2.2 Wiener process with a.s. continuous paths
Lemma 3.2.4
Let $\{Y_n\}_{n \in \mathbb{N}}$ be a sequence of independent $N(0, 1)$-distributed random variables. Let $\{a_n\}_{n \in \mathbb{N}}$ and $\{b_n\}_{n \in \mathbb{N}}$ be sequences of numbers with $\sum_{k=1}^{2^m} |a_{2^m + k}| \le 2^{-m/2}$ and $\sum_{k=1}^{2^m} |b_{2^m + k}| \le 2^{-m/2}$, $m \in \mathbb{N}$. Then the limits $U = \sum_{n=1}^{\infty} a_n Y_n$ and $V = \sum_{n=1}^{\infty} b_n Y_n$ exist a.s., with $U \sim N(0, \sum_{n=1}^{\infty} a_n^2)$, $V \sim N(0, \sum_{n=1}^{\infty} b_n^2)$ and $\mathrm{cov}(U, V) = \sum_{n=1}^{\infty} a_n b_n$. $U$ and $V$ are independent if and only if $\mathrm{cov}(U, V) = 0$.
Proof Lemmas 3.2.2 and 3.2.3 yield the a.s. existence of the limits $U$ and $V$ (apply Lemma 3.2.2 with the coefficients $Y_n$ in place of $a_n$ and with $a_n$ resp. $b_n$ in place of $S_n$). From the stability of the normal distribution under convolution it follows for $U^{(m)} = \sum_{n=1}^m a_n Y_n$, $V^{(m)} = \sum_{n=1}^m b_n Y_n$ that $U^{(m)} \sim N(0, \sum_{n=1}^m a_n^2)$, $V^{(m)} \sim N(0, \sum_{n=1}^m b_n^2)$. Since $U^{(m)} \xrightarrow{d} U$, $V^{(m)} \xrightarrow{d} V$, it follows that $U \sim N(0, \sum_{n=1}^{\infty} a_n^2)$, $V \sim N(0, \sum_{n=1}^{\infty} b_n^2)$. Moreover, it holds that
\[
\mathrm{cov}(U, V) = \lim_{m \to \infty} \mathrm{cov}(U^{(m)}, V^{(m)}) = \lim_{m \to \infty} \sum_{i, j = 1}^m a_i b_j\, \mathrm{cov}(Y_i, Y_j) = \lim_{m \to \infty} \sum_{i=1}^m a_i b_i = \sum_{i=1}^{\infty} a_i b_i,
\]
according to the dominated convergence theorem of Lebesgue: by Lemma 3.2.3 it holds that $|Y_n| \le c\, (\log n)^{1/2} \le c\, n^{\varepsilon}$, $\varepsilon < \frac{1}{2}$, for $n \ge n_0$, and the dominating series converges according to Lemma 3.2.2:
\[
\sum_{n, k = 2^m}^{2^{m+1}} |a_n b_k Y_n Y_k| \overset{\text{a.s.}}{\le} c^2 \sum_{n, k = 2^m}^{2^{m+1}} |a_n b_k|\, n^{\varepsilon} k^{\varepsilon}
\le c^2\, 2^{2\varepsilon(m+1)}\, 2^{-m/2}\, 2^{-m/2} = c^2\, 2^{2\varepsilon}\, 2^{-(1 - 2\varepsilon) m}, \quad 1 - 2\varepsilon > 0.
\]
For sufficiently large $m$ it holds that $\sum_{n, k \ge 2^m} |a_n b_k Y_n Y_k| \le \mathrm{const} \sum_{j \ge m} 2^{-(1 - 2\varepsilon) j} < \infty$, and this series converges a.s.
Now we show that
\[
\mathrm{cov}(U, V) = 0 \iff U \text{ and } V \text{ are independent}.
\]
Independence always implies uncorrelatedness of (square-integrable) random variables. We prove the other direction. From $(U^{(m)}, V^{(m)}) \xrightarrow[m \to \infty]{d} (U, V)$ it follows that $\varphi_{(U^{(m)}, V^{(m)})} \xrightarrow[m \to \infty]{} \varphi_{(U, V)}$, thus
\[
\varphi_{(U, V)}(t, s) = \lim_{m \to \infty} E \exp\Big\{ i \Big( t \sum_{k=1}^m a_k Y_k + s \sum_{n=1}^m b_n Y_n \Big) \Big\}
= \lim_{m \to \infty} E \exp\Big\{ i \sum_{k=1}^m (t a_k + s b_k) Y_k \Big\}
= \lim_{m \to \infty} \prod_{k=1}^m E \exp\{ i (t a_k + s b_k) Y_k \}
\]
\[
= \lim_{m \to \infty} \prod_{k=1}^m \exp\Big\{ -\frac{(t a_k + s b_k)^2}{2} \Big\}
= \exp\Big\{ -\sum_{k=1}^{\infty} \frac{(t a_k + s b_k)^2}{2} \Big\}
= \exp\Big\{ -\frac{t^2}{2} \sum_{k=1}^{\infty} a_k^2 \Big\} \exp\Big\{ -t s \underbrace{\sum_{k=1}^{\infty} a_k b_k}_{= \mathrm{cov}(U, V) = 0} \Big\} \exp\Big\{ -\frac{s^2}{2} \sum_{k=1}^{\infty} b_k^2 \Big\}
= \varphi_U(t)\, \varphi_V(s),
\]
$s, t \in \mathbb{R}$. Thus $U$ and $V$ are independent if $\mathrm{cov}(U, V) = 0$.
Theorem 3.2.1
Let $\{Y_n,\, n \in \mathbb{N}\}$ be a sequence of i.i.d. $N(0, 1)$-distributed random variables, defined on a probability space $(\Omega, \mathcal{A}, P)$. Then there exist a restriction $(\Omega_0, \mathcal{A}_0, P)$ of $(\Omega, \mathcal{A}, P)$ and a stochastic process $X = \{X(t),\, t \in [0, 1]\}$ on it such that $X(t, \omega) = \sum_{n=1}^{\infty} Y_n(\omega)\, S_n(t)$, $t \in [0, 1]$, $\omega \in \Omega_0$, and $X \stackrel{d}{=} W$. Here $\{S_n\}_{n \in \mathbb{N}}$ is the family of Schauder functions.
Proof According to Lemma 3.2.2, 2), the coefficients $S_n(t)$ fulfill the conditions of Lemma 3.2.4 for every $t \in [0, 1]$. In addition, according to Lemma 3.2.3 there exists a subset $\Omega_0 \subset \Omega$, $\Omega_0 \in \mathcal{A}$, with $P(\Omega_0) = 1$, such that for every $\omega \in \Omega_0$ the relation $|Y_n(\omega)| = O(\sqrt{\log n})$, $n \to \infty$, holds. Let $\mathcal{A}_0 = \mathcal{A} \cap \Omega_0$. We restrict the probability space to $(\Omega_0, \mathcal{A}_0, P)$. Then the condition $a_n = Y_n(\omega) = O(n^{\varepsilon})$, $\varepsilon < \frac{1}{2}$, is fulfilled, since $\sqrt{\log n} < n^{\varepsilon}$ for sufficiently large $n$, and according to Lemma 3.2.2, 3) the series $\sum_{n=1}^{\infty} Y_n(\omega)\, S_n(t)$ converges absolutely and uniformly in $t \in [0, 1]$ to a function $X(\omega, t)$, $\omega \in \Omega_0$, which is continuous in $t$ for every $\omega \in \Omega_0$. $X(\cdot, t)$ is a random variable, since by Lemma 3.2.4 the convergence of this series holds almost surely. Moreover it holds that $X(t) \sim N(0, \sum_{n=1}^{\infty} S_n^2(t))$, $t \in [0, 1]$.
We show that this stochastic process, defined on $(\Omega_0, \mathcal{A}_0, P)$, is a Wiener process. For that we check the conditions of Definition 3.1.1. We consider arbitrary times $0 \le t_1 < t_2$, $t_3 < t_4 \le 1$ and evaluate
\[
\mathrm{cov}(X(t_2) - X(t_1),\, X(t_4) - X(t_3))
= \mathrm{cov}\Big( \sum_{n=1}^{\infty} Y_n (S_n(t_2) - S_n(t_1)),\ \sum_{n=1}^{\infty} Y_n (S_n(t_4) - S_n(t_3)) \Big)
\]
\[
= \sum_{n=1}^{\infty} (S_n(t_2) - S_n(t_1))(S_n(t_4) - S_n(t_3))
= \sum_{n=1}^{\infty} \big( \langle H_n, \mathbf{1}_{[0, t_2]} \rangle - \langle H_n, \mathbf{1}_{[0, t_1]} \rangle \big)\big( \langle H_n, \mathbf{1}_{[0, t_4]} \rangle - \langle H_n, \mathbf{1}_{[0, t_3]} \rangle \big)
\]
\[
= \sum_{n=1}^{\infty} \langle H_n, \mathbf{1}_{[0, t_2]} - \mathbf{1}_{[0, t_1]} \rangle \langle H_n, \mathbf{1}_{[0, t_4]} - \mathbf{1}_{[0, t_3]} \rangle
= \langle \mathbf{1}_{[0, t_2]} - \mathbf{1}_{[0, t_1]},\ \mathbf{1}_{[0, t_4]} - \mathbf{1}_{[0, t_3]} \rangle
\]
\[
= \min(t_2, t_4) - \min(t_1, t_4) - \min(t_2, t_3) + \min(t_1, t_3),
\]
by the Parseval identity and since $\langle \mathbf{1}_{[0, s]}, \mathbf{1}_{[0, t]} \rangle = \int_0^{\min(s, t)} du = \min(s, t)$, $s, t \in [0, 1]$. If $0 \le t_1 < t_2 \le t_3 < t_4 \le 1$, it holds that $\mathrm{cov}(X(t_2) - X(t_1),\, X(t_4) - X(t_3)) = t_2 - t_1 - t_2 + t_1 = 0$, thus the increments of $X$ are uncorrelated and hence (according to Lemma 3.2.4) independent. Moreover it holds that $X(0) \sim N(0, \sum_{n=1}^{\infty} S_n^2(0)) = N(0, 0)$, therefore $X(0) = 0$ a.s. For $t_1 = 0$, $t_2 = t$, $t_3 = 0$, $t_4 = t$ it follows that $\mathrm{Var}\, X(t) = t$, $t \in [0, 1]$, and for $t_1 = t_3 = s$, $t_2 = t_4 = t$, that $\mathrm{Var}(X(t) - X(s)) = t - s - s + s = t - s$, $0 \le s < t \le 1$. Thus it holds that $X(t) - X(s) \sim N(0, t - s)$, and according to Definition 3.1.1 it holds that $X \stackrel{d}{=} W$.
Remark 3.2.1
1. Theorem 3.2.1 is the basis for an approximative simulation of the paths of a Brownian motion through the partial sums $X_n(t) = \sum_{k=1}^n Y_k S_k(t)$, $t \in [0, 1]$, for sufficiently large $n \in \mathbb{N}$.
2. The construction in Theorem 3.2.1 can be used to construct the Wiener process with continuous paths on the interval $[0, t_0]$ for arbitrary $t_0 > 0$. If $W = \{W(t),\, t \in [0, 1]\}$ is a Wiener process on $[0, 1]$, then $Y = \{Y(t),\, t \in [0, t_0]\}$ with $Y(t) = \sqrt{t_0}\, W\big(\frac{t}{t_0}\big)$, $t \in [0, t_0]$, is a Wiener process on $[0, t_0]$.
Exercise 3.2.1
Prove that.
3. The Wiener process $W$ with continuous paths on $[0, \infty)$ can be constructed as follows. Let $W^{(n)} = \{W^{(n)}(t),\, t \in [0, 1]\}$ be independent copies of the Wiener process as in Theorem 3.2.1. Define $W(t) = \sum_{n=1}^{\infty} \mathbf{1}\{t \in [n - 1, n)\} \big( \sum_{k=1}^{n-1} W^{(k)}(1) + W^{(n)}(t - n + 1) \big)$, $t \ge 0$, thus
\[
W(t) = \begin{cases}
W^{(1)}(t), & t \in [0, 1), \\
W^{(1)}(1) + W^{(2)}(t - 1), & t \in [1, 2), \\
W^{(1)}(1) + W^{(2)}(1) + W^{(3)}(t - 2), & t \in [2, 3), \\
\text{etc.}
\end{cases}
\]
Exercise 3.2.2
Show that the stochastic process $W = \{W(t),\, t \ge 0\}$ introduced above is a Wiener process on $[0, \infty)$.
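Remark 3.2.1, 1) can be turned directly into code: each Schauder function $S_k$, $k > 2$, is a triangle of height $2^{-n/2 - 1}$ over the support of $H_k$, so the partial sums $X_n(t) = \sum_{k \le n} Y_k S_k(t)$ are cheap to evaluate. The notes suggest R for simulation; here a Python sketch (function names are ours). As a sanity check, $\mathrm{Var}\, X_n(t) = \sum_{k \le n} S_k(t)^2 \to t$:

```python
import random

def schauder(k, t):
    """Schauder function S_k(t) = integral_0^t H_k(s) ds, t in [0, 1]."""
    if k == 1:
        return t
    if k == 2:
        return t if t <= 0.5 else 1.0 - t
    n = (k - 1).bit_length() - 1            # level n with 2^n < k <= 2^{n+1}
    a = (k - 2 ** n - 1) / 2 ** n
    w = 2.0 ** -(n + 1)                     # half of the support length
    if t <= a or t >= a + 2 * w:
        return 0.0
    if t <= a + w:
        return 2.0 ** (n / 2) * (t - a)     # rising flank of the triangle
    return 2.0 ** (n / 2) * (a + 2 * w - t) # falling flank

def wiener_partial_sum(ts, n_terms, rng):
    """Approximate path X_n(t) = sum_{k<=n} Y_k S_k(t) at the times ts."""
    ys = [rng.gauss(0.0, 1.0) for _ in range(n_terms)]
    return [sum(y * schauder(k + 1, t) for k, y in enumerate(ys)) for t in ts]

# Parseval check: sum_{k<=n} S_k(t)^2 -> t as n -> infinity
v_half = sum(schauder(k, 0.5) ** 2 for k in range(1, 257))
v_03 = sum(schauder(k, 0.3) ** 2 for k in range(1, 257))
path = wiener_partial_sum([0.0, 0.25, 0.5, 0.75, 1.0], 64, random.Random(4))
```

At dyadic points such as $t = \frac{1}{2}$ only finitely many Schauder functions are nonzero, so the variance identity is exact already for finite $n$; at other points it holds up to the geometric remainder of Lemma 3.2.2, 2).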
Abb. 3.3:
3.3 Distribution and path properties of Wiener processes

3.3.1 Distribution of the maximum

Theorem 3.3.1
Let $W = \{W(t),\, t \in [0, 1]\}$ be the Wiener process defined on a probability space $(\Omega, \mathcal{F}, P)$. Then it holds:
\[
P\Big( \max_{t \in [0, 1]} W(t) > x \Big) = \sqrt{\frac{2}{\pi}} \int_x^{\infty} e^{-\frac{y^2}{2}}\, dy \tag{3.3.1}
\]
for all $x \ge 0$.
The mapping $\max_{t \in [0, 1]} W(t) : \Omega \to [0, \infty)$ appearing in relation (3.3.1) is a well-defined random variable, since $\max_{t \in [0, 1]} W(t, \omega) = \lim_{k \to \infty} \max_{i = 1, \ldots, k} W\big(\frac{i}{k}, \omega\big)$ for all $\omega \in \Omega$, because the trajectories of $\{W(t),\, t \in [0, 1]\}$ are continuous. From (3.3.1) it follows that $\max_{t \in [0, 1]} W(t)$ has an exponentially bounded tail; thus $\max_{t \in [0, 1]} W(t)$ has finite $k$-th moments.
Useful ideas for the proof of Theorem 3.3.1
Let $\{W(t),\, t \in [0, 1]\}$ be a Wiener process and $Z_1, Z_2, \ldots$ a sequence of independent random variables with $P(Z_i = 1) = P(Z_i = -1) = \frac{1}{2}$ for all $i \ge 1$. For every $n \in \mathbb{N}$ we define $\{\widetilde{W}_n(t),\, t \in [0, 1]\}$ by
\[
\widetilde{W}_n(t) = \frac{S_{\lfloor n t \rfloor}}{\sqrt{n}} + (n t - \lfloor n t \rfloor)\, \frac{Z_{\lfloor n t \rfloor + 1}}{\sqrt{n}},
\]
where $S_i = Z_1 + \ldots + Z_i$, $i \ge 1$, $S_0 = 0$.
Lemma 3.3.1
For every $k \ge 1$ and arbitrary $t_1, \ldots, t_k \in [0, 1]$ it holds:
\[
(\widetilde{W}_n(t_1), \ldots, \widetilde{W}_n(t_k)) \xrightarrow{d} (W(t_1), \ldots, W(t_k)), \quad n \to \infty.
\]
Proof Consider the special case $k = 2$ (for $k > 2$ the proof is analogous). Let $t_1 < t_2$. For all $s_1, s_2 \in \mathbb{R}$ it holds:
\[
s_1 \widetilde{W}_n(t_1) + s_2 \widetilde{W}_n(t_2)
= (s_1 + s_2)\, \frac{S_{\lfloor n t_1 \rfloor}}{\sqrt{n}}
+ s_2\, \frac{S_{\lfloor n t_2 \rfloor} - S_{\lfloor n t_1 \rfloor + 1}}{\sqrt{n}}
+ Z_{\lfloor n t_1 \rfloor + 1} \Big( (n t_1 - \lfloor n t_1 \rfloor)\, \frac{s_1}{\sqrt{n}} + \frac{s_2}{\sqrt{n}} \Big)
+ Z_{\lfloor n t_2 \rfloor + 1} (n t_2 - \lfloor n t_2 \rfloor)\, \frac{s_2}{\sqrt{n}},
\]
since $S_{\lfloor n t_2 \rfloor} - S_{\lfloor n t_1 \rfloor} = \big( S_{\lfloor n t_2 \rfloor} - S_{\lfloor n t_1 \rfloor + 1} \big) + Z_{\lfloor n t_1 \rfloor + 1}$.
Now observe that the four summands on the right-hand side of the previous equation are independent and, moreover, that the latter two summands converge (a.s. and therefore in particular in distribution) to zero. Consequently, it holds that
\[
\lim_{n \to \infty} E e^{i (s_1 \widetilde{W}_n(t_1) + s_2 \widetilde{W}_n(t_2))}
= \lim_{n \to \infty} E e^{i (s_1 + s_2) \sqrt{\frac{\lfloor n t_1 \rfloor}{n}}\, \frac{S_{\lfloor n t_1 \rfloor}}{\sqrt{\lfloor n t_1 \rfloor}}}\;
E e^{i s_2 \frac{S_{\lfloor n t_2 \rfloor} - S_{\lfloor n t_1 \rfloor + 1}}{\sqrt{n}}}
\overset{\text{CLT, CMT}}{=} e^{-\frac{t_1}{2} (s_1 + s_2)^2}\, e^{-\frac{t_2 - t_1}{2} s_2^2}
\]
\[
= e^{-\frac{1}{2}( s_1^2 t_1 + 2 s_1 s_2 t_1 + s_2^2 t_2 )}
= e^{-\frac{1}{2}( s_1^2 t_1 + 2 s_1 s_2 \min(t_1, t_2) + s_2^2 t_2 )}
= \varphi_{(W(t_1), W(t_2))}(s_1, s_2),
\]
where $\varphi_{(W(t_1), W(t_2))}$ is the characteristic function of $(W(t_1), W(t_2))$.
Lemma 3.3.2
Let $\overline{W}_n = \max_{t \in [0, 1]} \widetilde{W}_n(t)$. Then it holds:
\[
\overline{W}_n \stackrel{d}{=} \frac{1}{\sqrt{n}} \max_{k = 1, \ldots, n} S_k \quad \text{for all } n \in \mathbb{N}
\]
and
\[
\lim_{n \to \infty} P(\overline{W}_n \le x) = \sqrt{\frac{2}{\pi}} \int_0^x e^{-\frac{y^2}{2}}\, dy \quad \text{for all } x \ge 0.
\]
Without proof.
Proof of Theorem 3.3.1. We shall prove only the upper bound in Theorem 3.3.1.
From Lemma 3.3.1 and the continuous mapping theorem it follows for $x \ge 0$, $k \ge 1$ and $t_1, \ldots, t_k \in [0, 1]$ that
\[
\lim_{n \to \infty} P\Big( \max_{t \in \{t_1, \ldots, t_k\}} \widetilde{W}_n(t) > x \Big) = P\Big( \max_{t \in \{t_1, \ldots, t_k\}} W(t) > x \Big),
\]
since $(x_1, \ldots, x_k) \mapsto \max\{x_1, \ldots, x_k\}$ is continuous. Consequently, it holds that
\[
\liminf_{n \to \infty} P\Big( \max_{t \in [0, 1]} \widetilde{W}_n(t) > x \Big) \ge P\Big( \max_{t \in \{t_1, \ldots, t_k\}} W(t) > x \Big),
\]
since $\big\{ \max_{t \in \{t_1, \ldots, t_k\}} \widetilde{W}_n(t) > x \big\} \subset \big\{ \max_{t \in [0, 1]} \widetilde{W}_n(t) > x \big\}$. With $\{t_1, \ldots, t_k\} = \{\frac{1}{k}, \ldots, \frac{k}{k}\}$ and $\max_{t \in [0, 1]} W(t) = \lim_{k \to \infty} \max_{i = 1, \ldots, k} W\big(\frac{i}{k}\big)$ a.s. (and therefore particularly in distribution) it holds that
\[
\liminf_{n \to \infty} P\Big( \max_{t \in [0, 1]} \widetilde{W}_n(t) > x \Big) \ge \lim_{k \to \infty} P\Big( \max_{i = 1, \ldots, k} W\big(\tfrac{i}{k}\big) > x \Big) = P\Big( \max_{t \in [0, 1]} W(t) > x \Big).
\]
Conclusively, the assertion follows from Lemma 3.3.2.
Corollary 3.3.1
Let $\{W(t),\, t \ge 0\}$ be a Wiener process. Then it holds:
\[
P\Big( \lim_{t \to \infty} \frac{W(t)}{t} = 0 \Big) = 1.
\]
Proof Exercise.
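Theorem 3.3.1 can be illustrated by Monte Carlo along the lines of Lemmas 3.3.1–3.3.2: the maximum of a simulated path on a fine grid should exceed $x$ with probability close to $\sqrt{2/\pi} \int_x^{\infty} e^{-y^2/2}\, dy = \mathrm{erfc}(x / \sqrt{2})$. A Python sketch (grid size and sample size are arbitrary choices of ours):

```python
import math
import random

rng = random.Random(5)
n_steps, sims, x = 1000, 3000, 1.0
sd = math.sqrt(1.0 / n_steps)        # step variance 1/n over [0, 1]
count = 0
for _ in range(sims):
    w = m = 0.0
    for _ in range(n_steps):
        w += rng.gauss(0.0, sd)
        if w > m:
            m = w
    if m > x:
        count += 1
p_mc = count / sims
p_exact = math.erfc(x / math.sqrt(2.0))   # = sqrt(2/pi) * int_x^inf e^{-y^2/2} dy
```

The grid maximum slightly underestimates the continuous maximum (the path can peak between grid points), a discretization bias of order $n^{-1/2}$ that is visible for small `n_steps`.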
3.3.2 Invariance properties
Specific transformations of the Wiener process yield again a Wiener process.
Theorem 3.3.2
Let $\{W(t),\, t \ge 0\}$ be a Wiener process. Then the stochastic processes $\{Y_i(t),\, t \ge 0\}$, $i = 1, \ldots, 4$, with
\[
\begin{aligned}
Y_1(t) &= -W(t), && \text{(symmetry)} \\
Y_2(t) &= W(t + t_0) - W(t_0) \ \text{for a } t_0 > 0, && \text{(translation of the origin)} \\
Y_3(t) &= \sqrt{c}\, W\big(\tfrac{t}{c}\big) \ \text{for a } c > 0, && \text{(scaling)} \\
Y_4(t) &= t\, W\big(\tfrac{1}{t}\big),\ t > 0,\quad Y_4(0) = 0, && \text{(reflection at } t = 0)
\end{aligned}
\]
are Wiener processes as well.
Proof 1. $Y_i$, $i = 1, \ldots, 4$, have independent increments with $Y_i(t_2) - Y_i(t_1) \sim N(0, t_2 - t_1)$.
2. $Y_i(0) = 0$, $i = 1, \ldots, 4$.
3. $Y_i$, $i = 1, \ldots, 3$, have continuous trajectories; $\{Y_4(t),\, t \ge 0\}$ has continuous trajectories for $t > 0$.
4. We have to prove that $Y_4$ is continuous at $t = 0$, i.e. that $\lim_{t \downarrow 0} t\, W\big(\frac{1}{t}\big) = 0$. Indeed, $\lim_{t \downarrow 0} t\, W\big(\frac{1}{t}\big) = \lim_{t \to \infty} \frac{W(t)}{t} \overset{\text{a.s.}}{=} 0$ because of Corollary 3.3.1.
Corollary 3.3.2
Let $\{W(t),\, t \ge 0\}$ be the Wiener process. Then it holds:
\[
P\Big( \sup_{t \ge 0} W(t) = +\infty \Big) = P\Big( \inf_{t \ge 0} W(t) = -\infty \Big) = 1
\]
and consequently
\[
P\Big( \sup_{t \ge 0} W(t) = +\infty,\ \inf_{t \ge 0} W(t) = -\infty \Big) = 1.
\]
Proof For $x, c > 0$ it holds:
\[
P\Big( \sup_{t \ge 0} W(t) > x \Big) = P\Big( \sup_{t \ge 0} \sqrt{c}\, W\big(\tfrac{t}{c}\big) > x \Big) = P\Big( \sup_{t \ge 0} W(t) > \frac{x}{\sqrt{c}} \Big).
\]
Letting $c \to \infty$, it follows that $P(\sup_{t \ge 0} W(t) > x)$ does not depend on $x > 0$, hence $\sup_{t \ge 0} W(t)$ takes a.s. only the values $0$ and $+\infty$:
\[
P\Big( \Big\{ \sup_{t \ge 0} W(t) = 0 \Big\} \cup \Big\{ \sup_{t \ge 0} W(t) = +\infty \Big\} \Big) = P\Big( \sup_{t \ge 0} W(t) = 0 \Big) + P\Big( \sup_{t \ge 0} W(t) = +\infty \Big) = 1.
\]
Moreover it holds that
\[
P\Big( \sup_{t \ge 0} W(t) = 0 \Big) \le P\Big( W(1) \le 0,\ \sup_{t \ge 1} W(t) \le 0 \Big)
= P\Big( W(1) \le 0,\ \sup_{t \ge 1} (W(t) - W(1)) \le -W(1) \Big)
\]
\[
= \int_0^{\infty} P\Big( \sup_{t \ge 1} (W(t) - W(1)) \le x \,\Big|\, W(1) = -x \Big)\, P(-W(1) \in dx)
= \int_0^{\infty} P\Big( \sup_{t \ge 0} W(t) \le x \Big)\, P(-W(1) \in dx)
\]
\[
= \int_0^{\infty} P\Big( \sup_{t \ge 0} W(t) = 0 \Big)\, P(-W(1) \in dx)
= P\Big( \sup_{t \ge 0} W(t) = 0 \Big) \cdot \frac{1}{2},
\]
where we used that $\{W(t) - W(1),\, t \ge 1\}$ is a Wiener process independent of $W(1)$ and that $\sup_{t \ge 0} W(t) \in \{0, +\infty\}$ a.s. Thus $P(\sup_{t \ge 0} W(t) = 0) = 0$ and hence $P(\sup_{t \ge 0} W(t) = +\infty) = 1$.
Analogously one can show that $P(\inf_{t \ge 0} W(t) = -\infty) = 1$.
The remaining part of the claim follows from $P(A \cap B) = 1$ for any $A, B \in \mathcal{F}$ with $P(A) = P(B) = 1$.
Remark 3.3.1
$P(\sup_{t \ge 0} W(t) = +\infty,\ \inf_{t \ge 0} W(t) = -\infty) = 1$ implies that the trajectories of $W$ oscillate between positive and negative values on $[0, \infty)$ an infinite number of times.
Corollary 3.3.3
Let $\{W(t),\, t \ge 0\}$ be a Wiener process. Then it holds that
\[
P\big( \omega \in \Omega :\ W(\cdot, \omega) \text{ is nowhere differentiable in } [0, \infty) \big) = 1.
\]
Proof Since
\[
\{\omega \in \Omega :\ W(\cdot, \omega) \text{ is nowhere differentiable in } [0, \infty)\} = \bigcap_{n=0}^{\infty} \{\omega \in \Omega :\ W(\cdot, \omega) \text{ is nowhere differentiable in } [n, n+1]\},
\]
it is sufficient to show that $P(\omega \in \Omega :\ W(\cdot, \omega) \text{ is differentiable at some } t_0 = t_0(\omega) \in [0, 1]) = 0$. Define the set
\[
A_{nm} = \Big\{ \omega \in \Omega :\ \text{there exists a } t_0 = t_0(\omega) \in [0, 1] \text{ with } |W(t_0(\omega) + h, \omega) - W(t_0(\omega), \omega)| \le m h \ \ \forall h \in \Big[0, \frac{4}{n}\Big] \Big\}.
\]
Then it holds that
\[
\{\omega \in \Omega :\ W(\cdot, \omega) \text{ differentiable at some } t_0 = t_0(\omega)\} \subset \bigcup_{m \ge 1} \bigcup_{n \ge 1} A_{nm}.
\]
We still have to show that $P\big( \bigcup_{m \ge 1} \bigcup_{n \ge 1} A_{nm} \big) = 0$.
Let $k_0(\omega) = \min\{k \in \{1, 2, \ldots\} :\ \frac{k}{n} \ge t_0(\omega)\}$. Then it holds for $\omega \in A_{nm}$ and $j = 0, 1, 2$:
\[
\Big| W\Big(\frac{k_0(\omega) + j + 1}{n}, \omega\Big) - W\Big(\frac{k_0(\omega) + j}{n}, \omega\Big) \Big|
\le \Big| W\Big(\frac{k_0(\omega) + j + 1}{n}, \omega\Big) - W(t_0(\omega), \omega) \Big| + \Big| W\Big(\frac{k_0(\omega) + j}{n}, \omega\Big) - W(t_0(\omega), \omega) \Big| \le \frac{8m}{n}.
\]
Let $\Delta_{nk} = W\big(\frac{k+1}{n}\big) - W\big(\frac{k}{n}\big)$. Then it holds that
\[
P(A_{nm}) \le P\Big( \bigcup_{k=0}^{n} \bigcap_{j=0}^{2} \Big\{ |\Delta_{n, k+j}| \le \frac{8m}{n} \Big\} \Big)
\le \sum_{k=0}^{n} P\Big( \bigcap_{j=0}^{2} \Big\{ |\Delta_{n, k+j}| \le \frac{8m}{n} \Big\} \Big)
= \sum_{k=0}^{n} \Big( P\Big( |\Delta_{n 0}| \le \frac{8m}{n} \Big) \Big)^3
\le (n + 1) \Big( \frac{16 m}{\sqrt{2 \pi n}} \Big)^3 \to 0, \quad n \to \infty,
\]
by the independence and stationarity of the increments of the Wiener process (the density of $\Delta_{n0} \sim N(0, \frac{1}{n})$ is bounded by $\sqrt{\frac{n}{2\pi}}$). Since $A_{nm} \subset A_{n+1, m}$, it follows that $P\big( \bigcup_{n \ge 1} A_{nm} \big) = \lim_{n \to \infty} P(A_{nm}) = 0$, and hence $P\big( \bigcup_{m \ge 1} \bigcup_{n \ge 1} A_{nm} \big) = 0$.
Corollary 3.3.4
With probability 1 it holds:
\[
\sup_{n \ge 1}\ \sup_{0 \le t_0 < \ldots < t_n \le 1}\ \sum_{i=1}^{n} |W(t_i) - W(t_{i-1})| = \infty,
\]
i.e. $\{W(t),\, t \in [0, 1]\}$ possesses a.s. trajectories with unbounded variation.
Proof Since every continuous function $g : [0, 1] \to \mathbb{R}$ with bounded variation is differentiable almost everywhere, the assertion follows from Corollary 3.3.3.
Alternative proof
It is sufficient to show that $\lim_{n \to \infty} \sum_{i=1}^{2^n} \big| W\big(\frac{i t}{2^n}\big) - W\big(\frac{(i-1) t}{2^n}\big) \big| = \infty$ a.s.
Let $Z_n = \sum_{i=1}^{2^n} \Big[ \big( W\big(\frac{i t}{2^n}\big) - W\big(\frac{(i-1) t}{2^n}\big) \big)^2 - \frac{t}{2^n} \Big]$. Hence $E Z_n = 0$ and $E Z_n^2 = t^2\, 2^{-n+1}$, and with Chebyshev's inequality
\[
P(|Z_n| > \varepsilon) \le \frac{E Z_n^2}{\varepsilon^2} = \frac{t^2}{\varepsilon^2}\, 2^{-n+1}, \quad \text{i.e.} \quad \sum_{n=1}^{\infty} P(|Z_n| > \varepsilon) < \infty.
\]
From the lemma of Borel–Cantelli it follows that $\lim_{n \to \infty} Z_n = 0$ almost surely, and thus
\[
0 < t = \lim_{n \to \infty} \sum_{i=1}^{2^n} \Big( W\Big(\frac{i t}{2^n}\Big) - W\Big(\frac{(i-1) t}{2^n}\Big) \Big)^2
\le \liminf_{n \to \infty} \Big( \max_{1 \le k \le 2^n} \Big| W\Big(\frac{k t}{2^n}\Big) - W\Big(\frac{(k-1) t}{2^n}\Big) \Big| \Big) \sum_{i=1}^{2^n} \Big| W\Big(\frac{i t}{2^n}\Big) - W\Big(\frac{(i-1) t}{2^n}\Big) \Big|.
\]
Hence the assertion follows, since $W$ has continuous trajectories and therefore
\[
\lim_{n \to \infty} \max_{1 \le k \le 2^n} \Big| W\Big(\frac{k t}{2^n}\Big) - W\Big(\frac{(k-1) t}{2^n}\Big) \Big| = 0.
\]
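The alternative proof can be watched numerically on a single simulated path: the dyadic quadratic variation $\sum (\Delta W)^2$ stabilizes near $t$, while the total variation $\sum |\Delta W|$ grows like $2^{n/2}$. A Python sketch (one seeded path, $t = 1$; all names are ours):

```python
import random

rng = random.Random(6)
level = 12                                   # finest dyadic level, mesh 2^-12
fine = [rng.gauss(0.0, 2.0 ** (-level / 2)) for _ in range(2 ** level)]

def increments_at(n):
    """Aggregate the finest increments to the dyadic grid of mesh 2^-n."""
    step = 2 ** (level - n)
    return [sum(fine[i * step:(i + 1) * step]) for i in range(2 ** n)]

qv = {n: sum(d * d for d in increments_at(n)) for n in (6, 8, 12)}   # -> t = 1
tv = {n: sum(abs(d) for d in increments_at(n)) for n in (6, 8, 12)}  # blows up
```

Aggregating the finest increments guarantees that all levels describe the same path, so the comparison isolates the effect of the mesh: refining by a factor $2^4$ multiplies the total variation by roughly $2^2$.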
3.4 Additional exercises
Exercise 3.4.1
Give an intuitive (exact!) method to realize trajectories of a Wiener process $W = \{W(t),\, t \in [0, 1]\}$. Thereby use the independence and the distribution of the increments of $W$. Additionally, write a program in R for the simulation of paths of $W$. Draw three paths $t \mapsto W(t, \omega)$, $t \in [0, 1]$, in a common diagram.
Exercise 3.4.2
Given are the Wiener process $W = \{W(t),\, t \in [0, 1]\}$ and $L = \mathrm{argmax}_{t \in [0, 1]} W(t)$. Show that it holds:
\[
P(L \le x) = \frac{2}{\pi} \arcsin \sqrt{x}, \quad x \in [0, 1].
\]
Hint: Use the relation $\max_{r \in [0, t]} W(r) \stackrel{d}{=} |W(t)|$.
Exercise 3.4.3
For the simulation of a Wiener process $W = \{W(t),\, t \in [0, 1]\}$ we can also use the approximation
\[
W_n(t) = \sum_{k=1}^{n} S_k(t)\, z_k,
\]
where $S_k(t)$, $t \in [0, 1]$, $k \ge 1$, are the Schauder functions and $z_k \sim N(0, 1)$ i.i.d. random variables; the series converges almost surely for all $t \in [0, 1]$ as $n \to \infty$.
a) Show that for all $t \in [0, 1]$ the approximation $W_n(t)$ also converges in the $L^2$-sense to $W(t)$.
b) Write a program in R (alternatively: C) for the simulation of a Wiener process $W = \{W(t),\, t \in [0, 1]\}$.
c) Simulate three paths $t \mapsto W(t, \omega)$, $t \in [0, 1]$, and draw these paths into a common diagram. Hereby consider the sampling points $t_k = \frac{k}{n}$, $k = 0, \ldots, n$, with $n = 2^8 - 1$.
Exercise 3.4.4
For the Wiener process $W = \{W(t),\, t \ge 0\}$ we define the process of the maximum $M = \{M(t) = \max_{s \in [0, t]} W(s),\, t \ge 0\}$. Show that it holds:
a) The density $f_{M(t)}$ of the maximum $M(t)$ is given by
\[
f_{M(t)}(x) = \sqrt{\frac{2}{\pi t}}\, \exp\Big\{ -\frac{x^2}{2 t} \Big\}\, \mathbf{1}(x \ge 0).
\]
Hint: Use the property $P(M(t) > x) = 2 P(W(t) > x)$.
b) Expected value and variance of $M(t)$ are given by
\[
E M(t) = \sqrt{\frac{2 t}{\pi}}, \qquad \mathrm{Var}\, M(t) = t\, (1 - 2/\pi).
\]
Now we define $\tau_x = \mathrm{argmin}\{s \ge 0 :\ W(s) = x\}$ as the first point in time at which the Wiener process takes the value $x$.
c) Determine the density of $\tau_x$ and show that $E \tau_x = \infty$.
Exercise 3.4.5
Let $W = \{W(t),\, t \ge 0\}$ be a Wiener process. Show that the following processes are Wiener processes as well:
\[
W_1(t) = \begin{cases} 0, & t = 0, \\ t\, W(1/t), & t > 0, \end{cases}
\qquad
W_2(t) = \sqrt{c}\, W(t/c), \quad c > 0.
\]
Exercise 3.4.6
The Wiener process $W = \{W(t),\, t \ge 0\}$ is given. The quantity $Q(a, b)$ denotes the probability that the process crosses the line $y = a t + b$, $t \ge 0$, for $a, b > 0$. Prove that:
a) $Q(a, b) = Q(b, a)$ and $Q(a, b_1 + b_2) = Q(a, b_1)\, Q(a, b_2)$,
b) $Q(a, b)$ is given by $Q(a, b) = \exp\{-2 a b\}$.
4 Lévy Processes
4.1 Lévy Processes
Definition 4.1.1
A stochastic process $\{X(t),\, t \ge 0\}$ is called Lévy process if
1. $X(0) = 0$ a.s.,
2. $X$ has stationary and independent increments,
3. $X$ is stochastically continuous, i.e. for arbitrary $\varepsilon > 0$, $t_0 \ge 0$:
\[
\lim_{t \to t_0} P(|X(t) - X(t_0)| > \varepsilon) = 0.
\]
Remark 4.1.1
One easily sees that compound Poisson processes fulfill the three conditions, since for arbitrary $\varepsilon > 0$ it holds that
\[
P(|X(t) - X(t_0)| > \varepsilon) \le P(|X(t) - X(t_0)| > 0) \le 1 - e^{-\lambda |t - t_0|} \xrightarrow[t \to t_0]{} 0.
\]
Further, for the Wiener process it holds for arbitrary $\varepsilon > 0$ (say $t > t_0$) that
\[
P(|X(t) - X(t_0)| > \varepsilon) = \sqrt{\frac{2}{\pi (t - t_0)}} \int_{\varepsilon}^{\infty} \exp\Big\{ -\frac{y^2}{2 (t - t_0)} \Big\}\, dy
\overset{x = y / \sqrt{t - t_0}}{=} \sqrt{\frac{2}{\pi}} \int_{\frac{\varepsilon}{\sqrt{t - t_0}}}^{\infty} e^{-\frac{x^2}{2}}\, dx \xrightarrow[t \to t_0]{} 0.
\]
4.1.1 Infinite divisibility

Definition 4.1.2
Let $X : \Omega \to \mathbb{R}$ be an arbitrary random variable. Then $X$ is called infinitely divisible if for arbitrary $n \in \mathbb{N}$ there exist i.i.d. random variables $Y_1^{(n)}, \ldots, Y_n^{(n)}$ with $X \stackrel{d}{=} Y_1^{(n)} + \ldots + Y_n^{(n)}$.
Lemma 4.1.1
The random variable $X : \Omega \to \mathbb{R}$ is infinitely divisible if and only if the characteristic function $\varphi_X$ of $X$ can be expressed for every $n \ge 1$ in the form
\[
\varphi_X(s) = (\varphi_n(s))^n \quad \text{for all } s \in \mathbb{R},
\]
where the $\varphi_n$ are characteristic functions of random variables.
Proof "$\Rightarrow$": Let $Y_1^{(n)}, \ldots, Y_n^{(n)}$ be i.i.d. with $X \stackrel{d}{=} Y_1^{(n)} + \ldots + Y_n^{(n)}$. Hence it follows that $\varphi_X(s) = \prod_{i=1}^n \varphi_{Y_i^{(n)}}(s) = (\varphi_n(s))^n$ with $\varphi_n = \varphi_{Y_1^{(n)}}$.
"$\Leftarrow$": If $\varphi_X(s) = (\varphi_n(s))^n$, there exist i.i.d. $Y_1^{(n)}, \ldots, Y_n^{(n)}$ with characteristic function $\varphi_n$, and $\varphi_{Y_1^{(n)} + \ldots + Y_n^{(n)}}(s) = (\varphi_n(s))^n = \varphi_X(s)$. With the uniqueness theorem for characteristic functions it follows that $X \stackrel{d}{=} Y_1^{(n)} + \ldots + Y_n^{(n)}$.
Theorem 4.1.1
Let $\{X(t),\, t \ge 0\}$ be a Lévy process. Then the random variable $X(t)$ is infinitely divisible for every $t \ge 0$.
Proof For arbitrary $t \ge 0$ and $n \in \mathbb{N}$ it obviously holds that
\[
X(t) = \Big( X\Big(\frac{t}{n}\Big) - X(0) \Big) + \Big( X\Big(\frac{2t}{n}\Big) - X\Big(\frac{t}{n}\Big) \Big) + \ldots + \Big( X\Big(\frac{n t}{n}\Big) - X\Big(\frac{(n-1) t}{n}\Big) \Big).
\]
Since $X$ has independent and stationary increments, the summands are independent and identically distributed random variables.
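For a concrete instance of Theorem 4.1.1, take a Poisson process: $X(t) \sim \mathrm{Poi}(\lambda t)$ is the sum of $n$ i.i.d. $\mathrm{Poi}(\lambda t / n)$ increments, i.e. $\varphi_{X(t)} = \varphi_n^n$ with $\varphi_n(s) = \exp\{\frac{\lambda t}{n}(e^{is} - 1)\}$. A short numerical check in Python (parameter values arbitrary):

```python
import cmath

lam_t, n = 2.5, 7
for s in (0.3, 1.0, 2.2):
    phi_X = cmath.exp(lam_t * (cmath.exp(1j * s) - 1.0))        # cf of Poi(lam*t)
    phi_n = cmath.exp(lam_t / n * (cmath.exp(1j * s) - 1.0))    # cf of Poi(lam*t/n)
    assert abs(phi_X - phi_n ** n) < 1e-12
```

The identity is exact up to floating-point rounding, which is precisely the factorization required by Lemma 4.1.1.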
Lemma 4.1.2
Let $X_1, X_2, \ldots : \Omega \to \mathbb{R}$ be a sequence of random variables. If there exists a function $\varphi : \mathbb{R} \to \mathbb{C}$ such that $\varphi(s)$ is continuous at $s = 0$ and $\lim_{n \to \infty} \varphi_{X_n}(s) = \varphi(s)$ for all $s \in \mathbb{R}$, then $\varphi$ is the characteristic function of a random variable $X$ and it holds that $X_n \xrightarrow{d} X$.
Definition 4.1.3
Let $\nu$ be a measure on the measurable space $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$. Then $\nu$ is called a Lévy measure if $\nu(\{0\}) = 0$ and
\[
\int_{\mathbb{R}} \min(y^2, 1)\, \nu(dy) < \infty. \tag{4.1.1}
\]
Abb. 4.1: $y \mapsto \min(y^2, 1)$
Note
• Apparently every Lévy measure is $\sigma$-finite and
\[
\nu\big( (-\varepsilon, \varepsilon)^c \big) < \infty \quad \text{for all } \varepsilon > 0, \tag{4.1.2}
\]
where $(-\varepsilon, \varepsilon)^c = \mathbb{R} \setminus (-\varepsilon, \varepsilon)$.
• In particular, every finite measure $\nu$ with $\nu(\{0\}) = 0$ is a Lévy measure.
• If $\nu(dy) = g(y)\, dy$, then the behaviour $g(y) \sim \frac{1}{|y|^{\delta}}$, $y \to 0$, is admissible for $\delta \in (0, 3)$.
• An equivalent condition to (4.1.1) is
\[
\int_{\mathbb{R}} \frac{y^2}{1 + y^2}\, \nu(dy) < \infty, \quad \text{since} \quad \frac{y^2}{1 + y^2} \le \min(y^2, 1) \le \frac{2 y^2}{1 + y^2}. \tag{4.1.3}
\]
Theorem 4.1.2Let a > R, b C 0 be arbitrary and let ν be an arbitrary Lévy measure. Let the characteristicfunction of a random variable X Ω R be given through the function ϕ R C with
ϕs expias bs2
2 S
Reisy 1 isy1y > 1,1νdy¡ for all s > R. (4.1.4)
Then X is infinitely divisible.Remark 4.1.2 • The formula (4.1.4) is also called Lévy-Khintchine formula.
• The inversion of Theorem 4.1.2 also holds, hence every infinitely divisible random variablehas such a representation. Therefore the characteristic triplet a, b, ν is also called Lévycharacteristic of an infinitely divisible random variable.
Proof of Theorem 4.1.2
1st step: Show that $\varphi$ is a characteristic function.
• For $y \in [-1,1]$ it holds that
\[ \left| e^{isy} - 1 - isy \right| = \left| \sum_{k=0}^{\infty} \frac{(isy)^k}{k!} - 1 - isy \right| = \left| \sum_{k=2}^{\infty} \frac{(isy)^k}{k!} \right| \le y^2 \underbrace{\sum_{k=2}^{\infty} \frac{|s|^k}{k!}}_{=:\, c} = c\, y^2. \]
Hence it follows from (4.1.1) that the integral in (4.1.4) exists, so $\varphi$ is well defined.
• Let now $(c_n)$ be an arbitrary sequence of numbers with $c_1 > c_2 > \ldots > 0$ and $\lim_{n\to\infty} c_n = 0$. Then the function $\varphi_n : \mathbb{R} \to \mathbb{C}$ with
\[ \varphi_n(s) = \exp\left\{ is\left( a - \int_{(-c_n,c_n)^c \cap [-1,1]} y\, \nu(dy) \right) - \frac{bs^2}{2} \right\} \exp\left\{ \int_{(-c_n,c_n)^c} \left( e^{isy} - 1 \right) \nu(dy) \right\} \]
is the characteristic function of the sum of independent random variables $Z_1^{(n)}$ and $Z_2^{(n)}$, since
– the first factor is the characteristic function of the normal distribution with expectation $a - \int_{(-c_n,c_n)^c \cap [-1,1]} y\, \nu(dy)$ and variance $b$,
– the second factor is the characteristic function of a compound Poisson distribution with parameters
\[ \lambda = \nu\left( (-c_n, c_n)^c \right) \quad \text{and} \quad \mathbf{P}_U = \nu\left( \cdot \cap (-c_n, c_n)^c \right)/\nu\left( (-c_n, c_n)^c \right). \]
• Furthermore, $\lim_{n\to\infty} \varphi_n(s) = \varphi(s)$ for all $s \in \mathbb{R}$, where $\varphi$ is obviously continuous at $0$, since for the function $\psi : \mathbb{R} \to \mathbb{C}$ in the exponent of (4.1.4),
\[ \psi(s) = \int_{\mathbb{R}} \left( e^{isy} - 1 - isy\,\mathbf{1}(y \in [-1,1]) \right) \nu(dy) \quad \text{for all } s \in \mathbb{R}, \]
it holds that $|\psi(s)| \le c s^2 \int_{[-1,1]} y^2\, \nu(dy) + \int_{[-1,1]^c} |e^{isy} - 1|\, \nu(dy)$. From this and from (4.1.3) it follows by Lebesgue's dominated convergence theorem that $\lim_{s\to 0} \psi(s) = 0$.
• Lemma 4.1.2 yields that the function $\varphi$ given in (4.1.4) is the characteristic function of a random variable.
2nd step: The infinite divisibility of this random variable follows from Lemma 4.1.1 and from the fact that for arbitrary $n \in \mathbb{N}$, $\nu/n$ is also a Lévy measure and $\varphi(s) = (\tilde\varphi_n(s))^n$, where
\[ \tilde\varphi_n(s) = \exp\left\{ i\frac{a}{n}s - \frac{b}{n}\cdot\frac{s^2}{2} + \int_{\mathbb{R}} \left( e^{isy} - 1 - isy\,\mathbf{1}(y \in [-1,1]) \right) \frac{\nu}{n}(dy) \right\} \quad \text{for all } s \in \mathbb{R} \]
is again of the form (4.1.4). $\Box$
Remark 4.1.3
The map $\eta : \mathbb{R} \to \mathbb{C}$ with
\[ \eta(s) = ias - \frac{bs^2}{2} + \int_{\mathbb{R}} \left( e^{isy} - 1 - isy\,\mathbf{1}(y \in [-1,1]) \right) \nu(dy) \]
from (4.1.4) is called the Lévy exponent of this infinitely divisible distribution.
4.1.2 Lévy–Khintchine Representation
Let $X = \{X(t), t \ge 0\}$ be a Lévy process. We want to represent the characteristic function of $X(t)$, $t \ge 0$, through the Lévy–Khintchine formula.
Lemma 4.1.3
Let $\{X(t), t \ge 0\}$ be a stochastically continuous process, i.e. for all $\varepsilon > 0$ and $t_0 \ge 0$ it holds that $\lim_{t\to t_0} \mathbf{P}(|X(t) - X(t_0)| > \varepsilon) = 0$. Then for every $s \in \mathbb{R}$, $t \mapsto \varphi_{X(t)}(s)$ is a continuous map from $[0, \infty)$ to $\mathbb{C}$.
Proof
• $y \mapsto e^{isy}$ is continuous at $0$, i.e. for all $\varepsilon > 0$ there exists $\delta_1 > 0$ such that
\[ \sup_{y \in (-\delta_1, \delta_1)} \left| e^{isy} - 1 \right| < \frac{\varepsilon}{2}. \]
• $\{X(t), t \ge 0\}$ is stochastically continuous, i.e. for all $t_0 \ge 0$ there exists $\delta_2 > 0$ such that
\[ \sup_{t \ge 0,\, |t - t_0| < \delta_2} \mathbf{P}\left( |X(t) - X(t_0)| > \delta_1 \right) < \frac{\varepsilon}{4}. \]
Hence it follows that for $s \in \mathbb{R}$, $t \ge 0$ with $|t - t_0| < \delta_2$ it holds that
\begin{align*}
\left| \varphi_{X(t)}(s) - \varphi_{X(t_0)}(s) \right| &= \left| \mathbf{E}\left( e^{isX(t)} - e^{isX(t_0)} \right) \right| \le \mathbf{E}\left| e^{isX(t_0)} \right| \left| e^{is(X(t) - X(t_0))} - 1 \right| = \mathbf{E}\left| e^{is(X(t)-X(t_0))} - 1 \right| \\
&= \int_{(-\delta_1,\delta_1)} \left| e^{isy} - 1 \right| \mathbf{P}_{X(t)-X(t_0)}(dy) + \int_{(-\delta_1,\delta_1)^c} \underbrace{\left| e^{isy} - 1 \right|}_{\le 2}\, \mathbf{P}_{X(t)-X(t_0)}(dy) \\
&\le \sup_{y \in (-\delta_1,\delta_1)} \left| e^{isy} - 1 \right| + 2\, \mathbf{P}\left( |X(t) - X(t_0)| > \delta_1 \right) \le \varepsilon. \quad \Box
\end{align*}
Theorem 4.1.3
Let $\{X(t), t \ge 0\}$ be a Lévy process. Then for all $t \ge 0$ it holds that
\[ \varphi_{X(t)}(s) = e^{t\eta(s)}, \quad s \in \mathbb{R}, \]
where $\eta : \mathbb{R} \to \mathbb{C}$ is a continuous function. In particular, it holds that
\[ \varphi_{X(t)}(s) = e^{t\eta(s)} = \left( e^{\eta(s)} \right)^t = \left( \varphi_{X(1)}(s) \right)^t \quad \text{for all } s \in \mathbb{R},\ t \ge 0. \]
Proof Due to stationarity and independence of the increments we have
\[ \varphi_{X(t+t')}(s) = \mathbf{E}\, e^{isX(t+t')} = \mathbf{E}\left( e^{isX(t)}\, e^{is(X(t+t') - X(t))} \right) = \varphi_{X(t)}(s)\, \varphi_{X(t')}(s), \quad s \in \mathbb{R}. \]
For fixed $s \in \mathbb{R}$ let $g_s : [0, \infty) \to \mathbb{C}$ be defined by $g_s(t) = \varphi_{X(t)}(s)$. Then
\[ g_s(t + t') = g_s(t)\, g_s(t'), \quad t, t' \ge 0, \qquad g_s(0) = 1 \ (\text{since } X(0) = 0), \qquad g_s : [0, \infty) \to \mathbb{C} \text{ continuous (Lemma 4.1.3)}. \]
Hence there exists $\eta : \mathbb{R} \to \mathbb{C}$ such that $g_s(t) = e^{\eta(s)t}$ for all $s \in \mathbb{R}$, $t \ge 0$. Since $\varphi_{X(1)}(s) = e^{\eta(s)}$, it follows that $\eta$ is continuous. $\Box$
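The semigroup identity $\varphi_{X(t)} = (\varphi_{X(1)})^t$ can be checked directly for the homogeneous Poisson process with intensity $\lambda$, where $\varphi_{X(t)}(s) = \exp\{\lambda t (e^{is} - 1)\}$. A small numeric sketch (the parameter values are arbitrary choices):

```python
import cmath

lam, s, t = 0.5, 1.0, 2.5   # arbitrary intensity, argument and time

phi_1 = cmath.exp(lam * (cmath.exp(1j * s) - 1))        # phi_{X(1)}(s)
phi_t = cmath.exp(lam * t * (cmath.exp(1j * s) - 1))    # phi_{X(t)}(s)

# (phi_1)**t uses the principal branch of the complex power; this is
# legitimate here because Im(lam*(e^{is}-1)) = lam*sin(s) lies in (-pi, pi].
diff = abs(phi_t - phi_1 ** t)
```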
Lemma 4.1.4 (Prokhorov)
Let $\mu_1, \mu_2, \ldots$ be a sequence of finite measures on $\mathcal{B}(\mathbb{R})$ with
1. $\sup_{n \ge 1} \mu_n(\mathbb{R}) < c$, $c = \mathrm{const} < \infty$ (uniform boundedness),
2. for all $\varepsilon > 0$ there exists a compact $B_\varepsilon \in \mathcal{B}(\mathbb{R})$ which fulfills the tightness condition $\sup_{n \ge 1} \mu_n(B_\varepsilon^c) \le \varepsilon$.
Then there exist a subsequence $\mu_{n_1}, \mu_{n_2}, \ldots$ and a finite measure $\mu$ on $\mathcal{B}(\mathbb{R})$ such that for all bounded continuous $f : \mathbb{R} \to \mathbb{C}$ it holds that
\[ \lim_{k\to\infty} \int_{\mathbb{R}} f(y)\, \mu_{n_k}(dy) = \int_{\mathbb{R}} f(y)\, \mu(dy). \]
Proof See [14], pages 122–123.
Theorem 4.1.4
Let $\{X(t), t \ge 0\}$ be a Lévy process. Then there exist $a \in \mathbb{R}$, $b \ge 0$ and a Lévy measure $\nu$ such that
\[ \varphi_{X(1)}(s) = \exp\left\{ ias - \frac{bs^2}{2} + \int_{\mathbb{R}} \left( e^{isy} - 1 - isy\,\mathbf{1}(y \in [-1,1]) \right) \nu(dy) \right\} \quad \text{for all } s \in \mathbb{R}. \]
Proof For every sequence $(t_n)_{n\in\mathbb{N}} \subset (0, \infty)$ with $\lim_{n\to\infty} t_n = 0$ it holds that
\[ \eta(s) = \left. \frac{d}{dt}\, e^{t\eta(s)} \right|_{t=0} = \lim_{n\to\infty} \frac{e^{t_n\eta(s)} - 1}{t_n} = \lim_{n\to\infty} \frac{\varphi_{X(t_n)}(s) - 1}{t_n}, \tag{4.1.5} \]
since $\eta : \mathbb{R} \to \mathbb{C}$ is continuous. The latter convergence is even uniform in $s \in [-s_0, s_0]$ for any $s_0 > 0$, since Taylor's theorem yields
\[ \lim_{n\to\infty} \left| \eta(s) - \frac{e^{t_n\eta(s)} - 1}{t_n} \right| = \lim_{n\to\infty} \left| \eta(s) - \frac{1}{t_n} \sum_{k=1}^{\infty} \frac{(t_n\eta(s))^k}{k!} \right| = \lim_{n\to\infty} \left| \frac{1}{t_n} \sum_{k=2}^{\infty} \frac{(t_n\eta(s))^k}{k!} \right| \le \lim_{n\to\infty} M^2 t_n \sum_{k=2}^{\infty} \frac{(t_n M)^{k-2}}{(k-2)!} = \lim_{n\to\infty} M^2 t_n e^{t_n M} = 0, \]
where $M = \sup_{s \in [-s_0,s_0]} |\eta(s)| < \infty$.
Now let $t_n = \frac{1}{n}$ and let $\mathbf{P}_n$ be the distribution of $X(\frac{1}{n})$. Hence it follows that
\[ \lim_{n\to\infty} n \int_{\mathbb{R}} \left( e^{isy} - 1 \right) \mathbf{P}_n(dy) = \lim_{n\to\infty} \frac{\varphi_{X(1/n)}(s) - 1}{1/n} = \eta(s) \]
uniformly in $s \in [-s_0, s_0]$, and therefore
\[ \lim_{n\to\infty} \int_{-s_0}^{s_0} n \int_{\mathbb{R}} \left( e^{isy} - 1 \right) \mathbf{P}_n(dy)\, ds = \int_{-s_0}^{s_0} \eta(s)\, ds. \]
Consequently, by Fubini's theorem and $\frac{1}{2s_0} \int_{-s_0}^{s_0} (e^{isy} - 1)\, ds = \frac{\sin(s_0 y)}{s_0 y} - 1$,
\[ \lim_{n\to\infty} n \int_{\mathbb{R}} \left( 1 - \frac{\sin(s_0 y)}{s_0 y} \right) \mathbf{P}_n(dy) = -\lim_{n\to\infty} n \int_{\mathbb{R}} \frac{1}{2s_0} \int_{-s_0}^{s_0} \left( e^{isy} - 1 \right) ds\, \mathbf{P}_n(dy) = -\frac{1}{2s_0} \int_{-s_0}^{s_0} \eta(s)\, ds. \]
Since $\eta : \mathbb{R} \to \mathbb{C}$ is continuous with $\eta(0) = 0$, it follows from the mean value theorem that for all $\varepsilon > 0$ there exists $s_0 > 0$ such that $\left| \frac{1}{2s_0} \int_{-s_0}^{s_0} \eta(s)\, ds \right| < \varepsilon$. Since $1 - \frac{\sin(s_0 y)}{s_0 y} \ge \frac{1}{2}$ for $|s_0 y| \ge 2$,
it holds: for all $\varepsilon > 0$ there exists $s_0 > 0$ such that
\[ \limsup_{n\to\infty} \frac{n}{2} \int_{\{y :\, |y| \ge 2/s_0\}} \mathbf{P}_n(dy) \le \limsup_{n\to\infty} n \int_{\mathbb{R}} \left( 1 - \frac{\sin(s_0 y)}{s_0 y} \right) \mathbf{P}_n(dy) < \varepsilon. \]
Hence for all $\varepsilon > 0$ there exist $s_0 > 0$, $n_0 > 0$ such that
\[ n \int_{\{y :\, |y| \ge 2/s_0\}} \mathbf{P}_n(dy) \le 4\varepsilon \quad \text{for all } n \ge n_0. \]
Decreasing $s_0$ if necessary gives
\[ n \int_{\{y :\, |y| \ge 2/s_0\}} \mathbf{P}_n(dy) \le 4\varepsilon \quad \text{for all } n \ge 1. \]
Moreover,
\[ \frac{y^2}{1+y^2} \le c \left( 1 - \frac{\sin y}{y} \right) \quad \text{for all } y \ne 0 \text{ and some } c > 0. \]
Hence it follows that
\[ \sup_{n \ge 1} n \int_{\mathbb{R}} \frac{y^2}{1+y^2}\, \mathbf{P}_n(dy) \le c \quad \text{for some } c < \infty. \]
Let now $\mu_n : \mathcal{B}(\mathbb{R}) \to [0, \infty)$ be defined as
\[ \mu_n(B) = n \int_B \frac{y^2}{1+y^2}\, \mathbf{P}_n(dy) \quad \text{for all } B \in \mathcal{B}(\mathbb{R}). \]
It follows that $(\mu_n)_{n\in\mathbb{N}}$ is uniformly bounded, $\sup_{n\ge 1} \mu_n(\mathbb{R}) < c$. Furthermore, since $\frac{y^2}{1+y^2} \le 1$, it holds that $\sup_{n\ge 1} \mu_n\left( \{y :\, |y| > 2/s_0\} \right) \le 4\varepsilon$, so $(\mu_n)_{n\in\mathbb{N}}$ is tight. By Lemma 4.1.4 it holds: there exists a subsequence $(\mu_{n_k})_{k\in\mathbb{N}}$ such that
\[ \lim_{k\to\infty} \int_{\mathbb{R}} f(y)\, \mu_{n_k}(dy) = \int_{\mathbb{R}} f(y)\, \mu(dy) \]
for a finite measure $\mu$ and all continuous and bounded $f$. For $s \in \mathbb{R}$ let the function $f_s : \mathbb{R} \to \mathbb{C}$ be defined as
\[ f_s(y) = \begin{cases} \left( e^{isy} - 1 - is\sin y \right) \dfrac{1+y^2}{y^2}, & y \ne 0, \\[1mm] -\dfrac{s^2}{2}, & y = 0. \end{cases} \]
It follows that $f_s$ is bounded and continuous, and
\begin{align*}
\eta(s) &= \lim_{n\to\infty} n \int_{\mathbb{R}} \left( e^{isy} - 1 \right) \mathbf{P}_n(dy) = \lim_{n\to\infty} \left( \int_{\mathbb{R}} f_s(y)\, \mu_n(dy) + is\, n \int_{\mathbb{R}} \sin y\, \mathbf{P}_n(dy) \right) \\
&= \lim_{k\to\infty} \left( \int_{\mathbb{R}} f_s(y)\, \mu_{n_k}(dy) + is\, n_k \int_{\mathbb{R}} \sin y\, \mathbf{P}_{n_k}(dy) \right) = \int_{\mathbb{R}} f_s(y)\, \mu(dy) + \lim_{k\to\infty} is\, n_k \int_{\mathbb{R}} \sin y\, \mathbf{P}_{n_k}(dy),
\end{align*}
hence
\[ \eta(s) = ia's - \frac{bs^2}{2} + \int_{\mathbb{R}} \left( e^{isy} - 1 - is\sin y \right) \nu(dy) \]
for all $s \in \mathbb{R}$, with $a' = \lim_{k\to\infty} n_k \int_{\mathbb{R}} \sin y\, \mathbf{P}_{n_k}(dy)$ ($|a'| < \infty$, since all other limits above exist), $b = \mu(\{0\})$ and $\nu : \mathcal{B}(\mathbb{R}) \to [0, \infty)$,
\[ \nu(dy) = \begin{cases} \dfrac{1+y^2}{y^2}\, \mu(dy), & y \ne 0, \\ 0, & y = 0. \end{cases} \]
Furthermore,
\[ \int_{\mathbb{R}} \left| y\,\mathbf{1}(y \in [-1,1]) - \sin y \right| \nu(dy) < \infty, \quad \text{since} \quad \left| y\,\mathbf{1}(y \in [-1,1]) - \sin y \right| \frac{1+y^2}{y^2} < c \quad \text{for all } y \ne 0 \text{ and some } c > 0. \]
Hence it follows that
\[ \eta(s) = ias - \frac{bs^2}{2} + \int_{\mathbb{R}} \left( e^{isy} - 1 - isy\,\mathbf{1}(y \in [-1,1]) \right) \nu(dy) \quad \text{for all } s \in \mathbb{R}, \]
with $a = a' + \int_{\mathbb{R}} \left( y\,\mathbf{1}(y \in [-1,1]) - \sin y \right) \nu(dy)$. $\Box$
4.1.3 Examples
1. Wiener process (it is enough to look at $X(1)$): $X(1) \sim N(0,1)$, $\varphi_{X(1)}(s) = e^{-s^2/2}$, and hence
\[ (a, b, \nu) = (0, 1, 0). \]
Let $X = \{X(t), t \ge 0\}$ be a Wiener process with drift $\mu$, i.e. $X(t) = \mu t + \sigma W(t)$, where $W = \{W(t), t \ge 0\}$ is a Brownian motion. Then
\[ \varphi_{X(1)}(s) = \mathbf{E}\, e^{isX(1)} = \mathbf{E}\, e^{(\mu + \sigma W(1))is} = e^{\mu is}\, \varphi_{W(1)}(\sigma s) = e^{is\mu - \sigma^2 s^2/2}, \quad s \in \mathbb{R}, \]
and it follows that $(a, b, \nu) = (\mu, \sigma^2, 0)$.
2. Compound Poisson process with parameters $(\lambda, \mathbf{P}_U)$: $X(t) = \sum_{i=1}^{N(t)} U_i$, $N(t) \sim \mathrm{Pois}(\lambda t)$, $U_i$ i.i.d. $\sim \mathbf{P}_U$.
\begin{align*}
\varphi_{X(1)}(s) &= \exp\left\{ \lambda \int_{\mathbb{R}} \left( e^{isx} - 1 \right) \mathbf{P}_U(dx) \right\} \\
&= \exp\left\{ \lambda is \int_{\mathbb{R}} x\,\mathbf{1}(x \in [-1,1])\, \mathbf{P}_U(dx) + \lambda \int_{\mathbb{R}} \left( e^{isx} - 1 - isx\,\mathbf{1}(x \in [-1,1]) \right) \mathbf{P}_U(dx) \right\} \\
&= \exp\left\{ \lambda is \int_{-1}^{1} x\, \mathbf{P}_U(dx) + \lambda \int_{\mathbb{R}} \left( e^{isx} - 1 - isx\,\mathbf{1}(x \in [-1,1]) \right) \mathbf{P}_U(dx) \right\}, \quad s \in \mathbb{R}.
\end{align*}
Hence it follows that
\[ (a, b, \nu) = \left( \lambda \int_{-1}^{1} x\, \mathbf{P}_U(dx),\ 0,\ \lambda\mathbf{P}_U \right), \quad \lambda\mathbf{P}_U \text{ finite on } \mathbb{R}. \]
3. Process of Gauss–Poisson type: $X = \{X(t), t \ge 0\}$, $X(t) = X_1(t) + X_2(t)$, $t \ge 0$, with $X_1 = \{X_1(t), t \ge 0\}$ and $X_2 = \{X_2(t), t \ge 0\}$ independent, $X_1$ a Wiener process with drift $\mu$ and variance $\sigma^2$, $X_2$ a compound Poisson process with parameters $(\lambda, \mathbf{P}_U)$. Then
\begin{align*}
\varphi_{X(1)}(s) &= \varphi_{X_1(1)}(s)\, \varphi_{X_2(1)}(s) = \exp\left\{ is\mu - \frac{\sigma^2 s^2}{2} + \lambda \int_{\mathbb{R}} \left( e^{isx} - 1 \right) \mathbf{P}_U(dx) \right\} \\
&= \exp\left\{ is\left( \mu + \lambda \int_{-1}^{1} x\, \mathbf{P}_U(dx) \right) - \frac{\sigma^2 s^2}{2} + \lambda \int_{\mathbb{R}} \left( e^{isx} - 1 - isx\,\mathbf{1}(x \in [-1,1]) \right) \mathbf{P}_U(dx) \right\}, \quad s \in \mathbb{R}.
\end{align*}
Hence it follows that
\[ (a, b, \nu) = \left( \mu + \lambda \int_{-1}^{1} x\, \mathbf{P}_U(dx),\ \sigma^2,\ \lambda\mathbf{P}_U \right). \]
4. Stable Lévy process: $X = \{X(t), t \ge 0\}$ is a Lévy process such that $X(t)$ has an $\alpha$-stable distribution, $\alpha \in (0, 2]$. To introduce $\alpha$-stable laws, let us begin with an example.
If $X = W$ (Wiener process), then $X(1) \sim N(0,1)$. Let $Y, Y_1, \ldots, Y_n$ be i.i.d. $N(\mu, \sigma^2)$-variables. Since the normal distribution is stable w.r.t. convolution, it holds that
\[ Y_1 + \ldots + Y_n \sim N(n\mu, n\sigma^2) \stackrel{d}{=} \sqrt{n}\, Y + n\mu - \sqrt{n}\,\mu = n^{1/2}\, Y + \mu\left( n - n^{1/2} \right) = n^{1/\alpha}\, Y + \mu\left( n - n^{1/\alpha} \right), \quad \alpha = 2. \]
Definition 4.1.4
The distribution of a random variable $Y$ is called $\alpha$-stable if for all $n \in \mathbb{N}$ there exist independent copies $Y_1, \ldots, Y_n$ of $Y$ such that
\[ Y_1 + \ldots + Y_n \stackrel{d}{=} n^{1/\alpha}\, Y + d_n, \]
where $d_n$ is deterministic. The constant $\alpha \in (0, 2]$ is called the index of stability.
Moreover, one can show that
\[ d_n = \begin{cases} \mu\left( n - n^{1/\alpha} \right), & \alpha \ne 1, \\ \mu\, n \log n, & \alpha = 1. \end{cases} \]
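The normal case $\alpha = 2$ of Definition 4.1.4 can be verified on the level of characteristic functions: the cf of $Y_1 + \ldots + Y_n$ is $(e^{is\mu - \sigma^2 s^2/2})^n$, while the cf of $n^{1/2} Y + d_n$ is $e^{isd_n}$ times the cf of $Y$ evaluated at $\sqrt{n}\, s$. A quick numeric sketch with arbitrary parameter values:

```python
import cmath, math

mu, sigma, n, s = 0.7, 1.3, 5, 0.9   # arbitrary parameters

def cf_normal(s, m, var):
    # characteristic function of N(m, var)
    return cmath.exp(1j * s * m - var * s**2 / 2)

lhs = cf_normal(s, mu, sigma**2) ** n                          # cf of Y_1 + ... + Y_n
d_n = mu * (n - math.sqrt(n))                                  # d_n for alpha = 2
rhs = cmath.exp(1j * s * d_n) * cf_normal(math.sqrt(n) * s, mu, sigma**2)
diff = abs(lhs - rhs)
```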
Example 4.1.1
• $\alpha = 2$: normal distribution, with any mean and any variance.
• $\alpha = 1$: Cauchy distribution with parameters $(\mu, \sigma)$. The density:
\[ f_Y(x) = \frac{\sigma}{\pi\left( (x-\mu)^2 + \sigma^2 \right)}, \quad x \in \mathbb{R}. \]
It holds that $\mathbf{E}\, Y^2 = \infty$; $\mathbf{E}\, Y$ does not exist.
• $\alpha = \frac{1}{2}$: Lévy distribution with parameters $(\mu, \sigma)$. The density:
\[ f_Y(x) = \begin{cases} \left( \dfrac{\sigma}{2\pi} \right)^{1/2} \dfrac{1}{(x-\mu)^{3/2}}\, \exp\left\{ -\dfrac{\sigma}{2(x-\mu)} \right\}, & x > \mu, \\[1mm] 0, & \text{otherwise}. \end{cases} \]
These are the only examples of $\alpha$-stable distributions where an explicit form of the density is available. For other $\alpha \in (0,2)$, $\alpha \ne \frac{1}{2}, 1$, the $\alpha$-stable distribution is introduced through its characteristic function. In general it holds: if $Y$ is $\alpha$-stable, $\alpha \in (0,2)$, then $\mathbf{E}|Y|^p < \infty$ for $0 < p < \alpha$.
Definition 4.1.5
The distribution of a random variable $Y$ is called symmetric if $Y \stackrel{d}{=} -Y$.
If $Y$ has a symmetric $\alpha$-stable distribution, $\alpha \in (0,2]$, then
\[ \varphi_Y(s) = \exp\left\{ -c\,|s|^\alpha \right\}, \quad s \in \mathbb{R}, \]
for some $c \ge 0$. Indeed, it follows from the stability of $Y$ that
\[ \varphi_Y^n(s) = e^{id_n s}\, \varphi_Y\left( n^{1/\alpha} s \right), \quad s \in \mathbb{R}. \]
It follows that $d_n = 0$: by symmetry $\varphi_Y(s) = \varphi_Y(-s) = \overline{\varphi_Y(s)}$, so $\varphi_Y$ is real-valued, hence $e^{id_n s} = e^{-id_n s}$ for all $s \in \mathbb{R}$ and $d_n = 0$. The rest is left as an exercise.
Lemma 4.1.5 (Lévy–Khintchine representation of the characteristic function of a stable distribution)
Any stable law is infinitely divisible with Lévy triplet $(a, b, \nu)$, where $a \in \mathbb{R}$ is arbitrary,
\[ b = \begin{cases} \sigma^2, & \alpha = 2, \\ 0, & \alpha < 2, \end{cases} \qquad \nu(dx) = \begin{cases} 0, & \alpha = 2, \\ \left( \dfrac{c_1}{x^{1+\alpha}}\,\mathbf{1}(x > 0) + \dfrac{c_2}{|x|^{1+\alpha}}\,\mathbf{1}(x < 0) \right) dx, & \alpha < 2, \end{cases} \]
with $c_1, c_2 \ge 0$, $c_1 + c_2 > 0$. (Without proof.)
Exercise: Prove that, as $x \to \infty$,
\[ \mathbf{P}(|Y| \ge x) \sim \begin{cases} e^{-x^2/(2\sigma^2)} \ (\text{up to a factor}), & \alpha = 2, \\ c\, x^{-\alpha}, & \alpha < 2. \end{cases} \]
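The formula $\varphi_Y(s) = \exp\{-c|s|^\alpha\}$ can be illustrated for the standard Cauchy law ($\alpha = 1$, $\mu = 0$, $\sigma = 1$, where $c = 1$, so $\varphi_Y(s) = e^{-|s|}$) by comparing the empirical characteristic function of a sample with the closed form. A hedged Monte Carlo sketch, assuming NumPy's `standard_cauchy` sampler:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_cauchy(200_000)          # i.i.d. standard Cauchy sample

s = 1.0
phi_hat = np.mean(np.exp(1j * s * y))     # empirical characteristic function at s
phi_exact = np.exp(-abs(s))               # e^{-|s|} for the standard Cauchy law
err = abs(phi_hat - phi_exact)
```

Note that the empirical cf converges at the usual $1/\sqrt{n}$ rate even though the Cauchy law has no mean, since $|e^{isY}| = 1$ is bounded.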
Definition 4.1.6
The Lévy process $X = \{X(t), t \ge 0\}$ is called stable if $X(1)$ has an $\alpha$-stable distribution, $\alpha \in (0, 2]$ ($\alpha = 2$: Brownian motion (with drift)).
4.1.4 Subordinators
Definition 4.1.7
A Lévy process $X = \{X(t), t \ge 0\}$ is called a subordinator if for all $0 < t_1 < t_2$, $X(t_1) \le X(t_2)$ a.s. Since $X(0) = 0$ a.s., it follows that $X(t) \ge 0$, $t \ge 0$, a.s.
The class of subordinators is important since one can easily introduce $\int_a^b g(t)\, dX(t)$ as a Lebesgue–Stieltjes integral.
Theorem 4.1.5
The Lévy process $X = \{X(t), t \ge 0\}$ is a subordinator if and only if its Lévy–Khintchine representation can be expressed in the form
\[ \varphi_{X(1)}(s) = \exp\left\{ ias + \int_{\mathbb{R}} \left( e^{isx} - 1 \right) \nu(dx) \right\}, \quad s \in \mathbb{R}, \tag{4.1.6} \]
where $a \in [0, \infty)$ and $\nu$ is a Lévy measure with
\[ \nu\left( (-\infty, 0) \right) = 0, \qquad \int_0^\infty \min\{1, y\}\, \nu(dy) < \infty. \]
Proof (Sufficiency) It has to be shown that $X(t_2) \ge X(t_1)$ a.s. if $t_2 \ge t_1 \ge 0$. First of all we show that $X(1) \ge 0$ a.s.
If $\nu \equiv 0$, then $X(1) = a \ge 0$ a.s., since
\[ \varphi_{X(t)}(s) = \left( \varphi_{X(1)}(s) \right)^t = e^{iats}, \quad s \in \mathbb{R}, \]
i.e. $X(t) = at$ a.s., and therefore $X$ is a subordinator.
If $\nu((0, \infty)) > 0$, then there exists $N > 0$ such that for all $n \ge N$ it holds that $0 < \nu\left( (\tfrac{1}{n}, \infty) \right) < \infty$. It follows that
\[ \varphi_{X(1)}(s) = \exp\{ias\} \lim_{n\to\infty} \exp\left\{ \int_{1/n}^{\infty} \left( e^{isx} - 1 \right) \nu(dx) \right\} = e^{ias} \lim_{n\to\infty} \varphi_n(s), \quad s \in \mathbb{R}, \]
where $\varphi_n(s) = \exp\left\{ \int_{1/n}^{\infty} \left( e^{isx} - 1 \right) \nu(dx) \right\}$ is the characteristic function of a compound Poisson distribution with parameters
\[ \nu\left( \left( \tfrac{1}{n}, \infty \right) \right) \quad \text{and} \quad \nu\left( \cdot \cap \left( \tfrac{1}{n}, \infty \right) \right) \Big/ \nu\left( \left( \tfrac{1}{n}, \infty \right) \right) \]
for all $n \ge N$. Let $Z_n$ be the random variable with characteristic function $\varphi_n$. It holds $Z_n = \sum_{i=1}^{N_n} U_i$, $N_n \sim \mathrm{Pois}\left( \nu((\tfrac{1}{n}, \infty)) \right)$, $U_i \sim \nu\left( \cdot \cap (\tfrac{1}{n}, \infty) \right)/\nu\left( (\tfrac{1}{n}, \infty) \right)$; hence $Z_n \ge 0$ a.s. and
\[ X(1) \stackrel{d}{=} \underbrace{a}_{\ge 0} + \underbrace{\lim Z_n}_{\ge 0} \ge 0 \quad \text{a.s.} \]
Since $X$ is a Lévy process, it holds that
\[ X(1) = X\left( \tfrac{1}{n} \right) + \left( X\left( \tfrac{2}{n} \right) - X\left( \tfrac{1}{n} \right) \right) + \ldots + \left( X\left( \tfrac{n}{n} \right) - X\left( \tfrac{n-1}{n} \right) \right), \]
where, because of stationarity and independence of the increments, $X\left( \tfrac{k}{n} \right) - X\left( \tfrac{k-1}{n} \right) \ge 0$ a.s. for $1 \le k \le n$ and all $n$. Hence $X(q_2) - X(q_1) \ge 0$ a.s. for all $q_1, q_2 \in \mathbb{Q}$, $q_2 \ge q_1 \ge 0$. Now let $t_1, t_2 \in \mathbb{R}$ with $0 \le t_1 \le t_2$, and let $(q_1^{(n)})$, $(q_2^{(n)})$ be sequences of numbers from $\mathbb{Q}$ with $q_1^{(n)} \le q_2^{(n)}$ such that $q_1^{(n)} \to t_1$, $q_2^{(n)} \to t_2$, $n \to \infty$. For $\varepsilon > 0$,
\begin{align*}
\mathbf{P}\left( X(t_2) - X(t_1) < -\varepsilon \right) &= \mathbf{P}\Big( \left( X(t_2) - X(q_2^{(n)}) \right) + \underbrace{\left( X(q_2^{(n)}) - X(q_1^{(n)}) \right)}_{\ge 0} + \left( X(q_1^{(n)}) - X(t_1) \right) < -\varepsilon \Big) \\
&\le \mathbf{P}\left( \left( X(t_2) - X(q_2^{(n)}) \right) + \left( X(q_1^{(n)}) - X(t_1) \right) < -\varepsilon \right) \\
&\le \mathbf{P}\left( \left| X(t_2) - X(q_2^{(n)}) \right| > \tfrac{\varepsilon}{2} \right) + \mathbf{P}\left( \left| X(q_1^{(n)}) - X(t_1) \right| > \tfrac{\varepsilon}{2} \right) \xrightarrow[n\to\infty]{} 0,
\end{align*}
since $X$ is stochastically continuous. Then
\[ \mathbf{P}\left( X(t_2) - X(t_1) < -\varepsilon \right) = 0 \ \text{for all } \varepsilon > 0 \quad \text{and} \quad \mathbf{P}\left( X(t_2) - X(t_1) < 0 \right) = \lim_{\varepsilon \downarrow 0} \mathbf{P}\left( X(t_2) - X(t_1) < -\varepsilon \right) = 0, \]
hence $X(t_2) \ge X(t_1)$ a.s.
(Necessity) Let $X$ be a Lévy process which is a subordinator. It has to be shown that $\varphi_{X(1)}$ has the form (4.1.6). By the Lévy–Khintchine representation for $X(1)$ it holds that
\[ \varphi_{X(1)}(s) = \exp\left\{ ias - \frac{bs^2}{2} + \int_0^\infty \left( e^{isx} - 1 - isx\,\mathbf{1}(x \in [-1,1]) \right) \nu(dx) \right\}, \quad s \in \mathbb{R}. \]
The measure $\nu$ is concentrated on $(0, \infty)$, since $X(t) \ge 0$ a.s. for all $t \ge 0$ and, in the proof of Theorem 4.1.4, $\nu((-\infty, 0)) = 0$ can be chosen. Write
\[ \varphi_{X(1)}(s) = \underbrace{\exp\left\{ ias - \frac{bs^2}{2} \right\}}_{\varphi_{Y_1}(s)}\, \underbrace{\exp\left\{ \int_0^\infty \left( e^{isx} - 1 - isx\,\mathbf{1}(x \in [-1,1]) \right) \nu(dx) \right\}}_{\varphi_{Y_2}(s)}. \]
Hence it follows that $X(1) \stackrel{d}{=} Y_1 + Y_2$, where $Y_1$ and $Y_2$ are independent and $Y_1 \sim N(a, b)$; therefore $b = 0$. (Otherwise $Y_1$, and consequently $X(1)$, could attain negative values with positive probability.)
For all $\varepsilon \in (0,1)$,
\[ \varphi_{X(1)}(s) = \exp\left\{ is\left( a - \int_\varepsilon^1 x\, \nu(dx) \right) + \int_0^\varepsilon \left( e^{isx} - 1 - isx \right) \nu(dx) + \int_\varepsilon^\infty \left( e^{isx} - 1 \right) \nu(dx) \right\}. \]
It has to be shown that for $\varepsilon \to 0$ it holds that $\int_\varepsilon^\infty (e^{isx} - 1)\, \nu(dx) \to \int_0^\infty (e^{isx} - 1)\, \nu(dx) < \infty$ with $\int_0^1 \min\{x, 1\}\, \nu(dx) < \infty$. Write $\varphi_{X(1)}(s) = \exp\left\{ is\left( a - \int_\varepsilon^1 x\, \nu(dx) \right) \right\} \varphi_{Z_1}(s)\, \varphi_{Z_2}(s)$, where $Z_1$ and $Z_2$ are independent,
\[ \varphi_{Z_1}(s) = \exp\left\{ \int_0^\varepsilon \left( e^{isx} - 1 - isx \right) \nu(dx) \right\}, \qquad \varphi_{Z_2}(s) = \exp\left\{ \int_\varepsilon^\infty \left( e^{isx} - 1 \right) \nu(dx) \right\}, \quad s \in \mathbb{R}. \]
Then $X(1) \stackrel{d}{=} a - \int_\varepsilon^1 x\, \nu(dx) + Z_1 + Z_2$. There exist $\varphi_{Z_1}''(0) = -\mathbf{E} Z_1^2 < \infty$ and $\varphi_{Z_1}'(0) = 0 = i\,\mathbf{E} Z_1$, and it therefore follows that $\mathbf{E} Z_1 = 0$ and $\mathbf{P}(Z_1 \le 0) > 0$. On the other hand, $Z_2$ has a compound Poisson distribution with parameters $\left( \nu((\varepsilon, \infty)),\ \nu(\cdot \cap (\varepsilon, \infty))/\nu((\varepsilon, \infty)) \right)$, $\varepsilon \in (0,1)$; $\mathbf{P}(Z_2 \le 0) > 0$, since $\mathbf{P}(Z_2 = 0) > 0$. Hence
\[ \mathbf{P}(Z_1 + Z_2 \le 0) \ge \mathbf{P}(Z_1 \le 0,\ Z_2 \le 0) = \mathbf{P}(Z_1 \le 0)\, \mathbf{P}(Z_2 \le 0) > 0. \]
For $X(1)$ to be nonnegative it follows that $a - \int_\varepsilon^1 x\, \nu(dx) \ge 0$ for all $\varepsilon \in (0,1)$. Hence $a \ge 0$ and
\[ \int_0^\infty \min\{x, 1\}\, \nu(dx) < \infty. \]
Moreover, for $\varepsilon \to 0$ it holds that $Z_1 \xrightarrow{d} 0$ and consequently
\[ \varphi_{X(1)}(s) = \exp\left\{ is\left( a - \int_0^1 x\, \nu(dx) \right) + \int_0^\infty \left( e^{isx} - 1 \right) \nu(dx) \right\}, \quad s \in \mathbb{R}. \quad \Box \]
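A concrete instance of the drift-plus-jumps form (4.1.6) is the Gamma process of Exercise 4.2.5 below: $X(t) \sim \Gamma(b, pt)$ has nonnegative marginals and Laplace transform $\mathbf{E}\, e^{-uX(t)} = (1 + u/b)^{-pt}$, a standard fact that can be probed by Monte Carlo. A hedged sketch (parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
b, p, t, u = 1.5, 2.0, 1.0, 1.0   # arbitrary rate b, shape rate p, time t, argument u

# X(t) ~ Gamma with shape p*t and rate b (NumPy parametrizes by scale = 1/rate)
x = rng.gamma(shape=p * t, scale=1.0 / b, size=400_000)
lap_hat = np.mean(np.exp(-u * x))            # Monte Carlo estimate of E e^{-u X(t)}
lap_exact = (1 + u / b) ** (-p * t)          # (1 + u/b)^{-pt}
err = abs(lap_hat - lap_exact)
```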
Example 4.1.2 ($\alpha$-stable subordinator)
Let $X = \{X(t), t \ge 0\}$ be a subordinator with $a = 0$ and Lévy measure
\[ \nu(dx) = \begin{cases} \dfrac{\alpha}{\Gamma(1-\alpha)}\, \dfrac{1}{x^{1+\alpha}}\, dx, & x > 0, \\ 0, & x \le 0, \end{cases} \]
where $\alpha \in (0,1)$. By Lemma 4.1.5 it follows that $X$ is an $\alpha$-stable Lévy process. We show that $l_{X(t)}(s) := \mathbf{E}\, e^{-sX(t)} = e^{-ts^\alpha}$ for all $s, t \ge 0$. It holds that
\[ \varphi_{X(t)}(s) = \left( \varphi_{X(1)}(s) \right)^t = \exp\left\{ t \int_0^\infty \left( e^{isx} - 1 \right) \frac{\alpha}{\Gamma(1-\alpha)}\, \frac{dx}{x^{1+\alpha}} \right\}, \quad s \in \mathbb{R}. \]
It has to be shown that
\[ u^\alpha = \frac{\alpha}{\Gamma(1-\alpha)} \int_0^\infty \left( 1 - e^{-ux} \right) \frac{dx}{x^{1+\alpha}}, \quad u \ge 0. \]
This is enough, since $\varphi_{X(t)}$ can be continued analytically to $\{z \in \mathbb{C} : \operatorname{Im} z \ge 0\}$, i.e. $\varphi_{X(t)}(iu) = l_{X(t)}(u)$, $u \ge 0$. In fact, it holds that
\begin{align*}
\int_0^\infty \left( 1 - e^{-ux} \right) \frac{dx}{x^{1+\alpha}} &= \int_0^\infty u \int_0^x e^{-uy}\, dy\; x^{-1-\alpha}\, dx \stackrel{\text{Fubini}}{=} \int_0^\infty \int_y^\infty u\, e^{-uy}\, x^{-1-\alpha}\, dx\, dy \\
&= \int_0^\infty \int_y^\infty x^{-1-\alpha}\, dx\; u\, e^{-uy}\, dy = \frac{u}{\alpha} \int_0^\infty e^{-uy}\, y^{-\alpha}\, dy \\
&\stackrel{\text{subst. } z = uy}{=} \frac{u}{\alpha} \int_0^\infty e^{-z}\, z^{-\alpha} u^{\alpha}\, \frac{dz}{u} = \frac{u^\alpha}{\alpha} \int_0^\infty e^{-z}\, z^{(1-\alpha)-1}\, dz = \frac{u^\alpha}{\alpha}\, \Gamma(1-\alpha),
\end{align*}
and hence it follows that $l_{X(t)}(s) = e^{-ts^\alpha}$, $t, s \ge 0$.
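For $\alpha = \frac{1}{2}$ the marginal law of $X(t)$ is the Lévy distribution of Example 4.1.1 with $\mu = 0$, $\sigma = t^2/2$, which admits the well-known representation $X(t) \stackrel{d}{=} t^2/(2Z^2)$ with $Z \sim N(0,1)$. This allows a direct Monte Carlo check of $l_{X(t)}(s) = e^{-t\sqrt{s}}$; a hedged sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
t, s = 1.0, 2.0

z = rng.standard_normal(400_000)
x = t**2 / (2 * z**2)                  # X(t) =d t^2/(2 Z^2), i.e. Levy(0, t^2/2)
lap_hat = np.mean(np.exp(-s * x))      # Monte Carlo estimate of E e^{-s X(t)}
lap_exact = np.exp(-t * np.sqrt(s))    # e^{-t sqrt(s)}
err = abs(lap_hat - lap_exact)
```

The Laplace transform is bounded in $[0,1]$, so the Monte Carlo estimate converges at rate $1/\sqrt{n}$ despite the heavy tails of $X(t)$.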
4.2 Additional Exercises
Exercise 4.2.1
Let $X$ be a random variable with distribution function $F$ and characteristic function $\varphi$. Show that the following statements hold:
a) If $X$ is infinitely divisible, then $\varphi(t) \ne 0$ for all $t \in \mathbb{R}$. Hint: Show that $\lim_{n\to\infty} |\varphi_n(s)|^2 = 1$ for all $s \in \mathbb{R}$, if $\varphi(s) = (\varphi_n(s))^n$. Note further that $|\varphi_n(s)|^2$ is again a characteristic function and that $\lim_{n\to\infty} x^{1/n} = 1$ holds for $x > 0$.
b) Give an example (with explanation) of a distribution which is not infinitely divisible.
Exercise 4.2.2
Let $X = \{X(t), t \ge 0\}$ be a Lévy process. Show that the random variable $X(t)$ is then infinitely divisible for every $t \ge 0$.
Exercise 4.2.3
Show that the sum of two independent Lévy processes is again a Lévy process, and state the corresponding Lévy characteristic.
Exercise 4.2.4
Consider the function $\varphi : \mathbb{R} \to \mathbb{C}$ with $\varphi(t) = e^{\psi(t)}$, where
\[ \psi(t) = 2 \sum_{k=-\infty}^{\infty} 2^{-k} \left( \cos(2^k t) - 1 \right). \]
Show that $\varphi(t)$ is the characteristic function of an infinitely divisible distribution. Hint: Consider the Lévy–Khintchine representation with measure $\nu(\{\pm 2^k\}) = 2^{-k}$, $k \in \mathbb{Z}$.
Exercise 4.2.5
Let the Lévy process $\{X(t), t \ge 0\}$ be a Gamma process with parameters $b, p > 0$, that is, for every $t \ge 0$ it holds that $X(t) \sim \Gamma(b, pt)$. Show that $\{X(t), t \ge 0\}$ is a subordinator with Laplace exponent $\xi(u) = \int_0^\infty \left( 1 - e^{-uy} \right) \nu(dy)$, where $\nu(dy) = p\, y^{-1} e^{-by}\, dy$, $y > 0$. (The Laplace exponent of $\{X(t), t \ge 0\}$ is the function $\xi : [0, \infty) \to [0, \infty)$ for which $\mathbf{E}\, e^{-uX(t)} = e^{-t\xi(u)}$ holds for arbitrary $t, u \ge 0$.)
Exercise 4.2.6
Let $\{X(t), t \ge 0\}$ be a Lévy process with Lévy exponent $\eta$ and $\{\tau(s), s \ge 0\}$ an independent subordinator with Lévy exponent $\gamma$. Let the stochastic process $Y$ be defined as $Y = \{X(\tau(s)), s \ge 0\}$.
(a) Show that
\[ \mathbf{E}\, e^{i\theta Y(s)} = e^{\gamma(-i\eta(\theta))\, s}, \quad \theta \in \mathbb{R}. \]
Hint: Since $\tau$ is a process with non-negative values, it holds that $\mathbf{E}\, e^{i\theta\tau(s)} = e^{\gamma(\theta)s}$ for all $\theta \in \{z \in \mathbb{C} : \operatorname{Im} z \ge 0\}$ through the analytic continuation of Theorem 4.1.3, where $\operatorname{Im} z$ denotes the imaginary part of $z$.
(b) Show that $Y$ is a Lévy process with Lévy exponent $\gamma(-i\eta(\cdot))$.
Exercise 4.2.7
Let $\{X(t), t \ge 0\}$ be a compound Poisson process with Lévy measure
\[ \nu(dx) = \frac{\lambda\sqrt{2}}{\sigma\sqrt{\pi}}\, e^{-\frac{x^2}{2\sigma^2}}\, dx, \quad x \in \mathbb{R}, \]
where $\lambda, \sigma > 0$. Show that $\{\sigma W(N(t)), t \ge 0\}$ has the same finite-dimensional distributions as $X$, where $\{N(s), s \ge 0\}$ is a Poisson process with intensity $2\lambda$ and $W$ is a standard Wiener process independent of $N$.
Hints to Exercise 4.2.6 a) and Exercise 4.2.7:
• In order to calculate the expectation of the characteristic function, the identity $\mathbf{E}\, X = \mathbf{E}\left( \mathbf{E}(X \mid Y) \right) = \int_{\mathbb{R}} \mathbf{E}(X \mid Y = y)\, F_Y(dy)$ for two random variables $X$ and $Y$ can be used. In doing so, one should condition on $\tau(s)$.
• $\int_{-\infty}^{\infty} \cos(sy)\, e^{-\frac{y^2}{2a}}\, dy = \sqrt{2\pi a}\, e^{-\frac{as^2}{2}}$ for $a > 0$ and $s \in \mathbb{R}$.
Exercise 4.2.8
Let $W$ be a standard Wiener process and $\tau$ an independent $\frac{\alpha}{2}$-stable subordinator, where $\alpha \in (0,2)$. Show that $\{W(\tau(s)), s \ge 0\}$ is an $\alpha$-stable Lévy process.
Exercise 4.2.9
Show that the subordinator $T$ with marginal density
\[ f_{T(t)}(s) = \frac{t}{2\sqrt{\pi}}\, s^{-3/2}\, e^{-\frac{t^2}{4s}}\, \mathbf{1}(s > 0) \]
is a $\frac{1}{2}$-stable subordinator. (Hint: Differentiate the Laplace transform of $T(t)$ and solve the resulting differential equation.)
5 Martingales

5.1 Basic Ideas

Let $(\Omega, \mathcal{F}, \mathbf{P})$ be a complete probability space.
Definition 5.1.1
Let $\{\mathcal{F}_t, t \ge 0\}$ be a family of $\sigma$-algebras $\mathcal{F}_t \subset \mathcal{F}$. It is called
1. a filtration if $\mathcal{F}_s \subseteq \mathcal{F}_t$, $0 \le s < t$;
2. a complete filtration if it is a filtration such that $\mathcal{F}_0$ (and therefore every $\mathcal{F}_s$, $s > 0$) contains all sets of zero probability. Later on we will always assume that we have a complete filtration;
3. a right-continuous filtration if for all $t \ge 0$, $\mathcal{F}_t = \bigcap_{s > t} \mathcal{F}_s$;
4. the natural filtration of a stochastic process $\{X(t), t \ge 0\}$ if it is generated by the past of the process until time $t \ge 0$, i.e. for all $t \ge 0$, $\mathcal{F}_t$ is the smallest $\sigma$-algebra which contains the sets $\{\omega \in \Omega : (X(t_1), \ldots, X(t_n)) \in B\}$ for all $n \in \mathbb{N}$, $0 \le t_1, \ldots, t_n \le t$, $B \in \mathcal{B}(\mathbb{R}^n)$.
A random variable $\tau : \Omega \to \mathbb{R}_+$ is called a stopping time (w.r.t. the filtration $\{\mathcal{F}_t, t \ge 0\}$) if for all $t \ge 0$, $\{\omega \in \Omega : \tau(\omega) \le t\} \in \mathcal{F}_t$.
If $\{\mathcal{F}_t, t \ge 0\}$ is the natural filtration of a stochastic process $\{X(t), t \ge 0\}$, then $\tau$ being a stopping time means that by looking at the past of the process $X$ one can tell whether the moment $\tau$ has already occurred.
Lemma 5.1.1
Let $\{\mathcal{F}_t, t \ge 0\}$ be a right-continuous filtration. $\tau$ is a stopping time w.r.t. $\{\mathcal{F}_t, t \ge 0\}$ if and only if $\{\tau < t\} = \{\omega \in \Omega : \tau(\omega) < t\} \in \mathcal{F}_t$ for all $t \ge 0$.
Proof "$\Leftarrow$": Let $\{\tau < t\} \in \mathcal{F}_t$, $t \ge 0$. To show: $\{\tau \le t\} \in \mathcal{F}_t$. For all $\varepsilon > 0$,
\[ \{\tau \le t\} = \bigcap_{s \in (t, t+\varepsilon)} \{\tau < s\} \in \bigcap_{s > t} \mathcal{F}_s = \mathcal{F}_t. \]
"$\Rightarrow$": To show: $\{\tau \le t\} \in \mathcal{F}_t$, $t \ge 0$, implies $\{\tau < t\} \in \mathcal{F}_t$, $t \ge 0$. Indeed,
\[ \{\tau < t\} = \bigcup_{s \in (0,t)} \{\tau \le t - s\} \in \bigcup_{s \in (0,t)} \mathcal{F}_{t-s} \subseteq \mathcal{F}_t. \quad \Box \]
Definition 5.1.2
Let $(\Omega, \mathcal{F}, \mathbf{P})$ be a probability space, $\{\mathcal{F}_t, t \ge 0\}$ a filtration ($\mathcal{F}_t \subset \mathcal{F}$, $t \ge 0$) and $X = \{X(t), t \ge 0\}$ a stochastic process on $(\Omega, \mathcal{F}, \mathbf{P})$. $X$ is adapted w.r.t. the filtration $\{\mathcal{F}_t, t \ge 0\}$ if $X(t)$ is $\mathcal{F}_t$-measurable for all $t \ge 0$, i.e. for all $B \in \mathcal{B}(\mathbb{R})$, $\{X(t) \in B\} \in \mathcal{F}_t$.
Definition 5.1.3
The time $\tau_B(\omega) = \inf\{t \ge 0 : X(t) \in B\}$, $\omega \in \Omega$, is called the first hitting time of the set $B \in \mathcal{B}(\mathbb{R})$ by the stochastic process $X = \{X(t), t \ge 0\}$ (also called: first passage time, first entrance time).
Theorem 5.1.1
Let $\{\mathcal{F}_t, t \ge 0\}$ be a right-continuous filtration and $X = \{X(t), t \ge 0\}$ an adapted (w.r.t. $\{\mathcal{F}_t, t \ge 0\}$) càdlàg process. For open $B \subset \mathbb{R}$, $\tau_B$ is a stopping time. If $B$ is closed, then $\tilde\tau_B(\omega) = \inf\{t \ge 0 : X(t) \in B \text{ or } X(t-) \in B\}$ is a stopping time, where $X(t-) = \lim_{s \uparrow t} X(s)$.
Proof
1. Let $B \in \mathcal{B}(\mathbb{R})$ be open. Because of Lemma 5.1.1 it is enough to show that $\{\tau_B < t\} \in \mathcal{F}_t$, $t \ge 0$. Because of the right-continuity of the trajectories of $X$ it holds that
\[ \{\tau_B < t\} = \bigcup_{s \in \mathbb{Q} \cap [0,t)} \{X(s) \in B\} \in \bigcup_{s \in \mathbb{Q} \cap [0,t)} \mathcal{F}_s \subseteq \mathcal{F}_t, \quad \text{since } \mathcal{F}_s \subseteq \mathcal{F}_t,\ s < t. \]
2. Let $B \in \mathcal{B}(\mathbb{R})$ be closed. For all $\varepsilon > 0$ let $B_\varepsilon = \{x \in \mathbb{R} : d(x, B) < \varepsilon\}$ be the parallel set of $B$, where $d(x, B) = \inf_{y \in B} |x - y|$. $B_\varepsilon$ is open for all $\varepsilon > 0$, and
\[ \{\tilde\tau_B \le t\} = \bigcap_{n \ge 1}\ \bigcup_{s \in (\mathbb{Q} \cap [0,t)) \cup \{t\}} \left\{ X(s) \in B_{1/n} \right\} \in \mathcal{F}_t, \]
since $X$ is adapted w.r.t. $\{\mathcal{F}_t, t \ge 0\}$. $\Box$
Lemma 5.1.2
Let $\tau_1, \tau_2$ be stopping times w.r.t. the filtration $\{\mathcal{F}_t, t \ge 0\}$. Then $\min\{\tau_1, \tau_2\}$, $\max\{\tau_1, \tau_2\}$, $\tau_1 + \tau_2$ and $\alpha\tau_1$, $\alpha \ge 1$, are stopping times (w.r.t. $\{\mathcal{F}_t, t \ge 0\}$).
Proof For all $t \ge 0$ it holds that
\[ \{\min\{\tau_1, \tau_2\} \le t\} = \underbrace{\{\tau_1 \le t\}}_{\in \mathcal{F}_t} \cup \underbrace{\{\tau_2 \le t\}}_{\in \mathcal{F}_t} \in \mathcal{F}_t, \qquad \{\max\{\tau_1, \tau_2\} \le t\} = \{\tau_1 \le t\} \cap \{\tau_2 \le t\} \in \mathcal{F}_t, \]
\[ \{\alpha\tau_1 \le t\} = \left\{ \tau_1 \le \tfrac{t}{\alpha} \right\} \in \mathcal{F}_{t/\alpha} \subseteq \mathcal{F}_t, \quad \text{since } \tfrac{t}{\alpha} \le t, \]
\[ \{\tau_1 + \tau_2 \le t\}^c = \{\tau_1 + \tau_2 > t\} = \{\tau_1 > t\} \cup \{\tau_2 > t\} \cup \{\tau_1 \ge t,\ \tau_2 > 0\} \cup \{\tau_2 \ge t,\ \tau_1 > 0\} \cup \{0 < \tau_2 < t,\ \tau_1 + \tau_2 > t\} \cup \{0 < \tau_1 < t,\ \tau_1 + \tau_2 > t\}, \]
where the first four events obviously belong to $\mathcal{F}_t$. To show: $\{0 < \tau_2 < t,\ \tau_1 + \tau_2 > t\} \in \mathcal{F}_t$ (that also $\{0 < \tau_1 < t,\ \tau_1 + \tau_2 > t\} \in \mathcal{F}_t$ works analogously):
\[ \{0 < \tau_2 < t,\ \tau_1 + \tau_2 > t\} = \bigcup_{s \in \mathbb{Q} \cap (0,t)} \{s < \tau_2 < t,\ \tau_1 > t - s\} \in \mathcal{F}_t. \quad \Box \]
Theorem 5.1.2
Let $\tau$ be an a.s. finite stopping time w.r.t. the filtration $\{\mathcal{F}_t, t \ge 0\}$ on the probability space $(\Omega, \mathcal{F}, \mathbf{P})$, i.e. $\mathbf{P}(\tau < \infty) = 1$. Then there exists a sequence of discrete stopping times $(\tau_n)_{n\in\mathbb{N}}$, $\tau_1 \ge \tau_2 \ge \tau_3 \ge \ldots$, such that $\tau_n \downarrow \tau$, $n \to \infty$, a.s.
Proof For all $n \in \mathbb{N}$ let
\[ \tau_n(\omega) = \begin{cases} 0, & \text{if } \tau(\omega) = 0, \\ \frac{k+1}{2^n}, & \text{if } \frac{k}{2^n} < \tau(\omega) \le \frac{k+1}{2^n} \text{ for some } k \in \mathbb{N}_0. \end{cases} \]
For all $t \ge 0$ and all $n \in \mathbb{N}$ there exists $k \in \mathbb{N}_0$ with $\frac{k}{2^n} \le t < \frac{k+1}{2^n}$, i.e. it holds that
\[ \{\tau_n \le t\} = \left\{ \tau_n \le \tfrac{k}{2^n} \right\} = \left\{ \tau \le \tfrac{k}{2^n} \right\} \in \mathcal{F}_{k/2^n} \subseteq \mathcal{F}_t, \]
so $\tau_n$ is a stopping time. Obviously $\tau_n \downarrow \tau$, $n \to \infty$, a.s. $\Box$
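The dyadic approximation in the proof can be written down explicitly: for $\tau > 0$, $\tau_n = \lceil 2^n\tau \rceil / 2^n$, and one checks $\tau \le \tau_{n+1} \le \tau_n \le \tau + 2^{-n}$. A minimal sketch:

```python
import math

def tau_n(tau, n):
    # Discrete dyadic approximation from Theorem 5.1.2:
    # tau_n = (k+1)/2^n whenever k/2^n < tau <= (k+1)/2^n, and tau_n = 0 if tau = 0.
    if tau == 0:
        return 0.0
    return math.ceil(tau * 2**n) / 2**n

tau = 0.3                                          # an arbitrary realisation of tau
approx = [tau_n(tau, n) for n in range(1, 12)]     # tau_1, ..., tau_11
errors = [a - tau for a in approx]                 # nonnegative, at most 2^{-n}
```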
Corollary 5.1.1
Let $\tau$ be an a.s. finite stopping time w.r.t. the filtration $\{\mathcal{F}_t, t \ge 0\}$ and $X = \{X(t), t \ge 0\}$ a càdlàg process on $(\Omega, \mathcal{F}, \mathbf{P})$, $\mathcal{F}_t \subset \mathcal{F}$ for all $t \ge 0$. Then $X(\tau(\omega), \omega)$, $\omega \in \Omega$, is a random variable on $(\Omega, \mathcal{F}, \mathbf{P})$.
Proof To show: $X(\tau) : \Omega \to \mathbb{R}$ is measurable, i.e. for all $B \in \mathcal{B}(\mathbb{R})$, $\{X(\tau) \in B\} \in \mathcal{F}$. Let $\tau_n \downarrow \tau$, $n \to \infty$, be as in Theorem 5.1.2. Since $X$ is càdlàg, it holds that $X(\tau_n) \xrightarrow[n\to\infty]{} X(\tau)$ a.s. Then $X(\tau)$ is $\mathcal{F}$-measurable as the limit of the $X(\tau_n)$, which are themselves $\mathcal{F}$-measurable. Indeed, for all $B \in \mathcal{B}(\mathbb{R})$ it holds that
\[ \{X(\tau_n) \in B\} = \bigcup_{k=0}^{\infty} \Big( \underbrace{\left\{ \tau_n = \tfrac{k}{2^n} \right\}}_{\in \mathcal{F}} \cap \underbrace{\left\{ X\left( \tfrac{k}{2^n} \right) \in B \right\}}_{\in \mathcal{F}} \Big) \in \mathcal{F}. \quad \Box \]
5.2 (Sub-, Super-)Martingales
Definition 5.2.1
Let $X = \{X(t), t \ge 0\}$ be a stochastic process adapted w.r.t. a filtration $\{\mathcal{F}_t, t \ge 0\}$, $\mathcal{F}_t \subset \mathcal{F}$, $t \ge 0$, on the probability space $(\Omega, \mathcal{F}, \mathbf{P})$, with $\mathbf{E}|X(t)| < \infty$, $t \ge 0$.
$X$ is called a martingale (resp. sub- or supermartingale) if $\mathbf{E}(X(t) \mid \mathcal{F}_s) = X(s)$ a.s. (resp. $\mathbf{E}(X(t) \mid \mathcal{F}_s) \ge X(s)$ a.s. or $\mathbf{E}(X(t) \mid \mathcal{F}_s) \le X(s)$ a.s.) for all $s, t \ge 0$ with $t \ge s$. For a martingale it follows that $\mathbf{E}\, X(t) = \mathbf{E}\, X(s) = \mathrm{const}$ for all $s, t$.
Examples
Very often martingales are constructed on the basis of a stochastic process $Y = \{Y(t), t \ge 0\}$ as follows: $X(t) = g(Y(t)) - \mathbf{E}\, g(Y(t))$ for some measurable function $g : \mathbb{R} \to \mathbb{R}$, or by $X(t) = \frac{e^{iuY(t)}}{\varphi_{Y(t)}(u)}$ for any fixed $u \in \mathbb{R}$.
1. Poisson process
Let $Y = \{Y(t), t \ge 0\}$ be the homogeneous Poisson process with intensity $\lambda > 0$. Then $\mathbf{E}\, Y(t) = \operatorname{Var} Y(t) = \lambda t$, since $Y(t) \sim \mathrm{Pois}(\lambda t)$, $t \ge 0$.
a) $X(t) = Y(t) - \lambda t$, $t \ge 0$, is a martingale w.r.t. the natural filtration $\{\mathcal{F}_s, s \ge 0\}$:
\begin{align*}
\mathbf{E}(X(t) \mid \mathcal{F}_s) &\stackrel{s \le t}{=} \mathbf{E}\left( Y(t) - \lambda t - (Y(s) - \lambda s) + Y(s) - \lambda s \mid \mathcal{F}_s \right) = Y(s) - \lambda s + \mathbf{E}\left( Y(t) - Y(s) - \lambda(t-s) \mid \mathcal{F}_s \right) \\
&\stackrel{\text{ind. incr.}}{=} Y(s) - \lambda s + \mathbf{E}(Y(t) - Y(s)) - \lambda(t-s) = Y(s) - \lambda s + \underbrace{\mathbf{E}\, Y(t-s)}_{= \lambda(t-s)} - \lambda(t-s) = Y(s) - \lambda s \stackrel{\text{a.s.}}{=} X(s).
\end{align*}
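Both the centering and the martingale structure can be probed by simulation: $\mathbf{E}\, X(t) = 0$ and, by the orthogonality of martingale increments, $\mathbf{E}(X(s)X(t)) = \mathbf{E}\, X(s)^2 = \lambda s$ for $s \le t$. A hedged Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, s, t, n = 2.0, 1.0, 2.0, 200_000

ys = rng.poisson(lam * s, size=n)          # Y(s) ~ Pois(lam*s)
inc = rng.poisson(lam * (t - s), size=n)   # Y(t) - Y(s), an independent increment
yt = ys + inc

xs, xt = ys - lam * s, yt - lam * t        # compensated (martingale) values
mean_xt = np.mean(xt)                      # should be close to 0
cross = np.mean(xs * xt)                   # should be close to E X(s)^2 = lam*s
```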
b) $\tilde X(t) = X^2(t) - \lambda t$, $t \ge 0$, with $X$ from a), is a martingale w.r.t. $\{\mathcal{F}_s, s \ge 0\}$:
\begin{align*}
\mathbf{E}(\tilde X(t) \mid \mathcal{F}_s) &= \mathbf{E}\left( X^2(t) - \lambda t \mid \mathcal{F}_s \right) = \mathbf{E}\left( (X(t) - X(s) + X(s))^2 - \lambda t \mid \mathcal{F}_s \right) \\
&= \mathbf{E}\left( (X(t) - X(s))^2 + 2(X(t) - X(s))X(s) + X^2(s) - \lambda s - \lambda(t-s) \mid \mathcal{F}_s \right) \\
&= X^2(s) - \lambda s + \underbrace{\mathbf{E}(X(t) - X(s))^2}_{= \operatorname{Var}(Y(t) - Y(s)) = \lambda(t-s)} + 2X(s)\underbrace{\mathbf{E}(X(t) - X(s))}_{= 0} - \lambda(t-s) \stackrel{\text{a.s.}}{=} \tilde X(s), \quad s \le t.
\end{align*}
2. Compound Poisson process
$Y(t) = \sum_{i=1}^{N(t)} U_i$, $t \ge 0$, where $N$ is a homogeneous Poisson process with intensity $\lambda > 0$ and the $U_i$ are independent identically distributed random variables with $\mathbf{E}|U_i| < \infty$, independent of $N$. Let $X(t) = Y(t) - \mathbf{E}\, Y(t) = Y(t) - \lambda t\, \mathbf{E}\, U_1$, $t \ge 0$.
Exercise 5.2.1
Show that $X = \{X(t), t \ge 0\}$ is a martingale w.r.t. its natural filtration.
3. Wiener process
Let $W = \{W(t), t \ge 0\}$ be a Wiener process and $\{\mathcal{F}_s, s \ge 0\}$ its natural filtration.
a) $Y = \{Y(t), t \ge 0\}$ with $Y(t) = W^2(t) - \mathbf{E}\, W^2(t) = W^2(t) - t$, $t \ge 0$, is a martingale w.r.t. $\{\mathcal{F}_s, s \ge 0\}$. Indeed, it holds that
\begin{align*}
\mathbf{E}(Y(t) \mid \mathcal{F}_s) &= \mathbf{E}\left( (W(t) - W(s) + W(s))^2 - s - (t-s) \mid \mathcal{F}_s \right) \\
&= \mathbf{E}\left( (W(t) - W(s))^2 + 2W(s)(W(t) - W(s)) + W^2(s) \mid \mathcal{F}_s \right) - s - (t-s) \\
&= (t-s) + 2W(s)\,\mathbf{E}(W(t) - W(s)) + W^2(s) - s - (t-s) = W^2(s) - s \stackrel{\text{a.s.}}{=} Y(s), \quad s \le t.
\end{align*}
b) $Y(t) = e^{uW(t) - \frac{u^2 t}{2}}$, $t \ge 0$, for a fixed $u \in \mathbb{R}$:
\[ \mathbf{E}|Y(t)| = e^{-\frac{u^2 t}{2}}\, \mathbf{E}\, e^{uW(t)} = e^{-\frac{u^2 t}{2}}\, e^{\frac{u^2 t}{2}} = 1 < \infty. \]
We show that $Y = \{Y(t), t \ge 0\}$ is a martingale w.r.t. $\{\mathcal{F}_s, s \ge 0\}$:
\begin{align*}
\mathbf{E}(Y(t) \mid \mathcal{F}_s) &= \mathbf{E}\left( e^{u(W(t) - W(s)) + uW(s) - \frac{u^2 s}{2} - \frac{u^2(t-s)}{2}} \mid \mathcal{F}_s \right) = \underbrace{e^{uW(s) - \frac{u^2 s}{2}}}_{Y(s)}\, e^{-\frac{u^2(t-s)}{2}}\, \underbrace{\mathbf{E}\left( e^{u(W(t) - W(s))} \mid \mathcal{F}_s \right)}_{= \mathbf{E}\, e^{uW(t-s)} = e^{\frac{u^2(t-s)}{2}}} \\
&= Y(s)\, e^{-\frac{u^2(t-s)}{2}}\, e^{\frac{u^2(t-s)}{2}} = Y(s) \quad \text{a.s.}, \quad s \le t.
\end{align*}
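Since $\mathbf{E}\, Y(t) = \mathbf{E}\, Y(0) = 1$ for this exponential martingale, a quick Monte Carlo check of $\mathbf{E}\, e^{uW(t) - u^2t/2} = 1$ (using $W(t) = \sqrt{t}\, Z$, $Z \sim N(0,1)$) is:

```python
import numpy as np

rng = np.random.default_rng(4)
u, n = 0.5, 400_000

means = {}
for t in (1.0, 4.0):
    w_t = np.sqrt(t) * rng.standard_normal(n)          # W(t) ~ N(0, t)
    means[t] = np.mean(np.exp(u * w_t - u**2 * t / 2)) # estimate of E Y(t), should be 1
```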
4. Regular martingaleLet X be a random variable (on Ω,F ,P) with ESX S @ª. Let Fs, s C 0 be a filtrationon Ω,F ,P.Construct Y t EX S Ft, t C 0. Y Y t, t C 0 is a martingale.Indeed, ESY tS ESEX S FtS B EESX S S Ft ESX S @ª, t C 0.
5 Martingales 71
EY t S Fs EEX S Ft S Fs a.s. EX S Fs Y s, s B t since Fs b Ft.If Y nS n 1, ...,N, where N > N, is a martingale, then it is regular since Y n
EY N S Fn for all n 1, ...,N , i.e. X Y N.However, the latter is not possible for processes of the form Y nS n > N orY tS t C 0.
5. Lévy processesLet X Xt, t C 0 be a Lévy process with Lévy exponent η and natural filtrationFs, s C 0.a) If ESX1S @ ª, define Y t Xt tEX1´¹¹¹¹¹¹¹¹¹¹¸¹¹¹¹¹¹¹¹¹¹¶
EXt, t C 0. As in the previous cases it can
be shown that Y Y t, t C 0 is martingale w.r.t. the filtration Fs, s C 0.(Compare Example 1 and 2)
b) In the general case one can use the combination from example 3b – normalize eiuXtby the characteristic function of Xt, i.e. let Y t eiuXt
ϕuXt
eiuXtetηu eiuXttηu,
t C 0, u > R.To show: Y Y t, t C 0 is a complex-valued martingale.ESY tS SetηuS @ª, since η R C. EY t 1, t C 0. Furthermore, it holds
EY t S Fs EeiuXtXstsηueiuXssηu S Fs eiuXssηuetsηuEeiuXtXs Y setsηuetsηu a.s. Y s
6. Monotone Submartingales/SupermartingalesEvery integrable stochastich process X Xt, t C 0, which is adapted w.r.t. toa filtration Fs, s C 0 and has a.s. monotone nondecreasing (resp. non-increasing)trajectories, is a sub- (resp. a super-)martingale.In fact, it holds e.g. Xt a.s.C Xs, t C s EXt S Fs a.s.C EXs S Fs a.s. Xs. Inparticular, every subordinator is a submartingale.
Lemma 5.2.1
Let $X = \{X(t), t \ge 0\}$ be a stochastic process which is adapted w.r.t. a filtration $\{\mathcal{F}_t, t \ge 0\}$, and let $f : \mathbb{R} \to \mathbb{R}$ be convex such that $\mathbf{E}|f(X(t))| < \infty$, $t \ge 0$. Then $Y = \{f(X(t)), t \ge 0\}$ is a submartingale if
a) $X$ is a martingale, or
b) $X$ is a submartingale and $f$ is monotone non-decreasing.
Proof Use Jensen's inequality for conditional expectations:
\[ \mathbf{E}\left( f(X(t)) \mid \mathcal{F}_s \right) \stackrel{\text{a.s.}}{\ge} f\left( \mathbf{E}(X(t) \mid \mathcal{F}_s) \right) \stackrel{\text{a.s.}}{\ge} f(X(s)), \quad s \le t, \]
where the last inequality is an equality in case a) ($\mathbf{E}(X(t) \mid \mathcal{F}_s) = X(s)$) and follows in case b) from $\mathbf{E}(X(t) \mid \mathcal{F}_s) \ge X(s)$ and the monotonicity of $f$. $\Box$
5.3 Uniform Integrability
It is known that, in general, $X_n \xrightarrow{\text{a.s.}} X$ as $n \to \infty$ does not imply $X_n \xrightarrow{L^1} X$. Here $X, X_1, X_2, \ldots$ are random variables defined on the probability space $(\Omega, \mathcal{F}, \mathbf{P})$. When does "$X_n \xrightarrow{\text{a.s.}} X$" imply "$X_n \xrightarrow{L^1} X$"? The answer is provided by the notion of uniform integrability of $\{X_n, n \in \mathbb{N}\}$.
Definition 5.3.1
The sequence $\{X_n, n \in \mathbb{N}\}$ of random variables is called uniformly integrable if $\mathbf{E}|X_n| < \infty$, $n \in \mathbb{N}$, and
\[ \sup_{n\in\mathbb{N}} \mathbf{E}\left( |X_n|\,\mathbf{1}(|X_n| > \varepsilon) \right) \xrightarrow[\varepsilon\to\infty]{} 0. \]
Lemma 5.3.1
The sequence $\{X_n, n \in \mathbb{N}\}$ of random variables is uniformly integrable if and only if
1. $\sup_{n\in\mathbb{N}} \mathbf{E}|X_n| < \infty$ (uniform boundedness), and
2. for every $\varepsilon > 0$ there is a $\delta > 0$ such that $\mathbf{E}(|X_n|\,\mathbf{1}_A) < \varepsilon$ for all $n \in \mathbb{N}$ and all $A \in \mathcal{F}$ with $\mathbf{P}(A) < \delta$.
Proof It has to be shown that
\[ \sup_{n\in\mathbb{N}} \mathbf{E}\left( |X_n|\,\mathbf{1}(|X_n| > x) \right) \xrightarrow[x\to\infty]{} 0 \iff 1.\ \text{and}\ 2. \]
"$\Leftarrow$": Set $A_n = \{|X_n| > x\}$ for all $n \in \mathbb{N}$ and $x > 0$. It holds that $\mathbf{P}(A_n) \le \frac{1}{x}\,\mathbf{E}|X_n|$ by virtue of Markov's inequality, and consequently $\sup_n \mathbf{P}(A_n) \le \frac{1}{x}\sup_n \mathbf{E}|X_n| \le \frac{c}{x} \xrightarrow[x\to\infty]{} 0$. Given $\varepsilon > 0$, choose $\delta > 0$ as in 2.; then there exists $N > 0$ such that for all $x > N$, $\sup_n \mathbf{P}(A_n) < \delta$, and hence $\sup_n \mathbf{E}(|X_n|\,\mathbf{1}_{A_n}) \le \varepsilon$. Since $\varepsilon > 0$ can be chosen arbitrarily small, $\sup_n \mathbf{E}(|X_n|\,\mathbf{1}(|X_n| > x)) \xrightarrow[x\to\infty]{} 0$.
"$\Rightarrow$":
1. For $x$ large enough,
\[ \sup_n \mathbf{E}|X_n| = \sup_n \left( \mathbf{E}\left( |X_n|\,\mathbf{1}(|X_n| > x) \right) + \mathbf{E}\left( |X_n|\,\mathbf{1}(|X_n| \le x) \right) \right) \le \sup_n \Big( \mathbf{E}\left( |X_n|\,\mathbf{1}(|X_n| > x) \right) + x\,\underbrace{\mathbf{P}(|X_n| \le x)}_{\le 1} \Big) \le \varepsilon + x < \infty. \]
2. For all $\varepsilon > 0$ there exists $x > 0$ such that $\mathbf{E}(|X_n|\,\mathbf{1}(|X_n| > x)) < \frac{\varepsilon}{2}$ for all $n$, because of uniform integrability. Choose $\delta > 0$ such that $x\delta < \frac{\varepsilon}{2}$. Then for all $A \in \mathcal{F}$ with $\mathbf{P}(A) < \delta$ it holds that
\[ \mathbf{E}(|X_n|\,\mathbf{1}_A) = \mathbf{E}\Big( \underbrace{|X_n|\,\mathbf{1}(|X_n| \le x)}_{\le x}\,\mathbf{1}_A \Big) + \mathbf{E}\Big( |X_n|\,\mathbf{1}(|X_n| > x)\,\underbrace{\mathbf{1}_A}_{\le 1} \Big) \le \underbrace{x\,\mathbf{P}(A)}_{< \varepsilon/2} + \underbrace{\mathbf{E}\left( |X_n|\,\mathbf{1}(|X_n| > x) \right)}_{< \varepsilon/2} < \varepsilon. \quad \Box \]
Lemma 5.3.2
Let $(X_n)_{n\in\mathbb{N}}$ be a sequence of random variables with $\mathbf{E}|X_n| < \infty$, $n \in \mathbb{N}$, and $X_n \xrightarrow{\mathbf{P}} X$ as $n \to \infty$. Then $X_n \xrightarrow{L^1} X$ if and only if $(X_n)_{n\in\mathbb{N}}$ is uniformly integrable. In particular, if $(X_n)_{n\in\mathbb{N}}$ is uniformly integrable, then $X_n \xrightarrow{\mathbf{P}} X$ implies $\mathbf{E}\, X_n \xrightarrow[n\to\infty]{} \mathbf{E}\, X$.
Proof 1) Let $(X_n)_{n\in\mathbb{N}}$ be uniformly integrable. It has to be shown that $\mathbf{E}|X_n - X| \xrightarrow[n\to\infty]{} 0$. Since $X_n \xrightarrow{\mathbf{P}} X$, one obtains that $(X_n)_{n\in\mathbb{N}}$ has a subsequence $(X_{n_k})_{k\in\mathbb{N}}$ converging almost surely to $X$. Consequently, Fatou's lemma yields
\[ \mathbf{E}|X| \le \liminf_{k\to\infty} \mathbf{E}|X_{n_k}| \le \sup_{n\in\mathbb{N}} \mathbf{E}|X_n|, \]
and therefore $\mathbf{E}|X| < \infty$ by Lemma 5.3.1, 1). Moreover, one infers from $\lim_{n\to\infty} \mathbf{P}(|X_n - X| > \varepsilon) = 0$ and Lemma 5.3.1, 2) that
\[ \lim_{n\to\infty} \mathbf{E}\left( |X_n - X|\,\mathbf{1}(|X_n - X| > \varepsilon) \right) = 0 \]
for all $\varepsilon > 0$. (It is an easy exercise to verify that the uniform integrability of $(X_n)_{n\in\mathbb{N}}$ implies the uniform integrability of $(X_n - X)_{n\in\mathbb{N}}$.) Conclusively, it follows that
\[ \limsup_{n\to\infty} \mathbf{E}|X_n - X| = \limsup_{n\to\infty} \left( \mathbf{E}\left( |X_n - X|\,\mathbf{1}(|X_n - X| > \varepsilon) \right) + \mathbf{E}\left( |X_n - X|\,\mathbf{1}(|X_n - X| \le \varepsilon) \right) \right) \le \varepsilon \]
for all $\varepsilon > 0$, i.e. $X_n \xrightarrow{L^1} X$.
2) Now let $\mathbf{E}|X_n - X| \xrightarrow[n\to\infty]{} 0$. Properties 1 and 2 of Lemma 5.3.1 have to be shown.
1. $\sup_n \mathbf{E}|X_n| \le \sup_n \mathbf{E}|X_n - X| + \mathbf{E}|X| < \infty$, since $X_n \xrightarrow{L^1} X$.
2. For all $A \in \mathcal{F}$ with $\mathbf{P}(A) \le \delta$:
\[ \mathbf{E}(|X_n|\,\mathbf{1}_A) \le \mathbf{E}\Big( |X_n - X|\,\underbrace{\mathbf{1}_A}_{\le 1} \Big) + \mathbf{E}(|X|\,\mathbf{1}_A) \le \underbrace{\mathbf{E}|X_n - X|}_{< \varepsilon/2 \text{ for } n > N} + \underbrace{\mathbf{E}(|X|\,\mathbf{1}_A)}_{< \varepsilon/2} \le \varepsilon \]
with an appropriate choice of $\delta$, since $\mathbf{E}|X| < \infty$ and since for all $\varepsilon > 0$ there exists $N$ such that for all $n > N$, $\mathbf{E}|X_n - X| < \frac{\varepsilon}{2}$ (the finitely many indices $n \le N$ are handled by choosing $\delta$ small enough for each of the integrable $X_n$). $\Box$
5.4 Stopped Martingales
Notation: $x^+ = \max\{x, 0\}$, $x \in \mathbb{R}$.
Theorem 5.4.1 (Doob's inequality)
Let $X = \{X(t), t \ge 0\}$ be a càdlàg process, adapted w.r.t. the filtration $\{\mathcal{F}_t, t \ge 0\}$. Let $X$ be a submartingale. Then for arbitrary $t > 0$ and arbitrary $x > 0$ it holds that
\[ \mathbf{P}\left( \sup_{0 \le s \le t} X(s) > x \right) \le \frac{\mathbf{E}\, X^+(t)}{x}. \]
Proof W.l.o.g. assume $X(t) \ge 0$, $t \ge 0$, a.s. (otherwise pass to $X^+$, which is again a submartingale by Lemma 5.2.1); then it has to be shown that $\mathbf{P}(\sup_{0\le s\le t} X(s) > x) \le \mathbf{E}\, X(t)/x$ for all $t \ge 0$, $x > 0$.
Let $0 \le t_1 < t_2 < \ldots < t_n \le t$ be arbitrary times and set $A = \{\max_{t_1,\ldots,t_n} X(t_i) > x\}$. Then $A = \bigcup_{k=1}^n A_k$ with
\[ A_1 = \{X(t_1) > x\}, \qquad A_k = \{X(t_1) \le x, \ldots, X(t_{k-1}) \le x,\ X(t_k) > x\}, \quad k = 2, \ldots, n, \]
so that $A_i \cap A_j = \emptyset$, $i \ne j$. It has to be shown that $\mathbf{P}(A) \le \mathbf{E}\, X(t_n)/x$:
\[ \mathbf{E}\, X(t_n) \ge \mathbf{E}\left( X(t_n)\,\mathbf{1}_A \right) = \sum_{k=1}^n \mathbf{E}\left( X(t_n)\,\mathbf{1}_{A_k} \right) \ge x \sum_{k=1}^n \mathbf{P}(A_k) = x\,\mathbf{P}(A), \]
since $X$ is a submartingale and $A_k \in \mathcal{F}_{t_k}$, so that $\mathbf{E}(X(t_n)\,\mathbf{1}_{A_k}) \ge \mathbf{E}(X(t_k)\,\mathbf{1}_{A_k}) \ge \mathbf{E}(x\,\mathbf{1}_{A_k}) = x\,\mathbf{P}(A_k)$, $k = 1, \ldots, n$, $t_n \ge t_k$.
For a finite subset $B \subset [0, t]$ with $0, t \in B$ it is proven similarly that $\mathbf{P}(\max_{s\in B} X(s) > x) \le \mathbf{E}\, X(t)/x$. Since $\mathbb{Q}$ is dense in $\mathbb{R}$, write $([0,t] \cap \mathbb{Q}) \cup \{t\} = \bigcup_{k=1}^\infty B_k$ with finite sets $B_k \subset ([0,t] \cap \mathbb{Q}) \cup \{t\}$, $B_k \subset B_n$, $k < n$. By the continuity from below of the probability measure it holds that
\[ \lim_{n\to\infty} \mathbf{P}\left( \max_{s\in B_n} X(s) > x \right) = \mathbf{P}\left( \bigcup_n \left\{ \max_{s\in B_n} X(s) > x \right\} \right) = \mathbf{P}\left( \sup_{s \in \cup_n B_n} X(s) > x \right) \le \frac{\mathbf{E}\, X(t)}{x}. \]
By the right-continuity of the paths of $X$ it holds that $\mathbf{P}(\sup_{0\le s\le t} X(s) > x) \le \mathbf{E}\, X(t)/x$. $\Box$
Corollary 5.4.1
Consider the Wiener process with negative drift: $Y(t) = W(t) - \mu t$, $\mu > 0$, $t \ge 0$, where $W = \{W(t), t \ge 0\}$ is a Wiener process. By Example 3b of Section 5.2, $X(t) = \exp\left\{ uY(t) + t\mu u - \frac{u^2 t}{2} \right\}$, $t \ge 0$, is a martingale w.r.t. the natural filtration of $W$. For $u = 2\mu$ it holds that
\[ X(t) = \exp\{2\mu Y(t)\}, \quad t \ge 0, \]
and Doob's inequality yields
\[ \mathbf{P}\left( \sup_{0\le s\le t} Y(s) > x \right) = \mathbf{P}\left( \sup_{0\le s\le t} e^{2\mu Y(s)} > e^{2\mu x} \right) \le \frac{\mathbf{E}\, e^{2\mu Y(t)}}{e^{2\mu x}} = e^{-2\mu x}, \quad x > 0, \]
since $\mathbf{E}\, e^{2\mu Y(t)} = \mathbf{E}\, X(t) = \mathbf{E}\, X(0) = 1$. Consequently,
\[ \mathbf{P}\left( \sup_{s\ge 0} Y(s) > x \right) = \lim_{t\to\infty} \mathbf{P}\left( \sup_{0\le s\le t} Y(s) > x \right) \le e^{-2\mu x}. \]
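This exponential ruin bound can be checked against a discretized simulation of $Y(t) = W(t) - \mu t$; the discrete-time maximum can only undershoot the continuous supremum, so the empirical probability should stay below $e^{-2\mu x}$. A hedged sketch (grid and parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
mu, x = 1.0, 1.0
n_paths, dt, n_steps = 20_000, 0.01, 2_000     # time horizon T = n_steps*dt = 20

level = np.zeros(n_paths)                       # current value of Y on each path
running_max = np.zeros(n_paths)
for _ in range(n_steps):
    level += np.sqrt(dt) * rng.standard_normal(n_paths) - mu * dt
    np.maximum(running_max, level, out=running_max)

p_hat = np.mean(running_max > x)                # estimate of P(sup Y > x)
bound = np.exp(-2 * mu * x)                     # the bound e^{-2 mu x}
```

For Brownian motion with drift the bound is in fact attained with equality in continuous time, so $p_{\text{hat}}$ should sit only slightly below it (the gap comes from discretization and the finite horizon).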
Theorem 5.4.2
Let $X = \{X(t), t \ge 0\}$ be a martingale w.r.t. the filtration $\{\mathcal{F}_t, t \ge 0\}$ with càdlàg paths. If $\tau : \Omega \to [0, \infty)$ is a finite stopping time w.r.t. the filtration $\{\mathcal{F}_t, t \ge 0\}$, then the stochastic process $\{X(\tau \wedge t), t \ge 0\}$, called the stopped martingale, is again a martingale. Here $a \wedge b = \min\{a, b\}$.
Lemma 5.4.1
Let $X = \{X(t), t \ge 0\}$ be a martingale with càdlàg trajectories w.r.t. the filtration $\{\mathcal{F}_t, t \ge 0\}$. Let $\tau$ be a finite stopping time and let $(\tau_n)_{n\in\mathbb{N}}$ be the sequence of discrete stopping times from Theorem 5.1.2, for which $\tau_n \downarrow \tau$, $n \to \infty$, holds. Then $\{X(\tau_n \wedge t)\}_{n\in\mathbb{N}}$ is uniformly integrable for every $t \ge 0$.
Proof Recall that
\[ \tau_n = \begin{cases} 0, & \text{if } \tau = 0, \\ \frac{k+1}{2^n}, & \text{if } \frac{k}{2^n} < \tau \le \frac{k+1}{2^n} \text{ for some } k \in \mathbb{N}_0. \end{cases} \]
1. It is to be shown that $\mathbf{E}|X(\tau_n \wedge t)| < \infty$ for all $n$:
\[ \mathbf{E}|X(\tau_n \wedge t)| \le \sum_{k:\, k/2^n \le t} \mathbf{E}\left| X\left( \tfrac{k}{2^n} \right) \right| + \mathbf{E}|X(t)| < \infty, \]
since $X$ is a martingale and therefore integrable, and the sum is finite.
2. It is to be shown that $\sup_n \mathbf{E}\left( |X(\tau_n \wedge t)|\,\mathbf{1}_{A_n} \right) \xrightarrow[x\to\infty]{} 0$, where $A_n = \{|X(\tau_n \wedge t)| > x\}$. Since $|X|$ is a submartingale (Lemma 5.2.1) and $\{\tau_n = \tfrac{k}{2^n}\} \cap A_n \in \mathcal{F}_{k/2^n}$,
\begin{align*}
\sup_n \mathbf{E}\left( |X(\tau_n \wedge t)|\,\mathbf{1}_{A_n} \right) &= \sup_n \Big( \sum_{k:\, k/2^n < t} \mathbf{E}\left( \left| X\left( \tfrac{k}{2^n} \right) \right|\,\mathbf{1}\left( \left\{ \tau_n = \tfrac{k}{2^n} \right\} \cap A_n \right) \right) + \mathbf{E}\left( |X(t)|\,\mathbf{1}\left( \{\tau_n > t\} \cap A_n \right) \right) \Big) \\
&\stackrel{\text{subm.}}{\le} \sup_n \Big( \sum_{k:\, k/2^n < t} \mathbf{E}\left( |X(t)|\,\mathbf{1}\left( \left\{ \tau_n = \tfrac{k}{2^n} \right\} \cap A_n \right) \right) + \mathbf{E}\left( |X(t)|\,\mathbf{1}\left( \{\tau_n > t\} \cap A_n \right) \right) \Big) \\
&= \sup_n \mathbf{E}\left( |X(t)|\,\mathbf{1}_{A_n} \right) \le \mathbf{E}\left( |X(t)|\,\mathbf{1}(Y > x) \right),
\end{align*}
where $Y = \sup_n |X(\tau_n \wedge t)|$ and $\mathbf{1}_{A_n} \le \mathbf{1}(Y > x)$. It remains to prove that $\mathbf{P}(Y > x) \xrightarrow[x\to\infty]{} 0$. (The latter obviously implies $\lim_{x\to\infty} \mathbf{E}(|X(t)|\,\mathbf{1}(Y > x)) = 0$, since $\mathbf{E}|X(t)| < \infty$ for all $t \ge 0$.) Doob's inequality yields
\[ \mathbf{P}(Y > x) \le \mathbf{P}\left( \sup_{0\le s\le t} |X(s)| > x \right) \le \frac{\mathbf{E}|X(t)|}{x} \xrightarrow[x\to\infty]{} 0. \quad \Box \]
Proof of Theorem 5.4.2:
It is to be shown that (X(τ ∧ t), t ≥ 0) is a martingale.

1. E|X(τ ∧ t)| < ∞ for all t ≥ 0. As in Corollary 5.1.1, τ_n ↓ τ, n → ∞, so X(τ ∧ t) is approximated: X(τ_n ∧ t) → X(τ ∧ t) a.s., n → ∞. Since E|X(τ_n ∧ t)| < ∞ for all n, it follows that E|X(τ ∧ t)| < ∞ by Lemma 5.4.1, since uniform integrability gives L¹-convergence.

2. Martingale property. It is to be shown that

E(X(τ ∧ t) | F_s) = X(τ ∧ s) a.s., s ≤ t, i.e. E[X(τ ∧ t) 1_A] = E[X(τ ∧ s) 1_A], A ∈ F_s.

First of all, we show that E[X(τ_n ∧ t) 1_A] = E[X(τ_n ∧ s) 1_A], A ∈ F_s, n ∈ ℕ. Let t_1, ..., t_k ∈ (s, t] be the discrete values which τ_n takes with positive probability in (s, t]. Then

E(X(τ_n ∧ t) | F_s) = E(E(X(τ_n ∧ t) | F_{t_k}) | F_s)
= E(E(X(τ_n ∧ t) 1(τ_n ≤ t_k) | F_{t_k}) | F_s) + E(E(X(τ_n ∧ t) 1(τ_n > t_k) | F_{t_k}) | F_s)
= E(X(τ_n ∧ t_k) 1(τ_n ≤ t_k) | F_s) + E(1(τ_n > t_k) E(X(t) | F_{t_k}) | F_s)
= E(X(τ_n ∧ t_k) 1(τ_n ≤ t_k) | F_s) + E(X(t_k) 1(τ_n > t_k) | F_s)
= E(X(τ_n ∧ t_k) | F_s) = ... = E(X(τ_n ∧ t_1) | F_s) = ... = E(X(τ_n ∧ s) | F_s) = X(τ_n ∧ s) a.s.

Here we used that on {τ_n ≤ t_k} it holds X(τ_n ∧ t) = X(τ_n ∧ t_k), which is F_{t_k}-measurable, that on {τ_n > t_k} it holds X(τ_n ∧ t) = X(t), and that E(X(t) | F_{t_k}) = X(t_k); the dots stand for the repetition of this argument for t_{k−1}, ..., t_1.

Since X is càdlàg and τ_n ↓ τ, n → ∞, it holds X(τ_n ∧ t) → X(τ ∧ t) a.s. Furthermore, (X(τ_n ∧ t))_{n∈ℕ} is uniformly integrable by Lemma 5.4.1, which together with the a.s. convergence yields L¹-convergence. Therefore, letting n → ∞ in

E[X(τ_n ∧ t) 1_A] = E[X(τ_n ∧ s) 1_A] for all A ∈ F_s

gives E[X(τ ∧ t) 1_A] = E[X(τ ∧ s) 1_A], so (X(τ ∧ t), t ≥ 0) is a martingale.  □
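Theorem 5.4.2 can be illustrated in the discrete setting: for a simple symmetric random walk (a martingale) stopped at the first exit from (−2, 2), every mean E[S(τ ∧ n)] stays equal to E[S(0)] = 0. A sketch by exact enumeration of all ±1 paths (illustrative only; the walk, horizon and barrier are made-up choices, not from the notes):

```python
from itertools import product
from fractions import Fraction

N = 8  # horizon; enumerate all 2^N equally likely +-1 paths exactly

def stopped_value(steps, n):
    """Value S(tau ^ n) of the walk stopped at tau = inf{k: |S_k| >= 2}."""
    s = 0
    for k, x in enumerate(steps, start=1):
        if k > n:
            break
        s += x
        if abs(s) >= 2:  # the stopping time tau has been reached
            break
    return s

means = []
for n in range(N + 1):
    total = sum(stopped_value(steps, n) for steps in product((-1, 1), repeat=N))
    means.append(Fraction(total, 2 ** N))
print(means)  # E S(tau ^ n) = 0 for every n, as the theorem predicts
```

The enumeration is exact (no simulation error): by symmetry, flipping all signs of a path negates its stopped value, so each mean vanishes identically.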
Definition 5.4.1:
Let τ: Ω → ℝ be a stopping time w.r.t. the filtration (F_t, t ≥ 0), F_t ⊆ F, t ≥ 0. The "stopped" σ-algebra F_τ is defined by A ∈ F_τ ⟺ A ∩ {τ ≤ t} ∈ F_t for all t ≥ 0.
Lemma 5.4.2:
1. Let η, τ be stopping times w.r.t. the filtration (F_t, t ≥ 0) with η ≤ τ a.s. Then it holds F_η ⊆ F_τ.
2. Let X = (X(t), t ≥ 0) be a martingale with càdlàg trajectories w.r.t. the filtration (F_t, t ≥ 0) and let τ be a stopping time w.r.t. (F_t, t ≥ 0). Then X(τ) is F_τ-measurable.
Proof:
1. A ∈ F_η ⟺ A ∩ {η ≤ t} ∈ F_t for all t ≥ 0. Since η ≤ τ a.s., {τ ≤ t} ⊆ {η ≤ t}, and hence

A ∩ {τ ≤ t} = (A ∩ {η ≤ t}) ∩ {τ ≤ t} ∈ F_t for all t ≥ 0,

because both A ∩ {η ≤ t} ∈ F_t and {τ ≤ t} ∈ F_t. Thus A ∈ F_τ.

2. Write X(τ) = g ∘ f with f: Ω → Ω × [0, ∞), f(ω) = (ω, τ(ω)), and g: Ω × [0, ∞) → ℝ, g(ω, s) = X(s, ω). That f is measurable is obvious, since τ is a random variable. Consider the restriction of X = (X(s), s ≥ 0) to s ∈ [0, t], t ≥ 0. It has to be shown: {X(τ) ∈ B} ∩ {τ ≤ t} ∈ F_t for all t ≥ 0, B ∈ B(ℝ). Since X is càdlàg,

X(s, ω) = X(0, ω) 1(s = 0) + lim_{n→∞} Σ_{k=1}^{2ⁿ} X(kt/2ⁿ, ω) 1((k−1)t/2ⁿ < s ≤ kt/2ⁿ), s ∈ [0, t],

so (s, ω) ↦ X(s, ω) is B([0, t]) ⊗ F_t-measurable, and hence X(τ) is F_τ-measurable.  □
Theorem 5.4.3 (Optional sampling theorem):
Let X = (X(t), t ≥ 0) be a martingale with càdlàg trajectories w.r.t. a filtration (F_t, t ≥ 0), and let τ be a finite stopping time w.r.t. (F_t, t ≥ 0). Then E(X(t) | F_τ) = X(τ ∧ t) a.s., t ≥ 0.
Proof:
First of all we show that E(X(t) | F_{τ_n}) = X(τ_n ∧ t) a.s., t ≥ 0, n ∈ ℕ, where τ_n ↓ τ, n → ∞, is the discrete approximation of τ, cf. Theorem 5.1.2. Let t_1 ≤ t_2 ≤ ... ≤ t_k = t be the values attained by τ_n ∧ t with positive probability. It is to be shown that for all A ∈ F_{τ_n} it holds E[X(t) 1_A] = E[X(τ_n ∧ t) 1_A]. Since t_k = t,

(X(t) − X(τ_n ∧ t)) 1_A = Σ_{i=1}^{k−1} (X(t_k) − X(t_i)) 1({τ_n ∧ t = t_i} ∩ A)
= Σ_{i=2}^{k} (X(t_k) − X(t_{i−1})) 1({τ_n ∧ t = t_{i−1}} ∩ A)
= Σ_{i=2}^{k} Σ_{j=i}^{k} (X(t_j) − X(t_{j−1})) 1({τ_n ∧ t = t_{i−1}} ∩ A)
= Σ_{j=2}^{k} Σ_{i=2}^{j} (X(t_j) − X(t_{j−1})) 1({τ_n ∧ t = t_{i−1}} ∩ A)
= Σ_{j=2}^{k} (X(t_j) − X(t_{j−1})) 1({τ_n ∧ t ≤ t_{j−1}} ∩ A).

Hence

E[(X(t) − X(τ_n ∧ t)) 1_A] = Σ_{j=2}^{k} E[(X(t_j) − X(t_{j−1})) 1({τ_n ∧ t ≤ t_{j−1}} ∩ A)]
= Σ_{j=2}^{k} E[ E( (X(t_j) − X(t_{j−1})) 1({τ_n ∧ t ≤ t_{j−1}} ∩ A) | F_{t_{j−1}} ) ]
= Σ_{j=2}^{k} E[ 1({τ_n ∧ t ≤ t_{j−1}} ∩ A) ( E(X(t_j) | F_{t_{j−1}}) − X(t_{j−1}) ) ] = 0,

since {τ_n ∧ t ≤ t_{j−1}} ∩ A ∈ F_{t_{j−1}} by Definition 5.4.1 and E(X(t_j) | F_{t_{j−1}}) = X(t_{j−1}) by the martingale property. Hence E(X(t) | F_{τ_n}) = X(τ_n ∧ t) a.s., since X(τ_n ∧ t) is F_{τ_n}-measurable. Since τ ≤ τ_n for all n, Lemma 5.4.2 gives F_τ ⊆ F_{τ_n}. Then

E(X(t) | F_τ) = E(E(X(t) | F_{τ_n}) | F_τ) = E(X(τ_n ∧ t) | F_τ) → X(τ ∧ t) a.s., n → ∞,

since X is càdlàg.  □
Corollary 5.4.2:
Let X = (X(t), t ≥ 0) be a càdlàg martingale and let η, τ be finite stopping times such that P(η ≤ τ) = 1. Then it holds E(X(τ ∧ t) | F_η) = X(η ∧ t) a.s., t ≥ 0. In particular, E X(τ ∧ t) = E X(0) holds.
Proof:
Since X is a martingale, by Theorem 5.4.2 (X(τ ∧ t), t ≥ 0) is also a martingale. Applying Theorem 5.4.3 to it yields

E(X(τ ∧ t) | F_η) = X(τ ∧ η ∧ t) = X(η ∧ t) a.s.,

since η ≤ τ a.s. Setting η = 0 gives E X(τ ∧ t) = E(E(X(τ ∧ t) | F_0)) = E X(0 ∧ t) = E X(0).  □
5.5 Lévy processes and Martingales
Theorem 5.5.1:
Let X = (X(t), t ≥ 0) be a Lévy process with characteristics (a, b, ν).
1. There exists a modification X̃ = (X̃(t), t ≥ 0) of X with càdlàg paths and the same characteristics (a, b, ν).
2. The natural filtration of a càdlàg Lévy process is right-continuous.
Without proof
Theorem 5.5.2 (Regeneration theorem for Lévy processes):
Let X = (X(t), t ≥ 0) be a càdlàg Lévy process with natural filtration (F^X_t, t ≥ 0) and let τ be a finite stopping time w.r.t. (F^X_t, t ≥ 0). The process Y = (Y(t), t ≥ 0), given by Y(t) = X(τ + t) − X(τ), t ≥ 0, is also a Lévy process, adapted w.r.t. the filtration (F^X_{τ+t}, t ≥ 0), which is independent of F^X_τ and has the same characteristics as X. τ is called a regeneration time.
Fig. 5.1
Proof:
For any n ∈ ℕ, take arbitrary 0 ≤ t_0 < ... < t_n and u_1, ..., u_n ∈ ℝ. We state that all assertions of the theorem follow from the relation

E[1_A exp{ Σ_{j=1}ⁿ i u_j (Y(t_j) − Y(t_{j−1})) }] = P(A) E[exp{ Σ_{j=1}ⁿ i u_j (X(t_j) − X(t_{j−1})) }]   (5.5.1)

for all A ∈ F^X_τ.

1. By Lemma 5.4.2, since τ and τ + t, t ≥ 0, are stopping times with τ ≤ τ + t a.s., we have F^X_{τ+s} ⊆ F^X_{τ+t} for s ≤ t, i.e. (F^X_{τ+t})_{t≥0} is a filtration, and Y(t) = X(τ + t) − X(τ) is F^X_{τ+t}-adapted: X(τ) is F^X_τ-measurable, X(τ + t) is F^X_{τ+t}-measurable, and F^X_τ ⊆ F^X_{τ+t}.

2. It follows from (5.5.1) for A = Ω that X =ᵈ Y, i.e., Y is a Lévy process with the same Lévy exponent η as X.

3. It also follows from (5.5.1) that Y and F^X_τ are independent, since arbitrary increments of Y do not depend on F^X_τ.
4. Now let us prove (5.5.1). We begin with the case
a) ∃ c > 0: P(τ ≤ c) = 1. By Example 5 b) of Section 5.2, the processes Y_j = (Y_j(t))_{t≥0}, j = 1, ..., n, with Y_j(t) = exp{i u_j X(t) − t η(u_j)}, t ≥ 0, are complex-valued martingales. Furthermore, it holds

E[1_A exp{ Σ_{j=1}ⁿ i u_j (Y(t_j) − Y(t_{j−1})) }]
= E[1_A exp{ Σ_{j=1}ⁿ i u_j ((X(τ + t_j) − X(τ)) − (X(τ + t_{j−1}) − X(τ))) }]
= E[1_A Π_{j=1}ⁿ (Y_j(τ + t_j) / Y_j(τ + t_{j−1})) · (e^{η(u_j)(τ + t_j)} / e^{η(u_j)(τ + t_{j−1})})]
= E[ E( 1_A Π_{j=1}ⁿ (Y_j(τ + t_j) / Y_j(τ + t_{j−1})) e^{(t_j − t_{j−1}) η(u_j)} | F^X_{τ + t_{n−1}} ) ]
= E[ 1_A Π_{j=1}^{n−1} (Y_j(τ + t_j) / Y_j(τ + t_{j−1})) e^{(t_j − t_{j−1}) η(u_j)} · e^{(t_n − t_{n−1}) η(u_n)} · E(Y_n(τ + t_n) | F^X_{τ + t_{n−1}}) / Y_n(τ + t_{n−1}) ]
= E[ 1_A Π_{j=1}^{n−1} (Y_j(τ + t_j) / Y_j(τ + t_{j−1})) e^{(t_j − t_{j−1}) η(u_j)} ] · e^{(t_n − t_{n−1}) η(u_n)}
= ... = E[1_A] Π_{j=1}ⁿ e^{(t_j − t_{j−1}) η(u_j)} = P(A) Π_{j=1}ⁿ e^{(t_j − t_{j−1}) η(u_j)}
= P(A) E[exp{ i Σ_{j=1}ⁿ u_j (X(t_j) − X(t_{j−1})) }],

where E(Y_n(τ + t_n) | F^X_{τ + t_{n−1}}) = Y_n(τ + t_{n−1}) by the optional sampling theorem (τ is bounded), the dots stand for the repetition of this conditioning argument for j = n−1, ..., 1, and the last equality uses the independence and stationarity of the increments of X.
b) Prove (5.5.1) for an arbitrary finite stopping time τ: P(τ < ∞) = 1. For any k ∈ ℕ it holds A_k := A ∩ {τ ≤ k} ∈ F^X_{τ∧k} if A ∈ F^X_τ. Then it follows from 4 a), applied to the bounded stopping time τ ∧ k, that

E[1_{A_k} exp{ Σ_{j=1}ⁿ i u_j (Y_k(t_j) − Y_k(t_{j−1})) }] = P(A_k) E[exp{ Σ_{j=1}ⁿ i u_j (X(t_j) − X(t_{j−1})) }]   (5.5.2)

for Y_k(t) = X(τ ∧ k + t) − X(τ ∧ k). Now let k → ∞ on both sides of (5.5.2). By Lebesgue's dominated convergence theorem, (5.5.1) follows for any a.s. finite τ, since τ ∧ k → τ a.s., k → ∞.  □
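The regeneration property has a transparent discrete counterpart: for a symmetric ±1 random walk and the bounded stopping time τ = min(inf{k: S_k = 1}, 8), the post-τ step S(τ+1) − S(τ) is again a fair ±1 step, independent of τ. A sketch by exact enumeration (illustrative only; the walk and the cap at 8 are ad-hoc choices, not from the notes):

```python
from itertools import product
from collections import Counter

N = 10  # path length; tau <= 8 < N, so the step at time tau+1 always exists

def tau_of(steps):
    """Bounded stopping time: first hitting time of level 1, capped at 8."""
    s = 0
    for k, x in enumerate(steps, start=1):
        s += x
        if s == 1 or k == 8:
            return k
    return 8

joint = Counter()  # counts of (tau, S(tau+1) - S(tau)) over all 2^N paths
for steps in product((-1, 1), repeat=N):
    t = tau_of(steps)
    joint[(t, steps[t])] += 1  # steps[t] is the (tau+1)-st step (0-indexed)

taus = sorted({t for (t, _) in joint})
print(taus)  # [1, 3, 5, 7, 8]: hits of level 1 occur at odd times, else the cap
```

For every value of τ the two signs of the next step occur in exactly equal counts, which is the discrete analogue of "the post-τ increments are distributed as the original process and independent of F^X_τ".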
5.6 Martingales and the Wiener process
Our goal: We would like to show that if W = (W(t), t ≥ 0) is a Wiener process, then it holds

P(max_{s∈[0,t]} W(s) > x) = √(2/(πt)) ∫_x^∞ e^{−y²/(2t)} dy, x ≥ 0.
Theorem 5.6.1 (Reflection principle):
Let τ be an arbitrary finite stopping time w.r.t. the natural filtration (F^W_t, t ≥ 0). Let X = (X(t), t ≥ 0) be the Wiener process reflected at time τ, i.e. X(t) = W(τ ∧ t) − (W(t) − W(τ ∧ t)), t ≥ 0. Then X =ᵈ W holds.
Fig. 5.2: Reflection principle.
Proof:
It holds

X(t) = W(τ ∧ t) − (W(t) − W(τ ∧ t)) = { W(t), t < τ;  2W(τ) − W(t), t ≥ τ. }

Let X₁(t) = W(τ ∧ t) and X₂(t) = W(τ + t) − W(τ), t ≥ 0. From Theorem 5.5.2 it follows that X₂ is independent of (τ, X₁) (W is a Lévy process and τ a regeneration time). It holds W(t) = X₁(t) + X₂((t − τ)₊) and X(t) = X₁(t) − X₂((t − τ)₊), t ≥ 0. Indeed,

X₁(t) + X₂((t − τ)₊) = { W(t) + X₂(0) = W(t), t < τ;  W(τ) + (W(τ + (t − τ)) − W(τ)) = W(t), t ≥ τ, }

so W(t) = X₁(t) + X₂((t − τ)₊), t ≥ 0. Furthermore,

X₁(t) − X₂((t − τ)₊) = { W(t) − X₂(0) = W(t), t < τ;  W(τ) − (W(t) − W(τ)) = 2W(τ) − W(t), t ≥ τ, }

so X(t) = X₁(t) − X₂((t − τ)₊), t ≥ 0. From Theorems 5.5.2 and 3.3.2 it follows that X₂ =ᵈ W and −W =ᵈ W, hence

(τ, X₁, −X₂) =ᵈ (τ, X₁, X₂),

which implies W =ᵈ X.  □
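The reflection construction can be replayed exactly for a symmetric ±1 random walk: reflecting every path at the first hitting time of a level z leaves the distribution of the endpoint unchanged. A sketch by exhaustive enumeration (illustrative; the horizon and level are made-up choices, assuming nothing beyond the construction above):

```python
from itertools import product
from collections import Counter

N, z = 8, 2

def reflected_endpoint(steps):
    """X(N) = S(tau ∧ N) - (S(N) - S(tau ∧ N)) with tau = inf{k: S_k = z}."""
    s, hit = 0, False
    for x in steps:
        s += x
        if not hit and s == z:
            hit = True
    return (2 * z - s) if hit else s  # 2 W(tau) - W(t) after the hit, else W(t)

orig, refl = Counter(), Counter()
for steps in product((-1, 1), repeat=N):
    orig[sum(steps)] += 1
    refl[reflected_endpoint(steps)] += 1
print(orig == refl)  # the reflected walk has the same endpoint distribution
```

The check is exact: flipping all steps after the hitting time is an involution on the 2^N equally likely paths, the combinatorial core of the reflection principle.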
Let W = (W(t), t ≥ 0) be the Wiener process on (Ω, F, P) and let (F^W_t, t ≥ 0) be the natural filtration w.r.t. W. For z ∈ ℝ let τ^W_z = inf{t ≥ 0: W(t) = z}. τ^W_z is an a.s. finite stopping time w.r.t. (F^W_t, t ≥ 0), z > 0, since it obviously holds {τ^W_z ≤ t} ∈ F^W_t. Since W has continuous paths (a.s.), (F^W_t, t ≥ 0) is right-continuous.

Corollary 5.6.1:
Let M(t) = max_{s∈[0,t]} W(s), t ≥ 0. Then it follows for all z > 0, y ≥ 0, that P(M(t) ≥ z, W(t) < z − y) = P(W(t) > y + z).

Proof:
M(t) is a random variable, since W has continuous paths. Let τ = τ^W_z. By Theorem 5.6.1, it holds for Y(t) = W(τ ∧ t) − (W(t) − W(τ ∧ t)), t ≥ 0, that Y =ᵈ W. Hence (τ^W_z, W) =ᵈ (τ^Y_z, Y), since W(τ) = z and τ^W_z = τ^Y_z. Therefore

P(τ ≤ t, W(t) < z − y) = P(τ^Y_z ≤ t, Y(t) < z − y),

whereas {τ^Y_z ≤ t} ∩ {Y(t) < z − y} = {τ^Y_z ≤ t} ∩ {2z − W(t) < z − y}: if τ = τ^Y_z ≤ t, then Y(t) = 2W(τ) − W(t) = 2z − W(t). Hence

P(τ ≤ t, W(t) < z − y) = P(τ ≤ t, 2z − W(t) < z − y) = P(τ ≤ t, W(t) > z + y) = P(W(t) > z + y),

since {W(t) > z + y} ⊆ {τ ≤ t}. By definition of τ = τ^W_z it holds

P(τ ≤ t, W(t) < z − y) = P(M(t) ≥ z, W(t) < z − y) = P(W(t) > y + z),

since {τ^W_z ≤ t} = {max_{s∈[0,t]} W(s) ≥ z}.  □

Theorem 5.6.2 (Distribution of the maximum of W):
For t > 0 and x ≥ 0 it holds

P(M(t) > x) = √(2/(πt)) ∫_x^∞ e^{−y²/(2t)} dy.

Proof:
In Corollary 5.6.1, set y = 0: P(M(t) ≥ z, W(t) < z) = P(W(t) > z). It holds P(W(t) > z) = P(W(t) ≥ z) for all t and all z, since W(t) ~ N(0, t), thus continuously distributed. Since {W(t) ≥ z} ⊆ {M(t) ≥ z},

P(M(t) ≥ z) = P(M(t) ≥ z, W(t) < z) + P(M(t) ≥ z, W(t) ≥ z) = P(W(t) > z) + P(W(t) ≥ z)
= 2 P(W(t) > z) = 2 · (1/√(2πt)) ∫_z^∞ e^{−y²/(2t)} dy = √(2/(πt)) ∫_z^∞ e^{−y²/(2t)} dy.  □
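The decomposition P(M ≥ z) = 2 P(W > z) + P(W = z) used in the proof has an exact analogue for the symmetric random walk, where the atom P(S_n = z) does not vanish. Checking it by enumeration (a sketch, not from the notes; the horizon n = 9 is odd so that the atom is present):

```python
from itertools import product

n, z = 9, 3  # odd horizon so that P(S_n = z) > 0 for odd z

hit = above = at = 0
for steps in product((-1, 1), repeat=n):
    s, m = 0, 0
    for x in steps:
        s += x
        m = max(m, s)
    hit += (m >= z)    # running maximum reaches z
    above += (s > z)   # endpoint strictly above z
    at += (s == z)     # endpoint exactly at z

# Discrete reflection identity: P(M_n >= z) = 2 P(S_n > z) + P(S_n = z)
print(hit, 2 * above + at)
```

As the step size shrinks, the atom vanishes and one recovers P(M(t) ≥ z) = 2 P(W(t) > z), i.e. Theorem 5.6.2.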
Let X(t) = W(t) − tμ, t ≥ 0, μ > 0, be the Wiener process with negative drift. Consider P(sup_{t≥0} X(t) > x), x ≥ 0.

Motivation: Calculation of the following ruin probability in risk theory.

Assumptions: Initial capital x ≥ 0. Let μ be the volume of premiums per time unit, so that μt are the earned premiums at time t ≥ 0. Let W(t) be the loss process (price development). Then Y(t) = x + tμ − W(t) is the remaining capital at time t. The ruin probability is

P(inf_{t≥0} Y(t) < 0) = P(−x + sup_{t≥0} X(t) > 0) = P(sup_{t≥0} X(t) > x).

Theorem 5.6.3:
It holds

P(sup_{t≥0} X(t) > x) = e^{−2μx}, x ≥ 0, μ > 0.
Proof:
Let τ = τ^X_x = inf{t ≥ 0: X(t) = x}. It holds

P(sup_{t≥0} X(t) > x) = P(τ < ∞) = lim_{t→∞} P(τ < t).

Compute this limit. For that, introduce the process Y = (Y(t), t ≥ 0),

Y(t) = exp{ u X(t) − t (u²/2 − μu) }, t ≥ 0,

with u ≥ 0 fixed. It can easily be shown that Y is a martingale (indeed, Y(t) = exp{u W(t) − u²t/2}). τ ∧ t is a finite stopping time w.r.t. the filtration (F^X_t, t ≥ 0). By Corollary 5.4.1, E Y(τ ∧ t) = E Y(0) = e⁰ = 1. On the other hand,

1 = E Y(τ ∧ t) = E[Y(τ ∧ t) 1(τ < t)] + E[Y(τ ∧ t) 1(τ ≥ t)] = E[Y(τ) 1(τ < t)] + E[Y(t) 1(τ ≥ t)].

If we can show that

lim_{t→∞} E[Y(t) 1(τ ≥ t)] = 0,   (5.6.1)

then lim_{t→∞} E[Y(τ) 1(τ < t)] = 1. Since Y(τ) = exp{ux − τ(u²/2 − μu)}, it follows

lim_{t→∞} E[exp{−τ(u²/2 − μu)} 1(τ < t)] = e^{−ux},

and for u = 2μ (so that u²/2 − μu = 0) it holds

lim_{t→∞} E[e⁰ 1(τ < t)] = lim_{t→∞} P(τ < t) = e^{−2μx},

so P(sup_{t≥0} X(t) > x) = e^{−2μx}. Now let us prove (5.6.1). By Corollary 3.3.2, it is known that W(t)/t → 0 a.s., t → ∞. Hence

lim_{t→∞} X(t)/t = lim_{t→∞} (W(t)/t − μ) = −μ, so X(t) → −∞ a.s., t → ∞.

Then, for u = 2μ,

Y(t) 1(τ ≥ t) = exp{2μ X(t)} 1(τ ≥ t) → 0 a.s., t → ∞.

By Lebesgue's theorem (note that on {τ ≥ t} it holds X(t) ≤ x, hence Y(t) 1(τ ≥ t) ≤ e^{2μx}), it holds E[Y(t) 1(τ ≥ t)] → 0, t → ∞.  □

Theorem 5.6.4:
Let X = (X(t), t ≥ 0) be a Lévy process and let τ = (τ(t), t ≥ 0) be a subordinator, both defined on a probability space (Ω, F, P). Let X and τ be independent. Then Y = (Y(t), t ≥ 0), defined by Y(t) = X(τ(t)), t ≥ 0, is a Lévy process.
Without proof.
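Theorem 5.6.3 has an exact discrete counterpart: for a skip-free random walk with down-drift (up-probability p < 1/2), P(sup_n S_n ≥ k) = (p/(1−p))^k, the analogue of e^{−2μx}. A seeded Monte Carlo sketch (illustrative only; the horizon and the early-exit level are ad-hoc truncations whose error is negligible for these parameters):

```python
import random

random.seed(1)
p, level = 0.3, 3               # up-probability p < 1/2: downward drift
exact = (p / (1 - p)) ** level  # P(sup_n S_n >= level) = (p/q)^level, q = 1 - p

def reaches_level(horizon=400):
    """Simulate one walk; report whether it ever reaches `level`."""
    s = 0
    for _ in range(horizon):
        s += 1 if random.random() < p else -1
        if s >= level:
            return True
        if s <= -25:  # P(climbing back to 3 from -25) < (3/7)^28: negligible
            return False
    return False

trials = 20000
estimate = sum(reaches_level() for _ in range(trials)) / trials
print(round(estimate, 3), round(exact, 3))
```

The value (p/q)^k plays the role of e^{−2μx}: both are the exponential decay of the ruin probability in the initial level, obtained from an exponential martingale of the process with drift.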
5.7 Additional exercises
Exercise 5.7.1:
Let X, Y: Ω → ℝ be arbitrary random variables on (Ω, F, P) with

E|X| < ∞, E|Y| < ∞, E|XY| < ∞,

and let G ⊆ F be an arbitrary sub-σ-algebra of F. Then it holds a.s. that

(a) E(X | {∅, Ω}) = EX, E(X | F) = X,

(b) E(aX + bY | G) = a E(X | G) + b E(Y | G) for arbitrary a, b ∈ ℝ,

(c) E(X | G) ≤ E(Y | G), if X ≤ Y,

(d) E(XY | G) = Y E(X | G), if Y is a (G, B(ℝ))-measurable random variable,

(e) E(E(X | G₂) | G₁) = E(X | G₁), if G₁ and G₂ are sub-σ-algebras of F with G₁ ⊆ G₂,

(f) E(X | G) = EX, if the σ-algebra G and σ(X) = X⁻¹(B(ℝ)) are independent, i.e., if P(A ∩ A′) = P(A) P(A′) for arbitrary A ∈ G and A′ ∈ σ(X),

(g) E(f(X) | G) ≥ f(E(X | G)), if f: ℝ → ℝ is a convex function such that E|f(X)| < ∞.
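On a finite probability space, E(X | G) for a σ-algebra G generated by a partition is simply the partition-wise weighted average, and properties such as (a) and (e) can be checked mechanically. A minimal sketch (plain Python; the space, partition and values of X are made-up examples):

```python
from fractions import Fraction

# Omega = {0,...,5} with uniform probability; G generated by the partition
# {0,1}, {2,3}, {4,5}; X an arbitrary random variable on Omega.
omega = range(6)
p = {w: Fraction(1, 6) for w in omega}
blocks = [{0, 1}, {2, 3}, {4, 5}]
X = {0: 4, 1: 0, 2: 3, 3: 1, 4: 5, 5: 5}

def cond_exp(Z, blocks):
    """E(Z|G): on each partition block, the probability-weighted average."""
    out = {}
    for B in blocks:
        pb = sum(p[w] for w in B)
        mean = sum(p[w] * Z[w] for w in B) / pb
        for w in B:
            out[w] = mean
    return out

E = lambda Z: sum(p[w] * Z[w] for w in omega)

cX = cond_exp(X, blocks)
# Taking the total expectation of E(X|G) returns E(X) (property (a)/(e))
print(E(cX), E(X))
```

Exact rational arithmetic (`Fraction`) makes the identities hold with equality rather than up to floating-point error.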
Exercise 5.7.2:
Consider two random variables X and Y on the probability space ([−1, 1], B([−1, 1]), ν/2) with E|X| < ∞, where ν is the Lebesgue measure on [−1, 1]. Determine σ(Y) and a version of the conditional expectation E(X | Y) for the following random variables.

(a) Y(ω) = ω⁵ (Hint: Show first that σ(Y) = B([−1, 1]).)

(b) Y(ω) = 1/k for ω ∈ [(k − 3)/2, (k − 2)/2), k = 1, ..., 4, and Y(1) = 1.
(Hint: It holds E(X | B) = E(X 1_B)/P(B) for B ∈ σ(Y) with P(B) > 0.)

(c) Calculate the distribution of E(X | Y) in (a) and (b), if X ~ U[−1, 1].
Exercise 5.7.3:
Let X and Y be random variables on a probability space (Ω, F, P). The conditional variance Var(Y | X) is defined by

Var(Y | X) = E[(Y − E(Y | X))² | X].

Show that

Var Y = E[Var(Y | X)] + Var[E(Y | X)].

Exercise 5.7.4:
Let S and τ be stopping times w.r.t. the filtration (F_t, t ≥ 0). Show:

(a) A ∩ {S ≤ τ} ∈ F_τ for A ∈ F_S,

(b) F_{min(S,τ)} = F_S ∩ F_τ.
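The variance decomposition of Exercise 5.7.3 can be verified directly on a small finite example (a sketch; the joint probabilities below are arbitrary made-up values):

```python
from fractions import Fraction

# Joint law of (X, Y) on a 2 x 3 grid, probabilities summing to 1 (made up)
pmf = {(0, 1): Fraction(1, 6), (0, 2): Fraction(1, 6), (0, 5): Fraction(1, 6),
       (1, 1): Fraction(1, 4), (1, 2): Fraction(1, 8), (1, 5): Fraction(1, 8)}

E = lambda f: sum(q * f(x, y) for (x, y), q in pmf.items())

def cond(x):
    """P(X = x), E(Y | X = x) and Var(Y | X = x)."""
    px = sum(q for (a, _), q in pmf.items() if a == x)
    m = sum(q * y for (a, y), q in pmf.items() if a == x) / px
    v = sum(q * (y - m) ** 2 for (a, y), q in pmf.items() if a == x) / px
    return px, m, v

var_y = E(lambda x, y: y ** 2) - E(lambda x, y: y) ** 2
e_condvar = sum(px * v for px, m, v in map(cond, (0, 1)))
means = [(px, m) for px, m, v in map(cond, (0, 1))]
ey = sum(px * m for px, m in means)
var_condmean = sum(px * (m - ey) ** 2 for px, m in means)
print(var_y, e_condvar + var_condmean)  # both sides of the decomposition
```

With exact rationals the two sides agree identically, which is the content of the law of total variance.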
Exercise 5.7.5:
(a) Let (X(t), t ≥ 0) be a martingale. Show that E X(t) = E X(0) holds for all t ≥ 0.

(b) Let (X(t), t ≥ 0) be a sub- resp. supermartingale. Show that E X(t) ≥ E X(0) (resp. E X(t) ≤ E X(0)) holds for all t ≥ 0.
Exercise 5.7.6:
Let the stochastic process X = (X(t), t ≥ 0) be adapted and càdlàg. Show that

P(sup_{0≤v≤t} X(v) > x) ≤ E[X(t)²] / (x² + E[X(t)²])

holds for arbitrary x > 0 and t ≥ 0, if X is a submartingale with E X(t) = 0 and E[X(t)²] < ∞.

Exercise 5.7.7:
(a) Let g: [0, ∞) → [0, ∞) be a monotone increasing function with g(x)/x → ∞, x → ∞. Show that the sequence X₁, X₂, ... of random variables is uniformly integrable if sup_{n∈ℕ} E[g(|X_n|)] < ∞.

(b) Let X = (X_n, n ∈ ℕ) be a martingale. Show that the sequence of random variables X(τ ∧ 1), X(τ ∧ 2), ... is uniformly integrable for every finite stopping time τ, if E|X(τ)| < ∞ and E[|X_n| 1(τ > n)] → 0 for n → ∞.
Exercise 5.7.8:
Let S = (S_n = a + Σ_{i=1}ⁿ X_i, n ∈ ℕ) be a symmetric random walk with a > 0 and P(X_i = 1) = P(X_i = −1) = 1/2 for i ∈ ℕ. The random walk is stopped at the time τ when it exceeds or falls below one of the two values 0 and K > a for the first time, i.e.

τ = min{k ≥ 0: S_k ≤ 0 or S_k ≥ K}.

Show that M_n = Σ_{i=0}ⁿ S_i − (1/3) S_n³ is a martingale and that E[Σ_{i=0}^τ S_i] = (1/3)(K² − a²)a + a holds.

Hint: To calculate E(M_n | F^M_m), n > m, you can use E[(Σ_{i=k}^l X_i)³] = 0, 1 ≤ k ≤ l, M_n = Σ_{r=0}^m S_r + Σ_{r=m+1}ⁿ S_r − (1/3) S_n³ and S_n = (S_n − S_m) + S_m.
A discrete martingale w.r.t. a filtration (F_n)_{n∈ℕ} is a sequence of random variables (X_n)_{n∈ℕ} on a probability space (Ω, F, P) such that X_n is measurable w.r.t. F_n and E(X_{n+1} | F_n) = X_n a.s. for all n ∈ ℕ. A discrete stopping time w.r.t. (F_n)_{n∈ℕ} is a random variable τ: Ω → ℕ ∪ {∞} such that {τ ≤ n} ∈ F_n for all n ∈ ℕ ∪ {∞}, where F_∞ = σ(∪_{n=1}^∞ F_n).

Exercise 5.7.9:
Let (X_n)_{n∈ℕ} be a discrete martingale and τ a discrete stopping time w.r.t. (F_n)_{n∈ℕ}. Show that (X(τ ∧ n))_{n∈ℕ} is also a martingale w.r.t. (F_n)_{n∈ℕ}.

Exercise 5.7.10:
Let (S_n)_{n∈ℕ} be a symmetric random walk with S_n = Σ_{i=1}ⁿ X_i for a sequence of independent and identically distributed random variables X₁, X₂, ..., such that P(X₁ = 1) = P(X₁ = −1) = 1/2. Let τ = inf{n: |S_n| > √n} and F_n = σ(X₁, ..., X_n), n ∈ ℕ.

(a) Show that τ is a stopping time w.r.t. (F_n)_{n∈ℕ}.

(b) Show that (G_n)_{n∈ℕ} with G_n = S²_{τ∧n} − τ ∧ n is a martingale w.r.t. (F_n)_{n∈ℕ}. (Hint: Use Exercise 5.7.9.)

(c) Show that |G_n| ≤ 4τ holds for all n ∈ ℕ.
(Hint: It holds |G_n| ≤ |S²_{τ∧n}| + |τ ∧ n| ≤ S²_{τ∧n} + τ.)
Exercise 5.7.11:
Let X₁, X₂, ... be a sequence of independent and identically distributed random variables with E|X₁| < ∞. Let F_n = σ(X₁, ..., X_n), n ∈ ℕ, and let τ be a stopping time w.r.t. (F_n)_{n∈ℕ} with Eτ < ∞.

(a) Let τ be independent of X₁, X₂, .... Derive a formula for the characteristic function of S_τ = Σ_{i=1}^τ X_i and verify Wald's identity E S_τ = Eτ · E X₁.

(b) Let additionally E X₁ = 0 and τ = inf{n: S_n < 0}. Use Theorem 2.1.3 to show that Eτ = ∞. (Hint: Proof by contradiction.)