
Chapter 5

Itô's Theory of Stochastic Integration

Up to this point, I have been recognizing but not confronting the challenge posed by integrals of the sort in (3.1.22). There are several reasons for my decision to postpone doing so until now, perhaps the most important of which is my belief that, in spite of, or maybe because of, its elegance, Itô's theory of stochastic integration tends to mask the essential simplicity and beauty of his ideas as we have been developing them heretofore. Nonetheless, it is high time that I explain his theory of integration, and that is what I will be doing in this chapter. However, we will not deal with the theory in full generality and will restrict our attention to the case when the paths are continuous. In terms of equations like (3.1.22), this means that we will not try to rationalize the "dp′(t)" integral except in the case when p′ is Brownian motion (i.e., M′ = 0 and p′ = w( · , p′)). The general theory is beautiful and has been fully developed by the French school, particularly by C. Dellacherie and P.A. Meyer, who have published a detailed account of their findings in [5].

Because it already contains most of the essential ideas, we will devote this chapter to stochastic integration with respect to Brownian motion.

5.1 Brownian Stochastic Integrals

Let (Ω, F, P) be a complete probability space. Then (β(t), F_t, P) will be called an R^n-valued Brownian motion if {F_t : t ≥ 0} is a non-decreasing family of P-complete sub-σ-algebras of the σ-algebra F and β : [0,∞) × Ω −→ R^n is a B_{[0,∞)} × F-measurable map with the properties that¹

(a) β(0) = 0 and t ↦ β(t, ω) is continuous for P-almost every ω,

(b) ω ↦ β(t, ω) is F_t-measurable for each t ∈ [0,∞),

(c) for all s ∈ [0,∞) and t ∈ (0,∞), β(s + t) − β(s) is P-independent of F_s and has the distribution of a centered Gaussian with covariance tI_{R^n} under P.

¹ It should be noticed that our insistence on the completeness of all σ-algebras imposes no restriction. Indeed, if F or the F_t's are not complete but (a), (b), and (c) hold for some ω ↦ β( · , ω), then they will continue to hold after all σ-algebras have been completed.
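As a numerical aside (not part of the text's development), the defining properties can be checked empirically by sampling paths on a grid; the grid size, horizon, and path count below are illustrative assumptions.

```python
import numpy as np

# A minimal simulation sketch of the definition: sample an R^2-valued Brownian
# path on a uniform grid and check property (c) empirically.
rng = np.random.default_rng(0)
n, dt, steps, paths = 2, 0.01, 100, 20000

# Independent increments beta(t + dt) - beta(t) ~ N(0, dt * I_{R^n}); beta(0) = 0.
increments = rng.normal(0.0, np.sqrt(dt), size=(paths, steps, n))
beta = np.cumsum(increments, axis=1)

# Property (c) with s = 0.3, t = 0.5: beta(0.8) - beta(0.3) should be a
# centered Gaussian with covariance 0.5 * I_{R^2}, independent of F_{0.3}.
diff = beta[:, 79, :] - beta[:, 29, :]           # grid times 0.8 and 0.3
print(diff.mean(axis=0))                         # approximately (0, 0)
print(np.cov(diff.T))                            # approximately 0.5 * I_2
```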



Notice that the preceding definition is very close to saying that β( · , ω) is continuous for all ω and that the P-distribution of ω ∈ Ω ↦ β( · , ω) ∈ C([0,∞); R^n) is given by Wiener measure, the measure which would have been denoted by P_{(I,0,0)} in § 2.4 and by P_0 starting in § 3.1.1. To be more precise, first observe that, without loss in generality, one can assume that β( · , ω) ∈ C([0,∞); R^n) for all ω, in which case (a), (b), and (c) guarantee that P_{(I,0,0)} is the P-distribution of ω ↦ β( · , ω). Conversely, if Ω = C([0,∞); R^n), P = P_{(I,0,0)}, and F and F_t are, respectively, the P-completions of B and B_t, then one gets a Brownian motion by taking β(t, p) = p(t). Thus, the essential generalization afforded by the preceding definition is that the σ-algebras need not be inextricably tied to the random variables β(t). That is, F_t must contain but need not be the completion of σ(β(τ) : τ ∈ [0, t]).

5.1.1. A Review of the Paley–Wiener Integral. As an aid to understanding Itô's theory, it may be helpful to recall the theory of stochastic integration which was introduced by Paley and Wiener. Namely, let (β(t), F_t, P) be a Brownian motion on the complete probability space (Ω, F, P), and assume β( · , ω) ∈ C([0,∞); R^n) for all ω ∈ Ω. Given a Borel measurable function θ : [0,∞) −→ R^n which has bounded variation on each finite interval, one can use Riemann–Stieltjes theory² to define

t ∈ [0,∞) ↦ I_θ(t, ω) = ∫_0^t (θ(τ), dβ(τ, ω))_{R^n} ∈ R.

Because it is given by a Riemann–Stieltjes integral, we can say that (cf. (3.3.5))

(5.1.1)  I_θ(t, ω) = lim_{N→∞} ∑_{m=0}^∞ (θ(m2^{-N}), Δ^N_m β(t, ω))_{R^n}.

By (c), we know that, for each N ∈ N,

ω ↦ ∑_{m=0}^∞ (θ(m2^{-N}), Δ^N_m β(t, ω))_{R^n}

is a centered Gaussian with variance ∫_0^t |θ([τ]_N)|² dτ, and so it is an easy step to the conclusion that ω ↦ I_θ(t, ω) is a centered Gaussian with variance equal to ∫_0^t |θ(τ)|² dτ. In fact, with only a little more effort, one sees

² What is needed here is the fact (cf. Theorem 1.2.7 in [34]) that Riemann–Stieltjes theory is completely symmetric: ϕ is Riemann–Stieltjes integrable with respect to ψ if and only if ψ is with respect to ϕ. In fact, the integration by parts formula is what allows one to exchange the two.


that I_θ(s + t) − I_θ(s) is P-independent of σ(I_θ(σ) : σ ∈ [0, s]) and that its P-distribution is that of a centered Gaussian with variance ∫_s^{s+t} |θ(τ)|² dτ. Moreover, I_θ( · , ω) ∈ C([0,∞); R), and so we now know that (I_θ(t), F_t, P) is a continuous, square integrable martingale. In particular, by Doob's inequality,

(5.1.2)  ‖θ‖²_{L²([0,∞);R^n)} = lim_{t→∞} E^P[I_θ(t)²] ≤ E^P[‖I_θ‖²_{[0,∞)}] ≤ 4‖θ‖²_{L²([0,∞);R^n)}.
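The Riemann sums in (5.1.1) and the Gaussian variance formula lend themselves to a quick numerical check; the choice θ(t) = e^{−t} on [0, 1] below is an illustrative assumption, not one made in the text.

```python
import numpy as np

# A sketch of the Paley–Wiener integral (5.1.1) for a deterministic theta of
# locally bounded variation: the Riemann sums sum_m theta(m 2^{-N}) Delta^N_m beta
# should be approximately centered Gaussian with variance
# int_0^1 e^{-2t} dt = (1 - e^{-2})/2.
rng = np.random.default_rng(1)
N, paths = 8, 20000
h = 2.0 ** (-N)
grid = np.arange(0.0, 1.0, h)                    # left endpoints m 2^{-N}
theta = np.exp(-grid)

dbeta = rng.normal(0.0, np.sqrt(h), size=(paths, grid.size))
I = dbeta @ theta                                # one sample of I_theta(1) per row

print(I.mean())                                  # approximately 0
print(I.var())                                   # approximately (1 - e^{-2})/2 = 0.432...
```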

The relations in (5.1.2) can be used as the basis on which to extend the definition of θ ↦ I_θ to square integrable θ's which do not necessarily possess locally bounded variation. Indeed, (5.1.2) says that, as a map taking θ ∈ L²([0,∞); R^n) with locally bounded variation into continuous square integrable martingales on (Ω, F_t, P), θ ↦ I_θ is continuous. Hence, because the smooth elements of L²([0,∞); R^n) are dense there, this map admits a unique continuous extension. To be precise, define M²(P; R) to be the space of all R-valued, square integrable, P-almost surely continuous P-martingales M relative to {F_t : t ≥ 0} such that

(5.1.3)  ‖M‖_{M²(P;R)} = sup_{t∈[0,∞)} E^P[|M(t)|²]^{1/2} < ∞.

Although it may not be apparent, M²(P; R) is actually a Hilbert space. In fact, it can be isometrically embedded as a closed subspace of L²(P; R). Namely, if M ∈ M²(P; R), then M is an L²-bounded martingale and therefore, by the L²-martingale convergence theorem (cf. Theorem 7.1.16 in [36]), there exists an M(∞) ∈ L²(P; R) to which {M(t) : t ≥ 0} converges both P-almost surely and in L²(P; R). In particular, this means that, for each t ≥ 0, M(t) = E^P[M(∞)|F_t] P-almost surely. Moreover, because (M(t)², F_t, P) is a submartingale,

‖M(t)‖_{L²(P;R)} ↗ ‖M‖_{M²(P;R)} = ‖M(∞)‖_{L²(P;R)}.

Hence, the map M ∈ M²(P; R) ↦ M(∞) ∈ L²(P; R) is a linear isometry. Finally, to see that {M(∞) : M ∈ M²(P; R)} is closed in L²(P; R), suppose that {M_k}_{k=1}^∞ ⊆ M²(P; R) and that M_k(∞) −→ X in L²(P; R). By Doob's inequality,

sup_{ℓ>k} E^P[‖M_ℓ − M_k‖²_{[0,∞)}] ≤ 4 sup_{ℓ>k} ‖M_ℓ(∞) − M_k(∞)‖²_{L²(P;R)} −→ 0

as k → ∞. Hence, there exists an F-measurable map ω ∈ Ω ↦ M( · , ω) ∈ C([0,∞); R) such that lim_{k→∞} E^P[‖M − M_k‖²_{[0,∞)}] = 0. But clearly, for each t ≥ 0, M(t) = E^P[X|F_t] P-almost surely, and so not only is M ∈ M²(P; R) but also, since X is σ(⋃_{t≥0} F_t)-measurable, X = M(∞). For future reference, we will collect these observations in a lemma.


5.1.4 Lemma. The space M²(P; R) with the norm given by (5.1.3) is a Hilbert space. Moreover, for each M ∈ M²(P; R), M(∞) ≡ lim_{t→∞} M(t) exists both P-almost surely and in L²(P; R), and the map M ∈ M²(P; R) ↦ M(∞) ∈ L²(P; R) is a linear isometry.

By combining Lemma 5.1.4 with the remarks which precede it, we arrive at the following statement, which summarizes the Paley–Wiener theory of stochastic integration.

5.1.5 Theorem. There is a unique, linear isometry θ ∈ L²([0,∞); R^n) ↦ I_θ ∈ M²(P; R) with the property that I_θ(t, ω) is given by (5.1.1) when θ has locally bounded variation. In particular, for each T ≥ 0, I_θ(T) = I_{1_{[0,T]}θ}(∞) P-almost surely. Finally, for each θ ∈ L²([0,∞); R^n) and all 0 ≤ s < t ≤ ∞, I_θ(t) − I_θ(s) is P-independent of F_s and its P-distribution is that of a centered Gaussian with variance ∫_s^t |θ(τ)|² dτ.

5.1.2. Itô's Extension. Itô's extension of the preceding to θ's which may depend on ω as well as t is completely natural if one keeps in mind the reason for his wanting to make such an extension. Namely, he was trying to make sense out of integrals which appear in his method of constructing Markov processes. Thus, he wanted to find a notion of integration which would allow him to interpret

lim_{N→∞} ∑_{m=0}^∞ σ(X_N(m2^{-N}, x)) Δ^N_m p′(t)

as an integral. In particular, he had reason to suppose that it was best to make sure that the integrand is independent of the differential by which it is being multiplied.

With this in mind, we say that a map F on [0,∞) × Ω into a measurable space is progressively measurable if F ↾ [0, T] × Ω is B_{[0,T]} × F_T-measurable for each T ∈ [0,∞).³ The following elementary facts about progressive measurability are proved in Lemma 7.1.2 of [36].

5.1.6 Lemma. Let PM denote the collection of all A ⊆ [0,∞) × Ω such that 1_A is an R-valued, progressively measurable function. Then PM is a sub-σ-algebra of B_{[0,∞)} × F, and a function on [0,∞) × Ω is progressively measurable if and only if it is measurable with respect to PM. Furthermore, if F : [0,∞) × Ω −→ E, where E is a metric space, has the properties that F( · , ω) is continuous for each ω and F(T, · ) is F_T-measurable for each T ≥ 0, then F is progressively measurable. In fact, if F_t is P-complete for each t ≥ 0 and F( · , ω) is continuous for P-almost every ω, then F is progressively measurable if F(T, · ) is F_T-measurable for each T ≥ 0.

³ So far as I know, the notion of progressive measurability is one of the many contributions which P.A. Meyer made to the subject of stochastic integration. In particular, Itô dealt with adapted functions: F's for which ω ↦ F(T, ω) is F_T-measurable for each T. Even though adaptedness is more intuitively appealing, there are compelling technical reasons for preferring progressive measurability. See Remark 7.1.1 in [36] for further comments on this matter.

Now let Θ²(P; R^n) denote the space of all progressively measurable θ : [0,∞) × Ω −→ R^n with the property that

‖θ‖²_{Θ²(P;R^n)} ≡ E^P[∫_0^∞ |θ(t)|² dt] < ∞.

Since, by Lemma 5.1.6, an equivalent way to describe Θ²(P; R^n) is as the subspace of progressively measurable elements of L²(Leb_{[0,∞)} × P; R^n), we know that Θ²(P; R^n) is a Hilbert space.

Our goal is to show (cf. Exercise 5.1.20 below) that there is a unique linear isometry θ ∈ Θ²(P; R^n) ↦ I_θ ∈ M²(P; R) with the property that, when θ is an element of Θ²(P; R^n) such that θ( · , ω) has locally bounded variation for each ω,

(5.1.7)  I_θ(t, ω) = ∫_0^t (θ(τ, ω), dβ(τ, ω))_{R^n} = (θ(t, ω), β(t, ω))_{R^n} − ∫_0^t (β(τ, ω), dθ(τ, ω))_{R^n},

where the integrals are taken in the sense of Riemann–Stieltjes.

5.1.8 Lemma. Let SΘ²(P; R^n)⁴ be the subspace of uniformly bounded θ ∈ Θ²(P; R^n) for which there exists an N ∈ N with the property that θ(t, ω) = θ([t]_N, ω) for all (t, ω) ∈ [0,∞) × Ω. Then SΘ²(P; R^n) is dense in Θ²(P; R^n). Moreover, if θ ∈ SΘ²(P; R^n), I_θ is given as in (5.1.7), and

(5.1.9)  A_θ(t, ω) ≡ ∫_0^t |θ(τ, ω)|² dτ,

then (E_θ(t), F_t, P) is a martingale when

(5.1.10)  E_θ(t, ω) ≡ exp(I_θ(t, ω) − ½ A_θ(t, ω)).

In particular, both (I_θ(t), F_t, P) and (I_θ(t)² − A_θ(t), F_t, P) are martingales.

⁴ The "S" here stands for "simple."
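For a genuinely random simple integrand, the martingale identities in Lemma 5.1.8 can be checked numerically. The integrand θ(t, ω) = sign(β([t]_N, ω)) used below is an illustrative choice (not from the text): it is bounded, constant on dyadic intervals, and adapted, and since |θ| = 1 we have A_θ(t) = t.

```python
import numpy as np

# A numerical sketch of Lemma 5.1.8 for a simple integrand: with
# theta(t) = sign(beta([t]_N)), the martingale identities give
# E[I_theta(1)] = 0, E[I_theta(1)^2] = A_theta(1) = 1, and
# E[E_theta(1)] = E[exp(I_theta(1) - 1/2)] = 1.
rng = np.random.default_rng(2)
N, paths = 6, 50000
h = 2.0 ** (-N)
steps = 2 ** N                                   # dyadic grid of [0, 1]

dbeta = rng.normal(0.0, np.sqrt(h), size=(paths, steps))
beta_left = np.concatenate([np.zeros((paths, 1)),
                            np.cumsum(dbeta, axis=1)[:, :-1]], axis=1)
theta = np.where(beta_left >= 0.0, 1.0, -1.0)    # theta(m h) is F_{mh}-measurable
I = (theta * dbeta).sum(axis=1)                  # I_theta(1)

print(I.mean())                  # approximately 0
print((I ** 2).mean())           # approximately A_theta(1) = 1
print(np.exp(I - 0.5).mean())    # approximately 1
```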


Proof: To prove the density statement, first observe that, by a standard truncation procedure, it is easy to check that uniformly bounded elements of Θ²(P; R^n) which are supported on [0, T] × Ω for some T ∈ [0,∞) form a dense subspace of Θ²(P; R^n). Thus, suppose that θ ∈ Θ²(P; R^n) is uniformly bounded and vanishes off of [0, T] × Ω. Choose a ψ ∈ C_c^∞(R; [0,∞)) so that ψ ≡ 0 on (−∞, 0] and ∫_R ψ(t) dt = 1, and set

θ_k(t, ω) = k ∫_0^∞ ψ(k(t − τ)) θ(τ, ω) dτ.

Then, for each k ∈ Z⁺ and ω ∈ Ω, θ_k( · , ω) is smooth and vanishes off of [0, T + 1]. In addition, because ψ is supported on the right half line, it follows from Lemma 5.1.6 that θ_k is progressively measurable. At the same time, for each ω, ‖θ_k( · , ω)‖_{L²([0,∞);R^n)} ≤ ‖θ( · , ω)‖_{L²([0,∞);R^n)} and

‖θ_k( · , ω) − θ( · , ω)‖_{L²([0,∞);R^n)} −→ 0 as k → ∞.

Hence, by Lebesgue's Dominated Convergence Theorem, ‖θ_k − θ‖_{Θ²(P;R^n)} −→ 0. In other words, we have now proved the density of those uniformly bounded elements of Θ²(P; R^n) which are supported on [0, T] × Ω for some T ≥ 0 and are smooth as functions of t ∈ [0,∞) for each ω ∈ Ω. But clearly, if θ is such an element of Θ²(P; R^n) and θ_N(t, ω) ≡ θ([t]_N, ω), then θ_N −→ θ in Θ²(P; R^n).

To prove the second assertion, let θ be a uniformly bounded element of Θ²(P; R^n) which satisfies θ(t, ω) = θ([t]_N, ω). Then, for m2^{-N} ≤ s < t ≤ (m + 1)2^{-N},

E^P[E_θ(t) | F_s] = E_θ(s) e^{−((t−s)/2)|θ(m2^{-N})|²} E^P[exp((θ(m2^{-N}), β(t) − β(s))_{R^n}) | F_s] = E_θ(s),

since β(t) − β(s) is P-independent of F_s. Clearly, for general 0 ≤ s < t, one gets the same conclusion by iterating the preceding result. Hence, we now know that (E_θ(t), F_t, P) is a martingale.

To complete the proof from here, first observe that we can replace θ by λθ for any λ ∈ R. In particular, this means that

E^P[e^{|I_θ(t)|}] ≤ e^{A²t/2} E^P[E_θ(t) + E_{−θ}(t)] = 2e^{A²t/2},

where A is the uniform upper bound on |θ(t, ω)|. At the same time, by Taylor's Theorem, for 0 < λ ≤ 1,

|(E_{λθ}(t) − 1)/λ − I_θ(t)| ≤ Cλ e^{2|I_θ(t)|}

and

|(E_{λθ}(t) + E_{−λθ}(t) − 2)/λ² − (I_θ(t)² − A_θ(t))| ≤ Cλ e^{2|I_θ(t)|},

where C < ∞ depends only on t and A. Hence, since the first estimate above applied to 2θ shows that e^{2|I_θ(t)|} is P-integrable, Lebesgue's Dominated Convergence Theorem gives the desired conclusion after letting λ ↘ 0.


5.1.11 Theorem. There is a unique linear, isometric map θ ∈ Θ²(P; R^n) ↦ I_θ ∈ M²(P; R) with the property that I_θ is given by (5.1.7) for each θ ∈ SΘ²(P; R^n). Moreover, given θ_1, θ_2 ∈ Θ²(P; R^n),

(5.1.12)  (I_{θ_1}(t)I_{θ_2}(t) − ∫_0^t (θ_1(τ), θ_2(τ))_{R^n} dτ, F_t, P)

is a martingale. Finally, if θ ∈ Θ²(P; R^n) and E_θ(t) is defined as in (5.1.10), then (E_θ(t), F_t, P) is always a supermartingale and is a martingale if A_θ(T) is bounded for each T ∈ [0,∞). (See Exercises 5.1.24 and 5.3.4 for more refined information.)

Proof: The existence and uniqueness of θ ↦ I_θ is immediate from Lemma 5.1.8. Indeed, from that lemma, we know that this map is linear and isometric on SΘ²(P; R^n) and that SΘ²(P; R^n) is dense in Θ²(P; R^n). Furthermore, Lemma 5.1.8 says that (I_θ(t)² − A_θ(t), F_t, P) is a martingale when θ ∈ SΘ²(P; R^n), and so the general case follows from the fact that I_{θ_k}(t)² − A_{θ_k}(t) −→ I_θ(t)² − A_θ(t) in L¹(P; R) when θ_k −→ θ in Θ²(P; R^n). Knowing that (I_θ(t)² − A_θ(t), F_t, P) is a martingale for each θ ∈ Θ²(P; R^n), we get (5.1.12) by polarization. That is, one uses the identity

I_{θ_1}(t)I_{θ_2}(t) − ∫_0^t (θ_1(τ), θ_2(τ))_{R^n} dτ = ¼(I_{θ_1+θ_2}(t)² − I_{θ_1−θ_2}(t)² − A_{θ_1+θ_2}(t) + A_{θ_1−θ_2}(t)).

Finally, to prove the last assertion, choose {θ_k}_1^∞ ⊆ SΘ²(P; R^n) so that θ_k −→ θ in Θ²(P; R^n). Because (E_{θ_k}(t), F_t, P) is a martingale for each k and E_{θ_k}(t) −→ E_θ(t) in P-probability for each t ≥ 0, Fatou's Lemma implies that (E_θ(t), F_t, P) is a supermartingale. Next suppose that θ is uniformly bounded by a constant Λ < ∞, and choose the θ_k's so that they are all uniformly bounded by Λ as well. Then, for each t ∈ [0, T],

E^P[E_{θ_k}(t)²] ≤ e^{Λ²t} E^P[E_{2θ_k}(t)] = e^{Λ²t},

and so E_{θ_k}(t) −→ E_θ(t) in L¹(P; R) for each t ∈ [0, T]. Hence, we now know that (E_θ(t), F_t, P) is a martingale when θ is bounded. Finally, if A_θ(t, ω) ≤ Λ(t) < ∞ for each t ∈ [0,∞) and θ_m(t, ω) ≡ 1_{[0,m]}(|θ(t, ω)|) θ(t, ω), then E_{θ_m}(t) −→ E_θ(t) in P-probability and E^P[E_{θ_m}(t)²] ≤ e^{Λ(t)} for each t ≥ 0, which again is sufficient to show that (E_θ(t), F_t, P) is a martingale.

Remark 5.1.13. With Theorem 5.1.11, we have completed the basic construction in Itô's theory of Brownian stochastic integration, and, as time goes on, we will increasingly often replace the notation I_θ by the more conventional notation

(5.1.14)  ∫_0^t (θ(τ), dβ(τ))_{R^n} = I_θ(t).

Because it recognizes that Itô's theory is very like a classical integration theory, (5.1.14) is good notation. On the other hand, it can be misleading. Indeed, one has to keep in mind that, in reality, Itô's "integral" is, like the Fourier transform on R, defined only up to a set of measure 0 and via an L²-completion procedure. In addition, for the cautionary reasons discussed in §§ 3.3.2 and 3.3.3, it is a serious mistake to put too much credence in the notion that an Itô integral behaves like a Riemann–Stieltjes integral.

5.1.3. Stopping Stochastic Integrals and a Further Extension. The notation ∫_0^t (θ(τ), dβ(τ))_{R^n} for I_θ(t) should make one wonder to what extent it is true that, for ζ_1 ≤ ζ_2,

(5.1.15)  ∫_{t∧ζ_1}^{t∧ζ_2} (θ(τ), dβ(τ))_{R^n} ≡ I_θ(t ∧ ζ_2) − I_θ(t ∧ ζ_1) = ∫_0^t 1_{[ζ_1,ζ_2)}(τ)(θ(τ), dβ(τ))_{R^n}.

Of course, in order for the right hand side of the preceding to even make sense, it is necessary that 1_{[ζ_1,ζ_2)}θ be progressively measurable, which is more or less equivalent to insisting that ζ_1 and ζ_2 be stopping times.

5.1.16 Lemma. Given θ ∈ Θ²(P; R^n) and stopping times ζ_1 ≤ ζ_2, (5.1.15) holds P-almost surely. In fact, if α is a bounded, F_{ζ_1}-measurable function and α1_{[ζ_1,ζ_2)}θ(t, ω) equals α(ω)θ(t, ω) or 0 depending on whether t is or is not in [ζ_1(ω), ζ_2(ω)), then α1_{[ζ_1,ζ_2)}θ ∈ Θ²(P; R^n) and

α(I_θ(t ∧ ζ_2) − I_θ(t ∧ ζ_1)) = ∫_0^t (α1_{[ζ_1,ζ_2)}θ(τ), dβ(τ))_{R^n}.

Proof: Clearly, to check that θ̃ ≡ α1_{[ζ_1,ζ_2)}θ is in Θ²(P; R^n), it is enough to check that θ̃ is progressively measurable, and this is an elementary exercise. Next, set

Δ(t) = I_θ(t ∧ ζ_2) − I_θ(t ∧ ζ_1)  and  I(t) = ∫_0^t (θ̃(τ), dβ(τ))_{R^n}.


Then, by Doob's Stopping Time Theorem and (5.1.12),

E^P[|αΔ(t) − I(t)|²] = E^P[α²Δ(t)²] − 2E^P[αΔ(t)I(t)] + E^P[I(t)²]
= E^P[α² ∫_{t∧ζ_1}^{t∧ζ_2} |θ(τ)|² dτ] − 2E^P[α² ∫_{t∧ζ_1}^{t∧ζ_2} |θ(τ)|² dτ] + E^P[∫_0^t |θ̃(τ)|² dτ] = 0.

The preceding makes it possible to introduce the following extension of Itô's theory. Namely, define Θ²_loc(P; R^n) to be the space of progressively measurable θ : [0,∞) × Ω −→ R^n with the property that, P-almost surely, A_θ(T) ≡ ∫_0^T |θ(t)|² dt < ∞ for all T ∈ [0,∞). At the same time, define M_loc(P; R) to be the space of continuous local martingales. That is, M ∈ M_loc(P; R) if M : [0,∞) × Ω −→ R is a progressively measurable function, M( · , ω) is continuous for P-almost every ω, and there exists a sequence {ζ_k}_1^∞ of stopping times such that ζ_k ↗ ∞ P-almost surely and, for each k ∈ Z⁺, (M(t ∧ ζ_k), F_t, P) is a martingale.

5.1.17 Theorem. There is a unique linear map θ ∈ Θ²_loc(P; R^n) ↦ I_θ ∈ M_loc(P; R) with the property that, for any θ ∈ Θ²_loc(P; R^n) and stopping time ζ,

E^P[∫_0^ζ |θ(τ)|² dτ] < ∞  =⇒  I_θ(t ∧ ζ) = ∫_0^t 1_{[0,ζ)}(τ)(θ(τ), dβ(τ))_{R^n}.

Because it is completely consistent to do so, we will continue to use the notation ∫_0^t (θ(τ), dβ(τ))_{R^n} to denote I_θ, even when θ ∈ Θ²_loc(P; R^n).

Another direction in which it is useful to extend Itô's theory is to matrix-valued integrands. Namely, we have the following, which is a more or less trivial corollary of the preceding theorem.

5.1.18 Corollary. Let (β(t), F_t, P) be an R^{n′}-valued Brownian motion, and suppose that σ : [0,∞) × Ω −→ Hom(R^{n′}; R^n) is a progressively measurable function with the property that

(5.1.19)  ∫_0^T ‖σ(t)‖²_{H.S.} dt < ∞  P-almost surely for all T ∈ (0,∞).

Then there is a P-almost surely unique, R^n-valued, progressively measurable function t ↦ ∫_0^t σ(τ) dβ(τ) with the property that, P-almost surely,

(ξ, ∫_0^t σ(τ) dβ(τ))_{R^n} = ∫_0^t (σ(τ)^⊤ξ, dβ(τ))_{R^{n′}}  for all t ∈ [0,∞) & ξ ∈ R^n.


In particular, if ζ is a stopping time and

E^P[∫_0^ζ ‖σ(τ)‖²_{H.S.} dτ] < ∞,

then (∫_0^{t∧ζ} σ(τ) dβ(τ), F_t, P) is an R^n-valued martingale, and

(|∫_0^{t∧ζ} σ(τ) dβ(τ)|² − ∫_0^{t∧ζ} ‖σ(τ)‖²_{H.S.} dτ, F_t, P)

is an R-valued martingale.
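The Hilbert–Schmidt isometry in the corollary is easy to check numerically. The deterministic Hom(R²; R²)-valued σ below is an illustrative choice, not one made in the text.

```python
import numpy as np

# A sketch of Corollary 5.1.18: X(T) = int_0^T sigma d beta computed as a
# matrix Riemann sum, compared with E[|X(T)|^2] = int_0^T ||sigma(t)||_{H.S.}^2 dt.
rng = np.random.default_rng(3)
h, steps, paths = 5e-3, 200, 20000              # grid of [0, 1]
t = np.arange(steps) * h
sigma = np.zeros((steps, 2, 2))
sigma[:, 0, 0] = np.cos(t)
sigma[:, 1, 1] = np.sin(t)
sigma[:, 0, 1] = 1.0

dbeta = rng.normal(0.0, np.sqrt(h), size=(paths, steps, 2))
X_T = np.einsum('tij,ptj->pi', sigma, dbeta)    # sum_t sigma(t) (Delta beta)(t)

# Here ||sigma||_{H.S.}^2 = cos^2 + sin^2 + 1 = 2, so the integral is 2.0.
hs_integral = (np.sum(sigma ** 2, axis=(1, 2)) * h).sum()
print((np.linalg.norm(X_T, axis=1) ** 2).mean())  # approximately hs_integral
print(hs_integral)
```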

5.1.4. Exercises.

Exercise 5.1.20. We claimed, but did not prove, that Itô's integration theory coincides with Riemann–Stieltjes's when the integrand has locally bounded variation. To be more precise, let θ ∈ Θ²_loc(P; R^n) and assume that θ( · , ω) has locally bounded variation for each ω ∈ Ω. Show that one version of the random variable ω ↦ I_θ( · , ω) is given by the indefinite Riemann–Stieltjes integral of θ( · , ω) with respect to β( · , ω).

Hint: First show that there is no loss in generality in assuming that θ is uniformly bounded and that, for each ω, θ( · , ω) is right continuous at 0 and left continuous on (0,∞). Second, because θ( · , ω) is Riemann–Stieltjes integrable with respect to β( · , ω) on each compact interval, verify that the Riemann–Stieltjes integral of θ( · , ω) on [0, t] with respect to β( · , ω) can be computed as the limit of the Riemann sums

∑_{m=0}^∞ (θ(m2^{-N}, ω), Δ^N_m β(t, ω))_{R^n}.

Finally, use the boundedness of θ and the left continuity of θ( · , ω) to see that

lim_{N→∞} E^P[∫_0^t |θ(τ) − θ([τ]_N)|² dτ] = 0.

Exercise 5.1.21. One of the most important applications of the Paley–Wiener integral was made by Cameron and Martin. To explain their application, use, as in § 3.1.1, P_0 to denote the measure on C([0,∞); R^n) corresponding to the Lévy system (I, 0, 0). That is, P_0 is the standard Wiener measure on C([0,∞); R^n), and so (p(t), B_t, P_0) is a Brownian motion. Next, given η ∈ L²([0,∞); R^n), set h(t) = ∫_0^t η(τ) dτ, and let τ_h : C([0,∞); R^n) −→ C([0,∞); R^n) denote the translation map given by τ_h(p) = p + h. Then the theorem of Cameron and Martin states that the measure (τ_h)_*P_0 is equivalent to P_0 and that its Radon–Nikodym derivative R_h is given by the Cameron–Martin formula

(5.1.22)  R_h = exp(I_η(∞) − ½‖η‖²_{L²([0,∞);R^n)}).

Prove their theorem.

Hint: There are several ways in which to prove their theorem. Perhaps the one best suited to the development here is to first prove that P_0 can be characterized as the unique probability measure P on C([0,∞); R^n) with the property that

E^P[e^{I_θ(∞)}] = exp(½‖θ‖²_{L²([0,∞);R^n)})

for all piecewise constant, compactly supported θ : [0,∞) −→ R^n. (In this expression, I_θ(∞, p) denotes the Riemann–Stieltjes integral of θ with respect to p taken over the support of θ.) Knowing this, it is easy to check that R_h^{-1} d(τ_h)_*P_0 = dP_0 for compactly supported, piecewise constant η's. To complete the proof, one need only construct a sequence {η_k}_1^∞ of compactly supported, piecewise constant functions so that η_k −→ η in L²([0,∞); R^n). Since the corresponding functions h_k tend uniformly on compacts to h while the corresponding R_{h_k}'s tend in L¹(P_0; R) to R_h, the general case follows.
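The Cameron–Martin formula can be probed by Monte Carlo in the simplest case. Below, n = 1 and η = c·1_{[0,1]} are illustrative assumptions, so that h(t) = c(t ∧ 1), I_η(∞) = c·p(1), and R_h = exp(c·p(1) − c²/2); then E[R_h] = 1, and since the shifted path has mean h(1) = c at time 1, E[R_h·p(1)] = c.

```python
import numpy as np

# A Monte Carlo sketch of the Cameron–Martin formula (5.1.22) with
# eta = c * 1_{[0,1]}: the density R_h = exp(c * beta1 - c^2/2) has mean 1
# and reweights beta(1) to have mean h(1) = c.
rng = np.random.default_rng(4)
c, paths = 0.7, 200000
beta1 = rng.normal(0.0, 1.0, size=paths)         # p(1) under P_0
R = np.exp(c * beta1 - 0.5 * c ** 2)             # Cameron–Martin density

print(R.mean())                  # approximately 1
print((R * beta1).mean())        # approximately c = 0.7
```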

Exercise 5.1.23. Because we are dealing with processes which are almost surely continuous, most of the subtlety in the notion of a local martingale is absent. Indeed, given any progressively measurable M : [0,∞) × Ω −→ R with the property that M( · , ω) is continuous for P-almost every ω, show that (M(t), F_t, P) is a local martingale if and only if, for each k ∈ Z⁺, (M(t ∧ ζ_k), F_t, P) is a martingale, where ζ_k(ω) ≡ inf{t ≥ 0 : |M(t, ω)| ≥ k}. In this connection, show that a local martingale (M(t), F_t, P) is a martingale if ‖M‖_{[0,T]} ∈ L¹(P) for each T ∈ [0,∞). In particular, use this to see that an Itô stochastic integral I_θ(t) is a P-square integrable martingale if and only if

E^P[∫_0^T |θ(t)|² dt] < ∞  for all T ∈ [0,∞).

In a slightly different direction, show that a local martingale (M(t), F_t, P) is a supermartingale if it is uniformly bounded from below.


Exercise 5.1.24. Assume that σ : [0,∞) × Ω −→ Hom(R^{n′}; R^n) is a progressively measurable function for which (5.1.19) holds, set

X(t) = ∫_0^t σ(τ) dβ(τ)  and  A(t) = ∫_0^t σ(τ)σ(τ)^⊤ dτ,

and assume that, for some T ∈ (0,∞) and Λ(T) ∈ (0,∞), (ξ, A(T)ξ)_{R^n} ≤ Λ(T)|ξ|² for all ξ ∈ R^n. Show that, for each ε ∈ (0, 1),

(5.1.25)  E^P[exp(ε‖X‖²_{[0,T]} / (2Λ(T)))] ≤ e(1 − ε)^{−n/2}.

In particular, conclude that

(5.1.26)  P(‖X‖_{[0,T]} ≥ R) ≤ e(1 − ε)^{−n/2} e^{−εR²/(2Λ(T))}  for all R > 0,

and

E^P[exp(‖X‖²_{[0,T]} / (2nΛ(T)))] ≤ e^{3/2}  for all n ≥ 1.

Hint: Given ξ ∈ R^n, set

X_ξ(t) = (ξ, X(t))_{R^n}  and  E(t, ξ) = exp(X_ξ(t) − ½(ξ, A(t)ξ)_{R^n}),

and show, using the last part of Theorem 5.1.11 and Doob's inequality, that, for each q ∈ (1,∞),

E^P[sup_{t∈[0,T]} e^{q(ξ,X(t))_{R^n}}] ≤ e^{qΛ(T)|ξ|²/2} E^P[‖E( · , ξ)‖^q_{[0,T]}]
≤ e^{qΛ(T)|ξ|²/2} (q/(q−1))^q E^P[E(T, ξ)^q]
≤ e^{q²Λ(T)|ξ|²/2} (q/(q−1))^q E^P[E(T, qξ)] = e^{q²Λ(T)|ξ|²/2} (q/(q−1))^q.

Next, multiply the preceding through by (2πτ)^{−n/2} e^{−|ξ|²/(2τ)}, where τ = ε/(q²Λ(T)), and integrate with respect to ξ over R^n. Finally, let q ↗ ∞.

Exercise 5.1.27. In this exercise we will develop some of the intriguing relations between Itô stochastic integrals and Hermite polynomials. These relations reflect the residual Gaussian characteristics which stochastic integrals inherit from their Brownian forebears. Deeper examples of these relations will be investigated in Exercise 6.3.10.


(i) Given a ≥ 0, define H_m(x, a) for x ∈ R so that the identity

e^{λx − λ²a/2} = ∑_{m=0}^∞ (λ^m/m!) H_m(x, a),  λ ∈ R,

holds. Show that H_m(x, 0) = x^m and, for a > 0, H_m(x, a) = a^{m/2} H_m(a^{−1/2}x), where H_m(x) ≡ H_m(x, 1). In addition, show that

H_m(x) = (−1)^m e^{x²/2} ∂_x^m e^{−x²/2},

and conclude that the H_m's can be generated inductively from H_0(x) = 1 and H_m = (x − ∂_x)H_{m−1}. In particular, use this to conclude that H_m is an mth order polynomial with the properties that: the coefficient of x^m is 1, the coefficient of x^ℓ is 0 unless ℓ has the same parity as m, and the constant term in H_{2m} is (−1)^m (2m)!/(2^m m!).
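The inductive generation H_m = (x − ∂_x)H_{m−1} in (i) can be carried out mechanically; a small sketch, with polynomials stored as ascending coefficient arrays:

```python
import numpy as np

# Generate H_m from H_0 = 1 by H_k = (x - d/dx) H_{k-1}, as in part (i).
def hermite(m):
    h = np.array([1.0])                          # H_0 = 1
    for _ in range(m):
        xh = np.concatenate([[0.0], h])          # multiplication by x
        dh = np.polynomial.polynomial.polyder(h) # d/dx
        dh = np.concatenate([dh, np.zeros(xh.size - dh.size)])
        h = xh - dh                              # H_k = (x - d/dx) H_{k-1}
    return h

print(hermite(2))   # x^2 - 1:  coefficients [-1, 0, 1]
print(hermite(3))   # x^3 - 3x: coefficients [0, -3, 0, 1]
print(hermite(4))   # constant term 3 = (-1)^2 (2*2)! / (2^2 * 2!)
```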

(ii) Assume that θ ∈ Θ²(P; R^n) with A_θ(T) uniformly bounded, and show that e^{|I_θ(T)|} is P-integrable. Use this along with the last part of Theorem 5.1.11 to justify

E^P[H_m(I_θ(T), A_θ(T))] = (d^m/dλ^m) E^P[E_{λθ}(T)] |_{λ=0} = 0

for all m ≥ 1.

(iii) By combining (i) and (ii), show that there exist universal constants B_{2m} for m ∈ Z⁺ such that

B_{2m}^{-1} E^P[I_θ(T)^{2m}] ≤ E^P[A_θ(T)^m] ≤ B_{2m} E^P[I_θ(T)^{2m}],

which is a primitive version of Burkholder's inequality. Finally, check that these inequalities continue to hold for any θ ∈ Θ²_loc(P; R^n), not just those for which A_θ(T) is uniformly bounded.
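In the special case of deterministic θ (an assumption made only for this sketch), I_θ(T) is N(0, A_θ(T)) by Theorem 5.1.5, so E[I_θ(T)^{2m}] = (2m−1)!! A_θ(T)^m and the comparability in (iii) can be seen directly:

```python
import numpy as np
from math import factorial

# Monte Carlo check: for deterministic theta, I_theta(T) ~ N(0, A_theta(T)),
# so the even moments are (2m-1)!! * A^m, making the two-sided bound explicit.
rng = np.random.default_rng(5)
A = 0.8                                          # A_theta(T) = int_0^T |theta|^2 dt
I = rng.normal(0.0, np.sqrt(A), size=1_000_000)  # samples of I_theta(T)
for m in (1, 2, 3):
    double_fact = factorial(2 * m) // (2 ** m * factorial(m))   # (2m-1)!!
    print(m, (I ** (2 * m)).mean(), double_fact * A ** m)
```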

5.2 Itô's Integral Applied to Itô's Construction Method

Because our motivation for introducing stochastic integration was the desire to understand equations like (3.1.22), it seems reasonable to ask whether the theory developed in § 5.1 does in fact do anything to increase our understanding. Of course, because we have been dealing with nothing but Brownian stochastic integrals, we will have to restrict our attention to processes for which the Lévy measure is 0.

5.2.1. Basic Existence and Uniqueness Result for S.D.E.'s. Let σ : [0,∞) × Ω × R^n −→ Hom(R^{n′}; R^n) and b : [0,∞) × Ω × R^n −→ R^n be functions with the properties that, for each x ∈ R^n, (t, ω) ↦ σ(t, x, ω) and (t, ω) ↦ b(t, x, ω) are progressively measurable, (t, ω) ↦ σ(t, 0, ω) and (t, ω) ↦ b(t, 0, ω) are bounded, and

(5.2.1)  sup_{x_1≠x_2} sup_{(t,ω)} (‖σ(t, x_2, ω) − σ(t, x_1, ω)‖_{H.S.} / |x_2 − x_1|) ∨ (|b(t, x_2, ω) − b(t, x_1, ω)| / |x_2 − x_1|) < ∞.

5.2.2 Theorem. Given functions σ and b satisfying (5.2.1) and an R^{n′}-valued Brownian motion (β(t), F_t, P), there exists for each x ∈ R^n a P-almost surely unique, R^n-valued, progressively measurable function X( · , x) which solves⁵ (cf. Corollary 5.1.18)

(5.2.3)  X(t, x) = x + ∫_0^t σ(τ, X(τ, x)) dβ(τ) + ∫_0^t b(τ, X(τ, x)) dτ,  t ≥ 0.

Moreover, there exists a C ∈ (0,∞), depending only on

sup_{x∈R^n} sup_{(t,ω)} (‖σ(t, x, ω)‖_{H.S.} ∨ |b(t, x, ω)|) / (1 + |x|),

such that, for all T ∈ (0,∞),

E^P[‖X( · , x)‖²_{[0,T]}] ≤ C(1 + |x|²)e^{CT²}

and

E^P[|X(t, x) − X(s, x)|²] ≤ C(1 + |x|²)e^{CT²}(t − s),  0 ≤ s < t ≤ T.

Proof: Set X_0(t, x) ≡ x. Assuming that X_N( · , x) has been defined and that t ↦ σ(t, X_N(t, x)) satisfies (5.1.19), set (cf. Corollary 5.1.18)

(*)   X_{N+1}(t, x) = x + ∫_0^t σ(τ, X_N(τ, x)) dβ(τ) + ∫_0^t b(τ, X_N(τ, x)) dτ,

and observe that t ↦ σ(t, X_{N+1}(t, x)) will again satisfy (5.1.19). Hence, by induction on N ≥ 0, we can produce the sequence {X_N( · , x) : N ≥ 0} so

5 Just in case it is not clear, the condition that X( · , x) satisfy the equation implicitly contains the condition that X( · , x) is P-almost surely continuous. In particular, this guarantees that the right hand side of (5.2.3) makes sense.


that, for each N ∈ N, t ↦ σ(t, X_N(t, x)) satisfies (5.1.19) and (*) holds.

Moreover, by the last part of Corollary 5.1.18 plus Doob’s Inequality,

E^P[ ‖X_1( · , x) − X_0( · , x)‖²_{[0,T]} ] ≤ 8 E^P[ ∫_0^T ‖σ(τ, x)‖²_{H.S.} dτ ] + 2T E^P[ ∫_0^T |b(τ, x)|² dτ ] < ∞

and, for any N ≥ 1:

E^P[ ‖X_{N+1}( · , x) − X_N( · , x)‖²_{[0,T]} ]
≤ 8 E^P[ ∫_0^T ‖σ(τ, X_N(τ, x)) − σ(τ, X_{N−1}(τ, x))‖²_{H.S.} dτ ]
+ 2T E^P[ ∫_0^T |b(τ, X_N(τ, x)) − b(τ, X_{N−1}(τ, x))|² dτ ].

Thus, by (5.2.1), there is a K < ∞ such that

E^P[ ‖X_{N+1}( · , x) − X_N( · , x)‖²_{[0,T]} ] ≤ K (1 + T) ∫_0^T E^P[ ‖X_N( · , x) − X_{N−1}( · , x)‖²_{[0,t]} ] dt

for all N ≥ 1; and so, by induction, we see that, for each T ∈ (0,∞),

E^P[ ‖X_{N+1}( · , x) − X_N( · , x)‖²_{[0,T]} ] ≤ ( K^N (1 + T)^N T^N / N! ) E^P[ ‖X_1( · , x) − X_0( · , x)‖²_{[0,T]} ].

From the preceding it is clear that, for each T ∈ (0,∞),

lim_{N→∞} sup_{N′>N} E^P[ ‖X_{N′}( · , x) − X_N( · , x)‖²_{[0,T]} ] = 0.

Hence, we have now verified the existence statement.

To prove the uniqueness assertion, suppose that X( · , x) and X′( · , x) are two solutions, and note that, for each k ≥ 1,

E^P[ ‖X′( · , x) − X( · , x)‖²_{[0,T∧ζ_k]} ] ≤ K (1 + T) ∫_0^T E^P[ ‖X′( · , x) − X( · , x)‖²_{[0,t∧ζ_k]} ] dt,


where ζ_k ≡ inf{t ≥ 0 : |X(t, x)| ∨ |X′(t, x)| ≥ k}. But (e.g., by Gronwall's inequality) this means that

E^P[ ‖X′( · , x) − X( · , x)‖²_{[0,T∧ζ_k]} ] = 0,

and so we obtain the desired conclusion after letting k ↗ ∞.

Turning to the asserted estimates, note that

E^P[ ‖X( · , x)‖²_{[0,T]} ] ≤ 3|x|² + 12 E^P[ ∫_0^T ‖σ(t, X(t, x))‖²_{H.S.} dt ] + 3T E^P[ ∫_0^T |b(t, X(t, x))|² dt ],

and so there exists a K, with the required dependence, such that

E^P[ 1 + ‖X( · , x)‖²_{[0,T]} ] ≤ 3|x|² + K (1 + T) ∫_0^T E^P[ 1 + ‖X( · , x)‖²_{[0,t]} ] dt.

Hence the first estimate follows from Gronwall's inequality (2.2.12). As for the second estimate, assume that 0 ≤ s < t ≤ T, and use

E^P[ |X(t, x) − X(s, x)|² ] ≤ 2 E^P[ ∫_s^t ‖σ(τ, X(τ, x))‖²_{H.S.} dτ ] + 2T E^P[ ∫_s^t |b(τ, X(τ, x))|² dτ ]

together with the first estimate to arrive at the second estimate.
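The successive-approximation scheme (*) in the proof can be watched in action numerically. The sketch below is an illustration only, not part of the theory: it assumes a scalar equation with the hypothetical Lipschitz coefficients σ(x) = sin x and b(x) = −x, fixes one Brownian path on a grid, and replaces each stochastic integral by a left-endpoint Riemann sum. The sup-norm gap between successive iterates collapses, in line with the K^N(1+T)^N T^N/N! bound.

```python
import numpy as np

def picard_iterates(x0, T=1.0, n_steps=2000, n_iter=15, seed=0):
    """Successive approximations X_{N+1} = x + int sigma(X_N) dbeta + int b(X_N) dtau,
    with the integrals replaced by left-endpoint sums on one fixed Brownian path."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    db = rng.normal(0.0, np.sqrt(dt), n_steps)  # Brownian increments on the grid
    sigma, b = np.sin, lambda x: -x             # hypothetical Lipschitz coefficients
    X = np.full(n_steps + 1, float(x0))         # X_0(t) = x
    gaps = []
    for _ in range(n_iter):
        X_new = np.empty_like(X)
        X_new[0] = x0
        X_new[1:] = x0 + np.cumsum(sigma(X[:-1]) * db + b(X[:-1]) * dt)
        gaps.append(np.max(np.abs(X_new - X)))  # sup-norm distance between iterates
        X = X_new
    return X, gaps

X, gaps = picard_iterates(1.0)
```

The factorial in the denominator of the contraction estimate is what makes the later gaps shrink so much faster than the early ones.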

5.2.4 Corollary. Assume that σ : R^n −→ Hom(R^{n′}; R^n) and b : R^n −→ R^n are continuous functions which satisfy

sup_{x1 ≠ x2} ( ‖σ(x2) − σ(x1)‖_{H.S.} ∨ |b(x2) − b(x1)| ) / |x2 − x1| < ∞.

Refer to the setting in § 3.1.2, and take M′ = 0 and F ≡ 0. Then (p′(t), B′_t, P_0) is an R^{n′}-valued Brownian motion, and the function p′ ↦ X( · , x, p′) ∈ C([0,∞); R^n) described in (G1) is the P_0-almost surely unique, {B′_t : t ≥ 0}-progressively measurable solution to

(5.2.5)   X(t, x) = x + ∫_0^t σ(X(τ, x)) dp′(τ) + ∫_0^t b(X(τ, x)) dτ,   t ≥ 0.


Proof: Let X( · , x) be the solution to (5.2.5), and define {X_N( · , x) : N ≥ 0} as in (H4) of § 3.1.2. Note that

X_N(t, x, p′) = x + ∫_0^t σ(X_N([τ]_N, x, p′)) dp′(τ) + ∫_0^t b(X_N([τ]_N, x, p′)) dτ.

Thus,

E^{P_0}[ ‖X( · , x) − X_N( · , x)‖²_{[0,t]} ]
≤ 16 E^{P_0}[ ∫_0^t ‖σ(X([τ]_N, x)) − σ(X_N([τ]_N, x))‖²_{H.S.} dτ ] + 4t E^{P_0}[ ∫_0^t |b(X([τ]_N, x)) − b(X_N([τ]_N, x))|² dτ ]
+ 16 E^{P_0}[ ∫_0^t ‖σ(X(τ, x)) − σ(X([τ]_N, x))‖²_{H.S.} dτ ] + 4t E^{P_0}[ ∫_0^t |b(X(τ, x)) − b(X([τ]_N, x))|² dτ ];

and so, by the last estimates in Theorem 5.2.2 and the Lipschitz estimates on σ and b, we see that, for each (T, x) ∈ (0,∞) × R^n, there is a K(T, x) < ∞ such that

E^{P_0}[ ‖X( · , x) − X_N( · , x)‖²_{[0,t]} ] ≤ K(T, x) 2^{−N} + K(T, x) ∫_0^t E^{P_0}[ ‖X( · , x) − X_N( · , x)‖²_{[0,τ]} ] dτ

for all t ∈ [0, T ]. Finally, apply Gronwall’s inequality to conclude that

lim_{N→∞} E^{P_0}[ ‖X( · , x) − X_N( · , x)‖²_{[0,t]} ] = 0.

Remark 5.2.6. In the literature, (5.2.5) is called the stochastic integral equation for a diffusion process with diffusion coefficient σσ^⊤ and drift coefficient b. Often such equations are written in differential notation:

(5.2.7)   dX(t, x) = σ(X(t, x)) dp′(t) + b(X(t, x)) dt   with X(0, x) = x,

in which case they are called a stochastic differential equation. It is important to recognize that the joint distribution of p′ ↦ (p′, X( · , x, p′)) under P_0 would be the same were we to replace the canonical Brownian motion (p′(t), B′_t, P_0) by any other R^{n′}-valued Brownian motion (β(t), F_t, P). Indeed, as both the construction in Theorem 5.2.2 and the one in § 3.1.2 make clear, this joint distribution depends only on the distribution of ω ↦ β( · , ω), which is the same no matter which realization of Brownian motion is used.
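The time-discretization from § 3.1.2 that the proof of Corollary 5.2.4 compares against (5.2.5) can also be sketched numerically: freeze the coefficients at the dyadic times [τ]_N and advance with the Brownian increment. This is only a hedged illustration with hypothetical scalar coefficients σ(x) = cos x and b(x) = −x/2; it runs that scheme on shared Brownian paths at several mesh sizes and watches the coarse solutions approach the finest one.

```python
import numpy as np

T, n_fine = 1.0, 2 ** 14
dt = T / n_fine
sigma, b = np.cos, lambda x: -0.5 * x       # hypothetical Lipschitz coefficients

def euler_endpoint(x0, db_fine, step):
    """Euler scheme freezing the coefficients at the left endpoint of each coarse
    interval (the analogue of the [tau]_N construction), driven by increments
    aggregated from a shared fine-grid Brownian path."""
    X = float(x0)
    for k in range(0, len(db_fine), step):
        db = db_fine[k:k + step].sum()      # Brownian increment over one coarse step
        X += sigma(X) * db + b(X) * step * dt
    return X

rng = np.random.default_rng(1)
steps = (2 ** 8, 2 ** 6, 2 ** 4, 2 ** 2)    # coarse meshes, finest last
sums = np.zeros(len(steps))
n_paths = 20
for _ in range(n_paths):
    db_fine = rng.normal(0.0, np.sqrt(dt), n_fine)
    ref = euler_endpoint(1.0, db_fine, 1)   # finest-grid benchmark on the same path
    for i, s in enumerate(steps):
        sums[i] += abs(euler_endpoint(1.0, db_fine, s) - ref)
errs = sums / n_paths                       # mean pathwise error per mesh
```

Because every run is driven by the same fine-grid path, the comparison mirrors the pathwise convergence statement at the end of the proof above, not merely convergence in distribution.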


5.2.2. Subordination. When solving a system of ordinary differential equations one can sometimes take advantage of an inherent lower triangularity in the system. That is, one may be able to arrange the equations in such a way that the system contains a subsystem which is autonomous on its own. In this case, the entire system can be solved by first solving the autonomous subsystem, plugging the solution into the remaining equations, and then solving the resulting, now time-dependent, equations.

The same device applies when solving stochastic differential equations, and is particularly interesting in the following situation. Let σ : R^n −→ Hom(R^{n′}; R^n) and b : R^n −→ R^n be as in Corollary 5.2.4. Suppose that n = k + ℓ, n′ = k′ + ℓ′, and

σ(z) = ( α(x)     0
           0    β(x, y) )   &   b(z) = ( v(x)
                                         w(x, y) )   for z = (x, y) ∈ R^k × R^ℓ,

where v : R^k −→ R^k, w : R^n −→ R^ℓ, α : R^k −→ Hom(R^{k′}; R^k) and β : R^n −→ Hom(R^{ℓ′}; R^ℓ). Next, define Σ : [0,∞) × R^ℓ × C([0,∞); R^k) −→ Hom(R^{ℓ′}; R^ℓ) and B : [0,∞) × R^ℓ × C([0,∞); R^k) −→ R^ℓ so that

Σ(t, y; p) ≡ β(p(t), y)   and   B(t, y; p) ≡ w(p(t), y).

Finally, write r′(t) = (p′(t), q′(t)), where p′(t) and q′(t) are, respectively, the orthogonal projections of r′(t) onto R^{k′} and R^{ℓ′} thought of as subspaces of R^{n′} = R^{k′} ⊕ R^{ℓ′}, and note that, when C([0,∞); R^{n′}) is identified with C([0,∞); R^{k′}) × C([0,∞); R^{ℓ′}), P_0 can be identified with P × Q, where P and Q are the standard Wiener measures on C([0,∞); R^{k′}) and C([0,∞); R^{ℓ′}), respectively.

Under these conditions, we want to go about solving (5.2.5) by first solving the equation

X(t, x, p′) = x + ∫_0^t α(X(τ, x, p′)) dp′(τ) + ∫_0^t v(X(τ, x, p′)) dτ,

relative to the Wiener measure P, then, for each p ∈ C([0,∞); R^k) and y ∈ R^ℓ, solving

Y(t, y, q′; p) = y + ∫_0^t Σ(τ, Y(τ, y, q′; p); p) dq′(τ) + ∫_0^t B(τ, Y(τ, y, q′; p); p) dτ


relative to Q, and finally showing that

(5.2.8)   t ↦ Z(t, z, (p′, q′)) = ( X(t, x, p′), Y(t, y, q′; X( · , x, p′)) )

is a solution to the original stochastic integral equation whose coefficients were σ and b.

The following theorem says that this subordination procedure works.

5.2.9 Theorem. Let r′ = (p′, q′) ↦ Z( · , z, r′) be defined for z = (x, y) as in (5.2.8). Then r′ ↦ Z( · , z, r′) is the P_0-almost surely unique solution to the stochastic integral equation

Z(t, z, r′) = z + ∫_0^t σ(Z(τ, z, r′)) dr′(τ) + ∫_0^t b(Z(τ, z, r′)) dτ,   t ≥ 0.

Hence, if Φ : C([0,∞); R^k) × C([0,∞); R^ℓ) −→ R is measurable and either bounded or non-negative, then

E^{P_0}[ Φ(Z( · , z, r′)) ] = ∫ ( ∫ Φ( X( · , x, p′); Y( · , y, q′; X( · , x, p′)) ) Q(dq′) ) P(dp′).

Proof: The proof of this result is really just an exercise in the use of Fubini's Theorem. Namely, let r′ ↦ Z̄( · , z, r′) be the P_0-almost surely unique solution to the stochastic integral equation

Z̄(t, z, r′) = z + ∫_0^t σ(Z̄(τ, z, r′)) dr′(τ) + ∫_0^t b(Z̄(τ, z, r′)) dτ,   t ≥ 0.

What we have to show is that Z̄( · , z, r′) = Z( · , z, r′) for P_0-almost every r′ ∈ C([0,∞); R^{n′}).

Let X̄(t, z, r′) and Ȳ(t, z, r′) be the orthogonal projections of Z̄(t, z, r′) onto R^k and R^ℓ. By Fubini's Theorem, we know that, for Q-almost every q′, the function p′ ↦ X̄( · , z, (p′, q′)) satisfies

X̄(t, z, (p′, q′)) = x + ∫_0^t α(X̄(τ, z, (p′, q′))) dp′(τ) + ∫_0^t v(X̄(τ, z, (p′, q′))) dτ,   t ≥ 0,

P-almost surely. Thus, by uniqueness, for Q-almost every q′, X̄( · , z, (p′, q′)) = X( · , x, p′) for P-almost every p′.

Starting from the preceding and making another application of Fubini's Theorem, we see that, for P-almost every p′, the function q′ ↦ Ȳ( · , z, (p′, q′)) satisfies

Ȳ(t, z, (p′, q′)) = y + ∫_0^t β( X(τ, x, p′), Ȳ(τ, z, (p′, q′)) ) dq′(τ) + ∫_0^t w( X(τ, x, p′), Ȳ(τ, z, (p′, q′)) ) dτ,   t ≥ 0,

Q-almost surely. Hence, by uniqueness, for P-almost every p′, Ȳ( · , z, (p′, q′)) = Y( · , y, q′; X( · , x, p′)) Q-almost surely. After combining these two and making yet another application of Fubini's Theorem, we have now shown that Z̄( · , z, r′) = Z( · , z, r′) for P_0-almost every r′.
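At the level of a time-discretization, the subordination procedure is exact rather than approximate: because the x-block update never consults y, one Euler pass over the joint system coincides step for step with the staged solve. The following minimal sketch uses hypothetical scalar blocks α, v, β, w (only the block structure comes from the text).

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 4096
dt = T / n
dp = rng.normal(0.0, np.sqrt(dt), n)   # increments of p' (drives the x-block)
dq = rng.normal(0.0, np.sqrt(dt), n)   # increments of q' (drives the y-block)

# hypothetical block coefficients: alpha, v depend on x alone; beta, w on (x, y)
alpha, v = np.cos, (lambda x: -x)
beta = lambda x, y: 1.0 + 0.1 * np.sin(x + y)
w = lambda x, y: -0.5 * y + 0.1 * x

# one Euler pass over the joint system Z = (X, Y)
X, Y = 1.0, 0.5
for k in range(n):
    X, Y = (X + alpha(X) * dp[k] + v(X) * dt,
            Y + beta(X, Y) * dq[k] + w(X, Y) * dt)  # both updates use the old (X, Y)

# stage 1: the autonomous X-subsystem on its own
X1, xs = 1.0, [1.0]
for k in range(n):
    X1 = X1 + alpha(X1) * dp[k] + v(X1) * dt
    xs.append(X1)

# stage 2: the Y-equation with the stage-1 path substituted in
Y2 = 0.5
for k in range(n):
    Y2 = Y2 + beta(xs[k], Y2) * dq[k] + w(xs[k], Y2) * dt

# the staged solve reproduces the joint solve (identical increments, identical updates)
```

The discrete identity mirrors the role Fubini's Theorem plays in the proof: the driving increments dp and dq enter the two blocks independently.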

5.2.3. Exercises.

Exercise 5.2.10. Here is an example which indicates how Theorem 5.2.9 gets applied. Namely, referring to the setting in that theorem, suppose that β and w are independent of y. That is, β(x, y) = β(x) and w(x, y) = w(x). Next, define V : [0,∞) × C([0,∞); R^k) −→ Hom(R^ℓ; R^ℓ) and B : [0,∞) × C([0,∞); R^k) −→ R^ℓ so that

V(t, p) = ∫_0^t ββ^⊤(p(τ)) dτ   and   B(t, p) = ∫_0^t w(p(τ)) dτ,

and let Γ_t(p, dη) denote the Gaussian measure on R^ℓ with mean B(t, p) and covariance V(t, p). Given a stopping time ζ : C([0,∞); R^{k′}) −→ [0,∞] and a bounded measurable f : R^n −→ R, show that

E^{P_0}[ f(Z(ζ(p′), z, r′)), ζ(p′) < ∞ ] = E^P[Φ, ζ < ∞]   where

Φ(p′) = ∫_{R^ℓ} f( X(ζ(p′), x, p′), y + η ) Γ_{ζ(p′)}( X( · , x, p′), dη )   for ζ(p′) < ∞.

5.3 Ito’s Formula

The jewel in the crown of Ito's stochastic integration theory is Ito's formula. Depending on one's point of view, his formula can be seen as the solution to any one of a variety of problems. From the point of view which comes out of the considerations in Chapters 3 and 4, especially (G2) in § 3.1.2, it gives us a representation for the martingales being discussed there. From a more general standpoint, Ito's formula is the fundamental theorem of the calculus for which his integration theory is the integral, and, as such, it is the identity on which nearly everything else relies.


In the following statement, (β(t), F_t, P) is an R^n-valued Brownian motion, and

(X) X : [0,∞) × Ω −→ R^k is progressively measurable and, for P-almost every ω, X( · , ω) is a continuous function of locally bounded variation;

(Y) Y : [0,∞) × Ω −→ R^ℓ is a progressively measurable function and, for each 1 ≤ j ≤ ℓ, the jth component Y_j of Y − Y(0) is the dβ(t)-Ito stochastic integral I_{θ_j} of some θ_j ∈ Θ²_{loc}(P; R^n);

(Z) Z(t) = (X(t), Y(t)).

With this notation, Ito's Formula is the formula given in the next theorem. Our proof is based on the technique introduced by Kunita and Watanabe in their now famous article [21].

5.3.1 Theorem. Given any F ∈ C^{1,2}(R^k × R^ℓ; R),

F(Z(t)) − F(Z(0)) = Σ_{i=1}^k ∫_0^t ∂_{x_i}F(Z(τ)) dX_i(τ) + Σ_{j=1}^ℓ ∫_0^t ∂_{y_j}F(Z(τ)) (θ_j(τ), dβ(τ))_{R^n}

+ ½ Σ_{j,j′=1}^ℓ ∫_0^t (θ_j(τ), θ_{j′}(τ))_{R^n} ∂_{y_j}∂_{y_{j′}}F(Z(τ)) dτ,   t ≥ 0,

P-almost surely. Here, dX_i-integrals are taken in the sense of Riemann–Stieltjes and dβ-integrals are taken in the sense of Ito.

Proof: We begin by making several reductions. In the first place, without loss in generality, we may assume that Z( · , ω) is continuous for all ω ∈ Ω. Secondly, we may assume that all the X_i's have uniformly bounded variation and that all the θ_j's are elements of Θ²(P; R^n). Indeed, if this is not the case already, then we can introduce the stopping times

ζ_R = inf{ t ≥ 0 : Σ_{i=1}^k var_{[0,t]}(X_i) + Σ_{j=1}^ℓ ∫_0^t |θ_j(τ)|² dτ ≥ R },

replace Z(t) by Z(t ∧ ζ_R), and, at the end, let R ↗ ∞. Similarly, we may assume that F has compact support. Finally, under these assumptions, a standard mollification procedure makes it clear that we need only handle F's which are both smooth and compactly supported. Thus, from now on, we will be assuming that: Z( · , ω) is continuous for all ω, the X_i's have uniformly bounded variation, the θ_j's are elements of Θ²(P; R^n), and F is smooth and compactly supported. Finally, by continuity, it suffices to prove that the identity holds P-almost surely for each t ≥ 0.


Given N ∈ N and t ∈ (0,∞), define stopping times {ζ_m^N : m ≥ 0} so that ζ_0^N ≡ 0 and

ζ_{m+1}^N = inf{ τ ≥ ζ_m^N : |Z(τ) − Z(ζ_m^N)| ∨ max_{1≤j≤ℓ} ∫_{ζ_m^N}^τ |θ_j(σ)|² dσ ≥ 2^{−N} } ∧ t.

Next, set Z_m^N = (X_m^N, Y_m^N) = Z(ζ_m^N), and ∆_{m,j}^N = Y_j(ζ_{m+1}^N) − Y_j(ζ_m^N). By continuity, we know that, for each ω, ζ_m^N(ω) = t for all but a finite number of m's. Hence,

F(Z(t)) − F(Z(0)) = Σ_{m=0}^∞ ( F(Z_{m+1}^N) − F(Z_m^N) ).

Furthermore, by the Fundamental Theorem of Calculus,

F(Z_{m+1}^N) − F(Z_m^N) = F(X_m^N, Y_{m+1}^N) − F(Z_m^N) + Σ_{i=1}^k ∫_{ζ_m^N}^{ζ_{m+1}^N} ∂_{x_i}F( X(τ), Y_{m+1}^N ) dX_i(τ),

and, by Taylor's Theorem,

F(X_m^N, Y_{m+1}^N) − F(Z_m^N) = Σ_{j=1}^ℓ ∂_{y_j}F(Z_m^N) ∆_{m,j}^N + ½ Σ_{j,j′=1}^ℓ ∂_{y_j}∂_{y_{j′}}F(Z_m^N) ∆_{m,j}^N ∆_{m,j′}^N + E_m^N,

where there exists a C_3 < ∞, depending only on the bound on the third order derivatives of F, such that

|E_m^N| ≤ C_3 2^{−N} Σ_{j=1}^ℓ (∆_{m,j}^N)².

Next, we use Lemma 5.1.16 to first write

∂_{y_j}F(Z_m^N) ∆_{m,j}^N = ∫_0^t ( ∂_{y_j}F(Z_m^N) 1_{[ζ_m^N, ζ_{m+1}^N)}(τ) θ_j(τ), dβ(τ) )_{R^n},

and then conclude that

F(Z(t)) − F(Z(0)) = Σ_{i=1}^k ∫_0^t F_i^N(τ) dX_i(τ) + Σ_{j=1}^ℓ ∫_0^t ( θ_j^N(τ), dβ(τ) )_{R^n}

+ ½ Σ_{m=0}^∞ Σ_{j,j′=1}^ℓ ∂_{y_j}∂_{y_{j′}}F(Z_m^N) ∆_{m,j}^N ∆_{m,j′}^N + Σ_{m=0}^∞ E_m^N,


where

F_i^N(τ) ≡ ∂_{x_i}F( X(τ), Y_{m+1}^N )   &   θ_j^N(τ) ≡ ∂_{y_j}F(Z_m^N) θ_j(τ)   for τ ∈ [ζ_m^N, ζ_{m+1}^N).

Because |E_m^N| ≤ C_3 2^{−N} Σ_{j=1}^ℓ (∆_{m,j}^N)² and

E^P[ Σ_{m=0}^∞ (∆_{m,j}^N)² ] = E^P[ ( Σ_{m=0}^∞ ∆_{m,j}^N )² ] = E^P[ (Y_j(t) − Y_j(0))² ],

we know that Σ_{m=0}^∞ E_m^N tends to 0 in L²(P; R) as N → ∞. At the same

time, it is clear that, if C_2 is a bound on the second derivatives of F, then

| ∫_0^t F_i^N(τ) dX_i(τ) − ∫_0^t ∂_{x_i}F(Z(τ)) dX_i(τ) | ≤ C_2 2^{−N} var_{[0,t]}(X_i)

and

E^P[ | ∫_0^t ( θ_j^N(τ), dβ(τ) )_{R^n} − ∫_0^t ∂_{y_j}F(Z(τ)) ( θ_j(τ), dβ(τ) )_{R^n} |² ]

= E^P[ ∫_0^t | θ_j^N(τ) − ∂_{y_j}F(Z(τ)) θ_j(τ) |² dτ ] ≤ C_2² 4^{−N} ‖θ_j‖²_{Θ²(P;R^n)}.

Hence, since it is obvious that

Σ_{m=0}^∞ ∫_{ζ_m^N}^{ζ_{m+1}^N} (θ_j(τ), θ_{j′}(τ))_{R^n} ∂_{y_j}∂_{y_{j′}}F(Z(ζ_m^N)) dτ −→ ∫_0^t (θ_j(τ), θ_{j′}(τ))_{R^n} ∂_{y_j}∂_{y_{j′}}F(Z(τ)) dτ,

all that remains is to show that, for all 1 ≤ j, j′ ≤ ℓ,

Σ_{m=0}^∞ ∂_{y_j}∂_{y_{j′}}F(Z_m^N) ( ∆_{m,j}^N ∆_{m,j′}^N − ∫_{ζ_m^N}^{ζ_{m+1}^N} (θ_j(τ), θ_{j′}(τ))_{R^n} dτ ) −→ 0

in P-measure. To this end, remark that

E^P[ ∆_{m,j}^N ∆_{m,j′}^N | F_{ζ_m^N} ] = E^P[ Y_j(ζ_{m+1}^N) Y_{j′}(ζ_{m+1}^N) − Y_j(ζ_m^N) Y_{j′}(ζ_m^N) | F_{ζ_m^N} ] = E^P[ ∫_{ζ_m^N}^{ζ_{m+1}^N} (θ_j(τ), θ_{j′}(τ))_{R^n} dτ | F_{ζ_m^N} ],


and conclude that the terms

S_{m,j,j′}^N ≡ ∂_{y_j}∂_{y_{j′}}F(Z_m^N) ( ∆_{m,j}^N ∆_{m,j′}^N − ∫_{ζ_m^N}^{ζ_{m+1}^N} (θ_j(τ), θ_{j′}(τ))_{R^n} dτ )

are orthogonal for different m's. Hence,

E^P[ ( Σ_{m=0}^∞ S_{m,j,j′}^N )² ] = Σ_{m=0}^∞ E^P[ (S_{m,j,j′}^N)² ].

At the same time,

(S_{m,j,j′}^N)² ≤ C_2² 4^{−N} ( (∆_{m,j}^N)² + (∆_{m,j′}^N)² + ∫_{ζ_m^N}^{ζ_{m+1}^N} ( |θ_j(τ)|² + |θ_{j′}(τ)|² ) dτ ).

Hence,

E^P[ ( Σ_{m=0}^∞ S_{m,j,j′}^N )² ] ≤ C_2² 4^{−N} E^P[ (Y_j(t) − Y_j(0))² + (Y_{j′}(t) − Y_{j′}(0))² + ∫_0^t ( |θ_j(τ)|² + |θ_{j′}(τ)|² ) dτ ] ≤ 2 C_2² 4^{−N} E^P[ |Y(t) − Y(0)|² ].

Remark 5.3.2. If one likes to think about the Fundamental Theorem of Calculus in terms of differentials, then the following is an appealing way to think about Ito's formula. In the first place, given a progressively measurable function α : [0,∞) × Ω −→ R which satisfies

P( ∫_0^t α(τ)² |θ_j(τ)|² dτ < ∞ ) = 1   for all t ≥ 0,

one should introduce the notation

∫_0^t α(τ) dY_j(τ) ≡ ∫_0^t ( α(τ) θ_j(τ), dβ(τ) )_{R^n}.


That is, in terms of "differentials," "dY_j(t) ≡ (θ_j(t), dβ(t))_{R^n}." At the same time, one should define

∫_0^t α(τ) dY_j(τ) dY_{j′}(τ) ≡ ∫_0^t α(τ) (θ_j(τ), θ_{j′}(τ))_{R^n} dτ

for progressively measurable α's satisfying

P( ∫_0^t |α(τ)| ( |θ_j(τ)|² + |θ_{j′}(τ)|² ) dτ < ∞ ) = 1   for all t ≥ 0.

In terms of "differentials," this means that we are taking "dY_j(t) dY_{j′}(t) = (θ_j(t), θ_{j′}(t))_{R^n} dt." Finally, define "dX_i(t) dX_{i′}(t) = 0 = dX_i(t) dY_j(t)" for all 1 ≤ i, i′ ≤ k and 1 ≤ j ≤ ℓ. With this notation, a differential version of Ito's formula becomes

(5.3.3)   dF(Z(t)) = ( grad_{Z(t)}F, dZ(t) )_{R^{k+ℓ}} + ½ ( dZ(t), Hess_{Z(t)}F dZ(t) )_{R^{k+ℓ}}.

Although it looks a little questionable, this way of writing Ito's formula has more than mnemonic value. In fact, it highlights the essential fact on which his formula rests. Namely, "dβ_m(t)" may be a "differential," but it is not, in a classical sense, an "infinitesimal." Indeed, there is abundant evidence that "|dβ_m(t)|" is of order "√dt."^6 Thus, it is only reasonable that, when dealing with Brownian paths, one will not get a differential relation in which the left hand side differs from the right hand side by o(dt) unless one uses a two-place Taylor's expansion. Furthermore, it is clear why "(dβ_m(t))² = dt." In order to figure out how "dβ_m dβ_{m′}" should be interpreted when m ≠ m′, remember that (β_m + β_{m′})/√2 and (β_m − β_{m′})/√2 are both Brownian motions. Hence,

"2 dβ_m(t) dβ_{m′}(t) = ( d[(β_m + β_{m′})/√2](t) )² − ( d[(β_m − β_{m′})/√2](t) )² = dt − dt = 0 dt."

In other words, "dβ_m(t) dβ_{m′}(t) = δ_{m,m′} dt," which now "explains" why the preceding works. Namely,

"dX_i(t) dX_{i′}(t) = 0 dt and dX_i(t) dY_j(t) = dX_i(t)(θ_j(t), dβ(t))_{R^n} = 0 dt"

because "dX_i(t) dX_{i′}(t)" and "dX_i(t) dβ(t)" are "o(dt)," and

"dY_j(t) dY_{j′}(t) = Σ_{m,m′} θ_j(t)_m θ_{j′}(t)_{m′} dβ_m(t) dβ_{m′}(t) = (θ_j(t), θ_{j′}(t))_{R^n} dt"

because "dβ_m(t) dβ_{m′}(t) = δ_{m,m′} dt."

6 Levy understood this point thoroughly and in fact made systematic use of the notation√dt.
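The heuristics "(dβ_m)² = dt" and "dβ_m dβ_{m′} = 0" for m ≠ m′ are easy to see on a grid, and the polarization step used in the remark is even an exact algebraic identity for the increments. A minimal numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
T, n = 1.0, 200_000
dt = T / n
db1 = rng.normal(0.0, np.sqrt(dt), n)   # increments of beta_m
db2 = rng.normal(0.0, np.sqrt(dt), n)   # increments of beta_{m'}, independent of db1

same = np.sum(db1 * db1)    # "(d beta_m)^2" accumulates to t
cross = np.sum(db1 * db2)   # "d beta_m d beta_{m'}" cancels to 0

# polarization, as in the remark: the two rescaled combinations are again
# Brownian increments, and their squared sums differ by twice the cross term
half_sum = np.sum(((db1 + db2) / np.sqrt(2)) ** 2)
half_diff = np.sum(((db1 - db2) / np.sqrt(2)) ** 2)
```

Here `same` concentrates near T while `cross` concentrates near 0, which is the discrete content of "dβ_m(t) dβ_{m′}(t) = δ_{m,m′} dt."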


5.3.1. Exercises.

Exercise 5.3.4. Given an R^n-valued Brownian motion (β(t), F_t, P) and a θ ∈ Θ²_{loc}(P; R^n), define E_θ as in (5.1.10).

(i) Show that E_θ is P-uniquely determined by the fact that it is progressively measurable and

(5.3.5)   dE_θ(t) = E_θ(t) (θ(t), dβ(t))_{R^n}   and   E_θ(0) = 1.

For this reason, E_θ is sometimes called the Ito exponential.

Hint: Checking that E_θ is a solution to (5.3.5) is an elementary application of Ito's formula. To see that it is the only solution, suppose that X is a second solution, and apply Ito's formula to see that d( X(t)/E_θ(t) ) = 0.

(ii) Show that (E_θ(t), F_t, P) is always a supermartingale and that it is a martingale if and only if E^P[E_θ(t)] = 1 for all t ≥ 0.

(iii) From now on, assume that

(5.3.6)   E^P[ e^{½ A_θ(T)} ] < ∞   for each T ≥ 0.

First observe that (I_θ(t), F_t, P) is a square integrable martingale, and conclude that, for every α ≥ 0, (e^{α I_θ(t)}, F_t, P) is a submartingale. In addition, show that

E^P[ e^{½ I_θ(T)} ] ≤ E^P[ e^{½ A_θ(T)} ]^{½} < ∞   for all T ≥ 0.

Hint: Write e^{½ I_θ} = E_θ^{½} e^{¼ A_θ}, and remember that E^P[E_θ(T)] ≤ 1.

(iv) Given λ ∈ (0, 1), determine p_λ ∈ (1,∞) so that λ p_λ (p_λ − λ)/(1 − λ²) = ½. For p ∈ [1, p_λ], set γ(λ, p) = λ p (p − λ)/(1 − λ²), note that E_{λθ}^{p²} = ( e^{γ(λ,p) I_θ} )^{1−λ²} E_{pθ}^{λ²}, and conclude that, for any stopping time ζ,

E^P[ E_{λθ}(T ∧ ζ)^{p²} ] ≤ E^P[ e^{½ I_θ(T)} ]^{2λp(p−λ)}.

By taking p = p_λ and applying part (iii), see that this provides sufficient uniform integrability to check that (E_{λθ}(t), F_t, P) is a martingale for each λ ∈ (0, 1).

(v) By taking p = 1 in the preceding, justify

1 = E^P[ E_{λθ}(T) ] ≤ E^P[ e^{½ I_θ(T)} ]^{2λ(1−λ)} E^P[ E_θ(T) ]^{λ²}

for all λ ∈ (0, 1). After letting λ ↗ 1, conclude first that E^P[E_θ(T)] = 1 and then that (E_θ(t), F_t, P) is a martingale. The fact that (5.3.6) implies (E_θ(t), F_t, P) is a martingale is known as Novikov's criterion.