
Lectures on Lévy Processes and Stochastic Calculus (Koc University)

Lecture 3: The Lévy-Itô Decomposition

David Applebaum

School of Mathematics and Statistics, University of Sheffield, UK

7th December 2011


Filtrations, Markov Processes and Martingales

We recall the probability space $(\Omega, \mathcal{F}, P)$ which underlies our investigations. $\mathcal{F}$ contains all possible events in $\Omega$. When we introduce the arrow of time, it is convenient to be able to consider only those events which can occur up to and including time $t$. We denote by $\mathcal{F}_t$ this sub-$\sigma$-algebra of $\mathcal{F}$. To be able to consider all time instants on an equal footing, we define a filtration to be an increasing family $(\mathcal{F}_t, t \ge 0)$ of sub-$\sigma$-algebras of $\mathcal{F}$, i.e.

$$0 \le s \le t < \infty \Rightarrow \mathcal{F}_s \subseteq \mathcal{F}_t.$$


A stochastic process $X = (X(t), t \ge 0)$ is adapted to the given filtration if each $X(t)$ is $\mathcal{F}_t$-measurable.

e.g. any process is adapted to its natural filtration,

$$\mathcal{F}^X_t = \sigma\{X(s);\ 0 \le s \le t\}.$$

An adapted process $X = (X(t), t \ge 0)$ is a Markov process if for all $f \in B_b(\mathbb{R}^d)$, $0 \le s \le t < \infty$,

$$E(f(X(t))\,|\,\mathcal{F}_s) = E(f(X(t))\,|\,X(s)) \quad \text{(a.s.)}. \tag{0.1}$$

(i.e. "past" and "future" are independent, given the present).

The transition probabilities of a Markov process are

$$p_{s,t}(x, A) = P(X(t) \in A\,|\,X(s) = x),$$

i.e. the probability that the process is in the Borel set $A$ at time $t$ given that it is at the point $x$ at the earlier time $s$.


Theorem

If $X$ is a Lévy process (adapted to its own natural filtration) wherein each $X(t)$ has law $q_t$, then it is a Markov process with transition probabilities $p_{s,t}(x, A) = q_{t-s}(A - x)$.

Proof. This essentially follows from the fact that $X(t) - X(s)$ is independent of $\mathcal{F}_s$ and has law $q_{t-s}$:

$$E(f(X(t))\,|\,\mathcal{F}_s) = E(f(X(s) + X(t) - X(s))\,|\,\mathcal{F}_s) = \int_{\mathbb{R}^d} f(X(s) + y)\, q_{t-s}(dy). \qquad \Box$$
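
For a concrete sanity check, here is a minimal Monte Carlo sketch (my own, not from the slides) illustrating the transition rule $p_{s,t}(x, A) = q_{t-s}(A - x)$ for the simplest Lévy process, standard Brownian motion: the conditional law of $X(t)$ given $X(s) = x$ should match the law of the increment shifted to the translated set $A - x$. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
s, t, x = 1.0, 2.0, 0.5          # earlier time, later time, starting point
A = (0.0, 1.0)                   # the Borel set A, here an interval
n = 200_000                      # Monte Carlo sample size

# Law of X(t) given X(s) = x: x plus an independent increment X(t) - X(s),
# which for Brownian motion is N(0, t - s).
samples = x + rng.normal(0.0, np.sqrt(t - s), size=n)
p_cond = np.mean((samples > A[0]) & (samples < A[1]))

# q_{t-s}(A - x): the increment law applied to the translated set.
increments = rng.normal(0.0, np.sqrt(t - s), size=n)
p_translated = np.mean((increments > A[0] - x) & (increments < A[1] - x))

print(p_cond, p_translated)      # the two estimates agree up to MC error
```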


Now let $X$ be an adapted process defined on a filtered probability space which also satisfies the integrability requirement $E(|X(t)|) < \infty$ for all $t \ge 0$. We say that it is a martingale if for all $0 \le s < t < \infty$,

$$E(X(t)\,|\,\mathcal{F}_s) = X(s) \quad \text{a.s.}$$

Note that if $X$ is a martingale, then the map $t \to E(X(t))$ is constant.


An adapted Lévy process with zero mean is a martingale (with respect to its natural filtration), since in this case, for $0 \le s \le t < \infty$ and using the convenient notation $E_s(\cdot) := E(\cdot\,|\,\mathcal{F}_s)$:

$$E_s(X(t)) = E_s(X(s) + X(t) - X(s)) = X(s) + E(X(t) - X(s)) = X(s).$$

Although there is no good reason why a generic Lévy process should be a martingale (or even have finite mean), there are some important examples:


e.g. the processes whose values at time $t$ are:

- $\sigma B(t)$, where $B(t)$ is a standard Brownian motion and $\sigma$ is an $r \times d$ matrix;
- $\tilde{N}(t)$, where $\tilde{N}$ is a compensated Poisson process with intensity $\lambda$.

Some important martingales associated to Lévy processes include:

- $\exp\{i(u, X(t)) - t\eta(u)\}$, where $u \in \mathbb{R}^d$ is fixed;
- $|\sigma B(t)|^2 - \mathrm{tr}(A)\,t$, where $A = \sigma^T\sigma$;
- $\tilde{N}(t)^2 - \lambda t$.
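
As a quick illustration (my own sketch, not part of the slides), one can check numerically that such processes have constant expectation, as any martingale must. For a rate-$\lambda$ Poisson process $N$, the compensated process is $\tilde{N}(t) = N(t) - \lambda t$, and both $\tilde{N}(t)$ and $\tilde{N}(t)^2 - \lambda t$ should have mean zero at every $t$.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n = 2.0, 500_000

for t in (0.5, 1.0, 3.0):
    N_t = rng.poisson(lam * t, size=n)   # N(t) ~ Poisson(lambda * t)
    comp = N_t - lam * t                 # compensated process at time t
    # Both sample means should be near 0 for every t (constant expectation).
    print(t, comp.mean(), (comp**2 - lam * t).mean())
```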


Càdlàg Paths

A function $f: \mathbb{R}^+ \to \mathbb{R}^d$ is càdlàg if it is continue à droite et limité à gauche, i.e. right continuous with left limits. Such a function has only jump discontinuities. Define $f(t-) = \lim_{s \uparrow t} f(s)$ and $\Delta f(t) = f(t) - f(t-)$. If $f$ is càdlàg, then the set $\{0 \le t \le T;\ \Delta f(t) \neq 0\}$ is at most countable.

If the filtration satisfies the "usual hypotheses" of right continuity and completion, then every Lévy process has a càdlàg modification which is itself a Lévy process.
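
As a side illustration (my own sketch, with made-up data), a càdlàg step function can be stored as its jump times plus post-jump levels; $f(t)$, the left limit $f(t-)$ and the jump $\Delta f(t)$ are then read off directly.

```python
import bisect

# A càdlàg step function: value levels[k] on [times[k], times[k+1]).
times = [0.0, 0.7, 1.5, 2.2]     # jump times (first entry is the start)
levels = [0.0, 1.0, -0.5, 2.0]   # value taken from each time onwards

def f(t):
    # Right-continuous evaluation: the level at the latest time <= t.
    return levels[bisect.bisect_right(times, t) - 1]

def f_left(t):
    # Left limit: the level just before t (a jump at t itself is excluded).
    return levels[bisect.bisect_left(times, t) - 1]

for t in (0.7, 1.0, 1.5):
    print(t, f(t), f_left(t), f(t) - f_left(t))   # f(t), f(t-), delta f(t)
```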


From now on, we will always make the following assumptions:

- $(\Omega, \mathcal{F}, P)$ will be a fixed probability space equipped with a filtration $(\mathcal{F}_t, t \ge 0)$ which satisfies the "usual hypotheses".
- Every Lévy process $X = (X(t), t \ge 0)$ will be assumed to be $\mathcal{F}_t$-adapted and to have càdlàg sample paths.
- $X(t) - X(s)$ is independent of $\mathcal{F}_s$ for all $0 \le s < t < \infty$.


The Jumps of a Lévy Process - Poisson Random Measures

The jump process $\Delta X = (\Delta X(t), t \ge 0)$ associated to a Lévy process is defined by

$$\Delta X(t) = X(t) - X(t-),$$

for each $t \ge 0$.

Theorem

If $N$ is a Lévy process which is increasing (a.s.) and is such that $(\Delta N(t), t \ge 0)$ takes values in $\{0, 1\}$, then $N$ is a Poisson process.

Proof. Define a sequence of stopping times recursively by $T_0 = 0$ and $T_n = \inf\{t > T_{n-1};\ N(t + T_{n-1}) - N(T_{n-1}) \neq 0\}$ for each $n \in \mathbb{N}$. It follows from (L2) that the sequence $(T_1, T_2 - T_1, \ldots, T_n - T_{n-1}, \ldots)$ is i.i.d.


By (L2) again, we have for each $s, t \ge 0$,

$$P(T_1 > s + t) = P(N(s) = 0,\ N(t + s) - N(s) = 0) = P(T_1 > s)\,P(T_1 > t).$$

From the fact that $N$ is increasing (a.s.), it follows easily that the map $t \to P(T_1 > t)$ is decreasing, and by a straightforward application of stochastic continuity (L3) we find that the map $t \to P(T_1 > t)$ is continuous at $t = 0$. Hence there exists $\lambda > 0$ such that $P(T_1 > t) = e^{-\lambda t}$ for each $t \ge 0$.


So $T_1$ has an exponential distribution with parameter $\lambda$, and

$$P(N(t) = 0) = P(T_1 > t) = e^{-\lambda t},$$

for each $t \ge 0$. Now assume as an inductive hypothesis that $P(N(t) = n) = e^{-\lambda t}\frac{(\lambda t)^n}{n!}$; then

$$P(N(t) = n + 1) = P(T_{n+2} > t,\ T_{n+1} \le t) = P(T_{n+2} > t) - P(T_{n+1} > t).$$

But $T_{n+1} = T_1 + (T_2 - T_1) + \cdots + (T_{n+1} - T_n)$ is the sum of $(n+1)$ i.i.d. exponential random variables, and so has a gamma distribution with density

$$f_{T_{n+1}}(s) = e^{-\lambda s}\frac{\lambda^{n+1} s^n}{n!} \quad \text{for } s > 0.$$

The required result follows on integration. $\Box$
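
The structure of this proof suggests a direct simulation (my own sketch, illustrative parameters): build $N(t)$ from i.i.d. exponential waiting times and compare the empirical law of $N(t)$ with the Poisson pmf $e^{-\lambda t}(\lambda t)^n/n!$.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(2)
lam, t, trials = 1.5, 2.0, 200_000

# N(t) = number of arrival times T_1 < T_2 < ... landing in [0, t],
# where the waiting times T_{k+1} - T_k are i.i.d. Exp(lambda).
waits = rng.exponential(1.0 / lam, size=(trials, 50))
arrival_times = np.cumsum(waits, axis=1)
counts = (arrival_times <= t).sum(axis=1)

for n in range(6):
    empirical = np.mean(counts == n)
    poisson_pmf = exp(-lam * t) * (lam * t) ** n / factorial(n)
    print(n, round(empirical, 4), round(poisson_pmf, 4))
```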


The following result shows that $\Delta X$ is not a straightforward process to analyse.

Lemma

If $X$ is a Lévy process, then for fixed $t > 0$, $\Delta X(t) = 0$ (a.s.).

Proof. Let $(t(n), n \in \mathbb{N})$ be a sequence in $\mathbb{R}^+$ with $t(n) \uparrow t$ as $n \to \infty$; then since $X$ has càdlàg paths, $\lim_{n\to\infty} X(t(n)) = X(t-)$. However, by (L3) the sequence $(X(t(n)), n \in \mathbb{N})$ converges in probability to $X(t)$, and so has a subsequence which converges almost surely to $X(t)$. The result follows by uniqueness of limits. $\Box$


Much of the analytic difficulty in manipulating Lévy processes arises from the fact that it is possible for them to have

$$\sum_{0 \le s \le t} |\Delta X(s)| = \infty \quad \text{a.s.},$$

and the way in which these difficulties are overcome exploits the fact that we always have

$$\sum_{0 \le s \le t} |\Delta X(s)|^2 < \infty \quad \text{a.s.}$$

We will gain more insight into these ideas as the discussion progresses.
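
A toy numerical analogy (mine, not from the lecture) makes the dichotomy concrete: if a path had jumps of sizes $1/n$ for $n = 1, 2, \ldots$ on $[0, 1]$, the first sum would behave like the divergent harmonic series, while the sum of squares converges (to $\pi^2/6$).

```python
# Toy illustration: jump sizes 1/n accumulate like the harmonic series,
# while their squares sum to a finite limit (pi^2/6 ~ 1.6449).
for N in (10, 1_000, 100_000):
    jumps = [1.0 / n for n in range(1, N + 1)]
    print(N, sum(jumps), sum(j * j for j in jumps))
```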


Rather than exploring $\Delta X$ itself further, we will find it more profitable to count jumps of specified size. More precisely, let $0 \le t < \infty$ and $A \in \mathcal{B}(\mathbb{R}^d - \{0\})$. Define

$$N(t, A) = \#\{0 \le s \le t;\ \Delta X(s) \in A\} = \sum_{0 \le s \le t} 1_A(\Delta X(s)).$$

Note that for each $\omega \in \Omega$, $t \ge 0$, the set function $A \to N(t, A)(\omega)$ is a counting measure on $\mathcal{B}(\mathbb{R}^d - \{0\})$ and hence

$$E(N(t, A)) = \int N(t, A)(\omega)\,dP(\omega)$$

is a Borel measure on $\mathcal{B}(\mathbb{R}^d - \{0\})$. We write $\mu(\cdot) = E(N(1, \cdot))$.
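
To make $N(t, A)$ concrete, here is a small simulation sketch (illustrative assumptions throughout): for a compound Poisson process with rate $\lambda$ and standard normal jump sizes, count the jumps falling in a set $A$ bounded away from $0$ and estimate $E(N(t, A))$, which in this toy model equals $t\lambda\,P(\xi \in A)$ for a single jump $\xi$.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)
lam, t, trials = 3.0, 1.0, 100_000
A = (0.5, 1.5)   # a set whose closure avoids 0, i.e. bounded below

counts = np.empty(trials)
for k in range(trials):
    sizes = rng.normal(size=rng.poisson(lam * t))        # jump sizes on [0, t]
    counts[k] = np.sum((sizes > A[0]) & (sizes < A[1]))  # N(t, A)

Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))         # standard normal cdf
print(counts.mean(), t * lam * (Phi(A[1]) - Phi(A[0])))  # ~ t * mu(A)
```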


We say that $A \in \mathcal{B}(\mathbb{R}^d - \{0\})$ is bounded below if $0 \notin \bar{A}$ (the closure of $A$).

Lemma

If $A$ is bounded below, then $N(t, A) < \infty$ (a.s.) for all $t \ge 0$.

Proof. Define a sequence of stopping times $(T^A_n, n \in \mathbb{N})$ by $T^A_1 = \inf\{t > 0;\ \Delta X(t) \in A\}$ and, for $n > 1$, $T^A_n = \inf\{t > T^A_{n-1};\ \Delta X(t) \in A\}$. Since $X$ has càdlàg paths, we have $T^A_1 > 0$ (a.s.) and $\lim_{n\to\infty} T^A_n = \infty$ (a.s.).

Indeed, suppose that $T^A_1 = 0$ with non-zero probability and let $N = \{\omega \in \Omega : T^A_1 \neq 0\}$. Assume that $\omega \in \Omega - N$. Then given any $u > 0$, we can find $0 < \delta, \delta' < u$ and $\epsilon > 0$ such that $|X(\delta)(\omega) - X(\delta')(\omega)| > \epsilon$, and this contradicts the (almost sure) right continuity of $X(\cdot)(\omega)$ at the origin.


Similarly, suppose that $\lim_{n\to\infty} T^A_n = T^A < \infty$ with non-zero probability, and define $M = \{\omega \in \Omega : \lim_{n\to\infty} T^A_n = \infty\}$. If $\omega \in \Omega - M$, then we obtain a contradiction with the fact that $X$ has a left limit (almost surely) at $T^A(\omega)$. Hence, for each $t \ge 0$,

$$N(t, A) = \sum_{n \in \mathbb{N}} 1_{\{T^A_n \le t\}} < \infty \quad \text{a.s.} \qquad \Box$$


Be aware that if $A$ fails to be bounded below, then this lemma may no longer hold, because of the accumulation of large numbers of small jumps. The following result should at least be plausible, given Theorem 2 and Lemma 4.

Theorem

1. If $A$ is bounded below, then $(N(t, A), t \ge 0)$ is a Poisson process with intensity $\mu(A)$.
2. If $A_1, \ldots, A_m \in \mathcal{B}(\mathbb{R}^d - \{0\})$ are disjoint, then the random variables $N(t, A_1), \ldots, N(t, A_m)$ are independent.

It follows immediately that $\mu(A) < \infty$ whenever $A$ is bounded below, hence the measure $\mu$ is $\sigma$-finite.


The main properties of $N$, which we will use extensively in the sequel, are summarised below:

1. For each $t > 0$, $\omega \in \Omega$, $N(t, \cdot)(\omega)$ is a counting measure on $\mathcal{B}(\mathbb{R}^d - \{0\})$.
2. For each $A$ bounded below, $(N(t, A), t \ge 0)$ is a Poisson process with intensity $\mu(A) = E(N(1, A))$.
3. The compensator $(\tilde{N}(t, A), t \ge 0)$, where $\tilde{N}(t, A) = N(t, A) - t\mu(A)$ for $A$ bounded below, is a martingale-valued measure; i.e. for fixed $A$ bounded below, $(\tilde{N}(t, A), t \ge 0)$ is a martingale.
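
Continuing the compound Poisson sketch above (same illustrative model, my own code), one can check numerically that the compensator $\tilde{N}(t, A) = N(t, A) - t\mu(A)$ has mean zero for every $t$, consistent with the martingale property.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)
lam, trials = 3.0, 100_000
A = (0.5, 1.5)
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
p_A = Phi(A[1]) - Phi(A[0])      # P(one jump lands in A) in this toy model
mu_A = lam * p_A                 # mu(A) = E N(1, A)

for t in (0.5, 1.0, 2.0):
    n_jumps = rng.poisson(lam * t, size=trials)
    # N(t, A) per trial: thin the jump count by the probability of landing in A.
    N_tA = rng.binomial(n_jumps, p_A)
    print(t, (N_tA - t * mu_A).mean())   # compensated mean should be near 0
```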


Poisson Integration

Let $f$ be a Borel measurable function from $\mathbb{R}^d$ to $\mathbb{R}^d$ and let $A$ be bounded below; then for each $t > 0$, $\omega \in \Omega$, we may define the Poisson integral of $f$ as a random finite sum by

$$\int_A f(x)\,N(t, dx)(\omega) := \sum_{x \in A} f(x)\,N(t, \{x\})(\omega).$$

Note that each $\int_A f(x)\,N(t, dx)$ is an $\mathbb{R}^d$-valued random variable and gives rise to a càdlàg stochastic process as we vary $t$. Now since $N(t, \{x\}) \neq 0 \Leftrightarrow \Delta X(u) = x$ for at least one $0 \le u \le t$, we have

$$\int_A f(x)\,N(t, dx) = \sum_{0 \le u \le t} f(\Delta X(u))\,1_A(\Delta X(u)). \tag{0.2}$$
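
Equation (0.2) is directly computable once the jumps of a path are known. A minimal sketch (illustrative model and names, as above): given the simulated jumps of a compound Poisson path, evaluate $\int_A f(x)\,N(t, dx)$ as the sum of $f$ over jumps landing in $A$.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, t = 3.0, 2.0
A = (0.5, 1.5)
f = lambda x: x**2                       # any Borel function works here

# Simulate one path's jumps on [0, t]: Poisson count, i.i.d. normal sizes.
jump_sizes = rng.normal(size=rng.poisson(lam * t))

# Poisson integral per equation (0.2): sum f over jumps that land in A.
in_A = (jump_sizes > A[0]) & (jump_sizes < A[1])
integral = np.sum(f(jump_sizes[in_A]))
print(len(jump_sizes), in_A.sum(), integral)
```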


In the sequel, we will sometimes use $\mu_A$ to denote the restriction to $A$ of the measure $\mu$. In the following theorem, Var stands for variance.

Theorem

Let $A$ be bounded below. Then:

1. $\left(\int_A f(x)\,N(t, dx),\ t \ge 0\right)$ is a compound Poisson process, with characteristic function

$$E\left(\exp\left\{i\left(u, \int_A f(x)\,N(t, dx)\right)\right\}\right) = \exp\left[t\int_{\mathbb{R}^d}(e^{i(u,x)} - 1)\,\mu_{f,A}(dx)\right]$$

for each $u \in \mathbb{R}^d$, where $\mu_{f,A}(B) := \mu(A \cap f^{-1}(B))$, for each $B \in \mathcal{B}(\mathbb{R}^d)$.

2. If $f \in L^1(A, \mu_A)$, then

$$E\left(\int_A f(x)\,N(t, dx)\right) = t\int_A f(x)\,\mu(dx).$$
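
Part 1 can be spot-checked numerically in a scalar toy model (my own sketch, not from the lecture): take the compound Poisson setup used above with $f(x) = x$, so that $\mu_{f,A}$ is simply $\mu$ restricted to $A$, and compare a Monte Carlo estimate of the left-hand side with the right-hand side evaluated by quadrature.

```python
import numpy as np

rng = np.random.default_rng(6)
lam, t, u, trials = 3.0, 1.0, 1.3, 100_000
A = (0.5, 1.5)

# Monte Carlo estimate of E exp{ i u * integral_A x N(t, dx) }.
Y = np.empty(trials)
for k in range(trials):
    sizes = rng.normal(size=rng.poisson(lam * t))   # jump sizes on [0, t]
    Y[k] = sizes[(sizes > A[0]) & (sizes < A[1])].sum()
lhs = np.mean(np.exp(1j * u * Y))

# Right-hand side: exp[ t * integral_A (e^{iux} - 1) mu(dx) ], where in this
# toy model mu(dx) = lam * (standard normal density) dx; midpoint quadrature.
dx = (A[1] - A[0]) / 4000
x = np.arange(A[0] + dx / 2, A[1], dx)
dens = lam * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
rhs = np.exp(t * np.sum((np.exp(1j * u * x) - 1) * dens) * dx)

print(lhs, rhs)   # should agree up to Monte Carlo error
```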


Theorem (continued)

3. If $f \in L^2(A, \mu_A)$, then

$$\mathrm{Var}\left(\left|\int_A f(x)\,N(t, dx)\right|\right) = t\int_A |f(x)|^2\,\mu(dx).$$
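
Parts 2 and 3 are also easy to check by simulation in the scalar toy model used above (illustrative sketch, my own code): the sample mean of $\int_A f\,dN$ should match $t\int_A f\,d\mu$ and its variance should match $t\int_A f^2\,d\mu$.

```python
import numpy as np

rng = np.random.default_rng(7)
lam, t, trials = 3.0, 1.0, 200_000
A = (0.5, 1.5)
f = lambda x: x**2

Y = np.empty(trials)
for k in range(trials):
    sizes = rng.normal(size=rng.poisson(lam * t))
    Y[k] = f(sizes[(sizes > A[0]) & (sizes < A[1])]).sum()

# Quadrature for t * int_A f dmu and t * int_A f^2 dmu,
# with mu(dx) = lam * (standard normal density) dx in this toy model.
dx = (A[1] - A[0]) / 4000
x = np.arange(A[0] + dx / 2, A[1], dx)
w = lam * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi) * dx
print(Y.mean(), t * np.sum(f(x) * w))      # Theorem 6 (2)
print(Y.var(), t * np.sum(f(x)**2 * w))    # Theorem 6 (3)
```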


Proof (part of it!). (1) For simplicity, we will prove this result in the case where $f \in L^1(A, \mu_A)$. First let $f$ be a simple function and write $f = \sum_{j=1}^n c_j 1_{A_j}$, where each $c_j \in \mathbb{R}^d$. We can assume, without loss of generality, that the $A_j$'s are disjoint Borel subsets of $A$.


By Theorem 5, we find that

$$E\left(\exp\left\{i\left(u, \int_A f(x)\,N(t, dx)\right)\right\}\right) = E\left(\exp\left\{i\left(u, \sum_{j=1}^n c_j N(t, A_j)\right)\right\}\right)$$
$$= \prod_{j=1}^n E\left(\exp\left\{i\left(u, c_j N(t, A_j)\right)\right\}\right) = \prod_{j=1}^n \exp\left\{t\left(e^{i(u, c_j)} - 1\right)\mu(A_j)\right\}$$
$$= \exp\left\{t\int_A (e^{i(u, f(x))} - 1)\,\mu(dx)\right\}.$$


Now for an arbitrary $f \in L^1(A, \mu_A)$, we can find a sequence of simple functions converging to $f$ in $L^1$, and hence a subsequence which converges to $f$ almost surely. Passing to the limit along this subsequence in the above yields the required result, via dominated convergence. (2) and (3) follow from (1) by differentiation. $\Box$


It follows from Theorem 6 (2) that a Poisson integral will fail to have a finite mean if $f \notin L^1(A, \mu)$.

For each $f \in L^1(A, \mu_A)$, $t \ge 0$, we define the compensated Poisson integral by

$$\int_A f(x)\,\tilde{N}(t, dx) = \int_A f(x)\,N(t, dx) - t\int_A f(x)\,\mu(dx).$$

A straightforward argument shows that $\left(\int_A f(x)\,\tilde{N}(t, dx),\ t \ge 0\right)$ is a martingale, and we will use this fact extensively in the sequel.


Note that by Theorem 6 (2) and (3), we can easily deduce the following two important facts:

$$E\left(\exp\left\{i\left(u, \int_A f(x)\,\tilde{N}(t, dx)\right)\right\}\right) = \exp\left\{t\int_{\mathbb{R}^d}(e^{i(u,x)} - 1 - i(u, x))\,\mu_{f,A}(dx)\right\}, \tag{0.3}$$

for each $u \in \mathbb{R}^d$, and, for $f \in L^2(A, \mu_A)$,

$$E\left(\left|\int_A f(x)\,\tilde{N}(t, dx)\right|^2\right) = t\int_A |f(x)|^2\,\mu(dx). \tag{0.4}$$


Processes of Finite Variation

We begin by introducing a useful class of functions. Let $\mathcal{P} = \{a = t_1 < t_2 < \cdots < t_n < t_{n+1} = b\}$ be a partition of the interval $[a, b]$ in $\mathbb{R}$, and define its mesh to be $\delta = \max_{1 \le i \le n}|t_{i+1} - t_i|$. We define the variation $\mathrm{Var}_{\mathcal{P}}(g)$ of a càdlàg mapping $g: [a, b] \to \mathbb{R}^d$ over the partition $\mathcal{P}$ by the prescription

$$\mathrm{Var}_{\mathcal{P}}(g) = \sum_{i=1}^n |g(t_{i+1}) - g(t_i)|.$$


If $V(g) = \sup_{\mathcal{P}} \mathrm{Var}_{\mathcal{P}}(g) < \infty$, we say that $g$ has finite variation on $[a, b]$. If $g$ is defined on the whole of $\mathbb{R}$ (or $\mathbb{R}^+$), it is said to have finite variation if it has finite variation on each compact interval.

It is a trivial observation that every non-decreasing $g$ is of finite variation. Conversely, if $g$ is of finite variation, then it can always be written as the difference of two non-decreasing functions; to see this, just write

$$g = \frac{V(g) + g}{2} - \frac{V(g) - g}{2},$$

where $V(g)(t)$ is the variation of $g$ on $[a, t]$.
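
The partition variation is immediate to compute. A short sketch (my own illustration): estimate $V(g)$ for $g(t) = \sin t$ on $[0, 2\pi]$ by refining uniform partitions; for a smooth function the variation is $\int |g'|$, here equal to 4.

```python
import numpy as np

def variation(g, a, b, n):
    # Var_P(g) over the uniform partition of [a, b] with n subintervals.
    t = np.linspace(a, b, n + 1)
    return np.sum(np.abs(np.diff(g(t))))

for n in (10, 100, 10_000):
    print(n, variation(np.sin, 0.0, 2 * np.pi, n))   # tends to V(g) = 4
```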


Functions of finite variation are important in integration, for suppose that we are given a function g which we are proposing as an integrator; then as a minimum we will want to be able to define the Stieltjes integral ∫_I f dg for all continuous functions f (where I is some finite interval). In fact a necessary and sufficient condition for obtaining such an integral as a limit of Riemann sums is that g has finite variation.
A stochastic process (X(t), t ≥ 0) is of finite variation if the paths (X(t)(ω), t ≥ 0) are of finite variation for almost all ω ∈ Ω.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 30 / 44
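
A sketch of the Riemann-sum construction just described; the integrand cos and the integrator g(t) = t² are hypothetical choices with a closed-form answer.

import numpy as np

def stieltjes_sum(f, g, partition):
    # Riemann-Stieltjes sum sum_i f(t_i)(g(t_{i+1}) - g(t_i)); for g of finite
    # variation this converges to the integral of f dg as the mesh shrinks.
    s = np.asarray(partition)
    return float(np.sum(f(s[:-1]) * np.diff(g(s))))

P = np.linspace(0.0, 1.0, 100_001)
print(stieltjes_sum(np.cos, lambda s: s ** 2, P))
# ≈ 2 ∫_0^1 s cos(s) ds = 2(cos 1 + sin 1 − 1) ≈ 0.7635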

The following is an important example for us.

Example (Poisson Integrals)

Let N be a Poisson random measure with intensity measure µ and let f : R^d → R^d be Borel measurable. For A bounded below, let Y = (Y(t), t ≥ 0) be given by Y(t) = ∫_A f(x) N(t,dx); then Y is of finite variation on [0,t] for each t ≥ 0. To see this, we observe that for all partitions P of [0,t], we have

Var_P(Y) ≤ Σ_{0≤s≤t} |f(∆X(s))| 1_A(∆X(s)) < ∞ a.s., (0.5)

where X(t) = ∫_A x N(t,dx), for each t ≥ 0.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 31 / 44
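
The bound (0.5) can be seen in simulation. The path t ↦ Y(t) is piecewise constant, jumping by f(∆X(s)) whenever X has a jump in A, so its variation on [0,t] is exactly Σ |f(∆X(s))| 1_A(∆X(s)). The rate and jump law below are hypothetical stand-ins for µ restricted to A.

import numpy as np

rng = np.random.default_rng(1)
t, rate = 5.0, 2.0                    # rate = mu(A), hypothetical
f = lambda x: x ** 2                  # any Borel measurable f will do

n = rng.poisson(rate * t)             # number of jumps landing in A up to time t
sizes = rng.uniform(1.0, 3.0, n)      # jump values, a hypothetical mu(. | A)

Y_t = f(sizes).sum()                  # Y(t) = ∫_A f(x) N(t,dx), a finite random sum
variation = np.abs(f(sizes)).sum()    # total variation of the path Y on [0, t]
print(Y_t, variation)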

In fact, a necessary and sufficient condition for a Lévy process to be of finite variation is that there is no Brownian part (i.e. A = 0 in the Lévy-Khintchine formula), and

∫_{|x|<1} |x| ν(dx) < ∞.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 32 / 44
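
A worked instance of this criterion (not from the slides): take the symmetric stable-type Lévy measure ν(dx) = |x|^{−1−α} dx on R \ {0}. Then ∫_{|x|<1} |x| ν(dx) = 2 ∫_0^1 x^{−α} dx, which is finite exactly when α < 1, as the truncated integrals below confirm.

import numpy as np

def small_jump_mass(alpha, eps):
    # ∫_{eps <= |x| < 1} |x| nu(dx) for nu(dx) = |x|^(-1-alpha) dx, in closed form
    if alpha == 1.0:
        return 2.0 * np.log(1.0 / eps)
    return 2.0 * (1.0 - eps ** (1.0 - alpha)) / (1.0 - alpha)

for alpha in (0.5, 1.5):
    print(alpha, [round(small_jump_mass(alpha, e), 1) for e in (1e-2, 1e-4, 1e-6)])
# alpha = 0.5: 3.6, 4.0, 4.0 -> converges to 4: finite variation
# alpha = 1.5: 36.0, 396.0, 3996.0 -> diverges like eps^(-1/2): infinite variation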

The Lévy-Itô Decomposition

This is the key result of this lecture.
First, note that for A bounded below, for each t ≥ 0,

∫_A x N(t,dx) = Σ_{0≤u≤t} ∆X(u) 1_A(∆X(u))

is the sum of all the jumps taking values in the set A up to the time t. Since the paths of X are càdlàg, this is clearly a finite random sum. In particular, ∫_{|x|≥1} x N(t,dx) is the sum of all jumps of size bigger than one. It is a compound Poisson process, has finite variation but may have no finite moments.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 33 / 44

On the other hand it can be shown that X(t) − ∫_{|x|≥1} x N(t,dx) is a Lévy process having finite moments to all orders.
Now let's turn our attention to the small jumps. We study compensated integrals, which we know are martingales. Introduce the notation

M(t,A) := ∫_A x Ñ(t,dx)

for t ≥ 0 and A bounded below. For each m ∈ N, let

B_m = { x ∈ R^d : 1/(m+1) < |x| ≤ 1/m },

and for each n ∈ N, let A_n = ∪_{m=1}^{n} B_m.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 34 / 44

Define

∫_{|x|<1} x Ñ(t,dx) := L²-lim_{n→∞} M(t,A_n),

which is a martingale. Moreover, on taking limits in (0.3), we get

E( exp i(u, ∫_{|x|<1} x Ñ(t,dx)) ) = exp( t ∫_{|x|<1} (e^{i(u,x)} − 1 − i(u,x)) µ(dx) ).

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 35 / 44
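
Why this L² limit exists can be seen from (0.4): E|M(t, A_n)|² = t ∫_{A_n} |x|² µ(dx), which stays bounded because every Lévy measure integrates |x|² near the origin. A sketch with the hypothetical choice µ(dx) = |x|^{−1−α} dx, where everything is available in closed form:

import numpy as np

t, alpha = 1.0, 1.5    # hypothetical stable-type intensity mu(dx) = |x|^(-1-alpha) dx

def second_moment(n):
    # E|M(t, A_n)|^2 = t ∫_{A_n} x^2 mu(dx) = 2t ∫_{1/(n+1)}^1 x^(1-alpha) dx, by (0.4)
    lo = 1.0 / (n + 1)
    return 2.0 * t * (1.0 - lo ** (2.0 - alpha)) / (2.0 - alpha)

for n in (1, 10, 100, 10_000):
    print(n, round(second_moment(n), 4))
# increases to the finite limit 2t/(2 - alpha) = 4t even though mu(A_n) -> infinity,
# which is exactly what makes the L^2 limit defining ∫_{|x|<1} x Ñ(t,dx) work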

Consider

B_A(t) = X(t) − bt − ∫_{|x|<1} x Ñ(t,dx) − ∫_{|x|≥1} x N(t,dx),

where b = E( X(1) − ∫_{|x|≥1} x N(1,dx) ). The process B_A is a centred martingale with continuous sample paths. With a little more work, we can show that Cov(B_A^i(t), B_A^j(t)) = A_{ij} t. Using Lévy's characterisation of Brownian motion (see later) we have that B_A is a Brownian motion with covariance A. Hence we have:

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 36 / 44

Theorem (The Lévy-Itô Decomposition)

If X is a Lévy process, then there exists b ∈ R^d, a Brownian motion B_A with covariance matrix A in R^d and an independent Poisson random measure N on R^+ × (R^d − {0}) such that, for each t ≥ 0,

X(t) = bt + B_A(t) + ∫_{|x|<1} x Ñ(t,dx) + ∫_{|x|≥1} x N(t,dx). (0.6)

Note that the three processes in this decomposition are all independent.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 37 / 44
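
The theorem is also a recipe for simulation: draw each of the three independent pieces of (0.6) separately. The sketch below does this for d = 1 with hypothetical parameters and a symmetric stable-type ν(dx) = |x|^{−1−α} dx; the small jumps are truncated below a cutoff ε rather than compensated exactly, and by symmetry the compensator ∫_{ε≤|x|<1} x ν(dx) vanishes.

import numpy as np

rng = np.random.default_rng(3)
T = 1.0
b, sigma = 0.1, 0.5        # hypothetical drift and Brownian coefficient (A = sigma^2)
alpha, eps = 0.5, 1e-3     # nu(dx) = |x|^(-1-alpha) dx, small jumps cut off below eps

def jump_sum(lo, hi, rate):
    # Poisson(rate*T) many jumps with |x| in [lo, hi): magnitudes by inverse
    # transform from nu restricted to the band, with independent random signs
    n = rng.poisson(rate * T)
    u = rng.uniform(0.0, 1.0, n)
    mag = (lo ** -alpha - u * (lo ** -alpha - hi ** -alpha)) ** (-1.0 / alpha)
    return (mag * rng.choice([-1.0, 1.0], n)).sum()

big = jump_sum(1.0, np.inf, 2.0 / alpha)                         # ∫_{|x|>=1} x N(T,dx)
small = jump_sum(eps, 1.0, (2.0 / alpha) * (eps ** -alpha - 1))  # truncated small jumps
X_T = b * T + sigma * rng.normal(0.0, np.sqrt(T)) + big + small
print(X_T)   # one draw of X(T), assembled piece by piece as in (0.6)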

An interesting by-product of the Lévy-Itô decomposition is the Lévy-Khintchine formula, which follows easily by independence in the Lévy-Itô decomposition:-

Corollary

If X is a Lévy process, then for each u ∈ R^d, t ≥ 0,

E(e^{i(u,X(t))}) = exp( t [ i(b,u) − (1/2)(u,Au) + ∫_{R^d−{0}} (e^{i(u,y)} − 1 − i(u,y) 1_B(y)) µ(dy) ] ), (0.7)

where B = {y ∈ R^d : |y| < 1}, so the intensity measure µ is the Lévy measure for X, and from now on we write µ as ν.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 38 / 44
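
A numerical sanity check of (0.7), not in the lecture: for a compound Poisson process with b = 0, A = 0 and ν = λ × N(0,1), the jump law is symmetric, so the truncation term integrates to zero and the right side of (0.7) collapses to exp(tλ(e^{−u²/2} − 1)). Comparing with the empirical characteristic function of simulated values of X(t):

import numpy as np

rng = np.random.default_rng(4)
t, lam, n, u = 1.0, 3.0, 100_000, 1.3   # hypothetical parameters

counts = rng.poisson(lam * t, n)
X = np.array([rng.normal(0.0, 1.0, k).sum() for k in counts])  # compound Poisson X(t)

empirical = np.exp(1j * u * X).mean()
levy_khintchine = np.exp(t * lam * (np.exp(-u ** 2 / 2.0) - 1.0))
print(empirical, levy_khintchine)   # both ≈ 0.18 (the CF is real here, by symmetry)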

The process ∫_{|x|<1} x Ñ(t,dx) is the compensated sum of small jumps. The compensation takes care of the analytic complications in the Lévy-Khintchine formula in a probabilistically pleasing way, since it is an L²-martingale.
The process ∫_{|x|≥1} x N(t,dx) describes the "large jumps" - it is a compound Poisson process, but may have no finite moments.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 39 / 44

A Lévy process has finite variation iff its Lévy-Itô decomposition takes the form

X(t) = γt + ∫_{x≠0} x N(t,dx) = γt + Σ_{0≤s≤t} ∆X(s),

where γ = b − ∫_{|x|<1} x ν(dx).

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 40 / 44

H. Geman, D. Madan and M. Yor have proposed a nice financial interpretation for the jump terms in the Lévy-Itô decomposition:- where the intensity measure is infinite, the stock price manifests "infinite activity" and this is the mathematical signature of the jitter arising from the interaction of pure supply shocks and pure demand shocks. On the other hand, where the intensity measure is finite, we have "finite activity", and this corresponds to sudden shocks that can cause unexpected movements in the market, such as a terrorist atrocity or a major earthquake.

If a pure jump Lévy process (no Brownian part) has finite activity thenit has finite variation. The converse is false.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 41 / 44

The first three terms on the rhs of (0.6) have finite moments to all orders, so if a Lévy process fails to have a moment, this is due entirely to the "large jumps"/"finite activity" part. In fact:

E(|X(t)|^n) < ∞ for all t > 0 if and only if ∫_{|x|≥1} |x|^n ν(dx) < ∞.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 42 / 44

A Lévy process is a martingale iff it is integrable and

b + ∫_{|x|≥1} x ν(dx) = 0.

A square-integrable Lévy process is a martingale iff it is centred, and then

X(t) = B_A(t) + ∫_{R^d−{0}} x Ñ(t,dx).

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 43 / 44
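
For instance (a hypothetical example, not from the slides), take a compound Poisson process with ν = λ × Exp(1), so all jumps are positive and ∫_{|x|≥1} x ν(dx) = 2λ/e. Choosing b = −2λ/e satisfies the martingale condition, which the simulated mean below reflects:

import numpy as np

rng = np.random.default_rng(5)
t, lam, n = 2.0, 1.5, 100_000
b = -2.0 * lam / np.e                 # b = -∫_{|x|>=1} x nu(dx) for nu = lam * Exp(1)

counts = rng.poisson(lam * t, n)
S = np.array([rng.exponential(1.0, k).sum() for k in counts])   # all jumps up to t
comp = t * lam * (1.0 - 2.0 / np.e)   # t ∫_{|x|<1} x nu(dx), the small-jump compensator
X_t = b * t + (S - comp)              # bt + ∫_{|x|<1} x Ñ(t,dx) + ∫_{|x|>=1} x N(t,dx)
print(X_t.mean())                     # ≈ 0, as the martingale condition requires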

Semimartingales

A stochastic process X is a semimartingale if it is an adapted process such that, for each t ≥ 0,

X(t) = X(0) + M(t) + C(t),

where M = (M(t), t ≥ 0) is a local martingale and C = (C(t), t ≥ 0) is an adapted process of finite variation. In particular:

Every Lévy process is a semimartingale.

To see this, use the Lévy-Itô decomposition to write

M(t) = B_A(t) + ∫_{|x|<1} x Ñ(t,dx) - a martingale,

C(t) = bt + ∫_{|x|≥1} x N(t,dx).

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 44 / 44
