Seminar on Stochastic Processes, 1990


Progress in Probability, Volume 24

Series Editors: Thomas Liggett, Charles Newman, Loren Pitt


Seminar on Stochastic Processes, 1990

E. Çınlar, Editor

P. J. Fitzsimmons, R. J. Williams, Managing Editors

Springer Science+Business Media, LLC 1991


E. Çınlar
Department of Civil Engineering and Operations Research
Princeton University
Princeton, NJ 08544 USA

P. J. Fitzsimmons, R. J. Williams (Managing Editors)
Department of Mathematics
University of California, San Diego
La Jolla, CA 92093 USA

ISBN 978-0-8176-3488-9 ISBN 978-1-4684-0562-0 (eBook) DOI 10.1007/978-1-4684-0562-0

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior permission of the copyright owner. Permission to photocopy for internal or personal use, or the internal or personal use of specific clients, is granted by Springer Science+Business Media, LLC, for libraries and other users registered with the Copyright Clearance Center (CCC), provided that the base fee of $0.00 per copy, plus $0.20 per page is paid directly to CCC, 21 Congress Street, Salem, MA 01970, U.S.A. Special requests should be addressed directly to Springer Science+Business Media, LLC.

3488-6/91 $0.00 + .20

Printed on acid-free paper.

©Springer Science+Business Media New York 1991. Originally published by Birkhäuser Boston in 1991. Softcover reprint of the hardcover 1st edition 1991.

ISBN 978-0-8176-3488-9

Camera-ready copy provided by the editors.

9 8 7 6 5 4 3 2 1


FOREWORD

The 1990 Seminar on Stochastic Processes was held at the University of British Columbia from May 10 through May 12, 1990. This was the tenth in a series of annual meetings which provide researchers with the opportunity to discuss current work on stochastic processes in an informal and enjoyable atmosphere. Previous seminars were held at Northwestern University, Princeton University, the University of Florida, the University of Virginia and the University of California, San Diego. Following the successful format of previous years, there were five invited lectures, delivered by M. Marcus, M. Yor, D. Nualart, M. Freidlin and L. C. G. Rogers, with the remainder of the time devoted to informal communications and workshops on current work and problems. The enthusiasm and interest of the participants created a lively and stimulating atmosphere for the seminar. A sample of the research discussed there is contained in this volume.

The 1990 Seminar was made possible by the support of the Natural Sciences and Engineering Research Council of Canada, the Southwest University Mathematics Society of British Columbia, and the University of British Columbia. To these entities and the organizers of this year's conference, Ed Perkins and John Walsh, we extend our thanks. Finally, we acknowledge the support and assistance of the staff at Birkhäuser Boston.

P. J. Fitzsimmons

R. J. Williams

La Jolla, 1990


LIST OF PARTICIPANTS

A. Al-Hussaini P. Greenwood R. Pyke
R. Banuelos J. Hawkes L.C.G. Rogers
R. Bass U. Haussmann J. Rosen
D. Bell P. Hsu T. Salisbury
R. Blumenthal P. Imkeller Y.C. Sheu
C. Burdzy O. Kallenberg C.T. Shih
R. Dalang F. Knight H. Sikic
D. Dawson T. McConnell R. Song
N. Dinculeanu P. McGill W. Suo
P. Doyle P. March A.S. Sznitman
E.B. Dynkin M. Marcus J. Taylor
R. Elliott J. Mitro E. Toby
S. Evans T. Mountford R. Tribe
N. Falkner D. Nualart Z. Vondracek
R. Feldman M. Penrose J.B. Walsh
P. Fitzsimmons E. Perkins J. Watkins
K. Fleischmann M. Perman S. Weinryb
M. Freidlin J. Pitman R. Williams
R. Getoor A. Pittenger M. Yor
J. Glover Z. Pop-Stojanovic B. Zangeneh
L. Gorostiza S. Port Z. Zhao


CONTENTS

A. A. BALKEMA: A note on Trotter's proof of the continuity of local time for Brownian motion, 1

A. A. BALKEMA and K. L. CHUNG: Paul Levy's way to his local time, 5

D. BELL: Transformations of measure on an infinite dimensional vector space, 15

J. K. BROOKS and N. DINCULEANU: Stochastic integration in Banach spaces, 27

D. A. DAWSON, K. FLEISCHMANN and S. ROELLY: Absolute continuity of the measure states in a branching model with catalysts, 117

R. J. ELLIOTT: Martingales associated with finite Markov chains, 161

S. N. EVANS: Equivalence and perpendicularity of local field Gaussian measures, 173

P. J. FITZSIMMONS: Skorokhod embedding by randomized hitting times, 183

J. GLOVER and R. SONG: Multiplicative symmetry groups of Markov processes, 193

P. IMKELLER: On the existence of occupation densities of stochastic integral processes via operator theory, 207

F. B. KNIGHT: Calculating the compensator: method and example, 241

M. B. MARCUS: Rate of growth of local times of strongly symmetric Markov processes, 253


E. PERKINS: On the continuity of measure-valued processes, 261

Z. R. POP-STOJANOVIC: A remark on regularity of excessive functions for certain diffusions, 269

L. C. G. ROGERS and J. B. WALSH: A(t, B_t) is not a semimartingale, 275

J. S. ROSEN: Self-intersections of stable processes in the plane: local times and limit theorems, 285

C. T. SHIH: On piecing together locally defined Markov processes, 321

B. Z. ZANGENEH: Measurability of the solution of a semilinear evolution equation, 335


A Note on Trotter's Proof of the Continuity of Local Time for Brownian Motion

A.A. BALKEMA

In his 1939 paper [1], P. Levy introduced the notion of local time for Brownian motion as the limit of the occupation time of the space interval (0, ε) blown up by a factor 1/ε:

(1) L_ε(t) = m{s ∈ (0, t] | 0 < B(s) < ε}/ε → L(t) as ε → 0+.

Here m is Lebesgue measure on R and B is Brownian motion on R started at 0. In this paper we give a simple proof of the a.s. continuity of local time based on a moment inequality for the occupation time of the Brownian excursion in [2] and the arguments of Trotter's 1958 paper [3].

In Balkema & Chung [4] the bound 6√c ε³ on the second moment of the occupation time of the space interval (0, ε) for the first excursion of duration exceeding c > 0 was used to prove relation (1). This bound is based on a general moment inequality in Theorem 9 of [2]. For the proof of the a.s. continuity of local time we need the bound 120√c ε⁷ on the fourth moment. The computation is similar and is omitted.
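Definition (1) is easy to probe numerically. The following sketch (Python with NumPy; the step size, strip width and sample counts are our own illustrative choices, not from the paper) estimates L_ε(1) on discretized Brownian paths and compares the mean over many paths with E L(1) = E|B(1)| = √(2/π), which is what Levy's identification of L predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 400, 20_000
dt = 1.0 / n_steps            # time horizon t = 1
eps = 0.05

# Brownian paths started at 0, sampled on a grid of mesh dt
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

# L_eps(1) = m{s in (0,1] : 0 < B(s) < eps} / eps, with Lebesgue measure
# approximated by dt times the number of grid points falling in the strip
L_eps = dt * np.count_nonzero((B > 0) & (B < eps), axis=1) / eps

est = L_eps.mean()
target = np.sqrt(2 / np.pi)   # E L(1) = E|B(1)|
print(est, target)
```

For ε of this size the agreement is only rough; shrinking eps together with dt exhibits the convergence in (1).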

As in [4] let S(c) denote the centered total occupation time, divided by ε, of the space interval (0, ε) for the first n(c) = [u/√(2πc)] positive excursions of duration > c. Here u > 0 is fixed and we shall let c > 0 tend to 0. It was shown in the above paper, see (2.3), that S(c) → L_ε(T_u) a.s. as c → 0, where u ↦ T_u denotes the inverse function to local time in zero. Standard computation of the fourth moment of a sum of i.i.d. centered random variables gives a bound Cε⁶(u + u²) on the 4th moment of S(c) for 0 < ε < 1. Fatou's lemma then yields, as in Lemma 2.2 of the paper above:

Lemma 1. E(L_ε(T_u) − u)⁴ ≤ Cε⁶(u + u²) for 0 < ε < 1.

The process L in (1) has continuous increasing unbounded sample functions. The inverse process is the Levy process T, which is a pure jump process. Note that L(t) =d |B(t)| and hence for u, r > 0

(2) P{T_u < r} = P{L(r) > u} = P{|B(1)| ≥ u/√r} ≤ e^(−u²/2r).

Lemma 2. The process u ↦ L_ε(T_u) − u is a martingale.

Proof. Observe that u ↦ L_ε(T_u) is a pure jump increasing Levy process. This follows from the Itô decomposition, but can also be deduced from the independence of the Brownian motion B₁(t) = B(T_u + t) and the stopped Brownian motion B(t ∧ T_u). The random variable L_ε(T_u) has finite expectation E L_ε(T_u) = cu, and c = 1 follows by letting u → ∞ in Lemma 1.

The process t ↦ L_ε(t) − L(t) is not a martingale, but the submartingale inequality holds at the times t = T_u: the jumps in the original process are replaced by continuous increasing functions in the new process. Lemma 1 gives for ε < 1

P{max_(t ≤ T_u) |L_ε(t) − L(t)| > α} ≤ Cε⁶(u + u²)/α⁴.

Let r ≥ 1. Relation (2) with u = 2r² then gives

(3) P{max_(t ≤ r) |L_ε(t) − L(t)| > α} ≤ e^(−2r³) + 6Cε⁶r⁴/α⁴.

The process L_ε defined in (1) and local time L are close if ε is small. The remainder of the argument follows Trotter's 1958 paper.


Levy [1] proved that for almost every realization of Brownian motion the occupation time F defined by

F(x, t) = m{s ≤ t | B(s) ≤ x}

is a continuous function on R × [0, ∞). For x = 0 the right hand partial derivative f(x, t) = ∂⁺F(x, t)/∂x exists a.s. as a continuous increasing function in t. (Indeed f(0, ·) = L(·) is local time in 0 for Brownian motion.) By spatial homogeneity this holds at each point x ∈ R. Let Δ denote the set of dyadic rationals k/2ⁿ. Since the set Δ is countable, almost every realization F of occupation time has the property that it is continuous on R × [0, ∞) and that the function f(x, ·) is continuous on [0, ∞) for each x ∈ Δ. Fix such a realization F and define f_n : R × [0, ∞) → R by

f_n(x, t) = f(x, t) if x = k/2ⁿ for some integer k,
f_n(x, t) = 2ⁿ(F((k+1)/2ⁿ, t) − F(k/2ⁿ, t)) for k < 2ⁿx < k + 1.

The function f_n is a discrete approximation to ∂F/∂x. Its discontinuities lie on the lines x = k/2ⁿ. The function d_n(x, t), defined as the size of the jump of f_n across the line x = k/2ⁿ nearest to x, measures the size of the discontinuities of f_n.

Proposition 3. Let t ↦ f(z, t) be a continuous function on [0, ∞) for each dyadic rational z = k/2ⁿ. Let F : R × [0, ∞) → R be continuous and define f_n and d_n as above. If there exist constants C_n > 0 with finite sum Σ C_n < ∞ such that

d_n(x, t) ≤ C_n on [−n, n] × [0, n] for all n

then ∂F/∂x exists and is continuous on R × [0, ∞).

Proof. As in Trotter [3] one proves:

a) f : Δ × [0, ∞) → R is uniformly continuous on bounded sets (and hence has a continuous extension f* on R × [0, ∞)),


b) f_n → f* uniformly on bounded sets,

c) ∂F/∂x = f* on R × [0, ∞).

Theorem 4. Occupation time F(x, t) for Brownian motion a.s. has a partial derivative with respect to x which is continuous on R × [0, ∞).

Proof. With ε = 2⁻ⁿ, α = n⁻² and r = n, inequality (3) gives

P_n = P{d_n > 2/n² at some point (x, t) ∈ [−n, n] × [0, n]}
≤ 2·(2n·2ⁿ + 1)·P{max_(t ≤ n) |L_ε(t) − L(t)| > 1/n²}
≤ 2·(2n·2ⁿ + 1)·(e^(−2n³) + 6C·2^(−6n)·n⁴·n⁸),

and hence Σ P_n < ∞. The first Borel-Cantelli lemma shows that the conditions of Proposition 3 are satisfied a.s. Therefore the conclusion holds a.s.

References

[1] P. LEVY, Sur certains processus stochastiques homogènes. Compositio Math. 7 (1939), 283-339.

[2] K.L. CHUNG, Excursions in Brownian motion. Arkiv för Mat. 14 (1976), 155-177.

[3] H. TROTTER, A property of Brownian motion paths. Illinois J. Math. 2 (1958), 425-433.

[4] A.A. BALKEMA & K.L. CHUNG, Paul Levy's way to his local time. In this volume.

A.A. Balkema

F.W.I., Universiteit van Amsterdam

Plantage Muidergracht 24

1018 TV Amsterdam, Holland


Paul Levy's Way to His Local Time

A.A. BALKEMA and K.L. CHUNG

0. Foreword by Chung

In his 1939 paper [1] Levy introduced the notion of local time for Brownian motion. He gave several equivalent definitions, and towards the end of that long paper he proved the following result. Let ε > 0, t > 0, B(0) = 0,

(0.1) L_ε(t) = m{s ∈ [0, t] | 0 < B(s) < ε}/ε

where B(t) is the Brownian motion in R and m is the Lebesgue measure. Then almost surely the limit below exists for all t > 0:

(0.2) lim_(ε→0) L_ε(t) = L(t).

This process L(·) is Levy's local time.

As I pointed out in my paper which was dedicated to the memory of Levy, [2; p.174], there is a mistake in the proof given in [1], in that the moments of occupation time for an excursion were confounded with something else, not specified. Apart from this mistake, which I was able to rectify in Theorem 9 of [2], Levy's arguments can (easily) be made rigorous by standard "bookkeeping". As any serious reader of Levy's work should know, this is quite usual with his intensely intuitive style of writing. Hence at the time when I wrote [2], I did not deem it necessary to reproduce the details. Nevertheless I scribbled a memorandum for my own file. Later, after I lectured on the subject in Amsterdam in 1975, I sent that memo to Balkema in the expectation that he would render it legible. This


valuable sheet of paper has apparently been lost. In my reminiscences of Levy

[3], spoken at the Ecole Polytechnique in June, 1987, I recounted his invention

of local time and the original proof of the theorem cited above. It struck me as

rather odd that although a supposedly historical account of this topic was given

in Volume 4 of Dellacherie-Meyer's encyclopaedic work [4], Levy's 1939 paper

was not even listed in the bibliography. This must be due to the failure of the

authors to realize that the contents of that paper were not entirely reproduced in

Levy's 1948 book [5]. Be that as it may, incredible events posterior to the Levy

conference in 1987 (see the Postscript in [3]) have convinced me that very few

people have read, much less understood, Levy's own way to his invention. I have

therefore asked Balkema to write a belated exposition based on my 1975 lectures on Brownian motion. Together with the results in my paper [2] on Brownian

excursions this forms the basis of the present exposition of Levy's ideas about

local time. Now I wonder who among the latter-day experts on local time will

have the curiosity (and humility) to read it?

1. Local time of the zero set of Brownian motion

One of the most striking results on Brownian motion is Levy's formula:

B =d |B| − L*

where B is Brownian motion and L* is the local time of |B| in zero, defined in terms of the zero set of B. Levy considered the pair (M − B, M) where M is the max process for Brownian motion:

M_t = max{B(s) | s ≤ t},

and proved that the process Y = M − B is distributed like the process |B|, using the, at that time not yet rigorously established, strong Markov property for Brownian motion. In one picture we have the continuous increasing process M and, dangling down from it, the process Y (distributed like |B|). Note that


M increases only on the zero set of Y. Problem: can one express the sample functions of the increasing process M in terms of the sample functions of the process Y?

Let us define

T_u = inf{t > 0 | M(t) > u},  u ≥ 0.

This is the right-continuous inverse process to M. Levy observed that it is a pure jump process with stationary independent increments. It has Levy measure given by ρ(y, ∞) = √(2/(πy)) on (0, ∞). There is a 1-1 correspondence between excursion intervals of Y and jumps of the Levy process T. Hence the number of excursions of Y in [0, T_u] of duration > c is equal to the number N = N_c(u) of jumps of T of height > c during the interval [0, u]. For a Levy process this number is Poisson distributed, with parameter uρ(c, ∞) = u√(2/(πc)) in our case. In fact if we keep u fixed then t ↦ N_(c(t))(u), with c(t) = 2/(πt²), is the standard cumulative Poisson process on [0, ∞) with intensity u. The strong law of large numbers (for exponential variables) implies

(1.1) N_c(u)/√(2/(πc)) → u a.s. as c = c(t) → 0.

Now vary u. The counting process N_c : [0, ∞) → {0, 1, ...} will satisfy (1.1) for all rational u ≥ 0, for all ω outside some null set Ω₀ in the underlying probability space. For these realizations we have weak convergence of monotone functions and hence uniform convergence on bounded subsets (since the limit function is continuous). In particular we have convergence for each u ≥ 0, also if u = M_t(ω) depends on ω. This proves:

Theorem 1.1 (Levy). Let B be a Brownian motion and let N*_c(t) denote the number of excursion intervals of length > c contained in [0, t]. Then

N*_c(t)/√(2/(πc)) → L*(t) a.s. as c → 0

for some process L* with continuous increasing sample paths, in the sense of weak convergence. Moreover (|B|, L*) =d (M − B, M).


Corollary. L* is unbounded a.s. and L*(0) = 0.

Note that local time L* has been defined in terms of the zero set Z = {t ≥ 0 | B(t) = 0}. We call this process L* the local time of the zero set of Brownian motion in order to distinguish it from the process L introduced in (0.2). The process L_ε in (0.1) depends on the behaviour of Brownian motion in the ε-interval (0, ε). For a discussion of local times for random sets see Kingman [6]. Here we only observe that one can construct another variant of local time in 0 by counting excursions of sup norm > c rather than excursions of duration > c. The Levy measure then is dy/y² rather than dy/√(2πy³). This latter procedure has the nice property that it is invariant under time change and hence works for any continuous local martingale.
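The normalization in Theorem 1.1 can be sanity-checked by simulation. The sketch below (Python/NumPy; the discretization, cutoff c and tolerances are our own choices, not from the paper) counts excursion intervals of duration > c for a random-walk approximation of B on [0, 1] and compares the mean of N*_c(1)/√(2/(πc)) with E L*(1), which by Theorem 1.1 equals E M(1) = E|B(1)| = √(2/π). The convergence in c is slow, so only rough agreement is checked.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 400, 20_000
dt = 1.0 / n_steps
c = 0.001                     # count only excursions of duration > c; need dt << c

norm = np.sqrt(2 / (np.pi * c))
vals = []
for _ in range(n_paths):
    B = np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))
    # approximate the zeros of B by the sign changes of the discretized path
    zeros = np.flatnonzero(B[:-1] * B[1:] <= 0) * dt
    durations = np.diff(zeros)           # lengths of the excursion intervals
    vals.append(np.count_nonzero(durations > c) / norm)

est = float(np.mean(vals))
target = np.sqrt(2 / np.pi)              # E L*(1) = E|B(1)|
print(est, target)
```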

The next result is essentially an alternative formulation of Theorem 1.1.

Lemma 1.2. Let u > 0, and let U_c be the upper endpoint of the K(c)th excursion of the Brownian motion B of duration > c. Assume that a.s. K(c) ~ u√(2/(πc)) as c → 0. Then U_c → T*_u a.s. as c → 0, where

(1.2) T*_u(ω) = inf{t > 0 | L*_t(ω) > u}.

Proof. The process u ↦ T*_u is a Levy process since L* =d M by Theorem 1.1. Hence it has no fixed discontinuities. Choose a sample point ω in the underlying probability space such that

1) the function L*_·(ω) in Theorem 1.1 is continuous, increasing, unbounded, and vanishes at t = 0,

2) the limit relation of Theorem 1.1 holds,

3) K(c)(ω) ~ u√(2/(πc)) as c → 0,

4) the function T*(ω) is continuous at the point u.

We omit the symbol ω in the expressions below. Let 0 < u₁ < u < u₂ and let N_i(c) denote the number of excursions of length > c in the interval [0, T*_(u_i)] for i = 1, 2. Theorem 1.1 gives the asymptotic relation N_i(c) ~ u_i√(2/(πc)) as c → 0.


Hence for all sufficiently small c we have the inequality N₁(c) < K(c) < N₂(c), and therefore T*_(u₁) < U_c < T*_(u₂). The continuity of the sample function T* at u then implies that U_c → T*_u.

This innocuous-looking lemma enables us to consider the S(c) in Section 2 with a constant n(c), rather than a random number, which would entail subtle considerations of the dependence between the sequence {ψ_n} and the process L*.

2. Local time as a limit of occupation time

In order to prove Theorem 1.1 using the occupation time of the interval (0, ε), ε → 0, rather than the number of excursions, one needs a bound on the second moment of the occupation time of the interval (0, ε) for the excursions. We begin with a simple but fundamental result.

Theorem 2.1. For fixed c > 0 the sequence of excursions of Brownian motion of duration exceeding c is i.i.d., provided the excursions are shifted so as to start at t = 0.

Proof. The upper endpoint T₁ of the first excursion φ₁ of duration > c is optional. By the strong Markov property the process B₁(t) = B(T₁ + t), t ≥ 0, is a Brownian motion and is independent of φ₁. Hence φ₁ is independent of the sequence (φ₂, φ₃, ...), and φ₁ =d φ₂ since φ₂ is the first excursion of B₁ of duration > c. Now proceed by induction.

As an aside let us show, as Levy did, that this theorem by itself gives local time up to a multiplicative constant. Choose a sequence c_n decreasing to zero. We obtain an increasing family of i.i.d. sequences of excursions which contains all the excursions of Brownian motion. Each of these i.i.d. sequences acts as a clock. The large excursions of duration > c₀ ring the hours. The next sequence contains all excursions of duration > c₁ and ticks off the minutes. The next one the seconds, etc. Note that the number of minutes per hour is random: the sequence of excursions of duration > c₁ is i.i.d. and hence the subsequence of excursions of


duration > c₀ is generated by a selection procedure which gives negative binomial waiting times with expectation √(c₀/c₁). Similarly the number of seconds per hour is negative binomial with expectation √(c₀/c₂). If we standardize the clocks so that the intertick times of the nth clock are √(c_n/c₀) then the clocks become ever more accurate. The limit is local time for Brownian motion. Pursuing this line of thought one can show that the excursions of Brownian motion form a time homogeneous Poisson point process on a product space [0, ∞) × E where E is the space of continuous excursions and the horizontal axis is parametrized by local time. See Greenwood and Pitman [7] for details.

We now return to our main theme. Let ψ₁, ψ₂, ... be the i.i.d. sequence of positive excursions of duration > c. This is a subsequence of the sequence (φ_n) of Theorem 2.1. Given ε > 0 let f_ε(ψ_n) denote the occupation time of the space interval (0, ε) for the nth excursion ψ_n:

X_n = f_ε(ψ_n)/ε

and set

Y_n = X_n − E X_n.

Section 3 contains the proofs of the following key estimates:

(2.1) E X₁ ~ √(2πc) as c → 0,

(2.2) E X₁² ≤ 6√c ε.

Now define

S(c) = Y₁ + ... + Y_(n(c))

where n(c) = [u/√(2πc)] for some fixed u > 0. We are interested in the case c → 0. We have by (2.2)

E(S(c)²) = n(c) E Y₁² ≤ n(c) E X₁² ≤ (u/√(2πc))·6√c ε,

which gives

E(S(c)²) ≤ 6uε.

By (2.1) we have

E(X₁ + ... + X_(n(c))) = n(c) E X₁ → u as c → 0.

Let U_c denote the upper endpoint of the n(c)th positive excursion ψ_(n(c)). Note that ψ_(n(c)) = φ_(K(c)) is the K(c)th excursion of duration exceeding c, and that K(c) ~ 2n(c) a.s. by the strong law of large numbers for a fair coin. Lemma 1.2 shows that U_c → T*_u a.s. as c → 0, where T*_u is defined in (1.2). Hence

(2.3) X₁ + ... + X_(n(c)) → L_ε(T*_u) a.s. as c → 0.

Fatou's lemma then yields

Lemma 2.2. E(L_ε(T*_u) − u)² ≤ liminf_(c→0) E(S(c)²) ≤ 6uε.

Theorem 2.3. Define L, by (0.1). Then

(2.4) L,(t) -> L*(t) a.s. as € -> O

in the sense of weak convergence of monotone functions.

Proof. It suffices to show that for each rational u > O the scaled occupation

time

L,(T:) = m{t E [O,T:ll O < B(t) < €}/€ -> U a.s. as € -> O.

Since occupation time is increasing for fixed € > O and local time is continuous

this will imply weak convergence. In the definition of L,( t) as a ratio both

numerator and denominator are increasing in €. Hence it suffices to prove the

convergence for €n = n -4, as n -> 00. We have by Lemma 2.2

Page 20: Seminar on Stochastic Processes, 1990

12 A.A. Balkema and K.L. Chung

Since L:Pn is finite, the desired result follows from the Borel-Cantelli lemma.

As Chung comments in [3], the preceding proof is in the grand tradition of

classical probability. But then, what of the result?

3. The moments of excursionary occupation

In this section we use the results in Chung [2], beginning with a review of the notation. Let

γ(t) = sup{s | s ≤ t, B(s) = 0}
β(t) = inf{s | s ≥ t, B(s) = 0}
λ(t) = β(t) − γ(t).

Then (γ(t), β(t)) is the excursion interval straddling t, and λ(t) is its duration.

For any Borel set A in [0, ∞):

S(t; A) = ∫ from γ(t) to β(t) of 1_A(|B(u)|) du

is the occupation time of A by |B| during the said excursion. Its expectation conditioned on γ(t) and λ(t) has a density given by

(3.1) E(S(t; dx) | γ(t) = s, λ(t) = a) = 4x e^(−2x²/a) dx.

This result is due to Levy; a proof is given in [2]. Integration gives

(3.2) E(S(t; (0, ε)) | γ(t) = s, λ(t) = a) = a(1 − e^(−2ε²/a)).

Next it follows from (2.22) and (2.23) in [2] that

(3.3) P{λ(t) ∈ da} = (1/π)√(t/a³) da for a ≥ t.

In particular if r > c ≥ t > 0 then P{λ(t) > c} > 0 and

(3.4) P(λ(t) ∈ dr | λ(t) > c) = (1/2)√(c/r³) dr.

Levy derived (3.4) from the property of the Levy process T described in Section 1 above. It is a pleasure to secure this fundamental result directly from our excursion theory.


What is the exact relation between the excursion straddling t and the sequence of excursions (φ_n) introduced in Section 2?

Recall that φ_n is the nth excursion of duration exceeding c for given c > 0. We claim that φ₁ is distributed like the excursion straddling c conditional on its duration exceeding c. To see this we introduce a new sequence of excursions (η_n) with excursion intervals (γ_n, β_n) of duration λ_n = β_n − γ_n. Define η₁ as the excursion straddling t = c with excursion interval (γ₁, β₁); then define η₂ as the excursion straddling t = β₁ + c with excursion interval (γ₂, β₂); η₃ as the excursion straddling t = β₂ + c, etc. Note that the post-β₁ process B₁(t) = B(β₁ + t) is a Brownian motion which is independent of the excursion η₁. As in Theorem 2.1 a simple induction argument shows that the sequence (η_n) is i.i.d., at least if we shift the excursions so as to start at t = 0. Since for any sample point ω in the underlying probability space φ₁(ω) is the first element of the sequence (η_n(ω)) of duration exceeding c, it follows that φ₁ is distributed like the excursion straddling c, conditional on its duration exceeding c.

Now we can compute by (3.2) and (3.4):

(1/√c) E(S(t; (0, ε)) | λ(t) > c) = ∫ from c to ∞ of r(1 − e^(−2ε²/r)) dr/(2r^(3/2)) → ε√(2π)

as c → 0. This is (2.1) if we choose t = c.
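The limiting value ε√(2π) of the last display can be checked numerically; after cancellation the c → 0 limit of the integrand is (1 − e^(−2ε²/r))/(2√r). A small sketch (Python/NumPy; the value of ε and the quadrature grid are our own choices):

```python
import numpy as np

eps = 0.3
# c -> 0 limit of the display above, with r/(2 r^{3/2}) simplified to 1/(2 sqrt(r))
r = np.logspace(-10, 6, 200_001)
integrand = (1.0 - np.exp(-2 * eps**2 / r)) / (2.0 * np.sqrt(r))
# trapezoid rule on the (log-spaced) grid
value = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

exact = eps * np.sqrt(2 * np.pi)
print(value, exact)
```

The substitution s = 2ε²/r reduces the integral to (ε/√2)∫(1 − e^(−s))s^(−3/2) ds = (ε/√2)·2√π = ε√(2π), which the quadrature reproduces.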

Next Chung proved as a particular case of Theorem 9 in [2]:

(3.5) E(S(t; (0, ε))^k | γ(t) = s, λ(t) = a) ≤ (k + 1)! ε^(2k), k ≥ 1.

For k = 2 this is the missing estimate mentioned in Section 0. But it is also trivial that

(3.6) S(t; (0, ε)) ≤ λ(t).


Using (3.4), (3.5) and (3.6) we have

E(S(t; (0, ε))² | λ(t) > c) = ∫ from c to ∞ of E(S(t; (0, ε))² | λ(t) = r) √c dr/(2r^(3/2))
≤ ∫ from 0 to ∞ of (6ε⁴ ∧ r²) √c dr/(2r^(3/2))
= √c (6ε⁴ ∫ from ε²√6 to ∞ of dr/(2r^(3/2)) + ∫ from 0 to ε²√6 of √r dr/2)
≤ 6ε³√c.

Now choose t = c. Then S(c; (0, ε)) conditional on λ(c) > c is distributed like f_ε(φ₁). Hence

E X₁² = E f_ε(ψ₁)²/ε² ≤ 6√c ε.

This is (2.2).

References

[1] P. LEVY, Sur certains processus stochastiques homogènes. Compositio Math. 7 (1939), 283-339.

[2] K.L. CHUNG, Excursions in Brownian motion. Arkiv för Mat. 14 (1976), 155-177.

[3] K.L. CHUNG, Reminiscences of some of Paul Levy's Ideas in Brownian Motion and in Markov Chains. Seminar on Stochastic Processes 1988, 99-108. Birkhäuser, 1989. Also printed, with the author's permission but without the Postscript, in Colloque Paul Levy, Soc. Math. de France, 1988.

[4] C. DELLACHERIE & P.A. MEYER, Probabilités et Potentiel. Chapitres XII à XVI, Hermann, Paris, 1987.

[5] P. LEVY, Processus Stochastiques et Mouvement Brownien. Gauthier-Villars, Paris, 1948 (second edition 1965).

[6] J.F.C. KINGMAN, Regenerative Phenomena. Wiley, New York, 1972.

[7] P. GREENWOOD & J. PITMAN, Construction of local time and Poisson point processes from nested arrays. J. London Math. Soc. (2) 22 (1980), 182-192.

A.A. Balkema
F.W.I., Universiteit van Amsterdam
Plantage Muidergracht 24
1018 TV Amsterdam, Holland

K.L. Chung
Department of Mathematics
Stanford University
Stanford, CA 94305


Transformations of Measure on an Infinite Dimensional Vector Space

DENIS BELL

1 Introduction

Let E denote a Banach space equipped with a finite Borel measure ν. For any measurable transformation T: E → E, let ν_T denote the measure defined by ν_T(B) = ν(T⁻¹(B)) for Borel sets B. A transformation theorem for ν is a result which gives conditions on T under which ν_T is absolutely continuous with respect to ν, and which gives a formula for the corresponding Radon-Nikodym derivative (RND) when these conditions hold.

The study of transformation of a measure defined on a finite dimensional vector space is relatively straightforward. When E is infinite dimensional the situation is much more difficult, and in this case treatment of the problem has largely been restricted to cases where ν is a Gaussian measure. In this paper we describe a procedure for deriving a transformation theorem for an arbitrary Borel measure on an infinite dimensional Banach space. Although formal, our argument yields a formula (10) for the RND dν_T/dν which we believe to be new.

In §2 we give a brief survey of some existing results. In §3 we describe our method, which has also been discussed in [B.2, §5.3]. Finally in §4 we give some applications of our formula, in which we derive the RNDs for the theorems described in §2. These applications do not appear in [B.2].

2 Transformation theorems for Gaussian measure

These come in two varieties, classical and abstract. The theory of transformation of the classical Wiener measure was developed by Cameron and Martin, and Girsanov. The Girsanov theorem is as follows:

Let w denote standard real valued Brownian motion and let h be a bounded measurable process adapted to the filtration of w. Let v denote the process

(1) v(t) = w(t) − ∫ from 0 to t of h(s) ds.

Then v|[0,1] is a standard Brownian motion with respect to the measure dμ(w) = G(w) dν(w), where G(w) = exp(∫ from 0 to 1 of h(s) dw(s) − (1/2)∫ from 0 to 1 of h(s)² ds).
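For the simplest case of a constant drift h ≡ θ, the reweighting can be checked by Monte Carlo on the terminal value alone. In the sketch below (Python/NumPy; θ, the sample size and the tolerances are our own choices, not from the text), under dμ = G dν the variable v(1) = w(1) − θ should behave like a standard normal.

```python
import numpy as np

rng = np.random.default_rng(7)
theta, n = 0.5, 200_000

w1 = rng.normal(0.0, 1.0, n)                 # w(1) under the Wiener measure nu
G = np.exp(theta * w1 - 0.5 * theta**2)      # Girsanov density for h = theta
v1 = w1 - theta                              # v(1) = w(1) - integral of theta

# under dmu = G dnu, v(1) should be N(0, 1): check mass and first two moments
mass = G.mean()           # should be ~1
mean = (G * v1).mean()    # should be ~0
var = (G * v1**2).mean()  # should be ~1
print(mass, mean, var)
```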

There has been a series of increasingly more general results concerning the transformation of abstract Gaussian measure. The quintessential paper in this area is due to Ramer [R]. Let (i, H, E) be an abstract Wiener space in the sense of Gross [G] with Gaussian measure ν on E, where the Hilbert space H has inner product <·,·> and norm |·|.

Theorem (Ramer) Suppose U ⊂ E is an open set and T = I + K is a homeomorphism from U into E, where I is the identity map on E and K is an H-C¹ map from U into H such that its H-derivative DK is continuous from U into the space of Hilbert-Schmidt maps of H, and I_H + DK(x) ∈ GL(H) for each x. Then the measure ν(T·) is absolutely continuous with respect to ν and

(2) dν(T·)/dν (x) = |Δ(DT(x))| exp[−(<K(x), x> − tr DK(x)) − (1/2)|K(x)|²]

where Δ denotes the Carleman-Fredholm determinant and tr denotes trace, defined with respect to H. (The difference of the random variables contained inside the parentheses is defined as a limit of a certain convergent sequence in L²; each of the terms may fail to exist by itself.)


The following result is proved in [B.1]:

Theorem (Bell) Let ν be any finite Borel measure on E, differentiable in a direction r ∈ E in the sense that there exists an L¹ random variable X such that the relation

(3) ∫_E D_r θ dν = ∫_E θ X dν

holds for all test functions θ defined on E. Suppose X satisfies the conditions: t ∈ R ↦ X(x + tr) is continuous for a.a. x and the following random variable is locally integrable:

sup over t ∈ [0, 1] of [X(x + tr)]⁴.

Define T(x) = x − r, x ∈ E. Then ν and ν_T are equivalent and

(4) dν_T/dν (x) = exp(−∫ from 0 to 1 of X(x + tr) dt).

3 The scheme

Let ν now denote an arbitrary finite Borel measure on a Banach space. Let U denote a distinguished subset of the class of functions on E. We make the following

Definition A linear operator ℓ from U to L²(ν) will be called an integration by parts operator (IPO) for ν if the following relation holds for all C¹ functions θ: E → R and all h ∈ U for which either side exists:

∫_E Dθ(x) h(x) dν(x) = ∫_E θ(x) (ℓh)(x) dν(x).
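In one dimension, with ν the standard Gaussian measure on R, classical integration by parts gives the explicit IPO ℓh(x) = x h(x) − h′(x). The sketch below (Python/NumPy; the particular θ and h are arbitrary test choices of ours) verifies the defining identity numerically:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # density of the Gaussian nu

theta, dtheta = np.sin(x), np.cos(x)           # C^1 test function and derivative
h, dh = np.cos(x), -np.sin(x)                  # an h in the domain of the IPO
ell_h = x * h - dh                             # Gaussian IPO: (l h)(x) = x h(x) - h'(x)

lhs = float(np.sum(dtheta * h * phi) * dx)     # integral of (D theta) h dnu
rhs = float(np.sum(theta * ell_h * phi) * dx)  # integral of theta (l h) dnu
print(lhs, rhs)
```

The two quadratures agree, which is the one-dimensional content of the definition above.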

Remark The Malliavin calculus provides a tool for obtaining IPOs for the measures induced by both finite and infinite dimensional-valued stochastic differential equations (the finite dimensional case was established by Malliavin, Stroock, Bismut et al., see [B.2, Chs. 2, 3]; this was then extended to infinite dimensions by Bell [B.2, §7.3]).

Suppose (ℓ, U) is an IPO for ν. The next result is easily verified (see [B.2 §5.3]).

Lemma. Let h ∈ U ∩ L²(ν) and let φ: E → ℝ belong to L²(ν) ∩ C¹. Suppose φh ∈ U. Then

ℓ(φh)(x) = φ(x)ℓh(x) − Dφ(x)h(x)  a.s. (ν)

Remark The set of functions for which this lemma is valid

can be enlarged by a closure argument.

Suppose now that T: E → E is a map of the form I + K, where I is the identity on E and K ∈ U. The key idea is to construct a homotopy T_t connecting T and the identity map. There are obviously many ways to do this; the simplest is to define

T_t(x) = x + tK(x), t ∈ [0,1]

Suppose that T_t defines a family of invertible maps of E. Note that νT_t ≪ ν for each t if and only if there exists a family {X_t : t ∈ [0,1]} of L¹ random variables (i.e. the corresponding RND's) such that for all test functions φ on E:

(i) X₀ ≡ 1

(ii) ∫_E φ(x) dν(x) = ∫_E φ∘T_t⁻¹(x) X_t(x) dν(x)  (5)

Note that the RHS in (5) (which we will denote by f(t)) is actually independent of t; thus f′(t) ≡ 0. This will enable us to derive formulae for X_t, assuming that certain formal manipulations are valid. Formally differentiating the expression for f(t) inside the integral gives


∫_E [ Dφ(T_t⁻¹(x)) d/dt T_t⁻¹(x) · X_t(x) + φ∘T_t⁻¹(x) ∂X_t/∂t(x) ] dν(x) = 0  (6)

The first term in the integrand can be simplified by using the easily derived relation

Dφ(T_t⁻¹(x)) d/dt T_t⁻¹(x) = −D(φ∘T_t⁻¹)(x) K∘T_t⁻¹(x)

Substituting this into (6) gives

∫_E [ −D(φ∘T_t⁻¹)(x) K∘T_t⁻¹(x) X_t(x) + φ∘T_t⁻¹(x) ∂X_t/∂t(x) ] dν(x) = 0

Assume that for each t ∈ [0,1], K∘T_t⁻¹·X_t ∈ U. Using the defining property of ℓ in the last relation gives

∫_E φ∘T_t⁻¹(x) [ ∂X_t/∂t(x) − ℓ(K∘T_t⁻¹·X_t)(x) ] dν(x) = 0

Observe that this holds for all test functions φ if and only if

∂X_t/∂t(x) = ℓ(K∘T_t⁻¹·X_t)(x)  a.s. (ν)  (7)

Suppose that K∘T_t⁻¹ and X_t satisfy the respective conditions on h and φ in the previous lemma. Then applying the lemma to the second term in (7) yields

∂X_t/∂t(x) − X_t(x) ℓ[K∘T_t⁻¹](x) + DX_t(x) K∘T_t⁻¹(x) = 0  (8)

We now write X_t(x) = X(t,x), X₁ = ∂X/∂t, X₂ = ∂X/∂x, and make the substitution x = T_t(y) in (8). We then have

X₁(t,T_t(y)) − X(t,T_t(y)) ℓ[K∘T_t⁻¹](T_t(y)) + X₂(t,T_t(y))K(y) = 0


Since K = dT_t/dt, this reduces to

d/dt [ X(t,T_t(y)) ] = X(t,T_t(y)) ℓ[K∘T_t⁻¹](T_t(y))

In view of (5, (i)) the above equation has the unique solution

X(t,T_t(y)) = exp{ ∫₀ᵗ ℓ[K∘T_s⁻¹](T_s(y)) ds }

We thus arrive at the following expression for X:

X(t,x) = exp{ ∫₀ᵗ ℓ[K∘T_s⁻¹](T_s(T_t⁻¹(x))) ds }  (9)

In particular

(dνT/dν)∘T(x) = exp{ ∫₀¹ ℓ[K∘T_s⁻¹](T_s(x)) ds }  (10)

Given a measure ν and a map T such that the family of maps T_t is invertible, the scheme is implemented by defining X(t,x) as in (9) and, by reversing the steps in the above argument (note that all the steps are reversible), showing that (5) holds. This will yield a transformation theorem for ν with respect to the maps T_t, with X(t,x) as the corresponding family of RND's.

This was done in [B.1] in the special case K = constant;

in this case the method yields the non-Gaussian theorem

described in §1. One can presumably find a larger class

of transformations for which the method is valid.

We now give a simple condition on K under which T_s is invertible for all s ∈ [0,1]. Recall that K is a contraction (on E) if there exists 0 ≤ c < 1 such that ‖K(x₁) − K(x₂)‖_E ≤ c‖x₁ − x₂‖_E for all x₁, x₂ ∈ E.

Proposition. If K is a contraction then T_s is invertible for all s ∈ [0,1].

Proof. It clearly suffices to prove that T = I + K is invertible, since sK is also a contraction for all s ∈ [0,1]. To show T is surjective, suppose y is any element of E and define K′(x) = y − K(x), x ∈ E. Then K′ is also a contraction. It therefore follows from the contraction mapping theorem that K′ has a fixed point x₀ ∈ E. We then have x₀ = y − K(x₀), i.e. x₀ satisfies T(x₀) = y. To see that T is injective, suppose that T(x₁) = T(x₂). This implies ‖x₁ − x₂‖_E = ‖K(x₁) − K(x₂)‖_E ≤ c‖x₁ − x₂‖_E. Since c < 1 this implies x₁ = x₂.
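The surjectivity half of this proof is constructive: iterating K′ converges geometrically to T⁻¹(y). A numerical sketch (assuming, purely for illustration, E = ℝ³ and a linear contraction K):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
A *= 0.5 / np.linalg.norm(A, 2)      # scale so K(x) = A x is a contraction, c = 1/2

def K(x):
    return A @ x

def invert_T(y, tol=1e-12):
    # Solve T(x) = x + K(x) = y by iterating x -> K'(x) = y - K(x),
    # the contraction whose fixed point is T^{-1}(y).
    x = np.zeros_like(y)
    for _ in range(200):
        x_new = y - K(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

y = np.array([1.0, -2.0, 0.5])
x = invert_T(y)
assert np.linalg.norm(x + K(x) - y) < 1e-10
```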

4 Applications

(A) Suppose ν is the standard Wiener measure on the space of paths C₀[0,1]. Then the Itô integral

ℓk = ∫₀¹ k′_s dw_s

defines an IPO for ν. The domain U of ℓ consists of the set of adapted paths k (which we think of as being functions of w) with square integrable time derivatives k′. This property of the Itô integral was first observed by Gaveau and Trauber [G-T]. (One can actually use functional techniques to define an IPO for ν with an extended domain containing non-adapted paths, and this gives rise to the Skorohod integral.)

Suppose h (= h(w)) satisfies the conditions in the Girsanov theorem, and define K: C₀[0,1] → U by K(w) = −∫₀· h_u du and T = I + K. Suppose T is invertible and let S denote the inverse of T. The Girsanov theorem states that νS ≪ ν and dνS/dν = G. Since G is positive this is equivalent to saying that νT ≪ ν and (dνT/dν)∘T = 1/G. We will use (10) to derive this formula.


Note that (10) gives

(dνT/dν)∘T(w) = exp{ ∫₀¹ ℓ[K∘T_s⁻¹](T_s(w)) ds }

Using the Itô integral form of ℓ to evaluate the integrand, the last expression is equal to 1/G(w), as required.

(B) Let ν denote a Gaussian measure corresponding to an abstract Wiener space (i, H, E). Then ν has an IPO ℓ defined by the divergence operator

ℓK(x) = ⟨K(x),x⟩ − tr DK(x)
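In ℝⁿ with the standard Gaussian this divergence formula can be checked directly; the vector field and test function below are our own illustrative choices, evaluated with a tensor Gauss-Hermite grid:

```python
import numpy as np

# 2-D standard Gaussian expectations via a tensor Gauss-Hermite grid.
n, wn = np.polynomial.hermite_e.hermegauss(60)
wn = wn / np.sqrt(2 * np.pi)
X1, X2 = np.meshgrid(n, n)
W = np.outer(wn, wn)

def expect(vals):
    return np.sum(W * vals)

# A C^1 vector field K(x) = (x2^2, x1 + x2/2): DK = [[0, 2 x2], [1, 1/2]], tr DK = 1/2.
K1, K2 = X2**2, X1 + X2 / 2
trDK = 0.5

# Test function phi(x) = x1*x2, with gradient (x2, x1).
phi = X1 * X2
lhs = expect(X2 * K1 + X1 * K2)                    # integral of Dphi(x)K(x) dv
rhs = expect(phi * ((K1 * X1 + K2 * X2) - trDK))   # integral of phi(x) lK(x) dv
assert abs(lhs - rhs) < 1e-8
```

Both sides evaluate to 1 for these choices, and the quadrature is exact since only low-degree polynomial moments are involved.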

where ⟨·,·⟩ denotes the inner product on H and tr the trace with respect to H. An initial domain for ℓ can be taken to be the set of C¹ functions from E into E* (where E* is identified with a subset of E under the inclusions defined by the map i). However, this domain can be extended in Ramer's sense, and the extended domain U consists of precisely the class of functions K: E → H

defined in the statement of Ramer's theorem. For K ∈ U, one then has

ℓK(x) = "⟨K(x),x⟩ − tr DK(x)"

Thus

ℓ[K∘T_s⁻¹](T_s(x)) = "⟨K(x),x⟩ + s|K(x)|² − tr D[K∘T_s⁻¹](T_s(x))"


In order to obtain (2) from this it will be necessary to do some manipulations on the trace term above. Under the present assumptions these will necessarily be of a formal nature since, as we remarked earlier, the trace might fail to exist. One could overcome this difficulty by working with the approximations used to define " ", then passing to the limit. However, in order to avoid having to do this we will assume that K is a C¹ map into E*. Under this assumption all the terms in " " exist separately. We have

∫₀¹ ℓ[K∘T_s⁻¹](T_s(x)) ds = ⟨K(x),x⟩ + ½|K(x)|² − tr ∫₀¹ D[K∘T_s⁻¹](T_s(x)) ds

Substituting this into (10) gives

(dνT/dν)∘T(x) = exp{ ⟨K(x),x⟩ + ½|K(x)|² − tr Log DT(x) }  (11)

where (11) follows from the identity:

∫₀¹ D[K∘T_s⁻¹](T_s(x)) ds = Log DT(x)  (12)

It is particularly easy to verify (12) under the assumption that K is a contraction from E into H, for in this case ‖DK(x)‖_{L(H)} < 1. We then have

∫₀¹ D[K∘T_s⁻¹](T_s(x)) ds = ∫₀¹ DK(x)[DT_s(x)]⁻¹ ds = ∫₀¹ DK(x)[I + sDK(x)]⁻¹ ds = ∫₀¹ d/ds Log[I + sDK(x)] ds = Log[I + DK(x)]
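In finite dimensions, identity (12) and the power-series Log can be checked numerically; here DK(x) is replaced by a fixed matrix B of operator norm less than 1 (our illustrative choice), and the trace of the integral is compared with log Det(I + B):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
B *= 0.4 / np.linalg.norm(B, 2)          # DK(x) = B, operator norm < 1

I = np.eye(4)
# tr of int_0^1 B (I + sB)^{-1} ds, midpoint rule with 400 points
s = (np.arange(400) + 0.5) / 400
lhs = sum(np.trace(B @ np.linalg.inv(I + si * B)) for si in s) / 400

# tr Log(I + B) = log Det(I + B), computed stably via slogdet
sign, logdet = np.linalg.slogdet(I + B)
assert sign > 0
assert abs(lhs - logdet) < 1e-5
```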

where Log is defined by a power series in the algebra of operators on H. This implies (12). It follows from (11) that

dν(T·)/dν (x) = |Det DT(x)| exp{ −⟨K(x),x⟩ − ½|K(x)|² }

Thus we obtain the formula given by the transformation

theorem of Kuo [K]. Under Ramer's assumptions it is necessary to introduce the term tr DK(x) into the

exponential in order to obtain convergence, and the

corresponding adjustment required outside the exponential

converts the standard determinant into the Carleman-Fredholm determinant which appears in (2).

(C) Suppose that ν is a finite Borel measure on E for which (3) holds. Define U = {cr : c ∈ ℝ} and ℓ on U by ℓ(cr) = c·X. Note that for T_s = I − sr, T_s⁻¹ = I + sr. Thus (10) gives

(dνT/dν)∘T(x) = exp{ −∫₀¹ X(x − sr) ds }

Hence we obtain the formula in (4).


REFERENCES

[B.1] Bell, D. (1985). A quasi-invariance theorem for measures on Banach spaces. Trans. Amer. Math. Soc. 290, no. 2, 841-845.

[B.2] Bell, D. (1987). The Malliavin Calculus. Pitman Monographs and Surveys in Pure and Applied Mathematics, no. 34. Wiley, New York.

[G-T] Gaveau, B. and Trauber, P. (1982). L'intégrale stochastique comme opérateur de divergence dans l'espace fonctionnel. J. Funct. Anal. 46, 230-238.

[G] Gross, L. (1965). Abstract Wiener spaces. Proc. Fifth Berkeley Sympos. Math. Statist. and Probability, Vol. 2, part 1, 31-42.

[K] Kuo, H.-H. (1971). Integration on infinite dimensional manifolds. Trans. Amer. Math. Soc. 159, 57-78.

[R] Ramer, R. (1974). On nonlinear transformations of Gaussian measures. J. Funct. Anal. 15, 166-187.

Denis Bell, Department of Mathematics, University of North Florida, Jacksonville, Florida 32216


Stochastic Integration in Banach Spaces

J. K. BROOKS and N. DINCULEANU

Introduction

The purpose of this paper is twofold: first, to extend the definition of the stochastic integral for processes with values in Banach spaces; and second, to define the stochastic integral as a genuine integral, with respect to a measure, that is, to provide a general integration theory for vector measures, which, when applied to stochastic processes, yields the stochastic integral along with all its properties. For the reader interested only in scalar stochastic integration, our approach should still be of interest, since it sheds new light on the stochastic

integral, enlarges the class of integrable processes and presents new convergence

theorems involving the stochastic integral.

The classical theory of stochastic integration for real valued processes, as it is presented, for example, by Dellacherie and Meyer in [D-M], reduces, essentially, to integration with respect to a square integrable martingale; and this

is done by defining the stochastic integral, first for simple processes, and then

extending it to a larger class of processes, by means of an isometry between

certain L2-spaces of processes. This method has been used also by Kunita

in [K] for processes with values in Hilbert spaces, by using the existence of

the inner product to prove the isometry mentioned above. But this approach

cannot be used for Banach spaces, which lack an inner product. A number

of technical difficulties emerge for Banach valued processes, and one truly ap­

preciates the geometry that the Hilbert space setting provides in stochastic

integration, after considering the general case. A new approach is needed for

Banach valued processes.


On the other hand, the classical stochastic integral, as described above, is

not a genuine integral, with respect to some measure. It would be desirable,

as in classical Measure Theory, to have a space of "integrable" processes, with

a norm on it, for which it is a Banach space, and an integral for the integrable

processes, which would coincide with the stochastic integral. Also desirable

would be to have Vitali and Lebesgue convergence theorems for the integrable

processes. Such a goal is legitimate and many attempts have been made to

fulfill it.

Any measure theoretic approach to stochastic integration has to use an integration theory with respect to a vector measure. Pellaumail [P] was the first to attempt such an approach; but due to the lack of a satisfactory integration theory, this goal was not achieved--even the establishment of a cadlag modification of the stochastic integral could not be obtained. Kussmaul [Ku.1] used the idea of Pellaumail and was able to define a measure theoretic stochastic integral, but only for real valued processes. He used in [Ku.2] the same method for Hilbert valued processes, but the goal was only partially fulfilled, again due to the lack of a satisfactory general integration theory.

The integration theory used in this paper is a general bilinear vector integration, with respect to a Banach valued measure with finite semivariation, developed by the authors in [B-D.2]. This theory seems to be tailor-made for application to the stochastic integral. For the convenience of the reader, we give a short presentation in section 1, and a more complete presentation in Appendix I. The technical difficulties encountered in applying this theory to stochastic integration have required us to extend and modify the integration theory given in [B-D.2] and to add a series of new results. We mention in this respect the extension theorem of vector measures (Theorem A1.3) which is an improvement over the existing extension theorems.

In order to apply this theory to define a stochastic integral with respect to a Banach valued process X, we construct a stochastic measure I_X on the ring R generated by the predictable rectangles. The process X is called summable if I_X can be extended to a σ-additive measure with finite semivariation on the σ-algebra P of predictable sets. Roughly speaking, the stochastic integral


H·X is the process (∫_{[0,t]} H dI_X)_{t≥0} of integrals with respect to I_X.

The summable processes play in this theory the role played by the square

integrable martingales in the classical theory. It turns out that every Hilbert

valued square integrable martingale is summable; but we show by an example that for any infinite dimensional Banach space E, there is an E-valued

summable process which is not even a semimartingale.

Not only does our theory allow us to consider stochastic integration for a

larger class of processes than the semimartingales, but even in the classical

case our theory provides a larger space of integrable processes. Our space of

integrable processes with respect to a given summable process X is a Lebesgue­

type space, endowed with a seminorm; but, unlike the classical Lebesgue spaces, the simple processes are not necessarily dense. This creates considerable difficulty, since usually most properties in integration theory are proved first for simple functions and then are extended by continuity to the whole space. To overcome this difficulty, we proved a Lebesgue-type theorem (Theorem 3.1) which ensures the convergence of the integrals (rather than the convergence in the Lebesgue space itself). We are able then to prove that our

Lebesgue-type space is complete, that Vitali and Lebesgue convergence theorems are valid in this space, as well as weak compactness criteria and weak

convergence theorems for the stochastic integral, which are new even in the

scalar case.

The stochastic integral is extended then in the usual manner to processes that are "locally integrable" with respect to "locally summable" processes. It turns out that any caglad adapted process is locally integrable with respect to any locally summable process. This allows the definition of the quadratic variation which, in turn, is used in a separate paper [B-D.7] to prove the Ito formula for Banach valued processes, for use in the theory of stochastic differential equations in Banach spaces.

When is X summable? This crucial problem is treated in section 2. The

answer to this problem, which constitutes one of the main results of this paper,

can be stated, roughly, as follows: X is summable if and only if Ix is bounded

on the ring R (Theorems 2.3 and 2.5). It is quite unexpected that the mere


boundedness of I_X on R implies not only that I_X is σ-additive on R, but that I_X has a σ-additive extension to P. The proof of this result is quite involved

and uses the above mentioned new extension theorem for vector measures as

well as the theory of quasimartingales. The reader is referred to Appendix II

for pertinent results concerning quasimartingales used in connection with the

summability theory.

We mention that various definitions of a stochastic integral have been given

in a Banach space setting (Pellaumail [P], Yor [Y1 ], [Y2 ], Gravereaux and

Pellaumail [G-P], Metivier [M.I], Metivier and Pellaumail [M-P], Kussmaul

[Ku.2] and Pratelli [Pr]). However, either the Banach spaces were too restrictive, or the construction did not yield the convergence theorems necessary for

a full development of the stochastic integration theory.

Contents

1. Preliminaries.

Notation. Vector integration. Processes with finite variation.

2. Summable processes.

Definition of summable process. Extension of I_X to stochastic intervals. Summability criteria. σ-additivity and the extension of I_X.

3. The stochastic integral.

Definition of the integral ∫ H dI_X. The stochastic integral. Notation and remarks. The stochastic integral of elementary processes. Stochastic integrals

and stopping times. Convergence theorems. The stochastic integral of caglad

and bounded processes. Summability of the stochastic integral. The stochastic

integral with respect to a martingale. Square integrable martingales. Processes

with integrable variation. Weak completeness of L¹_{F,G}(B,X). Weak compactness in L¹_{F,G}(B,X).

4. Local summability and local integrability.

Basic properties. Convergence theorems. Additional properties. Semi-summable processes.

Appendix 1. General integration theory in Banach spaces.


Strong additivity. Uniform σ-additivity. Measures with finite variation. Stieltjes measures. Extensions of measures. The semivariation. Measures with bounded semivariation. The space of integrable functions. The integral. The indefinite integral. Relationship between the spaces F_{F,G}(B, m).

Appendix II. Quasimartingales.

Rings of subsets of ℝ₊ × Ω. The Doléans function. Quasimartingales.

References.

1. Preliminaries

In this section we shall present some of the notation used throughout this paper. In addition, for the reader's convenience we shall quickly sketch, in a few paragraphs, the vector integration used in defining the stochastic integral. A full treatment is presented in Appendix AI. Finally, we present here the stochastic integral (that is, the pathwise Stieltjes integral) with respect to processes with finite variation. The stochastic integral proper, with respect to summable processes, will be presented in section 3.

Notations

Throughout the paper, E, F, G will be Banach spaces. The norm of a Banach space will be denoted by |·|. The dual of any Banach space M is denoted by M*, and the unit ball of M by M₁. The space of bounded linear operators from F to G is denoted by L(F,G). We write E ⊂ L(F,G) to mean that E is isometrically embedded in L(F,G). Examples of such embeddings are: E = L(ℝ,E); E ⊂ L(E*,ℝ) = E**; E ⊂ L(F, E ⊗̂ F); if E is a Hilbert space over the reals, E = L(E,ℝ).

We write c₀ ⊄ G to mean that G does not contain a copy of c₀, that is, G does not contain a subspace which is isomorphic to the Banach space c₀.

A subspace Z ⊂ D* is said to be norming for D if for every x ∈ D we have

|x| = sup{ |⟨x,z⟩| : z ∈ Z₁ }.

Obviously, D* is norming for D, and D ⊂ D** is norming for D*. Useful

examples of a norming space are the following.


Let (Ω, F, P) be a probability space, and 1 ≤ p ≤ ∞. If p < ∞, then L^p_E ≡ L^p_E(Ω, F, P) is the space of F-measurable, E-valued functions f such that ‖f‖_p^p = ∫ |f|^p dP < ∞. If p = ∞, then L^∞_E denotes the space of E-valued, essentially bounded, F-measurable functions. Note that L^q_{E*} is contained in (L^p_E)*, where 1/p + 1/q = 1; if E* has the Radon-Nikodym property, then L^q_{E*} = (L^p_E)*. One can show that L^q_{E*} is a norming space for L^p_E; if F is generated by a ring R, then even the E*-valued, simple functions over R form a norming space for L^p_E.
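The defining property of a norming subspace can be seen in a low-dimensional example (our choice of spaces): for D = ℝ² with the ℓ¹ norm, D* carries the ℓ∞ norm, and the supremum is attained at the extreme points z ∈ {−1,1}²:

```python
import numpy as np
from itertools import product

# D = R^2 with the l1 norm; D* = R^2 with the l-infinity norm.
# The sup over the dual unit ball is attained at the sign vectors z in {-1, 1}^2.
def l1_via_duality(x):
    return max(abs(z[0] * x[0] + z[1] * x[1]) for z in product([-1.0, 1.0], repeat=2))

x = np.array([3.0, -4.0])
assert l1_via_duality(x) == np.sum(np.abs(x)) == 7.0
```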

Vector integration

Let S be a non-empty set, Σ a σ-algebra of subsets of S, and let m : Σ → E ⊂ L(F,G) be a σ-additive measure with finite semivariation m̃_{F,G} (see AI for the definition of m̃_{F,G}).

For z ∈ G*, let m_z : Σ → F* be the σ-additive measure, with finite variation |m_z|, defined by ⟨x, m_z(A)⟩ = ⟨m(A)x, z⟩, for A ∈ Σ and x ∈ F. We denote by m_{F,G} the set of all measures |m_z| with z ∈ G₁*. If D is any Banach space, we denote by F_D(m̃_{F,G}) the vector space of functions f : S → D belonging to the intersection ∩{ L¹_D(|m_z|) : z ∈ G₁* } and such that

m̃_{F,G}(f) := sup{ ∫ |f| d|m_z| : z ∈ G₁* } < ∞.

Then m̃_{F,G}(·) is a seminorm and F_D(m̃_{F,G}) is complete for this seminorm. We note that F_D(m̃_{F,G}) contains all bounded measurable functions. But, unlike the classical integration theory, the step functions are not necessarily dense in F_D(m̃_{F,G}).

The most important case is when D = F, for then we can define an integral ∫ f dm ∈ G**, for f ∈ F_F(m̃_{F,G}), as follows: since f ∈ L¹_F(|m_z|) for every z ∈ G*, the mapping z → ∫ f dm_z is an element of G**, which we denote by ∫ f dm:

⟨z, ∫ f dm⟩ = ∫ f dm_z, for z ∈ G*.
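For a simple function and a measure on a finite set, the weak definition can be traced through explicitly; the matrix-valued measure below (F = G = ℝ², so that m_z(A) = m(A)ᵀz) is an illustrative stand-in, not an object from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
# A matrix-valued measure on the 3-point set S = {0, 1, 2}: m(A) = sum of its atoms.
atoms = [rng.standard_normal((2, 2)) for _ in range(3)]   # m({s}) in L(R^2, R^2)
f = [rng.standard_normal(2) for _ in range(3)]            # a simple R^2-valued function

# Direct integral of the simple function: int f dm = sum m({s}) f(s) in G = R^2.
direct = sum(atoms[s] @ f[s] for s in range(3))

# Weak definition: for each z in G*, m_z(A) = m(A)^T z and <z, int f dm> = int f dm_z.
for z in (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, -3.0])):
    weak = sum(f[s] @ (atoms[s].T @ z) for s in range(3))
    assert np.isclose(z @ direct, weak)
```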


Under certain conditions, we have ∫ f dm ∈ G; for example, if f is the limit in F_F(m̃_{F,G}) of simple functions. If the set of measures m_{F,G} is uniformly σ-additive, for example if F = ℝ, then ∫ f dm ∈ G for any f in the closure, in F_F(m̃_{F,G}), of the bounded measurable functions. Without this added hypothesis, this need not be true in general--a fact which causes many complications in vector integration theory.

Processes with finite variation

Let (Ω, F, P) be a probability space and (F_t)_{t≥0} a filtration satisfying the usual conditions. Let X : ℝ₊ × Ω → E be a process.

We say that X has finite variation if for every ω ∈ Ω and t ≥ 0, the function s → X_s(ω) has finite variation Var_{[0,t]} X_·(ω) on [0,t]. For every t ≥ 0, we denote

|X|_t(ω) = Var_{[0,t]} X_·(ω).

The process |X| = (|X|_t)_{t≥0} is called the variation process of X. We note that |X|₀ = |X₀|. We say that X has bounded variation if |X|_∞(ω) := |X|*(ω) = sup_t |X|_t(ω) < ∞, for every ω ∈ Ω. The process X is said to have integrable variation if |X|* ∈ L¹(P).

For the remainder of this section we shall assume that X : ℝ₊ × Ω → E ⊂ L(F,G) is a cadlag process with finite variation |X|. Then |X| is also cadlag. If X is adapted, then |X| is also adapted (see [D.3]).

We say that a process H : ℝ₊ × Ω → F is locally integrable with respect to X if for each ω ∈ Ω and t ≥ 0, the function s ↦ H_s(ω) is Stieltjes integrable with respect to s ↦ |X|_s(ω) on [0,t]; then we can define the Stieltjes integral ∫_{[0,t]} H_s(ω) dX_s(ω). The function t ↦ ∫_{[0,t]} H_s(ω) dX_s(ω) is cadlag and has finite variation ≤ ∫_{[0,t]} |H_s(ω)| d|X|_s(ω).

We say that H is integrable with respect to X if for each ω ∈ Ω the Stieltjes integral ∫_{[0,∞)} H_s(ω) dX_s(ω) is defined. Then, evidently, H is locally integrable with respect to X. If H is jointly measurable, to say that H is locally integrable with respect to X means that ∫_{[0,t]} |H_s(ω)| d|X|_s(ω) < ∞ for every ω ∈ Ω and t ≥ 0.

If H is jointly measurable and locally integrable with respect to X, then


we can consider the G-valued process (∫_{[0,t]} H_s(ω) dX_s(ω))_{t≥0}. This process is cadlag and has finite variation; it is adapted if both X and H are adapted.

Assume X is cadlag, adapted, with finite variation, and H is jointly measurable, adapted, and locally integrable with respect to X. Then the cadlag, adapted process (∫_{[0,t]} H_s(ω) dX_s(ω))_{t≥0} is called the stochastic integral of H with respect to X and is denoted H·X or ∫ H dX:

(H·X)_t(ω) = ∫_{[0,t]} H_s(ω) dX_s(ω), for ω ∈ Ω and t ≥ 0.
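For a pure-jump path of finite variation the pathwise Stieltjes integral reduces to a sum over jump times; a small sketch (the path and integrand below are illustrative choices, not data from the paper):

```python
import numpy as np

# A pure-jump cadlag path X on [0, 1] with finite variation:
# X_s is the sum of the jumps dx_i occurring at times t_i <= s.
jump_times = np.array([0.2, 0.5, 0.9])
jumps = np.array([1.0, -0.5, 2.0])

def stieltjes_integral(H, t):
    """(H.X)_t = integral over [0,t] of H_s dX_s = sum over t_i <= t of H(t_i) dx_i."""
    mask = jump_times <= t
    return float(np.sum(H(jump_times[mask]) * jumps[mask]))

H = lambda s: s ** 2
assert np.isclose(stieltjes_integral(H, 1.0), 1.535)   # 0.04*1 - 0.25*0.5 + 0.81*2
assert np.isclose(stieltjes_integral(H, 0.6), 0.2**2 * 1.0 + 0.5**2 * (-0.5))
```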

We list now some properties of the stochastic integral:

1) The stochastic integral H·X has finite variation |H·X| satisfying

|H·X|_t(ω) ≤ (|H|·|X|)_t(ω) < ∞,

where |H| = (|H_t|)_{t≥0} and |X| = (|X|_t)_{t≥0}. If both H and X are real valued, then |H·X| = |H|·|X|.

2) If T is a stopping time, then X^T has finite variation and

X^T = 1_{[0,T]}·X and X^{T−} = 1_{[0,T)}·X

3) Let T be a stopping time. Then H is locally integrable with respect to X^T (respectively X^{T−}) iff 1_{[0,T]}H (respectively 1_{[0,T)}H) is locally integrable with respect to X. In this case we have

H·X^T = (1_{[0,T]}H)·X = (H·X)^T

and

H·X^{T−} = (1_{[0,T)}H)·X = (H·X)^{T−}.

4) If H is real valued and K is F-valued, then K is locally integrable with respect to H·X iff KH is locally integrable with respect to X. In this case we have

K·(H·X) = (KH)·X.

4′) If H is F-valued and K is a real valued process such that KH is locally integrable with respect to X, then K is locally integrable with respect to H·X and we have

K·(H·X) = (KH)·X.


5) Δ(H·X) = H ΔX

where ΔX_t = X_t − X_{t−} is the jump of X at t.
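Properties 4) and 5) can be seen at a glance for pure-jump integrators, where both sides of K·(H·X) = (KH)·X have the same jumps; a quick check under an illustrative pure-jump model of our own choosing:

```python
import numpy as np

# Pure-jump integrator: X jumps by dx_i at time t_i, so H.X jumps by H(t_i) dx_i.
t_i = np.array([0.25, 0.6, 0.8])
dx = np.array([2.0, -1.0, 0.5])

H = lambda s: 1.0 + s
K = lambda s: s ** 2

d_hx = H(t_i) * dx                  # property 5): the jumps of H.X are H(t_i) dx_i
lhs_jumps = K(t_i) * d_hx           # jumps of K.(H.X)
rhs_jumps = (K(t_i) * H(t_i)) * dx  # jumps of (KH).X
assert np.allclose(lhs_jumps, rhs_jumps)
```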

In sections 3 and 4 we shall define the stochastic integral for processes X which are summable or locally summable, and we shall prove that the stochastic integral still has all these properties. A locally summable process need not have finite variation; and a process with finite variation is not necessarily locally summable. If X has (locally) integrable variation, then it is (locally) summable (Theorem 3.32 infra). The processes with integrable variation will be studied in section 3.

2. Summable processes

In this section, we shall introduce the notion of summability of a process

X. This concept replaces, in some sense, in the Banach space setting, the

classic assumption of X being a square integrable martingale, and allows us to

define the stochastic integral ∫ H dX for a larger class of predictable processes

H than has been previously considered. For Hilbert valued processes X, we

recover the classical stochastic integral. As we mentioned in the introduction,

it turns out, surprisingly, that a mere boundedness condition on the stochastic

measure Ix, induced by X, implies the summability of X.

Throughout this paper, (Ω, F, P) is a probability space, (F_t)_{t≥0} is a filtration satisfying the usual conditions; 1 ≤ p < ∞; and X : ℝ₊ × Ω → E ⊂ L(F,G) is a cadlag, adapted process, with X_t ∈ L^p_E(P) ≡ L^p_E for every t ∈ ℝ₊ (the terminology of Dellacherie and Meyer, [D-M], will be used).

We shall denote R = A[0,∞), the ring of subsets of ℝ₊ × Ω generated by the predictable rectangles {0} × A, with A ∈ F₀, and (s,t] × A, with 0 ≤ s < t < ∞ and A ∈ F_s. The σ-algebra P of predictable sets is generated by R. There is a close connection between summability and quasimartingales

(Theorem 2.5 infra). Facts concerning quasimartingales, taken from [B-D.5] and [Ku.1], are presented in Appendix AII.

Definition of summable processes

We define the finitely additive stochastic measure I_X : R → L^p_E, first for


predictable rectangles by

I_X({0} × A) = 1_A X₀, for A ∈ F₀;  I_X((s,t] × A) = 1_A(X_t − X_s), for A ∈ F_s,

and then we extend it in an additive fashion to R. We note that I_X([0,t] × Ω) = X_t, for t ≥ 0. Frequently we shall write I in place of I_X. Since E ⊂ L(F,G),

we consider L^p_E ⊂ L(F, L^p_G), and therefore the semivariation of I_X can be computed relative to the pair (F, L^p_G). The reader is referred to Appendix AI for relevant information concerning vector measures, such as semivariation, strong additivity, etc. Explicitly, Ĩ_{F,G}, which denotes the semivariation of I_X relative to (F, L^p_G), is defined by

Ĩ_{F,G}(A) = sup ‖Σᵢ I_X(A_i) x_i‖_{L^p_G}, for A ∈ R,

where the supremum is extended over all finite families of vectors x_i ∈ F₁ and disjoint sets A_i from R contained in A. If I_X can be extended to P, the semivariation of the extension is defined on sets belonging to P in an analogous fashion. We say that I_X has finite semivariation relative to (F, L^p_G) if Ĩ_{F,G}(A) < ∞ for every A ∈ R.
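For a real-valued additive measure on a finite set, viewed in L(ℝ,ℝ) as in the scalar case, the semivariation coincides with the variation, and the supremum is attained at the finest partition with coefficients x_i = ±1. A brute-force check (the atom masses are an illustrative choice of ours):

```python
from itertools import product

# An additive real-valued measure on the subsets of S = {0, 1, 2},
# regarded as taking values in L(R, R): semivariation relative to (R, R).
atom_mass = {0: 1.5, 1: -2.0, 2: 0.5}

def semivariation(A):
    # Finest partition of A into atoms, coefficients x_i in [-1, 1];
    # for a scalar measure the sup is attained at x_i = +/-1.
    masses = [atom_mass[j] for j in A]
    return max(abs(sum(x * m for x, m in zip(signs, masses)))
               for signs in product([-1, 1], repeat=len(masses)))

A = [0, 1, 2]
assert semivariation(A) == sum(abs(atom_mass[j]) for j in A) == 4.0
```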

2.1 DEFINITION. We say that X is p-summable relative to (F,G) if I_X has a σ-additive L^p_E-valued extension (which will be unique), still denoted by I_X, to the σ-algebra P of predictable sets and, in addition, I_X has finite semivariation on P relative to (F, L^p_G).

If p = 1, we say, simply, that X is summable relative to (F,G).

If we consider E = L(ℝ,E), and if X is p-summable relative to (ℝ,E), we say that X is p-summable, without specifying the pair (ℝ,E).

Remarks. (a) X is p-summable relative to (ℝ,E) if and only if I_X has a σ-additive extension to P, since in this case I_X is bounded in L^p_E on P and automatically has finite semivariation relative to (ℝ, L^p_E).

(b) If 1 ≤ p′ < p < ∞, and if X is p-summable relative to (F,G), then X is p′-summable relative to (F,G). In particular, p-summable relative to (F,G) implies summable relative to (F,G). For this reason, most theorems stated and proved for summable processes remain valid for p-summable processes.


(c) If X is p-summable relative to (F,G), then X is p-summable relative to (ℝ,E).

(d) If X is p-summable relative to (F,G), then for any t ≥ 0 we have X_{t−} ∈ L^p_E and I_X([0,t) × Ω) = X_{t−}. In fact, if t_n ↗ t then X_{t_n} = I_X([0,t_n] × Ω) → I_X([0,t) × Ω) in L^p_E and X_{t_n} → X_{t−} pointwise.

(e) We shall prove in the next sections that the following classes of processes are summable.

1) If X : ℝ₊ × Ω → E is a process with integrable variation then X is p-summable relative to any pair (F,G) such that E ⊂ L(F,G) (Theorem 3.32 infra).

2) If E and G are Hilbert spaces, then any square integrable martingale X : ℝ₊ × Ω → E ⊂ L(F,G) is 2-summable relative to (F,G) (Theorem 3.24 infra).

(f) By Proposition AI.5, X is p-summable relative to (F,G) iff I_X has a σ-additive extension to P and I_X has bounded semivariation on R (rather than on P) with respect to (F, L^p_G). It follows that the problem of summability reduces to a great extent to that of the σ-additive extension of I_X from R to P.

(g) Once the summability of X is assured, we can apply Appendix AI to the measure I_X and define an integral with respect to I_X. This will lead to the stochastic integral which will be studied in section 3.

Extension of I_X to stochastic intervals

The σ-algebra P of predictable subsets of ℝ₊ × Ω contains stochastic intervals of the form

(S,T] = { (t,ω) ∈ ℝ₊ × Ω : S(ω) < t ≤ T(ω) },

where S ≤ T are stopping times (possibly infinite). Other stochastic intervals are similarly defined. If I_X is extended to P, it is convenient to extend it further to sets of the form {∞} × A, with A ∈ F_∞ := ⋁_{t≥0} F_t, by setting I_X({∞} × A) = 0. Then P ∪ ({∞} × F_∞) generates the σ-algebra P[0,∞] of predictable subsets of ℝ̄₊ × Ω, where ℝ̄₊ = [0,∞], and the above extension is still σ-additive. Then I_X((S,T]) has the same value whether (S,T] is regarded as a subset of ℝ̄₊ × Ω, or as a subset of ℝ₊ × Ω defined by (S,T] = { (t,ω) ∈ ℝ₊ × Ω : S(ω) < t ≤ T(ω) }. Similar considerations hold for other types of predictable stochastic intervals, and in particular for I_X([T]) if T is a predictable stopping time.

The following theorem extends the computation of I_X from predictable rectangles to stochastic intervals.

2.2 THEOREM. Assume that X is p-summable relative to (F,G) and regard I_X as the unique extension of I_X to P. Then

(a) There is a random variable, denoted by X_∞, belonging to L^p_E, such that lim_{t→∞} X_t = X_∞ in L^p_E, and I_X((t,∞) × A) = 1_A(X_∞ − X_t), for A ∈ F_t. If X has a pointwise left limit X_{∞−}, then X_{∞−} = X_∞ a.s.

Consider now X extended at ∞ by a representative of X_∞, and define X_{∞−} to be X_∞.

(b) For any stopping time T, we have X_T ∈ L^p_E and I_X([0,T]) = X_T.

(c) If T is a predictable stopping time, then X_{T−} ∈ L^p_E and I_X([0,T)) = X_{T−} and I_X([T]) = ΔX_T.

(d) If S ≤ T are stopping times, then I_X((S,T]) = X_T − X_S. If S is predictable, then I_X([S,T]) = X_T − X_{S−}. If T is predictable, then I_X((S,T)) = X_{T−} − X_S. If both S and T are predictable, then I_X([S,T)) = X_{T−} − X_{S−}.

Proof. Let tn / 00. Since Ix is O'-additive on 'P, we have Ix([O, 00) x !l) = limn Ix([O, tn] x !l) = limXtn in L~. Set Xoo- = Xoo = Ix([O, 00) x !l). The

rest of (a) easily follows.

To prove (b), assume first that T is a simple stopping time; it follows that I_X((T,∞)) = X_∞ − X_T. For the general case, when T is an arbitrary stopping time, let T_n ↓ T, where the T_n are simple stopping times. Since I_X is σ-additive, we have I_X((T,∞)) = lim_n I_X((T_n,∞)) = lim_n (X_∞ − X_{T_n}) in L^p_G. By right continuity of X, we have X_∞ − X_T = lim_n (X_∞ − X_{T_n}) a.s., hence X_T ∈ L^p_G and (b) follows.

To prove (c), let T be predictable and let T_n ↗ T, where each T_n is a stopping time. Hence I_X([0,T)) = lim_n I_X([0,T_n]) = lim_n X_{T_n} = lim_n (X_{T_n} 1_{{T<∞}} + X_{T_n} 1_{{T=∞}}) = X_{T−} 1_{{T<∞}} + X_∞ 1_{{T=∞}} = X_{T−}, in L^p_G, and the rest of (c) follows, as well as (d).
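For reference, the values of I_X on the basic stochastic intervals established in Theorem 2.2 can be collected in one display (a LaTeX restatement of the formulas above; no new content):

```latex
\begin{aligned}
I_X([0,T]) &= X_T, & I_X((S,T]) &= X_T - X_S,\\
I_X([0,T)) &= X_{T-}, & I_X([T]) &= \Delta X_T = X_T - X_{T-},
\end{aligned}
\qquad \text{the formulas involving left limits requiring predictability.}
```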


Summability criteria

The following theorems give necessary and sufficient conditions for a process X to be p-summable. It is interesting to note that, if E does not contain a copy of c₀, the mere boundedness of I_X on R implies that X is p-summable relative to (ℝ,E); and bounded semivariation on R, relative to (F, L^p_G), implies that X is p-summable relative to (F,G).

Summability of X reduces to σ-additivity of I_X, which will be studied in the next subsection.

One of the main results of this section is the following.

2.3 THEOREM. Assume that E does not contain a copy of c₀. If I_X is bounded in L^p_E on R, then X is p-summable relative to (ℝ,E). If I_X has bounded semivariation on R, relative to (F, L^p_G), then X is p-summable relative to (F,G).

The above theorem will follow from our fundamental σ-additive extension Theorem 2.5 infra, and the fact that if a vector measure has finite semivariation on R, relative to a pair (F,G), then its extension to P, if it exists, has finite semivariation on P relative to (F,G) (Theorem AI.5 infra).

We state a corollary of the above theorem.

2.4 COROLLARY. Assume X is real valued and regard ℝ ⊂ L(F,F). Then X is summable relative to (F,F) if and only if I_X has bounded semivariation on R relative to (F, L^p_F).

σ-additivity and the extension of I_X

For every g ∈ L^q_{G*}, we denote by G = (G_t)_{t≥0} the martingale defined by G_t = E(g|F_t), and by XG the real valued process (⟨X_t,G_t⟩)_{t≥0}, where (x,x*) ↦ ⟨x,x*⟩ is the "duality mapping" on G × G*. We also denote by ⟨f,g⟩ = E(⟨f(·),g(·)⟩) the duality mapping on L^p_G × L^q_{G*}.

The following theorem gives a characterization of those processes X for which I_X has a σ-additive extension to P. Note that just requiring boundedness of I_X on R implies that ⟨I_X,z⟩ is σ-additive for any z belonging to a norming space Z ⊂ L^q_{G*}, and in the case where E does not contain c₀, this is sufficient for I_X to have a σ-additive


extension from P into L^p_G. The proof of this theorem relies heavily on the general extension Theorems AI.1, AI.2, AI.3 in the appendix AI for vector measures.

The main part of the theorem is the equivalence of (1) and (2). This is done by proving the equivalence of the first 6 assertions. The equivalence with the rest of the assertions is done for the sake of completeness.

2.5 THE EXTENSION THEOREM. If E does not contain a copy of c₀, then the following assertions (1)-(10) are equivalent. If E is any general Banach space, then assertions (2)-(10) are equivalent and (1) implies (2).

(1) I_X can be extended to a σ-additive measure on P.

(2) I_X is bounded on R, the ring generated by the predictable rectangles in ℝ.

Let Z ⊂ L^q_{G*} be any norming subspace for L^p_G.

(3) For each g ∈ Z, the real measure ⟨I_X,g⟩ is bounded on R.

(4) For each g ∈ Z, XG is a quasimartingale on (0,∞).

(5) For each g ∈ Z, XG is a quasimartingale on (0,∞].

(6) For each g ∈ Z, the measure ⟨I_X,g⟩ is σ-additive and bounded on R.

(7) For each x* ∈ E*, the measure I_{x*X} : R → L^p is bounded in L^p on R.

(8) For each x* ∈ E*, the measure I_{x*X} : R → L^p is bounded in L^p and is σ-additive on R.

(9) For each g ∈ Z, XG is a quasimartingale on (0,∞) (or on (0,∞]) and (XG)* := sup_t |(XG)_t| is integrable.

(10) For each g ∈ Z, XG is a quasimartingale on (0,∞) (or on (0,∞]) of class (D).

Proof. The proof will be done in the following way: 1 ⇒ 2 ⇔ 3 ⇔ 4 ⇔ 5 ⇔ 6 ⇒ 1; 2 ⇒ 6 ⇒ 7 ⇒ 2; 7 ⇔ 8; and 5 ⇔ 9 ⇒ 10 ⇒ 6 ⇔ 5. The only implication that requires E not to contain a copy of c₀ is 6 ⇒ 1. All other implications are valid for any Banach space E.

The implication 1 ⇒ 2 is evident (since any σ-additive measure on a σ-algebra is bounded). The implication 2 ⇒ 3 is also evident. To prove 3 ⇒ 2 we remark that for each set A ∈ R, the linear functional g ↦ ⟨I_X(A),g⟩ on Z is continuous. Since Z is norming for L^p_G, we can embed L^p_G ⊂ Z* isometrically.


If we assume 3, then

sup{|⟨I_X(A),g⟩| : A ∈ R} < ∞ for each g ∈ Z;

by the Banach-Steinhaus theorem we deduce that

sup{‖I_X(A)‖_p : A ∈ R} < ∞,

that is (2).

Let us prove 3 ⇔ 4. Let g ∈ L^q_{G*} and consider the real measure ⟨I_X,g⟩ on R defined as follows:

⟨I_X,g⟩(A) = ∫ ⟨I_X(A),g⟩ dP, for A ∈ R.

We shall use the results concerning quasimartingales given in Appendix AII. We shall show that ⟨I_X,g⟩ is bounded on R if and only if XG is a quasimartingale on (0,∞). To prove this, we first show that

⟨I_X,g⟩(A) = μ_{XG}(A), for A ∈ R,

where μ_{XG} is the Doléans function of the process XG. In fact, for B ∈ F_0 we have

⟨I_X,g⟩([0_B]) = ∫ 1_B ⟨X_0,g⟩ dP = ∫ 1_B ⟨X_0,G_0⟩ dP = μ_{XG}([0_B]).

For (s,t]×B, with B ∈ F_s, we have

⟨I_X,g⟩((s,t]×B) = ∫ ⟨1_B(X_t − X_s), g⟩ dP

= ∫_B ⟨X_t,G_t⟩ dP − ∫_B ⟨X_s,G_s⟩ dP = μ_{XG}((s,t]×B).

Hence, ⟨I_X,g⟩ is bounded on A(0,∞) if and only if μ_{XG} is bounded on A(0,∞), which is true if and only if XG is a quasimartingale on (0,∞).

It follows that 3 ⇔ 4, since R = A[0] ∪ A(0,∞), ⟨I_X,g⟩ = μ_{XG} on A(0,∞) and I_X is always bounded on A[0].

We now show 4 ⇔ 5. Obviously 5 ⇒ 4. If (4) holds, then from 2 ⇔ 3 ⇔ 4 proved above, we deduce that I_X is bounded on R. Thus for g ∈ Z, we have


hence

Thus XG is a quasimartingale on (0,∞], that is (5).

Next we prove 5 ⇔ 6. The implication 6 ⇒ 5 is evident. Assume (5) and let g ∈ Z. Then XG is a quasimartingale on (0,∞], where (XG)_∞ = 0 by definition. For each n, define the stopping time T_n = inf{t : |X_t| > n}. Then T_n ↗ ∞ and |X_t| ≤ n on [0,T_n). At this stage we do not know if X_{T_n} belongs to L^p_G, but since XG is a quasimartingale on (0,∞), we know that ⟨X_{T_n}, G_{T_n}⟩ ∈ L^1, and

Since G is a uniformly integrable martingale, it follows that (XG)^{T_n} is a quasimartingale of class (D) on (0,∞), hence the corresponding measure μ_{(XG)^{T_n}} is σ-additive with bounded variation on A(0,∞), therefore it can be extended to a σ-additive measure with bounded variation on the σ-algebra P(0,∞), the class of predictable subsets of (0,∞)×Ω. Now for each predictable rectangle (s,t]×A, with s < t ≤ ∞ and A ∈ F_s, we have

μ_{(XG)^{T_n}}((s,t]×A) = μ_{XG}(((s,t]×A) ∩ [0,T_n]),

therefore

μ_{(XG)^{T_n}}(B) = μ_{XG}(B ∩ [0,T_n]), for B ∈ P(0,∞).

It follows that μ_{XG} is σ-additive on the σ-ring P(0,∞) ∩ [0,T_n]; consequently, μ_{XG} is σ-additive on the ring B = ⋃_{1≤n<∞} P(0,∞] ∩ [0,T_n]. On the other hand, μ_{XG} is bounded on A(0,∞] since XG is a quasimartingale on (0,∞], hence μ_{XG} has bounded variation on A(0,∞]. It follows that μ_{XG} is σ-additive and has bounded variation on the ring B ∩ A(0,∞], which generates P(0,∞]; hence μ_{XG} can be extended to a σ-additive measure with bounded variation on P(0,∞]. Since ⟨I_X,g⟩ = μ_{XG} on A(0,∞), it follows that ⟨I_X,g⟩ is bounded and σ-additive on A(0,∞). Since ⟨I_X,g⟩ is bounded and σ-additive on A[0], it follows that ⟨I_X,g⟩ is bounded and σ-additive on R = A[0,∞); hence (6) holds.


To prove 6 ⇒ 1, we assume that E does not contain c₀. If we assume (6), then ⟨I_X,g⟩ is bounded and σ-additive on R, for g ∈ Z. By Theorem AI.3, I_X can be extended to a σ-additive measure on P = σ(R), that is (1).

We show now that 6 ⇒ 7. If we assume (6), then by the equivalence 2 ⇔ 6 proved above, I_X is bounded on R. Then for each x* ∈ E*, the measure I_{x*X} = x* ∘ I_X is bounded on R, which is (7).

Next we show 7 ⇒ 2. Assume (7), and let x* ∈ E* and φ ∈ L^q. Then g = x*φ ∈ L^q_{E*}. For A ∈ R we have

hence the measure ⟨I_X,g⟩ is bounded on R. It follows that ⟨I_X,g⟩ is bounded on R for every step function g ∈ L^q_{E*}. Since the step functions of L^q_{E*} form a norming space for L^p_E, we have proved (3) for this particular norming space. Now, since 2 ⇔ 3 for any norming space, assertion (2) follows.

Now we prove 7 ⇔ 8. Obviously 8 ⇒ 7. Assume (7) and prove (8). By the implication 7 ⇒ 2 proved above, I_X is bounded in L^p on R. By the equivalence 2 ⇔ 5 applied to I_{x*X}, we deduce that ⟨I_{x*X},φ⟩ is σ-additive and bounded for every φ ∈ L^q. Since L^p does not contain a copy of c₀, by applying Theorem AI.3, it follows that I_{x*X} can be extended to a σ-additive measure on P with values in L^p. In particular, I_{x*X} is σ-additive and bounded on R, which is (8).

Finally we prove the implications 5 ⇔ 9 ⇒ 10 ⇒ 6 ⇔ 5.

Let us prove that 5 ⇔ 9. Obviously 9 ⇒ 5. Assume (5) and let g ∈ Z. Then XG is a quasimartingale on (0,∞]. We have to prove that (XG)* is integrable. The proof will be carried out in several steps.

(a) By Theorem AII.9, XG has a decomposition XG = M + V, where M is a real valued local martingale and V is a real valued predictable process with integrable variation |V|. For each t, since ⟨X_t,G_t⟩ and V_t are integrable, we deduce that M_t is integrable. Then M = XG − V is a quasimartingale on (0,∞], thus the stochastic measure I_M is bounded in L^1 on R. As a quasimartingale, we define M_∞ = 0; thus, for any stopping time T, we have M_T ∈ L^1. I_M can be extended to the algebra A generated by the stochastic intervals [0_A], with A ∈ F_0, and (S,T], with S ≤ T, by I_M((S,T]) = M_T − M_S.


(b) I_M is bounded on A(0,∞]. To see this, let

a = sup{‖I_M(A)‖_1 : A ∈ R} < ∞.

If T is a simple stopping time, then [0,T] ∈ R, hence ‖M_T‖_1 = ‖I_M([0,T])‖_1 ≤ a. If T is an arbitrary stopping time, then there is a decreasing sequence (T_n) of simple stopping times converging to T. Then M_{T_n} → M_T in L^1, hence ‖M_T‖_1 = lim ‖M_{T_n}‖_1 ≤ a. Thus ‖I_M((S,T])‖_1 ≤ 2a, if S ≤ T are stopping times. Hence I_M is bounded on A(0,∞].

(c) There exists an increasing sequence T_n ↗ ∞ of stopping times such that, for each n, M^{T_n} is a uniformly integrable martingale and (M^{T_n})* ∈ L^1. In fact, define the stopping times U_n = inf{t : |M_t| ≥ n}. Let (V_n) be an increasing sequence of stopping times, with V_n ↗ ∞, such that each M^{V_n} is a uniformly integrable martingale. Then T_n = U_n ∧ V_n is the required sequence, since for each n we have

(d) The sequence ((M^{T_n})*) is increasing and bounded in L^1. In fact, by the corollary of Theorem 12.12 in [Ku.1] we have

where Ĩ_M is the semivariation of I_M relative to (ℝ, L^1).

(e) M* ∈ L^1, since

thus

(f) Since (XG)* ≤ M* + V*, we deduce that (XG)* is integrable, which proves (9).

Obviously, 9 ⇒ 10. Now we shall assume (10) and prove (6). Let g ∈ Z. Then XG is a quasimartingale of class (D) on (0,∞]. The corresponding measure μ_{XG} is σ-additive with bounded variation on A(0,∞]. From the


equality ⟨I_X,g⟩ = μ_{XG} on A(0,∞), it follows that ⟨I_X,g⟩ is σ-additive and bounded on A(0,∞); hence it is also bounded on R = A[0,∞), which is (6), which in turn is equivalent to (5). This concludes the proof of the theorem.

3. The stochastic integral

In this section we shall define the stochastic integral with respect to a p-summable process X and study various properties of this integral, including various types of convergence theorems, some of them derived from the study of the weak topology of the Lebesgue space constructed in Appendix AI.

Definition of the integral ∫ H dI_X

The setting for this section is the same as that of section 2. We shall always assume in this section that X : ℝ → E ⊂ L(F,G) is a p-summable process relative to the pair (F,G); hence the stochastic measure I_X is a σ-additive measure on P with values in L^p_E ⊂ L(F, L^p_G). As in the previous section, we can extend I_X to P[0,∞], with I_X({∞}×A) = 0 for A ∈ F_∞ = ⋁_{t≥0} F_t. As usual we identify functions with their equivalence classes in L^p_E or L^p_G.

Since I_X has bounded semivariation relative to the pair (F, L^p_G), we can apply the integration theory of section 1 and Appendix AI, with Σ = P or Σ = P[0,∞], m = I_X, E replaced by L^p_E, G replaced by L^p_G, and Z ⊂ (L^p_G)* a norming subspace for L^p_G (for example, we can take Z to be the space of simple functions in L^q_{G*}, where 1/p + 1/q = 1). For the reader's convenience, we shall translate some of the general theory in AI to our particular setting.

For z ∈ Z, consider the measure

m_z = (I_X)_z : P[0,∞] → F*

defined, for A ∈ P[0,∞] and y ∈ F, as follows:

⟨y, m_z(A)⟩ = ⟨m(A)y, z⟩ = ∫ ⟨I_X(A)(ω)y, z(ω)⟩ dP(ω).

Then we have


We note that {∞}×Ω is |m_z|-negligible for every z. If p is fixed, to simplify notation, we shall write Ĩ = Ĩ_X and Ĩ_{F,G} = Ĩ_{F,L^p_G}. We shall also write I_{F,G} = (I_X)_{F,G} = (I_X)_{F,L^p_G} for the set of positive σ-additive measures |(I_X)_z| = |m_z| with z ∈ Z and |z| ≤ 1.

For any Banach space D, we denote by F_D(I_{F,G}) = F_D(I_{F,L^p_G}) the space of all predictable processes H : ℝ → D such that

Ĩ_{F,G}(H) = Ĩ_{F,L^p_G}(H) = sup{∫ |H| d|m_z| : |z| ≤ 1} < ∞.

The definition of F_D(I_{F,G}) and Ĩ_{F,G}(H) is independent of the norming space Z. For any extension of H to ℝ₊×Ω, the value of Ĩ_{F,G}(H) is the same.

We know that F_D(I_{F,G}) is a vector space with seminorm Ĩ_{F,G}, and F_D(I_{F,G}) is complete for this seminorm. For any set C ⊂ F_D(I_{F,G}), we denote by F_D(C, I_{F,G}) the closure of C in F_D(I_{F,G}).

If D = F, we can define the integral ∫ H dI_X ∈ Z*, for H ∈ F_F(I_{F,G}) = F_F(I_{F,L^p_G}), and the mapping H ↦ ∫ H dI_X is a continuous linear mapping from F_F(I_{F,G}) into Z*. We have

⟨z, ∫ H dI_X⟩ = ∫ H d(I_X)_z, for z ∈ Z,

and

‖∫ H dI_X‖_{Z*} ≤ Ĩ_{F,G}(H).

The integral ∫ H dI_X depends on the norming space Z. But the integral corresponding to Z is the restriction to Z of the integral corresponding to

To further simplify notation, we write

If H ∈ F_{F,G}(X), then for every t ≥ 0 we have 1_{[0,t]}H ∈ F_{F,G}(X). We denote

∫_{[0,t]} H dI_X = ∫ 1_{[0,t]} H dI_X.

Also we define

∫_{[0,∞]} H dI_X := ∫_{[0,∞)} H dI_X := ∫ H dI_X.


Thus for each H ∈ F_{F,G}(X), we obtain a family (∫_{[0,t]} H dI_X)_{t∈[0,∞]} of elements of Z*. We are interested in the subspace of F_{F,G}(X) which consists of processes H such that for every t ∈ [0,∞], the integral ∫_{[0,t]} H dI_X belongs to the subspace L^p_G of Z*. In this case we denote by the same symbol the equivalence class ∫_{[0,t]} H dI_X as well as any representative of this class. If in each equivalence class ∫_{[0,t]} H dI_X we choose a representative, we obtain a process (∫_{[0,t]} H dI_X)_{t∈[0,∞]} with values in G, such that ∫_{[0,t]} H dI_X ∈ L^p_G for each t. This process does not necessarily have a cadlag modification. This situation is discussed in detail in the following subsections. Before this, we shall discuss some general convergence theorems.

The Vitali and Lebesgue theorems can now be stated for sequences (H^n) in F_{F,G}(X) which converge in measure to a process H (and satisfy additional conditions), and the conclusion is that H^n → H in F_{F,G}(X), hence ∫ H^n dI_X → ∫ H dI_X in (L^p_G)**. Pointwise convergence of the H^n to H will not suffice for this conclusion unless the family of measures I_{F,G} is uniformly σ-additive. We will postpone the statements of these theorems until we are able to add an important property to the conclusion, namely, that the integrals belong to L^p_G, and there exists a subsequence (n_k) such that ∫_{[0,t]} H^{n_k} dI_X → ∫_{[0,t]} H dI_X, uniformly on compact time intervals (Theorems 3.14 and 3.15 infra).

At this time we shall state a very useful version of the Lebesgue theorem for pointwise convergence, in which the conclusion involves ∫ H^n dI_X → ∫ H dI_X weakly in L^p_G, but not necessarily the convergence of H^n to H in F_{F,G}(X).

3.1 THEOREM. Let (H^n)_{0≤n<∞} be a sequence of elements from F_{F,G}(X) such that |H^n| ≤ |H^0| for each n, and assume that H^n → H pointwise.

If ∫ H^n dI_X ∈ L^p_G for each n ≥ 1 and if the sequence (∫ H^n dI_X)_n converges pointwise on Ω, weakly in G, then ∫ H dI_X ∈ L^p_G, and ∫ H^n dI_X → ∫ H dI_X in the σ(L^p_G, L^q_{G*}) topology of L^p_G, as well as pointwise, weakly in G. If (∫ H^n dI_X)_n converges pointwise, strongly in G, then ∫ H^n dI_X → ∫ H dI_X strongly in L^p_G.

Proof. Since |H| ≤ |H^0|, we deduce that H ∈ F_{F,G}(X). Let z ∈ L^q_{G*}. We can apply Lebesgue's theorem to (H^n) in the space L^1_F(|m_z|), and deduce that


E(⟨(∫ H^n dI_X)(·), z(·)⟩) → ⟨∫ H dI_X, z⟩.

If h ∈ L^∞(P), then hz ∈ L^q_{G*}; hence, replacing z with hz, we obtain

E(h(·)⟨(∫ H^n dI_X)(·), z(·)⟩) → ⟨∫ H dI_X, hz⟩.

Thus the sequence (⟨(∫ H^n dI_X)(·), z(·)⟩) is weakly Cauchy in L^1(P), hence the indefinite integrals of the above sequence are uniformly absolutely continuous with respect to P. If we let φ(ω) := lim_n (∫ H^n dI_X)(ω) weakly in G, then the Vitali convergence theorem implies that ⟨φ(·), z(·)⟩ ∈ L^1(P) and (⟨(∫ H^n dI_X)(·), z(·)⟩) converges in L^1(P) to ⟨φ(·), z(·)⟩, hence the expectations E(⟨(∫ H^n dI_X)(·), z(·)⟩) converge to E(⟨φ, z⟩). Since z ∈ L^q_{G*} was arbitrary, we deduce that φ ∈ L^p_G (by Corollary 2, p. 236 in [D.1]). We then deduce that ⟨φ, z⟩ = E(⟨φ(·), z(·)⟩) = ⟨∫ H dI_X, z⟩, hence ∫ H dI_X = φ ∈ L^p_G and ∫ H^n dI_X → ∫ H dI_X pointwise, weakly in G. From the above, it follows that ∫ H^n dI_X → ∫ H dI_X in the σ(L^p_G, L^q_{G*}) topology of L^p_G. In particular, the above sequence converges in the σ(L^1_G, L^∞_{G*}) topology of L^1_G, hence by Theorem 4.4 in [B-D.3], the indefinite integrals ∫ |∫ H^n dI_X| dP are uniformly σ-additive on F. If φ(ω) = lim_n (∫ H^n dI_X)(ω), strongly in G, we can apply the Vitali theorem for L^1_G and deduce that ∫ H^n dI_X → ∫ H dI_X in L^1_G.

The stochastic integral

We shall be interested in the subspace of F_{F,G}(X) of processes H that, in addition to the property that ∫_{[0,t]} H dI_X ∈ L^p_G for each t, also have the property that the process (∫_{[0,t]} H dI_X)_{t∈[0,∞]} has a cadlag modification. Note that, since X is cadlag, this holds for simple processes of the form

where the sets in the definition of H are predictable. We have

∫_{[0,t]} H dI_X = 1_{A_0} x_0 X_0 + Σ_{1≤i≤n} 1_{A_i} x_i (X_{t_{i+1}∧t} − X_{t_i∧t})

and the right-hand side is cadlag.


We now define our Lebesgue space of processes.

3.2 DEFINITION. We denote by L^1_{F,G}(X) the space of processes H ∈ F_{F,G}(X) satisfying the following two conditions:

(1) ∫_{[0,t]} H dI_X ∈ L^p_G for every t ∈ [0,∞];

(2) The process (∫_{[0,t]} H dI_X)_{t∈[0,∞]} has a cadlag modification.

The processes H ∈ L^1_{F,G}(X) are said to be integrable with respect to X.

If H ∈ L^1_{F,G}(X), then any cadlag modification of (∫_{[0,t]} H dI_X)_{t∈[0,∞]} is called the stochastic integral of H with respect to X and is denoted by ∫ H dX or H·X:

(H·X)_t = (∫ H dX)_t = ∫_{[0,t]} H dI_X a.s.

We note that if X is real valued, we regard ℝ as being embedded in L(F,F), and thus the space of F-valued integrable processes is denoted by L^1_{F,F}(X).

We shall see later (Corollary 3.11 infra) that L^1_{F,G}(X) is complete relative to the seminorm Ĩ_{F,G}, and that L^1_{F,G}(X) ⊃ E_F, the class of predictable "elementary processes" (see Corollary 3.6 infra). If I_{F,G} is uniformly σ-additive, then F_{F,G}(B,X) ⊂ L^1_{F,G}(X) (Corollary 3.12 infra), where B is the set of bounded processes.

We note that the stochastic integral is uniquely defined up to an evanescent set. For t = ∞, we have

(H·X)_∞ = ∫_{[0,∞]} H dX = ∫_{[0,∞)} H dI_X + ∫_{{∞}×Ω} H dI_X

= ∫_{[0,∞)} H dI_X = ∫ H dI_X.

For simple, R-measurable processes H, the stochastic integral can be computed pathwise, as a Stieltjes integral:

(H·X)_t(ω) = (∫_{[0,t]} H dI_X)(ω) = ∫_{[0,t]} H_s(ω) dX_s(ω).

This property remains valid whenever both the stochastic integral and the pathwise Stieltjes integral appearing above are defined. Moreover, we prove below that if H ∈ F_{F,G}(X) and if the Stieltjes integral ∫_{[0,t]} H_s(ω) dX_s(ω) is defined for every t ≥ 0, then necessarily H ∈ L^1_{F,G}(X).


3.3 THEOREM. Assume that X has finite variation |X| and that X is p-summable relative to (F,G). If H ∈ F_{F,G}(X) and if ∫_{[0,t]} |H_s(ω)| d|X|_s(ω) < ∞, for every t ∈ ℝ₊ and ω ∈ Ω, then H ∈ L^1_{F,G}(X) and

(H·X)_t(ω) = ∫_{[0,t]} H_s(ω) dX_s(ω).

Proof. As we mentioned above, if H = 1_A x, for x ∈ F and A ∈ R, then the theorem is true. By a monotone class argument, this also holds if A ∈ P, hence for H any simple predictable process.

Now suppose that H satisfies the hypotheses of the above theorem. Let (H^n) be a sequence of simple predictable processes such that H^n → H pointwise and |H^n| ≤ |H| for each n. Let t > 0 and ω ∈ Ω. Using the Lebesgue theorem in L^1(d|X|_·(ω)), we deduce that

∫_{[0,t]} |H^n_s(ω) − H_s(ω)| d|X|_s(ω) → 0,

and

∫_{[0,t]} H^n_s(ω) dX_s(ω) → ∫_{[0,t]} H_s(ω) dX_s(ω).

Now we use the Lebesgue Theorem 3.1 to conclude that ∫_{[0,t]} H dI_X ∈ L^p_G and ∫_{[0,t]} H^n dI_X → ∫_{[0,t]} H dI_X pointwise. Hence (∫_{[0,t]} H dI_X)(ω) = ∫_{[0,t]} H_s(ω) dX_s(ω) a.s. Since the Stieltjes integral is cadlag as a function of t, we have H ∈ L^1_{F,G}(X) and (H·X)_t(ω) = ∫_{[0,t]} H_s(ω) dX_s(ω).
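When X has finite variation, Theorem 3.3 says the stochastic integral can be evaluated path by path as an ordinary Stieltjes integral. A minimal numerical sketch of this pathwise evaluation (the grid, path and function names below are our own illustration, not from the text):

```python
# Toy pathwise Stieltjes evaluation of (H . X)_t(w): a left-point sum
# sum_i H_{s_i}(w) * (X_{s_{i+1}}(w) - X_{s_i}(w)) along one fixed path w.
# Illustration only; the grid and names are ours, not from the text.
def pathwise_integral(times, X_path, H_path):
    """Left-point Stieltjes sum over the grid `times` along one path."""
    total = 0.0
    for i in range(len(times) - 1):
        total += H_path[i] * (X_path[i + 1] - X_path[i])
    return total

# Example: along a path with X_s = s (finite variation) on [0, 1],
# integrating H = 1 recovers X_1 - X_0 = 1 by telescoping.
times = [k / 1000 for k in range(1001)]
print(pathwise_integral(times, times, [1.0] * len(times)))
```

The left-point choice mirrors the predictability of the integrand: the value of H on (s_i, s_{i+1}] is determined by information up to s_i.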

Remark. This equality will remain valid for locally integrable processes (Theorem 4.4 infra).

3.4 PROPOSITION. If H ∈ L^1_{F,G}(X), then for every t ∈ [0,∞) we have (H·X)_{t−} ∈ L^p_G and

(H·X)_{t−} = ∫_{[0,t)} H dI_X.

In particular,

(H·X)_{∞−} = (H·X)_∞ = ∫ H dI_X.

The mapping t ↦ (H·X)_t is cadlag in L^p_G.

Proof. Let t_n ↗ t. Then 1_{[0,t_n]}H → 1_{[0,t)}H pointwise, |1_{[0,t_n]}H| ≤ |H| for each n, ∫ 1_{[0,t_n]}H dI_X = (H·X)_{t_n} ∈ L^p_G and (H·X)_{t_n} → (H·X)_{t−}. By


Theorem 3.1, we have ∫ 1_{[0,t)}H dI_X ∈ L^p_G and ∫ 1_{[0,t_n]}H dI_X → ∫ 1_{[0,t)}H dI_X pointwise. Hence (H·X)_{t−} = ∫ 1_{[0,t)}H dI_X. The final conclusion follows from Theorem 3.1.

Notation and remarks

If C ⊂ F_{F,G}(X), we denote the closure of C in F_{F,G}(X) by F_{F,G}(C,X). If C consists of processes H such that ∫ H dI_X ∈ L^p_G for every H ∈ C, then by continuity of the integral we still have ∫ H dI_X ∈ L^p_G for every H ∈ F_{F,G}(C,X). We shall see later (Corollary 3.11) that if C ⊂ L^1_{F,G}(X), then F_{F,G}(C,X) ⊂ L^1_{F,G}(X). In this case we write L^1_{F,G}(C,X) = F_{F,G}(C,X).

Particular spaces C of interest are:

(1) The space B_F of bounded, predictable processes with values in F. We write F_{F,G}(B,X) for F_{F,G}(B_F,X);

(2) The space S_F(R) (respectively, S_F(P)) of simple, F-valued processes over R = A[0,∞) (respectively, over P). The closures of these sets in F_{F,G}(X) will be denoted by F_{F,G}(S(R),X) (respectively, F_{F,G}(S(P),X));

(3) The space E_F of predictable, elementary, F-valued processes of the form

H = H_0 1_{{0}} + Σ_{1≤i≤n} H_i 1_{(T_i, T_{i+1}]},

where (T_i)_{0≤i≤n+1} is an increasing family of stopping times with T_0 = 0, and H_i is bounded and F_{T_i}-measurable for each i. We let F_{F,G}(E,X) denote the closure of this set.

We shall see (Corollary 3.6 infra) that S_F(R) and E_F are contained in L^1_{F,G}(X), hence L^1_{F,G}(S(R),X) = F_{F,G}(S(R),X) and L^1_{F,G}(E,X) = F_{F,G}(E,X). By Proposition AI.11, we have

More generally, if the set of measures I_{F,G} is uniformly σ-additive, then

F_{F,G}(S(R),X) = F_{F,G}(B,X) = L^1_{F,G}(B,X).

Moreover, if X has integrable variation, or if X is a square integrable martingale with values in a Hilbert space E, we have F_{F,G}(S(R),X) = L^1_{F,G}(X) = F_{F,G}(X) (see Theorems 3.27 and 3.32 infra).


The stochastic integral of elementary processes

For simple predictable processes defined on ℝ₊×Ω, of the form

H = Σ_{1≤i≤n} 1_{A_i} y_i with A_i ∈ P[0,∞] and y_i ∈ F,

we have

∫ H dI_X = Σ_{1≤i≤n} I_X(A_i) y_i ∈ L^p_G.

If H' is the restriction of H to ℝ, then H' is predictable and ∫ H' dI_X = ∫ H dI_X. However, it is not certain that H is integrable with respect to X, because of the cadlag requirement. We shall see that if I_{F,G} is uniformly σ-additive, then these processes are integrable with respect to X (see Theorem 3.12 infra). In particular, the real valued, simple, predictable processes are integrable with respect to X, since I_{ℝ,E} is uniformly σ-additive.

The simplest class of processes integrable with respect to X is that of the simple processes over the algebra A[0,∞] of the form

where 0 = t_0 ≤ t_1 < ⋯ < t_n < t_{n+1} ≤ ∞, y_i ∈ F and A_i ∈ F_{t_i}. According to the definition of the integral for simple processes, for each t ∈ [0,∞], the integral ∫_{[0,t]} H dI_X can be computed pathwise:

∫_{[0,t]} H dI_X = y_0 1_{A_0} X_0 + Σ_{1≤i≤n} 1_{A_i} y_i (X_{t_{i+1}∧t} − X_{t_i∧t}).

This integral belongs to L^p_G and is cadlag, hence H is integrable with respect to X and the stochastic integral (H·X)_t = ∫_{[0,t]} H dI_X can be computed pathwise by the above sum. In particular, this is the case for simple processes H over R = A[0,∞), having the above form but with t_{n+1} < ∞.
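The pathwise sum above is elementary to evaluate along a fixed ω. A toy sketch for real-valued X and scalar y_i (our own illustration; all names are ours, not from the text):

```python
# Toy evaluation of the pathwise sum
# (H.X)_t = y_0 1_{A_0} X_0 + sum_i 1_{A_i} y_i (X_{t_{i+1} ^ t} - X_{t_i ^ t})
# along one fixed path, where ^ denotes minimum.
def elementary_sum(t, X, terms, ind_A0=0.0, y0=0.0):
    """X: map s -> X_s along the path; terms: list of (t_i, t_next, ind_Ai, y_i)."""
    total = y0 * ind_A0 * X(0.0)
    for t_i, t_next, ind, y in terms:
        total += ind * y * (X(min(t_next, t)) - X(min(t_i, t)))
    return total

# Example: X_s = s, a single term y_1 = 2 on (1, 3] with the path in A_1;
# at t = 2 the contribution is 2 * (X_2 - X_1) = 2.
print(elementary_sum(2.0, lambda s: s, [(1.0, 3.0, 1.0, 2.0)]))  # 2.0
```

Note that for t below the left endpoint of every interval the sum vanishes, and as t crosses jump times of X the sum inherits the cadlag behavior of the path, as stated above.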

A more general class is that of the simple processes of the form

where A ∈ F_0, (T_i)_{1≤i≤n+1} is an increasing family of stopping times, and y_i ∈ F. From Corollary 3.6 infra it will follow that any such process is integrable with respect to X and the stochastic integral can be computed pathwise:


A still larger class of integrable processes is that of the elementary processes of the form

where (T_i)_{0≤i≤n+1} is an increasing family of stopping times with T_0 = 0 and, for 0 ≤ i ≤ n, H_i is an F-valued, bounded random variable which is F_{T_i}-measurable. We shall prove below (Corollary 3.6) that the stochastic integral of such a process can be computed pathwise:

This will follow from the following result.

3.5 PROPOSITION. Let S ≤ T be stopping times and let h : Ω → F be an F_S-measurable, bounded random variable. Then

If S is predictable and h is F_{S−}-measurable, then

and

∫ h 1_{[S]} dI_X = h ΔX_S.

Proof. If h = 1_A y, with A ∈ F_S and y ∈ F, then

Thus the equality holds when h is a simple function. For the general case, let h_n be simple functions converging pointwise to h with |h_n| ≤ |h| for each n. By applying the Lebesgue Theorem 3.1, we obtain the desired result.

Assume now that S is predictable and h is F_{S−}-measurable. If h = 1_A y, with A ∈ F_{S−} and y ∈ F, then S_A is a predictable stopping time and

∫ 1_A y 1_{[S]} dI_X = ∫ 1_{[S_A]} y dI_X = ΔX_{S_A} y = 1_A y I_X([S]);


thus


∫ h 1_{[S,T]} dI_X = ∫ 1_A y 1_{[S]} dI_X + ∫ 1_A y 1_{(S,T]} dI_X

= 1_A y (∫ 1_{[S]} dI_X + ∫ 1_{(S,T]} dI_X)

= 1_A y ∫ 1_{[S,T]} dI_X = h ∫ 1_{[S,T]} dI_X.

As before, the conclusion holds for simple functions, and using the Lebesgue Theorem 3.1, we obtain the general case.

3.6 COROLLARY. Every elementary process

is integrable with respect to X and its stochastic integral can be computed pathwise, as a Stieltjes integral:

Stochastic integrals and stopping times

In this subsection we continue to assume that X is p-summable relative to (F,G). We shall examine the relationship between stochastic integrals and stopping times. First we extend Proposition 3.5 to a more general situation.

3.7 THEOREM. Let S ≤ T be stopping times and assume either

(a) h : Ω → ℝ is bounded, F_S-measurable, and H ∈ F_{F,G}(X);

or

(b) h : Ω → F is bounded, F_S-measurable, and H ∈ F_ℝ((I_X)_{F,G}).

(1) If ∫ 1_{(S,T]}H dI_X ∈ L^p_G, in case (a) as well as in case (b), then

∫ h 1_{(S,T]}H dI_X = h ∫ 1_{(S,T]}H dI_X.

(1') If S is predictable, h is F_{S−}-measurable and ∫ 1_{[S,T]}H dI_X ∈ L^p_G, in case (a) as well as in case (b), then

∫ h 1_{[S,T]}H dI_X = h ∫ 1_{[S,T]}H dI_X.


(2) If H is integrable with respect to X, then 1_{(S,T]}H and h 1_{(S,T]}H are integrable with respect to X and

(h 1_{(S,T]}H)·X = h[(1_{(S,T]}H)·X].

(2') If S is predictable, h is F_{S−}-measurable, and H is integrable with respect to X, then 1_{[S,T]}H and h 1_{[S,T]}H are integrable with respect to X and

(h 1_{[S,T]}H)·X = h[(1_{[S,T]}H)·X].

Proof. We shall only prove (1) and (2). The case when S is predictable is similar.

Assume first hypothesis (a). Let H be of the form H = 1_{(s,t]×A} y, where A ∈ F_s and y ∈ F. By Proposition 3.5, we have

∫ h 1_{(S,T]}H dI_X = ∫ h 1_A y 1_{(S∨s, T∧t]} dI_X

= h 1_A y (X_{T∧t} − X_{S∨s}) = h ∫ 1_{(S,T]}H dI_X ∈ L^p_G.

It follows that for B ∈ R, we have

∫ h 1_{(S,T]} 1_B y dI_X = h ∫ 1_{(S,T]} 1_B y dI_X ∈ L^p_G.

For any z ∈ L^q_{G*}, we then have

∫ h 1_{(S,T]} 1_B y d(I_X)_z = ∫ 1_{(S,T]} 1_B y d(I_X)_{hz}.

The class of sets B for which the above equality holds for all z ∈ L^q_{G*} is a monotone class which contains R, hence the above equality holds for all B ∈ P and z ∈ L^q_{G*}.

Hence, for any predictable simple process H, we have

∫ h 1_{(S,T]}H d(I_X)_z = ∫ 1_{(S,T]}H d(I_X)_{hz}.

If H ∈ F_{F,G}(X), Lebesgue's theorem implies that the above equality holds for H. Assume now that ∫ 1_{(S,T]}H dI_X ∈ L^p_G. Then h ∫ 1_{(S,T]}H dI_X ∈ L^p_G and

⟨h ∫ 1_{(S,T]}H dI_X, z⟩ = ⟨∫ 1_{(S,T]}H dI_X, hz⟩

= ∫ 1_{(S,T]}H d(I_X)_{hz} = ∫ h 1_{(S,T]}H d(I_X)_z

= ⟨∫ h 1_{(S,T]}H dI_X, z⟩.


Since L^q_{G*} is norming for both L^p_G and (L^q_{G*})*, we deduce that ∫ h 1_{(S,T]}H dI_X = h ∫ 1_{(S,T]}H dI_X ∈ L^p_G, and this proves the theorem under hypothesis (a).

Assume (b), and let H : ℝ → ℝ be predictable with Ĩ_{F,G}(H) < ∞, that is

Also assume that ∫ 1_{(S,T]}H dI_X ∈ L^p_G. Consider first the case h = h'y, where y ∈ F and h' is real valued, bounded, and F_S-measurable. Then 1_{(S,T]}Hy ∈ F_{F,G}(X), and by Theorem AI.14, ∫ 1_{(S,T]}Hy dI_X = y ∫ 1_{(S,T]}H dI_X. By the first part of the proof, we have

∫ h 1_{(S,T]}H dI_X = h' ∫ 1_{(S,T]}Hy dI_X = h ∫ 1_{(S,T]}H dI_X.

This equality then holds for any F_S-measurable simple function. By approximating the general h with a dominated sequence of simple functions, and using the Lebesgue Theorem 3.1, we obtain the desired conclusion.

We now establish a theorem which is essential for the proof of the main convergence theorem. This theorem will be completed with additional properties in Theorem 3.9 infra.

3.8 THEOREM. Let H ∈ L^1_{F,G}(X) and let T be any stopping time. Then 1_{[0,T]}H ∈ L^1_{F,G}(X) and

(H·X)^T = (1_{[0,T]}H)·X.

If T is predictable, then 1_{[0,T)}H ∈ L^1_{F,G}(X) and

(H·X)^{T−} = (1_{[0,T)}H)·X.

Proof. Suppose that T is a simple stopping time of the form T = Σ_{1≤i≤n} 1_{A_i} t_i, with 0 ≤ t_1 < ⋯ < t_n ≤ +∞, the A_i ∈ F_{t_i} mutually disjoint, and ⋃_{1≤i≤n} A_i = Ω. For each ω ∈ Ω, there is a unique i such that ω ∈ A_i, and hence T(ω) = t_i. Then

(H·X)_T(ω) = (H·X)_{t_i}(ω) = (∫_{[0,t_i]} H dI_X)(ω),


hence


(H·X)_T = Σ_{1≤i≤n} 1_{A_i} ∫_{[0,t_i]} H dI_X

= ∫_{[0,∞]} H dI_X − Σ_{1≤i≤n} 1_{A_i} ∫_{(t_i,∞]} H dI_X

= ∫_{[0,∞]} H dI_X − Σ_{1≤i≤n} ∫_{(t_i,∞]} 1_{A_i} H dI_X,

by Theorem 3.7, since A_i ∈ F_{t_i}; and hence

(H·X)_T = ∫_{[0,∞]} H dI_X − ∫ 1_{(T,∞]} H dI_X = ∫ 1_{[0,T]} H dI_X.


We can establish the above equality for a general stopping time T by approximating it by T_n ↘ T, where the T_n are simple stopping times, and then applying the Lebesgue Theorem 3.1; we note that ∫ 1_{[0,T]}H dI_X ∈ L^p_G.

Replacing T by T∧t, we have

(H·X)_{T∧t} = ∫_{[0,t]} 1_{[0,T]}H dI_X.

Thus the process (∫ 1_{[0,t]} 1_{[0,T]}H dI_X)_{t≥0} has values in G and is cadlag, hence 1_{[0,T]}H ∈ L^1_{F,G}(X) and (1_{[0,T]}H)·X = (H·X)^T.

For the predictable case, we approximate the predictable stopping time T by an increasing sequence of stopping times T_n ↗ T, and use the Lebesgue Theorem 3.1 to obtain the conclusion.
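In discrete time, the identity (H·X)^T = (1_{[0,T]}H)·X of Theorem 3.8 reduces to a telescoping fact about partial sums that is easy to check numerically. A toy sketch (our own; all names and data are illustrative, not from the text):

```python
# Discrete-time sanity check of stopping an integral: the integral stopped at T
# equals the integral of the integrand killed after T, here for partial sums
# sum_{i<=k} H_i * dX_i along one fixed path.
def running_sums(H, dX):
    """Running values of sum_{i<=k} H_i * dX_i."""
    out, s = [], 0.0
    for h, d in zip(H, dX):
        s += h * d
        out.append(s)
    return out

H = [1.0, 2.0, -1.0, 0.5]
dX = [0.3, -0.2, 0.4, 0.1]
T = 2  # stop after step index 2 (0-based)
stopped_value = running_sums(H, dX)[T]
killed = running_sums([h if i <= T else 0.0 for i, h in enumerate(H)], dX)
print(stopped_value == killed[-1])  # True: increments after T carry weight 0
```

The same cancellation is what the telescoping computation with ∫ 1_{(T,∞]} H dI_X carries out in the proof above, with simple stopping times in place of a fixed index.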

The next theorem gives a more complete description of the properties of X^T. The proofs follow from our previous results and the definitions.

3.9 THEOREM. Let T be a stopping time.

(a) X^T is p-summable relative to (F,G) and we have

X^T = 1_{[0,T]}·X and I_{X^T}(A) = I_X([0,T] ∩ A), for A ∈ P[0,∞].

(a') If T is predictable, then X^{T−} is p-summable relative to (F,G) and we have

X^{T−} = 1_{[0,T)}·X and I_{X^{T−}}(A) = I_X([0,T) ∩ A), for A ∈ P[0,∞].

(b) For every predictable F-valued process H, we have


J.K. Brooks and N. Dinculeanu

(b') If $T$ is predictable, then

(c) We have $H \in \mathcal{F}_{F,G}(X^T)$ if and only if $1_{[0,T]}H \in \mathcal{F}_{F,G}(X)$, and in this case we have
$$\int H\,dI_{X^T} = \int 1_{[0,T]}H\,dI_X.$$

(c') If $T$ is predictable, then $H \in \mathcal{F}_{F,G}(X^{T-})$ if and only if $1_{[0,T)}H \in \mathcal{F}_{F,G}(X)$, and in this case we have
$$\int H\,dI_{X^{T-}} = \int 1_{[0,T)}H\,dI_X.$$

(d) If $H \in L^1_{F,G}(X)$, then $1_{[0,T]}H \in L^1_{F,G}(X)$ and $H \in L^1_{F,G}(X^T)$. In this case
$$(H\cdot X)^T = H\cdot X^T = (1_{[0,T]}H)\cdot X.$$

(d') If $T$ is predictable and $H \in L^1_{F,G}(X)$, then $1_{[0,T)}H \in L^1_{F,G}(X)$ and $H \in L^1_{F,G}(X^{T-})$. In this case we have
$$(H\cdot X)^{T-} = H\cdot X^{T-} = (1_{[0,T)}H)\cdot X.$$

(e) If the set of measures $(I_X)_{F,G}$ is uniformly $\sigma$-additive, then so is $(I_{X^T})_{F,G}$; if $T$ is predictable, then $(I_{X^{T-}})_{F,G}$ is also uniformly $\sigma$-additive.

Convergence theorems

We maintain the assumption that $X$ is $p$-summable relative to $(F,G)$. We have already proved a Lebesgue-type convergence theorem (Theorem 3.1) for processes in $\mathcal{F}_{F,G}(X)$ concerning the convergence of the integrals. In this section we shall consider the Lebesgue and Vitali theorems for convergence in $L^1_{F,G}(X)$, as well as pointwise uniform convergence of the integrals on compact time intervals for a suitable subsequence.

The key result needed for the uniform convergence property is the following theorem, which will imply that the space $L^1_{F,G}(X)$ is complete.

3.10 THEOREM. Let $(H^n)$ be a sequence in $L^1_{F,G}(X)$ and assume that $H^n \to H$ in $\mathcal{F}_{F,G}(X)$. Then $H \in L^1_{F,G}(X)$. Moreover, for every $t$, we have $(H^n\cdot X)_t \to (H\cdot X)_t$ in $L^1_G$, and there exists a subsequence $(n_r)$ such that $(H^{n_r}\cdot X)_t \to (H\cdot X)_t$ a.s., as $r\to\infty$, uniformly on every compact time interval.

Proof. $(H^n)$ is a Cauchy sequence in $L^1_{F,G}(X)$, converging in $\mathcal{F}_{F,G}(X)$ to $H$. By passing to a subsequence, if necessary, we can assume that $(\widetilde{I_X})_{F,G}(H^n - H^{n+1}) \le \frac{1}{4^n}$ for each $n$. Let $t_0 > 0$. For each $n$, let $Z^n = H^n\cdot X$, and define the stopping time
$$\sigma_n = \inf\Big\{t : |Z^n_t - Z^{n+1}_t| > \frac{1}{2^n}\Big\} \wedge t_0.$$
Let $G_n = \{\sigma_n < t_0\}$. For each stopping time $v$, we have, by Theorem 3.8, $Z^n_v = \int_{[0,v]} H^n\,dI_X$, hence
$$E(|Z^n_v - Z^{n+1}_v|) = E\Big(\Big|\int_{[0,v]}(H^n - H^{n+1})\,dI_X\Big|\Big) = \Big\|\int_{[0,v]}(H^n-H^{n+1})\,dI_X\Big\|_{L^1_G} \le (\widetilde{I_X})_{F,G}(H^n - H^{n+1}) \le \frac{1}{4^n}.$$
In particular, for $v = \sigma_n$, we have
$$E\big(|Z^n_{\sigma_n} - Z^{n+1}_{\sigma_n}|\big) \le \frac{1}{4^n}.$$
On the other hand,
$$P(G_n) \le 2^n E\big(|Z^n_{\sigma_n} - Z^{n+1}_{\sigma_n}|\big) \le \frac{1}{2^n}.$$
To see this, we note that if $\omega \in G_n$, then $\sigma_n(\omega) < t_0$; we take a sequence $t_k \searrow \sigma_n(\omega)$, with $t_k < t_0$, such that $|Z^n_{t_k}(\omega) - Z^{n+1}_{t_k}(\omega)| > \frac{1}{2^n}$ for each $k$. Then we use the right continuity of $Z^n$ and $Z^{n+1}$ to conclude that $|Z^n_{\sigma_n}(\omega) - Z^{n+1}_{\sigma_n}(\omega)| \ge \frac{1}{2^n}$. Thus
$$E\big(|Z^n_{\sigma_n} - Z^{n+1}_{\sigma_n}|\big) \ge \frac{1}{2^n}P(G_n),$$
and the desired inequality follows.

Let $G_0 = \limsup_n G_n$. Then $P(G_0) = 0$. For $\omega \notin G_0$, there is a $k$ such that if $n \ge k$, we have $\omega \notin G_n$, hence $\sigma_n(\omega) = t_0$. Thus
$$\sup_{t < t_0} |Z^n_t(\omega) - Z^{n+1}_t(\omega)| \le \frac{1}{2^n}.$$


Hence for $\omega \notin G_0$, the sequence $(Z^n_t(\omega))$ is Cauchy in $G$, uniformly for $t < t_0$. The process $Z_t(\omega) := \lim_n Z^n_t(\omega)$, defined for $t < t_0$ and $\omega \notin G_0$, with values in $G$, is cadlag, adapted, and $|Z^n_t(\omega) - Z_t(\omega)|_G \le \frac{1}{2^{n-1}}$, hence $\|Z^n_t - Z_t\|_{L^1_G} \le \frac{1}{2^{n-1}}$. It follows that for $t < t_0$, we have $Z_t \in L^1_G$ and $Z^n_t \to Z_t$ in $L^1_G$. On the other hand, $1_{[0,t]}H^n \to 1_{[0,t]}H$ in $\mathcal{F}_{F,G}(X)$, hence $Z^n_t \to \int_{[0,t]} H\,dI_X$ in $(L^1_G)^{**}$. It follows that
$$\int_{[0,t]} H\,dI_X = Z_t \in L^1_G.$$
Since $Z$ is cadlag, we deduce that $H \in L^1_{F,G}(X)$, where we extend $Z_t$ consistently for $t \in [0,\infty)$, and we also have $(H\cdot X)_t = Z_t$ for each $t$. Thus $L^1_{F,G}(X)$ is complete. Since $t_0$ was arbitrary, it follows that $(H^{n_r}\cdot X)_t \to (H\cdot X)_t$ a.s., uniformly on every compact time interval, for a suitable subsequence $(n_r)$.

3.11 COROLLARY. $L^1_{F,G}(X)$ is complete.

3.12 COROLLARY. If $\widetilde{I}_{F,G}$ is uniformly $\sigma$-additive, then $L^1_{F,G}(X)$ contains all the $F$-valued, bounded, predictable processes (in particular, this is the case if $F = \mathbb{R}$).

In fact, in this case $\mathcal{E}_F$, the space of elementary processes, is dense in $\mathcal{F}_{F,G}(B,X)$. Since $\mathcal{E}_F \subset L^1_{F,G}(X)$, we have $\mathcal{F}_{F,G}(B,X) \subset L^1_{F,G}(X)$.

Remark. We shall see that if $X$ has integrable variation, or if $E$, $G$ are Hilbert spaces and $X$ is a square integrable martingale, then $L^1_{F,G}(X) = \mathcal{F}_{F,G}(B,X) = \mathcal{F}_{F,G}(X)$ (see Theorems 3.27 and 3.32 infra).

Uniform convergence of processes yields convergence in $L^1_{F,G}(X)$, as the next theorem shows.

3.13 THEOREM. Let $(H^n)$ be a sequence from $\mathcal{F}_{F,G}(X)$ which converges uniformly pointwise to a process $H$. Then

(a) $H \in \mathcal{F}_{F,G}(X)$ and $H^n \to H$ in $\mathcal{F}_{F,G}(X)$.

Assume, in addition, that $H^n \in L^1_{F,G}(X)$ for each $n$. Then

(b) $H \in L^1_{F,G}(X)$, and $H^n \to H$ in $L^1_{F,G}(X)$;

(c) for every $t\in[0,\infty]$, we have $(H^n\cdot X)_t \to (H\cdot X)_t$ in $L^1_G$;


(d) there is a subsequence $(n_r)$ such that $(H^{n_r}\cdot X)_t \to (H\cdot X)_t$ a.s., uniformly on compact time intervals.

Proof. Assertion (a) is immediate. Assertions (b) and (d) follow from Theorem 3.10. Assertion (c) follows from the continuity of the integral.

For the Vitali and Lebesgue theorems, pointwise convergence does not ensure convergence in $L^1_{F,G}(X)$, unless $\widetilde{I}_{F,G}$ is uniformly $\sigma$-additive. The following two theorems follow from the preceding two theorems and the general Vitali and Lebesgue convergence theorems AI.9 and AI.10 in Appendix 1.

3.14 THEOREM (Vitali). Let $(H^n)$ be a sequence from $\mathcal{F}_{F,G}(X)$ and let $H$ be an $F$-valued predictable process. Assume that

(1) $(\widetilde{I_X})_{F,G}(H^n 1_A) \to 0$ as $(\widetilde{I_X})_{F,G}(A) \to 0$, uniformly in $n$;

and either one of the conditions (2), (2') below:

(2) $H^n \to H$ in $(\widetilde{I_X})_{F,G}$-measure;

(2') $H^n \to H$ pointwise and $\widetilde{I}_{F,G}$ is uniformly $\sigma$-additive (this is the case, for example, if the $H^n$ are real valued, i.e. $F = \mathbb{R}$).

Then

(a) $H \in \mathcal{F}_{F,G}(X)$ and $H^n \to H$ in $\mathcal{F}_{F,G}(X)$.

Conversely, if $H^n, H \in \mathcal{F}_{F,G}(B,X)$ and if $H^n \to H$ in $\mathcal{F}_{F,G}(X)$, then conditions (1) and (2) are satisfied.

Under the hypotheses (1) and (2) or (2'), assume in addition that $H^n \in L^1_{F,G}(X)$ for each $n$. Then

(b) $H \in L^1_{F,G}(X)$ and $H^n \to H$ in $L^1_{F,G}(X)$;

(c) for every $t \in [0,\infty)$, we have $(H^n\cdot X)_t \to (H\cdot X)_t$ in $L^1_G$;

(d) there is a subsequence $(n_r)$ such that $(H^{n_r}\cdot X)_t \to (H\cdot X)_t$, as $r\to\infty$, a.s. uniformly on compact time intervals.

3.15 THEOREM (Lebesgue). Let $(H^n)$ be a sequence from $\mathcal{F}_{F,G}(X)$ and let $H$ be an $F$-valued predictable process. Assume that

(1) there is a process $\phi \in \mathcal{F}_{\mathbb{R}}(B,\widetilde{I}_{F,G})$ such that $|H^n| \le \phi$ for every $n$;

and either one of the conditions (2), (2') below:

(2) $H^n \to H$ in $(\widetilde{I_X})_{F,G}$-measure;


(2') $H^n \to H$ pointwise and $\widetilde{I}_{F,G}$ is uniformly $\sigma$-additive (this is the case if the $H^n$ are real valued, i.e. $F = \mathbb{R}$).

Then

(a) $H \in \mathcal{F}_{F,G}(B,X)$ and $H^n \to H$ in $\mathcal{F}_{F,G}(X)$.

Assume in addition that $H^n \in L^1_{F,G}(X)$ for each $n$. Then

(b) $H \in L^1_{F,G}(X)$ and $H^n \to H$ in $L^1_{F,G}(X)$;

(c) for every $t\in[0,\infty]$, we have $(H^n\cdot X)_t \to (H\cdot X)_t$ in $L^1_G$;

(d) there is a subsequence $(n_r)$ such that $(H^{n_r}\cdot X)_t \to (H\cdot X)_t$, as $r\to\infty$, uniformly on compact time intervals.

The stochastic integral of caglad and bounded processes

The stochastic integral $H\cdot X$ can be computed pathwise for the class of $\sigma$-elementary processes $H \in \mathcal{F}_{F,G}(X)$ of the form
$$H = H_0 1_{\{0\}} + \sum_{1\le i<\infty} H_i 1_{(T_i,\,T_{i+1}]},$$
where $(T_i)$ is a sequence of stopping times with $T_i \nearrow \infty$, $H_0$ is bounded and $\mathcal{F}_0$-measurable, and for each $i$, $H_i$ is bounded and $\mathcal{F}_{T_i}$-measurable.

This result will follow from the following general theorem.

3.16 THEOREM. Let $H \in \mathcal{F}_{F,G}(X)$ and assume that there is a sequence $T_n \nearrow \infty$ of stopping times such that $1_{[0,T_n]}H \in L^1_{F,G}(X^{T_n})$ for each $n$. Then $H \in L^1_{F,G}(X)$ and $H\cdot X = \lim_n (1_{[0,T_n]}H)\cdot X$ pointwise.

Proof. Let $t \in [0,\infty]$. Note that, by Theorem 3.9, we have $1_{[0,T_n]}H \in L^1_{F,G}(X)$ for each $n$. Then, for $t \ge 0$ we have $1_{[0,t]}1_{[0,T_n]}H \to 1_{[0,t]}H$ pointwise,
$$|1_{[0,t]}1_{[0,T_n]}H| \le |H|,$$
and
$$\int_{[0,t]} 1_{[0,T_n]}H\,dI_X = \big((1_{[0,T_n]}H)\cdot X^{T_n}\big)_t.$$
We shall show that this sequence converges pointwise, as $n\to\infty$. For $m \le n$, we have $(1_{[0,T_n]}H\cdot X)^{T_m}_t = (1_{[0,T_m]}H\cdot X)_t$; for a given $\omega\in\Omega$, we choose $m = m_\omega$ such that $t < T_m(\omega)$. Then, for $n \ge m$, we have
$$\lim_n \Big(\int_{[0,t]} 1_{[0,T_n]}H\,dI_X\Big)(\omega) = \lim_n \big(1_{[0,T_n]}H\cdot X\big)^{T_m}_t(\omega) = \big(1_{[0,T_m]}H\cdot X\big)_t(\omega).$$


This proves the pointwise convergence as asserted. Applying the Lebesgue theorem 3.1, we have $\int_{[0,t]} H\,dI_X \in L^1_G$ and $\int_{[0,t]} 1_{[0,T_n]}H\,dI_X \to \int_{[0,t]} H\,dI_X$ pointwise.

For each $\omega$, and $m = m_\omega$ as above, we have
$$\Big(\int_{[0,t]} H\,dI_X\Big)(\omega) = \big(1_{[0,T_m]}H\cdot X\big)_t(\omega),$$
hence the process $\big(\int_{[0,t]} H\,dI_X\big)_{t\ge0}$ is cadlag; thus $H \in L^1_{F,G}(X)$ and $H\cdot X = \lim_n (1_{[0,T_n]}H)\cdot X$ pointwise.

3.17 COROLLARY. $L^1_{F,G}(X)$ contains all the $\sigma$-elementary processes of $\mathcal{F}_{F,G}(X)$. If we put such a process $H$ in the standard form
$$H = H_0 1_{\{0\}} + \sum_{1\le i<\infty} H_i 1_{(T_i,\,T_{i+1}]},$$
then the stochastic integral $H\cdot X$ can be computed pathwise:
$$(H\cdot X)_t = H_0 X_0 + \sum_{1\le i<\infty} H_i\big(X_{T_{i+1}\wedge t} - X_{T_i\wedge t}\big).$$
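As a concrete illustration of our own (the function name and sample path are hypothetical, not from the text), the pathwise sum $H_0X_0 + \sum_i H_i(X_{T_{i+1}\wedge t} - X_{T_i\wedge t})$ can be evaluated on a single fixed path:

```python
def elementary_pathwise(H0, X, Hs, Ts, t):
    """Pathwise value of (H.X)_t for a sigma-elementary integrand
    H = H0*1_{0} + sum_i Hs[i]*1_{(Ts[i], Ts[i+1]]}, on one fixed path X(.)."""
    total = H0 * X(0.0)
    for i, h in enumerate(Hs):
        lo, hi = Ts[i], Ts[i + 1]
        total += h * (X(min(hi, t)) - X(min(lo, t)))
    return total

# one deterministic path X_s = s^2 and a two-interval integrand
X = lambda s: s * s
val = elementary_pathwise(H0=1.0, X=X, Hs=[2.0, -1.0], Ts=[0.0, 1.0, 3.0], t=2.0)
# H0*X_0 + 2*(X_1 - X_0) - (X_2 - X_1) = 0 + 2 - 3 = -1
assert abs(val - (-1.0)) < 1e-12
```

Truncating each interval at $t$ is exactly what the indicator $1_{(T_i,T_{i+1}]}$ does in the formula above.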

Remark. There are $\sigma$-elementary processes which do not belong to $\mathcal{F}_{F,G}(X)$; such processes are not integrable with respect to $X$. However, we shall see in Section 4 that such processes are "locally integrable" with respect to any "locally summable" process, even if the random variables $H_n$ are not bounded (Theorem 4.5 infra).

The next theorem considers all caglad processes of $\mathcal{F}_{F,G}(X)$, not just the $\sigma$-elementary processes.

3.18 THEOREM. $L^1_{F,G}(X)$ contains all caglad processes of $\mathcal{F}_{F,G}(X)$. In particular, $L^1_{F,G}(X)$ contains all bounded, caglad, adapted, $F$-valued processes.

Proof. Let $H$ first be a bounded, caglad, adapted process. Then $H_+$ is cadlag and adapted. For each $n$, define the stopping times $T(n,0) = 0$ and, for $k \ge 0$,
$$T(n,k+1) = \inf\Big\{t > T(n,k) : |H_{t+} - H_{T(n,k)+}| > \frac{1}{n}\Big\} \wedge \Big(T(n,k) + \frac{1}{n}\Big).$$


Now define the $\sigma$-elementary processes
$$H^n = H_0 1_{\{0\}} + \sum_{0\le k<\infty} H_{T(n,k)+}\, 1_{(T(n,k),\,T(n,k+1)]}.$$
We note that if $|H| \le M$, then $|H^n| \le M$ for each $n$, hence $H^n \in \mathcal{F}_{F,G}(X)$. By the preceding Corollary 3.17, we have $H^n \in L^1_{F,G}(X)$. Since $H$ is caglad, from the definition of the above family of stopping times, we deduce that $H^n \to H$ uniformly. Then $H \in L^1_{F,G}(X)$ by Theorem 3.13.

Now assume $H \in \mathcal{F}_{F,G}(X)$ and that $H$ is caglad; hence $H$ is locally bounded. Let $S_n \nearrow \infty$ be a sequence of stopping times such that each $1_{[0,S_n]}H$ is bounded. Since each such process is caglad, we have $1_{[0,S_n]}H \in L^1_{F,G}(X^{S_n})$ for each $n$; hence, by Theorem 3.16, $H \in L^1_{F,G}(X)$.
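The oscillation-based stopping times $T(n,k)$ used in the proof can be sketched for a single deterministic path. The following is our own illustrative reading of the construction (a fine forward mesh stands in for the exact infimum, so it is only an approximation):

```python
def discretization_times(path, n, horizon):
    # advance to (approximately) the first time the path moves by more than 1/n,
    # capped so that each step advances by at most 1/n in time
    eps = 1.0 / n
    times, t = [0.0], 0.0
    while t < horizon:
        base = path(t)
        nxt = t + eps
        mesh = 1000
        for j in range(1, mesh + 1):
            s = t + eps * j / mesh
            if abs(path(s) - base) > eps:
                nxt = s
                break
        t = min(nxt, horizon)
        times.append(t)
    return times

grid = discretization_times(lambda s: s, n=4, horizon=1.0)
# the identity path never oscillates by more than 1/4 within a 1/4-step,
# so the time cap drives the grid
assert grid[0] == 0.0 and abs(grid[-1] - 1.0) < 1e-9
assert all(b - a <= 0.25 + 1e-9 for a, b in zip(grid, grid[1:]))
```

Holding the path value over each interval $(T(n,k), T(n,k+1)]$ then yields a $\sigma$-elementary process within $1/n$ of the original, which is the uniform convergence invoked in the proof.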

Summability of the stochastic integral

The following theorem states that, under certain conditions, the stochastic integral $H\cdot X$ is itself summable, and $K\cdot(H\cdot X) = (KH)\cdot X$. This property follows from the associativity property established in Appendix 1 for the general vector integral (Theorem AI.15).

3.19 THEOREM. I. Let $H \in \mathcal{F}_{\mathbb{R}}((I_X)_{F,G}) \subset \mathcal{F}_{\mathbb{R},E}(X)$. Assume that $H \in L^1_{\mathbb{R},E}(X)$ and $\int_A H\,dI_X \in L^1_E$ for every $A \in \mathcal{P}$. Then:

(a) $H\cdot X$ is $p$-summable relative to $(F,G)$ and
$$dI_{H\cdot X} = d(HI_X),$$
where $HI_X$ is the measure defined by $(HI_X)(A) = \int_A H\,dI_X$ for $A\in\mathcal{P}$.

(b) For any predictable process $K \ge 0$, we have
$$(\widetilde{I_{H\cdot X}})_{F,G}(K) = (\widetilde{I_X})_{F,G}(KH).$$

(c) $K \in L^1_{F,G}(H\cdot X)$ if and only if $KH \in L^1_{F,G}(X)$, and in this case we have
$$K\cdot(H\cdot X) = (KH)\cdot X.$$

(d) Assume $(I_X)_{F,G}$ is uniformly $\sigma$-additive. Then $(I_{H\cdot X})_{F,G}$ is uniformly $\sigma$-additive if and only if $H \in \mathcal{F}_{\mathbb{R}}(B,(I_X)_{F,G})$.


II. Let $H \in L^1_{F,G}(X)$ and assume that $\int_A H\,dI_X \in L^1_G$ for $A \in \mathcal{P}$. Then:

(a) $H\cdot X$ is $p$-summable relative to $(\mathbb{R}, G)$ and
$$dI_{H\cdot X} = d(HI_X).$$

(b) For any predictable process $K \ge 0$, we have
$$(\widetilde{I_{H\cdot X}})_{\mathbb{R},G}(K) \le (\widetilde{I_X})_{F,G}(KH).$$

(c) If $K$ is a real valued predictable process such that $KH \in L^1_{F,G}(X)$, then $K \in L^1_{\mathbb{R},G}(H\cdot X)$, and in this case we have
$$K\cdot(H\cdot X) = (KH)\cdot X.$$

(d) Assume that $(I_X)_{F,G}$ is uniformly $\sigma$-additive and that $H \in \mathcal{F}_{F,G}(B,X)$. Then $(I_{H\cdot X})_{\mathbb{R},G}$ is uniformly $\sigma$-additive.

Proof. We only need to prove assertion I(a), and then apply Theorem AI.15. We notice first that, by Proposition AI.12(a), $d(HI_X)$ is $\sigma$-additive on $\mathcal{P}$. Next we prove the equalities
$$I_{H\cdot X}(A) = \int_A H\,dI_X$$
and
$$(\widetilde{I_{H\cdot X}})_{F,G}(A) = (\widetilde{I_X})_{F,G}(1_A H),$$
first for predictable rectangles $A$ and then for every $A \in \mathcal{R}$.

From the first equality we deduce that $I_{H\cdot X}$ can be extended to a $\sigma$-additive measure on $\mathcal{P}$ with values in $L^1_G$. From the second equality it follows that $I_{H\cdot X}$ has bounded semivariation on $\mathcal{R}$ relative to $(F,G)$:
$$\sup\{(\widetilde{I_{H\cdot X}})_{F,G}(A) : A \in \mathcal{R}\} \le (\widetilde{I_X})_{F,G}(H) < \infty.$$
By remark (f) following Definition 2.1, $H\cdot X$ is summable relative to $(F,G)$. From the first of the above equalities we deduce that the $\sigma$-additive measures $dI_{H\cdot X}$ and $d(HI_X)$ are equal on $\mathcal{R}$; therefore they are equal on $\mathcal{P}$.

Assertion II(a) is proved in the same way, using the inequality
$$(\widetilde{I_{H\cdot X}})_{\mathbb{R},G}(A) \le (\widetilde{I_X})_{F,G}(1_A H), \quad\text{for } A \in \mathcal{R}.$$


The jumps of the stochastic integral

The following theorem yields the jumps of the stochastic integral.

3.20 THEOREM. For any process $H \in L^1_{F,G}(X)$, we have
$$\Delta(H\cdot X) = H\,\Delta X.$$

Proof. Assume $H$ is bounded. By Theorem 3.8 we have $\Delta X_t = X_t - X_{t-} \in L^1_E$ and
$$\Delta(H\cdot X)_t = (H\cdot X)_t - (H\cdot X)_{t-} = \int_{[t]} H\,dI_X = \int_{[t]} H_t\,dI_X = H_t \int_{[t]} dI_X = H_t\,\Delta X_t,$$
by Proposition 3.5, since $H_t$ is $\mathcal{F}_{t-}$-measurable.

Assume now that $H \in L^1_{F,G}(X)$. For each $n$, the stopping time $T_n = \inf\{t : |H_t| \ge n\}$ is predictable and $1_{[0,T_n)}|H| \le n$. By the above case,
$$\Delta\big(1_{[0,T_n)}H\cdot X\big)_t = 1_{\{t<T_n\}}H_t\,\Delta X_t.$$
On the other hand,
$$\Delta\big(1_{[0,T_n)}H\cdot X\big)_t = \int_{[t]} 1_{[0,T_n)}H\,dI_X = \int_{[t]} 1_{\{t<T_n\}}H\,dI_X = 1_{\{t<T_n\}}\int_{[t]} H\,dI_X = 1_{\{t<T_n\}}\Delta(H\cdot X)_t.$$
Thus
$$1_{\{t<T_n\}}H_t\,\Delta X_t = 1_{\{t<T_n\}}\Delta(H\cdot X)_t,$$
and the desired equality follows by letting $n \to \infty$.
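For a pure-jump path of finite variation, the identity $\Delta(H\cdot X)_t = H_t\,\Delta X_t$ can be checked directly. The following elementary sketch is our own (the jump times, sizes, and integrand are arbitrary choices, not from the text):

```python
# jump times and sizes of a piecewise-constant (pure-jump, finite-variation) path
jumps = {0.5: 2.0, 1.25: -1.0, 2.0: 0.5}

def X(t):
    return sum(sz for tm, sz in jumps.items() if tm <= t)

def H(t):
    return 1.0 + t          # a deterministic integrand

def HdotX(t):
    # for this pure-jump path, (H.X)_t is a sum of H evaluated at the jump times
    return sum(H(tm) * sz for tm, sz in jumps.items() if tm <= t)

for tm, sz in jumps.items():
    left = HdotX(tm - 1e-9)                            # value just before the jump
    assert abs(HdotX(tm) - left - H(tm) * sz) < 1e-9   # Delta(H.X)_t = H_t * Delta X_t
    assert abs(X(tm) - X(tm - 1e-9) - sz) < 1e-9       # Delta X_t is the jump size
```

Each jump of the integral is the jump of the integrator scaled by the integrand at that instant, which is exactly the content of Theorem 3.20.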

The stochastic integral with respect to a martingale

3.21 THEOREM. Let $X$ be $p$-summable relative to $(F,G)$ and let $H\in\mathcal{F}_{F,L^p_G}(X)$. If $X$ is a martingale and if $\int_{[0,t]} H\,dI_X \in L^p_G$ for every $t\in[0,\infty]$, then $H \in L^1_{F,L^p_G}(X)$ and $H\cdot X$ is a uniformly integrable martingale, bounded in $L^p_G$. In particular, for $p = 2$, if $X$ is a 2-summable, square integrable martingale, if $H \in \mathcal{F}_{F,L^2_G}(X)$ and if $\int_{[0,t]} H\,dI_X \in L^2_G$ for $t\in[0,\infty]$, then $H \in L^1_{F,L^2_G}(X)$ and $H\cdot X$ is a square integrable martingale.

Proof. Let $t\in[0,\infty)$ and $A\in\mathcal{F}_t$; we prove that
$$E\Big(1_A\Big(\int_{[0,\infty]} H\,dI_X - \int_{[0,t]} H\,dI_X\Big)\Big) = 0,$$
that is,
$$(*)\qquad E\Big[1_A\int 1_{(t,\infty]}H\,dI_X\Big] = 0.$$
If $H = 1_{\{0\}\times B}\,x$, with $B\in\mathcal{F}_0$ and $x\in F$, then $(*)$ holds. Assume $H = 1_{(u,v]\times B}\,x$, with $B\in\mathcal{F}_u$ and $x\in F$. If $v \le t\vee u$, then $(*)$ holds. Assume $t\vee u < v$; then $\int 1_{(t,\infty]}H\,dI_X = 1_B x(X_v - X_{t\vee u})$, thus $1_A\int 1_{(t,\infty]}H\,dI_X = 1_{A\cap B}\,x(X_v - X_{t\vee u})$. By taking expectations of both sides, and noting that $A\cap B \in \mathcal{F}_{t\vee u}$, we obtain $(*)$. Thus $(*)$ holds for $\mathcal{R}$-measurable simple processes $H$.

Assume now $H$ is predictable, and let $y^*\in G^*$. The $\mathcal{R}$-measurable simple processes are dense in $L^1_F((I_X)_z)$, where $z = 1_A y^* \in (L^p_G)^*$. Let $(H^n)$ be a sequence of such processes converging to $H$ in $L^1_F((I_X)_z)$.

Then $\int 1_{(t,\infty]}H^n\,d(I_X)_z \to \int 1_{(t,\infty]}H\,d(I_X)_z$, that is, $\big\langle\int_{(t,\infty]}H^n\,dI_X,\, z\big\rangle \to \big\langle\int_{(t,\infty]}H\,dI_X,\, z\big\rangle$. Thus
$$E\Big(\Big\langle 1_A\int_{(t,\infty]}H^n\,dI_X,\, y^*\Big\rangle\Big) \to E\Big(\Big\langle 1_A\int_{(t,\infty]}H\,dI_X,\, y^*\Big\rangle\Big),$$
that is,
$$\Big\langle E\Big(1_A\int_{(t,\infty]}H^n\,dI_X\Big),\, y^*\Big\rangle \to \Big\langle E\Big(1_A\int_{(t,\infty]}H\,dI_X\Big),\, y^*\Big\rangle.$$
By the previous case, the left-hand side is $0$ for each $y^*\in G^*$ and every $n$, hence $E\big(1_A\int_{(t,\infty]}H\,dI_X\big) = 0$. It follows that $\big(\int_{[0,t]}H\,dI_X\big)_{t\ge0}$ is a uniformly integrable martingale. Since every martingale has a cadlag modification ([B-DA]), we deduce that $H\in L^1_{F,L^p_G}(X)$ and the theorem is proved.

3.22 COROLLARY. If $L^p_G$ is reflexive and if $X$ is a $p$-summable martingale, relative to $(F,G)$, then $L^1_{F,L^p_G}(X) = \mathcal{F}_{F,L^p_G}(X)$.

Remarks. (1) We shall see in the next section that if $X$ is a local martingale and is locally summable, and if $H$ is locally integrable with respect to $X$, then $H\cdot X$ is a local martingale (Theorem 4.14 infra). The case when $H\cdot X$ is a martingale, but not necessarily uniformly integrable, is also considered.


(2) A martingale, or a square integrable martingale, is not necessarily summable. But if $E$ and $G$ are Hilbert spaces and if $X : \mathbb{R}_+ \to E \subset L(F,G)$ is a square integrable martingale, then $X$ is 2-summable (see Theorem 3.24 infra).

Square integrable martingales

In this subsection, $E$ and $G$ are Hilbert spaces over the reals and $F$ is a Banach space such that $E\subset L(F,G)$. For example, $E = L(\mathbb{R},E)$, $E = L(E,\mathbb{R})$, $E\subset L(G,\, E\otimes_{HS} G)$, where HS indicates that the Hilbert-Schmidt norm is used on $E\otimes G$. The inner product in any Hilbert space is denoted by $\langle\cdot,\cdot\rangle$.

The main result of this section is that any $E$-valued square integrable martingale $M$ is 2-summable relative to any embedding $E \subset L(F,G)$, and that the semivariation of $I_M$ is independent of this embedding.

We say that a martingale $M : \mathbb{R}_+ \to E$ is square integrable if $M_t\in L^2_E$ for every $t\in[0,\infty)$ and $\sup_t\|M_t\|_2 < \infty$. This is equivalent to the existence of a random variable $M_\infty\in L^2_E$ such that for every $t$ we have $M_t = E(M_\infty\,|\,\mathcal{F}_t)$.

We shall make a slight departure from our usual notation. We shall write $L^1_{F,L^2_G}(X)$, $(\widetilde{I_M})_{F,L^2_G}$, etc., in place of $L^1_{F,G}(X)$, $(\widetilde{I_M})_{F,G}$, respectively. This notational change will only be made in this subsection.

3.22 PROPOSITION. (1) If $M : \mathbb{R}_+ \to E$ is a square integrable martingale, then $I_M$ can be extended to a $\sigma$-additive measure on $\mathcal{P}$ with values in $L^2_E$.

(2) If $M$ and $N$ are $E$-valued square integrable martingales, then for any pair of disjoint sets $A$, $B$ from $\mathcal{P}$, and for any $x, y \in F$, we have
$$E\big(\langle I_M(A)x,\, I_N(B)y\rangle\big) = 0.$$

Proof. (a) Assume first that $M$ and $N$ are $E$-valued square integrable martingales. Suppose $A$ and $B$ are disjoint sets from $\mathcal{R}$. By expressing $A$ and $B$ as finite unions of disjoint predictable rectangles, it is easy to show that $E(\langle I_M(A), I_N(B)\rangle_E) = 0$.

(b) Now we shall prove assertion (1). If $A\in\mathcal{R}$, then $A$ is a disjoint union of predictable rectangles $[0_{A_0}]$ and $\big((s_i,t_i]\times A_i\big)_{1\le i\le n}$. Let $T = \max\{t_i : 1\le i\le n\} < \infty$ and let $B = [0,T]\times\Omega$. Then
$$\|I_M(A)\|_2^2 + \|I_M(B - A)\|_2^2 = \|I_M(B)\|_2^2 \le q, \quad\text{where } q = \sup_t\|M_t\|_2^2.$$
Thus $I_M$ is $L^2_E$-bounded on $\mathcal{R}$. Since $L^2_E$ is reflexive, $L^2_E$ does not contain $c_0$; by Theorem 2.5, $I_M$ can be extended to a $\sigma$-additive measure on $\mathcal{P}$.

(c) We now prove assertion (2). By (b), we can consider $I_M$ and $I_N$ as having been extended to $\sigma$-additive measures on $\mathcal{P}$. If $A \in \mathcal{R}$, let $\mathcal{M}_A$ be the class of sets $B \in \mathcal{P}$ such that $I_M(A) \perp I_N(B - A)$. Since $\mathcal{M}_A$ is a monotone class containing $\mathcal{R}$, we have $\mathcal{M}_A = \mathcal{P}$. If $B \in \mathcal{P}$, let $\mathcal{M}_B$ be the class of sets $A \in \mathcal{P}$ such that $I_M(A) \perp I_N(B - A)$. Again, $\mathcal{M}_B = \mathcal{P}$. Hence if $A$ and $B$ are disjoint sets from $\mathcal{P}$, we have $I_M(A) \perp I_N(B)$. The second assertion of (2) follows by considering the $G$-valued square integrable martingales $Mx$ and $Ny$.

3.23 THEOREM. Let $M$ be an $E$-valued square integrable martingale. Then

(1) $M$ is 2-summable relative to $(F,G)$;

(2) the semivariation $(\widetilde{I_M})_{F,L^2_G}$ is independent of the embedding $E\subset L(F,G)$ and satisfies
$$(\widetilde{I_M})_{F,L^2_G}(A) = \|I_M(A)\|_{L^2_E}, \quad\text{for } A\in\mathcal{P};$$

(3) the set of measures $(I_M)_{F,L^2_G}$ is uniformly $\sigma$-additive.

Proof. Assertions (1) and (3) follow from Proposition 3.22 and assertion (2). To prove assertion (2), let $A\in\mathcal{P}$ and let $(A_i)$ be a finite family of disjoint sets from $\mathcal{P}$, with union $A$; let $(x_i)$ be a finite family of elements from $F_1$. Using the orthogonality properties in assertion (2) of Proposition 3.22, we deduce that
$$\Big\|\sum_i I_M(A_i)x_i\Big\|_2^2 = \sum_i\big\|I_M(A_i)x_i\big\|_2^2 \le \sum_i\big\|I_M(A_i)\big\|_2^2 = \Big\|\sum_i I_M(A_i)\Big\|_2^2 = \|I_M(A)\|_2^2,$$
hence $(\widetilde{I_M})_{F,L^2_G}(A)\le\|I_M(A)\|_{L^2_E}$. The reverse inequality obviously holds.

3.24 COROLLARY. An $E$-valued, square integrable martingale $M$ is summable relative to $(F,G)$, and the set of measures $(I_M)_{F,L^1_G}$ is uniformly $\sigma$-additive.


3.25 COROLLARY. If $M$ is a real valued square integrable martingale, then $M$ is 2-summable relative to $(E,E)$, for any Hilbert space $E$, and
$$(\widetilde{I_M})_{E,L^2_E} = (\widetilde{I_M})_{\mathbb{R},L^2_{\mathbb{R}}}.$$

Remark. In the proof of the 2-summability of $M$ relative to $(F,G)$, it was essential that both $E$ and $G$ are Hilbert spaces. If $G$ is not a Hilbert space, we may have $(\widetilde{I_M})_{F,G} = \infty$, as is shown by an example given by Yor [Y.2]:

Let $M$ be the real Brownian motion on $[0,1]$. We can embed $\mathbb{R}\subset L(\ell^1,\ell^1)$ and then $L^2_{\mathbb{R}}\subset L(\ell^1, L^2_{\ell^1})$. Since $M$ is a square integrable martingale, $I_M$ has a $\sigma$-additive extension to $\mathcal{P}$, with values in $L^2_{\mathbb{R}}\subset L^1_{\mathbb{R}}$, hence $M$ is summable relative to $(\mathbb{R},\mathbb{R})$. But $I_M$ has infinite semivariation relative to $(\ell^1, L^1_{\ell^1})$.

In fact, if $I_M$ had finite semivariation relative to $(\ell^1, L^1_{\ell^1})$, then $M$ would be summable relative to $(\ell^1,\ell^1)$; therefore, by Corollary 3.12, every bounded $\sigma$-elementary process with values in $\ell^1$ would be integrable with respect to $M$. However, it is proved in [Y.2] that for the following process, where $e_n = (\delta_{in})_{i\in\mathbb{N}}\in\ell^1$, we have $E\big(\|\int H\,dI_M\|_{\ell^1}\big) = \infty$; therefore $\int H\,dI_M$ does not belong to $L^1_{\ell^1}$. It follows that $I_M$ does not have finite semivariation relative to $(\ell^1, L^1_{\ell^1})$.

We proved in Corollary 3.12 that if $(I_X)_{F,G}$ is uniformly $\sigma$-additive, then the space $L^1_{F,G}(X)$ contains all the bounded predictable processes; however, we do not know if, in general, the bounded predictable processes are dense in $L^1_{F,G}(X)$. This is true, as the next theorem shows, if $X$ is a square integrable Hilbert-valued martingale.

3.26 THEOREM. If $M$ is an $E$-valued, square integrable martingale, then
$$L^1_{F,L^2_G}(M) = \mathcal{F}_{F,L^2_G}(M) = \mathcal{F}_{F,L^2_G}(B,M).$$

Proof. The first equality follows from Remark (1) following Theorem 3.21.


Now suppose that $H\in L^1_{F,L^2_G}(M)$. We shall show that $H\in\mathcal{F}_{F,L^2_G}(B,M)$. We note that $|H|\in\mathcal{F}_{\mathbb{R},L^2_{\mathbb{R}}}(M)$, hence $|H|\in L^1_{\mathbb{R},L^2_{\mathbb{R}}}(M)$ and, by Theorem 3.21, $|H|\cdot M$ is an $E$-valued square integrable martingale; thus
$$(\widetilde{I_{|H|\cdot M}})_{\mathbb{R},L^2_E}(A) = (\widetilde{I_{|H|\cdot M}})_{F,L^2_G}(A) = \|I_{|H|\cdot M}(A)\|_{L^2_E}, \quad\text{for } A\in\mathcal{P}.$$
By Theorem 3.19, for $A\in\mathcal{P}$, we have
$$(\widetilde{I_{|H|\cdot M}})_{F,L^2_G}(A) = (\widetilde{I_M})_{F,L^2_G}(1_A|H|).$$
It follows that
$$(\widetilde{I_M})_{F,L^2_G}(1_A H) = (\widetilde{I_M})_{F,L^2_G}(1_A|H|) = \|I_{|H|\cdot M}(A)\|_{L^2_E} = (\widetilde{I_{|H|\cdot M}})_{\mathbb{R},L^2_E}(A).$$
Since $|H|\cdot M$ is a square integrable martingale, the set of measures $(I_{|H|\cdot M})_{\mathbb{R},L^2_E}$ is uniformly $\sigma$-additive, hence $(\widetilde{I_M})_{F,L^2_G}(1_{A_n}H) = (\widetilde{I_{|H|\cdot M}})_{\mathbb{R},L^2_E}(A_n)\to 0$ if $A_n\searrow\emptyset$. By Proposition AI.8(b), we have $H\in\mathcal{F}_{F,G}(B,M)$.

We recall that if $M$ is an $E$-valued, square integrable martingale, then $|M|^2$ is a submartingale of class (D) and has a Doob-Meyer decomposition $|M|^2 = N + \langle M,M\rangle$, where $N$ is a martingale of class (D) and $\langle M,M\rangle$ is a predictable, integrable, increasing process called the sharp bracket of $M$. Then $\mu_{|M|^2} = \mu_{\langle M,M\rangle}$ on $\mathcal{P}$, where
$$\mu_{|M|^2}(A) = E\Big(\int 1_A\,d|M|^2\Big)$$
and
$$\mu_{\langle M,M\rangle}(A) = E\big(I_{\langle M,M\rangle}(A)\big) = E\Big(\int 1_A\,d\langle M,M\rangle\Big),$$
for $A \in \mathcal{B}([0,\infty))\times\mathcal{F}$.

If we set $z = M_\infty\in L^2_E$, we can consider the scalar measure $\langle I_M, z\rangle$ on $\mathcal{P}$, which is positive; in fact $\langle I_M, M_\infty\rangle = \mu_{\langle M,M\rangle}$.

The relationship between all these measures and the seminorm $(\widetilde{I_M})_{F,G}$ is given by the following theorem. This theorem also shows that the mapping $H\mapsto\int H\,dI_M$, from $L^1_{F,G}(M)$ into $L^2_G$, which is continuous in general, is an isometry in the case of a square integrable martingale with values in $\mathbb{R}$, or in the case the martingale is Hilbert-valued, but $F = \mathbb{R}$.


3.27 THEOREM. Let $M$ be an $E$-valued, square integrable martingale, and $H\in L^1_{F,L^2_G}(M)$. If either $M$ is scalar valued or $H$ is scalar valued, then
$$(\widetilde{I_M})_{F,L^2_G}(H) = \Big\|\int H\,dI_M\Big\|_{L^2_G} = \Big(\int|H|^2\,d\langle I_M,M_\infty\rangle\Big)^{1/2}.$$

Proof. For $A\in\mathcal{F}_0$ and $x\in F$, we have (since either $M$ is real, or $F = \mathbb{R}$)
$$\|I_M([0_A])x\|^2_{L^2_G} = \|1_A M_0 x\|^2_{L^2_G} = E\big(1_A|M_0|^2|x|^2\big) = E\Big(\int 1_{[0_A]}|x|^2\,d|M|^2\Big),$$
and for stopping times $S\le T$, and $x\in F$, we have
$$\|I_M(1_{(S,T]})x\|^2_{L^2_G} = \|(M_T - M_S)x\|^2_{L^2_G} = E\big(|M_T - M_S|^2|x|^2\big) = E\big((|M_T|^2 - |M_S|^2)|x|^2\big) = E\Big(\int 1_{(S,T]}|x|^2\,d|M|^2\Big).$$
Let $H$ be a simple process of the form
$$H = x_0 1_{[0_A]} + \sum_{1\le i\le n} x_i 1_{(T_i,\,T_{i+1}]},$$
where $A\in\mathcal{F}_0$, $(T_i)_{1\le i\le n+1}$ is an increasing family of stopping times, and $x_i\in F$ for $0\le i\le n$. Since the sets $[0_A]$ and $(T_i,T_{i+1}]$ are mutually disjoint, we have
$$\Big\|\int H\,dI_M\Big\|^2_{L^2_G} = \|I_M([0_A])x_0\|^2_{L^2_G} + \sum_{1\le i\le n}\|I_M((T_i,T_{i+1}])x_i\|^2_{L^2_G}$$
$$= E\Big(\int|H|^2\,d|M|^2\Big) = \int|H|^2\,d\mu_{|M|^2} = \int|H|^2\,d\mu_{\langle M,M\rangle} = E\Big(\int|H|^2\,d\langle M,M\rangle\Big) = \int|H|^2\,d\langle I_M,M_\infty\rangle.$$

Since $(\widetilde{I_M})_{F,L^2_G}$ is uniformly $\sigma$-additive, the $\mathcal{R}$-simple processes are dense in $\mathcal{F}_{F,L^2_G}(B,M)$, hence by Theorem 3.26 they are dense in $L^1_{F,L^2_G}(M)$.

Let $H\in L^1_{F,L^2_G}(M)$, and let $(H^n)$ be a sequence of $\mathcal{R}$-simple processes such that $H^n\to H$ in $L^1_{F,L^2_G}(M)$. By taking a subsequence if necessary, we can assume that $H^n\to H$ pointwise $I_M$-a.e. The continuity of the integral implies that $\int H^n\,dI_M\to\int H\,dI_M$ in $L^2_G$.

Since the measure $\langle I_M,M_\infty\rangle$ is dominated by $(\widetilde{I_M})_{\mathbb{R},L^2_{\mathbb{R}}}$, we deduce that $H^n\to H$, $\langle I_M,M_\infty\rangle$-a.e. At the same time, $(H^n)$ is Cauchy in $L^2_F(\langle I_M,M_\infty\rangle)$, using the isometry proved above. It follows that $H^n\to H$ in $L^2_F(\langle I_M,M_\infty\rangle)$, and from the above mentioned isometry, we deduce
$$\Big\|\int H\,dI_M\Big\|_{L^2_G} = \Big(\int|H|^2\,d\langle I_M,M_\infty\rangle\Big)^{1/2}.$$
Finally,
$$(\widetilde{I_M})_{F,L^2_G}(H) = \sup \Big\|\int s\,dI_M\Big\|_{L^2_G} = \sup\Big(\int|s|^2\,d\langle I_M,M_\infty\rangle\Big)^{1/2} = \Big(\int|H|^2\,d\langle I_M,M_\infty\rangle\Big)^{1/2} = \Big\|\int H\,dI_M\Big\|_{L^2_G},$$
where the supremum is taken over all simple, predictable, $F$-valued processes $s$ such that $|s| \le |H|$.

3.28 COROLLARY. The spaces $L^1_{F,L^2_G}(M)$ and $L^2_F(\langle M,M\rangle)$ contain the same predictable processes and are isometrically isomorphic.

Remark. The classical approach to scalar stochastic integrals with respect to a real valued, square integrable martingale $M$ is to prove the isometry $H\mapsto\int H\,dI_M$, for the $\mathcal{R}$-simple processes $H$, from $L^2(\mu_{\langle M,M\rangle})$ into $L^2$, and then to extend this isometry to all of $L^2(\mu_{\langle M,M\rangle})$.

In our approach, we obtain this isometry directly from the space $L^1_{F,L^2_G}(M)$ into $L^2_G$.
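For real Brownian motion $W$, where $\langle W,W\rangle_t = t$, the isometry reduces to the classical Itô isometry $E\big[(\int H\,dW)^2\big] = \int H^2\,dt$. The following Monte Carlo sketch is our own illustration (the integrand $\sin t$ and all numerical parameters are arbitrary choices):

```python
import random, math

random.seed(1)
n_paths, n_steps, T = 5000, 100, 1.0
dt = T / n_steps

def H(t):
    # deterministic integrand, so E[(int H dW)^2] = int H^2 dt in the discretized model
    return math.sin(t)

mean_sq = 0.0
for _ in range(n_paths):
    integral = 0.0
    for k in range(n_steps):
        dW = random.gauss(0.0, math.sqrt(dt))
        integral += H(k * dt) * dW      # left-endpoint (predictable) evaluation
    mean_sq += integral * integral
lhs = mean_sq / n_paths                 # Monte Carlo estimate of E[(int H dW)^2]

rhs = sum(H(k * dt) ** 2 * dt for k in range(n_steps))   # int_0^T H^2 d<W,W>, with <W,W>_t = t

assert abs(lhs - rhs) / rhs < 0.1       # isometry holds up to Monte Carlo error
```

The left-endpoint evaluation of $H$ is the discrete counterpart of predictability; evaluating at the right endpoint would break the isometry.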

Processes with integrable variation

Let $X : \mathbb{R}_+ \to E$ be a cadlag, adapted process with integrable variation $|X|$; that is, $|X|_\infty\in L^1(P)$. Then $X_t\in L^1_E(P)$ and $|X|_t\in L^1(P)$ for every $t\in[0,\infty]$.

Then there is a $\sigma$-additive measure $\mu_X : \mathcal{B}[0,\infty]\times\mathcal{F}\to E$ with bounded variation $|\mu_X|$ satisfying
$$\mu_X(B) = E\Big(\int 1_B(s,\omega)\,dX_s(\omega)\Big)$$
and
$$|\mu_X|(B) = E\Big(\int 1_B(s,\omega)\,d|X|_s(\omega)\Big)$$
for every $B\in\mathcal{B}[0,\infty]\times\mathcal{F}$. It follows that
$$|\mu_X| = \mu_{|X|}.$$


Moreover, if $E\subset L(F,G)$ and if $H : \mathbb{R}_+ \to F$ is jointly measurable, then $H\in L^1_F(\mu_X)$ iff $E\big(\int|H_s(\omega)|\,d|X|_s(\omega)\big) < \infty$. In this case we have (see [D.2]):
$$\int H\,d\mu_X = E\Big(\int H_s(\omega)\,dX_s(\omega)\Big).$$
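The identity $\int H\,d\mu_X = E\big(\int H_s\,dX_s\big)$ can be sanity-checked numerically on the simplest integrable-variation process, $X_t = At$ with $A \ge 0$ integrable. This sketch is our own illustration (the exponential law and the integrand are arbitrary choices):

```python
import random

random.seed(2)
T, n_paths = 1.0, 100000
# deterministic integrand H(s) = 3 s^2, with int_0^1 H(s) ds = 1
H_integral = 1.0

acc = 0.0
for _ in range(n_paths):
    A = random.expovariate(1.0)        # A >= 0 integrable, E[A] = 1, so |X|_1 = A is integrable
    # pathwise Stieltjes integral for X_s = A*s:  int_0^T H(s) dX_s = A * int_0^T H(s) ds
    acc += A * H_integral
lhs = acc / n_paths                    # approximates E(int H_s dX_s) = int H d(mu_X)

assert abs(lhs - 1.0) < 0.05           # exact value is E[A] * int_0^1 H(s) ds = 1
```

Here the pathwise integral is computed in closed form, so only the expectation is estimated by Monte Carlo.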

3.30 PROPOSITION. If $X$ has integrable variation, then the measure $I_X$ is $\sigma$-additive and has bounded variation on $\mathcal{R}$, and for every $B\in\mathcal{R}$ we have
$$\mu_X(B) = E(I_X(B))$$
and
$$|\mu_X|(B) = |I_X|(B).$$

Proof. From the definition of $I_X$, we deduce that for every $B\in\mathcal{R}$ we have
$$I_X(B)(\omega) = \int 1_B(s,\omega)\,dX_s(\omega)$$
and
$$I_{|X|}(B)(\omega) = \int 1_B(s,\omega)\,d|X|_s(\omega),$$
hence
$$\mu_X(B) = E(I_X(B)) \quad\text{and}\quad \mu_{|X|}(B) = E(I_{|X|}(B)).$$
Then
$$\|I_X(B)\|_{L^1_E} = E\Big(\Big|\int 1_B(s,\omega)\,dX_s(\omega)\Big|\Big) \le E\Big(\int 1_B(s,\omega)\,d|X|_s(\omega)\Big) = \mu_{|X|}(B).$$
Since $\mu_{|X|}$ is $\sigma$-additive, it follows that $I_X : \mathcal{R} \to L^1_E$ is $\sigma$-additive and has bounded variation $|I_X|$ on $\mathcal{R}$, satisfying $|I_X| \le \mu_{|X|} = |\mu_X|$. Conversely, for $B\in\mathcal{R}$ we have
$$|\mu_X(B)| = |E(I_X(B))| \le E(|I_X(B)|) = \|I_X(B)\|_{L^1_E} \le |I_X|(B);$$
therefore
$$|\mu_X| \le |I_X| \quad\text{on } \mathcal{R},$$
and the conclusion follows.

3.32 THEOREM. A cadlag, adapted process $X : \mathbb{R}_+ \to E$ with integrable variation $|X|$ is summable relative to any pair $(F,G)$ such that $E\subset L(F,G)$. In this case, the set of measures $(I_X)_{F,L^1_G}$ is uniformly $\sigma$-additive and we have
$$L^1_F(\mathcal{P},\mu_X) = L^1_F(\mathcal{P},I_X) \subset L^1_{F,L^1_G}(X)$$
and
$$L^1_{F,L^1_G}(\mathcal{S}(\mathcal{R}),X) = L^1_{F,L^1_G}(X) = \mathcal{F}_{F,L^1_G}(X).$$

Proof. The first equality follows from $|\mu_X| = |I_X|$. The inclusion follows from the inequality $(\widetilde{I_X})_{F,L^1_G} \le |I_X|$: since the step processes over $\mathcal{P}$ are dense in $L^1_F(\mathcal{P},I_X)$, from Corollary 3.12 we deduce that $L^1_F(\mathcal{P},I_X)\subset\mathcal{F}_{F,L^1_G}(B,X)\subset L^1_{F,L^1_G}(X)$. On the other hand, by Theorem AI.8 we have $\mathcal{F}_{F,L^1_G}(B,X) = \mathcal{F}_{F,L^1_G}(X)$.

Remark. We can define $I_X$ for every (not necessarily predictable) rectangle $\{0\}\times A$ or $(s,t]\times A$ with $A\in\mathcal{F}$, by
$$I_X(\{0\}\times A) = 1_A X_0 \quad\text{and}\quad I_X((s,t]\times A) = 1_A(X_t - X_s),$$
and we still have
$$\mu_X(B) = E(I_X(B))$$
for $B$ in the algebra generated by the above rectangles. Since this algebra generates the $\sigma$-algebra $\mathcal{B}(\mathbb{R}_+)\times\mathcal{F}$, it follows that $I_X$ can be extended as a $\sigma$-additive measure with finite variation on the whole $\sigma$-algebra $\mathcal{B}(\mathbb{R}_+)\times\mathcal{F}$, not only on $\mathcal{P}$, and we still have $|I_X| = |\mu_X|$ on $\mathcal{B}(\mathbb{R}_+)\times\mathcal{F}$. We can then apply the integration theory of Appendix 1, with $\Sigma = \mathcal{B}(\mathbb{R}_+)\times\mathcal{F}$, and obtain the space $\mathcal{F}_{F,L^1_G}(\Sigma,X)$. Then we can define a "stochastic integral" $(H\cdot X)_t = \int_{[0,t]}H\,dI_X$ in the case the integral belongs to $L^1_G$. This integral is still cadlag, but is not necessarily adapted.


Weak completeness of $L^1_{F,G}(B,X)$

The following theorem gives sufficient conditions for $L^1_{F,G}(B,X)$ to be weakly sequentially complete. It is a corollary of the general Theorem AI.19 in Appendix 1.

3.33 THEOREM. Assume that $F$ is reflexive, that $(I_X)_{F,G}$ is uniformly $\sigma$-additive, and that $c_0 \not\subset G$. Then $L^1_{F,G}(B,X)$ is weakly sequentially complete.

In fact, $L^1_G$ does not contain $c_0$ (see [Kw]), and we can apply Theorem AI.19.

Weak compactness in $L^1_{F,G}(B,X)$

We shall apply the general theory of weak compactness in Appendix 1 to $L^1_{F,G}(B,X)$. Recall that a subset $K$ of a Banach space is said to be conditionally weakly compact if every sequence of elements from $K$ contains a subsequence which is weakly Cauchy.

The next theorem follows from Theorem AI.20.

3.34 THEOREM. Let $X$ be $p$-summable relative to $(F,G)$. Assume $F$ is reflexive and $(I_X)_{F,G}$ is uniformly $\sigma$-additive. Let $K \subset L^1_{F,G}(B,X)$ be a set satisfying the following conditions:

(1) $K$ is bounded in $L^1_{F,G}(B,X)$;

(2) $H1_{A_n} \to 0$ in $L^1_{F,G}(B,X)$, uniformly for $H \in K$, whenever $A_n \in \mathcal{P}$ and $A_n \searrow \emptyset$.

Then $K$ is conditionally weakly compact in $L^1_{F,G}(B,X)$. If, in addition, $c_0 \not\subset G$, then $K$ is relatively weakly compact in $L^1_{F,G}(B,X)$.

In the last case, for every sequence $(H^n)$ from $K$, there exists a subsequence $(H^{n_r})$ such that $\big(\int H^{n_r}\,dX\big)_t$ converges weakly in $L^1_G$ as $r \to \infty$, for every $t$.

The next theorem follows from Theorem AI.21.

3.35 THEOREM. Let $X$ be $E$-valued and $p$-summable relative to $(\mathbb{R},E)$. Let $K \subset L^1_{\mathbb{R},E}(B,X)$ be a set satisfying the following conditions:

(1) $K$ is bounded in $L^1_{\mathbb{R},E}(B,X)$;

(2) $\int_{A_n} H\,dI_X \to 0$ in $L^p_E$, uniformly for $H \in K$, whenever $A_n \in \mathcal{P}$ and $A_n \searrow \emptyset$.

Then $K$ is conditionally weakly compact in $L^1_{\mathbb{R},E}(B,X)$. If, in addition, $c_0 \not\subset E$, then $K$ is relatively weakly compact in $L^1_{\mathbb{R},E}(B,X)$.

In this last case, for any sequence $(H^n)$ from $K$, there exists a subsequence $(H^{n_r})$ such that $(H^{n_r}\cdot X)_t$ converges weakly in $L^p_E$ as $r \to \infty$, for each $t$.

Finally, we state a result about sequential weak convergence in $L^1_{\mathbb{R},E}(B,X)$. This theorem follows from Theorem AI.22.

3.36 THEOREM. Let $X$ be an $E$-valued process, $p$-summable relative to $(\mathbb{R},E)$. Let $(H^n)_{n\ge0}$ be a sequence of scalar processes from $L^1_{\mathbb{R},E}(B,X)$. Suppose that $c_0 \not\subset E$. If

then

hence

4. Local summability and local integrability

Throughout this section, $X : \mathbb{R}_+ \to E \subset L(F,G)$ is a cadlag, adapted process with $X_t \in L^p_E$ for each $t \in \mathbb{R}_+$. We shall study the properties of the stochastic integral $H\cdot X$ in the case $X$ is locally $p$-summable and $H$ is locally integrable with respect to $X$.

4.1 DEFINITIONS.

(a) We say that $X$ is locally $p$-summable relative to $(F,G)$ if there exists an increasing sequence $(T_n)$ of stopping times with $T_n \nearrow \infty$, such that for each $n$, $X^{T_n}$ is $p$-summable relative to $(F,G)$.

If the set of measures $(I_{X^{T_n}})_{F,G}$ is uniformly $\sigma$-additive for each $n$, we say that the set of measures $(I_X)_{F,G}$ is locally uniformly $\sigma$-additive.

The sequence $(T_n)$ is called a determining sequence for the local summability of $X$ relative to $(F,G)$.


Examples of locally summable processes are: locally square integrable processes, and processes with locally integrable variation.

(b) A predictable process $H : \mathbb{R}_+ \to F$ is said to be locally integrable with respect to a process $X : \mathbb{R}_+ \to E \subset L(F, G)$, which is locally p-summable relative to (F, G), if there exists an increasing sequence $(T_n)$ of stopping times with $T_n \nearrow \infty$, such that for each n, $X^{T_n}$ is p-summable relative to (F, G) and $1_{[0,T_n]}H$ is integrable with respect to $X^{T_n}$.

The sequence $(T_n)$ is called a determining sequence for the local integrability of H with respect to X.

The set of all F-valued, predictable processes which are locally integrable with respect to X will be denoted by $L^1_{F,G}(X)_{loc}$.

(c) Let X be a locally summable process relative to (F, G) and let D be a Banach space. We denote by $\mathcal{F}_D(I_{F,G})_{loc}$ the space of all predictable D-valued processes H for which there exists a sequence of stopping times $(T_n)$ with $T_n \nearrow \infty$, such that for each n, $X^{T_n}$ is p-summable relative to (F, G), and $1_{[0,T_n]}H \in \mathcal{F}_D((I_{X^{T_n}})_{F,G})$, that is,

$\widetilde{(I_{X^{T_n}})}_{F,G}(1_{[0,T_n]}H) < \infty.$

If $\mathcal{C}$ is any set of D-valued, bounded, predictable processes, we denote by $\mathcal{F}_D(\mathcal{C}, I_{F,G})_{loc}$ the set of all processes $H \in \mathcal{F}_D(I_{F,G})_{loc}$ such that for each stopping time $T_n$ as above, we have $1_{[0,T_n]}H \in \mathcal{F}_D(\mathcal{C}, (I_{X^{T_n}})_{F,G})$.

Instead of writing $H \in \mathcal{F}_D(\mathcal{C}, I_{F,G})_{loc}$, we shall say that H is locally in $\mathcal{F}_D(\mathcal{C}, I_{F,G})$.

(d) If $H_n$ and H are processes, we say $H_n \to H$ locally uniformly if there exists a sequence $(T_k)$ of stopping times with $T_k \nearrow \infty$, such that for each k, $H_n \to H$ uniformly on $[0, T_k]$.

Basic properties

1. If X is p-summable relative to (F, G), then X is locally p-summable relative to (F, G).

2. If X is locally p-summable relative to (F, G), then X is locally p-summable relative to $(\mathbb{R}, E)$.


3. If $(T_n)$ is a sequence of stopping times, determining for the local p-summability of X relative to (F, G), and if $S_n \nearrow \infty$ is another sequence of stopping times, then $(T_n \wedge S_n)$ is determining for the local p-summability of X relative to (F, G). A similar result holds for determining sequences for the local integrability of H with respect to X.

4. If X is locally p-summable relative to (F, G) and if T is a stopping time, then $X^T$ is locally p-summable relative to (F, G).

5. If X is p-summable relative to (F, G) and if $H \in L^1_{F,G}(X)$, then H is locally integrable with respect to X.

Let $(T_n)$ be a sequence determining for the local integrability of H with respect to X. Then for each n, we have

outside an evanescent set. It follows that the limit

exists pointwise outside an evanescent set. The limit is independent of the

determining sequence. Moreover, this limit is cadlag and adapted.

This leads to the following definition:

4.2 DEFINITION. If X is locally p-summable relative to (F, G) and if the F-valued process H is locally integrable with respect to X, then the stochastic integral of H with respect to X is a process denoted by $H \cdot X$ or $\int H\,dX$, and is defined up to an evanescent set by the equality

for any sequence $(T_n)$ of stopping times which is determining for the local integrability of H with respect to X.

It follows that for each n, we have
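The display that followed here did not survive the transcription. Given Definition 4.2, the relation it states is presumably the following compatibility between the stopped integrals (a reconstruction, not the printed formula):

```latex
% Presumed content of the lost display (reconstruction):
% for each n of a determining sequence (T_n),
(H \cdot X)^{T_n} \;=\; \bigl(1_{[0,T_n]}H\bigr) \cdot X^{T_n},
% so H . X agrees on [0, T_n] with the integral of the stopped data,
% and the pathwise limit defining H . X is well defined.
```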

The following theorem states that integrability and local integrability are equivalent for processes of $\mathcal{F}_{F,G}(X)$, in case X is p-summable.


4.3 THEOREM. Let X be a p-summable process relative to (F, G) and $H \in \mathcal{F}_{F,G}(X)$. Then H is integrable with respect to X if and only if H is locally integrable with respect to X. In this case, the stochastic integral $H \cdot X$ is the same, whether H is considered integrable or locally integrable with respect to X.

Proof. If H is integrable with respect to X, it easily follows that H is locally integrable with respect to X and the two integrals agree. The converse is proved by taking a determining sequence for the local integrability of H with respect to X and applying Theorem 3.16.

4.4 THEOREM. If X is locally p-summable relative to (F, G) and has finite variation, then

$(H \cdot X)_t = \int_{[0,t]} H_s(\omega)\,dX_s(\omega),$

as long as both sides are defined.

This follows from Theorem 3.3.

An important class of processes that are locally integrable with respect to any locally p-summable process is the class of $\sigma$-elementary processes, where the $H_i$ used in defining the $\sigma$-elementary process are not assumed to be bounded. In Theorem 4.9 infra, we shall prove that all the caglad, adapted processes are locally integrable with respect to any locally p-summable process. We note that a $\sigma$-elementary process is not necessarily integrable with respect to a p-summable process.

4.5 THEOREM. Let H be an F-valued $\sigma$-elementary process of the form

where the $H_i$ are not necessarily bounded. Then H is locally integrable with respect to any locally p-summable process X relative to (F, G), and the stochastic integral can be computed pathwise by


Proof. We note that for each t and $\omega$, the above series reduces to a finite sum. For each n, consider the stopping time $S_n = \inf\{t : |H_{t+}| > n\}$. Since $H^+$ is cadlag, we have $S_n \nearrow \infty$. Also $1_{[0,S_n]}|H| \le n$, since H is caglad. Note that $1_{[0,S_n]}|H_i| \le n$ for each i. Now we observe that $1_{[0,S_n \wedge T_n]}H$ is an elementary process, hence it is integrable with respect to X. As a result, H is locally integrable with respect to X.

Let $U_n \nearrow \infty$ be a determining sequence of stopping times for the local p-summability of X. Set $R_n = U_n \wedge S_n \wedge T_n$. Then $1_{[0,R_n]}H$ is an elementary process and the stochastic integral can be computed pathwise by

For fixed $\omega$ and t, we take n such that $t < R_n$. Then

$(H \cdot X)_t(\omega) = \lim_n \bigl((1_{[0,R_n]}H) \cdot X^{R_n}\bigr)_t(\omega) = H_0(\omega)X_0(\omega) + \sum_{1 \le i < n} H_i(\omega)\bigl(X_{T_{i+1} \wedge t}(\omega) - X_{T_i \wedge t}(\omega)\bigr),$

and the conclusion follows.
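The reduction to a finite sum used at the start of the proof can be spelled out; the display below is a sketch, under the convention that the times $T_i$ defining H increase to infinity:

```latex
% Why the pathwise series is a finite sum (sketch):
% fix \omega and t; since T_i(\omega) \nearrow \infty, only finitely
% many indices satisfy T_i(\omega) < t.  For every remaining index i,
T_i(\omega) \ge t \;\Longrightarrow\;
X_{T_{i+1} \wedge t}(\omega) - X_{T_i \wedge t}(\omega)
  \;=\; X_t(\omega) - X_t(\omega) \;=\; 0,
% so all but finitely many terms of the series vanish.
```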

Convergence theorems

We shall need the following theorem:

4.6 THEOREM. Assume X is locally p-summable relative to (F, G) and let $H_n, H \in L^1_{F,G}(X)_{loc}$ for $n \in \mathbb{N}$. Let $T_k \nearrow \infty$ be stopping times such that for each k, $X^{T_k}$ is p-summable relative to (F, G), the processes $1_{[0,T_k]}H_n$ and $1_{[0,T_k]}H$ belong to $L^1_{F,G}(X^{T_k})$, and $1_{[0,T_k]}H_n \to 1_{[0,T_k]}H$ in $L^1_{F,G}(X^{T_k})$ as $n \to \infty$.

Then

(a) For each t, $(H_n \cdot X)_t \to (H \cdot X)_t$ in probability;

(b) There exists a subsequence $(n_r)$ such that $(H_{n_r} \cdot X)_t \to (H \cdot X)_t$, as $r \to \infty$, uniformly on compact time intervals.

Proof. (a) Let $t \ge 0$ and choose $\epsilon > 0$. Note that $P(\{T_k \le t\}) \searrow 0$. Fix $k_0$ so that $P(\{T_{k_0} \le t\}) < \epsilon$. If $\eta > 0$, we have


From the hypothesis we deduce that $1_{[0,t]}1_{[0,T_{k_0}]}H_n \to 1_{[0,t]}1_{[0,T_{k_0}]}H$ in $L^1_{F,G}(X^{T_{k_0}})$, which implies that $(H_n \cdot X)_t^{T_{k_0}} \to (H \cdot X)_t^{T_{k_0}}$ in $L^p_G$, hence in probability. There exists an N such that for $n \ge N$, we have $P(\{|(H_n \cdot X)_t^{T_{k_0}} - (H \cdot X)_t^{T_{k_0}}| > \eta\}) < \epsilon$, thus $P(\{|(H_n \cdot X)_t - (H \cdot X)_t| > \eta\}) < 2\epsilon$ for $n \ge N$, and this proves assertion (a).

(b) Since by hypothesis $1_{[0,T_k]}H_n \to 1_{[0,T_k]}H$ in $L^1_{F,G}(X^{T_k})$ as $n \to \infty$, by Theorem 3.10, for each k there exists a subsequence $(n(r,k))_r$ such that

as $r \to \infty$, uniformly on compact time intervals. One may assume that $(n(r, k+1))_r$ is a subsequence of $(n(r, k))_r$. By a diagonalization argument, and the fact that $T_k \nearrow \infty$, the conclusion follows.

Next we consider uniformly convergent sequences of locally integrable processes.

4.7 THEOREM. Assume that X is locally summable relative to (F, G). Let $(H_n)$ be a sequence from $L^1_{F,G}(X)_{loc}$ and let H be an F-valued process such that $H_n \to H$ uniformly on $\mathbb{R}_+$.

Then

(a) H is locally integrable with respect to X;

(b) For each t, $(H_n \cdot X)_t \to (H \cdot X)_t$ in probability;

(c) There is a subsequence $(n_r)$ such that $(H_{n_r} \cdot X)_t \to (H \cdot X)_t$, as $r \to \infty$, uniformly on compact time intervals.

Proof. We choose N so that $|H_n - H_N| \le 1$ for $n \ge N$. Let $(T_k)$ be a determining sequence for the local integrability of $H_N$ with respect to X. Since $1_{[0,T_k]}H_N \in L^1_{F,G}(X^{T_k})$ for each k, we deduce that $1_{[0,T_k]}H_n \in \mathcal{F}_{F,G}(X^{T_k})$ for $n \ge N$. Note that $1_{[0,T_k]}H_n$ is locally integrable with respect to $X^{T_k}$, hence by Theorem 4.3, $1_{[0,T_k]}H_n$ is integrable with respect to $X^{T_k}$. Since $1_{[0,T_k]}H_n \to 1_{[0,T_k]}H$ uniformly as $n \to \infty$, by Theorem 3.13 it follows that, for each k, we have $1_{[0,T_k]}H \in L^1_{F,G}(X^{T_k})$ and $1_{[0,T_k]}H_n \to 1_{[0,T_k]}H$ in $L^1_{F,G}(X^{T_k})$ as $n \to \infty$. The conclusion follows by applying Theorem 4.6.


Another application of Theorem 4.6 is the Lebesgue theorem for locally

integrable processes. A Vitali convergence theorem can also be proved along

the same lines.

4.8 THEOREM. (Lebesgue) Assume that X is locally p-summable relative to (F, G). Let $(H_n)$ be a sequence of F-valued processes which are locally integrable with respect to X, let H be a predictable, F-valued process, and let $\phi \in \mathcal{F}_{\mathbb{R}}(B, (I_X)_{F,G})_{loc}$.

Assume that

(1) $|H_n| \le \phi$, for each n;

and either

(2) $H_n \to H$ locally uniformly;

or

(2') $H_n \to H$ pointwise and the family of measures $(I_X)_{F,G}$ is locally uniformly $\sigma$-additive.

Then

(a) H is locally integrable with respect to X;

(b) For each t, we have $(H_n \cdot X)_t \to (H \cdot X)_t$ in probability;

(c) There is a subsequence $(n_r)$ such that $(H_{n_r} \cdot X)_t \to (H \cdot X)_t$, as $r \to \infty$, a.s., uniformly on any compact time interval.

Proof. The proof uses a sequence $(T_k)$ of stopping times which is determining for the local p-summability of X and, at the same time, such that for each k we have $H_n \to H$ uniformly on $[0, T_k]$ in the case of (2), and such that $(I_{X^{T_k}})_{F,G}$ is uniformly $\sigma$-additive in the case of (2'). We may also assume that $\phi \in \mathcal{F}_{\mathbb{R}}(B, (I_{X^{T_k}})_{F,G})$ for each k. With this setting in place, the conclusions follow by applying Theorems 3.15 and 4.6.

As an application of Theorem 4.7, we shall deduce the local integrability of

any caglad, adapted process, with respect to any locally p-summable process.

4.9 THEOREM. Any F-valued, caglad, adapted process is locally integrable with respect to any process X which is locally p-summable relative to (F, G).

More precisely, if X is locally p-summable relative to (F, G) and if $H : \mathbb{R}_+ \to F$ is cadlag and adapted, then there exists a sequence $(H_n)$ of


F-valued $\sigma$-elementary processes converging uniformly to $H_-$. For every t, we have $(H_n \cdot X)_t \to (H_- \cdot X)_t$ in probability. Moreover, there is a subsequence $(n_r)$ such that $(H_{n_r} \cdot X)_t \to (H_- \cdot X)_t$ a.s. as $r \to \infty$, uniformly on every compact time interval.

Proof. Let $K : \mathbb{R}_+ \to F$ be caglad and adapted. Then $H = K^+$ is cadlag, adapted, and $K = H_-$. Let $b_n \searrow 0$ and define the stopping times $v(n,0) = 0$ and, for $k \ge 0$,

$v(n, k+1) = \inf\{t > v(n,k) : |H_t - H_{v(n,k)}| > b_n\} \wedge (b_n + v(n,k)).$

These stopping times have the following properties:

(i) for each n we have $v(n,k) \nearrow \infty$, as $k \to \infty$;

(ii) $\lim_n \sup_k (v(n,k+1) - v(n,k)) = 0$;

(iii) $|H_t - H_{v(n,k)}| \le b_n$, for $t \in [v(n,k), v(n,k+1))$.

For each n, define the $\sigma$-elementary process
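The display defining $H_n$ was lost in the transcription; a natural reconstruction from the times $v(n,k)$ (an assumption consistent with properties (i)-(iii), not the printed formula) is:

```latex
% Presumed sigma-elementary approximant (reconstruction):
H_n \;=\; H_0\,1_{\{0\}}
   \;+\; \sum_{k \ge 0} H_{v(n,k)}\, 1_{(v(n,k),\, v(n,k+1)]} .
% By (iii), |H_{s-} - H_{v(n,k)}| \le b_n for s in (v(n,k), v(n,k+1)],
% hence |H_n - H_-| \le b_n everywhere, giving uniform convergence.
```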

From properties (i), (ii), and (iii), it follows that $H_n \to H_-$ uniformly. The conclusion then follows from Theorem 4.7.

Additional properties

We shall state some properties that are extensions of corresponding properties proved in section 3 for integrable processes.

The following theorem follows from Theorem 3.7.

4.10 THEOREM. Assume that X is locally p-summable relative to (F, G) and let $S \le T$ be stopping times. Then:

(1) $(h 1_{(S,T]} H) \cdot X = h[(1_{(S,T]} H) \cdot X]$ in each of the following two cases:

(a) h is a real valued, $\mathcal{F}_S$-measurable random variable, and $H \in L^1_{F,G}(X)_{loc}$;

(b) h is an F-valued, $\mathcal{F}_S$-measurable random variable and $H \in L^1_{\mathbb{R},E}(X)_{loc} \cap \mathcal{F}_{\mathbb{R}}(I_{F,G})_{loc}$.

(2) If, in addition, S is predictable and h is $\mathcal{F}_{S-}$-measurable in (a) and (b) above, then

$(h 1_{[S,T]} H) \cdot X = h[(1_{[S,T]} H) \cdot X].$


For the proof of the next theorem, which states some properties of the

stopped process, we use Theorem 3.9.

4.11 THEOREM. Assume that X is locally p-summable relative to (F, G), and let T be a stopping time. Then:

(a) $X^T$ is locally p-summable relative to (F, G) and also relative to $(\mathbb{R}, E)$, and we have

$X^T = 1_{[0,T]} \cdot X.$

(a') If T is predictable, then $X^{T-}$ is locally p-summable relative to (F, G) and relative to $(\mathbb{R}, E)$ and

$X^{T-} = 1_{[0,T)} \cdot X.$

(b) An F-valued predictable process H belongs to $L^1_{F,G}(X^T)_{loc}$ if and only if $1_{[0,T]}H \in L^1_{F,G}(X)_{loc}$.

(b') Assume T is predictable. An F-valued predictable process H belongs to $L^1_{F,G}(X^{T-})_{loc}$ if and only if $1_{[0,T)}H \in L^1_{F,G}(X)_{loc}$.

(c) If $H \in L^1_{F,G}(X)_{loc}$, then $H \in L^1_{F,G}(X^T)_{loc}$ and $1_{[0,T]}H \in L^1_{F,G}(X)_{loc}$, and we have

$(H \cdot X)^T = H \cdot X^T = (1_{[0,T]}H) \cdot X.$

(c') If T is predictable and if $H \in L^1_{F,G}(X)_{loc}$, then $H \in L^1_{F,G}(X^{T-})_{loc}$ and $1_{[0,T)}H \in L^1_{F,G}(X)_{loc}$, and we have

$(H \cdot X)^{T-} = H \cdot X^{T-} = (1_{[0,T)}H) \cdot X.$

Next we state the associativity property of the stochastic integral.

4.12 THEOREM. Let $X : \mathbb{R}_+ \to E \subset L(F, G)$ be a cadlag, adapted process.

I) Assume that X is locally p-summable relative to (F, G) (hence relative to $(\mathbb{R}, E)$) and let $H \in L^1_{\mathbb{R},E}(X)_{loc} \cap \mathcal{F}_{\mathbb{R}}(I_{F,G})_{loc}$. Assume there is a sequence $(T_n)$ of stopping times, determining for the local integrability of H with respect to X, such that for each n and each $A \in \mathcal{P}$ we have $\int_A 1_{[0,T_n]}H\,dI_{X^{T_n}} \in L^p_G$.

Then:

(a) $H \cdot X$ is locally p-summable relative to (F, G);


(b) An F-valued, predictable process K belongs to $L^1_{F,G}(H \cdot X)_{loc}$ if and only if $KH \in L^1_{F,G}(X)_{loc}$. In this case we have

$K \cdot (H \cdot X) = (KH) \cdot X.$

II) Assume that X is locally p-summable relative to (F, G) and let $H \in L^1_{F,G}(X)_{loc}$. Assume there is a sequence $(T_n)$ of stopping times, determining for the local integrability of H with respect to X, such that for each n and each $A \in \mathcal{P}$ we have $\int_A 1_{[0,T_n]}H\,dI_{X^{T_n}} \in L^p_G$. Then:

(a) $H \cdot X$ is locally p-summable relative to $(\mathbb{R}, G)$.

(b) If K is a real valued, predictable process and $KH \in L^1_{F,G}(X)_{loc}$, then $K \in L^1_{\mathbb{R},G}(H \cdot X)_{loc}$. In this case we have

$K \cdot (H \cdot X) = (KH) \cdot X.$

We use Theorem 3.19 to deduce that $(1_{[0,T_n]}H) \cdot X^{T_n}$ is p-summable, and that the associativity holds locally.

The formula for the jumps of the stochastic integral can be established

using Theorem 3.20.

4.13 THEOREM. Assume that X is locally p-summable relative to (F, G) and let $H \in L^1_{F,G}(X)_{loc}$. Then

$\Delta(H \cdot X) = H\,\Delta X.$

The property of being a local martingale is inherited by the stochastic integral, if X is a local martingale.

4.14 THEOREM. (a) Assume that X is locally p-summable relative to (F, G) and let $H \in L^1_{F,G}(X)_{loc}$. If X is a local martingale, then $H \cdot X$ is a local martingale.

(b) If X is a martingale and if for each $t \in \mathbb{R}_+$, $X^t$ is p-summable relative to (F, G) and $1_{[0,t]}H \in L^1_{F,G}(X^t)$, then $H \cdot X$ is a martingale.

The proof of the above theorem uses an appropriate sequence of stopping

times and Theorem 3.21.


Semi-summable processes

As we have seen in this section and in section 2, the stochastic integral H· X

can be defined when X belongs to one of the following two classes of processes:

(1) locally p-summable processes; (2) processes with finite variation.

Putting these two classes together we obtain the following definition.

4.15 DEFINITION. We say that a process $Z : \mathbb{R}_+ \to E \subset L(F, G)$ is semi-p-summable relative to (F, G) if it is of the form $Z = X + Y$, where X is locally p-summable relative to (F, G) and Y is a cadlag, adapted process with finite variation. If $p = 1$ above, we say that Z is semi-summable relative to (F, G).

An F-valued process H is said to be locally integrable with respect to a semi-p-summable process Z if there exists a decomposition $Z = X + Y$, as above, such that both integrals $H \cdot X$ and $H \cdot Y$ are defined. In this case, we define the stochastic integral $H \cdot Z$ by

$H \cdot Z = H \cdot X + H \cdot Y.$

The definition of the stochastic integral is independent of the decomposition $Z = X + Y$.

4.16 THEOREM. If E is a Hilbert space, then any semimartingale is semi-summable relative to any embedding $E \subset L(F, G)$ with G a Hilbert space.

A real valued process is semi-summable if and only if it is a semimartingale.

Proof. If Z is an E-valued semimartingale, then $Z = M + V$, where M is a locally square integrable martingale and V is a process of finite variation. Then M is locally summable; hence Z is semi-summable.

Conversely, suppose Z is a real valued, semi-summable process. Let $Z = X + Y$, where X is locally summable relative to $(\mathbb{R}, \mathbb{R})$ and Y has finite variation. We can assume X is summable, by stopping it at a convenient sequence $T_n \nearrow \infty$ of stopping times. Taking $g \equiv 1$ in Theorem 2.5(5), we deduce that X is a quasimartingale on $[0, \infty]$, hence Z is a semimartingale.


Remark. For a Banach space - or even a Hilbert space - the concept of semi-summability is more general than that of semimartingale, as can be seen from the following example:

Example. Let $\Omega = \{\omega\}$ consist of one element and $\mathcal{F}_t = \mathcal{F} = \{\Omega, \emptyset\}$ for each $t \ge 0$. Then any local martingale is constant (hence of finite variation). Let E be any infinite dimensional Banach space; then $L^p_E(P) = E$. Let $(x_n)$ be a sequence in E such that the series $\sum x_n$ is unconditionally convergent, but $\sum |x_n| = \infty$. Such a sequence exists by the Dvoretzky-Rogers theorem. For each n set $e_n = \sum_{i \ge n} x_i$; then $\lim e_n = 0$ and $x_n = e_n - e_{n+1}$.

Let $s_n \nearrow 1$ with $s_1 = 0$ and define the process

This process is cadlag and has infinite variation, equal to the sum of the norms of the jumps:

It follows that X is not a semimartingale.
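The variation computation behind this claim can be sketched as follows; the formula for X is an assumption, since the display defining the process was lost:

```latex
% Sketch, assuming the lost display was  X_t = e_n  for  t \in [s_n, s_{n+1}):
\operatorname{var}(X)
  \;\ge\; \sum_{n} \bigl| X_{s_{n+1}} - X_{s_{n+1}-} \bigr|
  \;=\; \sum_{n} | e_{n+1} - e_n |
  \;=\; \sum_{n} | x_n | \;=\; \infty ,
% so X has infinite variation, although each individual jump
% tends to 0 because e_n -> 0.
```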

Now we show that X is summable relative to $(\mathbb{R}, E)$. For each interval $(a,b] \subset [0,1]$, let n and m be such that $s_n \le a < s_{n+1} \le s_m \le b < s_{m+1}$, and set $\Delta(a,b] = \{n, n+1, \dots, m-1\}$. Then $I_X((a,b]) = e_n - e_m = \sum_{i \in \Delta(a,b]} x_i$.

If $A \in \mathcal{R}$ and $A = \bigcup_{1 \le i \le k} (a_i, b_i]$ with the $(a_i, b_i]$ mutually disjoint, set $\Delta(A) = \bigcup_{1 \le i \le k} \Delta(a_i, b_i]$. Then $I_X(A) = \sum_{i \in \Delta(A)} x_i$. If now $(A_n)$ is a sequence of mutually disjoint sets from $\mathcal{R}$, then $\sum_n I_X(A_n) = \sum_n \sum_{i \in \Delta(A_n)} x_i = \sum_{i \in \bigcup_n \Delta(A_n)} x_i$, and this series is convergent in E, since the series $\sum_n x_n$ is unconditionally convergent. It follows that $I_X$ is strongly additive on $\mathcal{R}$. If $(A_n)$ is a sequence of disjoint sets from $\mathcal{R}$ with union $A \in \mathcal{R}$, then $\Delta(A) = \bigcup_n \Delta(A_n)$, therefore $\sum_n I_X(A_n) = \sum_{i \in \Delta(A)} x_i = I_X(A)$, hence $I_X$ is $\sigma$-additive on $\mathcal{R}$. By Theorem AI.1, $I_X$ can be extended to a $\sigma$-additive measure on $\mathcal{P}$; hence $I_X$ is bounded on $\mathcal{P}$, therefore $I_X$ has finite semivariation relative to $(\mathbb{R}, L^p_E)$. It follows that X is summable relative to $(\mathbb{R}, E)$.

A stochastic integral $H \cdot X$ can be defined by using our approach, while the classical approach cannot be applied in this case.


4.17 THEOREM. If $Z : \mathbb{R}_+ \to E \subset L(F, G)$ is semi-p-summable relative to (F, G), then any F-valued, caglad, adapted process is locally integrable with respect to Z.

This follows from the fact that the caglad adapted processes are locally

bounded.

All properties stated in sections 1 and 3 that are common to processes of finite variation and to locally p-summable processes are obviously valid for semi-p-summable processes. Among these properties we mention the associativity property $K \cdot (H \cdot X) = (KH) \cdot X$ and the jumps property $\Delta(H \cdot X) = H\,\Delta X$.

Appendix 1: General integration theory in Banach spaces.

In this section we shall present a theory of integration in which both the integrand and the measure are Banach-valued. The measure will be countably additive with finite semivariation. The basis for this theory is essentially found in [BD.2]; however, in order to apply the general theory to stochastic integration, a further development and new results were required. In this section, the necessary extension of the general theory is presented.

The framework for this section consists of a nonempty set S, a ring $\mathcal{R}$ of subsets of S, and the $\sigma$-algebra $\Sigma$ generated by $\mathcal{R}$. We assume that $S = \bigcup_n S_n$, with $S_n \in \mathcal{R}$. We shall use the notation established in section 1.

Strong additivity

Let $m : \mathcal{R} \to E$ be a finitely additive measure. We say that m is strongly additive if for any sequence $(A_n)$ of disjoint sets from $\mathcal{R}$, the series $\sum m(A_n)$ is convergent in E (or, equivalently, if $m(A_n) \to 0$ for any sequence $(A_n)$ of disjoint sets from $\mathcal{R}$).

The reader is referred to [BD.1] for a more complete study of strong additivity. We list below some properties that will be used in the sequel:

1) m is strongly additive iff for any increasing (respectively decreasing) sequence $(A_n)$ from $\mathcal{R}$, $\lim_n m(A_n)$ exists in E.


2) A $\sigma$-additive measure defined on a $\sigma$-algebra is strongly additive; but if its domain is simply a ring, this need not be true.

3) A strongly additive measure on a ring is bounded; if E does not contain a copy of $c_0$, then the converse is true (cf. Theorem AI.2).

4) Any finitely additive measure with bounded variation is strongly additive.

Strong additivity plays an important role in the problem of the extension of a measure from $\mathcal{R}$ to $\Sigma$ (see Theorem AI.1).

Uniform $\sigma$-additivity

A family $(m_\alpha)_{\alpha \in I}$ of E-valued measures on the ring $\mathcal{R}$ is said to be uniformly $\sigma$-additive if for any sequence $(A_n)$ of mutually disjoint sets from $\mathcal{R}$ with union in $\mathcal{R}$ we have

where the series is uniformly convergent with respect to $\alpha$; or, equivalently, if for every decreasing sequence $A_n \searrow \emptyset$ of sets from $\mathcal{R}$ we have

uniformly with respect to $\alpha$.

A finitely additive measure $m : \mathcal{R} \to E$ is $\sigma$-additive iff the family $\{x^*m : x^* \in E_1^*\}$ of scalar measures is uniformly $\sigma$-additive. The measure $x^*m : \mathcal{R} \to \mathbb{R}$ is defined by

$(x^*m)(A) = \langle m(A), x^* \rangle, \quad \text{for } A \in \mathcal{R}.$

A family $(m_\alpha)_{\alpha \in I}$ of E-valued measures on a $\sigma$-algebra $\Sigma$ is uniformly $\sigma$-additive iff there is a control measure $\lambda$, that is, a positive, $\sigma$-additive measure $\lambda$ on $\Sigma$ such that $\tilde m_\alpha \ll \lambda$ uniformly with respect to $\alpha$ and $\lambda(A) \le \sup_\alpha \tilde m_\alpha(A)$ for $A \in \Sigma$, where $\tilde m_\alpha$ is the semivariation of $m_\alpha$ (see [B-D.1]).

In particular, any $\sigma$-additive measure $m : \Sigma \to E$ has a control measure $\lambda$ such that $\tilde m \ll \lambda \le \tilde m$.


Measures with finite variation

Let $m : \mathcal{R} \to E$ be a finitely additive measure. The variation of m is a set function $|m| : \mathcal{R} \to \overline{\mathbb{R}}_+$ defined for every set $A \in \mathcal{R}$ by

$|m|(A) = \sup \sum_i |m(A_i)|,$

where the supremum is taken over all finite families $(A_i)$ of disjoint subsets from $\mathcal{R}$ with union A.

The variation $|m|$ is additive; $|m|$ is $\sigma$-additive iff m is $\sigma$-additive. The measure m has finite variation (resp. bounded variation) on $\mathcal{R}$ if $|m|(A) < \infty$ for every $A \in \mathcal{R}$ (respectively $\sup\{|m|(A) : A \in \mathcal{R}\} < \infty$). Note that if m is real valued and bounded, then m has bounded variation.

Now let $m : \Sigma \to E$ be $\sigma$-additive with finite variation $|m|$. We say that a set or a function is m-negligible, m-measurable, or m-integrable if it has the same property with respect to $|m|$. For any Banach space F, we denote $L^1_F(m) = L^1_F(|m|)$, and endow this space with the seminorm $\|f\|_1 = \int |f|\,d|m|$. If G is another Banach space such that $E \subset L(F, G)$, then for $f \in L^1_F(m)$ we can define the integral $\int f\,dm \in G$, and we have

$\left|\int f\,dm\right| \le \int |f|\,d|m| = \|f\|_1.$

This is done by defining the integral in the obvious way for simple functions, which are dense in $L^1_F(m)$, and then extending the integral by continuity to the whole space $L^1_F(m)$.

Stieltjes measures

An important particular case of measures with finite variation is that of the Stieltjes measures on a subinterval of $\mathbb{R}$.

Let $I \subset \mathbb{R}$ be an interval containing its left endpoint, of the form $[a,b)$ or $[a,b]$ with $a < b \le \infty$, and let $f : I \to E$ be a function.

We say that f has finite variation on I if the variation $V_{[s,t]}(f)$ of f on any compact interval $[s,t] \subset I$ is finite. We say f has bounded variation on I if $V_I(f) := \sup\{V_{[s,t]}(f) : [s,t] \subset I\} < \infty$.


If f has finite variation on I, we define the variation function of f to be the function $|f| : I \to \mathbb{R}_+$ defined by

$|f|(t) = |f(a)| + V_{[a,t]}(f), \quad \text{for } t \in I.$

The variation $|f|$ of f is increasing and satisfies

$|f(t) - f(s)| \le |f|(t) - |f|(s), \quad \text{for } s < t.$

Moreover, f is right (or left) continuous iff $|f|$ has the same property (see [D.3]).
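The displayed inequality follows from the triangle inequality together with additivity of the variation over adjacent intervals; a one-line check:

```latex
% For s < t in I:
|f(t) - f(s)| \;\le\; V_{[s,t]}(f)
  \;=\; V_{[a,t]}(f) - V_{[a,s]}(f)
  \;=\; |f|(t) - |f|(s) .
```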

Let $\mathcal{R}$ be the ring generated by the intervals of the form $[a,t] \subset I$. We define a measure $\mu_f : \mathcal{R} \to E$ by

$\mu_f([a,t]) = f(t) - f(a).$

Then $\mu_f((s,t]) = f(t) - f(s)$, if $(s,t] \subset I$. The measure $\mu_f$ has finite (resp. bounded) variation $|\mu_f|$ on $\mathcal{R}$ iff f has finite (resp. bounded) variation $|f|$ on I. In this case we have

$|\mu_f|(A) = \mu_{|f|}(A), \quad \text{for } A \in \mathcal{R}$

(see [D.1], p. 363).

If f is right continuous and has bounded variation $|f|$, then $\mu_f$ and $\mu_{|f|}$ have $\sigma$-additive extensions to the $\sigma$-algebra $\mathcal{B}(I)$, denoted by the same letters, and we still have $|\mu_f| = \mu_{|f|}$ on $\mathcal{B}(I)$.

Assume that f is right continuous and has bounded variation, and assume $E \subset L(F, G)$. A function $g : I \to F$ is said to be Stieltjes integrable with respect to f if it is integrable with respect to $\mu_f$, that is, with respect to $|\mu_f| = \mu_{|f|}$. In this case the integral $\int g\,d\mu_f$ is called the Stieltjes integral of g with respect to f and is denoted $\int g\,df$ or $\int_I g\,df$. To say that g is Stieltjes integrable with respect to f means that g is $\mu_f$-measurable and $\int |g|\,d|f| < \infty$. In this case

$\left|\int g\,df\right| \le \int |g|\,d|f|.$


Extensions of measures

If $m : \mathcal{R} \to E$ is $\sigma$-additive with bounded variation $|m|$, then it has a unique $\sigma$-additive extension $m' : \Sigma \to E$, with bounded variation $|m'|$, and $|m'|$ is the unique $\sigma$-additive extension of $|m|$ from $\mathcal{R}$ to $\Sigma$ (see [D.1], p. 62). If m is $\sigma$-additive but does not have finite variation on $\mathcal{R}$, then a $\sigma$-additive extension to $\Sigma$ does not necessarily exist.

We now present some extension theorems for Banach-valued measures, which will be applied to stochastic measures. These theorems are an improvement over the existing extension theorems (which were stated for the particular case when $Z = E^*$).

AI.1 THEOREM. Let $m : \mathcal{R} \to E$ be a finitely additive measure. Suppose that $Z \subset E^*$ is norming for E. The following assertions are equivalent:

(a) m is strongly additive on $\mathcal{R}$, and for every $x^* \in Z$ the scalar measure $x^*m$ is $\sigma$-additive on $\mathcal{R}$;

(b) m is strongly additive and $\sigma$-additive on $\mathcal{R}$;

(c) m can be extended uniquely to a $\sigma$-additive measure $m_1 : \Sigma \to E$.

Proof. The proof is done in the following way: (a) $\Rightarrow$ (b) $\Rightarrow$ (c) $\Rightarrow$ (a). Assume (a) and prove (b), that is, prove that m is $\sigma$-additive on $\mathcal{R}$. Let $A_n \in \mathcal{R}$ be such that $A_n \searrow \emptyset$. Since m is strongly additive, $\lim_n m(A_n) = x$ exists in E. Let $x^* \in Z$; since $x^*m$ is $\sigma$-additive, it follows that $x^*x = 0$, hence $x = 0$. Thus m is $\sigma$-additive on $\mathcal{R}$, that is, (b).

Assume now (b) and prove (c). We deduce first that the family of scalar measures $\{x^*m : x^* \in Z_1\}$ is uniformly $\sigma$-additive on $\mathcal{R}$. Since m is strongly additive, it is bounded on $\mathcal{R}$. Then each scalar measure $x^*m$ is bounded on $\mathcal{R}$, hence it has bounded variation on $\mathcal{R}$; being also $\sigma$-additive on $\mathcal{R}$, it can be extended uniquely to a $\sigma$-additive measure $m_{x^*}$ on $\Sigma$ with bounded variation $|m_{x^*}|$, which is equal to the extension of $|x^*m|$ to $\Sigma$.

Now we assert that the family of measures $\{|m_{x^*}| : x^* \in Z_1\}$ is uniformly $\sigma$-additive on $\Sigma$. If not, there exist an $\epsilon > 0$, a sequence of sets $A_n \in \Sigma$ with $A_n \searrow \emptyset$, and a sequence $(x_n^*)$ from $Z_1$ such that, if we denote $\mu_n = |m_{x_n^*}|$, then $\mu_n(A_n) > \epsilon$ for each n. Let $\mathcal{R}_0$ be a countable subring of $\mathcal{R}$ such that all


the $A_n$ belong to $\sigma(\mathcal{R}_0)$, the $\sigma$-algebra generated by $\mathcal{R}_0$. Let $\lambda = \sum_n 2^{-n}\mu_n$. Then each $\mu_n$ is absolutely continuous with respect to the $\sigma$-additive measure $\lambda$, and the sequence $(\mu_n)$ is uniformly absolutely continuous with respect to $\lambda$ on $\mathcal{R}_0$, since the $\mu_n$ are uniformly $\sigma$-additive on $\mathcal{R}_0$. Then, for the $\epsilon > 0$ above, there is a $\delta > 0$ such that if $B \in \mathcal{R}_0$ and $\lambda(B) < \delta$, then $\mu_n(B) < \epsilon$ for each n. Let now $A \in \sigma(\mathcal{R}_0)$ with $\lambda(A) < \delta$. There is a sequence of disjoint sets $B_n \in \mathcal{R}_0$ such that $A \subset \bigcup_n B_n$ and $\sum_n \lambda(B_n) < \delta$. Let $C_k = \bigcup_{1 \le i \le k} B_i$. Then $\lambda(C_k) < \delta$, hence $\mu_n(C_k) < \epsilon$ for each n. Thus

for each n. In particular, taking $A = A_n$ we obtain $\mu_n(A_n) \le \epsilon$ for each n. But by our choice of $A_n$ and $\mu_n$, we have $\mu_n(A_n) > \epsilon$ for each n, and we have reached a contradiction. Hence the family of measures $\{|m_{x^*}| : x^* \in Z_1\}$ is uniformly $\sigma$-additive on $\Sigma$.

For each $A \in \Sigma$, define $m_1(A) : Z \to \mathbb{R}$ by $\langle z, m_1(A) \rangle = m_z(A)$, for $z \in Z$. Then $m_1(A)$ is a linear functional on Z and

$|\langle z, m_1(A) \rangle| = |m_z(A)| \le |m_z|(S) \le 2\sup\{|zm(B)| : B \in \mathcal{R}\} \le 2|z|c,$

where $c = \sup\{|m(B)| : B \in \mathcal{R}\} < \infty$. Thus $m_1(A) \in Z^*$. Note that $m_1 = m$ on $\mathcal{R}$. Since $\{m_z : z \in Z_1\}$ is uniformly $\sigma$-additive on $\Sigma$, it follows that $m_1$ is $\sigma$-additive on $\Sigma$. Finally, we observe that $m_1$ takes its values in $E \subset Z^*$. To see this, let $\mathcal{C}$ denote the class of subsets A from $\Sigma$ such that $m_1(A) \in E$. Since $\mathcal{C}$ is a monotone class which contains $\mathcal{R}$, we deduce that $\mathcal{C} = \Sigma$. Thus $m_1$ is a $\sigma$-additive extension of m to $\Sigma$. The uniqueness of the extension follows by using a monotone class argument; therefore (c) is proved.

The implication (c) $\Rightarrow$ (a) is evident, and this proves the theorem.

As we mentioned earlier, any strongly additive measure on a ring is

bounded. We next prove a partial converse.

AI.2 THEOREM. If $m : \mathcal{R} \to E$ is a bounded finitely additive measure, and if E does not contain a copy of $c_0$, then m is strongly additive.

Proof. Let $(A_n)$ be a sequence of disjoint sets from $\mathcal{R}$. It suffices to show that the series $\sum_n m(A_n)$ is convergent in E. For each $x^* \in E^*$, the scalar measure $x^*m$ is bounded on $\mathcal{R}$, hence it has bounded variation $|x^*m|$. Thus

$\sum_{1 \le i \le n} |x^*m(A_i)| \le |x^*m|\Bigl(\bigcup_{1 \le i \le n} A_i\Bigr) \le \sup\{|x^*m|(B) : B \in \mathcal{R}\} < \infty.$

Hence the series $\sum_{1 \le i < \infty} x^*m(A_i)$ is unconditionally convergent. Since $E \not\supset c_0$, by the Bessaga-Pelczynski theorem [B-P], the series $\sum_{1 \le i < \infty} m(A_i)$ converges. Thus m is strongly additive.
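For reference, the form of the Bessaga-Pelczynski theorem invoked in the last step is, in my paraphrase:

```latex
% Bessaga--Pelczynski: in a Banach space E containing no copy of c_0,
% every weakly unconditionally Cauchy series converges unconditionally:
\sup_{\|x^*\| \le 1} \sum_{i} |x^*(y_i)| < \infty
  \;\Longrightarrow\; \sum_{i} y_i \ \text{converges unconditionally in } E ,
% applied above with  y_i = m(A_i).
```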

Combining the preceding two theorems, we obtain the following extension

theorem.

AI.3 THEOREM. Assume that E does not contain a copy of $c_0$ and let $Z \subset E^*$ be a norming space for E. If $m : \mathcal{R} \to E$ is a bounded finitely additive measure, and if $x^*m$ is $\sigma$-additive on $\mathcal{R}$ for each $x^* \in Z$, then m can be extended uniquely to a $\sigma$-additive E-valued measure on $\Sigma$.

The following particular case of the preceding theorem is used in the construction of the stochastic integral.

AI.4 THEOREM. Assume that E does not contain a copy of $c_0$ and let $Z \subset E^*$ be a norming space for E. Let $(\Omega, \mathcal{F}, \mu)$ be a measure space, $1 \le p < \infty$, and let $m : \mathcal{R} \to L^p_E(\mu)$ be a finitely additive measure. For each $z \in Z$ define the measure $zm : \mathcal{R} \to L^p(\mu)$ by $(zm)(A) = \langle m(A), z \rangle$, for $A \in \mathcal{R}$. If m is bounded on $\mathcal{R}$ and if for each $z \in Z$ the measure zm is $\sigma$-additive on $\mathcal{R}$, then m can be uniquely extended to a $\sigma$-additive measure $m_1 : \Sigma \to L^p_E(\mu)$.

Proof. By a theorem of Kwapien [Kw], $L^p_E$ does not contain a copy of $c_0$ if E does not contain a copy of $c_0$. Let M be the space of Z-valued, $\mathcal{F}$-measurable simple functions. Then $M \subset L^{p'}_{E^*}(\mu) \subset (L^p_E(\mu))^*$ (with $p'$ the conjugate exponent), and M is norming for $L^p_E(\mu)$. Let $f \in M$. Consider the scalar measure fm defined on $\mathcal{R}$ by

$(fm)(A) = \int \langle m(A), f \rangle\,d\mu.$

Note that if $A_n \in \mathcal{R}$ and $A_n \searrow \emptyset$, then for each $z \in Z$ we have $\langle m(A_n), z \rangle \to 0$ in $L^p(\mu)$ as $n \to \infty$. Hence $(fm)(A_n) \to 0$ as $n \to \infty$; that is, fm is $\sigma$-additive on $\mathcal{R}$. We can then apply the preceding theorem, replacing E and Z by $L^p_E(\mu)$ and M respectively.


The semivariation

Let m : R → E ⊂ L(F, G) be finitely additive. For every set A ∈ R, we define the semivariation m̃_{F,G}(A) of m on A, relative to the pair (F, G), by

m̃_{F,G}(A) = sup |Σ_{i∈I} m(A_i) x_i|,

where the supremum is taken over all finite families (A_i)_{i∈I} of disjoint sets from R, with union A, and all finite families (x_i)_{i∈I} of elements from F₁. We thus obtain a set function m̃_{F,G} : R → [0, +∞]. Sometimes the semivariation m̃_{F,G} is denoted by svar_{F,G} m. Note that

m̃_{F,G}(A) = sup |∫ s dm|,

where the supremum is taken over all F-valued simple R-measurable functions s such that |s| ≤ 1_A, where the integral ∫ s dm is defined in the usual manner.

We say that m has finite (respectively bounded) semivariation relative to (F, G) if m̃_{F,G}(A) < ∞ for every A ∈ R (respectively sup{m̃_{F,G}(A) : A ∈ R} < ∞).

If E = L(ℝ, E), we sometimes write svar m, or m̃, instead of m̃_{ℝ,E}, and we call it simply the semivariation of m. In this case m has bounded semivariation on R if and only if m is bounded on R; more precisely, for every A ∈ R, we have

m̃(A) ≤ 2 sup{|m(B)| : B ∈ R, B ⊂ A}.
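On a finite ring both quantities above can be computed by brute force. The following sketch is a toy illustration only (the atom values are arbitrary choices, not data from the text): E = ℝ² viewed as L(ℝ, ℝ²), so F = ℝ and the unit ball F₁ is [−1, 1], and the supremum defining the semivariation is attained on the atomic partition with coefficients ±1.

```python
from itertools import product, combinations
from math import hypot

# Toy finitely additive measure on S = {0, 1, 2} with values in R^2,
# determined by its values on the atoms (arbitrary illustrative vectors).
atoms = {0: (1.0, 0.0), 1: (0.0, 1.0), 2: (-1.0, 1.0)}

def m(A):
    """m(A) = sum of atom values; finitely additive by construction."""
    return (sum(atoms[s][0] for s in A), sum(atoms[s][1] for s in A))

def norm(v):
    return hypot(v[0], v[1])

def semivariation(A):
    """sup |sum_i x_i m(A_i)| over disjoint A_i with union A, |x_i| <= 1.
    On a finite ring the sup is attained on the atomic partition with
    coefficients x_i = +/-1, so a search over sign patterns suffices."""
    A = sorted(A)
    best = 0.0
    for signs in product((-1.0, 1.0), repeat=len(A)):
        v = (sum(e * atoms[s][0] for e, s in zip(signs, A)),
             sum(e * atoms[s][1] for e, s in zip(signs, A)))
        best = max(best, norm(v))
    return best

S = {0, 1, 2}
subsets = [set(c) for r in range(4) for c in combinations(S, r)]
sv = semivariation(S)                       # here 2*sqrt(2)
bound = 2 * max(norm(m(B)) for B in subsets)
assert sv <= bound                          # the 2*sup bound above
```

The assertion checks the displayed inequality m̃(A) ≤ 2 sup{|m(B)| : B ⊂ A} for A = S in this toy case.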

If E ⊂ L(F, G), then we have ([D], pages 51, 54):

m̃_{ℝ,E}(A) ≤ m̃_{F,G}(A) ≤ |m|(A), for A ∈ R.

In particular, for a real measure m : R → ℝ = L(ℝ, ℝ) we have m̃ = |m|.

Computation of the semivariation

Let m : R → E ⊂ L(F, G) be a finitely additive measure, and let Z ⊂ G* be a norming space for G. For each A ∈ R we have m(A) : F → G. Consider the adjoint m(A)* : G* → F*. For each x ∈ F and y* ∈ G*, we have

⟨m(A)x, y*⟩ = ⟨x, m(A)* y*⟩.


Stochastic Integration in Banach Spaces 97

We denote by m_{y*} : R → F* the finitely additive measure defined by m_{y*}(A) = m(A)* y* for A ∈ R. In particular, for z ∈ Z, we have, for m_z : R → F*,

⟨m(A)x, z⟩ = ⟨x, m_z(A)⟩, for x ∈ F and A ∈ R.

One can show ([D.1], page 55) that

m̃_{F,G}(A) = sup{|m_z|(A) : z ∈ Z₁},

where |m_z| is the variation set function of m_z. Note that the above equality is independent of the norming space Z ⊂ G*. In particular, we have

m̃_{ℝ,E} = sup{|x*m| : x* ∈ Z₁},

where Z ⊂ E* is a norming space for E.

If m̃_{F,G} is bounded, then each |m_z| is bounded, for z ∈ Z. In this case we define the set m̃_{F,G} of positive measures by

m̃_{F,G} = {|m_z| : z ∈ Z₁}.

Note that m̃_{F,G} depends upon Z ⊂ G*.
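The formula m̃_{F,G}(A) = sup{|m_z|(A) : z ∈ Z₁} can be checked numerically in a toy planar case. Everything below is illustrative data (not from the text): E = ℝ² = L(ℝ, ℝ²), so G = ℝ² and Z = G* = ℝ², and the supremum over the unit sphere of G* is approximated by a fine angular grid.

```python
from itertools import product
from math import cos, sin, pi, hypot

# Toy measure given by its atom values in R^2 (arbitrary illustrative choice).
atoms = [(1.0, 0.0), (0.0, 1.0), (-1.0, 1.0)]

# Partition-based semivariation of S: sup over signs +/-1 on the atoms.
lhs = max(hypot(sum(e * v[0] for e, v in zip(s, atoms)),
                sum(e * v[1] for e, v in zip(s, atoms)))
          for s in product((-1.0, 1.0), repeat=len(atoms)))

def var_mz(z):
    """|m_z|(S): total variation of the scalar measure m_z = <m(.), z>."""
    return sum(abs(v[0] * z[0] + v[1] * z[1]) for v in atoms)

# sup over unit vectors z in G*, approximated on an angular grid.
K = 100000
rhs = max(var_mz((cos(2 * pi * k / K), sin(2 * pi * k / K))) for k in range(K))

assert abs(lhs - rhs) < 1e-6   # the two computations of the semivariation agree
```

The grid under-approximates the supremum, so the tolerance accounts only for the mesh size; the agreement illustrates that the equality is independent of how the semivariation is computed.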

We have the following property of the semivariation of the extension of m to Σ.

AI.5 PROPOSITION. Let m : R → E ⊂ L(F, G) be a finitely additive measure with bounded semivariation m̃_{F,G}. If m has a σ-additive extension m' to Σ, then m' has bounded semivariation m̃'_{F,G} on Σ and m̃'_{F,G} is the extension of m̃_{F,G}.

For the proof we use the fact that for every z ∈ Z, the measure m'_z is the extension of m_z and |m'_z| is the extension of |m_z|.

Measures with bounded semivariation

From now on we shall assume that m : Σ → E ⊂ L(F, G) is σ-additive and has bounded semivariation m̃_{F,G}, and that Z ⊂ G* is a norming space for G. To develop the stochastic integral, we shall use an integration theory with respect to m, for functions f : S → F. Observe that since m is σ-additive, it is bounded on Σ and hence m̃_{ℝ,E} is bounded on Σ.


We say that a set A ∈ Σ is m-negligible if m(B) = 0 for every B ⊂ A, B ∈ Σ. Thus A is m-negligible if and only if m̃_{F,G}(A) = 0.

If D is any Banach space, we say that a function f : S → D is m-negligible (or that f = 0, m-a.e.) if it vanishes outside an m-negligible set. This notion is independent of the embedding E ⊂ L(F, G). A subset Q ⊂ S is said to be m̃_{F,G}-negligible if for each z ∈ Z, Q is contained in an |m_z|-negligible set. Note that Q need not belong to Σ.

A function f : S → D is said to be m̃_{F,G}-measurable if it is m_z-measurable for every z ∈ Z. We say f : S → D is m-measurable if it is the m-a.e. limit of a sequence of D-valued, Σ-measurable simple functions.

If f is m-measurable, then it is m̃_{F,G}-measurable. The converse is true if m̃_{F,G} is uniformly σ-additive, as the next proposition shows.

AI.6 PROPOSITION. Suppose that m̃_{F,G} is uniformly σ-additive. Then a function f : S → D is m-measurable if and only if f is m̃_{F,G}-measurable.

Proof. Suppose f is m̃_{F,G}-measurable. Since m̃_{F,G} is uniformly σ-additive, there exists a control measure λ on Σ, of the form λ = Σ cₙ μₙ, for some cₙ ≥ 0 with Σ cₙ = 1, and some μₙ ∈ m̃_{F,G} (see [B-D.1], Lemma 3.1). Let (f₁ₙ) be a sequence of Σ-measurable simple functions converging to f on S − S₁, where S₁ ∈ Σ and μ₁(S₁) = 0; we can assume all the f₁ₙ = 0 on S₁. Let (f₂ₙ) be a sequence of Σ-measurable simple functions converging to f on S₁ − S₂, where S₂ ∈ Σ and μ₂(S₂) = 0; we can assume that all the f₂ₙ = 0 on S₂. Continue in this fashion and obtain, for each i, a sequence (f_{in})_{1≤n<∞} of Σ-measurable simple functions converging to f on S_{i−1} − S_i, where μ_i(S_i) = 0; we can assume that all the f_{in} = 0 on S_i. If S₀ = ∩_{1≤i<∞} S_i, then λ(S₀) = 0, hence S₀ is m-negligible. The sequence (Σ_{1≤i≤n} f_{in})_{1≤n<∞} of Σ-measurable simple functions converges to f m-a.e., hence f is m-measurable.

Remark. Although the set of measures m̃_{F,G} depends upon Z ⊂ G*, the uniform σ-additivity of m̃_{F,G} is equivalent to m̃_{F,G}(Aₙ) → 0 whenever Aₙ ↘ ∅ (using a control measure as in the above proof), and as a result, the uniform σ-additivity of m̃_{F,G} is independent of Z.
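The control-measure construction λ = Σ cₙ μₙ used in the proof above can be sketched numerically on a finite space. The measures μₙ and the weights below are arbitrary illustrative data, not the authors' construction; the point is only that λ controls every μₙ: a λ-null set is μₙ-null for all n.

```python
# A few hand-picked positive measures on a finite space S = {0,...,5}.
S = range(6)
mus = [
    {0: 1.0, 1: 2.0, 2: 0.0, 3: 0.0, 4: 1.0, 5: 0.0},
    {0: 0.0, 1: 0.0, 2: 3.0, 3: 0.0, 4: 0.0, 5: 1.0},
    {0: 0.5, 1: 0.0, 2: 0.0, 3: 0.0, 4: 2.0, 5: 0.0},
]

def total(mu):
    return sum(mu.values())

# Weights c_n proportional to 2^{-(n+1)}, rescaled so they sum to 1,
# applied to the normalized measures mu_n / mu_n(S).
raw = [2.0 ** -(n + 1) for n in range(len(mus))]
tot = sum(raw)
weights = [w / tot for w in raw]
lam = {s: sum(w * mu[s] / total(mu) for w, mu in zip(weights, mus)) for s in S}

def measure(mu, A):
    return sum(mu[s] for s in A)

# lambda is a probability measure, and lambda(A) = 0 forces mu_n(A) = 0.
assert abs(sum(lam.values()) - 1.0) < 1e-12
for A in [{3}, {2, 3}, {3, 5}]:
    if measure(lam, A) == 0:
        assert all(measure(mu, A) == 0 for mu in mus)
```

Here {3} is the only λ-null set among the samples, and indeed every μₙ vanishes on it.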

We shall now extend the definition of m̃_{F,G} to functions. Recall that


m : Σ → E ⊂ L(F, G) is σ-additive with bounded semivariation m̃_{F,G}. For each f : S → D (or ℝ) which is m̃_{F,G}-measurable, define

m̃_{F,G}(f) = m̃_{F,G}(|f|) = sup{|∫ s dm|},

where the supremum is extended over all F-valued, Σ-measurable simple functions s such that |s| ≤ |f| on S. Note that if A ∈ Σ, then m̃_{F,G}(A) = m̃_{F,G}(1_A). We shall use this equality to extend the definition of m̃_{F,G}(A) to any m̃_{F,G}-measurable set A ⊂ S.

We can also define m̃_{F,G}(f) in terms of an arbitrary norming subspace Z ⊂ G*:

AI.7 PROPOSITION. Let f : S → D be any m̃_{F,G}-measurable function and let Z ⊂ G* be a norming subspace for G. Then

m̃_{F,G}(f) = sup{∫ |f| d|m_z| : z ∈ Z₁}.

Proof. If s : S → F is a Σ-measurable simple function such that |s| ≤ |f|, and if z ∈ Z₁, then

|⟨∫ s dm, z⟩| = |∫ s dm_z| ≤ ∫ |s| d|m_z| ≤ ∫ |f| d|m_z|.

Since Z is norming for G, we conclude that

m̃_{F,G}(f) ≤ sup{∫ |f| d|m_z| : z ∈ Z₁}.

Conversely, let ε > 0 and choose a ∈ ℝ such that a < sup{∫ |f| d|m_z| : z ∈ Z₁}. There is a scalar Σ-measurable simple function φ ≤ |f| such that a < ∫ φ d|m_z|, for some z ∈ Z₁. Let φ = Σ_{1≤i≤n} 1_{A_i} a_i, where the A_i are disjoint sets from Σ and a_i > 0. There exists a finite family (B_{ij})_{i,j} of disjoint sets from Σ, such that A_i = ∪_j B_{ij} and

We choose elements x_{ij} ∈ F₁ such that


a < ∫ φ d|m_z| ≤ ⟨∫ s dm, z⟩ + ε ≤ |∫ s dm| + ε ≤ m̃_{F,G}(f) + ε,

since |s| ≤ |f|. Since ε > 0 and a were arbitrary, the result follows.

We now list some properties, whose proofs we omit. For simplicity, write N = m̃_{F,G}.

(1) N is subadditive and positively homogeneous on the space of m̃_{F,G}-measurable functions.

(2) N(f) = N(|f|).

(3) N(f) ≤ N(g) if |f| ≤ |g|.

(4) N(f) = sup{N(f 1_A) : A ∈ Σ} = supₙ{N(f 1_{{|f| ≤ n}})}.

(5) N(sup fₙ) = sup N(fₙ), for every increasing sequence (fₙ) of positive m̃_{F,G}-measurable functions.

(6) N(Σ fₙ) ≤ Σ N(fₙ), for every sequence of positive m̃_{F,G}-measurable functions.

(7) N(lim inf fₙ) ≤ lim inf N(fₙ), for every sequence of positive m̃_{F,G}-measurable functions.

(8) If N(f) < ∞, then f is finite m̃_{F,G}-a.e.

(9) If f : S → D is m̃_{F,G}-measurable and c > 0, then

N({|f| > c}) ≤ (1/c) N(f).

If fₙ, f : S → D are m̃_{F,G}-measurable, we say fₙ → f in m̃_{F,G}-measure if for every ε > 0, we have

m̃_{F,G}({|fₙ − f| > ε}) → 0, as n → ∞.

(10) If N(fₙ − f) → 0, then fₙ → f in m̃_{F,G}-measure and there exists a subsequence (f_{n_k}) converging m̃_{F,G}-a.e. to f (use property (6)).
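Property (9) is a Chebyshev-type bound, and it can be sanity-checked numerically in the classical special case where N(g) = ∫ |g| dμ for a positive measure μ on a finite space (the weights and the function below are arbitrary illustrative data, not from the text):

```python
# Positive measure mu (weights) and a function f on a 4-point space.
weights = {0: 0.5, 1: 1.5, 2: 2.0, 3: 0.25}
f = {0: -3.0, 1: 0.5, 2: 2.0, 3: -0.1}

def N(g):
    """N(g) = integral of |g| against the weights (classical special case)."""
    return sum(abs(g[s]) * w for s, w in weights.items())

# Check N({|f| > c}) <= N(f)/c for several levels c; the level set is
# identified with its indicator function.
for c in (0.1, 0.4, 1.0, 2.5):
    level_set = {s: 1.0 if abs(f[s]) > c else 0.0 for s in f}
    assert N(level_set) <= N(f) / c + 1e-12
```

For these data N(f) = 6.275, so e.g. at c = 2.5 the measure of the level set (here 0.5) is well below 6.275/2.5 = 2.51.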

The Egorov theorem is not valid in general. However, using a control measure, it is valid whenever m̃_{F,G} is uniformly σ-additive.

AI.7 THEOREM. (Egorov) Assume that m̃_{F,G} is uniformly σ-additive and let fₙ, f be D-valued, m̃_{F,G}-measurable functions such that fₙ → f m̃_{F,G}-a.e.


Then

(a) for every m̃_{F,G}-measurable set A, and ε > 0, there exists a set B ∈ Σ with B ⊂ A, such that m̃_{F,G}(A − B) < ε and fₙ → f uniformly on B;

(b) fₙ → f in m̃_{F,G}-measure.

The space of integrable functions

We maintain the framework of a σ-additive measure m : Σ → E ⊂ L(F, G) with finite semivariation m̃_{F,G}, and Z ⊂ G* a norming space for G. Let D be a Banach space. We denote by F_D(m̃_{F,G}) the set of all m̃_{F,G}-measurable functions f : S → D such that m̃_{F,G}(f) < ∞. The mapping f → m̃_{F,G}(f) is a seminorm on the vector space F_D(m̃_{F,G}), which is complete (use property (6)). Note that F_D(m̃_{F,G}) ⊂ L¹_D(|m_z|) continuously, for each z ∈ Z.

The set B_D of bounded, D-valued, m̃_{F,G}-measurable functions is contained in F_D(m̃_{F,G}). In particular, the sets S_D(R) and S_D(Σ), the D-valued, R-measurable, respectively Σ-measurable, simple functions, are contained in F_D(m̃_{F,G}). However, unlike the classical case, these sets are not necessarily dense in B_D for the seminorm m̃_{F,G}. This is due to the fact that the Lebesgue dominated convergence theorem, valid for convergence in m̃_{F,G}-measure, is not valid, in general, for pointwise convergence, unless m̃_{F,G} is uniformly σ-additive.

For any subspace C ⊂ F_D(m̃_{F,G}), we denote by F_D(C, m̃_{F,G}) the closure of C in F_D(m̃_{F,G}), which is also complete. We write F_D(B, m̃_{F,G}), F_D(S(R), m̃_{F,G}), and F_D(S(Σ), m̃_{F,G}) when C is B_D, S_D(R), or S_D(Σ) respectively. Since R generates Σ, we shall see later (AI.11 infra) that

F_D(S(R), m̃_{F,G}) = F_D(S(Σ), m̃_{F,G}),

and if m̃_{F,G} is uniformly σ-additive, then

F_D(S(R), m̃_{F,G}) = F_D(B, m̃_{F,G}).

We shall now list some properties of functions in F_D(m̃_{F,G}), without proofs.

AI.8 THEOREM. (a) If f ∈ F_D(B, m̃_{F,G}), then m̃_{F,G}(f 1_A) → 0 as m̃_{F,G}(A) → 0. The converse is also true if m̃_{F,G} is uniformly σ-additive.


(b) If f ∈ F_D(m̃_{F,G}) and if m̃_{F,G}(f 1_{Aₙ}) → 0 for any sequence of m̃_{F,G}-measurable sets Aₙ ↘ ∅, then f ∈ F_D(B, m̃_{F,G}).

(c) If f : S → D is m̃_{F,G}-measurable and if |f| ≤ g ∈ F_ℝ(B, m̃_{F,G}), then f ∈ F_D(B, m̃_{F,G}).

(d) A function f : S → D belongs to F_D(B, m̃_{F,G}) if and only if f is m̃_{F,G}-measurable and |f| ∈ F_ℝ(B, m̃_{F,G}).

(e) Suppose (fₙ) is a sequence of functions from F_D(m̃_{F,G}) such that fₙ → f uniformly on S. Then f ∈ F_D(m̃_{F,G}) and fₙ → f in F_D(m̃_{F,G}).

(f) If m has finite variation |m| on Σ, then m̃_{F,G} is uniformly σ-additive and F_D(B, m̃_{F,G}) = F_D(m̃_{F,G}).

AI.9 THEOREM. (Vitali) Let (fₙ) be a sequence from F_D(m̃_{F,G}) and let f : S → D be m̃_{F,G}-measurable. If condition (1) below and either of conditions (2a) or (2b) are satisfied, then f ∈ F_D(m̃_{F,G}) and fₙ → f in F_D(m̃_{F,G}).

(1) m̃_{F,G}(fₙ 1_A) → 0 as m̃_{F,G}(A) → 0, uniformly in n;

(2a) fₙ → f in m̃_{F,G}-measure;

(2b) fₙ → f pointwise, and m̃_{F,G} is uniformly σ-additive.

Conversely, if fₙ → f in F_D(B, m̃_{F,G}), then conditions (1) and (2a) are satisfied.

For the proof, see [B-D.2], Theorem 2.5.

The next theorem follows from Vitali's theorem and Theorem AI.8(a).

AI.10 THEOREM. (Lebesgue) Let (fₙ) be a sequence from F_D(B, m̃_{F,G}), let f : S → D be an m̃_{F,G}-measurable function and g ∈ F_ℝ(B, m̃_{F,G}). If

(1) |fₙ| ≤ g, m̃_{F,G}-a.e. for each n, and any one of the conditions (2a) or (2b) below is satisfied:

(2a) fₙ → f in m̃_{F,G}-measure;

(2b) fₙ → f pointwise and m̃_{F,G} is uniformly σ-additive,

then f ∈ F_D(B, m̃_{F,G}) and fₙ → f in F_D(m̃_{F,G}).

We now state without proof some closure properties:

AI.11 PROPOSITION. (a) F_ℝ(S(Σ), m̃_{F,G}) = F_ℝ(B, m̃_{F,G}).

(b) If m̃_{F,G} is uniformly σ-additive, and if Σ = σ(R), then

F_D(S(R), m̃_{F,G}) = F_D(S(Σ), m̃_{F,G}) = F_D(B, m̃_{F,G}).


In particular,

F_D(S(R), m̃_{ℝ,E}) = F_D(S(Σ), m̃_{ℝ,E}) = F_D(B, m̃_{ℝ,E}).

The integral

Let m : Σ → E ⊂ L(F, G) be σ-additive with finite semivariation m̃_{F,G} and take Z = G*. In the special case D = F, we can define an integral ∫ f dm for functions f belonging to F_F(m̃_{F,G}). To simplify the notation, we shall denote F_{F,G}(m) = F_F(m̃_{F,G}).

The construction is as follows. If f ∈ F_{F,G}(m), then f ∈ L¹_F(|m_z|) for each z ∈ G*, hence the real number ∫ f dm_z is defined. The mapping z → ∫ f dm_z is a linear continuous mapping from G* into ℝ:

|∫ f dm_z| ≤ |z| m̃_{F,G}(f);

hence, this mapping belongs to G**; we denote this mapping by ∫ f dm. Thus

⟨z, ∫ f dm⟩ = ∫ f dm_z, for z ∈ G*,

and

|∫ f dm| ≤ m̃_{F,G}(f).

If Z ⊂ G*, we can regard ∫ f dm ∈ Z*, by considering the restriction of ∫ f dm to Z.
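The defining identity ⟨z, ∫ f dm⟩ = ∫ f dm_z can be made concrete in a finite toy case (all data below are arbitrary illustrative choices): E = ℝ² = L(ℝ, ℝ²), so F = ℝ, G = ℝ², and f a scalar simple function.

```python
# Toy sigma-additive measure on a 3-atom space with values in G = R^2,
# and a scalar simple function f (one value per atom); illustrative data.
atoms = {0: (1.0, -2.0), 1: (0.5, 0.5), 2: (-1.0, 3.0)}
f = {0: 2.0, 1: -1.0, 2: 0.5}

# The integral of f against m, computed directly as an element of G = R^2.
int_f_dm = (sum(f[s] * atoms[s][0] for s in atoms),
            sum(f[s] * atoms[s][1] for s in atoms))

def m_z(A, z):
    """The scalar measure m_z(A) = <m(A), z>."""
    return sum(atoms[s][0] * z[0] + atoms[s][1] * z[1] for s in A)

def int_f_dmz(z):
    """The real number ∫ f dm_z."""
    return sum(f[s] * m_z({s}, z) for s in atoms)

# The pairing identity <z, ∫ f dm> = ∫ f dm_z, for several z in G*.
for z in [(1.0, 0.0), (0.0, 1.0), (0.3, -0.7)]:
    pairing = int_f_dm[0] * z[0] + int_f_dm[1] * z[1]
    assert abs(pairing - int_f_dmz(z)) < 1e-12
```

In finite dimensions the identity is just bilinearity; the general construction above uses it to define ∫ f dm as an element of G** when no direct integral in G is available.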

Note that since m̃_{ℝ,E} is finite, we can define ∫ φ dm ∈ E**, for φ ∈ F_{ℝ,E}(m), and we have

⟨∫ φ dm, x*⟩ = ∫ φ d(x*m), for x* ∈ E*,

and

|∫ φ dm| ≤ m̃_{ℝ,E}(φ).

We are particularly interested in the case when ∫ f dm ∈ G. Of course this holds when G is reflexive. In general, if C is a subset of F_{F,G}(m) such that ∫ f dm ∈ G whenever f ∈ C, then by continuity of the integral, it follows that ∫ f dm ∈ G whenever f ∈ F_{F,G}(C, m). For example, if C = S(Σ), then


we have ∫ f dm ∈ G for any f ∈ S(Σ), hence also for f ∈ F_{F,G}(S(Σ), m). Since the Σ-measurable functions are not necessarily dense in F_{F,G}(B, m), the integral of bounded measurable functions need not belong to G; however, if m̃_{F,G} is uniformly σ-additive, this property holds. In particular, since m̃_{ℝ,E} is uniformly σ-additive, it follows that ∫ f dm ∈ E whenever f ∈ F_{ℝ,E}(B, m).

Since the integral is continuous on F_{F,G}(m), any theorem ensuring convergence fₙ → f in F_{F,G}(m) can be completed by stating convergence of the integrals ∫ fₙ dm → ∫ f dm in G**. In particular, whenever the fₙ and f satisfy the hypotheses of the Vitali or Lebesgue theorems, we have ∫ fₙ dm → ∫ f dm in G**.

Remark. If m : Σ → E ⊂ L(F, G) has finite total variation |m|, then m̃_{F,G} is finite and m̃_{F,G} is uniformly σ-additive. Moreover, L¹_F(m) ⊂ F_{F,G}(m). Using simple functions, we see that for every f ∈ L¹_F(m), the integral ∫ f dm is the same relative to either L¹_F(m) or F_{F,G}(m).

The indefinite integral

We still assume the same conditions on m hold, namely, m : Σ → E ⊂ L(F, G) is σ-additive and m̃_{F,G} is finite. For f ∈ F_{F,G}(m) and A ∈ Σ, we define ∫_A f dm = ∫ 1_A f dm. Set n(A) = ∫_A f dm. Then n : Σ → G**. We call n the indefinite integral of f with respect to m, or the measure with density f and base m; we also denote this finitely additive measure by fm. In general, fm is not countably additive.

AI.12 PROPOSITION. Let f ∈ F_{F,G}(m). Then fm is σ-additive on Σ in each of the following cases:

(a) ∫_A f dm ∈ G, for every A ∈ Σ; in particular if f is a Σ-step function.

(b) f ∈ F_{F,G}(B, m) and m̃_{F,G} is uniformly σ-additive (this is the case if F = ℝ); in this case we have ∫ g dm ∈ G for every g ∈ F_{F,G}(B, m).

(c) G does not contain a copy of c₀; in this case we have ∫ g dm ∈ G for every g ∈ F_{F,G}(m).

Proof. (a) follows from the Pettis theorem (any weakly σ-additive measure is strongly σ-additive), since the set function ⟨∫_{(·)} f dm, z⟩ = ∫_{(·)} f dm_z is σ-additive, for each z ∈ G*. (b) follows from Theorem AI.11(b). To prove


(c), assume first that g is a σ-step function in F_{F,G}(m), of the form g = Σ_{1≤n<∞} 1_{Aₙ} xₙ, with xₙ ∈ F and the Aₙ mutually disjoint and m̃_{F,G}-measurable. Let z ∈ G*. Since g ∈ L¹_F(|m_z|), we have

Σₙ |⟨m(Aₙ)xₙ, z⟩| ≤ ∫ |g| d|m_z| < ∞;

hence Σₙ m(Aₙ)xₙ is weakly unconditionally convergent. Since c₀ ⊄ G, by the Bessaga-Pelczynski theorem [B-P], the series Σₙ m(Aₙ)xₙ converges to an element y ∈ G. Thus

⟨y, z⟩ = Σₙ ⟨m(Aₙ)xₙ, z⟩ = ∫ g dm_z,

for z ∈ G*; consequently ∫ g dm = y ∈ G.

If g is arbitrary in F_{F,G}(m), there is a sequence (gₙ) of m̃_{F,G}-measurable σ-step functions such that gₙ → g uniformly and |gₙ| ≤ |g| for each n. Then gₙ ∈ F_{F,G}(m), hence, by the above, ∫ gₙ dm ∈ G for each n. By Theorem AI.8(e), we have gₙ → g in F_{F,G}(m), hence ∫ gₙ dm → ∫ g dm in G**, consequently ∫ g dm ∈ G. We then take g = f 1_A with A ∈ Σ and obtain (a).

Relationship between the spaces F_D(m̃_{F,G})

Next we show that the inequality m̃_{ℝ,E}(A) ≤ m̃_{F,G}(A), valid for A ∈ Σ, can be extended to real functions.

AI.13 THEOREM. Let m : Σ → E ⊂ L(F, G) be a σ-additive measure with finite semivariation m̃_{F,G}. Then

F_ℝ(m̃_{F,G}) ⊂ F_ℝ(m̃_{ℝ,E})

and

m̃_{ℝ,E}(φ) ≤ m̃_{F,G}(φ), for φ ∈ F_ℝ(m̃_{F,G}).

Proof. Suppose φ is a Σ-measurable, scalar valued, simple function. Let n = φm. Then n is σ-additive on Σ into E, and for A ∈ Σ, z ∈ G* and y ∈ F, we have

⟨y, n_z(A)⟩ = ⟨n(A)y, z⟩ = ⟨y, ∫_A φ dm_z⟩,


hence

n_z(A) = ∫_A φ dm_z,

and by [D.1], Theorem 7, p. 278, |n_z|(A) = ∫_A |φ| d|m_z|.

In particular, regarding E as L(ℝ, E), we have, for x* ∈ E*,

n_{x*} = x*n.

Thus

|n_{x*}|(A) = |x*n|(A) = ∫_A |φ| d|x*m|,

and

m̃_{ℝ,E}(φ) = sup{∫ |φ| d|x*m| : x* ∈ E₁*}
= sup{|n_{x*}|(S) : x* ∈ E₁*} = ñ_{ℝ,E}(S)
≤ ñ_{F,G}(S) = sup{|n_z|(S) : z ∈ G₁*}
= sup{∫ |φ| d|m_z| : z ∈ G₁*} = m̃_{F,G}(φ).

In general, if φ ∈ F_ℝ(m̃_{F,G}), we choose a sequence (φₙ) of Σ-measurable simple functions such that φₙ → φ pointwise and |φₙ| ≤ |φ|. Using the dominated convergence theorem relative to the measures |x*m| and |m_z|, for x* ∈ E* and z ∈ G*, we obtain

∫ |φ| d|x*m| = limₙ ∫ |φₙ| d|x*m| ≤ lim supₙ m̃_{ℝ,E}(φₙ)
≤ lim supₙ m̃_{F,G}(φₙ) ≤ m̃_{F,G}(φ).

Thus m̃_{ℝ,E}(φ) ≤ m̃_{F,G}(φ), and the theorem is proved.

If m : Σ → L(F, G) is a σ-additive measure and y ∈ F, then we denote by my : Σ → G the σ-additive measure defined by

(my)(A) = m(A)y, for A ∈ Σ.

AI.14 THEOREM. Let m : Σ → E ⊂ L(F, G) be a σ-additive measure with finite semivariation m̃_{F,G}, and let y ∈ F. Then F_ℝ(m̃_{F,G}) ⊂ F_ℝ((my)~_{ℝ,G}), yF_ℝ(m̃_{F,G}) ⊂ F_F(m̃_{F,G}), and for φ ∈ F_ℝ(m̃_{F,G}), we have

(my)~_{ℝ,G}(φ) ≤ |y| m̃_{F,G}(φ),

m̃_{F,G}(φy) = |y| m̃_{F,G}(φ),

and

∫ φy dm = ∫ φ d(my).


If in addition, ∫ φ dm ∈ E, then

(∫ φ dm) y = ∫ φy dm = ∫ φ d(my).

Proof. Let z ∈ G₁* and let A ∈ Σ. Then

|(my)_z(A)| = |⟨m(A)y, z⟩| = |⟨y, m_z(A)⟩| ≤ |y| |m_z(A)| ≤ |y| |m_z|(A),

hence |(my)_z| ≤ |y| |m_z|. As a result, for any φ ∈ F_ℝ(m̃_{F,G}), we have

∫ |φ| d|(my)_z| ≤ |y| ∫ |φ| d|m_z| ≤ |y| m̃_{F,G}(φ),

and the first inequality in the conclusion follows. The second equality is immediate.

Suppose now that φ = 1_A, with A ∈ Σ. Then (∫ φ dm)y = ∫ φy dm and m(A)y = ∫ φ d(my), hence the equalities in the conclusion hold when φ is a measurable simple function. For the general case, let φ ∈ F_ℝ(m̃_{F,G}) and let (φₙ) be a sequence of Σ-measurable simple functions such that φₙ → φ pointwise and |φₙ| ≤ |φ|. Since φ ∈ L¹_ℝ((my)_z), we can use the dominated convergence theorem to conclude that ⟨∫ φₙ d(my), z⟩ → ⟨∫ φ d(my), z⟩; similarly, since φy ∈ L¹_F(m_z), we have ⟨∫ φₙ y dm, z⟩ → ⟨∫ φy dm, z⟩. Since ∫ φₙ y dm = ∫ φₙ d(my), for each n, we conclude that ∫ φy dm = ∫ φ d(my).

Assume now that ∫ φ dm ∈ E. From Theorem AI.13, we have F_ℝ(m̃_{F,G}) ⊂ F_ℝ(m̃_{ℝ,E}), hence φ ∈ F_ℝ(m̃_{ℝ,E}). For x* ∈ E*, we apply the dominated convergence theorem to deduce that ∫ φₙ d(x*m) → ∫ φ d(x*m). Hence ⟨∫ φₙ dm, x*⟩ → ⟨∫ φ dm, x*⟩. Now we choose x* ∈ E* to be defined by x*(x) = ⟨x(y), z⟩, for x ∈ E. Since ∫ φ dm ∈ E, the convergence ∫ φₙ d(x*m) → ∫ φ d(x*m) can be written ⟨(∫ φₙ dm)y, z⟩ → ⟨(∫ φ dm)y, z⟩. Since for each n we have (∫ φₙ dm)y = ∫ φₙ y dm, we deduce (∫ φ dm)y = ∫ φy dm, and the theorem is proved.
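In finite dimensions the identities of Theorem AI.14 reduce to linear algebra and can be checked directly. The sketch below uses toy data (E = L(ℝ², ℝ²), i.e. 2×2 matrices, with arbitrary illustrative matrix values per atom, a scalar simple function φ, and y ∈ F = ℝ²):

```python
# Toy operator-valued measure: two atoms, each mapped to a 2x2 matrix.
atoms = {
    0: ((1.0, 2.0), (0.0, 1.0)),
    1: ((0.0, -1.0), (3.0, 0.5)),
}
phi = {0: 2.0, 1: -3.0}   # scalar simple function, one value per atom
y = (1.0, -2.0)

def mat_vec(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

# (my)(A) = m(A)y is a G = R^2-valued measure; integrate phi against it.
int_phi_dmy = (sum(phi[s] * mat_vec(atoms[s], y)[0] for s in atoms),
               sum(phi[s] * mat_vec(atoms[s], y)[1] for s in atoms))

# Integrate phi against m entrywise (an element of E), then apply to y.
int_phi_dm = tuple(tuple(sum(phi[s] * atoms[s][i][j] for s in atoms)
                         for j in range(2)) for i in range(2))

# (∫ phi dm) y = ∫ phi d(my), as in the theorem.
assert mat_vec(int_phi_dm, y) == int_phi_dmy
```

For these data both sides equal (−12, −10); the general theorem extends exactly this commutation of "apply y" with the integral beyond the simple-function case.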

Associativity properties

The following theorem concerns the "associativity" of the integral: f(gm) = (fg)m. We shall consider the case when one of the functions f and g is real valued.


AI.15 THEOREM. Let m : Σ → E ⊂ L(F, G) be a σ-additive measure with finite semivariation m̃_{F,G}.

I) Let φ ∈ F_ℝ(m̃_{F,G}) ⊂ F_ℝ(m̃_{ℝ,E}) and assume that ∫_A φ dm ∈ E, for every A ∈ Σ. Consider the measure φm : Σ → E defined by (φm)(A) = ∫_A φ dm, for A ∈ Σ.

(a) The measure φm is σ-additive with finite semivariation (φm)~_{F,G};

(b) If f ≥ 0 is Σ-measurable, then

(φm)~_{ℝ,E}(f) = m̃_{ℝ,E}(φf) and (φm)~_{F,G}(f) = m̃_{F,G}(φf);

(c) We have f ∈ F_{F,G}(φm) if and only if fφ ∈ F_{F,G}(m), and in this case f(φm) = (fφ)m;

(d) Suppose m̃_{F,G} is uniformly σ-additive. Then (φm)~_{F,G} is uniformly σ-additive if and only if φ ∈ F_ℝ(B, m̃_{F,G}).

II) Let f ∈ F_{F,G}(m) and assume that ∫_A f dm ∈ G for every A ∈ Σ. Consider the measure fm : Σ → G defined by (fm)(A) = ∫_A f dm, for A ∈ Σ.

(a) fm is σ-additive and has finite semivariation (fm)~_{ℝ,G};

(b) If φ ≥ 0 is Σ-measurable, we have

(fm)~_{ℝ,G}(φ) = m̃_{F,G}(φf);

(c) If φ is real valued and Σ-measurable and if φf ∈ F_{F,G}(m), then φ ∈ F_{ℝ,G}(fm), and in this case we have

φ(fm) = (φf)m;

(d) Suppose m̃_{F,G} is uniformly σ-additive. Then (fm)~_{ℝ,G} is uniformly σ-additive if f ∈ F_F(B, m̃_{F,G}).

Proof. I. Let n = φm.

(a) n is weakly σ-additive, therefore it is strongly σ-additive (see Proposition AI.12). The finiteness of ñ_{F,G} will follow from (b).


(b) Let z ∈ G*, y ∈ F, and A ∈ Σ. Then

⟨y, n_z(A)⟩ = ⟨n(A)y, z⟩ = ⟨(∫ 1_A φ dm)y, z⟩
= ⟨∫ 1_A φy dm, z⟩ = ∫ 1_A φy dm_z = ⟨y, ∫_A φ dm_z⟩.

Thus n_z(A) = ∫_A φ dm_z for A ∈ Σ, therefore

|n_z|(A) = ∫_A |φ| d|m_z|, for A ∈ Σ.

If f ≥ 0 is Σ-measurable, then ∫ f d|n_z| = ∫ |fφ| d|m_z|; taking the supremum over G₁*, we have ñ_{F,G}(f) = m̃_{F,G}(fφ). The other equality follows by taking F = ℝ and G = E.

(c) From (b) we deduce that f ∈ F_{F,G}(n) if and only if fφ ∈ F_{F,G}(m), and from the proof of (b), for each z ∈ G* we have n_z = φm_z; hence if f ∈ F_{F,G}(n) and A ∈ Σ, then f 1_A ∈ L¹_F(n_z) and ∫_A f dn_z = ∫_A fφ dm_z. This implies that ⟨∫_A f dn, z⟩ = ⟨∫_A fφ dm, z⟩, which yields the conclusion in (c).

Assertion (d) follows from AI.8(a) and (b). The proof of II is similar.

AI.16 COROLLARY. Let m : Σ → E ⊂ L(F, G) be a σ-additive measure with finite semivariation m̃_{F,G} and let f ∈ F_{F,G}(m) be Σ-measurable. If ∫ f d(1_A m) ∈ G for every A ∈ Σ, then ∫_A f dm ∈ G for every A ∈ Σ.

Proof. Let φ = 1_A. We note that since φf ∈ F_{F,G}(m), we have f ∈ F_{F,G}(φm) and

(f(φm))(S) = ((fφ)m)(S) = ∫_A f dm.

Since f(φm)(S) ∈ G by hypothesis, the conclusion follows.

Weak completeness and weak compactness in F_{F,G}(B, m)

One of the main goals in [B-D.2] was to obtain sufficient conditions for weak completeness and weak compactness in F_{F,G}(B, m). To establish these results, a characterization of elements in (F_{F,G}(B, m))* was given, using techniques of Köthe spaces. This theory can be applied to stochastic integration theory to yield new convergence theorems. In this section we shall present the necessary tools for this application.


A crucial property in establishing weak compactness criteria is the following "Beppo Levi property."

Let m : Σ → E ⊂ L(F, G) be a σ-additive measure. We say that m̃_{F,G} has the Beppo Levi property if every increasing sequence (fₙ) of positive Σ-measurable simple functions, with supₙ m̃_{F,G}(fₙ) < ∞, is a Cauchy sequence in F_ℝ(B, m̃_{F,G}) (hence supₙ fₙ ∈ F_ℝ(B, m̃_{F,G})).

We remark that if m̃_{F,G} has the Beppo Levi property, then m̃_{F,G} is uniformly σ-additive.

One of the main theorems in [B-D.1] (Theorem 8.8) gives sufficient conditions that m̃_{F,G} has the Beppo Levi property.

AI.17 THEOREM. Let m : Σ → E ⊂ L(F, G) be σ-additive. Suppose that m has finite semivariation m̃_{F,G} and that m̃_{F,G} is uniformly countably additive. If ∫ f dm ∈ G for every Σ-measurable function f ∈ F_{F,G}(m), then m̃_{F,G} has the Beppo Levi property.

For applications to the theory of stochastic integration, we shall strengthen Corollary 8.10 in [B-D.1]. The following result will be used repeatedly in the sequel.

AI.18 COROLLARY. Suppose m : Σ → E ⊂ L(F, G) is σ-additive with finite semivariation m̃_{F,G}. If m̃_{F,G} is uniformly σ-additive and if G does not contain a copy of c₀, then m̃_{F,G} has the Beppo Levi property.

Proof. Since c₀ ⊄ G, we have ∫ f dm ∈ G for every f ∈ F_{F,G}(m), by Proposition AI.12(c). We can then apply Theorem AI.17.

We shall now present, without proofs, the main results in [B-D.2] concerning the weak completeness of F_{F,G}(B, m) and criteria for weak compactness of subsets of F_{F,G}(B, m). We shall state these results in a slightly different form from the results given in [B-D.2], by using Corollary AI.18 above.

Recall that a set K in a Banach space is conditionally weakly compact if every sequence of elements from K contains a subsequence which is weakly Cauchy; and that K is relatively weakly compact if its weak closure is weakly compact.


To avoid repetition, we shall assume in the sequel that m : Σ → E ⊂ L(F, G) is σ-additive, and has finite semivariation m̃_{F,G}.

AI.19 THEOREM. Assume that m̃_{F,G} is uniformly σ-additive, F is reflexive and G does not contain a copy of c₀. Then F_{F,G}(B, m) is weakly sequentially complete.

AI.20 THEOREM. Assume that m̃_{F,G} is uniformly σ-additive and F is reflexive. Let K ⊂ F_{F,G}(B, m) be a set satisfying the following conditions:

(1) K is bounded;

(2) limₙ m̃_{F,G}(f 1_{Aₙ}) = 0 uniformly for f ∈ K whenever Aₙ ∈ Σ and Aₙ ↘ ∅.

Then K is conditionally weakly compact in F_{F,G}(B, m). If, in addition, G does not contain a copy of c₀, then K is relatively weakly compact.

AI.21 THEOREM. Let K ⊂ F_{ℝ,E}(B, m) be a set satisfying the following conditions:

(1) K is bounded;

(2) ∫_{Aₙ} f dm → 0 uniformly for f ∈ K whenever Aₙ ∈ Σ and Aₙ ↘ ∅.

Then K is conditionally weakly compact. If, in addition, E does not contain a copy of c₀, then K is relatively weakly compact.

AI.22 THEOREM. Assume that E does not contain a copy of c₀. Let (fₙ)_{n≥0} be a sequence of elements from F_{ℝ,E}(B, m). If ∫_A fₙ dm → ∫_A f₀ dm, for every A ∈ Σ, then fₙ → f₀ weakly in F_{ℝ,E}(B, m).

Appendix II: Quasimartingales

In this section we shall present some basic properties of Banach-valued quasimartingales, which are used in section 2 concerning summability. This material is taken from [B-D.5] and [Ku.1].

In this section, we assume that X : ℝ₊ → E is a cadlag, adapted process, such that X_t ∈ L^p_E, for every t ≥ 0. If X has a limit at ∞, we denote it by X_{∞−}. We extend X at ∞ with X_∞ = 0.


Rings of subsets of ℝ₊ × Ω

We shall consider five rings of subsets of ℝ₊ × Ω:

(1) A[0] = {0} × F₀ = {[0_A] : A ∈ F₀}, where [0_A] = {0} × A is the graph of the stopping time 0_A which is zero on A and +∞ on Aᶜ.

(2) A(0, ∞) is the ring of all finite unions of predictable rectangles (s, t] × A, with 0 ≤ s < t < ∞, and A ∈ F_s.

(3) A[0, ∞) = A[0] ∪ A(0, ∞).

(4) A(0, ∞] is the ring of all finite unions of predictable rectangles (s, t] × A, with 0 ≤ s ≤ t ≤ ∞, and A ∈ F_s.

(5) A[0, ∞] = A[0] ∪ A(0, ∞]; A[0, ∞] is an algebra of subsets of ℝ₊ × Ω, and contains, along with A[0, ∞), predictable rectangles of the form (t, ∞] × A, where A ∈ F_t.

The Doléans function

Since L^p_E ⊂ L¹_E, we have X_t ∈ L¹_E for every t ≥ 0. We define the additive measure μ_X : A[0, ∞] → E, called the Doléans function of the process X, first for predictable rectangles, and then extend it in an additive fashion to A[0, ∞]. For [0_A] ∈ A[0, ∞] and (s, t] × A ∈ A[0, ∞], we set

μ_X([0_A]) = E(1_A X_0)

and

μ_X((s, t] × A) = E(1_A(X_t − X_s)).

Note that, since X_∞ = 0,

μ_X((s, ∞] × A) = −E(1_A X_s).

We also have μ_X([0, ∞] × A) = 0 and μ_X(A) = E(I_X(A)), where I_X is the stochastic measure defined in section 2. The restriction of μ_X to A[0] is bounded and σ-additive. Hence μ_X is bounded (respectively σ-additive) on A[0, ∞) or on A[0, ∞] if and only if μ_X has the same property on A(0, ∞) or A(0, ∞] respectively.


Quasimartingales

We say X is a quasimartingale on (0, ∞) (respectively on (0, ∞], or [0, ∞), or [0, ∞]) if the measure μ_X has bounded variation on A(0, ∞) (respectively on A(0, ∞], or A[0, ∞), or A[0, ∞]). Since μ_X has bounded variation on A[0], X is a quasimartingale on (0, ∞) or (0, ∞] if and only if it is a quasimartingale on [0, ∞) or [0, ∞] respectively.

We now list some properties of quasimartingales.

1. X is a quasimartingale on (0, ∞] if and only if X is a quasimartingale on (0, ∞) and sup_t ‖X_t‖₁ < ∞.

2. If X is a quasimartingale on (0, ∞) or on (0, ∞], then so is the process |X| = (|X_t|)_{t≥0}.

3. Any process with integrable variation is a quasimartingale on (0, ∞].

4. X is a martingale if and only if μ_X = 0 on A(0, ∞); a martingale X is a quasimartingale on (0, ∞); it is a quasimartingale on (0, ∞] if and only if sup_t ‖X_t‖₁ < ∞.

5. X is a submartingale if and only if μ_X ≥ 0 on A(0, ∞). Any negative submartingale and any positive supermartingale is a quasimartingale on (0, ∞].

6. If X is a quasimartingale on (0, ∞], then for every stopping time T, we have X_T ∈ L¹_E.

7. If X is a quasimartingale on (0, ∞] and if (Tₙ) is a decreasing sequence of stopping times such that Tₙ ↘ T, then X_{Tₙ} → X_T in L¹_E.

8. X is a quasimartingale of class (D) on (0, ∞] if and only if μ_X is σ-additive and has bounded variation on A(0, ∞].

9. If X is a real valued quasimartingale on (0, ∞], then X = M + V, where M is a local martingale and V is a predictable process with integrable variation (cf. [Ku.1, Theorem 9.15]). If, in addition, X is of class (D), then M is a martingale of class (D). In this case we have

μ_X = μ_V on A(0, ∞).

10. If X is a real valued quasimartingale, then X is summable if and only if X* = sup_t |X_t| is integrable.


REFERENCES

[B-P] C. Bessaga and A. Pelczynski, On bases and unconditional convergence of series in Banach spaces, Studia Math. 17 (1958), 151-164.

[B-D.1] J.K. Brooks and N. Dinculeanu, Strong additivity, absolute continuity and compactness in spaces of measures, J. Math. Anal. and Appl. 45 (1974), 156-175.

[B-D.2] __, Lebesgue-type spaces for vector integration, linear operators, weak completeness and weak compactness, J. Math. Anal. and Appl. 54 (1976), 348-389.

[B-D.3] __, Weak compactness in spaces of Bochner integrable functions and applications, Advances in Math. 24 (1977), 172-188.

[B-D.4] __, Projections and regularity of abstract processes, Stochastic Analysis and Appl. 5 (1987), 17-25.

[B-D.5] __, Regularity and the Doob-Meyer decomposition of abstract quasimartingales, Seminar on Stochastic Processes, Birkhäuser, Boston (1988), 21-63.

[B-D.6] __, Stochastic integration in Banach spaces, Advances in Math. 81 (1990), 99-104.

[B-D.7] __, Ito's formula for stochastic integration in Banach spaces, Conference on diffusion processes, Birkhäuser (to appear).

[D-M] C. Dellacherie and P.A. Meyer, Probabilities and Potential, North-Holland, (1978), (1980).

[D.1] N. Dinculeanu, Vector Measures, Pergamon Press, 1967.

[D.2] __, Vector valued stochastic processes I. Vector measures and vector valued stochastic processes with finite variation, J. Theoretical Probability 1 (1988), 149-169.

[D.3] __, Vector valued stochastic processes V. Optional and predictable variation of stochastic measures and stochastic processes, Proc. A.M.S. 104 (1988), 625-631.

[D-S] N. Dunford and J. Schwartz, Linear Operators, Part I, Interscience, New York, 1958.

[G-P] B. Gravereaux and J. Pellaumail, Formule de Ito pour des processus à valeurs dans des espaces de Banach, Ann. Inst. H. Poincaré 10 (1974), 399-422.

[K] H. Kunita, Stochastic integrals based on martingales taking their values in Hilbert spaces, Nagoya Math. J. 38 (1970), 41-52.

[Ku.1] A.U. Kussmaul, Stochastic integration and generalized martingales, Pitman, London, 1977.

[Ku.2] __, Regularität und stochastische Integration von Semimartingalen mit Werten in einem Banachraum, Dissertation, Stuttgart (1978).

[Kw] S. Kwapien, On Banach spaces containing c₀, Studia Math. 52 (1975), 187-188.

[M.1] M. Metivier, The stochastic integral with respect to processes with values in a reflexive Banach space, Theory Prob. Appl. 19 (1974), 758-787.

[M.2] __, Semimartingales, de Gruyter, Berlin, 1982.

[M-P] M. Metivier and J. Pellaumail, Stochastic Integration, Academic Press, New York, 1980.


[P] J. Pellaumail, Sur l'intégrale stochastique et la décomposition de Doob-Meyer, S.M.F., Astérisque 9 (1973).

[Pr] M. Pratelli, Intégration stochastique et géométrie des espaces de Banach, Séminaire de Probabilités, Springer Lecture Notes, New York (1988).

[Pro] P. Protter, Stochastic integration and differential equations, Springer-Verlag, New York, 1990.

[Y.1] M. Yor, Sur les intégrales stochastiques à valeurs dans un espace de Banach, C.R. Acad. Sci. Paris Sér. A 277 (1973), 467-469.

[Y.2] __, Sur les intégrales stochastiques à valeurs dans un espace de Banach, Ann. Inst. H. Poincaré X (1974), 31-36.

J.K. BROOKS Department of Mathematics University of Florida Gainesville, FL 32611-2082 USA

N. DINCULEANU Department of Mathematics University of Florida Gainesville, FL 32611-2082 USA


Absolute Continuity of the Measure States in a Branching Model with Catalysts

DONALD A. DAWSON¹, KLAUS FLEISCHMANN

and SYLVIE ROELLY

1. INTRODUCTION

Spatially homogeneous measure-valued branching Markov processes X on the real line ℝ with certain motion processes and branching mechanisms with finite variances have absolutely continuous states with respect to Lebesgue measure; that is, roughly speaking,

X(t,dy) = ξ(t,y)dy

for some random density function ξ(t) = ξ(t,·). Results of this type are established in Dawson and Hochberg (1979), Roelly-Coppoletta (1986), Wulfsohn (1986), Konno and Shiga (1988), and Tribe (1989).

More generally, if the branching mechanism does not necessarily have finite second moments, a similar absolute continuity result is valid in ℝᵈ for all dimensions d smaller than a critical value which depends on the underlying motion process and the branching mechanism. This critical value can take on any positive value. We refer to Fleischmann (1988, Appendix).

¹Supported by an NSERC grant.

The simplest case, namely a continuous critical super-Brownian motion X = [X, P^ρ_{s,μ}; s∈ℝ, μ∈M_f] in ℝ, is related to the parabolic partial differential equation

(1.1)  ∂v/∂s (s,t,x) = −K ∂²v/∂x² (s,t,x) + ρ v²(s,t,x),

s ≤ t, x∈ℝ, where K>0 is the diffusion constant and ρ≥0 is the constant branching rate. In fact, the Laplace transition functional of X is given by

(1.2)  E^ρ_{s,μ} exp(X(t), −φ) = exp(μ, −v(s,t)),  s ≤ t, μ∈M_f, φ∈F₊,

where v solves (1.1) with final condition v(t,t) = φ. Here M_f is the set of all finite measures μ on ℝ, and F₊ is some set of continuous non-negative test functions on ℝ, defined in Section 2 below. Moreover, (m,h) := ∫m(dx)h(x), and E^ρ_{s,μ} denotes expectation with respect to P^ρ_{s,μ}, the law of the process X with branching rate ρ and starting at time s∈ℝ with the measure μ.

(We mention that we adopt time-inhomogeneous notation and a backward formulation of the equation, in order to facilitate the generalization later to time-inhomogeneous Markov processes.)

Intuitively it is clear that the absolute continuity result for the states of the process X will remain true if the constant branching rate ρ is replaced by a bounded non-negative function, smoothly varying in time and space (a varying medium ρ).

However, it is not immediately clear what will happen if ρ degenerates to a generalized function, for instance, to the weighted δ-function aδ₀, a>0. In this case one can interpret ρ = aδ₀ as a point catalyst with action weight a and located at 0. In other words, branching does not occur except at the origin. From the viewpoint of an approximating particle system, a particle will split only if it approaches 0 within a distance ε ≪ 1, and then the branching rate is given by the scaled action weight a/2ε.

Actually, it is possible to give (1.1) a precise meaning in the degenerate case ρ = aδ₀, namely in terms of the integral equation

(1.3)  v(s,t,x) = ∫dy p(s,t,x,y)φ(y) − a∫_s^t dr p(s,r,x,0) v²(r,t,0),  s ≤ t, x∈ℝ,

where p(s,t,x,y) = p(t−s, y−x), s<t, x,y∈ℝ, is the continuous transition density function of the heat flow corresponding to KΔ, and formally we set p(0,y) = δ₀(y).

In Dawson and Fleischmann (1990a) it is shown that there exists a continuous F₊-valued curve v(·,t) ≥ 0 which solves equation (1.3), for each given t∈ℝ and φ∈F₊ (a so-called mild solution of (1.1)). It is constructed by approximating ρ = aδ₀ by the smooth functions ρ_ε = a p(ε,·) as ε→0. Using this type of approximation and continuity properties of the Laplace transition functional in (1.2), a superprocess X with singular branching rate ρ = aδ₀ can be defined which is related to (1.3) by (1.2).

To give a feeling for this process X, we provide some moment calculations. To this end, fix s<t and μ = δ_x (unit mass at x). In (1.2) replace φ by θφ with θ>0, (formally) differentiate with respect to θ at θ=0+, and proceed in the same way with equation (1.3). Then it turns out that the first moment measure of X(t) with respect to P^ρ_{s,x} is given by

E^ρ_{s,x} X(t,dy) = p(s,t,x,y) dy.

Consequently, since the branching term, i.e. the nonlinear term in (1.1), does not affect the expectation of the process, we get the same first moment density as in the classical model of constant branching rate, namely p(s,t,x,·).

Following an analogous procedure, for the covariance measure of X(t) with respect to P^ρ_{s,x} we obtain

Cov^ρ_{s,x}[X(t,dy), X(t,dz)] = [2a∫_s^t dr p(s,r,x,0) p(t−r,y) p(t−r,z)] dy dz.

Hence, this process has a finite smooth covariance density function, except at 0, the position of the catalyst. Indeed, letting y=z, the latter integral behaves like

(1.4)  const |log|y||  as y→0

(recall that s<t and x are fixed). Such behavior is in sharp contrast to the "classical" models in constant media ρ.

On the other hand, despite this singularity, as in the classical models above this superprocess X has absolutely continuous states, since the singularity (1.4) at the catalyst's position y=0 is (locally) integrable with respect to Lebesgue measure (see Meidan (1980)). More precisely, there is a second order random density function ξ(t,·) = ξ(t) such that

E^ρ_{s,x} |∫X(t,dy)f(y) − ∫dy ξ(t,y)f(y)|² = 0,  f∈F₊.

However, this L²-random density function ξ(t) is singular at y=0 since, by (1.4),

E^ρ_{s,x} ξ²(t,y) → ∞  as y→0,  s<t, x∈ℝ.
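For α = 2 the logarithmic blow-up (1.4) is easy to check numerically. With the Gaussian kernel p(u,x) = (4πKu)^{−1/2} exp(−x²/4Ku), the substitution v = y²/(2Ku) turns ∫₀^T p(u,y)² du into an exponential integral, which grows like |log y|/(2πK) as y→0. A small pure-Python check (the choices K = 1, T = 1, truncation point, and grid size are illustrative assumptions):

```python
import math

def covariance_integral(y, K=1.0, T=1.0, n=100_000):
    """I(y) = int_0^T p(u,y)**2 du for the heat kernel
    p(u,x) = (4*pi*K*u)**-0.5 * exp(-x**2/(4*K*u)).
    Substituting v = y**2/(2*K*u) gives
    I(y) = (1/(4*pi*K)) * int_{y**2/(2*K*T)}^infty exp(-v)/v dv,
    evaluated here by trapezoidal integration on a logarithmic grid."""
    a = y * y / (2.0 * K * T)
    b = 50.0                          # exp(-v) is negligible beyond this
    la, lb = math.log(a), math.log(b)
    h = (lb - la) / n
    # with w = log v:  int exp(-v)/v dv = int exp(-exp(w)) dw
    total = 0.5 * (math.exp(-a) + math.exp(-b))
    total += sum(math.exp(-math.exp(la + i * h)) for i in range(1, n))
    return total * h / (4.0 * math.pi * K)

# the ratio I(y)/|log y| approaches 1/(2*pi*K) as y -> 0
for y in (1e-3, 1e-5, 1e-7):
    print(y, covariance_integral(y) / abs(math.log(y)))
```

The printed ratios settle near 1/(2π) ≈ 0.159 for K = 1, consistent with the const·|log|y|| behavior claimed in (1.4).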

In this case of a single non-moving catalyst we now consider an alternative approach to the problem. Rewrite equation (1.3) in the following way:

v(s,t,x) = E_{s,x}[φ(w(t)) − a∫_s^t L⁰(dr) v²(r,t,0)],

where [[w,L⁰], P_{s,x}; s,x∈ℝ] is a Wiener process w in ℝ with transition density function p, together with its local time L⁰ at 0 (and E_{s,x} denotes expectation with respect to P_{s,x}, the law of w starting at time s at x). Since the latter equation can further be reformulated as

v(s,t,x) = E_{s,x}[φ(w(t)) − ∫_s^t aL⁰(dr) v²(r,t,w(r))],

s ≤ t, x∈ℝ, we obtain a special case of equation (1.23) in Dynkin (1990). Thus, this superprocess X corresponding to a single non-moving catalyst is a member of the general family of superprocesses constructed by Dynkin (1990).

By the way, this also illuminates the reason why the point catalyst model discussed above has to be restricted to space dimension one, since it involves the local time L⁰ of the Brownian motion w at the catalyst's position 0, whereas for Brownian motion in dimensions d>1 a single point set is polar and does not carry a positive local time.

In the general model we investigate, the branching rate ρ is given by a dense set of point catalysts, which are also allowed to move in space, and whose action weights are not locally bounded above.

To worsen the situation, we can think of a more general branching mechanism which does not necessarily have finite second moments. Consequently, in this case as a rule the covariance measure does not exist, i.e. it is not a locally finite measure. This in fact raises the question as to whether a process with densities exists at all in such a general situation.

It is the main purpose of the present paper to demonstrate that even in such a general situation of a superprocess X without second moments and in a highly singular varying medium ρ, the absolute continuity results remain true.

To be precise, we will consider a superprocess X related to the following integral equation

(1.5)  v(s,t,x) = ∫dy p(s,t,x,y)φ(y) − ∫_s^t dr ∫ρ(r,dy) p(s,r,x,y) |v|^{1+β}(r,t,y),

s ≤ t, x∈ℝ, φ∈F₊. Here p(s,t,x,y), s<t, x,y∈ℝ, is now the continuous transition density function of a symmetric stable flow with index α∈(1,2] corresponding to the fractional Laplacian KΔ_α := −K(−Δ)^{α/2}, the critical continuous state splittings have index 1+β∈(1,2], and ρ is some branching rate kernel.

The latter is a measurable kernel ρ of ℝ into the set of all locally finite measures on ℝ with the following property:

(1.6)  sup_{r∈[s,t]} (ρ(r), f^β) < ∞,  s ≤ t,  f∈F₊.

To mention an example, set ρ(r) ≡ μ where μ is any finite measure on ℝ. Here μ(dx) is the time-independent branching rate at x.

Note that by a formal differentiation, (1.5) can be written as

(1.7)  ∂v/∂s (s,t,x) = −KΔ_α v(s,t,x) + ρ(s,dx) |v|^{1+β}(s,t,x),

s ≤ t, x∈ℝ, with final state v(t,t,·) = φ∈F₊.

A rigorous setting of equation (1.5) is given in Dawson and Fleischmann (1990a). Based on this, a superprocess X = [X, P^ρ_{s,μ}; s∈ℝ, μ∈M_f] related to (1.7) can actually be constructed:

PROPOSITION 1.8. To each branching rate kernel ρ there exists an M_f-valued time-inhomogeneous superprocess X = [X, P^ρ_{s,μ}; s∈ℝ, μ∈M_f] with Laplace transition functional

(1.9)  E^ρ_{s,μ} exp(X(t), −φ) = exp(μ, −v(s,t)),

s ≤ t, μ∈M_f, φ∈F₊, where v solves (1.5).

To formulate the results of the present paper, we introduce the following definition.

Definition 1.10. Fix J := (s′,t), s′<t. The restricted branching rate kernel ρ_J := {ρ(r); r∈J} is called admissible if there exists a Borel subset N(ρ_J) of ℝ of Lebesgue measure 0 such that the following holds. For each z∈ℝ\N(ρ_J),

(1.11)  sup_{r∈J} (ρ(r), p^β(r,t,z,·)) < ∞,

(1.12)  sup_{r∈J} (ρ(r), p(s′,r,z,·)) < ∞,

as well as

(1.13)  limsup_{n→∞} sup_{r∈J} (ρ(r), p^β(r, t+ε_n(z), z,·)) < ∞

for some sequence ε(z) := {ε_n(z)∈(0,1); n≥1} satisfying ε_n(z) → 0 as n→∞. The zero set N(ρ_J) is called an exceptional set for ρ_J.

A trivial example is given by the branching rate kernel ρ(r,dy) ≡ aδ₀(dy) as discussed above. In fact, in this case the restricted branching rate kernels ρ_J are admissible for any J, since we can set N(ρ_J) = {0}, and the conditions hold whatever the ε-sequence is. Note that here the exceptional set just represents the position of the non-moving catalyst.

A more interesting class of admissible ρ_J will be provided in Proposition 1.18 below.

Our first result can be formulated as follows. Recall that X = [X, P^ρ_{s,μ}; s∈ℝ, μ∈M_f] is the superprocess with branching rate kernel ρ.

THEOREM 1.14. Fix J = (s′,t), s′<t, and let the restricted branching rate kernel ρ_J be admissible. Then with respect to P^ρ_{s,μ}, s<s′, μ∈M_f, the random measure X(t) is absolutely continuous a.s., that is, there exists a random density function ξ(t) = ξ(t,·) such that

P^ρ_{s,μ}{X(t,dy) = ξ(t,y)dy} = 1.

Consequently, if the branching rate kernel ρ is such that its restriction ρ_J to J = (s′,t) is admissible, then the superprocess X corresponding to ρ and starting before s′ has with probability one an absolutely continuous state at time t.

The key to the proof of that result is the following Basic Lemma.

LEMMA 1.15. Let ν be a random element in M_f and assume that

(i) there is a Borel subset N of ℝ of Lebesgue measure 0 such that for each z∈ℝ\N there is a sequence ε(z) := {ε_n(z)∈(0,1); n≥1} with ε_n(z)→0 as n→∞, and ν([z−ε_n(z), z+ε_n(z)])/2ε_n(z) converges in distribution to a random variable ξ(z) as n→∞;

(ii) the expectation E(ν,f) coincides with E∫_{ℝ\N}dz ξ(z)f(z) for all f∈F₊.

Then with probability one, ν is an absolutely continuous measure.

Roughly speaking, if (ν,δ_z) exists and has full expectation, then ν is absolutely continuous and has density (ν,δ_z).
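The dichotomy behind the lemma can be illustrated on a toy example (entirely illustrative, not from the paper): take ν with density f(x) = 2x on [0,1] plus a unit atom at z = 1/2. The averages ν([z−ε, z+ε])/2ε converge to the density away from the atom but blow up at it, so a singular part prevents condition (ii), which forces the limits ξ(z) to account for the full expected mass, from holding.

```python
def nu_interval(a, b, atom=0.5, atom_mass=1.0):
    """nu([a,b]) for the measure with density 2x on [0,1]
    plus an atom at `atom` (a purely illustrative example)."""
    lo, hi = max(a, 0.0), min(b, 1.0)
    ac = hi * hi - lo * lo if hi > lo else 0.0   # integral of 2x over [lo, hi]
    return ac + (atom_mass if a <= atom <= b else 0.0)

def density_estimate(z, eps):
    """The average nu([z-eps, z+eps]) / (2*eps) from assumption (i)."""
    return nu_interval(z - eps, z + eps) / (2.0 * eps)

# away from the atom the averages recover the density 2x; at the atom they blow up
print(density_estimate(0.25, 1e-6))   # ~ 0.5
print(density_estimate(0.5, 1e-6))    # ~ 5.0e5
```

In the setting of the lemma the role of assumption (ii) is exactly to exclude such a loss of mass into atoms or other singular parts.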

In order to apply this lemma to the random measure ν = X(t), and having in mind the relation (1.9) with equation (1.5), it is necessary to develop a formulation of the nonlinear equation (1.5) which in particular applies for generalized final functions v(t−,t) = φ = δ_z (mild basic solutions to (1.7)). This is one of the technical developments which has to be carried out in this paper. The essence is the following result.

THEOREM 1.16. Let J and ρ_J as well as N(ρ_J) be as described in Definition 1.10. Then for each z∈ℝ\N(ρ_J) there exists a continuous F₊-valued curve v(·,t) on J which solves equation (1.5) on J×ℝ with ∫dy p(s,t,·,y)φ(y) replaced by p(s,t,·,z).

We note that this approach to proving the absolute continuity via basic solutions of the nonlinear equation was already used in Fleischmann (1988), namely for superprocesses with constant branching rate (and this approach differs from that used in the other papers quoted above).

An interesting class of branching rate kernels satisfying Definition 1.10 is obtained by sampling ρ from an α-stable moving system Γ of γ-stable point catalysts described as follows. At time 0 the random catalytic medium Γ is given by the stable random measure Γ(0) = Σᵢ aᵢδ_{x_i} on ℝ with index γ∈(0,1). It is determined by its Laplace functional

(1.17)  E exp(Γ(0), −f) = exp[−∫dx f^γ(x)],  f∈F₊.

(Note that Γ(0) has a dense set of atoms.) Then, as time t goes forward or backward, the point catalysts aᵢδ_{x_i} perform, independently of each other, symmetric stable motions with index α∈(1,2] and "diffusion" constant K, carrying their action weights aᵢ with them. This results in a measure-valued Markov process Γ = {Γ(t); t∈ℝ}, the catalyst process. Note that the law of Γ is shift invariant in time and space. Recall that 1<α≤2, 0<β≤1, and 0<γ<1.

PROPOSITION 1.18. Let α, β, and γ be as introduced above. If α<2 holds, we additionally require that (βγ)⁻¹ < 1+α is fulfilled. Then with probability one, Γ is a branching rate kernel. Moreover, for each given J = (s′,t), s′<t, with probability one the restricted process Γ_J := {Γ(r); r∈J} is an admissible restricted branching rate kernel.

Combining both Proposition 1.18 and Theorem 1.14, we recognize that for almost all realizations Γ of the catalyst process the superprocess X = [X, P^Γ_{s,μ}; s∈ℝ, μ∈M_f] with branching rate kernel Γ exists. By mixing over Γ we then get the probability laws ℙ_{s,μ} := E P^Γ_{s,μ}, s∈ℝ, μ∈M_f, corresponding to a superprocess X in the random medium Γ (which of course is no longer a Markov process). Our main result then reads as follows.

THEOREM 1.19. Let α, β, γ be given as in Proposition 1.18 and let X be the superprocess in the random medium Γ. If t∈ℝ is a fixed time point, then the random measure X(t) is absolutely continuous with ℙ_{s,μ}-probability one, for all s<t and μ∈M_f.

We note that for the continuous critical super-Brownian motion with constant branching rate considered above, Konno and Shiga (1988) obtained a stronger result, namely that with probability one the absolute continuity property holds simultaneously for all times t>0.

It can be noted that if the motion of the catalysts is allowed to have oscillatory discontinuities, then the admissibility conditions in Definition 1.10 may fail. In fact, consider the following simple counterexample.

Example 1.20. Set ρ(t) := δ_{sin(1/(1−t))} for t∈J := (0,1) and ρ(t) := 0 otherwise. This is obviously a branching rate kernel. But for this J, condition (1.11) is violated on the set (−1,+1) of positive Lebesgue measure. Indeed, for each z∈(−1,+1), in J we find a sequence r_n↑1 such that sin(1/(1−r_n)) = z holds. Then

(ρ(r_n), p^β(r_n,1,z,·)) = p^β(1−r_n, 0) = const (1−r_n)^{−β/α}

(see Lemma A.14 in the Appendix), which goes to infinity as n→∞.
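For the Brownian case α = 2 this blow-up is elementary to check numerically: the times r_n = 1 − 1/(arcsin z + 2πn) satisfy sin(1/(1−r_n)) = z and increase to 1, and the heat kernel at the origin, p(t,0) = (4πKt)^{−1/2}, evaluated at t = 1−r_n grows without bound. A small sketch (the choices z = 1/2, K = 1, and the number of terms are illustrative assumptions):

```python
import math

def blowup_along_catalyst(z=0.5, K=1.0, n_terms=6):
    """Values p(1 - r_n, 0) along times r_n with sin(1/(1 - r_n)) = z,
    where p(t, 0) = (4*pi*K*t)**-0.5 is the heat kernel at the origin."""
    vals = []
    for n in range(1, n_terms + 1):
        one_minus_r = 1.0 / (math.asin(z) + 2.0 * math.pi * n)  # = 1 - r_n
        vals.append((4.0 * math.pi * K * one_minus_r) ** -0.5)
    return vals
```

The returned values are strictly increasing in n, matching the divergence (1−r_n)^{−β/α} with β = 1, α = 2.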

To prove Proposition 1.18 we will heavily exploit scaling properties of stable distributions. This is the main reason why from the beginning we restricted ourselves to an α-stable mass flow, to α-stable motions of the catalysts, and to (1+β)-continuous state branching.

However, the results should not depend on these special properties, because they are of a local nature. It is clear that certain perturbations can be allowed; for instance, the Laplacian could be replaced by a uniformly elliptic differential operator. The symmetric stable processes could also be replaced by more general infinitely divisible processes whose Lévy measures have a similar behavior near the origin.

The plan of the paper is as follows. First we mention that all theorems and propositions of the Introduction will be reformulated in the sequel. In Section 2 we start by proving the Basic Lemma 1.15 and introducing the function space F₊ and the measure space M_{F,β}. Then in Theorem 3.5 a precise setting for equation (1.7) is given, including basic solutions and some continuity properties. In Section 4 first an existence proof of the superprocess X is sketched. Then the absolute continuity result for a fixed admissible branching rate kernel ρ_J follows (Theorem 4.4). After providing some estimates involving stable flows and densities related to the interplay of both stable motion laws, the catalyst process Γ (including a simplified Poisson version) is introduced in Section 6. Its properties are derived in Section 7, ending up with our main absolute continuity result, which is formulated in Theorem 7.14 for the superprocess in the random medium Γ. Comprehensive facts on the stable semigroup used in the present paper are compiled in an Appendix.

2. PRELIMINARIES

Before giving a more precise description of the model, we will prove the Basic Lemma.

Proof of the Basic Lemma 1.15. Assume that [Ω, A, P] is a probability space and that ν is a measurable map of [Ω, A] into [M_f, m_f]. Here m_f is the smallest σ-algebra of subsets of M_f, the set of all finite measures on ℝ, such that for each interval I the mapping m ↦ m(I) of M_f into ℝ is measurable.

For each ω∈Ω, we can decompose the measure ν(ω,dx) into its absolutely continuous and singular parts, ν_ac(ω,dx) and ν_s(ω,dx), respectively. Then again ν_ac and ν_s are measurable maps of [Ω, A] into [M_f, m_f]; see, for instance, Cutler (1984), Theorem 2.1.6.

Furthermore, for each ω∈Ω, the limit

(2.1)  lim_{ε→0+} (1/2ε) ν(ω, [z−ε, z+ε]) =: ξ_ac(ω,z)

exists for all z∈ℝ\N(ω), where N(ω) is a Borel subset of ℝ of Lebesgue measure 0, and ξ_ac(ω,·) is a version of the Radon–Nikodym derivative of ν_ac(ω,dx) with respect to Lebesgue measure; see, for instance, [8], Theorem III.12.6. Moreover, from the proof there it can be seen that ξ_ac = {ξ_ac(ω,z); ω∈Ω, z∈ℝ\N(ω)} is measurable with respect to the product σ-algebra corresponding to Ω×ℝ.

Hence (2.1) holds almost everywhere with respect to the product measure P(dω)dz on Ω×ℝ. In particular, for almost all z, the limit relation (2.1) is true with respect to convergence in distribution. Then by assumption (i) in the lemma, we conclude that ξ_ac(·,z) coincides in distribution with ξ(z), for almost all z∈ℝ. Therefore, by statement (ii) of the lemma,

∫P(dω)∫dz ξ_ac(ω,z)f(z) = ∫P(dω)∫ν(ω,dx) f(x)

holds for all f∈F₊. Thus we obtain Eν_ac = Eν. But then the natural inequality ν_ac(ω) ≤ ν(ω), ω∈Ω, is with probability one even an equality. Consequently, ν is absolutely continuous a.s., and the proof of the Basic Lemma is finished. □

We continue with some terminology. For constants K>0 and α∈(1,2], let S := {S_t; t≥0} denote the contraction semigroup of a symmetric stable Markov process on the real line ℝ with index α and generator KΔ_α = −K(−Δ)^{α/2}, where Δ is the one-dimensional Laplacian. That process possesses continuous transition probability density functions

p(s,t,x,y) = p(t−s, y−x) = p_α(K(t−s), y−x),  s<t, x,y∈ℝ,

with p_α taken from the Appendix. Note that we include α=2, the case of a Wiener process with generator KΔ.

Let F denote the set of all real-valued continuous functions f on ℝ with the property that there exist positive constants c and λ (possibly depending on f) such that |f(x)| ≤ c p(λ,x) holds for all x∈ℝ. We equip the linear space F with the supremum norm ‖·‖_∞ of uniform convergence.

In other words, F contains all those continuous functions f(x), x∈ℝ, which, as x→±∞, have at least an exponential decay c₁exp(−c₂x²) (for positive constants c₁ and c₂ possibly depending on f) provided that α=2; otherwise that exponential decay has to be replaced by a potential decay c|x|^{−1−α}, c>0; see Lemma A.8 (in the Appendix).

LEMMA 2.2. The space F is closed with respect to convolutions. For f∈F and T>0, all functions S_t f, 0≤t≤T, belong to F and are dominated by some h in F, i.e. |S_t f| ≤ h, 0≤t≤T.

Proof. See, for instance, Dawson and Fleischmann (1990a), Examples 3.1 and 3.3. □

Fix a number β∈(0,1]. Let M_{F,β} denote the set of those (non-negative) measures μ defined on the σ-field of all Borel subsets of ℝ for which (μ, f^β) is finite for all f∈F₊. Here the lower index + at a set A refers to the collection of all non-negative members of A.

We endow M_{F,β} with the coarsest topology such that, for each f∈F₊, the mapping μ ↦ (μ, f^β) of M_{F,β} into ℝ will be continuous. Of course, M_f is a subset of M_{F,β}.

3. BASIC SOLUTIONS OF THE UNDERLYING SINGULAR EQUATION

In this section we will deal with equation (1.7) in the setting needed in the present paper. To this end we may fix a finite nonempty open time interval J := (L,T) ⊂ ℝ, and write Ĵ and J̄ for [L,T) and [L,T], respectively. Let F^J denote the set of all continuous mappings u of Ĵ into F such that

‖u‖_J := ∫_J ds ‖u(s)‖_∞ < ∞.

We will look for solutions to (1.7) in the normed space [F^J, ‖·‖_J].

Next we introduce possible final states for solutions. Let Θ denote the set of all finite measures η defined on ℝ which are either degenerate (i.e. concentrated at a point) or absolutely continuous with a density function h such that h ≤ f for some f∈F₊ possibly depending on η. We equip Θ with the topology of weak convergence.

In particular, for each ε∈(0,1) the uniform distribution on the closed interval [−ε^{1/α}, ε^{1/α}] belongs to Θ. Its density function is denoted by q(ε).

First of all we shall deal with the trivial case in which the nonlinear term of equation (1.7) disappears. We will use the notation η*h(x) := ∫η(dy)h(x−y), x∈ℝ, and set

S^η(s) := η*p(T−s),  s∈Ĵ.

LEMMA 3.1. S^η belongs to F^J_+ for each η∈Θ, and it continuously depends on η. Moreover, we have the weak convergence S^η(s)(x)dx ⇒ η(dx) as s→T.

Proof. Fix η∈Θ. For each s∈Ĵ, obviously S^η(s) is a continuous function on ℝ. Since the continuous density functions p(T−s) belong to F₊ and F₊ is closed with respect to convolutions, S^η(s) belongs to F₊. Moreover, it continuously depends on s, since the stable density functions p are uniformly continuous on [c,∞)×ℝ for each c>0. Finally, S^η belongs to F^J_+ because of

(3.2)  ‖S^η‖_J ≤ const ‖η‖ ∫_J ds (T−s)^{−1/α} < ∞,

where ‖η‖ denotes the total mass of η, and Lemma A.14 was used.

By the way, here we exploited the assumption that α>1, which is essential since we intend to deal with point catalysts (recall that a symmetric stable process with index α has a positive local time if and only if α>1).

We now assume the weak convergence η_n ⇒ η in Θ and consider

∫_0^{T−L} ds sup_{x∈ℝ} |∫η_n(dy)p(s,x−y) − ∫η(dy)p(s,x−y)|.

By estimates as in (3.2) we see that we may assume that s is bounded away from zero, i.e. we suppose s∈[c,T−L] for some c>0. There the stable density functions are uniformly bounded (cf. Lemma A.14), and by the weak convergence η_n ⇒ η we may additionally assume that y varies only in a bounded set. Finally, p(s,x−y) converges to 0 as x→±∞, uniformly for such s and y (cf. Lemma A.8); thus it suffices to take the supremum over a bounded set of x.

Now it is enough to show that for fixed s and each bounded sequence {x_n; n≥1} in ℝ

(3.3)  ∫η_n(dy)p(s,x_n−y) − ∫η(dy)p(s,x_n−y) → 0

holds. Consider a subsequence of {x_n; n≥1}. Then it has a further subsequence converging to some x. But by continuous convergence along the latter subsequence, both terms in (3.3) tend to (η, p(s,x−·)) (see, for instance, [1], Theorem 5.5). This then implies the full convergence statement (3.3). Thus S^η continuously depends on η.

Finally, the weak convergence S^η(s)(x)dx ⇒ η(dx) as s→T is also easy to see by considering integrals ∫dx S^η(s)(x)h(x) where h is any uniformly continuous bounded function on ℝ. □

Now we will also take into consideration the nonlinear term in the equation. To this end, let K(J̄) denote the set of all kernels κ of J̄ := [L,T] into M_{F,β} such that κ(t,·) belongs to M_{F,β} for all t∈J̄, and κ(·,I) is a measurable function defined on J̄ for all intervals I in ℝ, as well as

(3.4)  ‖κ‖_f := sup_{r∈J̄} (κ(r), f^β) < ∞,  f∈F₊,

is true. Our results on the equation will be collected in the following theorem. Recall that q(ε), introduced before Lemma 3.1, is the density function of the uniform distribution on some ε-neighborhood of the origin.

THEOREM 3.5. Let κ belong to K(J̄) and η to Θ. If η is absolutely continuous, then there exists a unique element v := v^J[η,κ] in F^J which satisfies the integral equation

(3.6)  v(s,x) = η*p(T−s)(x) − ∫_s^T dr ∫κ(r,dy) p(s,r,x,y) |v|^{1+β}(r,y),

s∈Ĵ, x∈ℝ. If η has an atom at z and

(3.7)  sup_{r∈J} (κ(r), [δ_z*p(T−r)]^β) < ∞

holds, then there exists at most one element v := v^J[η,κ] in F^J which satisfies (3.6). If {ε(n); n≥1} is a sequence with ε(n)∈(0,1), n≥1, converging to 0 as n→∞, and in addition to (3.7)

(3.8)  limsup_{n→∞} sup_{r∈J} (κ(r), [δ_z*p(T−r+ε(n))]^β) < ∞

is true (where z is the position of the atom of η), then there exists a solution v = v^J[η,κ]∈F^J_+ to (3.6), and the convergence

(3.9)  v^J[η*q(ε(n)), κ] → v  as n→∞

takes place in F^J. Finally, we have

(3.10)  v^J[η*q(ε(n)), κ](L,b) → v(L,b)  as n→∞

for all b∈ℝ satisfying

(3.11)  sup_{r∈J} (κ(r), δ_b*p(r−L)) < ∞.

Note that (3.6) can formally be written as in (1.7), with final condition expressed by the weak convergence v(s,x)dx ⇒ η(dx) as s→T.

Proof. Fix η∈Θ and κ∈K(J̄). In order to prove uniqueness, assume that in F^J we have two solutions v₁ and v₂ of (3.6) which correspond to these data, i.e. we have

v_i(s) = η*p(T−s) − ∫_s^T dr ∫κ(r,dy) p(s,r,·,y) |v_i|^{1+β}(r,y),  i=1,2.

Applying Lemma A.14, the elementary inequality

(3.12)  |a^{1+β} − b^{1+β}| ≤ (1+β) max(a,b)^β |a−b|,  a,b ≥ 0,

and the a priori estimate

(3.13)  |v_i(s)| ≤ η*p(T−s),  s∈Ĵ, i=1,2,

for a nonempty subinterval Ĵ′ = [L′,T) of Ĵ we get

0 ≤ ‖v₁−v₂‖_{J′} ≤ 2∫_{J′} ds ∫_s^T dr ∫κ(r,dy) (r−s)^{−1/α} [η*p(T−r)]^β(y) ‖v₁(r)−v₂(r)‖_∞.

By a change of the order of integration we may continue with

(3.14)  ‖v₁−v₂‖_{J′} ≤ const ‖v₁−v₂‖_{J′} (T−L′)^{1−1/α} sup_{r∈J} (κ(r), [η*p(T−r)]^β).

First we assume that η is degenerate and has an atom at z. Then by (3.7) the latter supremum term is finite. Moreover, by the estimate (3.13) and Lemma 3.1 the norm expression is finite, too. Hence, since α>1, for L′ sufficiently close to T we get the contradiction ‖v₁−v₂‖_{J′} < ‖v₁−v₂‖_{J′} unless ‖v₁−v₂‖_{J′} = 0. In other words, for a degenerate η and on a sufficiently small interval Ĵ′ we get uniqueness.

Now we will prepare for the corresponding existence proof. For ε∈(0,1) we consider the function η*p(ε) =: η_ε. By Lemma 3.1 it belongs to F₊, and it determines a measure in Θ which we denote by the same symbol η_ε.

From the Existence Theorem 2.6 in Dawson and Fleischmann (1990a) we know that with probability one to each η_ε there exists a continuous mapping v_ε of J̄ = [L,T] into F₊ which solves (3.6) on Ĵ. In fact, by time reversibility of the stable semigroup S and time reversibility of the condition (3.4), the forward formulation of the equation in [3] can easily be transferred to the backward formulation in the present paper.

Obviously, v_ε restricted to Ĵ (which we denote by the same symbol) belongs to F^J_+. We will now apply these constructions to the sequence {ε(n); n≥1} of the theorem. Our next task is to prove that

(3.15)  ‖v_{ε(m)} − v_{ε(n)}‖_{J′} → 0  as m,n→∞

for each sufficiently small subinterval Ĵ′ = [L′,T) of Ĵ.

From equation (3.6), for s∈Ĵ′ and x∈ℝ we get

|v_{ε(m)}(s,x) − v_{ε(n)}(s,x)| ≤ |S^{η_{ε(m)}}(s)(x) − S^{η_{ε(n)}}(s)(x)| + ∫_s^T dr ∫κ(r,dy) p(s,r,x,y) |v_{ε(m)}^{1+β}(r,y) − v_{ε(n)}^{1+β}(r,y)|.

Applying again Lemma A.14, (3.12), and (3.13), as in the estimate (3.14) we obtain

(3.16)  ‖v_{ε(m)} − v_{ε(n)}‖_{J′} ≤ ‖S^{η_{ε(m)}} − S^{η_{ε(n)}}‖_{J′} + const ‖v_{ε(m)} − v_{ε(n)}‖_{J′} (T−L′)^{1−1/α} sup_{r∈J} (κ(r), [η_{ε(m)}*p(T−r)]^β + [η_{ε(n)}*p(T−r)]^β).

If η has an atom at z, then by Lemma 3.1 and assumption (3.8) we deduce (3.15) for sufficiently small Ĵ′.

Now we will complete the existence proof. By the assertion (3.15), {v_{ε(n)}; n≥1} is a Cauchy sequence in F^{J′}. However, by construction, F^{J′} coincides with the Banach space L¹[Ĵ′, F₊, ds] restricted to continuous functions. Hence, v_{ε(n)} converges in L¹[Ĵ′, F₊, ds] to some limit v ≥ 0 as n→∞. If again η has an atom at z, using condition (3.8) and proceeding as in the derivation of (3.14) or (3.15), we conclude that

∫_{(·)}^T dr ∫κ(r,dy) p(·,r,·,y) v_{ε(n)}^{1+β}(r,y) → ∫_{(·)}^T dr ∫κ(r,dy) p(·,r,·,y) v^{1+β}(r,y)

also holds in L¹[Ĵ′, F₊, ds] as n→∞. Noting that

s ↦ ∫_s^T dr ∫κ(r,dy) p(s,r,·,y) v^{1+β}(r,y)

is a continuous mapping of Ĵ′ into F₊, and combining this with Lemma 3.1, we get that v is a continuous element in L¹[Ĵ′, F₊, ds] which solves (3.6) on Ĵ′. This gives the existence claim in the case of a degenerate η and a sufficiently small interval Ĵ′.

So far we have proved uniqueness and existence on Ĵ′ for degenerate η. If now η is absolutely continuous with density function h ≤ f′∈F₊, then the supremum in the estimate (3.14) can be bounded above by

const sup_{r∈J} (κ(r), [f′*p(T−r)]^β) ≤ const ‖κ‖_f < ∞,

where f∈F₊ is a dominating function for f′*p(T−r) = S_{T−r}f′, r∈J (and the norm ‖·‖_f was defined in (3.4)). Such an f actually exists by Lemma 2.2. Hence the uniqueness proof carries over to such η.

For the same reasons, the supremum in (3.16) is finite, uniformly in m and n. Therefore the existence proof also remains valid for absolutely continuous η.

Summarizing, for sufficiently small intervals Ĵ′ uniqueness and existence hold, and in this case we turn to the continuity assertion (3.9).

Recall that q(ε) is the density function of a uniform distribution. Let u_n denote the solution corresponding to η*q(ε(n)). Since

(3.17)  q(ε) ≤ const p(ε),  0<ε<1,

is true (which follows from a simple scaling argument), we may proceed as in the proof of (3.15) to show that

‖u_n − v‖_{J′} → 0  as n→∞

holds, where v = v^J[η,κ], for both choices of η (i.e. degenerate or absolutely continuous measure etc.).

In summary, if for the moment we exclude (3.10), then all assertions in the theorem hold, provided we replace Ĵ by a sufficiently small subinterval Ĵ′ = [L′,T).

Since the bounds used do not depend on Ĵ′, an extension of the proved assertions from Ĵ′ to the whole interval Ĵ can be established by the usual iteration scheme. Note, in particular, that v(L′), which will serve as the final state of the next iteration step, determines an absolutely continuous measure in Θ with density function in F₊. Therefore, the conditions (3.7) and (3.8) are only needed for the initial step of the iteration.

For a proof of (3.10) we write φ = λδ_z and take a z satisfying (3.11). From (3.6), (3.12), (3.13), and (3.17) we get

|v_n(τ,z) − v(τ,z)| ≤ λ |δ_z*q(ε_n)*p(T−τ)(z) − δ_z*p(T−τ)(z)|

+ const ∫_J dr ∫ρ(r,dy) p(τ,r,z,y) |v_n−v|(r,y) [p^β(T−r+ε_n, y−z) + p^β(T−r, y−z)].

Clearly, the first summand at the right hand side of this inequality approaches 0 as n→∞, by the weak convergence of q(ε_n)(x)dx to δ_0(dx). The second summand may be estimated above by const times the expression

sup_{r∈J} ⟨ρ(r), δ_z*p(r−τ)[p^β(T−r+ε_n, ·−z) + p^β(T−r, ·−z)]⟩.

To show the boundedness of the latter term, we fix a time point s∈J. Then by Lemma A.14,

Page 142: Seminar on Stochastic Processes, 1990

136 D.A. Dawson, K. Fleischmann and S. Roelly

δ_z*p(r−τ)(x) ≤ const (s−τ)^{−1/α} = const, r∈[s,T), x∈ℝ,

and we may apply (3.7) and (3.8). But analogously we can proceed on the remaining interval (τ,s) by using (3.11).

Summarizing, the second summand may be estimated above by const ‖v_n−v‖_J, which by (3.9) converges to zero as n→∞.

This shows (3.10) and completes the proof of the theorem. □

4. SUPERPROCESS WITH ABSOLUTELY CONTINUOUS STATES

Let K(ℝ) denote the set of all measurable kernels of ℝ into M_{F,β} (this set of measures was defined at the end of Section 2) such that (1.6) holds. In other words, K(ℝ) is the set of all kernels of ℝ into M_{F,β} such that their restrictions to any finite closed interval J=[s,t] belong to K(J).

For instance, if ρ(r) ≡ ν for a measure ν in M_{F,β}, then ρ belongs to K(ℝ).

Actually, each ρ in K(ℝ) may serve as a branching rate kernel for a superprocess. (Recall that M_f is the set of all finite measures defined on ℝ.)

PROPOSITION 4.1. To each ρ in K(ℝ) there exists an M_f-valued time-inhomogeneous superprocess X = [X, P^ρ_{s,μ}; s∈ℝ₊, μ∈M_f] with Laplace transition functional

(4.2)  E^ρ_{s,μ} exp⟨X(t), −φ⟩ = exp⟨μ, −v(s,t)⟩,

s<t, μ∈M_f, φ∈F₊, where v(·,t) = v[φ,ρ] is the unique solution to equation (3.6) with J=[s,t), φ(dx) = φ(x)dx, and ρ = {ρ(r); r∈J}.

Moreover, we have the following expectation formula:

(4.3)  E^ρ_{s,μ} ⟨X(t), φ⟩ = ⟨μ, S_{t−s}φ⟩, s≤t, φ∈F₊.

Sketch of Proof (for details we refer to Dawson and Fleischmann (1990b)). First of all we assume that ρ is absolutely continuous, i.e. ρ(r,dx) = h(r,x)dx, x∈ℝ, but where the measurable density function h on ℝ×ℝ is even

Page 143: Seminar on Stochastic Processes, 1990


bounded. Then there exists a time-inhomogeneous superprocess X = [X, P^ρ_{s,μ}; s∈ℝ₊, μ∈M_f] with Laplace transition functional (4.2). See Dawson and Perkins (1990); compare also Fitzsimmons (1988) and (1989) for the time-homogeneous case.

To deal with a general ρ∈K(ℝ), fix an interval J=[s,t), s<t. Then we will use continuity properties of solutions to equation (3.6) (with φ(dx) = φ(x)dx and ρ = {ρ(r); r∈J}) as described in Dawson and Fleischmann (1990a, Theorems 2.11 and 2.13). There it was shown that under certain conditions the solutions v[φ,ρ] to (3.6) depend continuously on ρ. Thus we can obtain them as the limit of a sequence v[φ,ρ_n], where the ρ_n, n≥1, are approximations of ρ which are absolutely continuous with bounded density kernels as above. By dominated convergence the corresponding right hand sides of (4.2) then converge, and the limit will again be a Laplace functional. Since J is arbitrary, in this way we get Laplace transition functionals, which determine a time-inhomogeneous Markov process X with the desired properties.

The expectation formula (4.3) follows by a similar approximation procedure (or formally by differentiation as in the moment calculation in the Introduction). This finishes the sketch of the proof. □

Now we are in a position to formulate our absolute continuity result for a fixed admissible restricted branching rate kernel. Recall that 1<α≤2 and 0<β≤1.

THEOREM 4.4. Let ρ∈K(ℝ) and [X, P^ρ_{s,μ}; s∈ℝ, μ∈M_f] be a superprocess with branching rate kernel ρ, according to Proposition 4.1. Fix J=(s′,t), s′<t, and let the restricted kernel ρ_J be admissible as described in Definition 1.10. Then with respect to P^ρ_{s,μ}, s<s′, μ∈M_f, the random measure X(t) is absolutely continuous a.s., that is, there exists a random density function ξ(t) such that

P^ρ_{s,μ}{X(t,dy) = ξ(t,y)dy} = 1.

Page 144: Seminar on Stochastic Processes, 1990


Proof. Recall that q(ε) denotes the density function of a uniform distribution on some interval around the origin, as defined before Lemma 3.1.

Consider ρ, J, ρ_J and s,μ as in the theorem. Choose an exceptional set N(ρ_J) for ρ_J and sequences

ε(z) = {ε_n(z) ∈ (0,1); n≥1},

according to Definition 1.10.

By the expectation formula (4.3), we get E^ρ_{s,μ} X(s′) = S_{s′−s}μ, which is an absolutely continuous measure. Hence X(s′, N(ρ_J)) = 0 with P^ρ_{s,μ}-probability one, because N(ρ_J) is a Lebesgue zero set. By the Markov property it is therefore enough to show that X(t) is absolutely continuous with P^ρ_{s′,μ}-probability one, for all μ∈M_f satisfying μ(N(ρ_J)) = 0. We fix such a μ, and to simplify the notation we will write s instead of s′.

For z∈ℝ\N(ρ_J) and λ≥0, by (3.10) in Theorem 3.5, applied with the final states λδ_z*q(ε_n(z)) (for v_n) and λδ_z (for v_0), where ρ_J is the restriction of the branching rate kernel ρ to J, we get

v_n(s,z) → v_0(s,z) as n→∞,

since (3.7), (3.8), and (3.11) are fulfilled (see the conditions (1.11), (1.13), and (1.12), respectively). By our assumption on μ and dominated convergence this implies

⟨μ, v_n(s)⟩ → ⟨μ, v_0(s)⟩ as n→∞, for all λ≥0.

In fact, by (3.13), (3.17), and Lemma A.14, for all n≥0 (where we set ε_0(z)=0),

v_n(s) ≤ λ δ_z*p(ε_n(z)+t−s) ≤ const λ (t−s)^{−1/α},

which is a finite constant for the fixed λ, t, s. Using again this domination, we conclude that

⟨μ, v_0(s)⟩ → 0 as λ→0.

Therefore by Proposition 4.1 there exists a random variable η(z) ≥ 0 such that

Page 145: Seminar on Stochastic Processes, 1990

exp⟨μ, −v_n(s)⟩ → exp⟨μ, −v_0(s)⟩ = E exp[−λη(z)] as n→∞, λ≥0,

holds. In other words, we have the convergence in distribution

(4.5)  ⟨X(t), δ_z*q(ε_n(z))⟩ ⟹ η(z) as n→∞,

for each z∈ℝ\N(ρ_J). According to the Basic Lemma 1.15, it now suffices to show that

∫dz Eη(z) f(z) = E^ρ_{s,μ} ⟨X(t), f⟩, f∈F₊,

is true. But by (4.3), the right hand side coincides with ⟨S_{t−s}μ, f⟩. Hence it is enough to prove that

Eη(z) = ⟨μ, δ_z*p(t−s)⟩

is valid. Now taking expectations in the convergence relation (4.5) and using the expectation formula (4.3) in Proposition 4.1 we get

(4.6)  E^ρ_{s,μ} ⟨X(t), δ_z*q(ε_n(z))⟩ = ⟨μ, δ_z*q(ε_n(z))*p(t−s)⟩ → ⟨μ, δ_z*p(t−s)⟩ ≥ Eη(z) as n→∞.

On the other hand, by Jensen's inequality, for λ>0,

exp⟨μ, −v_0(s)⟩ ≥ exp[−λ Eη(z)].

Hence, by equation (3.6) and the estimate (3.13),

(4.7)  λ Eη(z) ≥ ⟨μ, λδ_z*p(t−s)⟩ − λ^{1+β} ∫μ(dx) ∫_J dr ∫ρ(r,dy) p(s,r,x,y) p^{1+β}(t−r, z−y).

But the latter integral term may be estimated above by

≤ const ∫_s^t dr (r−s)^{−1/α} (t−r)^{−1/α} sup_{s′∈J} ⟨ρ(s′), p^β(t−s′, z−·)⟩.

Since α>1 and by (1.11), the latter expression is finite. In (4.7) we divide by λ and let λ tend to 0. Then together with the estimate (4.6) we are done. □

5. SOME ESTIMATES INVOLVING STABLE FLOWS AND DENSITIES

In this section we collect some technical details needed later for the catalyst processes.

Let S′ be defined as S in Section 2, except replacing κ>0 by κ′≥0. We pay attention only to the cases κ′=0 and κ′=κ. (The former case will concern non-moving catalysts.)

Page 146: Seminar on Stochastic Processes, 1990

Consider a constant γ∈(0,1]. If α<2 holds, we additionally require that (βγ)^{−1} < 1+α. This condition guarantees that all functions f in F₊ are βγ-fold integrable, i.e. that f^{βγ} is integrable with respect to Lebesgue measure.

LEMMA 5.1. For K>0, the function

x ↦ sup{S′_s p^β(t,x); 0≤s≤K, 0<t≤K}, x≠0,

is finite. Moreover, it is γ-fold integrable on the set {x; |x|>1}, and if additionally βγ<1 holds, then it is also γ-fold integrable on {x; |x|<1}.

Proof. By Jensen's inequality,

(5.2)  S′_s p^β(t,x) ≤ [S′_s p(t,x)]^β.

But

S′_s p(t,x) = p(t,x) if κ′=0, and S′_s p(t,x) = p(s+t,x) if κ′=κ.

Hence, (5.2) can be continued with

≤ sup_{0<r≤2K} p^β(r,x).

Then the statement directly follows from Lemma A.13 (with β′ replaced by γ). □

LEMMA 5.3. Under β<1, for K,T>0 the function

x ↦ sup_{0≤r≤K} ∫_0^T ds S′_s |Δ_α p^β(r+T−s) + ∂_s p^β(r+T−s)| (x), x≠0,

is finite and γ-fold integrable.

Proof. For x≠0, we consider the integral

(5.4)  ∫_0^T ds ∫dy p(s, y−x) |Δ_α p^β + ∂_s p^β| (r+T−s, y).

If we restrict the integration to |y−x| ≥ |x|/2, then we get

≤ ∫_0^T ds p(s, x/2) ‖Δ_α p^β(r+T−s) + ∂_s p^β(r+T−s)‖_1,

Page 147: Seminar on Stochastic Processes, 1990


where ‖·‖_1 denotes the L¹-norm. In view of (A.3),

p(s,x) ≤ const s^{−1/α} p(T,x), 0<s≤T, x∈ℝ.

On the other hand, by Lemma A.30, the norm expression can be estimated above by

≤ const (r+T−s)^{−1+(1−β)/α} ≤ const (T−s)^{−1+(1−β)/α},

since α>1 by assumption. Because we supposed β<1,

∫_0^T ds s^{−1/α} (T−s)^{−1+(1−β)/α} < ∞.

But p(T,x) is finite, too, and γ-fold integrable, since (βγ)^{−1}<1+α implies that γ^{−1}<1+α.

Now we restrict the integral (5.4) to |y−x| < |x|/2, which gives

(5.5)  |x|/2 < |y| < 3|x|/2.

First of all, if additionally |x|≥1 is true, then by the Lemmas A.28 and A.22,

|Δ_α p^β + ∂_s p^β| (r+T−s, y) ≤ const (r+T−s)^{−1} p^β(r+T−s, x/4),

and the restricted integral may be estimated to be

≤ const ∫_0^{T+K} ds s^{−1} p^β(s, x/4) ≤ const |x|^{−β} ∫_0^{(T+K)|x|^{−α}} ds s^{−1} p^β(s, 1/4),

where we used (A.1). But if α<2, by Lemma A.8 the latter inequality can be continued with

≤ const |x|^{−β(1+α)},

which is finite and γ-fold integrable on |x|≥1. On the other hand, for α=2 we also get a finite and γ-fold integrable bound.

Now assume 0<|x|<1. By Lemma A.6 (with κ=|x|^{−1}), for (5.4) restricted to (5.5) we can write

∫_0^T ds ∫_{1/2<|y|<3/2} dy |x| p(s, |x|y−x) |x|^{−α−β} |Δ_α p^β + ∂_s p^β| (|x|^{−α}(r+T−s), y).

Using (A.1) we continue with

Page 148: Seminar on Stochastic Processes, 1990


(5.6)  ≤ |x|^{−β} ∫_0^{T|x|^{−α}} ds ∫_{1/2<|y|<2} dy p(s, y−x|x|^{−1}) |Δ_α p^β + ∂_s p^β| (|x|^{−α}(r+T)−s, y)

≤ |x|^{−β} ∫_0^{(r+T)|x|^{−α}} ds ∫_{1/2<|y|<2} dy p(|x|^{−α}(r+T)−s, y−x|x|^{−1}) |Δ_α p^β + ∂_s p^β| (s, y).

Since y is bounded away from 0 and ∞, by the Lemmas A.25, A.22, and A.30 we get

|Δ_α p^β + ∂_s p^β| (s,y) ≤ h(s) := const s^{β−1} if 0<s<1, const s^{−1−β/α} if s≥1.

Hence (5.6) may be estimated to

≤ const |x|^{−β} ∫_0^∞ ds h(s) = const |x|^{−β},

which is finite and γ-fold integrable around the origin. This ends the proof. □

6. CATALYST PROCESSES

Here we introduce some catalyst processes Γ; for details we refer to Dawson and Fleischmann (1990a), Sections 4 and 5.

The random quantities appearing in the following are all defined on some common probability space [Ω, F, P]. Recall that κ′=0 or κ′=κ>0.

Let w^x := {w^x(t); t∈ℝ}, x∈ℝ, be a family of independent symmetric stable Markov processes with generator κ′Δ_α which at time t=0 pass through the site x∈ℝ, i.e. w^x(0)=x, and which have trajectories in D[ℝ,ℝ]. Here D[ℝ,A] denotes the space of all functions of ℝ into a topological space A which are right continuous and have left limits.

Recall that γ is a given parameter satisfying 0<γ≤1. If γ=1 holds, we consider a Poisson random point measure Γ(0) = Σ_{i=1}^∞ δ_{x(i)} on ℝ with uniform density, determined by its Laplace functional

Page 149: Seminar on Stochastic Processes, 1990


(6.1)  E exp⟨Γ(0), −f⟩ = exp[−∫dx (1 − e^{−f(x)})], f∈F₊.

We assume that Γ(0) is independent of the family w := {w^x; x∈ℝ}. Setting

Γ(t) := Σ_{i=1}^∞ δ_{w^{x(i)}(t)}, t∈ℝ,

we get a point measure-valued Markov process Γ.
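The Laplace functional (6.1) can be checked by a seeded simulation. Everything below (the test function f, the truncation window [−6,6], the sample sizes) is an assumption made only for the illustration, and the simulation is restricted to a window outside of which f is negligible:

```python
import math, random

random.seed(7)
f = lambda x: math.exp(-x * x)     # f in F_+, negligible outside [-6, 6]
L = 6.0
lam = 2 * L                        # Poisson(2L) points, uniform on [-L, L]

def poisson(lmbda):
    """Knuth's method for a Poisson random variate (pure stdlib)."""
    limit, k, p = math.exp(-lmbda), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# right hand side of (6.1): exp[-integral of (1 - e^{-f(x)}) dx], trapezoid rule
m = 4000
xs = [-L + 2 * L * i / m for i in range(m + 1)]
ys = [1.0 - math.exp(-f(x)) for x in xs]
rhs = math.exp(-((2 * L / m) * (sum(ys) - 0.5 * (ys[0] + ys[-1]))))

# left hand side: Monte Carlo estimate of E exp<Gamma(0), -f>
trials, acc = 20000, 0.0
for _ in range(trials):
    pts = [random.uniform(-L, L) for _ in range(poisson(lam))]
    acc += math.exp(-sum(f(x) for x in pts))
lhs = acc / trials
print(lhs, rhs)   # the two sides of (6.1) agree up to Monte Carlo error
```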

Alternatively, if γ<1, again independently of w, consider a stable random measure Γ(0) with index γ determined by the Laplace functional (1.17). As in the case of the Poisson point measure, this random measure Γ(0) has independent increments. With probability one it can be represented as

Γ(0) = Σ_{i=1}^∞ a_i δ_{x(i)}, with x(i) ≠ x(j) for i≠j.

We stress the fact that the supporting set {x(i); i≥1} is now dense in ℝ. Finally, also in contrast to the Poisson point measure, Γ(0) has infinite asymptotic density, i.e. K^{−1}Γ(0,[−K,K]) → ∞ as K→∞ with probability one. In this case

Γ(t) := Σ_{i=1}^∞ a_i δ_{w^{x(i)}(t)}, t∈ℝ,

yields a measure-valued Markov process Γ.

In both cases, γ=1 and γ<1, we call Γ a catalyst process. It describes a random system of point catalysts moving independently according to α-stable processes. Recall that the process Γ is defined on some basic probability space [Ω, F, P].

LEMMA 6.2. The catalyst process Γ is (in distribution) stationary in time and space. With P-probability one the following expectation formula holds:

E{⟨Γ(t), f⟩ | Γ(s)} = ⟨Γ(s), S_{t−s} f⟩, s≤t, f∈F₊.

Here stationarity means that Γ(r+·, y+·) has the same distribution as Γ, for all r, y ∈ ℝ.
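For a single catalyst moving as a Brownian motion (the case α=2, generator κΔ), the expectation formula reduces to E f(w^x(t)) = S_t f(x), which a seeded Monte Carlo run makes visible. The Gaussian test function and all parameter values below are assumptions for the illustration:

```python
import math, random

random.seed(1)
kappa, t, x0, sigma = 0.5, 1.0, 0.7, 1.0

# catalyst with generator kappa*Laplacian: w(t) ~ Normal(x0, variance 2*kappa*t)
f = lambda y: math.exp(-y * y / (2 * sigma ** 2))

n = 200_000
s = math.sqrt(2 * kappa * t)
mc = sum(f(random.gauss(x0, s)) for _ in range(n)) / n   # estimate of E f(w(t))

# closed form of (S_t f)(x0) for this Gaussian f
v = sigma ** 2 + 2 * kappa * t
exact = sigma / math.sqrt(v) * math.exp(-x0 * x0 / (2 * v))
print(mc, exact)   # agree up to Monte Carlo error
```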

Finally we quote the following result. Recall that we required (βγ)^{−1} < 1+α if α<2.

Page 150: Seminar on Stochastic Processes, 1990


LEMMA 6.3. The process Γ can be realized in D[ℝ, M_{F,β}], and with probability one Γ is a branching rate kernel, i.e. (1.6) is satisfied.

Remark 6.4. From the construction of solutions to (3.6) in the case of absolutely continuous final states φ provided in [3], Theorems 2.6, 2.11, and 2.13, it can be verified that the mapping ρ ↦ v[φ,ρ_J] of D[ℝ, M_{F,β}] into F₊ is measurable in an appropriate sense, for each choice of J.

Then from the construction of our superprocess X (cf. Proposition 4.1) it can be shown that the map ρ ↦ P^ρ_{s,μ} is measurable in an appropriate sense, for each s∈ℝ and μ∈M_f. This measurability property will be used below for defining the superprocess in a random medium.

7. FURTHER PROPERTIES OF THE CATALYST PROCESSES

First we recall that we assume (βγ)^{−1} < 1+α in the case α<2.

LEMMA 7.1. With P-probability one,

(7.2)  ∫Γ(0,dx) sup{S′_s p^β(t,x); 0≤s≤K, 0<t≤K} < ∞

for all K>0. Similarly, if β<1, for fixed T>0, with P-probability one

(7.3)  ∫Γ(0,dx) sup_{0≤r≤K} ∫_0^T ds S′_s |Δ_α p^β(r+T−s) + ∂_s p^β(r+T−s)| (x)

is finite, for all K>0.

Proof. First, by monotonicity in K, we may assume that K is fixed.

If in (7.2) we additionally introduce the indicator function 1{|x|>1}, then by Lemma 5.1 the new integrand will be γ-fold integrable with respect to Lebesgue measure. Hence, from the formulas (6.1) and (1.17) (which can be extended to more general non-negative functions) we know that this restricted integral is finite a.s.

Page 151: Seminar on Stochastic Processes, 1990


On the other hand, assume in addition that |x|≤1. If γ=1, then with probability one the Poisson system Γ(0) restricted to {|x|≤1} has finitely many points different from 0. Then by Lemma 5.1, the restricted integral is finite.

If γ<1, by Lemma 5.1 the integrand in (7.2) is γ-fold integrable on {|x|≤1} with respect to Lebesgue measure. Then we can employ (1.17) to get the a.s. finiteness of the integral in (7.2) restricted to |x|≤1.

Thus, the assertion (7.2) is proved. Using Lemma 5.3, the proof of (7.3) is even simpler. □

Now we restrict our consideration to the fixed finite half-open interval J=[0,T). Recall that β≤1.

LEMMA 7.4. Fix r≥0. Given Γ(0),

M^r_t := ⟨Γ(t), p^β(r+T−t)⟩ − ⟨Γ(0), p^β(r+T)⟩ − ∫_0^t ds ⟨Γ(s), [κΔ_α + ∂_s] p^β(r+T−s)⟩, t∈J,

where the integral term must be deleted in the case β=1, is a right continuous P{·|Γ(0)}-martingale with respect to the filtration F_t := σ{Γ(s); 0≤s≤t}, t∈J.

Proof. For t∈J, by Lemma 6.2,

(7.5)  E{⟨Γ(t), p^β(r+T−t)⟩ | Γ(0)} = ∫Γ(0,dx) S_t p^β(r+T−t, x) ≤ ∫Γ(0,dx) sup{S′_s p^β(s′,x); 0≤s,s′≤r+T, s′≠0}.

But by Lemma 7.1, this expression is finite with probability one. Therefore, given Γ(0), the first two terms in the definition of M^r_t are finite for all t∈J.

Let β=1. Then (7.5) shows that these first two terms have the required martingale property. (Note also that [κΔ_α + ∂_s] p(r+T−s) is identically zero in this case.)

Assume now that β<1. Let G₊ be the set of all functions g∈F₊ such that g^β belongs to the domain of κΔ_α. Then we observe that by the expectation formula in Lemma 6.2, for g in G₊ and given Γ(0),

Page 152: Seminar on Stochastic Processes, 1990


⟨Γ(t), g^β⟩ − ⟨Γ(0), g^β⟩ − ∫_0^t ds ⟨Γ(s), κΔ_α g^β⟩, t∈J,

is a right continuous martingale. Moreover, for sufficiently smooth mappings h of J into G₊,

⟨Γ(t), h^β(t)⟩ − ⟨Γ(0), h^β(0)⟩ − ∫_0^t ds ⟨Γ(s), [κΔ_α + ∂_s] h^β(s)⟩, t∈J,

is a right continuous martingale, too. From this the statement follows. □

Now let J denote the finite interval [τ,T).

LEMMA 7.6. Fix z∈ℝ and τ≥0. Let r_n→τ as n→∞ in [τ,τ+1) be given. Then

P{liminf_{n→∞} sup_{t∈J} ⟨Γ(t), p^β(t, T+r_n, z, ·)⟩ < ∞} = 1.

Proof. Since the catalyst processes are stationary in time and space (see Lemma 6.2), without loss of generality we may assume that z=0=τ.

If κ′=0, then by definition Γ(t)=Γ(0) a.s., and the expression under consideration can be estimated above by

∫Γ(0,dx) sup_{0<s≤T+τ+1} p^β(s,x).

Then by Lemma 7.1 we directly get the statement.

From now on suppose that κ′=κ. Fix r∈[τ,τ+1), let K be a natural number, and use the martingale M^r from Lemma 7.4 (with τ=0). To this end we fix a Γ(0) satisfying the assertions in the Lemmas 7.4 and 7.1. If β=1,

(7.7)  P{sup_{t∈J} ⟨Γ(t), p^β(r+T−t)⟩ > K | Γ(0)} ≤ P{sup_{t∈J} |M^r_t| > K/2 | Γ(0)} + 2K^{−1} ⟨Γ(0), p^β(r+T)⟩.

Applying Doob's inequality (which is also valid for the half-open interval J) yields

(7.8)  P{sup_{t∈J} |M^r_t| > K/2 | Γ(0)} ≤ const K^{−1} sup_{t∈J} E{⟨Γ(t), p^β(r+T−t)⟩ | Γ(0)}.

Page 153: Seminar on Stochastic Processes, 1990


If β<1, then (7.7) becomes true if at the right hand side we replace K/2 by K/3 and add the term

(7.9)  + 3K^{−1} E{∫_J ds ⟨Γ(s), |κΔ_α p^β + ∂_s p^β| (r+T−s)⟩ | Γ(0)}.

Changing here the order of expectation and integration, by the expectation formula in Lemma 6.2 the expressions (7.8) and (7.9) can be estimated above by

≤ const K^{−1} ⟨Γ(0), sup{S_t p^β(r+T−t); t∈J, 0≤r<τ+1} + sup_{0≤r<τ+1} ∫_J ds S_s |κΔ_α p^β + ∂_s p^β| (r+T−s)⟩ =: K^{−1} H(Γ(0)).

Now H(Γ(0)) is finite with probability one by Lemma 7.1. Note that the exceptional set is independent of K and r.

Summarizing, we found that for each natural number K,

P{sup_{t∈J} ⟨Γ(t), p^β(r+T−t)⟩ > K | Γ(0)} ≤ K^{−1} H(Γ(0)),

where H(Γ(0)) is finite a.s. with an exceptional set independent of K and r. We fix such a Γ(0). Then for all natural numbers K, n, and k,

P{sup_{t∈J} ⟨Γ(t), p^β(r_n+T−t)⟩ < K for some n>k | Γ(0)} ≥ 1 − K^{−1} H(Γ(0)).

Hence, by monotonicity in K,

P{liminf_{n→∞} sup_{t∈J} ⟨Γ(t), p^β(r_n+T−t)⟩ ≤ K | Γ(0)} ≥ 1 − K^{−1} H(Γ(0)),

for all K. Finally,

P{liminf_{n→∞} sup_{t∈J} ⟨Γ(t), p^β(r_n+T−t)⟩ < ∞ | Γ(0)} = 1 a.s.,

and we get

P{liminf_{n→∞} sup_{t∈J} ⟨Γ(t), p^β(r_n+T−t)⟩ < ∞} = 1.

This completes the proof. □

LEMMA 7.10. Fix J=(τ,T), τ<T. Consider a sequence r := {r_n; n≥1} in [0,1) with r_n→0 as n→∞.

Page 154: Seminar on Stochastic Processes, 1990

Then with P-probability one the following holds. For all φ∈Θ except those which have a (weighted) atom at z, for z in some set N(Γ_J, r) of Lebesgue measure zero, we have

liminf_{n→∞} sup_{t∈J} ⟨Γ(t), [φ*p(r_n+T−t)]^β⟩ < ∞.

Proof. Let φ∈Θ and r∈[0,1). By assumption on the space Θ,

φ*p(r+T−t) ≤ const p(L+r+T−t, ·−z), t∈J,

for some L≥0 and z∈ℝ. In fact, either φ is concentrated at some point z (then take L=0) or it has a density function bounded by some function in F₊ (then choose z=0). Hence, from Lemma 7.6 in connection with the spatial invariance of Γ, we see that for the given sequence r_n→0 and each φ∈Θ,

P{liminf_{n→∞} sup_{t∈J} ⟨Γ(t), [φ*p(r_n+T−t)]^β⟩ = ∞} = 0.

If φ is absolutely continuous, we are done.

Assume now that φ is degenerate, and let z denote the atom of φ. Then by Lemma 7.6 we get

∫dz P{liminf_{n→∞} sup_{t∈J} ⟨Γ(t), [δ_z*p(r_n+T−t)]^β⟩ = ∞} = 0.

Therefore, by Fubini's theorem, the limit inferior is infinite only on a zero set with respect to the product measure P(dω)dz, and once more by Fubini's theorem the claim follows. □

COROLLARY 7.11. Fix again J=(τ,T), τ<T. Then with P-probability one the following holds true. For all φ∈Θ except those which have a (weighted) atom at z, for z in some set N(Γ_J) of Lebesgue measure zero, we have

sup_{t∈J} ⟨Γ(t), [φ*p(t−τ)]⟩ < ∞.

Page 155: Seminar on Stochastic Processes, 1990

Proof. First we observe that the right continuous version of the time-reversed cadlag process Γ has the same probability law as the original process. Moreover, the supremum expression in the corollary over the open interval J is insensitive to changes from left to right continuous versions. Hence, to get the claim we may use Lemma 7.10 with r_n ≡ 0, together with the fact that γ^{−1} < 1+α follows from (βγ)^{−1} < 1+α. □

LEMMA 7.12. For each given J=(τ,T), τ<T, with P-probability one the restricted branching rate kernel Γ_J := {Γ(r); r∈J} is admissible.

Proof. Fix J=(τ,T). Let ε be a sequence in (0,1) with ε_n→0 as n→∞. Apply Lemma 7.10 with r_n ≡ 0 and also with r_n = ε_n, as well as Corollary 7.11, to obtain with probability one the existence of a Lebesgue zero set N(Γ_J, ε) such that the following are satisfied:

sup_{t∈J} ⟨Γ(t), [δ_z*p(T−t)]^β⟩ < ∞,

liminf_{n→∞} sup_{t∈J} ⟨Γ(t), [δ_z*p(ε_n+T−t)]^β⟩ < ∞,

sup_{t∈J} ⟨Γ(t), [δ_z*p(t−τ)]⟩ < ∞,

for all z∈ℝ\N(Γ_J, ε). For each such z we may now choose a subsequence ε(z) of ε such that along this subsequence the latter limit inferior becomes a finite limit. Then all requirements in Definition 1.10 are fulfilled, and the proof is complete. □

Combining the Lemmas 6.3 and 7.12, we immediately get the following result.

PROPOSITION 7.13. With P-probability one, Γ is a branching rate kernel. For each given J=(τ,T), τ<T, with P-probability one the restricted process Γ_J := {Γ(r); r∈J} is an admissible restricted branching rate kernel.

Since according to Remark 6.4, for all s∈ℝ and μ∈M_f, the mapping ρ ↦ P^ρ_{s,μ} is measurable in an appropriate sense, and because of Proposition 7.13 with P-probability one Γ is a branching rate kernel, by mixing we may form the probability measures ℙ_{s,μ} := E P^Γ_{s,μ}, s∈ℝ, μ∈M_f, describing a superprocess X in the random medium Γ.

Page 156: Seminar on Stochastic Processes, 1990

THEOREM 7.14. Let X be the superprocess in the random medium Γ, defined by the catalyst process Γ. If t∈ℝ is a fixed time point, then the random measure X(t) is absolutely continuous with ℙ_{s,μ}-probability one, for all s<t and μ∈M_f.

Proof. We fix s<t and μ∈M_f, choose an s′∈(s,t), and set J:=(s′,t). By Proposition 7.13, with P-probability one, Γ is a branching rate kernel and the restricted kernel Γ_J := {Γ(r); r∈J} is admissible. Therefore, given Γ_J, by Theorem 4.4 the random measure X(t) is absolutely continuous with P^Γ_{s,μ}-probability one. But then it is also absolutely continuous with ℙ_{s,μ}-probability one, and the proof is finished. □

APPENDIX: ON THE STABLE SEMI-GROUP

For convenience, here we compile some facts related to the stable semi-group which are needed in the present paper. To this end, we fix the following constants:

η∈(0,1), α,α′∈(0,2], and β,β′∈(0,1].

(Note that in the Appendix we do not impose restrictions such as α>1.)

For t>0 let q_η(t,·) denote the continuous density function of a stable distribution on ℝ₊ with index η, determined by the Laplace transform

∫_0^∞ ds q_η(t,s) e^{−sθ} = exp[−tθ^η], θ≥0.

Similarly, let p_α(t,·) be the continuous density function of a symmetric stable distribution with index α, given by the Fourier transform

∫dy p_α(t,y) e^{iyx} = exp[−t|x|^α], x∈ℝ.

In particular,

p_2(t,x) := (4πt)^{−1/2} exp[−x²/4t], x∈ℝ.

We get the self-similarity properties

(A.0)  q_η(t,s) = κ q_η(κ^η t, κs),

(A.1)  p_α(t,x) = κ p_α(κ^α t, κx),

κ>0, and, in the case α<2, the subordination formula

Page 157: Seminar on Stochastic Processes, 1990

(A.2)  p_α(t,x) = ∫_0^∞ ds q_{α/2}(t,s) p_2(s,x).

Immediately from (A.1) we conclude

(A.3)  p_α(t,x) ≤ const t^{−1/α} p_α(c,x), 0<t≤c, x∈ℝ,

for each c>0.
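For α=1 both sides of (A.1) and (A.2) are explicit: p_1 is the Cauchy density, and q_{1/2} is the one-sided stable law with Laplace transform exp[−t√θ], which has an elementary closed form. A numerical sketch in Python (the evaluation point and quadrature grid are arbitrary choices for the illustration):

```python
import math

p2 = lambda t, x: math.exp(-x * x / (4 * t)) / math.sqrt(4 * math.pi * t)
p1 = lambda t, x: t / (math.pi * (t * t + x * x))      # alpha = 1 (Cauchy)
# q_{1/2}(t,s): one-sided 1/2-stable density, Laplace transform exp[-t*sqrt(theta)]
q12 = lambda t, s: t / (2 * math.sqrt(math.pi)) * s ** -1.5 * math.exp(-t * t / (4 * s))

# (A.1): p_alpha(t,x) = kappa * p_alpha(kappa^alpha t, kappa x), here alpha = 1
k, t, x = 2.5, 0.8, 1.3
a1_err = abs(p1(t, x) - k * p1(k * t, k * x))

# (A.2): p_1(t,x) = integral over s of q_{1/2}(t,s) p_2(s,x), midpoint rule
n, smax = 100000, 1000.0
h = smax / n
integral = sum(q12(t, (i + 0.5) * h) * p2((i + 0.5) * h, x) for i in range(n)) * h
a2_err = abs(integral - p1(t, x))
print(a1_err, a2_err)   # both errors are tiny
```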

Let S^α := {S^α_t; t≥0} denote the semi-group corresponding to the family {p_α(t); t>0}:

S^α_t h(x) := ∫dy p_α(t, y−x) h(y), t>0, x∈ℝ

(provided that the integral exists). Its generator is given by the fractional power Δ_α = −(−Δ)^{α/2} of the Laplacian Δ.

LEMMA A.4. If α<2, we have the representation

Δ_α g = c_α ∫_0^∞ dλ λ^{−1−α/2} [S²_λ g − g], g∈V(Δ),

where c_α is some positive constant (determined by the gamma function) and V(Δ) is the domain of definition of the one-dimensional Laplacian Δ.

Proof. See Yosida (1978), formula (9.11.5). □

Immediately from (A.1), for g∈V(Δ), t≥0, and x∈ℝ, we get

S²_t [g(κ·)](x) = [S²_{κ²t} g](κx),

and therefore

(A.5)  Δ_{α′}[g(κ·)](x) = κ^{α′} [Δ_{α′} g](κx).

LEMMA A.6. We have the following self-similarity formulas:

∂_t p_α^β(t,x) = κ^{α+β} [∂_t p_α^β](κ^α t, κx),
∂_x p_α^β(t,x) = κ^{1+β} [∂_x p_α^β](κ^α t, κx),
Δ_{α′} p_α^β(t,x) = κ^{α′+β} [Δ_{α′} p_α^β](κ^α t, κx),

t>0, x∈ℝ, κ>0.

Page 158: Seminar on Stochastic Processes, 1990

Proof. The first two statements follow from (A.1) by differentiation, whereas the third one is a consequence of the identity (A.5) combined with (A.1). □

LEMMA A.7. s^{1+η} q_η(1,s) converges to some positive constant (depending on η) as s→∞, whereas exp(1/s) q_η(1,s) tends to 0 as s→0.

Proof. See, e.g., Zolotarev (1983), formula (2.4.8) and Theorem 2.5.2. □
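For η=1/2 the density q_η(1,·) is explicit, so the first claim of Lemma A.7 can be watched numerically (a sketch; η=1/2 is chosen because it is the one index with an elementary closed form):

```python
import math

# q_{1/2}(1,s) has the closed form (2*sqrt(pi))^{-1} s^{-3/2} exp(-1/(4s))
q12 = lambda s: s ** -1.5 * math.exp(-1 / (4 * s)) / (2 * math.sqrt(math.pi))

limit = 1 / (2 * math.sqrt(math.pi))          # the constant for eta = 1/2
vals = [s ** 1.5 * q12(s) for s in (10.0, 100.0, 1000.0)]
print(vals, limit)   # s^{1+eta} q_eta(1,s) -> 1/(2 sqrt(pi)) ~ 0.2821
```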

LEMMA A.8. If α<2, then t^{−1}|x|^{1+α} p_α(t,x) is bounded in t>0, x∈ℝ and, as x→∞, converges to some positive constant which is independent of t. On the other hand, for given k,K>0, it is bounded away from 0 on the set {[t,x]; 0<t≤K, |x|≥k}.

Proof. By substitution in (A.2) and by the self-similarity properties (A.0) and (A.1) we get

t^{−1}|x|^{1+α} p_α(t,x) = ∫_0^∞ ds [t^{−2/α}x²s]^{1+α/2} q_{α/2}(1, t^{−2/α}x²s) s^{−1−α/2} p_2(s,1).

The integral ∫_0^∞ ds s^{−1−α/2} p_2(s,1) is finite. On the other hand, by Lemma A.7,

[t^{−2/α}x²s]^{1+α/2} q_{α/2}(1, t^{−2/α}x²s)

is bounded in s,t,x, which yields the first statement. Moreover, by the same lemma, for fixed s and t, as x→∞ it converges to a constant which is independent of s and t. Finally, by (A.1) we have

(A.9)  p_α(t,x) = t^{−1/α} p_α(1, t^{−1/α}x), t>0, x∈ℝ,

hence

t^{−1}|x|^{1+α} p_α(t,x) = |t^{−1/α}x|^{1+α} p_α(1, t^{−1/α}x),

and the convergence implies the last statement. □
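For α=1 the quantity in Lemma A.8 is completely explicit, which makes the boundedness and the limit visible (an illustrative check of the α=1 case only; the grid values are arbitrary):

```python
import math

# For alpha = 1 (Cauchy), t^{-1} |x|^{1+alpha} p_1(t,x) = x^2 / (pi (t^2 + x^2)):
# bounded by 1/pi, and -> 1/pi as |x| -> infinity, uniformly in t.
p1 = lambda t, x: t / (math.pi * (t * t + x * x))
expr = lambda t, x: (abs(x) ** 2 / t) * p1(t, x)

vals = [expr(t, x) for t in (0.01, 1.0, 50.0) for x in (0.5, 5.0, 500.0)]
bound = 1 / math.pi
print(max(vals), bound)
```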

Recall that const always denotes a positive and finite constant.

Page 159: Seminar on Stochastic Processes, 1990

LEMMA A.10. Given k,K>0, we have

p_α(t,x) ≤ const p_α(K,x), 0<t≤K, |x|≥k.

Proof. It is easy to see that the statement holds for α=2, and in the case α<2 we may apply Lemma A.8. □

LEMMA A.11. Let u∈[0,1+α) be given. Then t^{(1−u)/α}|x|^u p_α(t,x) is bounded in t>0, x∈ℝ.

Proof. First of all,

(A.12)  sup_{r≥0} r^a e^{−r} < ∞, a>0.

This already implies the statement in the case α=2.

Now we assume that α<2. From (A.2) and the proved statement in the case α=2 we get

t^{(1−u)/α}|x|^u p_α(t,x) ≤ const t^{(1−u)/α} ∫_0^∞ ds q_{α/2}(t,s) s^{−(1−u)/2}.

By the self-similarity (A.0), the inequality can be continued with

= const ∫_0^∞ ds q_{α/2}(1,s) s^{−(1−u)/2}.

But the latter integral is finite by Lemma A.7. □
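In the case α=2 the bound of Lemma A.11 is exactly the elementary estimate (A.12): writing r = x²/4t, one gets t^{(1−u)/2}|x|^u p_2(t,x) = 2^u (4π)^{−1/2} r^{u/2} e^{−r}. A quick numerical sketch (u=1.5 and the grid are arbitrary illustration choices):

```python
import math

u = 1.5
p2 = lambda t, x: math.exp(-x * x / (4 * t)) / math.sqrt(4 * math.pi * t)
expr = lambda t, x: t ** ((1 - u) / 2) * abs(x) ** u * p2(t, x)

# analytic supremum from (A.12): sup_r r^{u/2} e^{-r} = (u/2)^{u/2} e^{-u/2}
bound = 2 ** u / math.sqrt(4 * math.pi) * (u / 2) ** (u / 2) * math.exp(-u / 2)

grid = [expr(t, x) for t in (0.001, 0.1, 1.0, 100.0) for x in (0.01, 0.5, 3.0, 50.0)]
print(max(grid), bound)   # grid max stays below, and close to, the bound
```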

LEMMA A.13. Let α, β, β′ be given as in the beginning of the Appendix and K>0. Then the function

x ↦ [sup_{0<t≤K} p_α^β(t,x)]^{β′}, x≠0,

is finite. Moreover, it is integrable (with respect to Lebesgue measure) on the set {x; |x|>1} if in the case α<2 additionally ββ′(1+α) > 1 is fulfilled, whereas on {x; |x|<1} it is integrable if ββ′ < 1 holds.

Proof. On |x|>1, we apply Lemma A.10, where in the case α<2 we additionally employ Lemma A.8. On |x|≤1, we may use Lemma A.11 with u=1. □

Page 160: Seminar on Stochastic Processes, 1990

LEMMA A.14. For t>0 we have

‖p_α(t)‖_∞ = const t^{−1/α},
‖∂_t p_α(t)‖_∞ = const t^{−1−1/α},
‖∂_x p_α(t)‖_∞ = const t^{−2/α},
‖Δ p_α(t)‖_∞ = const t^{−3/α}.

Proof. The dependence in t results from Lemma A.6. By the Fourier inversion formula,

p_α(t,x) = (2π)^{−1} ∫dy exp[−t|y|^α] cos(yx).

Hence

∂_t p_α(t,x) = (2π)^{−1} ∫dy [−|y|^α] exp[−t|y|^α] cos(yx),

and for all x∈ℝ,

|∂_t p_α(t,x)| ≤ const ∫dy |y|^α exp[−t|y|^α] < ∞.

The remaining statements are quite analogous. □
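For α=1 the sup-norms of Lemma A.14 can be computed directly from the Cauchy density; the sketch below checks the first two rates numerically over a grid (an illustration of the α=1 case only):

```python
import math

p1  = lambda t, x: t / (math.pi * (t * t + x * x))                      # Cauchy
dp1 = lambda t, x: (x * x - t * t) / (math.pi * (t * t + x * x) ** 2)  # d/dt p_1

xs = [i / 100 for i in range(-3000, 3001)]
for t in (0.5, 2.0, 8.0):
    sup_p  = max(p1(t, x) for x in xs)
    sup_dp = max(abs(dp1(t, x)) for x in xs)
    # ||p_1(t)||_inf = (pi t)^{-1} = const t^{-1/alpha}
    assert abs(sup_p - 1 / (math.pi * t)) < 1e-9
    # ||d/dt p_1(t)||_inf = (pi t^2)^{-1} = const t^{-1-1/alpha}
    assert abs(sup_dp - 1 / (math.pi * t * t)) < 1e-9
print("Lemma A.14 rates verified for alpha = 1")
```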

LEMMA A.15. For t>0 and x∈ℝ,

|∂_t p_2^β(t,x)| + |Δ p_2^β(t,x)| ≤ const t^{−1}[1 + x²/t] p_2^β(t,x) ≤ const t^{−1} p_2^β(t, x/2).

Proof. First of all, for 0<α≤2,

(A.16)  ∂_t p_α^β(t,x) = β p_α^{β−1}(t,x) ∂_t p_α(t,x),

and

(A.17)  Δ p_α^β(t,x) = β(β−1) p_α^{β−2}(t,x) [∂_x p_α(t,x)]² + β p_α^{β−1}(t,x) Δ p_α(t,x).

But

(A.18)  ∂_x p_2(t,x) = −x(2t)^{−1} p_2(t,x)

and

Δ p_2(t,x) = ∂_t p_2(t,x) = [−(2t)^{−1} + x²(2t)^{−2}] p_2(t,x).

Then the first claimed inequality follows. By

p_2(t,x) = p_2(t, x/2) exp[−3x²/16t],

combined with (A.12), we also arrive at the second one. □
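The Gaussian identities (A.18) and Δp_2 = ∂_t p_2 used in this proof are easy to confirm by finite differences (a sketch; the evaluation point and step size are arbitrary choices):

```python
import math

p2 = lambda t, x: math.exp(-x * x / (4 * t)) / math.sqrt(4 * math.pi * t)

t, x, h = 0.7, 1.1, 1e-5
dt_num = (p2(t + h, x) - p2(t - h, x)) / (2 * h)
dx_num = (p2(t, x + h) - p2(t, x - h)) / (2 * h)

dt_exact = (-1 / (2 * t) + x * x / (4 * t * t)) * p2(t, x)  # = Laplacian p_2 (heat equation)
dx_exact = -x / (2 * t) * p2(t, x)                          # formula (A.18)
print(abs(dt_num - dt_exact), abs(dx_num - dx_exact))       # both tiny
```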

Page 161: Seminar on Stochastic Processes, 1990


LEMMA A.19. We have

|∂_t p_α(t,x)| ≤ const t^{−1} p_α(t, x/2), t>0, x∈ℝ.

Proof. Because of Lemma A.15 we may restrict to α<2. By the subordination (A.2), the self-similarity (A.0), and a substitution of the integration variable,

(A.20)  p_α(t,x) = ∫_0^∞ ds q_{α/2}(1,s) p_2(t^{2/α}s, x).

Thus,

|∂_t p_α(t,x)| ≤ const ∫_0^∞ ds q_{α/2}(1,s) t^{2/α−1} s |∂_t p_2|(t^{2/α}s, x).

Applying Lemma A.15 and again (A.20), we are done. □

LEMMA A.21. Given k,K>0, for 0<t≤K and |x|≥k we have

|∂_x p_α(t,x)| ≤ const t^{−1/α} p_α(t, x/2)

and

|Δ p_α(t,x)| ≤ const t^{−2/α} p_α(t, x/2).

Proof. By (A.2) and Lemma A.15, for α<2 we get

|Δ p_α(1,x)| ≤ ∫_0^∞ ds q_{α/2}(1,s) |Δ p_2(s,x)| ≤ const ∫_0^∞ ds q_{α/2}(1,s) s^{−1}[1 + x²/s] p_2(s,x).

But, for |x|≥k,

p_2(s,x) ≤ p_2(s, x/2) exp[−3x²/32s] exp[−3k²/32s].

Then with (A.12) and (A.2) we arrive at the second inequality in the case α<2 and t=1. The latter restriction can be removed using (A.1) and Lemma A.6, whereas the case α=2 was contained in Lemma A.15.

The proof of the first inequality is quite analogous, except that we apply (A.18) instead of Lemma A.15. □

LEMMA A.22. Given k,K>0, for 0<t≤K and |x|≥k we have

|∂_t p_α^β(t,x)| ≤ const t^{−1} p_α^β(t, x/2) ≤ const t^{β−1} |x|^{−β(1+α)}.

Page 162: Seminar on Stochastic Processes, 1990


Proof. It is enough to prove the first inequality. In fact, to get from this the second one, use

p_α(t, x/2) ≤ const t |x|^{−1−α}, t>0, x≠0,

which follows from Lemma A.8 in the case α<2 and is valid for α=2, too.

By (A.16) and Lemma A.19, for t>0 and x∈ℝ we get

(A.23)  |∂_t p_α^β(t,x)| ≤ const p_α^{β−1}(t,x) t^{−1} p_α(t, x/2).

Because of Lemma A.15, we may suppose that α<2. Then from (A.1) (applied with κ=t^{−1/α}) and Lemma A.8 we recognize that

(A.24)  p_α(t, x/2) ≤ const p_α(t,x), 0<t≤K, |x|≥k.

If we combine this with (A.23), we are ready. □

LEMMA A.25. Given k,K>0, for 0<t≤K and |x|≥k we have

|Δ p_α^β(t,x)| ≤ const t^{−2/α} p_α^β(t, x/2).

Proof. Because of Lemma A.15 we may suppose α<2. Then apply (A.17), Lemma A.21, and (A.24). □

LEMMA A.26. We have

‖Δ p_α^β(t)‖_∞ < ∞, t>0.

Proof. Because of Lemma A.15 we may suppose that α<2 holds. For fixed k,K>0, by (A.9) and Lemma A.8, there is a positive constant const₊ such that

p_α(t,k) = t^{−1/α} p_α(1, t^{−1/α}k) ≥ const₊ t, 0<t≤K.

Hence (under α<2)

inf_{|x|≤k} p_α(t,x) ≥ const₊ t, 0<t≤K.

Therefore (A.17), Lemma A.14, and Lemma A.25 imply the claim. □

Page 163: Seminar on Stochastic Processes, 1990


LEMMA A.27. If α<2, for λ≤1 we have

|p_α^β(1,x) − S²_λ p_α^β(1)(x)| ≤ const λ [1 ∧ |x|^{−β(1+α)}], x∈ℝ.

Proof. By a change of the integration variable,

S²_λ p_α^β(1)(x) = ∫dy p_2(1,y) p_α^β(1, x+λ^{1/2}y).

We apply the Taylor formula:

p_α^β(1, x+λ^{1/2}y) = p_α^β(1,x) + λ^{1/2} y ∂_x p_α^β(1,x) + 2^{−1} λ y² Δp_α^β(1)(x+θλ^{1/2}y),

where θ (depending on x, y, λ) satisfies 0≤|θ|≤1. Since

∫dy p_2(1,y) y = 0,

we get

|p_α^β(1,x) − S²_λ p_α^β(1)(x)| ≤ 2^{−1} λ ∫dy p_2(1,y) y² |Δp_α^β(1)|(x+θλ^{1/2}y).

By the Lemmas A.26, A.25, and A.8 we have

|Δp_α^β(1,z)| ≤ const min{|z|^{−β(1+α)}, 1}, z≠0.

Hence, for the integral restricted to |x+θλ^{1/2}y| ≥ |x|/2 we are done. On the other hand, |x+θλ^{1/2}y| < |x|/2 implies |x|/2 < |θλ^{1/2}y| ≤ |y|, and

∫_{|y|>|x|/2} dy p_2(1,y) y² ≤ const |x|^{−β(1+α)}

is obviously true. □

LEMMA A.28. Let α≤α′ and k,K>0 be given. In the case α<2 we additionally require that β(1+α) > 1 holds. Then for 0<t≤K and |x|≥k we have

|Δ_{α′} p_α^β(t,x)| ≤ const t^{−α′/α} p_α^β(t, x/2).

Proof. Because of Lemma A.25 we may assume that α′<2. Also, (A.1) and Lemma A.6 allow us to reduce the problem to t=1. Then by Lemma A.8 it suffices to show that

|Δ_{α′} p_α^β(1,x)| ≤ const |x|^{−β(1+α)}, |x|≥k,

holds.

Page 164: Seminar on Stochastic Processes, 1990


By Lemma A.4,

(A.29) IAa ,P/(l,X) I

~ const J~dA A-1-a'/2IPa~(1,X) - S~Pa~(l) (X) 1. We distinguish between A~l and A>l. In the first case,

the previous lemma yields the desired result. Now we suppose λ>1. By Lemma A.8 we have

p_α^β(1,x) ≤ const |x|^{−β(1+α)}.

On the other hand, for the integral ∫ dy p_2(λ, y−x) p_α^β(1,y) restricted to |x−y| > |x|/2 we get the bound const p_2(λ, x/2), where we used Lemma A.13. By Lemma A.7 we have

λ^{−1−α'/2} ≤ const q_{α'/2}(1,λ),  λ>1.

Hence, by subordination (A.2), Lemma A.8, and α'≥α,

∫_{λ>1} dλ λ^{−1−α'/2} p_2(λ, x/2) ≤ const p_{α'}(1, x/2) ≤ const |x|^{−β(1+α)},  |x|≥k.

In the opposite case |x−y| ≤ |x|/2, we have |y| ≥ |x|/2, and again we may apply Lemma A.8. Summarizing, the integral in the formula line (A.29), restricted to λ>1, has the claimed estimate, too. □

LEMMA A.30. Let α≤α' be given, with the restriction β(1+α) > 1 if α<2. Then, for t>0 (and finite constants),

‖(∂/∂t) p_α^β(t)‖_1 = const t^{−1+(1−β)/α},

‖(∂/∂t) p_α^β(t)‖_∞ = const t^{−1−β/α},

‖Δ_{α'} p_α^β(t)‖_1 = const t^{−α'/α+(1−β)/α},

‖Δ_{α'} p_α^β(t)‖_∞ = const t^{−α'/α−β/α}.

Proof. The claimed dependence on t is a consequence of the self-similarities expressed in Lemma A.6, and we may assume that t=1 holds. In the expressions defining the two norms we will distinguish between |x|≥1 and the opposite. In the first case we use the Lemmas A.22, A.28, and A.13. It remains to show boundedness in |x|<1.

Page 165: Seminar on Stochastic Processes, 1990

Concerning the first two terms in the lemma, we use the estimate

p_α^{β−1}(1,x) ≤ p_α^{β−1}(1,1) = const,  |x|<1,

formula (A.16), and Lemma A.14. Concerning the other two terms, the case α'=2 follows from Lemma A.26, whereas under α'<2 in (A.29) we again distinguish between λ≤1 and λ>1. In the first case we apply Lemma A.27, whereas the remaining case is obvious. □

REFERENCES

[1] P. BILLINGSLEY, "Convergence of Probability Measures", Wiley, New York, 1968.

[2] C. CUTLER, "Some Measure-theoretic and Topological Results for Measure-valued and Set-valued Stochastic Processes", Carleton Univ., Lab. Research Stat. Probab., Tech. Report No. 49, Ottawa, 1984.

[3] D.A. DAWSON and K. FLEISCHMANN, Diffusion and reaction caused by point catalysts (revised manuscript, Carleton Univ., Ottawa, 1990a).

[4] D.A. DAWSON and K. FLEISCHMANN, Critical branching in a highly fluctuating random medium (revised manuscript, Carleton Univ., Ottawa, 1990b).

[5] D.A. DAWSON, K. FLEISCHMANN, and S. ROELLY-COPPOLETTA, Absolute continuity of the measure states in a branching model with catalysts, Carleton Univ., Lab. Research Stat. Probab., Tech. Report No. 134, Ottawa, 1989.

[6] D.A. DAWSON and K.J. HOCHBERG, The carrying dimension of a stochastic measure diffusion, Ann. Probab. 7 (1979), 693-703.

[7] D.A. DAWSON and E.A. PERKINS, Historical processes, Carleton Univ., Lab. Research Stat. Probab., Tech. Report No. 142, Ottawa, 1990.

[8] N. DUNFORD and J.T. SCHWARTZ, "Linear Operators. Part 1: General Theory", Interscience Publishers, New York, 1958.

[9] E.B. DYNKIN, Branching particle systems and superprocesses (manuscript, Cornell Univ., Ithaca, 1990).

[10] P.J. FITZSIMMONS, Construction and regularity of measure-valued Markov branching processes, Israel J. Math. 64 (1988), 337-361.

[11] P.J. FITZSIMMONS, Correction and addendum to: Construction and regularity of measure-valued Markov branching processes, Israel J. Math. (to appear 1990).

[12] K. FLEISCHMANN, Critical behavior of some measure-valued processes, Math. Nachr. 135 (1988), 131-147.

Page 166: Seminar on Stochastic Processes, 1990

[13] N. KONNO and T. SHIGA, Stochastic differential equations for some measure-valued diffusions, Probab. Th. Rel. Fields 79 (1988), 201-225.

[14] R. MEIDAN, On the connection between ordinary and generalized stochastic processes, J. Math. Analysis Appl. 76 (1980), 124-133.

[15] S. ROELLY-COPPOLETTA, A criterion of convergence of measure-valued processes: application to measure branching processes, Stochastics 17 (1986), 43-65.

[16] R. TRIBE, Path properties of superprocesses, Ph.D. thesis, UBC, Vancouver, 1989.

[17] A. WULFSOHN, Random creation and dispersion of mass, J. Multivariate Anal. (1986), 274-286.

[18] K. YOSIDA, "Functional Analysis", 5th edition, Springer-Verlag, Berlin, 1978.

[19] V.M. ZOLOTAREV, "One-dimensional Stable Distributions" (in Russian), Nauka, Moscow, 1983.

DONALD A. DAWSON, Department of Mathematics and Statistics, Carleton University, Ottawa, Canada K1S 5B6

SYLVIE ROELLY, Laboratoire de Calcul des Probabilités, Université Paris 6, 4, place Jussieu, Tour 56, 75230 Paris Cedex 05, France

KLAUS FLEISCHMANN Karl Weierstrass Institute of Mathematics, Box 1304, Berlin, DDR-1086

Page 167: Seminar on Stochastic Processes, 1990

Martingales Associated with Finite Markov Chains

ROBERT J. ELLIOTT

1. Introduction.

In a recent paper, [1], Philippe Biane introduced martingales M^k associated

with the different jump 'sizes' of a time homogeneous, finite Markov chain and

developed homogeneous chaos expansions. It has long been known that the Kol­

mogorov equation for the probability densities of a Markov chain gives rise to a

canonical martingale M. The modest contributions of this note are that, working with a non-homogeneous chain, we relate Biane's martingales M^k to M, and calculate the quadratic variation of M and thereby that of the M^k. In addition, square field

identities are obtained for each jump size.

For 0 ≤ i ≤ N write e_i = (0,0,...,1,...,0)* for the i-th unit (column) vector in R^{N+1} (so e_0 = (1,0,...,0)*, etc.). Consider the (non-homogeneous) Markov process {X_t}, t ≥ 0, defined on a probability space (Ω,F,P), whose state space, without loss of generality, can be identified with the set S = {e_0, e_1, ..., e_N}. Write p_t^i = P(X_t = e_i), 0 ≤ i ≤ N. We shall suppose that for some family of matrices A_t, p_t = (p_t^0, ..., p_t^N)* satisfies the forward Kolmogorov equation

(1.1)  dp_t/dt = A_t p_t.

A_t = (a_ij(t)) is, therefore, the family of Q-matrices of the process.

It has long been known (see, for example, Liptser and Shiryayev [4], Elliott [2])

that the process

(1.2)  M_t = X_t − X_0 − ∫_0^t A_r X_{r−} dr

is a martingale. (See Lemma 2.3 below.)

ACKNOWLEDGMENTS: Research partially supported by NSERC Grant A7964, the Air Force Office of Scientific Research, United States Air Force, under contract AFOSR-86-0332, and the U.S. Army Research Office under contract DAAL03-87-0102.

Page 168: Seminar on Stochastic Processes, 1990

162 R.J. Elliott

Solving (1.2) by 'variation of constants' we can immediately write

(1.3)  X_t = Φ(t,0)(X_0 + ∫_0^t Φ(0,r)^{−1} dM_r),

where Φ is the fundamental matrix of the generator A. Equation (1.3) is a martingale representation result which in turn gives a representation result in terms

of the M^k. (By iterating this representation Biane's homogeneous chaos expansion can be obtained; this is quite explicit, in terms of the matrices Φ and matrices associated with A.) Functions of the chain are just given by vectors in R^{N+1} and

in Section 4 'square field' identities are obtained for each jump 'size'.

2. Markov Chains.

Consider a Markov chain {X_t}, t ≥ 0, with state space S = {e_0, ..., e_N} and Q-matrix generators A_t. We shall make the following assumptions.

ASSUMPTIONS 2.1. (i) For all 0 ≤ i,j ≤ N and t ≥ 0

(2.1)  |a_ij(t)| ≤ B'

for some bound B'; write B = B' + 1.

(ii) For all 0 ≤ i,j ≤ N and t ≥ 0, a_ij(t) > 0 if i ≠ j and, (because A_t is a Q-matrix),

(2.2)  a_ii(t) = − Σ_{j≠i} a_ji(t).

The fundamental transition matrix associated with A will be denoted by Φ(t,s), so with I the (N+1)×(N+1) identity matrix,

(2.3)  dΦ(t,s)/dt = A_t Φ(t,s),  Φ(s,s) = I

and

(2.4)  dΦ(t,s)/ds = −Φ(t,s) A_s,  Φ(t,t) = I.

(If A_t is constant, Φ(t,s) = exp A(t−s).)
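As a numerical illustration (not part of the original paper), the following Python sketch takes a made-up constant 3×3 Q-matrix A (columns summing to zero, as in (2.2)), computes Φ(t,s) = exp A(t−s) with a plain truncated power series, and checks the fact recorded in Bounds 2.2 below that the columns of Φ are probability distributions.

```python
import math

# Made-up constant Q-matrix A with every column summing to zero; for a
# constant generator the fundamental matrix is Phi(t,s) = exp A(t-s).
A = [[-3.0, 1.0, 2.0],
     [ 2.0, -3.0, 1.0],
     [ 1.0, 2.0, -3.0]]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, t, terms=60):
    """Truncated power series for exp(A t); fine for this small example."""
    n = len(A)
    term = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    out = [row[:] for row in term]
    for m in range(1, terms):
        term = mat_mul(term, [[a * t / m for a in row] for row in A])
        out = [[out[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return out

Phi = expm(A, 0.7)  # Phi(t, s) with t - s = 0.7
for j in range(3):
    col = [Phi[i][j] for i in range(3)]
    assert all(c > 0 for c in col)      # entries are transition probabilities
    assert abs(sum(col) - 1.0) < 1e-9   # each column sums to one
print("columns of Phi(t,s) are probability distributions")
```

Since the columns of A sum to zero, 1*Φ(t,s) = 1*, which is exactly why |Φ(t,s)| ≤ 1 in the norm of Bounds 2.2.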

BOUNDS 2.2. For a matrix C = (c_ij) consider the norm |C| = max_{i,j} |c_ij|. Then, for all t, |A_t| ≤ B. The columns of Φ are probability distributions, so |Φ(t,s)| ≤ 1 for all t, s.

Consider the process in state x ∈ S at time s and write X_{s,t}(x) for its state at time t ≥ s. Then E[X_{s,t}(x)] = E_{s,x}[X_t] = Φ(t,s)x. Write F_t^s for the right continuous complete filtration generated by σ{X_r : s ≤ r ≤ t}, and F_t^0 = F_t.

Page 169: Seminar on Stochastic Processes, 1990

Martingales for Finite Markov Chains 163

LEMMA 2.3. The process M_t = X_t − X_0 − ∫_0^t A_r X_{r−} dr is an {F_t} martingale.

Proof. Suppose 0 ≤ s ≤ t. Then

E[M_t − M_s | F_s] = E[X_t − X_s − ∫_s^t A_r X_{r−} dr | F_s]

= E[X_t − X_s − ∫_s^t A_r X_r dr | X_s]

= E_{s,X_s}[X_t] − X_s − ∫_s^t A_r E_{s,X_s}[X_r] dr

= Φ(t,s)X_s − X_s − ∫_s^t A_r Φ(r,s) X_s dr = 0, by (2.3).

Therefore,

X_t = X_0 + ∫_0^t A_r X_r dr + M_t = X_0 + ∫_0^t A_r X_{r−} dr + M_t,

where M is an {F_t} martingale.

NOTATION 2.4. If x = (x^0, x^1, ..., x^N)* ∈ R^{N+1} then diag x is the (N+1)×(N+1) diagonal matrix with x^0, ..., x^N on its diagonal and zeros elsewhere.

LEMMA 2.5.

⟨M, M⟩_t = ∫_0^t (diag(A_r X_{r−}) − (diag X_{r−})A_r* − A_r(diag X_{r−})) dr.

Proof. Recall X_t ∈ S is one of the unit vectors e_i. Therefore,

(2.5)  X_t ⊗ X_t = diag X_t.

Now by the differentiation rule

X_t ⊗ X_t = X_0 ⊗ X_0 + ∫_0^t X_{r−} ⊗ (A_r X_{r−}) dr + ∫_0^t X_{r−} ⊗ dM_r + ∫_0^t (A_r X_{r−}) ⊗ X_{r−} dr + ∫_0^t dM_r ⊗ X_{r−} + ⟨M, M⟩_t + N_t

Page 170: Seminar on Stochastic Processes, 1990


where N_t is the F_t martingale [M, M]_t − ⟨M, M⟩_t. However, a simple calculation shows

X_{r−} ⊗ (A_r X_{r−}) = (diag X_{r−}) A_r*

and

(A_r X_{r−}) ⊗ X_{r−} = A_r (diag X_{r−}).

Therefore,

(2.6)  X_t ⊗ X_t = X_0 ⊗ X_0 + ∫_0^t (diag X_{r−}) A_r* dr + ∫_0^t A_r (diag X_{r−}) dr + ⟨M, M⟩_t + martingale.

Also, from (2.5),

(2.7)  X_t ⊗ X_t = diag X_t = diag X_0 + diag ∫_0^t A_r X_{r−} dr + diag M_t.

The semimartingale decompositions (2.6) and (2.7) must be the same, so equating the predictable terms yields the claim. □

We next note the following representation result:

LEMMA 2.6.

(2.8)  X_t = Φ(t,0) X_0 + ∫_0^t Φ(t,r) dM_r.

Proof. This result follows immediately by 'variation of constants'. □

REMARKS 2.7. A function of X_t ∈ S can be represented by a vector

f(t) = (f_0(t), ..., f_N(t))* ∈ R^{N+1}

so that f(t,X_t) = f(t)*X_t = ⟨f(t), X_t⟩, where ⟨·,·⟩ denotes the inner product in R^{N+1}.

We, therefore, have the following differentiation rule and representation result:

Page 171: Seminar on Stochastic Processes, 1990


LEMMA 2.8. Suppose the components of f(t) are differentiable in t. Then

(2.9)  f(t,X_t) = f(0,X_0) + ∫_0^t ⟨f'(r), X_r⟩ dr + ∫_0^t ⟨f(r), A_r X_{r−}⟩ dr + ∫_0^t ⟨f(r), dM_r⟩.

Here, ∫_0^t ⟨f(r), dM_r⟩ is an F_t-martingale. Also,

(2.10)

This gives the martingale representation of f(t,X_t).

REMARK 2.9. With an obvious abuse of notation, if the jump times of the chain are T_1(ω), T_2(ω), ..., we can write down a 'random measure' decomposition of X_t from (1.2), because Σ_i (e_i − X_{r−})(e_i* A_{r−} X_{r−}) = A_{r−} X_{r−}. Here, δ_{T_k(ω)}(dr) is the unit mass at t = T_k(ω) and, with X_{T_k(ω)} = e_{i_k(ω)}, δ_{i_k(ω)}(i) is 1 if i = i_k(ω) and 0 otherwise. This representation would provide another means of calculating ⟨M, M⟩_t.

3. Shift Operators.

The formulae of Section 2, particularly the martingale representations (2.8) and (2.10), provide basic information about the Markov process X. However, if the 'size' of the jumps is considered, some other expressions, including a homogeneous chaos expansion, were obtained recently by Biane [1]. We wish to indicate how the results of Biane relate to the above expressions. First we introduce some notation.

NOTATION 3.1. Write i ⊕ j for addition mod (N+1). For X_s ∈ S = {e_0, e_1, ..., e_N}, say X_s = e_i, and k = 1, ..., N, write

X_s^k = e_{i⊕k}.

Page 172: Seminar on Stochastic Processes, 1990


That is, X_s → X_s^k corresponds to a cyclic jump of size k in the index of the unit

vector corresponding to the state.

Suppose X_{s−} = e_i and X_{s−}^k = e_j, where j = i ⊕ k; then clearly

(3.1)

We now wish to introduce some subsidiary matrices associated with A_s = (a_ij(s)). These can best be explained by first considering the 3×3 case. Suppose

A = ( a_00  a_01  a_02
      a_10  a_11  a_12
      a_20  a_21  a_22 ).

Then

A^1 := ( −a_10     0     a_02          A^2 := ( −a_20   a_01     0
          a_10  −a_21     0                       0    −a_01   a_12
           0     a_21  −a_02 ),                  a_20     0   −a_12 ).

Note that if A is a Q-matrix a_{0i} + a_{1i} + a_{2i} = 0, so A^1 + A^2 = A.
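Purely as an illustration (not from the paper), the sketch below constructs the subsidiary matrices A^k of a made-up 3×3 Q-matrix by the cyclic-subdiagonal rule shown above, and checks both that A^1 + A^2 = A and that each A^k is itself a Q-matrix.

```python
# a[j][i] is the entry a_{ji}; columns of the made-up Q-matrix sum to zero.
N = 2  # states are 0, ..., N

A = [[-3.0, 1.0, 2.0],
     [ 2.0, -3.0, 1.0],
     [ 1.0, 2.0, -3.0]]

def subdiagonal_part(A, k):
    n = len(A)
    Ak = [[0.0] * n for _ in range(n)]
    for i in range(n):
        j = (i + k) % n           # cyclic jump of 'size' k: e_i -> e_j
        Ak[j][i] = A[j][i]        # the k-th subdiagonal, continued cyclically
        Ak[i][i] = -A[j][i]       # minus the column entry, on the diagonal
    return Ak

parts = [subdiagonal_part(A, k) for k in range(1, N + 1)]
for i in range(N + 1):
    for j in range(N + 1):
        # sum over k of A^k recovers A
        assert abs(sum(p[j][i] for p in parts) - A[j][i]) < 1e-12
    # each A^k is itself a Q-matrix: its columns sum to zero
    assert all(abs(sum(p[j][i] for j in range(N + 1))) < 1e-12 for p in parts)
print("A^1 + A^2 == A, and each A^k is a Q-matrix")
```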

In general, if A_s = (a_ij(s)) is an (N+1)×(N+1) Q-matrix, A_s^k is obtained by forming a matrix from the k-th subdiagonal (continued as a superdiagonal), with the negative of the column entries on the diagonal and zeros elsewhere. By construction, A^k is a Q-matrix, and it is clearly related to those jumps of 'size' k. As above,

(3.2)  Σ_{k=1}^N A_s^k = A_s.

Also,

(3.3)

so

(3.4)  Σ_{k=1}^N ((X_{s−}^k)* A_s X_{s−})(X_{s−}^k − X_{s−}) = A_s X_{s−}.

We also wish to introduce matrices Ā^k, k ≠ 0, whose off-diagonal entries are the (positive) square roots of those of A^k, and whose diagonal entries are the

Page 173: Seminar on Stochastic Processes, 1990

negative of that square root in the same column. That is, in the (3×3) case above:

Ā^1 := ( −√a_10      0      √a_02          Ā^2 := ( −√a_20    √a_01      0
           √a_10   −√a_21     0                        0     −√a_01    √a_12
             0      √a_21  −√a_02 ),                 √a_20      0     −√a_12 ).

For k = 1, ..., N write

Ā_s^k := ((X_{s−}^k)* A_s X_{s−})^{−1/2} (X_{s−}^k)*,

so Ā_s^k is a predictable process.

DEFINITION 3.2. In our notation the martingales M^k introduced by Biane [1] are, for k = 1,...,N,

(3.5)  M_t^k = Σ_{0<s≤t} ((X_{s−}^k)* A_s X_{s−})^{−1/2} I(X_s = X_{s−}^k) − ∫_0^t ((X_{s−}^k)* A_s X_{s−})^{1/2} ds.

LEMMA 3.3. For k = 1,...,N,

M_t^k = ∫_0^t Ā_s^k · dM_s.

Proof. First note

M_t^k = ∫_0^t Ā_s^k · dX_s − ∫_0^t Ā_s^k · A_s X_{s−} ds

= ∫_0^t ((X_{s−}^k)* A_s X_{s−})^{−1/2} (X_{s−}^k)* · dX_s − ∫_0^t ((X_{s−}^k)* A_s X_{s−})^{−1/2} ((X_{s−}^k)* A_s X_{s−}) ds,

and the result follows from (3.6).

Page 174: Seminar on Stochastic Processes, 1990


LEMMA 3.4. For k = 1,...,N, ⟨M^k, M^k⟩_t = t.

Proof. M_t^k = ∫_0^t Ā_s^k · dM_s, so

⟨M^k, M^k⟩_t = ∫_0^t Ā_s^k d⟨M, M⟩_s (Ā_s^k)*

= ∫_0^t ((X_{s−}^k)* A_s X_{s−})^{−1/2} (X_{s−}^k)* · (diag(A_s X_{s−}) − (diag X_{s−})A_s* − A_s(diag X_{s−})) · (X_{s−}^k)((X_{s−}^k)* A_s X_{s−})^{−1/2} ds.

Now for k ≠ 0:

(X_{s−}^k)* (diag X_{s−}) = 0 and (diag X_{s−}) X_{s−}^k = 0,

and

(X_{s−}^k)* · (diag(A_s X_{s−})) · (X_{s−}^k) = (X_{s−}^k)* A_s X_{s−}.

Therefore, ⟨M^k, M^k⟩_t = ∫_0^t ds = t. □

REMARKS 3.5. For k ≠ ℓ, M^k and M^ℓ have no common jumps, so [M^k, M^ℓ]_t = 0 and ⟨M^k, M^ℓ⟩_t = 0. Therefore, M^1, ..., M^N are a family of orthogonal martingales, each of which has predictable variation t.
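As a Monte Carlo sanity check (not part of the original note; the constant Q-matrix and all names below are made up), the following sketch simulates a time-homogeneous three-state chain, builds one path of M^1 directly from (3.5), and confirms empirically that the sample mean is near 0 and the sample second moment near t, as Lemma 3.4 predicts.

```python
import math
import random

# a[j][i] is the constant rate a_{ji} of a jump i -> j; columns sum to zero.
NSTATES = 3
a = [[-3.0, 1.0, 2.0],
     [ 2.0, -3.0, 1.0],
     [ 1.0, 2.0, -3.0]]

def simulate_Mk(k, T, rng):
    """One path of M^k from (3.5), started at state 0, up to time T."""
    t, i, Mk = 0.0, 0, 0.0
    while True:
        total = -a[i][i]                     # total jump rate out of state i
        wait = rng.expovariate(total)
        # compensator: minus the integral of (rate of a size-k jump)^{1/2}
        Mk -= math.sqrt(a[(i + k) % NSTATES][i]) * min(wait, T - t)
        if t + wait >= T:
            return Mk
        t += wait
        u, acc, j = rng.random() * total, 0.0, None
        for cand in range(NSTATES):          # pick target j with prob a_{ji}/total
            if cand != i:
                acc += a[cand][i]
                if u <= acc:
                    j = cand
                    break
        if j is None:
            j = (i + 1) % NSTATES            # guard against rounding
        if (j - i) % NSTATES == k:           # a jump of 'size' k occurred
            Mk += 1.0 / math.sqrt(a[j][i])
        i = j

rng = random.Random(0)
samples = [simulate_Mk(1, 1.0, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
second_moment = sum(x * x for x in samples) / len(samples)
print(mean, second_moment)  # should be close to 0 and to t = 1
```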

Having expressed M^k in terms of M we now wish to express M in terms of the M^k.

THEOREM 3.6. M_t = Σ_{k=1}^N ∫_0^t Ā_s^k X_{s−} dM_s^k, so the M^k form a basis.

Proof. From (3.6) first note that

(3.7)  dX_s = Σ_{k=1}^N (X_{s−}^k − X_{s−})(X_{s−}^k)* · dX_s.

Therefore,

(3.8)  X_t − X_0 = Σ_{k=1}^N ∫_0^t (X_{s−}^k − X_{s−})((X_{s−}^k)* A_s X_{s−}) ds + Σ_{k=1}^N ∫_0^t (X_{s−}^k − X_{s−})((X_{s−}^k)* A_s X_{s−})^{1/2} dM_s^k.

Page 175: Seminar on Stochastic Processes, 1990

From (3.3) and (3.4) this equals

(3.9)  ∫_0^t A_s X_{s−} ds + Σ_{k=1}^N ∫_0^t Ā_s^k X_{s−} dM_s^k.

Comparing (3.7) and (3.9) we see

(3.10)  M_t = Σ_{k=1}^N ∫_0^t Ā_s^k X_{s−} dM_s^k.  □

4. Discrete Derivatives for Different Jump Sizes.

Consider a function f on S = {e_i}. For simplicity we suppose f is constant in time. Then, as noted in Section 2, f is represented by a vector f = (f_0, ..., f_N)*, and from (3.9), f(X_t) = ⟨f, X_t⟩, and this is

(4.2)  = ⟨f, X_0⟩ + ∫_0^t ⟨A_r* f, X_{r−}⟩ dr + Σ_{k=1}^N ∫_0^t ⟨(Ā_r^k)* f, X_{r−}⟩ dM_r^k.

We now re-establish the 'square field' formula of Biane [1] by calculating f(X_t)² in two ways.

LEMMA 4.1. A_r* f² − 2f · A_r* f = Σ_{k=1}^N ((Ā_r^k)* f)².

Proof. Function multiplication is pointwise in each coordinate, so f² corresponds to the vector (f_0², ..., f_N²)*, and

(4.3)  f²(X_t) = ⟨f², X_0⟩ + ∫_0^t ⟨A_r* f², X_{r−}⟩ dr + Σ_{k=1}^N ∫_0^t ⟨(Ā_r^k)* f², X_{r−}⟩ dM_r^k = (f(X_t))².

Page 176: Seminar on Stochastic Processes, 1990


Using the differentiation rule this also equals

(4.4)  f(X_t)² = f(X_0)² + 2 ∫_0^t f(X_{r−}) df(X_r) + [f(X), f(X)]_t,

where

2 ∫_0^t f(X_{r−}) df(X_r) = 2 ∫_0^t ⟨f, X_{r−}⟩⟨A_r* f, X_{r−}⟩ dr + 2 Σ_{k=1}^N ∫_0^t ⟨f, X_{r−}⟩⟨(Ā_r^k)* f, X_{r−}⟩ dM_r^k.

Now

[f(X), f(X)]_t = Σ_{0≤r≤t} Δf(X_r) Δf(X_r)

= Σ_{k=1}^N Σ_{0≤r≤t} ⟨(Ā_r^k)* f, X_{r−}⟩² (ΔM_r^k)²

= Σ_{k=1}^N ∫_0^t ⟨(Ā_r^k)* f, X_{r−}⟩² ((X_{r−}^k)* A_r X_{r−})^{−1/2} dM_r^k + Σ_{k=1}^N ∫_0^t ⟨(Ā_r^k)* f, X_{r−}⟩² dr,

from (3.5).

Substituting in (4.4),

(4.5)  f(X_t)² = f(X_0)² + 2 ∫_0^t ⟨f, X_{r−}⟩⟨A_r* f, X_{r−}⟩ dr + 2 Σ_{k=1}^N ∫_0^t ⟨f, X_{r−}⟩⟨(Ā_r^k)* f, X_{r−}⟩ dM_r^k + Σ_{k=1}^N ∫_0^t ⟨(Ā_r^k)* f, X_{r−}⟩² ((X_{r−}^k)* A_r X_{r−})^{−1/2} dM_r^k + Σ_{k=1}^N ∫_0^t ⟨(Ā_r^k)* f, X_{r−}⟩² dr.

The special semimartingales (4.3) and (4.5) are equal, so equating the bounded variation terms gives

⟨A_r* f², X_{r−}⟩ = 2⟨f, X_{r−}⟩⟨A_r* f, X_{r−}⟩ + Σ_{k=1}^N ⟨(Ā_r^k)* f, X_{r−}⟩².

Page 177: Seminar on Stochastic Processes, 1990


That is, as functions on S,

Σ_{k=1}^N ((Ā_r^k)* f)² = A_r* f² − 2f · A_r* f.  □

(Ā^k)* corresponds to a discrete derivative of 'amount', or in 'direction', k. However, the algebra suggests that ((Ā^k)*)² should be related to (A^k)*. A more specific relation is now obtained.

LEMMA 4.2. For k = 1,...,N,

((Ā_r^k)* f)² = (A_r^k)* f² − 2f · ((A_r^k)* f).

Proof. From the form of A^k and Ā^k, for any f ∈ R^{N+1},

(A^k)* f = (a_{k0}(−f_0 + f_k), a_{k⊕1,1}(−f_1 + f_{k⊕1}), ..., a_{k⊕N,N}(−f_N + f_{k⊕N})),

(A^k)* f² = (a_{k0}(−f_0² + f_k²), a_{k⊕1,1}(−f_1² + f_{k⊕1}²), ..., a_{k⊕N,N}(−f_N² + f_{k⊕N}²)),

(Ā^k)* f = (√a_{k0}(−f_0 + f_k), √a_{k⊕1,1}(−f_1 + f_{k⊕1}), ..., √a_{k⊕N,N}(−f_N + f_{k⊕N})).

Therefore, as function multiplication is pointwise, that is coordinatewise:

((Ā^k)* f)² = (a_{k0}(f_0² − 2f_0 f_k + f_k²), ..., a_{k⊕N,N}(f_N² − 2f_N f_{k⊕N} + f_{k⊕N}²)),

f · ((A^k)* f) = (a_{k0}(−f_0² + f_0 f_k), ..., a_{k⊕N,N}(−f_N² + f_N f_{k⊕N})).

Operating coordinatewise, for example,

(−f_j² + f_{k⊕j}²) − 2(−f_j² + f_j f_{k⊕j}) = f_j² − 2f_j f_{k⊕j} + f_{k⊕j}²,

and the result follows. □
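The identities of Lemmas 4.1 and 4.2 are easy to check numerically; the following sketch (with a made-up constant Q-matrix and a made-up vector f, not taken from the paper) verifies both coordinatewise.

```python
import math

# a[j][i] = a_{ji}; columns of the made-up Q-matrix sum to zero.
a = [[-3.0, 1.0, 2.0],
     [ 2.0, -3.0, 1.0],
     [ 1.0, 2.0, -3.0]]
f = [0.5, -1.0, 2.0]   # a function on S, as a vector
n = 3

def Astar(M, g):       # ((M)^* g)_i = sum_j M[j][i] g_j
    return [sum(M[j][i] * g[j] for j in range(n)) for i in range(n)]

def Ak(k):             # the subsidiary matrix A^k of Section 3
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        j = (i + k) % n
        M[j][i] = a[j][i]
        M[i][i] = -a[j][i]
    return M

f2 = [x * x for x in f]
rhs = [Astar(a, f2)[i] - 2 * f[i] * Astar(a, f)[i] for i in range(n)]
lhs = [0.0] * n
for k in range(1, n):
    # ((Abar^k)^* f)_i = sqrt(a_{ji}) (f_j - f_i) with j = i (+) k
    dk = [math.sqrt(a[(i + k) % n][i]) * (f[(i + k) % n] - f[i]) for i in range(n)]
    lhs = [lhs[i] + dk[i] ** 2 for i in range(n)]
    # Lemma 4.2: ((Abar^k)^* f)^2 = (A^k)^* f^2 - 2 f . ((A^k)^* f), coordinatewise
    Akm = Ak(k)
    for i in range(n):
        assert abs(dk[i] ** 2 - (Astar(Akm, f2)[i] - 2 * f[i] * Astar(Akm, f)[i])) < 1e-12
# Lemma 4.1: sum_k ((Abar^k)^* f)^2 = A^* f^2 - 2 f . A^* f
assert all(abs(lhs[i] - rhs[i]) < 1e-12 for i in range(n))
print("Lemmas 4.1 and 4.2 verified coordinatewise")
```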

Finally, we note that substituting (3.10) in (2.9) we have

(4.6)  X_t = Φ(t,0)(X_0 + Σ_{k=1}^N ∫_0^t Φ(r,0)^{−1} Ā_r^k X_{r−} dM_r^k).

Page 178: Seminar on Stochastic Processes, 1990


Now X_{r−} is a.s. equal to X_r, which by (4.6) equals

Φ(r,0)(X_0 + Σ_{k=1}^N ∫_0^r Φ(s,0)^{−1} Ā_s^k X_{s−} dM_s^k).

Substituting in (4.6) we have

X_t = Φ(t,0)X_0 + Σ_{k=1}^N ∫_0^t Φ(t,r) Ā_r^k Φ(r,0) X_0 dM_r^k + ···.

Iterating this process we obtain the homogeneous chaos expansions of Biane [1] (see also Elliott and Kohlmann [3]), in terms of the non-homogeneous transition matrices Φ and the matrices Ā^k.

REFERENCES

[1] P. Biane, Chaotic representation for finite Markov chains, Stochastics and Stoch. Reports 30 (1990), 61-68.

[2] R.J. Elliott, Smoothing for a finite state Markov process, Springer Lecture Notes in Control and Info. Sciences, Vol. 69 (1985), 199-206.

[3] R.J. Elliott and M. Kohlmann, Integration by parts, homogeneous chaos expansions and smooth densities, Ann. of Prob. 17 (1989), 194-207.

[4] R.S. Liptser and A.N. Shiryayev, "Statistics of Random Processes, Vol. 1", Springer-Verlag, Berlin, Heidelberg, New York, 1977.

Robert J. Elliott, Department of Statistics and Applied Probability, University of Alberta, Edmonton, Alberta, Canada T6G 2G1.

Page 179: Seminar on Stochastic Processes, 1990

Equivalence and Perpendicularity of Local Field Gaussian Measures

STEVEN N. EVANS*

1. Introduction.

One way of thinking about Gaussian measures is that they are the class of probability measures that naturally arise when we seek measures with properties that are intimately linked to the linearity and orthogonality structure of the spaces on

which the measures are defined.

There are fields other than R or C for which there is a well-developed and interesting theory of orthogonality for the vector spaces over them. These fields are

the so-called local fields. In Evans (1989) we worked from the above perspective

and defined a suitable concept of a "Gaussian" measure on vector spaces over local fields. In many particulars the theory of such measures resembles the Euclidean prototype, but there are a number of interesting departures.

Here we continue this investigation with a consideration of various questions concerning equivalence, absolute continuity and perpendicularity of local field Gaussian measures.

We begin in §2 with some preliminaries regarding both the general theory of local fields and the particular properties of local field Gaussian measures.

In §3 we observe that, unlike the Gaussian case, one local field Gaussian measure can be absolutely continuous with respect to another without the two measures being equivalent, and the only time two such measures will be equivalent is when they are equal.

Theorem 4.1 is a "Cameron-Martin"-type result on the effect of translating

local field Gaussian measures. As a consequence, we show in Theorems 4.2 and

4.3 that the local field Gaussian measures on a Banach space are precisely the nor­

malised Haar measures on compact additive subgroups which satisfy an extra "con­vexity" condition. This allows us to conclude that there is a "ftat" local field

Gaussian measure on a Banach space if and only if the space is finite dimensional

* Research supported in part by an NSF grant

Page 180: Seminar on Stochastic Processes, 1990

174 S.N. Evans

(see Corollary 4.4).

In Theorem 5.1 we examine the effect of contracting or dilating a local field

Gaussian measure, and observe that we get qualitatively different behaviour depending on whether the measure is "finite dimensional" or "infinite dimensional". In the Banach space case we show that the "dimension" of the measure shows up in the mass assigned to small balls around zero (see Theorem 5.2).

2. Preliminaries

We begin this section with a brief overview of some of the theory of local fields. We refer the reader to Taibleson (1975) or Schikhof (1984) for fuller

accounts. Later we also recall some of the salient details from Evans (1989)

regarding local field Gaussian measures.

Let K be a locally compact, non-discrete, topological field. If K is connected, then K is either R or C. If K is disconnected, then K is totally disconnected and

we say that K is a local field.

From now on, we let K be a fixed local field. There is a distinguished real-valued mapping on K which we denote by x ↦ |x| and call the valuation map. The set of values taken by the valuation is the set {q^k : k ∈ Z} ∪ {0}, where q = p^c for some prime p and positive integer c. The valuation has the following properties:

|x| = 0 ⇔ x = 0;

|xy| = |x||y|;

|x + y| ≤ |x| ∨ |y|.

The last property is known as the ultrametric inequality and implies that if |x| ≠ |y| then |x + y| = |x| ∨ |y| (the so-called isosceles triangle property). The mapping (x, y) ↦ |x − y| on K × K is a metric which gives the topology of K.
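As a concrete illustration (our own assumption, not from the paper: we take K = Q_5, the field of 5-adic numbers, and restrict the valuation to the rationals), the sketch below implements |x| = 5^{−v}, with v the exponent of 5 in x, and checks the three valuation properties together with the isosceles triangle property.

```python
from fractions import Fraction

P = 5  # q = p = 5, i.e. K = Q_5, purely as an example

def val(x: Fraction) -> Fraction:
    """The 5-adic valuation map |x| = 5^{-v}, v the exponent of 5 in x."""
    if x == 0:
        return Fraction(0)
    num, den, v = x.numerator, x.denominator, 0
    while num % P == 0:
        num //= P
        v += 1
    while den % P == 0:
        den //= P
        v -= 1
    return Fraction(1, P ** v) if v >= 0 else Fraction(P ** (-v))

x, y = Fraction(50), Fraction(3, 5)
assert val(x * y) == val(x) * val(y)        # |xy| = |x||y|
assert val(x + y) <= max(val(x), val(y))    # ultrametric inequality
# isosceles triangle property: |x| != |y| forces |x + y| = |x| v |y|
assert val(x) != val(y) and val(x + y) == max(val(x), val(y))
print("valuation properties hold on the sample")
```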

There is a unique measure μ on K for which

μ(x + A) = μ(A),  x ∈ K;

μ(xA) = |x| μ(A),  x ∈ K;

μ({x ∈ K: |x| ≤ 1}) = 1.

The measure μ is Haar measure, suitably normalised.

There is a character χ on the additive group of K with the properties

χ({x: |x| ≤ 1}) = {1}

and

χ({x: |x| ≤ q}) ≠ {1}.

Page 181: Seminar on Stochastic Processes, 1990

Local Field Gaussian Measures 175

For N = 1,2,..., the correspondence λ ↦ χ_λ, where χ_λ(x) = χ(λ·x), establishes an isomorphism between the additive group of K^N and its dual.

Let E be a vector space over K. A norm on E is a map ‖·‖_E: E → [0,∞[ such that

‖x‖_E = 0 ⇔ x = 0;

‖λx‖_E = |λ| ‖x‖_E,  λ ∈ K;

‖x + y‖_E ≤ ‖x‖_E ∨ ‖y‖_E.

The last property is also called the ultrametric inequality and implies the obvious generalisation of the isosceles triangle property. We call the pair (E, ‖·‖_E) a normed vector space (over K).

If E is complete in the metric (x,y) ↦ ‖x − y‖_E we say that E is a Banach space. For N = 1,2,... the space (K^N, |·|), where

|(x_1, ..., x_N)| = |x_1| ∨ ··· ∨ |x_N|,

is a Banach space. More generally, if (Ω, F, P) is a probability space and we let L^∞ be the set of measurable functions f: Ω → K such that ess sup{|f(ω)|: ω ∈ Ω} < ∞, then L^∞ becomes a Banach space when we equip it with the norm ‖·‖_∞ defined by ‖f‖_∞ = ess sup{|f(ω)|: ω ∈ Ω} (we adopt the usual convention that we regard two functions to be equal if they are equal almost surely).

A subset C of a normed vector space E is said to be orthogonal if for every finite subset {x_1, ..., x_N} ⊂ C and each λ_1, ..., λ_N ∈ K, we have

‖Σ_{i=1}^N λ_i x_i‖_E = ∨_{i=1}^N |λ_i| ‖x_i‖_E.

If, moreover, ‖x‖_E = 1 for all x ∈ C, then C is said to be orthonormal.

We now recall from Evans (1989) our general definition for the local field analogue of Gaussian measures. Let E be a measurable vector space (over K). Let E_1 and E_2 be two copies of E and write X_i: E_1 × E_2 → E_i, i = 1,2, for the two coordinate maps. A measure P on E is said to be K-Gaussian if for every pair of orthonormal vectors (α_{1,1}, α_{1,2}), (α_{2,1}, α_{2,2}) ∈ K² the law of (α_{1,1}X_1 + α_{1,2}X_2, α_{2,1}X_1 + α_{2,2}X_2) under P × P is also P × P.

Note: in future when we consider measures on a measurable vector space E we will always reserve the notation X for the identity map on E.

The K-Gaussian measures on E = K are those measures P such that X ∈ L^∞(P) and ∫ χ(λx) P(dx) = Φ(‖X‖_∞ |λ|), where we let Φ denote the indicator function of the interval [0,1]. Thus, either X = 0 or

P(dx) = ‖X‖_∞^{−1} Φ(‖X‖_∞^{−1} |x|) μ(dx).

Page 182: Seminar on Stochastic Processes, 1990


The theory of K-Gaussian measures is particularly tractable when B, the σ-field on the measurable vector space E, is the σ-field generated by some collection, F, of linear functionals on E. In this case we say that the triple (E, F, B) satisfies the hypothesis (*) of Evans (1989). One example is the case when E is a separable Banach space, F = E* (the dual of E) and B is the Borel σ-field of E.

If (E, F, B) satisfies the hypothesis (*) then a measure P on E is K-Gaussian if and only if T(X) is a K-valued, K-Gaussian random variable for all T ∈ span F. Moreover, P is then uniquely determined by the laws of the individual random variables T(X), T ∈ span F, and hence by the set of numbers ‖T(X)‖_∞, T ∈ span F.

3. Equivalence

A well-known feature of the Gaussian theory is that, in a variety of general settings, two Gaussian measures on the same space are either equivalent or perpendicular. We refer the reader to Kuo (1975) for a discussion of such results and some relevant references. An analogous theorem certainly doesn't hold for the K-Gaussian theory. For example, suppose that P and Q are two K-Gaussian measures on E = K such that ‖X‖_∞ = 1 in L^∞(P) and ‖X‖_∞ = q in L^∞(Q). Then P ≪ Q but P and Q are not equivalent. As the following theorem shows, equivalence is a much more restrictive condition in the K-Gaussian case.

Theorem 3.1. Suppose that (E, F, B) satisfies the hypothesis (*). Let P and Q be two K-Gaussian measures on E. Then P ~ Q if and only if P = Q.

Proof. Suppose that P ~ Q. Then ‖T(X)‖_∞ is the same in L^∞(P) and L^∞(Q) for all T ∈ span F. As T(X) is K-valued, K-Gaussian for all such T, we see that the distribution of T(X) under P is the same as that under Q, and hence P = Q.

The converse is obvious. □

4. Translation

Suppose that Z is a real-valued, centred, Gaussian process indexed by some set I. Let H be the corresponding reproducing kernel Hilbert space. If z ∈ R^I then it is known that the laws of Z and z + Z are either equivalent or perpendicular depending upon whether or not z ∈ H (see, for example, Feldman (1958) or Hajek (1959)). In the K-Gaussian case we have the following analogue.

Theorem 4.1. Suppose that (E, F, B) satisfies the hypothesis (*). Let P be a K-Gaussian measure on E. Set

S = {x ∈ E: |T(x)| ≤ ‖T(X)‖_∞ for all T ∈ span F}.

If x ∈ S then the law of x + X under P is P itself. Otherwise, the law of x + X under P is perpendicular to P.

Page 183: Seminar on Stochastic Processes, 1990


Proof. Note that if Y is a K-valued, K-Gaussian random variable and y ∈ K is such that |y| ≤ ‖Y‖_∞ then, since the law of Y is Haar measure on the subgroup {z: |z| ≤ ‖Y‖_∞}, we see that the law of y + Y is that of Y. Hence, if x ∈ S we have that the law of T(x + X) = T(x) + T(X) under P is that of T(X) under P for all T ∈ span F. Thus the law of x + X under P is that of X under P.

If x ∉ S then there exists T ∈ span F such that |T(x)| > ‖T(X)‖_∞. Then, by the isosceles triangle property, |T(x + X)| = |T(x)| P-almost surely and hence the law of x + X is perpendicular to P. □

We remark that S determines P uniquely. Also, S is an additive subgroup of E which is closed under multiplication by scalars α ∈ K such that |α| ≤ 1. If we combine parts (i) and (ii) of the following result with Theorem 4.1 we see, when E is a separable Banach space, that the group S supports P, S is compact and P is just normalised Haar measure on S. Part (iii) extends Theorem 6.1 of Evans (1989), where it was shown that if P is a K-Gaussian measure on a measurable vector space (E, B) and M is a measurable subspace of E then either P(M) = 0 or 1. Here we see that if E is a separable Banach space then it is possible to give an analytic condition which determines which branch of the dichotomy holds. No comparable condition on the covariance structure seems to be known for the various Gaussian analogues of this zero-one law.

Theorem 4.2. Suppose that (E, ‖·‖_E) is a separable Banach space with dual E* and P is a K-Gaussian measure on E. Set

S = {x ∈ E: |T(x)| ≤ ‖T(X)‖_∞ for all T ∈ E*}.

(i) The group S is the closed support of P.

(ii) The group S is compact.

(iii) If M is a measurable vector subspace of E then P(M) is either 1 or 0, depending on whether or not S ⊂ M.

Proof. (i) It is clear that S is closed. Let {T_i}_{i=1}^∞ be a countable dense subset of E*. Then S = ∩_{i=1}^∞ {x: |T_i(x)| ≤ ‖T_i(X)‖_∞}, and so P(S) = 1. Conversely, suppose that x ∈ S and U is an open neighbourhood of x. Let {x_i}_{i=1}^∞ be a countable dense subset of S. Then ∪_{i=1}^∞ [x_i + (U − x)] covers S and hence P(x_i + (U − x)) > 0 for at least one i; but, by Theorem 4.1, P(U) = P((x_i − x) + U), since x_i − x ∈ S.

(ii) As E is complete and separable, all probability measures on E are tight and so there exists a compact set C ⊂ S such that P(C) > 0. Let G be the smallest

Page 184: Seminar on Stochastic Processes, 1990

closed additive group containing C. We claim that G is also compact. Given ε > 0, there exists a finite set {x_1^ε, ..., x_{n(ε)}^ε} ⊂ C such that if x ∈ C then ‖x − x_j^ε‖_E < ε for some x_j^ε. The smallest closed group containing {x_1^ε, ..., x_{n(ε)}^ε} is G_ε = (D × x_1^ε) + ··· + (D × x_{n(ε)}^ε), where D is the ring of integers in K, that is, D = {k ∈ K: |k| ≤ 1}. Clearly, G_ε is compact. Moreover, from the ultrametric inequality it is clear that if x ∈ G, then there exists y ∈ G_ε such that ‖x − y‖_E < ε. Thus G is totally bounded and hence compact.

Part (ii) will now follow if G has only finitely many distinct cosets in S; but this must be the case, since otherwise we could find infinitely many disjoint cosets G_1, G_2, ... for which, by Theorem 4.1, P(G_i) = P(G) > 0.

(iii) From Theorem 6.1 of Evans (1989) we know that P(M) is either 0 or 1. If S ⊂ M then it follows from (i) that P(M) = 1. Conversely, suppose that P(M) = 1. If there exists x ∈ S such that x ∉ M then M and x + M are disjoint; but this is impossible, since P(x + M) = P(M) = 1 by Theorem 4.1. □

The converse to Theorem 4.2 holds.

Theorem 4.3. Suppose that (E, ‖·‖_E) is a separable Banach space and that G is a compact additive subgroup of E such that αG ⊂ G when α ∈ K with |α| ≤ 1. Let P be normalised Haar measure on G. Then P is a K-Gaussian measure for which S = G.

Proof. It follows from Corollary 7.4 of Evans (1989) that P is K-Gaussian. Part (i) of Theorem 4.2 shows that S = G. □

If (H, ⟨·,·⟩) is a real, separable Hilbert space then it is well known that there exists a probability measure P on H such that ∫ e^{i⟨x,y⟩} P(dy) = e^{−⟨x,x⟩/2} for all x ∈ H if and only if H is finite dimensional. The corresponding result in our setting is the following.

Corollary 4.4. Let (E, ‖·‖_E) be a separable Banach space. There exists a probability measure P on E such that ∫ χ(Ty) P(dy) = Φ(‖T‖_{E*}) for all T ∈ E* if and only if E is finite dimensional.

Proof. Suppose that E is infinite dimensional and such a probability measure P exists. It is clear that P is K-Gaussian and ‖T(X)‖_∞ = ‖T‖_{E*} for all T ∈ E*. Therefore we have

S = {x: |T(x)| ≤ ‖T‖_{E*} for all T ∈ E*}

and so S ⊃ {x: ‖x‖_E ≤ 1} (in fact, we have equality). Since the unit ball of E is certainly not compact, this contradicts part (ii) of Theorem 4.2.

Page 185: Seminar on Stochastic Processes, 1990


Suppose, on the other hand, that E has finite dimension n. Let e_1, ..., e_n be an orthogonal basis for E (see Theorem 50.8 of Schikhof (1984) for the existence of such a basis). By rescaling, we may assume that q^{−1} < ‖e_i‖_E ≤ 1, 1 ≤ i ≤ n, so that

{x ∈ E: ‖x‖_E ≤ 1} = {Σ_i α_i e_i: ∨_{i=1}^n |α_i| ≤ 1}

and hence ‖T‖_{E*} = ∨_{i=1}^n |Te_i| for each T ∈ E*. Let X_1, ..., X_n be independent, K-valued, K-Gaussian random variables for which ‖X_i‖_∞ = 1, so that X_1, ..., X_n are orthonormal in L^∞ (see Theorem 7.5 of Evans (1989)). Set X = Σ_i X_i e_i. Then X is K-Gaussian and ‖T(X)‖_∞ = ∨_{i=1}^n |Te_i|, so the law of X is the probability measure we are seeking. □

5. Contraction

Suppose that {Z(i): i ∈ I} is a real-valued, centred, Gaussian process on some index set I. Using, for example, Theorem II.4.3 in Kuo (1975) it is not difficult to see that if a ∈ R \ {−1, 0, 1} then the law of aZ is either equivalent or perpendicular to the law of Z, depending on whether the subspace of L² spanned by {Z(i): i ∈ I} is finite dimensional or infinite dimensional. The corresponding result holds in our setting.

Theorem 5.1. Suppose that (E, F, B) satisfies the hypothesis (*) and P is a K-Gaussian measure on E. Given a ∈ K with 0 < |a| < 1, let Q be the law of aX. Then either Q ≪ P or Q ⊥ P, depending on whether the subspace of L^∞(P) spanned by {T(X) : T ∈ F} is finite dimensional or infinite dimensional.

Proof. Suppose that the dimension of the subspace of L^∞(P) spanned by {T(X) : T ∈ F} is m < ∞. Then there exist T_1*, ..., T_m* ∈ span F such that span{T_1*(X), ..., T_m*(X)} = span{T(X) : T ∈ F} and T_1*(X), ..., T_m*(X) are orthonormal in L^∞(P) (see Theorem 50.8 of Schikhof (1984)). From Theorem 7.5 of Evans (1989), T_1*(X), ..., T_m*(X) are independent. If g is a B-measurable function then g is of the form g(x) = G(T_1(x), T_2(x), ...) for some measurable function G on K^ℕ and some sequence {T_i} ⊂ F. We may find coefficients α_ij ∈ K, 1 ≤ i < ∞, 1 ≤ j ≤ m, such that T_i(X) = Σ_j α_ij T_j*(X), 1 ≤ i < ∞. Define G* : K^m → ℝ by G*(t_1, ..., t_m) = G((Σ_j α_ij t_j)_{i=1}^∞). A straightforward calculation of the density of the law of (T_1*(X), ..., T_m*(X)) with respect to Haar measure on K^m shows that

Q[g(X)] = P[g(aX)]

= P[G((a T_i(X))_{i=1}^∞)]

= P[G*(a T_1*(X), ..., a T_m*(X))]

≤ |a|^{−m} P[G*(T_1*(X), ..., T_m*(X))]

= |a|^{−m} P[g(X)],

and so Q ≪ P.

Page 186: Seminar on Stochastic Processes, 1990

180 S.N. Evans

Suppose now that the subspace of L^∞(P) spanned by {T(X) : T ∈ F} is infinite dimensional. Then we may find a sequence {T_i} ⊂ span F such that T_1(X), T_2(X), ... are orthonormal in L^∞(P) and hence independent (see Theorem 7.5 of Evans (1989)). The event {|T_i(X)| ≤ |a|, 1 ≤ i < ∞} has probability zero under P and probability one under Q, so that P ⊥ Q. □
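The zero-one dichotomy used at the end of this proof can be spelled out as follows (a hedged expansion, assuming, as in Evans (1989), that a variable normalized by ‖T_i(X)‖_∞ = 1 is uniformly distributed on the ring of integers, so that each factor below equals |a| < 1):

```latex
P\bigl(|T_i(X)| \le |a|,\ 1 \le i < \infty\bigr)
  \;=\; \prod_{i=1}^{\infty} P\bigl(|T_i(X)| \le |a|\bigr)
  \;=\; \prod_{i=1}^{\infty} |a| \;=\; 0,
\qquad
Q\bigl(|T_i(X)| \le |a|,\ 1 \le i < \infty\bigr) \;=\; 1,
```

the second identity holding because under Q each T_i(X) has the law of a T_i(X) under P, and |a T_i(X)| ≤ |a| almost surely.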

When E is a Banach space the dimension appearing in Theorem 5.1 shows up in the probability of small balls.

Theorem 5.2. Suppose that (E, ‖ ‖_E) is a separable Banach space and P is a K-Gaussian measure on E. Let

m = dim span{T(X) : T ∈ E*}.

Then

m = −lim_{n→∞} log_q P(‖X‖_E ≤ q^{−n}) / n.

Proof. The result is obvious if m = 0, since in that case X = 0 almost surely. Suppose next that 1 ≤ m < ∞. We may find T_1, ..., T_m ∈ E* such that {T_1(X), ..., T_m(X)} forms an orthonormal basis for span{T(X) : T ∈ E*} in L^∞(P). Consequently, T_1(X), ..., T_m(X) are independent random variables. Let (e_i)_{i=1}^∞ be a basis for E. If π_i : E → K, 1 ≤ i < ∞, is the ith coordinate map then π_i ∈ E* and so we have

X = Σ_{i=1}^∞ π_i(X) e_i = Σ_{i=1}^∞ (Σ_{j=1}^m α_ij T_j(X)) e_i = Σ_{j=1}^m T_j(X) (Σ_{i=1}^∞ α_ij e_i) = Σ_{j=1}^m T_j(X) f_j,

say, for some choice of coefficients (α_ij). It is easy to see that {f_1, ..., f_m} must be linearly independent; otherwise we would have a contradiction to the linear independence of {T_1(X), ..., T_m(X)}. Thus (x_1, ..., x_m) ↦ ‖Σ_{j=1}^m x_j f_j‖_E is a norm on K^m. Since all norms on K^m are equivalent (see Theorem 13.3 of Schikhof (1984)) there exists a constant c > 1 such that

c^{−1} ∨_{j=1}^m |T_j(X)| ≤ ‖X‖_E ≤ c ∨_{j=1}^m |T_j(X)|

and the result now follows easily.
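The passage from the norm equivalence to the small-ball limit can be sketched as follows (a hedged expansion; it assumes, as in Evans (1989), that each T_j(X) is uniformly distributed on the ring of integers, so that P(|T_j(X)| ≤ q^{−k}) = q^{−k}, and k is an integer with q^k ≥ c):

```latex
% Upper and lower small-ball bounds from the two-sided norm comparison
% and the independence of T_1(X), ..., T_m(X):
P\bigl(\|X\|_E \le q^{-n}\bigr)
  \;\le\; \prod_{j=1}^{m} P\bigl(|T_j(X)| \le q^{-(n-k)}\bigr)
  \;=\; q^{-m(n-k)},
\qquad
P\bigl(\|X\|_E \le q^{-n}\bigr)
  \;\ge\; \prod_{j=1}^{m} P\bigl(|T_j(X)| \le q^{-(n+k)}\bigr)
  \;=\; q^{-m(n+k)},
```

so log_q P(‖X‖_E ≤ q^{−n}) = −mn + O(1) and the limit in Theorem 5.2 equals m.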

Suppose finally that m = ∞. We may then find a sequence (T_i)_{i=1}^∞ ⊂ E* such that (T_i(X))_{i=1}^∞ are orthonormal in L^∞(P) and hence independent. Then

P(‖X‖_E ≤ q^{−n}) ≤ P(∩_{i=1}^∞ {|T_i(X)| ≤ ‖T_i‖_{E*} q^{−n}}) = ∏_{i=1}^∞ P(|T_i(X)| ≤ ‖T_i‖_{E*} q^{−n}),

Page 187: Seminar on Stochastic Processes, 1990

and so −lim_{n→∞} log_q P(‖X‖_E ≤ q^{−n}) / n = ∞, as required. □

REFERENCES


[1] Evans, S.N. (1989). Local field Gaussian measures. In Seminar on Stochastic Processes 1988 (E. Çinlar, K.L. Chung, R.K. Getoor, eds.), pp. 121-160. Birkhäuser.

[2] Feldman, J. (1958). Equivalence and perpendicularity of Gaussian processes. Pacific J. Math. 8, 699-708.

[3] Hájek, J. (1959). On a simple linear model in Gaussian processes. In Trans. Second Prague Conf. Information Theory, pp. 185-197.

[4] Kuo, H.-H. (1975). Gaussian Measures in Banach Spaces. Lecture Notes in Mathematics 463. Springer.

[5] Schikhof, W.H. (1984). Ultrametric Calculus. Cambridge University Press.

[6] Taibleson, M.H. (1975). Fourier Analysis on Local Fields. Princeton

University Press.

Department of Statistics University of California

367 Evans Hall, Berkeley, CA 94720

U.S.A.

Page 188: Seminar on Stochastic Processes, 1990

Skorokhod Embedding by Randomized Hitting Times

P. J. FITZSIMMONS*

1. Introduction.

The "Skorokhod embedding" problem was solved for general strong Markov

processes by Rost [R70, R71]: given such a process X = (X_t; t ≥ 0), an initial law µ with σ-finite potential, and a target law ν, there is a randomized stopping time T such that

(1.1) X_T ∼ ν when X_0 ∼ µ

if and only if the potential of µ dominates that of ν. Subsequently various

authors have shown that under additional hypotheses on X one can take T to

be nonrandomized, i.e. a stopping time of the natural filtration of X. For recent

work on this subject see [C85] and [FF90]; see also [Fa81, Fa83], which contain

references for the earlier literature.

Our object in this note is different. We shall deal with a general right Markov

process X, but we shall show that a randomized stopping time T achieving the

embedding (1.1) can be chosen from the reasonably narrow class of "randomized

hitting times." More precisely, we show that if the potential of µ is σ-finite and dominates that of ν, then there is a monotone family of sets {B(r); 0 ≤ r ≤ 1}

such that if T is the first entry time of B(R), where R is independent of X and

uniformly distributed over [0,1], then (1.1) holds. The reader will recall that this

* Research supported in part by NSF grant DMS 8721347.

Page 189: Seminar on Stochastic Processes, 1990

184 P. J. Fitzsimmons

is the same sort of stopping time constructed by Skorokhod in his original work

[Sk65] on embedding mean zero random variables in Brownian motion.

Our main result, Theorem (2.1), is stated and proved in the next section. The

proof is based on a result of Meyer [Me71], and on the version of Rost's theorem

found in [Fi88]. This latter result relies on a technique due to Mokobodzki which

was used by Heath [H74] to prove what amounts to Theorem (2.1) in the special

case of Brownian motion in three or more dimensions. In Sect. 3 we provide a new

proof of the result of Meyer mentioned above. This is included since it yields an

explicit description of the family {B(r); 0 ≤ r ≤ 1} involved in the main result.

2. Main Result.

Let X = (Ω, F, F_t, X_t, θ_t, P^x) be a right process in the sense of Sharpe [Sh88]. Thus X is a strong Markov process with right continuous paths, along which the α-excessive functions are almost surely right continuous. The state space E of X is homeomorphic to a universally measurable subset of some compact metric space. The Borel σ-field in E is denoted ℰ, and ℰ* is the universal completion of ℰ. The transition probabilities (P_t; t ≥ 0) form a semigroup of subMarkovian kernels on (E, ℰ*). In particular, a cemetery point Δ ∉ E is adjoined to E as an isolated point and the lifetime ζ := inf{t : X_t = Δ} may be finite. The potential kernel U is defined by

U f(x) := ∫_0^∞ P_t f(x) dt = P^x(∫_0^ζ f(X_t) dt).

Recall that ℰ^e (⊇ ℰ) denotes the σ-field on E generated by the 1-excessive functions of X. If B ∈ ℰ^e then the entry time (or debut) of B,

D_B := inf{t ≥ 0 : X_t ∈ B},

is a stopping time of the natural filtration (F_t). We write

H_B f(x) := P^x[f(X_{D_B}); D_B < ∞]

for the associated hitting operator.

Page 190: Seminar on Stochastic Processes, 1990

Randomized Hitting Times 185

Here is the main result of the paper.

(2.1) Theorem. Let µ and ν be measures on (E, ℰ) such that µU is σ-finite and νU ≤ µU. Then there is a decreasing family {B(r); 0 ≤ r ≤ 1} of finely closed ℰ^e-measurable sets such that

(2.2) ν = ∫_0^1 µ H_{B(r)} dr.

If {A(r); 0 ≤ r ≤ 1} is a second such family, then P^µ(D_{B(r)} ≠ D_{A(r)}) = 0 for a.e. r ∈ [0,1].

Remarks. (a) According to a result of Mokobodzki [Mo71], if µU is σ-finite, then the extreme points of the convex set A_µ := {ν : νU ≤ µU} are precisely the measures µH_B, B ∈ ℰ^e. One could use Mokobodzki's theorem and an abstract integral representation theorem of Arsove and Leutwiler [AL75] to prove the existence part of Theorem (2.1). We shall give a more direct probabilistic proof. Of course, Mokobodzki's theorem is an immediate corollary of Theorem (2.1).

(b) The measures µ and ν in Theorem (2.1) need not be finite but they are σ-finite: if f > 0 and µU(f) < ∞, then U f > 0 and ν(U f) ≤ µ(U f) < ∞. The σ-finiteness of µU amounts to a transience hypothesis. Indeed, with f as before, X restricted to the absorbing set {U f < ∞} is transient, and each of the measures µ, ν, µH_{B(r)} is carried by {U f < ∞}.

(c) The probabilistic interpretation of (2.2) is as noted in Sect. 1: if R is chosen independently of X and uniformly distributed over the interval [0,1], and if D is the debut of B(R), then X_D has law ν when X_0 has law µ.
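In symbols, Remark (c) amounts to the one-line computation below (a restatement added here; D = D_{B(R)} and R is uniform on [0,1], independent of X):

```latex
% Average over the independent randomization R:
P\bigl(X_D \in A\bigr)
  \;=\; \int_0^1 \mu H_{B(r)}(A)\, dr
  \;=\; \nu(A),
\qquad A \in \mathcal{E},
```

the last equality being precisely formula (2.2).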

For the proof of (2.1) we require two lemmas. The first of these is taken from Sect. 3 of [Fi88] and was proved there under the hypotheses of Borel measurability. However the argument is valid in the general case considered here; cf. [G90, (5.23)]. The second lemma is due to Meyer [Me71, Prop. 8] and, independently, to Mokobodzki.

Recall that an excessive measure of X is a σ-finite measure ξ on (E, ℰ) such that ξP_t ≤ ξ for all t > 0. For example, any potential λU is excessive provided

Page 191: Seminar on Stochastic Processes, 1990


it is σ-finite. If ξ and η are excessive measures then the réduite R(ξ − η) is the smallest excessive measure ρ such that ρ + η dominates ξ. If ξ is a potential, then so is R(ξ − η). For a stopping time T the kernel P_T is defined by P_T f(x) := P^x(f(X_T); T < ζ).

(2.3) Lemma. Let µU and νU be σ-finite potentials with νU ≤ µU. Then there is a family {T(r); 0 ≤ r ≤ 1} of (F_t) stopping times, with r ↦ T(r,w) increasing and right continuous for each w ∈ Ω, such that

(2.4) ν = ∫_0^1 µ P_{T(s)} ds

and

(2.5) R(νU − r·µU) = ∫_r^1 µ P_{T(s)} U ds, ∀r ∈ [0,1].

(2.6) Lemma. Let ν and λ be measures on (E, ℰ) such that the potentials νU and λU are σ-finite. Then there exists a finely closed ℰ^e-measurable set B such that

R(νU − λU) = (ν − λ)H_B U.

Proof of Theorem (2.1). Fix r ∈ ]0,1[. By (2.5), Lemma (2.6) (with λ = r·µ), and the uniqueness of charges [G90, (2.12)], there is a finely closed set B(r) ∈ ℰ^e such that

(2.7) ∫_r^1 µ P_{T(s)} ds = (ν − r·µ)H_{B(r)}.

Since B(r) is finely closed, the measure on the R.H.S. of (2.7) is carried by B(r); the same is therefore true of the L.H.S. It follows that X_{T(s)} ∈ B(r) a.s. P^µ on {T(s) < ∞} for a.e. s ∈ [r,1]. Consequently T(s) ≥ D_{B(r)} a.s. P^µ for a.e. s ∈ [r,1]. Invoking Fubini's theorem and the right continuity of s ↦ T(s,w), we conclude that

(2.8) T(r) ≥ D_{B(r)} a.s. P^µ.

Page 192: Seminar on Stochastic Processes, 1990


On the other hand if we apply H_{B(r)} to both sides of (2.7), then by (2.4) and the identity H_{B(r)} = H_{B(r)} H_{B(r)},

(2.9) r·µ H_{B(r)} = ∫_0^r µ P_{T(s)} H_{B(r)} ds.

But D_{B(r)} ≤ D(s) := T(s) + D_{B(r)} ∘ θ_{T(s)} on {T(s) < ∞}. Since µU is σ-finite, we can choose f > 0 such that µU f < ∞, and then by (2.9) and the strong Markov property

P^µ(∫_{D_{B(r)}}^∞ f(X_t) dt) = r^{−1} ∫_0^r ds P^µ(∫_{D(s)}^∞ f(X_t) dt) < ∞,

so

(2.10) D_{B(r)} = T(s) + D_{B(r)} ∘ θ_{T(s)} ≥ T(s) a.s. P^µ

for a.e. s ∈ [0,r). By (2.8), (2.10), and the monotonicity of s ↦ T(s,w) we therefore have

(2.11) T(r−) ≤ D_{B(r)} ≤ T(r) a.s. P^µ.

Since r ∈ ]0,1[ was arbitrary and T(·,w) has only countably many discontinuities, formula (2.2) now follows easily from (2.4) and (2.11). The sets B(r) just constructed need not be monotone in r; to remedy this, replace B(r) by the fine closure of ∪{B(s) : r < s < 1, s rational} (taking B(1) = ∅). In view of (2.11) and the monotonicity of T(·,w), this change does not disturb the validity of (2.2).

It remains to prove the uniqueness. Let {A(r); 0 ≤ r ≤ 1} be a second family of sets with the properties of {B(r); 0 ≤ r ≤ 1}. Then

∫_r^1 µ H_{A(s)} U ds + r·µU ≥ νU,

from which it follows that

∫_r^1 µ H_{B(s)} U ds = R(νU − r·µU) ≤ ∫_r^1 µ H_{A(s)} U ds.

Page 193: Seminar on Stochastic Processes, 1990


Consequently, since the A(r)'s decrease,

∫_0^r µ H_{B(s)} U ds ≥ ∫_0^r µ H_{A(s)} U ds ≥ r·µ H_{A(r)} U.

Thus, by a lemma of Rost [R74, p. 201],

T(s) = D_{B(s)} ≤ D_{A(r)} a.s. P^µ, for a.e. s ∈ [0,r],

hence T(r−) ≤ D_{A(r)} a.s. P^µ. Since ∫_0^1 µ H_{A(r)} U dr = νU = ∫_0^1 µ H_{B(r)} U dr, the argument used earlier yields P^µ(D_{B(r)} ≠ D_{A(r)}) = 0 for a.e. r ∈ [0,1], as required. □

Remark. The proof of Lemma (2.6) given in Sect. 3 reveals the following recipe for the sets B(r) of Theorem (2.1). For r ∈ [0,1], the excessive measures R(νU − r·µU) and νU are both dominated by µU; let their "fine" densities (Lemma (3.1)) be denoted φ_r and ψ respectively. Then B(r) can be taken to be the fine closure of {x ∈ E : φ_r(x) ≤ ψ(x) − r}. (Note that {x ∈ E : φ_r < ψ − r} is µU-null.) With a little care one can arrange that r ↦ φ_r(x) is decreasing and convex for each x; this being done, r ↦ {φ_r ≤ ψ − r} is decreasing, and so is r ↦ B(r).
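The recipe of this Remark can be summarized in display form (a restatement; φ_r and ψ are the fine densities of Lemma (3.1), taken relative to µU):

```latex
\phi_r = \frac{d\,R(\nu U - r\,\mu U)}{d(\mu U)}
\ \ \text{(fine version)},
\qquad
\psi = \frac{d(\nu U)}{d(\mu U)},
\qquad
B(r) = \text{fine closure of}\ \{x \in E : \phi_r(x) \le \psi(x) - r\}.
```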

3. Proof of Lemma (2.6)

The proof of Lemma (2.6) rests on a domination principle, which is based on the choice of precise versions of certain Radon-Nikodym derivatives. We fix a σ-finite potential m = ρU. A set B ∈ ℰ^e is ρ-evanescent provided P^ρ(X_t ∈ B for some t ≥ 0) = 0. The following two lemmas sharpen results in [Fi87, Fi89] by taking advantage of the fact that the excessive measure m is a potential. For a complete discussion of these and related results see [FG90].

For a complete discussion of these and related results see [FG90].

(3.1) Lemma. Let νU be a σ-finite potential dominated by a multiple of m. Then there is a bounded ℰ^e-measurable version ψ of d(νU)/dm and a set A ∈ ℰ^e such that

(i) A is absorbing for X and E \ A is ρ-evanescent;

Page 194: Seminar on Stochastic Processes, 1990


(ii) ψ|_A is finely continuous.

The density ψ is uniquely determined modulo a ρ-evanescent set.

In the sequel we shall write ψ_ν for the "fine" version of d(νU)/dm provided by Lemma (3.1). If νU and µU are both dominated by a multiple of m and νU ≤ µU, then both ψ_ν and ψ_ν ∧ ψ_µ are fine versions of d(νU)/dm. Thus we can (and do) assume that ψ_ν ≤ ψ_µ when νU ≤ µU. Also, note that if νU ≤ µU then νH_B U ≤ µH_B U for any B ∈ ℰ^e. In particular, if µU is dominated by a multiple of m, then µ charges no ρ-evanescent set; cf. [Fa83, Lemma 3]. These facts in hand, the proof of [Fi89, (2.13)] can be adapted in the obvious way (replace "m-polar" by "ρ-evanescent") to yield the following domination principle.

(3.2) Lemma. Let µU and νU be σ-finite potentials dominated by a multiple of m. If ψ_ν ≤ ψ_µ a.e. ν, then ψ_ν ≤ ψ_µ off a ρ-evanescent set, hence νU ≤ µU.

Proof of Lemma (2.6). Given potentials νU and λU, the réduite R(νU − λU), being dominated by νU, is also a potential, say ν_1 U. Moreover, ν_1 U is strongly dominated by νU in that there is a potential ν_2 U such that ν_1 U + ν_2 U = νU, and then ν_1 + ν_2 = ν by the uniqueness of charges. (The reader can consult Sect. 5 of [G90] for proofs of these well-known facts.) Since ν_1 U + λU = R(νU − λU) + λU ≥ νU, we have

(3.3) λU ≥ ν_2 U.

We take ρ = ν + λ, and in the subsequent discussion all fine densities (ψ_ν, ψ_µ, etc.) are taken relative to m = ρU. By a previous remark we can assume that ψ_{ν_2} ≤ ψ_λ.

Let B denote the fine closure of {ψ_{ν_2} = ψ_λ}. Clearly B is ℰ^e-measurable and B \ {ψ_{ν_2} = ψ_λ} is ρ-evanescent. We will show that

(3.4) ν_1 U = (ν − λ)H_B U.

Page 195: Seminar on Stochastic Processes, 1990


For ε ∈ ]0,1[, set B(ε) = {ε ψ_λ < ψ_ν − ψ_{ν_1}}, so that ∩_n B(1 − 1/n) = B up to a ρ-evanescent set. By a lemma of Mokobodzki (see [G90, (5.6)]), and [Fi88, (2.17)],

ν_1 H_{B(ε)} U = ν_1 U,

since B(ε) differs from its fine interior by a ρ-evanescent set. By the uniqueness of charges, ν_1 H_{B(ε)} = ν_1, so ν_1 is carried by the fine closure of B(ε). But if 0 < ε' < ε < 1, then B(ε') contains the fine closure of B(ε) up to a ρ-evanescent set not charged by ν_1. It follows that ν_1 is carried by B, hence ν_1 = ν_1 H_B. To

finish the proof of (3.4) we must therefore establish

(3.5) λ H_B U = ν_2 H_B U.

On the one hand λU ≥ ν_2 U, so λH_B U ≥ ν_2 H_B U. On the other hand, the inequality λH_B U ≤ λU implies that {ψ_{λH_B} > ψ_λ} is ρ-evanescent. Thus, up to a ρ-evanescent set, ψ_{λH_B} ≤ ψ_λ = ψ_{ν_2} on B, which carries λH_B. Lemma (3.2) allows us to conclude that λH_B U ≤ ν_2 U, hence λH_B U = λH_B H_B U ≤ ν_2 H_B U, and (3.5) follows. □

References

[AL75] M. ARSOVE and H. LEUTWILER. Infinitesimal generators and quasi-units in potential theory. Proc. Nat. Acad. Sci. 72 (1975) 2498-2500.

[C85] P. CHACON. The filling scheme and barrier stopping times. Ph. D. Thesis, Univ. Washington, 1985.

[Fa81] N. FALKNER. The distribution of Brownian motion in R^n at a natural stopping time. Adv. Math. 40 (1981) 97-127.

[Fa83] N. FALKNER. Stopped distributions for Markov processes in duality. Z. Wahrscheinlichkeitstheor. verw. Geb. 62 (1983) 43-51.

[FF90] N. FALKNER and P. J. FITZSIMMONS. Stopping distributions for right processes. Submitted to Probab. Th. Rel. Fields.

[Fi87] P. J. FITZSIMMONS. Homogeneous random measures and a weak order for the excessive measures of a Markov process. Trans. Am. Math. Soc. 303 (1987)

431-478.

Page 196: Seminar on Stochastic Processes, 1990

Randomized Hitting Times 191

[Fi88] P. J. FITZSIMMONS. Penetration times and Skorohod stopping. Sem. de Probabilites XXII, pp. 166-174. Lecture Notes in Math. 1321, Springer, Berlin, 1988.

[Fi89] P. J. FITZSIMMONS. On the equivalence of three potential principles for right

Markov processes. Probab. Th. Rel. Fields 84 (1990) 251-265.

[FG90] P. J. FITZSIMMONS and R. K. GETOOR. A fine domination principle for

excessive measures. To appear in Math. Z.

[G90] R. K. GETOOR. Excessive Measures. Birkhäuser, Boston, 1990.

[H74] D. HEATH. Skorohod stopping via potential theory. Sem. de Probabilites VIII, pp. 150-154. Lecture Notes in Math. 381, Springer, Berlin, 1974.

[Me71] P.-A. MEYER. Le schema de remplissage en temps continu. Sem. de Probabilites VI, pp. 130-150. Lecture Notes in Math. 258, Springer, Berlin, 1971.

[Mo71] G. MOKOBODZKI. Elements extremaux pour le balayage. Seminaire Brelot-Choquet-Deny (Theorie du potentiel), 13e annee, 1969/70, no. 5, Paris, 1971.

[R70] H. ROST. Die Stoppverteilungen eines Markoff-Prozesses mit lokal endlichem

Potential. Manuscripta Math. 3 (1970) 321-329.

[R71] H. ROST. The stopping distributions of a Markov process. Z. Wahrschein­

lichkeitstheor. verw. Geb. 14 (1971) 1-16.

[R74] H. ROST. Skorokhod stopping times of minimal variance. Sem. de Probabilites X, pp. 194-208. Lecture Notes in Math. 511, Springer, Berlin, 1976.

[Sh88] M. J. SHARPE. General Theory of Markov Processes. Academic Press, San

Diego, 1988.

[Sk65] A. V. SKOROKHOD. Studies in the Theory of Random Processes. Addison­

Wesley, Reading, Mass., 1965.

P. J. FITZSIMMONS Department of Mathematics, C-012 University of California, San Diego La Jolla, California 92093

Page 197: Seminar on Stochastic Processes, 1990

Multiplicative Symmetry Groups of Markov Processes

JOSEPH GLOVER*

RENMING SONG

1. Introduction.

In [5], Glover and Mitro formulated a group G consisting of symmetries of the cone S of excessive functions of a transient Markov process X_t. Roughly speaking, G is defined to be the collection of all bimeasurable bijections φ of the state space E of X_t onto itself such that S = {f ∘ φ : f ∈ S}. This group G can also be characterized as the collection of all bimeasurable bijections φ : E → E with the following properties: i) φ(X) is a transient Markov process; and ii) there is a continuous additive functional A_t^φ of X_t which is strictly increasing and finite on [0,ζ) with right continuous inverse τ(φ,t) such that (φ(X_t), P^{φ^{−1}(x)}) and (X_{τ(φ,t)}, P^x) are identical in law. Because of this, we call the group G the

additive symmetry group of X_t. From each subgroup H of G, Glover and Mitro constructed a new state space F and a surjection Φ : E → F. They showed that, under some mild topological hypotheses, there is a time change τ(t) of X_t

*Research supported in part by NSA and NSF grant MDA904-89-H-2037.

Page 198: Seminar on Stochastic Processes, 1990

194 J. Glover and R. Song

such that Φ(X_{τ(t)}) is a strong Markov process. Following this, Glover [3] used appropriate transitive subgroups of G to introduce a group structure on the state space E and showed that, under appropriate conditions, X_t is a Lévy process in this new group structure.

There are at least two important classes of functionals in the theory of Markov

processes: one is the class of additive functionals mentioned above, and the other

is the class of multiplicative functionals. It is therefore natural to ask if we can

formulate a multiplicative symmetry group by using multiplicative functionals and

develop a theory similar to that of the additive case.

By using a "diagonal principle", Glover [4] proved results similar to those of [3]

for multiplicative symmetry groups when the underlying process Xt is a regular

step process. The argument in [4] depends heavily on the special properties of

regular step processes, and it seems that his method cannot be extended easily to

more general processes.

In this paper, we are going to assume that Xt is a general Markov process

but that H is a subgroup of the multiplicative symmetry group with a special

property: we shall assume that H has a finite left-invariant measure. The contents

of this paper are organized as follows. Section 2 serves as a preparation: the basic

framework is set up in this section and a preliminary result is proven. A result

similar to that of [3] is given in Section 3.

2. Preparation.

Let E be a Lusin space and let B(E) be the Borel field of E. Adjoin a cemetery point Δ to E and denote the extended space and Borel field by E_Δ and B(E_Δ). Let X = (Ω, F, F_t, X_t, θ_t, P^x) be a right process on (E, B(E)). For convenience, we shall assume that Ω is the space of all maps w : [0,∞) → E_Δ which are right continuous and such that w(t) = Δ if and only if w(t+s) = Δ for every s > 0. Set X_t(w) = w(t), and let F_t and F be the appropriate completions of F_t^0 = σ{X_s : s ≤ t} and F^0 = σ{X_s : s ≥ 0}. For each t ≥ 0, θ_t : Ω → Ω is the shift operator characterized by X_s ∘ θ_t = X_{s+t}. Under the measure P^x, X_t is

Page 199: Seminar on Stochastic Processes, 1990

Symmetry Groups of Markov Processes 195

a time homogeneous strong Markov process with X_0 = x a.s. P^x. In general, if ℰ is a σ-algebra, we write bℰ (resp. pℰ) to denote the collection of bounded (resp. positive) ℰ-measurable functions.

Let Pt denote the semigroup of X. We assume throughout this article that

X is a Borel right process, by which we mean Pt maps Borel functions into Borel

functions.

Let G be the collection of bijections φ : E → E satisfying the following properties:

(1) φ and φ^{−1} are B(E) measurable.

(2) φ and φ^{−1} are finely continuous.

PROPOSITION. If φ ∈ G, then Y_t = (φ(X_t), P^{φ^{−1}(x)}) is also a Borel right process.

PROOF: Let P̃^x := P^{φ^{−1}(x)}. We must check first that Y_t is a right continuous strong Markov process on E. If g is any continuous function on E, then g ∘ φ is finely continuous, so g(Y_t) is right continuous a.s. P̃^x for every x in E. Therefore Y_t is right continuous a.s. P̃^x for every x ∈ E. Since φ is a measurable bijection, Y_t inherits the strong Markov property from X_t.

Second, we need to check that g(Y_t) is right continuous whenever g is excessive for Y_t. But g is excessive for Y_t if and only if g ∘ φ is excessive for X_t, so g(Y_t) is right continuous a.s. P̃^x for every x. Let Ũ^α be the resolvent of Y_t. For x ∈ E, we have

ε_x Ũ^α(f) = P^{φ^{−1}(x)} ∫ e^{−αt} f(Y_t) dt = ε_{φ^{−1}(x)} U^α(f ∘ φ) = φ(ε_{φ^{−1}(x)} U^α)(f)

for every bounded positive function f ∈ B(E). Therefore, for every x ∈ E,

ε_x Ũ^α = φ(ε_{φ^{−1}(x)} U^α).

Since φ and φ^{−1} are both B(E)-measurable, (Ũ^α) is a Borel resolvent and we have proved that Y_t is a right process. ∎

Page 200: Seminar on Stochastic Processes, 1990


DEFINITION. A family M = {M_t; 0 ≤ t < ∞} of positive real-valued random variables on (Ω, F) is called a multiplicative functional of X provided:

(1) M_t ∈ F_t for each t ≥ 0.

(2) M_{t+s} = M_t (M_s ∘ θ_t) a.s. for each t, s ≥ 0.

A multiplicative functional M of X is called nonvanishing if M_t > 0 a.s. P^x on {t < ζ} for every t > 0 and every x ∈ E.

A multiplicative functional M of X is said to be a strong multiplicative functional provided that

M_{T+t} = M_T (M_t ∘ θ_T)

a.s. P^x on {T < ∞} for every x ∈ E, every t ≥ 0 and every stopping time T.

It follows from [1] that any right continuous multiplicative functional of X is a strong multiplicative functional.

Given two multiplicative functionals M and N of X, we say that N is a version of M if P^x[M_t ≠ N_t; t < ζ] = 0 for all t and x.

DEFINITION. φ ∈ G is called a multiplicative symmetry of X if there is a right continuous nonvanishing multiplicative functional M^φ such that

(1) P^{φ^{−1}(x)}[f ∘ φ(X_t)] = P^x[f(X_t) M_t^φ]

for every f ∈ pB(E), for every t. We let G_M denote the collection of all multiplicative symmetries of X.

It is easy to see that for every φ ∈ G_M, M_t^φ is a supermartingale multiplicative functional.

PROPOSITION. G_M is a subgroup of G.

PROOF: First, we must show that if φ ∈ G_M, then φ^{−1} ∈ G_M. Define a map Γ^φ : Ω → Ω by Γ^φ(w) = φ ∘ w (where φ(Δ) = Δ); then (1) implies

Γ^φ(P^{φ^{−1}(x)})[F; t < ζ] = P^x[F · M_t^φ]

Page 201: Seminar on Stochastic Processes, 1990


for every F ∈ pF_t. In particular, if we let F = f(X_t)/M_t^φ, we have

P^x[f(X_t)] = Γ^φ(P^{φ^{−1}(x)})[f(X_t)/M_t^φ] = P^{φ^{−1}(x)}[f ∘ φ(X_t) · 1/(M_t^φ ∘ Γ^φ)].

If we replace f ∘ φ with g and φ^{−1}(x) with z, we see that

P^{φ(z)}[g ∘ φ^{−1}(X_t)] = P^z[g(X_t) · 1/(M_t^φ ∘ Γ^φ)].

Since 1/(M_t^φ ∘ Γ^φ) is a right continuous, nonvanishing multiplicative functional of X, φ^{−1} ∈ G_M.

Second, we must show that if φ and ψ are in G_M, then φ ∘ ψ ∈ G_M. To do this, we compute

P^{(φ∘ψ)^{−1}(x)}[f ∘ φ ∘ ψ(X_t)] = P^{φ^{−1}(x)}[f ∘ φ(X_t) M_t^ψ] = P^x[f(X_t) M_t^φ · (M_t^ψ ∘ Γ^{φ^{−1}})].

Since M_t^φ · (M_t^ψ ∘ Γ^{φ^{−1}}) is also a right continuous, nonvanishing multiplicative functional of X, φ ∘ ψ ∈ G_M and we conclude that G_M is a group. ∎

From the proof above, we can see that we have the following important

COROLLARY. For any φ, ψ ∈ G_M, we have

(1) 1/(M_t^φ ∘ Γ^φ) is a version of M_t^{φ^{−1}};

(2) M_t^φ · (M_t^ψ ∘ Γ^{φ^{−1}}) is a version of M_t^{φ∘ψ}.

3. Lévy processes.

Take a subgroup H of G_M. In this article we are going to assume:

HYPOTHESIS. H is transitive, i.e., for each pair of points x and y in E, there is a map φ ∈ H such that φ(x) = y.

Let us fix, once and for all, a point e ∈ E to serve as a reference point in E and let

H_e = {φ ∈ H; φ(e) = e}.

Page 202: Seminar on Stochastic Processes, 1990


This is a subgroup of H, and we let T = H/H_e be the collection of left cosets. From [3] we know that

φH_e = {ψ ∈ H; ψ(e) = φ(e)}.

Because of this, we can define a map Ψ from E to T as follows:

Ψ(x) = {φ ∈ H; φ(e) = x}.

In fact, it is easy to show that Ψ is a bijection from E to T (see [3]).

The bijection Ψ : E → T allows us to identify E with T; we thereby endow E with the structure of a coset space.

Now we are going to assume the following:

HYPOTHESIS. H_e is trivial; i.e., H_e consists only of the identity map.

Under this hypothesis, T and H are isomorphic, and Ψ is a bijection from E to H. We use Ψ to identify E and H, and in particular, Ψ endows E with the group structure of H given by the product

xy = Ψ^{−1}(Ψ(x) ∘ Ψ(y))

whenever x, y ∈ E.

The group product notation above is useful, but we also find it convenient to use the product in H (which is composition ∘) by identifying the point x ∈ E with the map φ_x = Ψ(x) ∈ H.

HYPOTHESIS. (x,y) → xy and (x,y) → x^{−1}y are B(E) × B(E)-measurable.

DEFINITION. If µ is a measure on (E, B(E)) and x ∈ E, µ^x is the measure on (E, B(E)) defined by µ^x(A) = µ(xA) for every A ∈ B(E). A σ-finite measure µ on (E, B(E)) is said to be left quasi-invariant if µ^x ≪ µ for every x ∈ E. A σ-finite measure m on (E, B(E)) is said to be a left Haar measure if m^x = m for every x ∈ E.

In this paper we assume the following

Page 203: Seminar on Stochastic Processes, 1990


HYPOTHESIS. There is a σ-finite left quasi-invariant measure µ on (E, B(E)).

By the Mackey-Weil theorem (see [3]) we know that this hypothesis implies there is a topology on E making E into a locally compact second countable metric group such that:

(1) the Borel σ-algebra of the topology is B(E);

(2) there is a left Haar measure n, and µ and n have the same null sets.

We are going to call this topology the Mackey-Weil topology, and we set

m = n ∘ Ψ^{−1},

so m is a measure on (H, B(H)), where B(H) = {Ψ(A) : A ∈ B(E)}.

In this article we are going to assume m is a finite measure. Without loss

of generality we can assume that m(H) = 1.

The purpose of this article is to use {M_t^φ; φ ∈ H} to produce a nice multiplicative functional M_t so that (X_t, M_t) is a Lévy process. In order to proceed, we need to know that M_t^φ can be made jointly measurable.

PROPOSITION. There is a process N_t^φ such that

(1) for each φ, N^φ is a version of M^φ;

(2) (t, x, w) → N_t^{Ψ(x)}(w) is B(R_+) × B(E) × F^0-measurable.

PROOF: First we fix a t > 0. For each pair (x, φ) ∈ E × H, define a measure L_t((x,φ), dw) by setting L_t((x,φ), F) = P^x[M_t^φ · F] for every F ∈ pF^0. Assume for the moment that we have shown that (x, z) → L_t((x, φ_z), F) is B(E) × B(E)-measurable. Doob's lemma then yields a density G_t(x, z, w) ∈ B(E) × B(E) × F^0 such that

L_t((x, φ_z), F) = P^x[G_t(x, z, ·) F]

for every F ∈ pF^0. If we set c_t(w) = G_t(X_0(w), z, w), then c_t(w) is B(E) × F^0-measurable and c_t(w) = M_t^{Ψ(z)}(w) a.s. P^x for every x.

Now we define

Page 204: Seminar on Stochastic Processes, 1990


Then t → N_t^{Ψ(z)} is right continuous a.s., N_t^{Ψ(z)} and M_t^{Ψ(z)} are indistinguishable, (t, x, w) → N_t^{Ψ(x)} is B(R_+) × B(E) × F^0-measurable and N_t^{Ψ(x)} is F_t-measurable for every t.

So all that remains to complete the proof of this proposition is to verify that (x, z) → P^x[M_t^{Ψ(z)} · F] is B(E) × B(E)-measurable whenever F ∈ pF_t^0. Since F_t^0 is generated by random variables of the form

f_1(X_{t_1}) f_2(X_{t_2}) ··· f_n(X_{t_n})

with t_1 < t_2 < ... < t_n ≤ t and n = 1, 2, ..., it suffices to prove that

(x, z) → P^x[M_t^{Ψ(z)} f_1(X_{t_1}) ··· f_n(X_{t_n})]

is B(E) × B(E)-measurable for every n and all t_1 < ... < t_n ≤ t.

We proceed by induction on n. When n = 1,

P^x[M_t^{Ψ(z)} f_1(X_{t_1})] = P^x[M_{t_1}^{φ_z} (M_{t−t_1}^{φ_z} ∘ θ_{t_1}) f_1(X_{t_1})]
= P^x[M_{t_1}^{φ_z} f_1(X_{t_1}) P^{X_{t_1}}[M_{t−t_1}^{φ_z}]]
= P^{φ_z^{−1}(x)}[f_1 ∘ φ_z(X_{t_1}) P^{φ_z(X_{t_1})}[M_{t−t_1}^{φ_z}]]
= P^{φ_z^{−1}(x)}[f_1 ∘ φ_z(X_{t_1}) P^{X_{t_1}}[t − t_1 < ζ]].

Since (z, x) → φ_z^{−1}(x) and (z, x) → φ_z(x) are jointly measurable, and X is a Borel right process, we immediately obtain the desired measurability.

Now we assume that for any f_1, ..., f_{n−1} ∈ B(E) and any t_1 < ... < t_{n−1} ≤ t,

(x, z) → P^x[M_t^{Ψ(z)} f_1(X_{t_1}) ··· f_{n−1}(X_{t_{n−1}})]

is B(E) × B(E)-measurable. Then for any f_1, ..., f_n ∈ B(E) and any t_1 < ... < t_n ≤ t,

P^x[M_t^{Ψ(z)} f_1(X_{t_1}) ··· f_n(X_{t_n})]
= P^x[M_{t_1}^{φ_z} · (M_{t−t_1}^{φ_z} ∘ θ_{t_1}) f_1(X_{t_1}) [f_2(X_{t_2−t_1}) ··· f_n(X_{t_n−t_1})] ∘ θ_{t_1}]
= P^x[M_{t_1}^{φ_z} f_1(X_{t_1}) P^{X_{t_1}}[M_{t−t_1}^{φ_z} f_2(X_{t_2−t_1}) ··· f_n(X_{t_n−t_1})]].

Page 205: Seminar on Stochastic Processes, 1990


By the induction assumption we know that

(z, y) → P^y[M_{t−t_1}^{φ_z} f_2(X_{t_2−t_1}) ··· f_n(X_{t_n−t_1})]

is B(E) × B(E)-measurable. It follows that

(x, z) → P^x[M_t^{Ψ(z)} f_1(X_{t_1}) ··· f_n(X_{t_n})]

is B(E) × B(E)-measurable. ∎

Now let us put

A_t^φ = ln N_t^φ.

Aside from our generic assumptions about the structure of H, which have

appeared before in [3] and [4], we have made the special assumption that m(H) =

1. We need one other special hypothesis, without which our proposed method

cannot work.

HYPOTHESIS. There is a null set N such that for any t > 0 and for any w ∈ Ω − (N ∪ {t > ζ}),

∫_H (A_t^φ(w))^− m(dφ) < ∞.

Under this hypothesis, we can define, for any t > 0, A_t(w) = ∫_H A_t^φ(w) m(dφ) when t < ζ(w) and A_t(w) = −∞ when t > ζ(w). Since ∫ (A_t^φ)^− m(dφ) and ∫ (A_s^φ)^− m(dφ) are finite on Ω − (N ∪ {t+s > ζ}), the following identities are true almost surely on {t+s < ζ}:

A_{t+s} = ∫ (A_t^φ + A_s^φ ∘ θ_t) m(dφ) = ∫ A_t^φ m(dφ) + ∫ A_s^φ ∘ θ_t m(dφ) = A_t + A_s ∘ θ_t.

Thus the A_t defined above is an additive functional. In fact, the same argument yields the fact that for every x ∈ E, every t > 0 and every stopping time T,

A_{T+t} = A_T + A_t ∘ θ_T

Page 206: Seminar on Stochastic Processes, 1990


a.s. P^x on {T < ∞}.

Put

Then Jensen's inequality implies

P^x[M_t] = P^x[exp{∫ A_t^φ m(dφ)}]

≤ P^x[∫ exp{A_t^φ} m(dφ)]

≤ 1.

So Mt is finite almost surely and furthermore, the inequality above shows that Mt

is a supermartingale strong multiplicative functional. Our hypothesis ensures that

Mt is nonvanishing.
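The Jensen step above can be probed numerically: for a probability measure m (m(H) = 1) and any values A_t^φ, the exponential of the m-average is dominated by the m-average of the exponentials, because exp is convex. A minimal sketch, with a random discrete measure and random values standing in for m(dφ) and the actual functionals A_t^φ:

```python
import numpy as np

rng = np.random.default_rng(0)

# m is a probability measure on H (m(H) = 1), discretized to weights w
w = rng.dirichlet(np.ones(50))   # weights summing to 1
A = rng.normal(size=50)          # stand-ins for the values A_t^phi

lhs = np.exp(np.sum(w * A))      # exp of the m-average of A_t^phi
rhs = np.sum(w * np.exp(A))      # m-average of exp(A_t^phi) = M_t^phi

# Jensen's inequality for the convex function exp
assert lhs <= rhs + 1e-12
```

Taking expectations under P^x then bounds P^x[M_t] by the m-average of the supermartingale expectations, each of which is at most 1.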

As we mentioned above, we need the hypothesis to ensure that Mt does not vanish. To see what can happen without this hypothesis, let E = [0, 2π) be the circle group, and let Yt be the Lévy process on E which sits for an exponential length of time at its starting point x, after which it jumps to the point x + π (mod 2π), where it sits for an exponential length of time, etc. Let c > 0 be a function on E, and define

B_t = ∫_0^t c(Y_s) ds,   R_t = exp(−B_t),

and let Xt be the process Yt killed by Rt. Let H be the group of rotations on E:

H = {φ_a : φ_a(x) = x + a (mod 2π)}. Then H is isomorphic to E, and H has a finite left-invariant measure, namely, normalized Lebesgue measure. If φ = φ_a, then M_t^φ = exp(B_t^φ), where

B_t^φ = ∫_0^t [c(X_s) − c(X_s − a)] ds.

We see that

∫ B_t^φ m(dφ) = ∫ ∫_0^t [c(X_s) − c(X_s − a)] ds da

is finite only when c E Ll(da), so the hypothesis above is a necessary assumption.
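The two-state jump dynamics of Y_t are easy to simulate. The sketch below (jump rate, seed and starting point are arbitrary choices, not from the text) confirms that a path started at x only ever occupies x and x + π (mod 2π):

```python
import numpy as np

def simulate_Y(x0, T, rate=1.0, seed=0):
    """Simulate the jump process Y on E = [0, 2*pi): hold for an
    exponential time, jump from x to x + pi (mod 2*pi), repeat."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0 % (2 * np.pi)
    path = [(t, x)]
    while True:
        t += rng.exponential(1.0 / rate)
        if t >= T:
            break
        x = (x + np.pi) % (2 * np.pi)
        path.append((t, x))
    return path

path = simulate_Y(0.3, 10.0)
states = {round(x, 6) for _, x in path}
# only the starting point and its antipode are ever visited
assert states <= {round(0.3, 6), round((0.3 + np.pi) % (2 * np.pi), 6)}
```

Killing such a path with exp(−∫ c(Y_s) ds) then produces the process X_t of the example.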

Page 207: Seminar on Stochastic Processes, 1990


PROPOSITION. Mt has a right continuous version.

PROOF: In the proof of this proposition we are going to use the original topology on E. In this topology X is a Borel right process with lifetime ζ. Define a kernel Q^x(dω) from (E_Δ, B(E_Δ)) to (Ω, F) by

Q^x[f(X(t))] = P^x[M_t f(X(t))].

Then clearly

(1) For x ∈ E, Q^x(X_0 = x) = 1.

(2) For every t ≥ 0, every f ∈ bB(E_Δ) and every optional time T over F_{t+}^P ⊂ F_t,

(3) For every x ∈ E, the trace of Q^x on F_t^0 restricted to {t < ζ} is absolutely continuous relative to the trace of P^x on F_t^0 restricted to {t < ζ}.

Thus by Theorem 62.26 of [8] we know that there exists a right continuous supermartingale multiplicative functional M such that for every stopping time T over F_t and every H ∈ bF_T,

In other words, M has a right continuous version. •

Because of this proposition we are going to assume, in the sequel, that M is

actually right continuous.

DEFINITION. X is called H-translation invariant if the processes (φ(X_t), P^{φ^{−1}(x)}) and (X_t, P^x) are identical in law for every x ∈ E and every φ ∈ H.

THEOREM. If φ ∈ H, then we have

Page 208: Seminar on Stochastic Processes, 1990


for any f ∈ pB(E_Δ), any x ∈ E and any t.

PROOF: By the definition of Mt, we have

P^{φ^{−1}(x)}[f ∘ φ(X_t) M_t] = P^{φ^{−1}(x)}[f ∘ φ(X_t) e^{∫ A_t^ψ m(dψ)}]

= P^x[f(X_t) M_t^φ exp(∫ A_t^ψ ∘ τ_{φ^{−1}} m(dψ))]

= P^x[f(X_t) exp(∫ [A_t^φ + A_t^ψ ∘ τ_{φ^{−1}}] m(dψ))]

= P^x[f(X_t) exp(∫ A_t^{φψ} m(dψ))]

by the corollary at the end of section 2. Since m is left-invariant, this last expression equals P^x[f(X_t) M_t].

From this theorem, we can immediately get the following:

COROLLARY. X = (Ω, F, F_t, X_t, θ_t, Q^x) is H-translation invariant.

Now let f be a bounded positive continuous function on E, and let F be a

positive Fs-measurable random variable. Then

Q^x[f(X_s^{−1} X_{t+s}) (1_{{t<ζ}} ∘ θ_s) F 1_{{s<ζ}}] = Q^x[Q^{X(s)}[f(X_0^{−1} X_t); t < ζ] F; s < ζ]

= Q^x[Q^e[f(X_t); t < ζ] F; s < ζ]

= Q^e[f(X_t); t < ζ] Q^x[F; s < ζ].

In particular, if we let f = F = 1 in the above, we get

This together with the H-translation invariance implies that Q_t 1 = e^{−αt} for some α ≥ 0. Define

Then Nt is a right continuous, nonvanishing martingale multiplicative functional

of X. Now let X̂ = (Ω, F, F_t, X_t, θ_t, P̂^x) be the subprocess of X, constructed in Theorem 62.19 of [8], corresponding to N. Then X̂ is again a Borel right process.

Summarizing the above, we get our final result.

THEOREM. With the group structure on E given by H, X̂ is a Lévy process.

Page 209: Seminar on Stochastic Processes, 1990

Symmetry Groups of Markov Processes 205

REFERENCES

1. R. M. Blumenthal and R. K. Getoor, "Markov Processes and Potential Theory," Academic Press, New York, 1968.

2. C. Dellacherie and P. A. Meyer, "Probability and Potentials," North-Holland, Amsterdam, 1982.

3. J. Glover, Symmetry groups and translation invariant representations of Markov processes, Annals of Probab., to appear.

4. J. Glover, Symmetry groups of Markov processes and the diagonal principle, to appear.

5. J. Glover and J. Mitro, Symmetries and functions of Markov processes, Annals of Probab. 18 (1990), 655-668.

6. H. Heyer, "Probability Measures on Locally Compact Groups," Interscience, New York, 1977.

7. D. Montgomery and L. Zippin, "Topological Transformation Groups," Interscience, New York, 1955.

8. M. J. Sharpe, "General Theory of Markov Processes," Academic Press, San Diego, 1988.

Department of Mathematics, University of Florida, Gainesville, FL 32611

Page 210: Seminar on Stochastic Processes, 1990

On the Existence of Occupation Densities of Stochastic Integral Processes via Operator Theory

PETER IMKELLER

INTRODUCTION

Fourier analysis provides one of the well known methods by which local behaviour of Gaussian processes, especially their occupation densities, can be investigated. Berman [3] initiated an approach which proved to be rather successful also in the more general area of Gaussian random fields and random fields with independent increments (see Geman, Horowitz [6] for a survey, and Ehm [4]). The observation basic to this approach is contained in the statement: the Fourier transform of the occupation measure of a real valued function is square integrable if and only if it possesses a square integrable density, which then serves as a "local time" or "occupation density". It is therefore, at least in principle, quite general.
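Berman's criterion can be illustrated on a discretized path: the Fourier transform of the occupation measure is a simple time average, and it is the integrability of its squared modulus that decides existence of a square integrable occupation density. A sketch, with a seeded random walk standing in for the Gaussian process:

```python
import numpy as np

rng = np.random.default_rng(1)

# discretized path X on [0, 1] (a random-walk stand-in for a process)
n = 4096
X = np.cumsum(rng.normal(scale=np.sqrt(1.0 / n), size=n))

def occupation_ft(X, xi):
    """Fourier transform of the occupation measure:
    mu_hat(xi) = int_0^1 exp(i*xi*X_t) dt, by a Riemann sum."""
    return np.mean(np.exp(1j * xi * X))

# mu_hat(0) equals the total occupation time, i.e. 1
assert abs(occupation_ft(X, 0.0) - 1.0) < 1e-12

# Berman: a square integrable density exists iff int |mu_hat|^2 dxi < oo
xis = np.linspace(-100, 100, 801)
vals = [abs(occupation_ft(X, xi)) ** 2 for xi in xis]
energy = np.sum(vals) * (xis[1] - xis[0])   # truncated L2 mass
print("truncated L2 mass of mu_hat:", energy)
```

Only a truncated integral is computed here; the criterion concerns finiteness of the full integral over R.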

Random fields of a different origin have recently been studied intensively. They originate for example from stochastic differential equations, involving the Wiener process, with boundary conditions (e.g. periodic ones) destroying the adaptedness of their solutions with respect to some filtration. They can therefore only be described by stochastic integrals and their associated processes able to integrate non-adapted data (see Ocone, Pardoux [16], Nualart, Pardoux [14], [15]).

Page 211: Seminar on Stochastic Processes, 1990


Combining Skorohod's [21] original construction of an appropriate stochastic integral with ideas of Malliavin's calculus on Wiener space, and taking into account the surprising fact that Skorohod's integral has a simple interpretation as the adjoint operator of Malliavin's derivative, Nualart, Pardoux [13] presented a stochastic calculus fulfilling these requirements. They were able to explain some fine structure properties of the random fields described by Skorohod's integral, as for example the existence of a non-trivial quadratic variation. Yet their calculus provided no answer to questions about existence and properties of occupation densities.

In [7], we took up Berman's Fourier analytic approach on only a small portion of Wiener space, the second chaos, on which Skorohod's integral produces, so to speak, the simplest non-Gaussian fields in this setting. They are mainly described by a generally infinite-dimensional interaction matrix T of pairwise orthogonal Gaussian components. We wound up with translating sample properties into purely analytic terms and this way obtained a necessary and sufficient integral condition for the existence of occupation densities involving only T and Hilbert-Schmidt operators derived from it. At that time, however, the theory of operators and integral equations we fell upon after performing this translation procedure was rather new to us. So for example it came just as a surprise and was puzzling for some time that our integral condition seemed to be necessary and sufficient only in case T is a trace class operator. Meanwhile, after becoming just a little better acquainted with the relevant literature (a look at the books of Smithies [22], Jorgens [9] and in particular Simon [20] proved to be very profitable), the problem found its natural solution. We mainly learned that our integral condition could be nicely put into the terms of Fredholm's theory of integral equations,

Page 212: Seminar on Stochastic Processes, 1990


developed already in the first half of this century.

This is what our translation of the problem into operator theoretic language ultimately led to: a necessary and sufficient integral condition in terms of "Fredholm determinants" and "minors", if T is of trace class, and regularized Fredholm determinants and minors of the second order, if T is not of trace class. Its different versions, along with the "computable" descriptions of these objects we could find in the literature, will be presented in section 1. They still look rather complex and formidable. One reason for this might be our ignorance of a highly developed and sophisticated area of mathematics, leading to possibly awkward formulations. Another reason might well be the delicacy of the problem of the existence of occupation densities for complex objects as the ones considered, which might call for some stochastically intuitive notions, at least in a less abstract setting of fields described as solutions of particular stochastic differential equations, for example.

In section 2, we consider Skorohod integral processes defined by not necessarily symmetric finite-dimensional operators T. Put stochastically, only finitely many orthogonal Gaussian components are allowed to interact. In solving the problem of the existence of square integrable occupation densities in this innocent-looking context, again the complexity of the analysis to be invested came as a surprise to us. The easiest and simplest way we could think of was looking at a two-parameter family of finite-dimensional matrices, which form the essential building block of the integral condition to be confirmed, in the coordinates of their major axes. This led to considering the smoothness of the associated families of eigenvalues and orthogonal matrices, a non-trivial problem which could be formulated in the framework of the perturbation theory of linear operators as

Page 213: Seminar on Stochastic Processes, 1990


presented in Kato's [10] book. A major role is played by the variational description of eigenvalues, as described in the min-max principle of Courant-Fischer. Along the way, for technical reasons, we lost track of how the upper bound we ultimately end up with depends on the interaction matrix T. We therefore can only conjecture that the method developed will have some bearing also if T takes into account infinitely many Gaussian components. Our main result most likely can be carried over to a non-compact parameter space and a "locally finite" interaction, i.e. each point in parameter space has a neighborhood in which only finitely many of the Gaussian components considered are "alive".

O. NOTATIONS AND CONVENTIONS

We will be dealing with the Wiener process W indexed by [0,1], defined on some fixed probability space (Ω, F, P), and its stochastic integrals in the "second chaos". More precisely, if for g,h ∈ L²([0,1]) the tensor product of g and h is denoted by g ⊗ h(s,t) = g(s)h(t), ∫_0^1 h dW is the usual stochastic integral of a deterministic function with respect to a Gaussian process, and if a kernel f ∈ L²([0,1]²) is described in terms of an orthonormal basis (h_n)_{n∈N} of L²([0,1]) by

f = Σ_{i,j=1}^∞ a_ij h_i ⊗ h_j,

we consider the integrand

u_t = Σ_{i,j=1}^∞ a_ij h_i(t) ∫_0^1 h_j dW

and its "Skorohod integral process"

U_t = Σ_{i,j=1}^∞ a_ij (∫_0^t h_i dW ∫_0^1 h_j dW − ∫_0^t h_i h_j dλ),  t ∈ [0,1].
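This definition of U_t can be checked by Monte Carlo on a discretized Wiener path. In the sketch below the two basis functions, the 2×2 interaction matrix (a_ij) and the sample size are arbitrary choices; the point illustrated is that the compensating term ∫_0^t h_i h_j dλ centers the process:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512                                   # time grid on [0, 1]
dt = 1.0 / n
s = (np.arange(n) + 0.5) * dt

# two orthonormal functions on [0, 1], sampled on the grid
h = np.stack([np.ones(n), np.sqrt(2) * np.cos(np.pi * s)])
a = np.array([[0.5, 1.0], [-0.3, 0.2]])   # interaction matrix (a_ij)
t_idx = n // 2                            # evaluate U_t at t = 1/2

C = (h[:, :t_idx] @ h[:, :t_idx].T) * dt  # int_0^t h_i h_j d(lambda)
samples = []
for _ in range(20000):
    dW = rng.normal(scale=np.sqrt(dt), size=n)   # Wiener increments
    I_t = h[:, :t_idx] @ dW[:t_idx]              # int_0^t h_i dW
    I_1 = h @ dW                                 # int_0^1 h_j dW
    samples.append(I_t @ a @ I_1 - (a * C).sum())

# E[int_0^t h_i dW * int_0^1 h_j dW] = int_0^t h_i h_j d(lambda),
# so the compensator makes E[U_t] = 0
assert abs(np.mean(samples)) < 0.05
```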

Page 214: Seminar on Stochastic Processes, 1990


Apart from this simple definition, we will essentially not need results of the theory of Skorohod's integral based on Malliavin's calculus. But, of course, it will always be present in the background. We refer to Nualart, Pardoux [13], Nualart [11] or Watanabe [23]. For a system C of subsets of Ω, σ(C) denotes the σ-algebra generated by C.

In terms of linear operators on the Hilbert space L²([0,1]), the integral kernel f defines a Hilbert-Schmidt operator T. By T* we denote its adjoint, which is associated with the kernel f*(s,t) = f(t,s); by tr(T) its trace, if it exists; by I the identity on L²([0,1]). If f,g are L²-kernels,

fg(s,t) = f(s,·)g(·,t) = ∫_0^1 f(s,u)g(u,t) du,  s,t ∈ [0,1],

is their product kernel. If f = g, it induces the operator T². The scalar product on L²([0,1]) is denoted by ⟨·,·⟩, the norm by ‖·‖_2.

Especially in section 2, we will mostly be working in finite-dimensional spaces, say of dimension n, and use a matrix description. In this context, I = (δ_ij : 1≤i,j≤n) is the unit matrix. The scalar product in Rⁿ is written x*y, x,y ∈ Rⁿ. A vector of functions h_1, ..., h_n ∈ L²([0,1]) will be denoted by h, the vector of their Gaussian integrals ∫_0^1 h_1 dW, ..., ∫_0^1 h_n dW occasionally by ∫_0^1 h dW. The Lebesgue measure on the Borel subsets of any measurable subspace of Rⁿ is sometimes written λ, regardless of the dimension.

1. A CRITERION FOR EXISTENCE IN TERMS OF FREDHOLM'S THEORY

In [7], we gave a necessary and sufficient condition for the

existence of occupation densities of Skorohod integral processes in the

Page 215: Seminar on Stochastic Processes, 1990


second Wiener chaos. It was essentially described by an integral condition featuring the term exp(−i tr(H)) det(I+iH), where H is a Hilbert-Schmidt operator closely related to the one which determines the stochastic integral process considered. Written in the above way, the condition of course only makes sense if H is a trace class operator. This, in turn, restricted the validity of the criterion to integral processes based on a trace class operator themselves. For some time, this strange circumstance proved to be puzzling. Only when we tried to reinterpret the integral condition in the light of the theory of integral equations named after Fredholm and developed already in the first decades of this century did the problem dissolve completely. Formally, the two components of the term mentioned above can be taken together as a "regularized" Fredholm determinant of second order, which is defined and behaves smoothly on the whole space of Hilbert-Schmidt operators and requires no condition on the trace. Consequently, after being put into these and related terms, our existence criterion for occupation densities generalized completely and naturally. Therefore, we reformulate essential parts of [7] using Fredholm's theory as developed by Carleman, Schmidt, Hilbert, Smithies, Plemelj and others, hereby using Smithies' [22] and Jorgens' [9] books as guidelines, but mainly the more modern presentation of Simon [20] for references. For simplicity, the results will not be stated in the most flexible form of [7], using an arbitrary subspace of L²([0,1]) containing the range of the basic Hilbert-Schmidt operator as "universe", but L²([0,1]) itself. On the other hand, we will choose a slightly more general setting, including non-symmetric kernels as well. We first recall the main general result. If f ∈ L²([0,1]²) is a not necessarily symmetric kernel, T the Hilbert-Schmidt operator associated with it,

Page 216: Seminar on Stochastic Processes, 1990


u_t = ∫_0^1 f(t,s) dW_s, and

U_t = δ(1_{[0,t]} u), t ∈ [0,1], its Skorohod integral process, we set

A(s,t) = sgn(t−s) 1_{[s∧t, s∨t]},

H(s,t,x) = −x(T* A(s,t) + A(s,t) T),

F(s,t,x) = I + iH(s,t,x),  s,t ∈ [0,1], x ∈ R,

we obtain that, provided T is of trace class, U possesses a square integrable occupation density (the attribute "balanced" of [7] is omitted here, since this is the only kind discussed in this paper) iff

(1) ∫_R ∫_0^1 ∫_0^1 exp(i/2 tr(H(s,t,x))) (det F(s,t,x))^{−1/2}
[f(s,·) F(s,t,x)^{−1} f*(·,s) · f(t,·) F(s,t,x)^{−1} f*(·,t)
+ 2(f(s,·) F(s,t,x)^{−1} f*(·,t))²] ds dt dx < ∞.

To obtain (1) from theorem 2.1 of [7], we took care of the non-symmetry

of T, f and translated a basis dependent description into a basis

independent one using integral kernels instead. To express the main

ingredients of (1) in Fredholm's theory, we have to introduce the

following objects. Assume S is a Hilbert-Schmidt operator. Then the

operator

R2 (S) = (I+S) exp(-S) - I

is a trace class operator (see Simon [20], p. 106). It therefore makes sense to define

det_2(I+S) = det(I + R_2(S)),

the "regularized Fredholm determinant", and

D_2(S) = −S (I + R_2(S))^{−1} det_2(I+S) exp(−S),

the "regularized Fredholm minor" (see Simon [20], p. 107).

In case S is a trace class operator, we may reverse the regularization

and wind up with the familiar formula

(2) det_2(I+S) = det(I+S) exp(−tr(S)).
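Identity (2) is easy to confirm numerically for a matrix S, a finite-dimensional stand-in for a trace class operator. The matrix exponential below is computed by eigendecomposition, which is almost surely valid for a random matrix:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
S = rng.normal(size=(n, n)) * 0.4

def expm(M):
    # matrix exponential via eigendecomposition (M assumed diagonalizable)
    w, V = np.linalg.eig(M)
    return (V * np.exp(w)) @ np.linalg.inv(V)

I = np.eye(n)
det2 = np.linalg.det((I + S) @ expm(-S))           # det((I+S) exp(-S))
rhs = np.linalg.det(I + S) * np.exp(-np.trace(S))  # det(I+S) exp(-tr S)
assert np.isclose(det2, rhs)
```

For genuine Hilbert-Schmidt operators without a trace, only the left-hand side survives, which is the point of the regularization.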

Page 217: Seminar on Stochastic Processes, 1990

214 P. Imkeller

In this case we need not regularize to get "Fredholm minors"

D_1(S) = −S(I+S)^{−1} det(I+S)

(see Simon [20], p. 67). For the cases we are interested in, we will

give alternative and more transparent descriptions of these quantities

below. Now determinants and resolvents in (1) can be given a new

shape. This leads to the following integral condition.

THEOREM 1: U possesses a square integrable occupation density iff

∫_R ∫_0^1 ∫_0^1 (det_2 F(s,t,x))^{−5/2}
{f(s,·) [det_2(F(s,t,x)) I + D_2(iH(s,t,x))] f*(·,s)
· f(t,·) [det_2(F(s,t,x)) I + D_2(iH(s,t,x))] f*(·,t)
+ 2(f(s,·) [det_2(F(s,t,x)) I + D_2(iH(s,t,x))] f*(·,t))²} ds dt dx < ∞.

PROOF: If T is of trace class, (1) gives the necessary and sufficient

condition. Now apply (2) to S = iH(s,t,x) and use the well known

formula for the resolvent

F(s,t,x)^{−1} = I + det_2(F(s,t,x))^{−1} D_2(iH(s,t,x))

(see Simon [20], pp. 107, 108 and Smithies [22], pp. 96-99). The

resulting integral condition now makes sense for arbitrary

Hilbert-Schmidt operators. Hence an approximation argument as

contained in propositions 2.8, 2.9 of [7] completes the proof. •

In case T is of trace class, we can use non-regularized

determinants and minors.

THEOREM 2: Assume T is a trace class operator. Then U possesses a

square integrable occupation density iff

Page 218: Seminar on Stochastic Processes, 1990


∫_R ∫_0^1 ∫_0^1 exp(i/2 tr(H(s,t,x))) det(F(s,t,x))^{−5/2}
{f(s,·) [det(F(s,t,x)) I + D_1(iH(s,t,x))] f*(·,s)
· f(t,·) [det(F(s,t,x)) I + D_1(iH(s,t,x))] f*(·,t)
+ 2(f(s,·) [det(F(s,t,x)) I + D_1(iH(s,t,x))] f*(·,t))²} ds dt dx < ∞.

PROOF: Proceed as in the proof of the preceding theorem and use the alternative equation for the resolvent

F(s,t,x)^{−1} = I + det F(s,t,x)^{−1} D_1(iH(s,t,x))

(see Simon [20], p. 67). •

So far we have gained some generality. But we have only replaced the complicated resolvents F(s,t,x)^{−1} by another set of complex objects. Now determinants and minors can be developed in power series featuring new expressions which look a little more easily accessible. This interpretation is due to the work of Fredholm, Plemelj and Smithies in the case of trace class operators. For general HS-operators, Hilbert, Plemelj and Smithies deduced the formulas we will now be using. This time, we start by looking at trace class operators.

PROPOSITION 1: Let g ∈ L²([0,1]²) induce the trace class operator G. For x_1, ..., x_n, y_1, ..., y_n ∈ [0,1] let

G(x_1 ... x_n; y_1 ... y_n) = det(g(x_i, y_j))_{1≤i,j≤n},

a_n(G) = ∫_0^1 ... ∫_0^1 G(x_1 ... x_n; x_1 ... x_n) dx_1 ... dx_n,  a_0(G) = 1,

Page 219: Seminar on Stochastic Processes, 1990


G_n(x,y) = ∫_0^1 ... ∫_0^1 G(x x_1 ... x_n; y x_1 ... x_n) dx_1 ... dx_n,  x,y ∈ [0,1],

Δ_n(G) the operator induced by the kernel G_n, Δ_0(G) = G. Then for any λ ∈ C

det(I + λG) = Σ_{n=0}^∞ (λⁿ/n!) a_n(G),   D_1(λG) = −Σ_{n=0}^∞ (λ^{n+1}/n!) Δ_n(G).

PROOF: See Simon [20], pp. 51, 69. •

The formulas of proposition 1 are due to Fredholm [5].
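In the discrete analogue (counting measure on n points in place of Lebesgue measure on [0,1]), a_n(G)/n! becomes the sum of all principal n×n minors of a matrix G, and Fredholm's series for det(I+λG) terminates. A sketch of this finite-dimensional check:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
N = 5
G = rng.normal(size=(N, N))
lam = 0.7

def minor_sum(G, n):
    """Sum of all principal n x n minors of G (the discrete a_n(G)/n!)."""
    if n == 0:
        return 1.0
    return sum(np.linalg.det(G[np.ix_(S, S)])
               for S in combinations(range(len(G)), n))

series = sum(lam**n * minor_sum(G, n) for n in range(N + 1))
assert np.isclose(series, np.linalg.det(np.eye(N) + lam * G))
```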

Alternatively, we can use formulas developed by Plemelj-Smithies.

PROPOSITION 2: Let g ∈ L²([0,1]²) induce the trace class operator G. For n ∈ N let σ_n = tr(Gⁿ) and let

τ_n(G) = det of the n×n matrix with rows
(σ_1, n−1, 0, ..., 0), (σ_2, σ_1, n−2, ..., 0), ..., (σ_{n−1}, σ_{n−2}, ..., σ_1, 1), (σ_n, σ_{n−1}, ..., σ_2, σ_1),

δ_n(G) = det of the (n+1)×(n+1) matrix with rows
(G, n, 0, ..., 0), (G², σ_1, n−1, ..., 0), ..., (G^{n+1}, σ_n, σ_{n−1}, ..., σ_1)

(the first column consists of HS-operators; the determinant is expanded formally along it), τ_0(G) = 1, δ_0(G) = G. Then for any λ ∈ C

det(I + λG) = Σ_{n=0}^∞ (λⁿ/n!) τ_n(G),   D_1(λG) = −Σ_{n=0}^∞ (λ^{n+1}/n!) δ_n(G).

Page 220: Seminar on Stochastic Processes, 1990


PROOF: See Simon [20], pp. 68, 69. •

If G is not of trace class, we know already that determinants and

minors have to be regularized. In terms of the matrices used in their

power series description, this simply amounts to removing the

diagonal.
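In terms of traces, "removing the diagonal" means dropping σ_1 from the recursions that generate the series coefficients. The sketch below uses the Newton-type recursion n·c_n = Σ_k (−1)^{k−1} σ_k c_{n−k} for the coefficients of det(I+λG), and checks that zeroing σ_1 turns it into the series for det_2(I+λG) = det(I+λG) exp(−λ tr G). Matrix size, scale and λ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 4
G = rng.normal(size=(N, N)) * 0.3
lam = 0.5

sigma = [None] + [np.trace(np.linalg.matrix_power(G, k)) for k in range(1, 25)]

def series_coeffs(zero_sigma1, terms=24):
    """Coefficients c_n of det(I + lam*G) via the trace recursion
    n*c_n = sum_k (-1)^(k-1) sigma_k c_{n-k}; with sigma_1 := 0 the
    same recursion generates the regularized determinant det2."""
    s = sigma[:]
    if zero_sigma1:
        s[1] = 0.0
    c = [1.0]
    for n in range(1, terms):
        c.append(sum((-1) ** (k - 1) * s[k] * c[n - k]
                     for k in range(1, n + 1)) / n)
    return c

det_series = sum(c * lam**n for n, c in enumerate(series_coeffs(False)))
det2_series = sum(c * lam**n for n, c in enumerate(series_coeffs(True)))

assert np.isclose(det_series, np.linalg.det(np.eye(N) + lam * G), atol=1e-6)
assert np.isclose(det2_series,
                  np.linalg.det(np.eye(N) + lam * G) * np.exp(-lam * np.trace(G)),
                  atol=1e-6)
```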

PROPOSITION 3: Let g ∈ L²([0,1]²) induce the Hilbert-Schmidt operator G. Define G̃(x_1 ... x_n; y_1 ... y_n) as G(x_1 ... x_n; y_1 ... y_n), but with every entry of the form g(x_i, x_i) replaced by 0, and set

ã_n(G) = ∫_0^1 ... ∫_0^1 G̃(x_1 ... x_n; x_1 ... x_n) dx_1 ... dx_n,  ã_0(G) = 1,

G̃_n(x,y) = ∫_0^1 ... ∫_0^1 G̃(x x_1 ... x_n; y x_1 ... x_n) dx_1 ... dx_n,

Δ̃_n(G) the operator induced by the kernel G̃_n, Δ̃_0(G) = G. Then for any λ ∈ C

det_2(I + λG) = Σ_{n=0}^∞ (λⁿ/n!) ã_n(G),   D_2(λG) = −Σ_{n=0}^∞ (λ^{n+1}/n!) Δ̃_n(G).

PROOF: See Simon [20], p. 108, and Smithies [22], p. 99. •

Plemelj-Smithies' formulas possess the following regularizations.

PROPOSITION 4: Let g ∈ L²([0,1]²) induce the Hilbert-Schmidt operator G.

Page 221: Seminar on Stochastic Processes, 1990


For n ∈ N let σ_n = tr(Gⁿ), and define τ̄_n(G) and δ̄_n(G) as τ_n(G) and δ_n(G) in proposition 2, but with every entry σ_1 replaced by 0 (again the first column of the matrix defining δ̄_n(G) consists of HS-operators); τ̄_0(G) = 1, δ̄_0(G) = G. Then for any λ ∈ C

det_2(I + λG) = Σ_{n=0}^∞ (λⁿ/n!) τ̄_n(G),   D_2(λG) = −Σ_{n=0}^∞ (λ^{n+1}/n!) δ̄_n(G).

PROOF: See Simon [20], p. 108, and Smithies [22], p. 94.

The preceding proposition now allows us to put the conditions of

theorems 1 and 2 into more readily accessible, yet rather complex,

forms.

THEOREM 3: For s,t ∈ [0,1] let

H = T*A(s,t) + A(s,t)T.

U possesses a square integrable occupation density iff

(3) ∫_R ∫_0^1 ∫_0^1 (Σ_{n=0}^∞ ((−ix)ⁿ/n!) ã_n(H))^{−5/2}

{[ff*(s,s) + Σ_{n=1}^∞ ((−ix)ⁿ/n!) (ff*(s,s) ã_n(H) − n (f Δ̃_{n−1}(H) f*)(s,s))]

· [ff*(t,t) + Σ_{n=1}^∞ ((−ix)ⁿ/n!) (ff*(t,t) ã_n(H) − n (f Δ̃_{n−1}(H) f*)(t,t))]

Page 222: Seminar on Stochastic Processes, 1990


+ 2[ff*(s,t) + Σ_{n=1}^∞ ((−ix)ⁿ/n!) (ff*(s,t) ã_n(H) − n (f Δ̃_{n−1}(H) f*)(s,t))]²} ds dt dx < ∞.

Alternatively, ã_n(H) resp. Δ̃_n(H) may be replaced by τ̄_n(H) resp. δ̄_n(H).

PROOF: Combine propositions 3, 4 with theorem 1 and compare the resulting power series term by term. •

In the trace class case, we can again replace the "∼"-coefficients in the integral criterion of theorem 3 with their non-regularized counterparts.

THEOREM 4: Assume T is a trace class operator. Then U possesses a square integrable occupation density iff the analogue of (3) holds with ã_n(H) replaced by a_n(H), and Δ̃_n(H) by Δ_n(H). Alternatively, a_n(H) resp. Δ_n(H) may be replaced by τ_n(H) resp. δ_n(H).

PROOF: This time we have to combine propositions 1, 2 and theorem 2, and compare the power series appearing term by term. •

REMARK: Though the constituents of (3) are computable and there are relatively simple recursive formulas for the coefficients a_n(H), Δ_n(H) etc. (see Smithies [22], pp. 74, 88), the criteria of theorem 3 or theorem 4 seem to be hard to verify. In particular, the series in (3) seem to simplify further in only rather special cases. Therefore, so far we have just been able to use the integral conditions directly in some particular cases. Other cases, for example the one considered in the subsequent section, seem to favor the more flexible criterion of theorem 2.1 of [7] in which the analysis is restricted to a subspace of L²([0,1]).

Page 223: Seminar on Stochastic Processes, 1990


2. OCCUPATION DENSITIES IN THE FINITE DIMENSIONAL CASE

We will now look at Skorohod integral processes in the second

chaos described by only finitely many interacting orthogonal Gaussian

components. The main result of this section is that they always

possess square integrable occupation densities. The nature of the

problem makes it more convenient to work with a form of the integral

conditions discussed in section 1, the operators of which live on the

finite dimensional range of T. Criteria of this form were presented in the first two theorems of section 2 of [7]. Choosing N = R(T) there (cf. p. 14 of [7]) brings in a nontrivial real part of F(s,t,x), namely

G(s,t,x) = P + x² T*A(s,t)B(s,t)T, where B(s,t) = P − A(s,t),

P the orthogonal projection on N. Instead of F(s,t,x), we will be able to work with G(s,t,x) alone. To verify the resulting integral condition, we look at G(s,t,x) in its diagonal form. This amounts to following the major axes of A(s,t) all along the way as s,t run through [0,1]. The main problem we have to face hereby consists in keeping track of the eigenvalues λ(s,t) and orthogonal projections O(s,t). As long as A(s,t) itself varies analytically, the perturbation theory of linear operators based upon the variational description of its eigenvalues in the Courant-Fischer min-max principle yields nice results about the connection between λ(s,t) and O(s,t) and the analyticity of these functions. This enables us to solve our problem

for analytic data first. We then approximate general data by analytic

ones to carry the result over to any finite dimensional operator T.

Finally, a very simple example will be given to underline that our

results are out of reach of the usual techniques of enlargement of

Page 224: Seminar on Stochastic Processes, 1990


filtrations hooked up with martingale theory, as developed in Jeulin,

Yor [8].

To be more precise now, assume (h_1, ..., h_n) is an orthonormal family in L²([0,1]) and

f = Σ_{1≤i,j≤n} a_ij h_i ⊗ h_j

with some real matrix (a_ij)_{1≤i,j≤n}, T the operator associated with f. Moreover, let

and

P the orthogonal projection on N.

We tacitly assume, finally, that n, via the orthonormal family, is chosen "minimal", i.e. that T is invertible. In particular, from now

on, n is supposed to be fixed and will not get a special mention in

propositions and theorems.

Before we start analyzing the Fourier analytic criterion for the existence of occupation densities for the Skorohod integral process associated with T, we present an inequality for the inverses of an ordered pair of symmetric, positive definite matrices which will prove to be very useful along the way.

PROPOSITION 1: Let A, B be n-dimensional real symmetric non-negative definite matrices. Suppose that

A ≥ B > 0.

Then A^{−1} ≤ B^{−1}.

PROOF: We found the following nice argument in the book of Bellman

[2], p. 93. Consider the function

Page 225: Seminar on Stochastic Processes, 1990


f: y ↦ 2x*y − y*Ay.

To determine the extrema of f we may, by an orthogonal transformation of the coordinates, assume that A is in diagonal form, i.e.

A = diag(a_1, ..., a_n) with a_1, ..., a_n > 0.

Here we used the assumption that the considered matrices are symmetric and positive definite. Now

∇f(y) = 2x − 2Ay,  ∇²f(y) = −2A,  y ∈ Rⁿ.

Since −2A is negative definite, f has a maximum where x = Ay, i.e. y = A^{−1}x. We therefore obtain

The same equation being true for B, we obtain

x*A^{−1}x = max_y {2x*y − y*Ay}

≤ max_y {2x*y − y*By}   (A ≥ B)

= x*B^{−1}x.

Since this inequality holds for any x ∈ Rⁿ, we are done. •
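Proposition 1 — the antitonicity of matrix inversion on the positive definite cone — is easy to probe numerically with random symmetric matrices:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
M = rng.normal(size=(n, n))
B = M @ M.T + n * np.eye(n)      # symmetric positive definite
D = rng.normal(size=(n, n))
A = B + D @ D.T                  # A >= B > 0 by construction

def is_psd(S, tol=1e-10):
    """Check positive semi-definiteness via the symmetric eigenvalues."""
    return np.all(np.linalg.eigvalsh((S + S.T) / 2) >= -tol)

assert is_psd(A - B)
# proposition 1: A >= B > 0 implies B^{-1} >= A^{-1}
assert is_psd(np.linalg.inv(B) - np.linalg.inv(A))
```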

With the aid of proposition 1, we gain the following sufficient condition from theorem 2.2 of [7].

PROPOSITION 2: For s,t ∈ [0,1], x ∈ R let

A(s,t) = sgn(t−s) · P (∫_{s∧t}^{s∨t} h h* dλ) P,

B(s,t) = P − A(s,t),

G(s,t,x) = P + x² T*A(s,t)B(s,t)T.

Assume that

Page 226: Seminar on Stochastic Processes, 1990

∫_R ∫_0^1 ∫_0^1 det G(s,t,x)^{−1/2} [h*(s) T G(s,t,x)^{−1} T* h(s) · h*(t) T G(s,t,x)^{−1} T* h(t)] ds dt dx < ∞.

Then U possesses a square integrable occupation density.

PROOF: Since T is a trace class operator, a slight extension of theorem 2.2 of [7] to non-symmetric operators (see section 1) tells us that U possesses a square integrable occupation density if

(4) ∫_R ∫_0^1 ∫_0^1 det G(s,t,x)^{−1/2} det C(s,t,x)^{−1/4}
{h*(s) T G(s,t,x)^{−1/2} C(s,t,x)^{−1/2} G(s,t,x)^{−1/2} T* h(s)
· h*(t) T G(s,t,x)^{−1/2} C(s,t,x)^{−1/2} G(s,t,x)^{−1/2} T* h(t)} ds dt dx < ∞.

Here

C(s,t,x) = P + G(s,t,x)^{−1/2} H(s,t,x) G(s,t,x)^{−1} H(s,t,x) G(s,t,x)^{−1/2},

H(s,t,x) = −x(T*A(s,t) + A(s,t)T).

Now let

J = G(s,t,x)^{−1/2} H(s,t,x) G(s,t,x)^{−1/2}.

Then

C(s,t,x) = P + J² ≥ P > 0.

Hence by proposition 1 (applied on N, where P is the identity), C(s,t,x)^{−1} ≤ P, and so

C(s,t,x)^{−1/2} ≤ I.

Also,

det C(s,t,x)^{−1/4} ≤ 1.

Hence (4) follows from the integral condition in the statement of the proposition and the proof is finished. •

REMARK: It is worth noting that we actually got a little more than

Page 227: Seminar on Stochastic Processes, 1990


what the statement of proposition 2 says. We have

∫_R ∫_0^1 ∫_0^1 E(exp(ix(U_t − U_s)) U_s² U_t²) ds dt dx

≤ ∫_R ∫_0^1 ∫_0^1 det G(s,t,x)^{−1/2} [h*(s) T G(s,t,x)^{−1} T* h(s) · h*(t) T G(s,t,x)^{−1} T* h(t)] ds dt dx.

To see this, look at proposition 2.10 of [7].

Next, to establish the integral condition figuring in proposition

2, we eliminate the influence of the "interaction amplitudes" described

in the coefficients of T. This is done to avoid some technical

problems.

PROPOSITION 3: For s,t ∈ [0,1], x ∈ R let

K(s,t,x) = I + x² A(s,t) B(s,t).

There is a constant c_1 which only depends on (a_ij : 1≤i,j≤n) such that for any s,t ∈ [0,1], x ∈ R

det G(s,t,x)^{−1/2} [h*(s) T G(s,t,x)^{−1} T* h(s) · h*(t) T G(s,t,x)^{−1} T* h(t)]

≤ c_1 · det K(s,t,x)^{−1/2} [h*(s) K(s,t,x)^{−1} h(s) · h*(t) K(s,t,x)^{−1} h(t)].

PROOF: We have to show that there is a constant c_2 > 0 only depending on (a_ij : 1≤i,j≤n) such that

T G(s,t,x)^{−1} T* ≤ (c_2 I + x² A(s,t)B(s,t))^{−1},  s,t ∈ [0,1], x ∈ R.

Now by definition

G(s,t,x) = T*((TT*)^{−1} + x² A(s,t)B(s,t))T.

So we have to show

((TT*)^{−1} + x² A(s,t)B(s,t))^{−1} ≤ (c_2 I + x² A(s,t)B(s,t))^{−1}.

But proposition 1 reduces this inequality further to

Page 228: Seminar on Stochastic Processes, 1990


(TT*)^{−1} ≥ c_2 I.

This inequality obviously holds, if we let c_2 be the smallest eigenvalue of (TT*)^{−1}. This quantity, due to the fact that TT* is symmetric and positive definite, is obviously positive. •

To treat the integrand figuring on the right hand side of the

inequality of proposition 3 further, we look at the A(s,t) along their major axes. This involves working in moving coordinate systems and with moving eigenvalues. Since we want to have some smoothness in s,t for both objects, we face a problem usually encountered in the perturbation theory of finite dimensional linear operators. Its main theorems state that analytic behaviour of one-parameter families of linear operators is inherited by both eigenvalues and projections on the eigenspaces. Continuity or differentiability alone is inherited by just the eigenvalues, whereas eigenspaces may behave rather badly (see Kato [10], p. 111, for an example of Rellich [19]). Of course, since h_1, ..., h_n are just square integrable functions in general, A(s,t) is no more than continuously differentiable in s,t. To make things even worse, it is a two-parameter family of matrices. And in this situation, perturbation theory becomes more complicated. Not even analyticity is inherited by the eigenvalues (see Rellich [19], p. 37, Baumgärtel [1]). We circumvent these problems in the following way. First of all, we fix either s or t and consider the one-parameter families of matrices as the respective other parameter varies. In addition, we assume that h_1, ..., h_n are analytic (for example polynomials) first and come back to the general situation later using a global approximation argument.
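The stability of ordered eigenvalues that this program relies on is captured by Weyl's inequality, itself a consequence of the Courant-Fischer min-max principle: the k-th ordered eigenvalue moves by at most the operator norm of the perturbation, while eigenvectors enjoy no such guarantee. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 6

def sym(M):
    return (M + M.T) / 2

A = sym(rng.normal(size=(n, n)))
E = sym(rng.normal(size=(n, n))) * 1e-3   # small symmetric perturbation

# Weyl: |lambda_k(A+E) - lambda_k(A)| <= ||E||_2 for every k
# (eigvalsh returns eigenvalues in ascending order)
gap = np.abs(np.linalg.eigvalsh(A + E) - np.linalg.eigvalsh(A)).max()
assert gap <= np.linalg.norm(E, 2) + 1e-12
```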

PROPOSITION 4: Let h_1, ..., h_n be analytic functions, s ∈ [0,1]. Then

Page 229: Seminar on Stochastic Processes, 1990


there exist families (λ_i(s,t) : s≤t≤1, 1≤i≤n) of real numbers and (o_i(s,t) : s≤t≤1, 1≤i≤n) of vectors in Rⁿ such that

(i) t ↦ λ_i(s,t) and t ↦ o_i(s,t) are analytic except at finitely many points,

(ii) t ↦ λ_i(s,t) is increasing,

(iii) 0 ≤ λ_i(s,t) ≤ 1, λ_i(s,s) = 0, λ_i(0,1) = 1, t ∈ [s,1], 1≤i≤n,

(iv) for t ∈ [s,1] the matrix O(s,t) = (o_1(s,t), ..., o_n(s,t)) is orthogonal and

O*(s,t) A(s,t) O(s,t) = diag(λ_1(s,t), ..., λ_n(s,t)).

A similar statement holds with respect to s ∈ [0,t] for t fixed.

PROOF: Since h_1, ..., h_n are analytic, so is the family of matrices (A(s,t) : s≤t≤1). Hence there is an integer m ≤ n, a family (μ_1(s,t), ..., μ_m(s,t) : s≤t≤1) (eigenvalues), integers p_1, ..., p_m (their multiplicities) such that Σ_{j=1}^m p_j = n, and a family (P_1(s,t), ..., P_m(s,t) : s≤t≤1) of orthogonal projections such that for any 1≤j≤m

P_j(s,t) is the orthogonal projection on the eigenspace of μ_j(s,t), s≤t≤1,

and such that the functions t ↦ μ_j(s,t), t ↦ P_j(s,t) are analytic, 1≤j≤m (see Kato [10], pp. 63-65, 120). Now fix 1≤j≤m. Using an analytic family of unitary transformations (see Kato [10], pp. 104-106, 121, 122), we can construct analytic families of orthonormal vectors, say

(e_j^1(s,t), ..., e_j^{p_j}(s,t) : s≤t≤1),

a smoothly moving basis of the subspaces of R^n on which the (P_j(s,t): s ≤ t ≤ 1) project. Next, we take multiplicities into account. For p_1 + ... + p_{j-1} < i ≤ p_1 + ... + p_j let

ν_i(s,t) = μ_j(s,t), e_i(s,t) = e_j^{i - p_1 - ... - p_{j-1}}(s,t), s ≤ t ≤ 1.

Then the eigenvectors e_i(s,t) correspond to the eigenvalues ν_i(s,t), 1 ≤ i ≤ n. But still, ν_i(s,t) < ν_{i+1}(s,t) is possible. We therefore have to rearrange the eigenvalues to make (iii) valid. For s ≤ t ≤ 1 fixed we therefore define a permutation σ of {1, ..., m} such that

μ_{σ(1)}(s,t) ≥ ... ≥ μ_{σ(m)}(s,t).

Due to continuity, we obtain the same permutations on whole subintervals of [s,1]. Analyticity and compactness imply that we need only finitely many permutations on the whole of [s,1]. If we perform these permutations on the ν_i(s,t) and e_i(s,t), 1 ≤ i ≤ n, s ≤ t ≤ 1, we obtain the desired families

(λ_i(s,t): s ≤ t ≤ 1, 1 ≤ i ≤ n) and (o_i(s,t): s ≤ t ≤ 1, 1 ≤ i ≤ n).

By construction, they are analytic except at finitely many points of [s,1]. We have therefore proved (i) and (iv). To prove (ii) and the rest of (iii), let us look a little more closely at A(s,t). Observe that for y ∈ R^n

y* A(s,t) y = ∫_s^t (y* h(u))² du.

Therefore the family (A(s,t): s ≤ t ≤ 1) of nonnegative definite matrices possesses the properties

0 ≤ A(s,t) ≤ I, A(s,s) = 0, A(0,1) = I,

and

t ↦ A(s,t) is increasing on [s,1] with respect to the usual ordering of non-negative definite symmetric matrices.

These facts together with the Courant-Fischer min-max principle, expressed for example in Kato [10], pp. 60-61, yield the desired inequalities. ∎
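The properties just proved are easy to confirm numerically. This sketch (ours, using the hypothetical orthonormal pair h_1 ≡ 1, h_2(u) = √2 cos(2πu) on [0,1]) builds A(s,t) with entries ∫_s^t h_i h_j du and checks that its spectrum lies in [0,1], that A(0,1) = I, and that t ↦ A(s,t) is increasing in the positive semidefinite order:

```python
import numpy as np

hs = [lambda u: np.ones_like(u),
      lambda u: np.sqrt(2.0) * np.cos(2.0 * np.pi * u)]  # orthonormal on [0,1]

def A(s, t, m=20000):
    # A(s,t)_{ij} = \int_s^t h_i(u) h_j(u) du, midpoint rule
    u = s + (t - s) * (np.arange(m) + 0.5) / m
    H = np.vstack([h(u) for h in hs])
    return (t - s) / m * (H @ H.T)

lam = np.linalg.eigvalsh(A(0.2, 0.7))
identity_check = np.linalg.eigvalsh(A(0.0, 1.0))
growth = np.linalg.eigvalsh(A(0.2, 0.9) - A(0.2, 0.7))  # should be >= 0
```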

Proposition 4 allows a further reduction of the integral condition we have to establish. Due to the problems alluded to above, we will have to be careful with two-parameter families and symmetrically fix s for one part of the integrand, t for the other.

PROPOSITION 5: Let h_1, ..., h_n be analytic functions, s, t ∈ [0,1], s ≤ t. Assume (λ_i(s,v): s ≤ v ≤ 1, 1 ≤ i ≤ n) and (o_i(s,v): s ≤ v ≤ 1, 1 ≤ i ≤ n), resp. (μ_i(u,t): 0 ≤ u ≤ t, 1 ≤ i ≤ n) and (p_i(u,t): 0 ≤ u ≤ t, 1 ≤ i ≤ n), are given according to proposition 4 for s resp. t fixed. Let

O(s,v) = (o_1(s,v), ..., o_n(s,v)), P(u,t) = (p_1(u,t), ..., p_n(u,t)),

and

k(t) = O*(s,t) h(t), l(s) = P*(s,t) h(s).

Moreover, let

c_2 = max{∫_R (1 + x⁴)^{-3/2} dx, ∫_R (1 + x²)^{-5/2} dx}.

Then

(i) λ_i(u,v) = μ_i(u,v) for all s ≤ u ≤ v ≤ t, 1 ≤ i ≤ n,

(ii) ∫_R det K(s,t,x)^{-1/2} [h*(s) K(s,t,x)^{-1} h(s) h*(t) K(s,t,x)^{-1} h(t)] dx

≤ c_2 Σ_{i,j=1}^n l_i²(s) k_j²(t) [λ_i(s,t)(1 - λ_i(s,t)) λ_j(s,t)(1 - λ_j(s,t))]^{-1/4}.

PROOF: Though the procedure of arranging the eigenvalues in descending order in the proof of proposition 4 may destroy their overall analyticity, it preserves continuity. This obviously implies (i). To prove (ii), first observe that, due to the choice of λ_i(s,t), O(s,t),

P(s,t), for any x ∈ R

O*(s,t) K(s,t,x) O(s,t) = diag(1 + x² λ_i(s,t)(1 - λ_i(s,t)): 1 ≤ i ≤ n),

and a similar equation with P(s,t) in place of O(s,t). Hence

h*(t) K(s,t,x)^{-1} h(t) = Σ_{j=1}^n k_j²(t) [1 + x² λ_j(s,t)(1 - λ_j(s,t))]^{-1},

h*(s) K(s,t,x)^{-1} h(s) = Σ_{i=1}^n l_i²(s) [1 + x² λ_i(s,t)(1 - λ_i(s,t))]^{-1},

and therefore

(5) det K(s,t,x)^{-1/2} [h*(s) K(s,t,x)^{-1} h(s) h*(t) K(s,t,x)^{-1} h(t)]

= Π_{k=1}^n (1 + x² λ_k(s,t)(1 - λ_k(s,t)))^{-1/2} · Σ_{i=1}^n l_i²(s) [1 + x² λ_i(s,t)(1 - λ_i(s,t))]^{-1} · Σ_{j=1}^n k_j²(t) [1 + x² λ_j(s,t)(1 - λ_j(s,t))]^{-1}

≤ Σ_{i,j=1; i≠j}^n l_i²(s) k_j²(t) {[1 + x² λ_i(s,t)(1 - λ_i(s,t))] [1 + x² λ_j(s,t)(1 - λ_j(s,t))]}^{-3/2}

+ Σ_{i=1}^n l_i²(s) k_i²(t) [1 + x² λ_i(s,t)(1 - λ_i(s,t))]^{-5/2}

≤ Σ_{i,j=1; i≠j}^n l_i²(s) k_j²(t) [1 + x⁴ λ_i(s,t)(1 - λ_i(s,t)) λ_j(s,t)(1 - λ_j(s,t))]^{-3/2}

+ Σ_{i=1}^n l_i²(s) k_i²(t) [1 + x² λ_i(s,t)(1 - λ_i(s,t))]^{-5/2}.

Now observe that for b_1, b_2 ≥ 0 we have

(6) ∫_R (1 + x⁴ b_1)^{-3/2} dx = b_1^{-1/4} ∫_R (1 + x⁴)^{-3/2} dx ≤ c_2 b_1^{-1/4},

∫_R (1 + x² b_2)^{-5/2} dx = b_2^{-1/2} ∫_R (1 + x²)^{-5/2} dx ≤ c_2 b_2^{-1/2}.
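Both identities in (6) are the substitutions x → b^{-1/4} x, resp. x → b^{-1/2} x. A quick numerical confirmation (our sketch, truncating R to a large window, which is harmless for these fast-decaying integrands):

```python
import numpy as np

def integral(f, lim=60.0, m=400000):
    # midpoint rule over [-lim, lim]
    x = -lim + 2.0 * lim * (np.arange(m) + 0.5) / m
    return float(np.sum(f(x)) * 2.0 * lim / m)

b = 7.0
lhs_quartic = integral(lambda x: (1.0 + b * x**4) ** -1.5)
rhs_quartic = b ** -0.25 * integral(lambda x: (1.0 + x**4) ** -1.5)
lhs_quadratic = integral(lambda x: (1.0 + b * x**2) ** -2.5)
rhs_quadratic = b ** -0.5 * integral(lambda x: (1.0 + x**2) ** -2.5)
```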

Applying (6) term by term to the right hand side of (5) yields the desired inequality. ∎

From this point on it is relatively obvious what has to be done to prove the integral condition of proposition 2. We ultimately have to integrate the rhs of (ii) in proposition 5 in s and t. The key observation we will exploit in doing this rests upon the extremal properties of the eigenvalues as expressed in the principle of Courant-Fischer. Intuitively, this can be most easily understood in the two-dimensional case. Assume the notation of proposition 5. Fix s < t and suppose λ_1(s,t) > λ_2(s,t). Then the principle of Courant-Fischer states

(7) λ_1(s,t) = max_{0≠x∈R^n} (x* A(s,t) x)/(x* x), λ_2(s,t) = min_{0≠x∈R^n} (x* A(s,t) x)/(x* x).

Since o_1(s,t), o_2(s,t) are unit eigenvectors of λ_1(s,t), λ_2(s,t), we also have

λ_1(s,t) = o_1*(s,t) A(s,t) o_1(s,t), λ_2(s,t) = o_2*(s,t) A(s,t) o_2(s,t).

Now consider the functions

f_1(h) = o_1*(s,t+h) A(s,t) o_1(s,t+h), f_2(h) = o_2*(s,t+h) A(s,t) o_2(s,t+h),

defined in some small neighborhood of t. If, as we may, we assume that t is not one of the exceptional points of proposition 4, then f_1, f_2 are

differentiable at 0. Moreover, (7) forces them to take their maximum resp. minimum there. Hence

f_1'(0) = f_2'(0) = 0,

and we obtain the formulas

(8) d/dt λ_1(s,t) = o_1*(s,t) (d/dt A(s,t)) o_1(s,t) = o_1*(s,t) (h_1²(t) h_1h_2(t); h_1h_2(t) h_2²(t)) o_1(s,t) = (o_1*(s,t) h(t))² = k_1²(t),

and k_2²(t) correspondingly.

(8) enables us, while integrating the rhs of the inequality (ii) of proposition 5, to do a simple substitution of variables, and the rest is "smooth sailing". As it turns out, (8) is true far more generally. The reasons, as given in Kato [10], pp. 77-81, are not as intuitive as the ones given above in the simplest case one can think of, yet rest upon the same observations. We are therefore led to the following proposition.

PROPOSITION 6: Let h_1, ..., h_n be analytic functions, s, t ∈ [0,1], s ≤ t. In the notation of proposition 5, for 1 ≤ i ≤ n let

I_i = {j: λ_j(s,·) = λ_i(s,·)} (analyticity!).

Set

c_3 = ∫_0^1 [u(1-u)]^{-1/2} du.

Then

∫_s^1 Σ_{j∈I_i} k_j²(v) [λ_i(s,v)(1 - λ_i(s,v))]^{-1/2} dv ≤ c_3 |I_i|,

∫_0^t Σ_{j∈I_i} l_j²(u) [λ_i(u,t)(1 - λ_i(u,t))]^{-1/2} du ≤ c_3 |I_i|.

O jeli

PROOF: Since the asserted inequalities are symmetric, we may concentrate on the first one. Proposition 4 allows us to differentiate the function v ↦ λ_i(s,v) at all but finitely many v ∈ [s,1]. We obtain

d/dv λ_i(s,v) = (1/|I_i|) Σ_{j∈I_i} o_j*(s,v) (d/dv A(s,v)) o_j(s,v)   (Kato [10], p. 80)

= (1/|I_i|) Σ_{j∈I_i} o_j*(s,v) h(v) h*(v) o_j(s,v)

= (1/|I_i|) Σ_{j∈I_i} k_j²(v).

We may therefore substitute

w = λ_i(s,v)

to get, observing proposition 4, (iii),

∫_s^1 Σ_{j∈I_i} k_j²(v) [λ_i(s,v)(1 - λ_i(s,v))]^{-1/2} dv ≤ |I_i| ∫_0^1 [w(1-w)]^{-1/2} dw = c_3 |I_i|,

which completes the proof. ∎
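For orientation: the constant c_3 = ∫_0^1 [u(1-u)]^{-1/2} du is the arcsine-law normalization and equals π (substitute u = sin²φ). A crude midpoint check (our sketch; the endpoint singularities are integrable, so a modest tolerance suffices):

```python
import math

m = 1_000_000
c3 = sum(((k + 0.5) / m * (1.0 - (k + 0.5) / m)) ** -0.5 for k in range(m)) / m
```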

We are now ready to prove the integral condition of proposition 2.

PROPOSITION 7: Let h_1, ..., h_n be analytic functions. Then

∫_R ∫_0^1 ∫_0^1 det G(s,t,x)^{-1/2} [h*(s) T G(s,t,x)^{-1} T* h(s) h*(t) T G(s,t,x)^{-1} T* h(t)] ds dt dx ≤ 2 c_1 c_2 c_3 n²,

where c_1, c_2, c_3 are the constants of propositions 3, 5 and 6.


PROOF: We adopt the notations introduced in proposition 5. Fix s, t ∈ [0,1] for a moment, s ≤ t. The inequality of Cauchy-Schwarz and the orthogonality of O(s,t), P(s,t) allow us to estimate

(9) ∫_R det K(s,t,x)^{-1/2} [h*(s) K(s,t,x)^{-1} h(s) h*(t) K(s,t,x)^{-1} h(t)] dx

≤ c_2 Σ_{i,j=1}^n l_i²(s) k_j²(t) [λ_i(s,t)(1 - λ_i(s,t)) λ_j(s,t)(1 - λ_j(s,t))]^{-1/4}   (proposition 5)

≤ c_2 [Σ_{i,j=1}^n l_i²(s) k_j²(t) [λ_i(s,t)(1 - λ_i(s,t))]^{-1/2}]^{1/2} · [Σ_{i,j=1}^n l_i²(s) k_j²(t) [λ_j(s,t)(1 - λ_j(s,t))]^{-1/2}]^{1/2}

= c_2 [Σ_{i=1}^n l_i²(s) [λ_i(s,t)(1 - λ_i(s,t))]^{-1/2} |h(t)|²]^{1/2} · [Σ_{j=1}^n k_j²(t) [λ_j(s,t)(1 - λ_j(s,t))]^{-1/2} |h(s)|²]^{1/2}.

To integrate both sides of (9) over s, t, s ≤ t, we may and do assume that k and l have measurable versions in both variables. We then obtain

∫_0^1 ∫_0^1 1_{s≤t} ∫_R det K(s,t,x)^{-1/2} [h*(s) K(s,t,x)^{-1} h(s) h*(t) K(s,t,x)^{-1} h(t)] dx ds dt

≤ c_2 [∫_0^1 ∫_0^1 1_{s≤t} Σ_{i=1}^n l_i²(s) [λ_i(s,t)(1 - λ_i(s,t))]^{-1/2} |h(t)|² ds dt]^{1/2}

· [∫_0^1 ∫_0^1 1_{s≤t} Σ_{j=1}^n k_j²(t) [λ_j(s,t)(1 - λ_j(s,t))]^{-1/2} |h(s)|² dt ds]^{1/2}

≤ c_2 c_3 [n ∫_0^1 |h(t)|² dt]^{1/2} [n ∫_0^1 |h(s)|² ds]^{1/2}   (proposition 6)

= c_2 c_3 n²   (h_1, ..., h_n orthonormal).

It remains to apply proposition 3. Splitting [0,1]² into {s ≤ t} and {t ≤ s} leads to the factor 2 in the asserted inequality. This completes the proof. ∎

For analytic data we have therefore achieved our aim.

PROPOSITION 8: Let h_1, ..., h_n be analytic functions. Then U possesses a square integrable occupation density.

PROOF: Combine propositions 2 and 7. ∎

To generalize proposition 8 to non-analytic h_1, ..., h_n, we first remark that indeed we have proved a little more.

REMARK: Let h_1, ..., h_n be analytic functions. Then

∫_R ∫_0^1 ∫_0^1 E(exp(ix(U_t - U_s)) u_s² u_t²) ds dt dx ≤ 2 c_1 c_2 c_3 n².

This follows immediately from the remark to proposition 2 and proposition 7. An estimate like this with a dimension dependent bound makes one wonder whether the inequalities we have been using were too rough to carry over to the infinite dimensional case. Indeed, in proposition 3, when getting rid of the influence of the interaction T, our arguments were susceptible to some improvement. We suspect that the bound c_1 n² can be replaced by a smaller constant depending only on T. But it is hard to say in which way this constant depends on n.

Our second step to generalize proposition 8 consists in approximating an orthonormal family (h_1, ..., h_n) by an orthonormal family of analytic functions.


PROPOSITION 9: Let (h_1, ..., h_n) be an orthonormal family in L²([0,1]), δ > 0. Then there exists an orthonormal family (g_1, ..., g_n) consisting of analytic functions such that ‖h_i - g_i‖_2 ≤ δ, 1 ≤ i ≤ n.

PROOF: Choose θ > 0 such that 3θ + 3nθ(1+3θ) < 1. Using standard theorems of real analysis we obtain a family (k_1, ..., k_n) of polynomials on [0,1] such that

‖h_i - k_i‖_2 ≤ θ for 1 ≤ i ≤ n.

To (k_1, ..., k_n) we apply the Gram-Schmidt orthogonalization procedure. Let g_1 = k_1 / ‖k_1‖_2,

g_i = [‖k_i - Σ_{j=1}^{i-1} <k_j, k_i> k_j‖_2]^{-1} · [k_i - Σ_{j=1}^{i-1} <k_j, k_i> k_j], 2 ≤ i ≤ n.

Note that for i ≠ j

<k_i, k_j> = <k_i - h_i, k_j - h_j> + <k_i - h_i, h_j> + <h_i, k_j - h_j>,

due to the orthogonality of h_i, h_j. Therefore, since h_i, h_j are unit vectors,

|<k_i, k_j>| ≤ ‖k_i - h_i‖_2 ‖k_j - h_j‖_2 + ‖k_i - h_i‖_2 + ‖k_j - h_j‖_2 ≤ θ² + 2θ ≤ 3θ   (θ < 1).

In the same way, for 1 ≤ i ≤ n, |<k_i, k_i> - 1| ≤ 3θ, so that ‖k_i‖_2 ≤ 1 + 3θ. Moreover, for 1 ≤ i ≤ n

‖k_i - Σ_{j=1}^{i-1} <k_j, k_i> k_j‖_2 ≤ ‖k_i‖_2 + Σ_{j=1}^{i-1} |<k_j, k_i>| ‖k_j‖_2 ≤ 1 + 3θ + n·3θ(1+3θ),

‖k_i - Σ_{j=1}^{i-1} <k_j, k_i> k_j‖_2 ≥ ‖k_i‖_2 - Σ_{j=1}^{i-1} |<k_j, k_i>| ‖k_j‖_2 ≥ 1 - 3θ - n·3θ(1+3θ) > 0.

Hence for 1 ≤ i ≤ n

‖k_i - g_i‖_2 ≤ |[‖k_i - Σ_{j=1}^{i-1} <k_j, k_i> k_j‖_2]^{-1} - 1| ‖k_i‖_2 + [‖k_i - Σ_{j=1}^{i-1} <k_j, k_i> k_j‖_2]^{-1} Σ_{j=1}^{i-1} |<k_j, k_i>| ‖k_j‖_2

≤ (3θ + 3nθ(1+3θ))(1+3θ) / (1 - 3θ - 3nθ(1+3θ)) + 3nθ(1+3θ) / (1 - 3θ - 3nθ(1+3θ)).

Finally, we may make θ small enough to keep both ‖h_i - k_i‖_2 and ‖k_i - g_i‖_2 below δ/2. This completes the proof. ∎
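The stability behind this proof — orthonormalizing near-orthonormal polynomials moves them only slightly — is easy to see numerically. Our sketch below uses a hypothetical orthonormal pair h_1 ≡ 1, h_2 = √3(2u-1) with small polynomial perturbations, and the standard Gram-Schmidt variant that projects onto the already-orthonormalized g_j:

```python
import numpy as np

u = (np.arange(200000) + 0.5) / 200000
inner = lambda f, g: float(np.mean(f * g))  # L^2([0,1]) inner product

h = [np.ones_like(u), np.sqrt(3.0) * (2.0 * u - 1.0)]  # orthonormal pair
k = [h[0] + 0.01 * u**2, h[1] - 0.02 * u]              # nearby polynomials

g = []
for ki in k:  # Gram-Schmidt
    v = ki - sum(inner(gj, ki) * gj for gj in g)
    g.append(v / np.sqrt(inner(v, v)))

gram_offdiag = inner(g[0], g[1])
max_dev = max(np.sqrt(inner(h[i] - g[i], h[i] - g[i])) for i in range(2))
```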

We can now prove the main result of this section.

THEOREM 1: U possesses a square integrable occupation density.

PROOF: Using proposition 9, choose sequences (g_1^m)_{m∈N}, ..., (g_n^m)_{m∈N} of analytic functions such that for any m ∈ N

(g_1^m, ..., g_n^m) is an orthonormal family

and

lim_{m→∞} ‖g_i^m - h_i‖_2 = 0, 1 ≤ i ≤ n.

For m ∈ N let

T^m = Σ_{i,j=1}^n a_ij g_i^m ⊗ g_j^m,

and let u^m, U^m be the respective integrand and Skorohod integral process associated with T^m. Remember

T = Σ_{i,j=1}^n a_ij h_i ⊗ h_j.

Now the remark following proposition 8 tells us that

(10) sup_{m∈N} ∫_R ∫_0^1 ∫_0^1 E(exp(ix(U_t^m - U_s^m)) (u_s^m)² (u_t^m)²) ds dt dx < ∞.

Moreover, by choice of the approximating sequence, u^m → u and U^m → U, both in L²(Ω × [0,1]) as m → ∞. By selecting a subsequence, if necessary, we may assume that this convergence is P × λ - a.s. Hence the lemma of Fatou allows us to deduce from (10)

∫_R ∫_0^1 ∫_0^1 E(exp(ix(U_t - U_s)) u_s² u_t²) ds dt dx < ∞.

But by proposition 1.1 of [7], this implies that U possesses a square integrable occupation density. ∎

We will now illustrate by an example that the result of theorem 1,

as simple and easy as it may seem, cannot be deduced from the results

of the theory of Gaussian enlargements of the Wiener filtration.

Indeed, it will turn out that it is enough to take two orthogonal

interacting Gaussian components.

EXAMPLE: Let

f(s) = (log 2 / 2)^{1/2} (s^{1/2} |log s|)^{-1}, 0 < s ≤ 1/2,

h_1(s) = f(s) 1_{[0,1/2]}(s) + f(1-s) 1_{[1/2,1]}(s),

h_2(s) = 1_{[0,1/2]}(s) - 1_{[1/2,1]}(s).

Using the transformation t = -log s, it is easy to see that

∫_0^{1/2} f²(s) ds = 1/2,

and therefore that ‖h_1‖_2 = 1 = ‖h_2‖_2. It is obvious from the definition that <h_1, h_2> = 0. So (h_1, h_2) is an orthonormal pair of functions. Now let

u_t = h_1(t) ∫_0^1 h_2 dW, t ∈ [0,1].

The Skorohod integral process of u is given by

U_t = ∫_0^t h_1 dW · ∫_0^1 h_2 dW - ∫_0^t h_1 h_2 dλ, t ∈ [0,1].
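The normalization of h_1 can be checked numerically. This sketch assumes the reading f(s) = (log 2 / 2)^{1/2} (s^{1/2} |log s|)^{-1} of the garbled definition; under t = -log s the integral ∫_0^{1/2} f²(s) ds becomes (log 2 / 2) ∫_{log 2}^∞ t^{-2} dt = 1/2:

```python
import numpy as np

a, T, m = np.log(2.0), 2000.0, 2_000_000
t = a + (T - a) * (np.arange(m) + 0.5) / m
tail = 1.0 / T  # exact value of \int_T^oo t^{-2} dt
norm_sq = (np.log(2.0) / 2.0) * (float(np.sum(t ** -2.0)) * (T - a) / m + tail)
```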

Theorem 1 shows that U possesses a square integrable occupation density. Let us show that U is not a semimartingale with respect to the enlarged Wiener filtrations to be used in this context (see Jeulin,


Yor [8]). For t ∈ [0,1] let

G_t¹ = σ(W_s: s ≤ t) ∨ σ(∫_0^1 h_2 dW),

completed with respect to P so that the "usual hypotheses" of martingale theory are valid. Abbreviate G¹ = (G_t¹: 0 ≤ t ≤ 1). Since h_1 is deterministic, théorème I.1.1 of the paper of Chaleyat-Maurel, Jeulin in Jeulin, Yor [8], p. 64, is applicable and states that

∫_0^t h_1 dW is a G¹-semimartingale iff ∫_0^t |h_1(s)| |h_2(s)| (∫_s^1 h_2²(u) du)^{-1/2} ds < ∞

for all t ∈ [0,1]. Now

∫_0^1 |h_1(s)| |h_2(s)| (∫_s^1 h_2²(u) du)^{-1/2} ds ≥ (log 2 / 2)^{1/2} ∫_{1/2}^1 [(1-s) |log(1-s)|]^{-1} ds

= (log 2 / 2)^{1/2} ∫_0^{1/2} [s |log s|]^{-1} ds

= (log 2 / 2)^{1/2} ∫_{log 2}^∞ t^{-1} dt

= ∞.

For the equation of lines 2 and 3 of this inequality chain we have used the substitution t = -log s again. Hence

(∫_0^t h_1 dW)_{t∈[0,1]} is not a G¹-semimartingale.

Since ∫_{1/2}^1 h_1 h_2 dλ < ∞, U is not a G¹-semimartingale either. Of course, if we enlarge further, for example to

G_t² = G_t¹ ∨ σ(∫_0^1 h_1 dW), t ∈ [0,1],

this statement is true a fortiori.
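The divergence used above is of iterated-logarithm type: with s = e^{-t}, ∫_ε^{1/2} ds/(s |log s|) = log(log(1/ε) / log 2), which grows without bound (though very slowly) as ε ↓ 0. A numerical illustration (our sketch):

```python
import math

def J(eps, m=200000):
    # \int_eps^{1/2} ds / (s |log s|) = \int_{log 2}^{log(1/eps)} dt / t  (s = e^{-t})
    a, b = math.log(2.0), math.log(1.0 / eps)
    h = (b - a) / m
    return sum(h / (a + (k + 0.5) * h) for k in range(m))

vals = [J(10.0 ** -p) for p in (2, 4, 8, 16)]
closed = lambda eps: math.log(math.log(1.0 / eps) / math.log(2.0))
```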

REMARK: The question whether the process U just constructed is a semimartingale with respect to its natural filtration remains open.

It is hard to imagine how it could be approached.

REFERENCES

(1) Baumgärtel, H.: Analytic perturbation theory for matrices and operators. Birkhäuser: Basel, Boston (1985).

(2) Bellman, R.: Introduction to matrix analysis. McGraw-Hill: New York (1970).

(3) Berman, S.M.: Local times and sample function properties of stationary Gaussian processes. Trans. Amer. Math. Soc. 137 (1969), 277-300.

(4) Ehm, W.: Sample function properties of multi-parameter stable processes. Z. Wahrscheinlichkeitstheorie verw. Geb. 56 (1981), 195-228.

(5) Fredholm, I.: Sur une classe d'équations fonctionnelles. Acta Math. 27 (1903), 365-390.

(6) Geman, D., Horowitz, J.: Occupation densities. Ann. Probab. 8 (1980), 1-67.

(7) Imkeller, P.: Occupation densities for stochastic integral processes in the second Wiener chaos. Preprint, Univ. of B.C. (1990).

(8) Jeulin, Th., Yor, M. (eds.): Grossissements de filtrations: exemples et applications. Séminaire de Calcul Stochastique, Paris 1982/83. LNM 1118. Springer: Berlin, Heidelberg, New York (1985).

(9) Jörgens, K.: Linear integral operators. Pitman: Boston, London (1982).

(10) Kato, T.: Perturbation theory for linear operators. Springer: Berlin, Heidelberg, New York (1966).

(11) Nualart, D.: Noncausal stochastic integrals and calculus. LNM 1316. Springer: Berlin, Heidelberg, New York (1988).

(12) Nualart, D., Pardoux, E.: Stochastic calculus with anticipating integrands. Probab. Th. Rel. Fields 78 (1988), 535-581.

(13) Nualart, D., Pardoux, E.: Boundary value problems for stochastic differential equations. Preprint (1990).

(14) Nualart, D., Pardoux, E.: Second order stochastic differential equations with Dirichlet boundary conditions. Preprint (1990).

(15) Nualart, D., Zakai, M.: Generalized stochastic integrals and the Malliavin calculus. Probab. Th. Rel. Fields 73 (1986), 255-280.

(16) Ocone, D., Pardoux, E.: Linear stochastic differential equations with boundary conditions. Probab. Th. Rel. Fields, to appear (1990).

(17) Pietsch, A.: Eigenvalues and s-numbers. Cambridge University Press: Cambridge, London (1987).

(18) Reed, M., Simon, B.: Methods of modern mathematical physics. IV: Analysis of operators. Academic Press: New York (1978).

(19) Rellich, F.: Perturbation theory of eigenvalue problems. Gordon and Breach: New York, London (1969).

(20) Simon, B.: Trace ideals and their applications. London Math. Soc. Lecture Notes Series 35. Cambridge University Press: Cambridge, London (1979).

(21) Skorohod, A.V.: On a generalization of a stochastic integral. Theor. Prob. Appl. 20 (1975), 219-233.

(22) Smithies, F.: Integral equations. Cambridge University Press: Cambridge, London (1965).

(23) Watanabe, S.: Lectures on stochastic differential equations and Malliavin calculus. Tata Institute of Fundamental Research. Springer: Berlin, Heidelberg, New York (1984).

(24) Zakai, M.: The Malliavin calculus. Acta Appl. Math. 3 (1985), 175-207.

Peter Imkeller Department of Mathematics University of British Columbia 121 - 1984 Mathematics Road Vancouver, B.C. V6T 1Y4 Canada


Calculating the Compensator: Method and Example

BY FRANK B. KNIGHT

1. METHOD: Let X_t, t ≥ 0, be a real-valued stochastic process on a complete probability space (Ω, F, P), adapted to a right-continuous filtration F_t containing all P-null sets. We recall from [2, VII, 23] that X_t is an F_t-semimartingale if it can be expressed X_t = X_0 + M_t + A_t, where M_t is a local martingale of F_t, M_0 = 0, and A_t is a right-continuous adapted process with paths of finite variation on finite time intervals. Moreover ([2, ibid.]) X_t is called "special" if there is such a representation with A_t previsible, and then the previsible A_t is unique. In this case we will call A_t the "compensator" (or dual previsible projection) of X_t. Note that this terminology differs considerably from that of [2, VI], which seems not to give any general name to such A_t. The process A_t can be obtained from X_t more or less explicitly, at least in theory. Indeed, there exist stopping times T_n ↑ ∞ such that A_{t∧T_n} are of bounded variation ([2, VI, 2, (52.1)]). Then A_{t∧T_n} may be constructed from X_{t∧T_n} by the approximations of P. A. Meyer [8, VII, T29] or M. Rao ([10] and [2, VII, 1, 21]).

In the present work, by contrast, we do not assume that X_t is a semimartingale, but instead propose a method of checking that it is one, and of simultaneously obtaining the compensator A_t. We do not, however, have any results that evaluate how general this method is. Instead, we only wish to apply it to an example which seems to be of independent interest.

To describe the method in general terms, we assume that X_t is right-continuous in L¹, and that, for λ > 0, E ∫_0^∞ e^{-λt} |X_t| dt < ∞ (since it suffices to construct A_t in finite intervals (0, K] it would be permissible to redefine X_t ≡ 0 for all t ≥ K in order to achieve the last hypothesis). In this case, it is known ([6] and [8]) that for λ > 0 the following expression is an F_t-martingale

(1.1) M_λ(t) = R_λ(X_t) + ∫_0^t (X_u - λ R_λ(X_u)) du,

where R_λ(X_t) = E(∫_0^∞ e^{-λs} X_{t+s} ds | F_t) is chosen to be right-continuous in t.

REMARK: The notation R_λ derives from the fact that if we represent X_t = φ(Z_t), where Z_t is the "prediction process" of X, then R_λ(X_t) = R_λφ(Z_t), where R_λ on the right is the resolvent of Z (see for example [7]).

The method which we use may be spelled out as follows.

PROPOSITION 1.1: If X_t has right-continuous paths, and the limits

(1.2) A_t = lim_{λ→∞} λ ∫_0^t (X_u - λ R_λ X_u) du, 0 < t,

exist both pathwise a.s. and in L¹, and are of finite variation in finite time intervals, then X_t is a special semimartingale and A_t is its compensator.

PROOF: Since X_t is right-continuous in L¹, lim_{λ→∞} λ R_λ(X_t) = X_t in L¹. Thus if the limit A_t exists we have lim_{λ→∞} λ M_λ(t) = X_t + A_t in L¹. Hence X_t + A_t =: X_0 + M_t is a martingale, and since λ ∫_0^t (X_u - λ R_λ X_u) du is continuous in t, A_t is previsible. Therefore A_t is the required compensator. ∎
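As a sanity check on (1.2) (our sketch, not from the paper), take the deterministic process X_t = t², for which E(X_{t+s} | F_t) = (t+s)² and hence R_λ(X_u) = u²/λ + 2u/λ² + 2/λ³ in closed form. The limit in (1.2) gives A_t = -t², so that X_t + A_t ≡ 0 is (trivially) the martingale of the proposition:

```python
import numpy as np

def A_lam(t, lam, m=100000):
    # lam * \int_0^t (X_u - lam * R_lam(X_u)) du for X_u = u^2, midpoint rule
    u = t * (np.arange(m) + 0.5) / m
    integrand = u**2 - (u**2 + 2.0 * u / lam + 2.0 / lam**2)
    return float(lam * np.sum(integrand) * t / m)

t = 1.5
approx = [A_lam(t, lam) for lam in (10.0, 100.0, 1000.0)]  # -> -t^2 - 2t/lam
```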

2. AN EXAMPLE: While simple to state, the above method (and probably any other method of finding the compensator as well) can lead to difficult calculations when put into practice. We have chosen to work an example in the form X_t = B_{t∧Q}, where B_t is a Brownian motion starting at 0 and Q is measurable over G_∞ (:= σ(B_s, s < ∞)), but Q is not a stopping time of G_t (:= σ(B_s, s ≤ t)). The general class of such processes might be called "arrested Brownian motions," and they behave rather differently from stopped Brownian motions, as is to be expected. What was not entirely anticipated is the degree of difficulty inherent in calculating A_t for such X_t, even for the simplest cases of Q. Indeed, we still do not know whether all such X_t are even semimartingales relative to F_t (= σ(X_s, s ≤ t+)). Our aim, however, was not to investigate this question, but to calculate A_t in the following special case (proposed by Professor Bruce Hajek).


PROPOSITION 2.1: For c > 0, let S_c = max_{0≤s≤c} B_s, and let Q_c be the time with B(Q_c) = S_c (it is well-known that Q_c is unique, P-a.s.). Then X_t = B(t ∧ Q_c) is an F_t-special semimartingale, with compensator

A_{t∧Q_c} = ∫_0^{t∧Q_c} H_v(u, S(u) - X(u)) du,

where H(u,v) = ln ∫_v^∞ exp(-y²/(2(c-u))) dy and H_v = ∂H/∂v. Moreover, X_t + A_t is a stopped Brownian motion, in the sense that (X_{t∧Q_c} + A_{t∧Q_c})² - (t ∧ Q_c) is also a martingale.
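The function H and its v-derivative can be evaluated with the complementary error function: with τ = c - u one has ∫_v^∞ exp(-y²/(2τ)) dy = √(πτ/2) erfc(v/√(2τ)), so H_v(u,v) = -exp(-v²/(2τ)) / ∫_v^∞ exp(-y²/(2τ)) dy, which is negative — the compensator of the arrested path decreases. A small check of the formula against a numerical derivative (our sketch):

```python
import math

def H(u, v, c=1.0):
    tau = c - u
    return math.log(math.sqrt(math.pi * tau / 2.0) * math.erfc(v / math.sqrt(2.0 * tau)))

def Hv(u, v, c=1.0):
    tau = c - u
    return -math.exp(-v * v / (2.0 * tau)) / (
        math.sqrt(math.pi * tau / 2.0) * math.erfc(v / math.sqrt(2.0 * tau)))

u0, v0, eps = 0.3, 0.8, 1e-6
num_deriv = (H(u0, v0 + eps) - H(u0, v0 - eps)) / (2.0 * eps)
```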

Before commencing the proof, it may be amusing to give an "economic" interpretation. Suppose that a certain stock market index (with appropriate scaling) performs a Brownian motion, but that there is an oracle who, given a time c > 0, can announce at its arrival the time when the market reaches its maximum in 0 ≤ t ≤ c. The question is, how should a stock owner, who would otherwise have no inside knowledge, be fairly paid in lieu of using the oracle (and thus selling at the maximum)? Thus, if he promises to give up the oracle until a time t < c, he (or his agent) should receive -A_t by time t, and 0 thereafter, in order to be fairly compensated. But if at time t the oracle has not spoken, and knowing this the stock owner decides to continue until time t + s, then he should be paid by that time an additional amount -(A(t+s) - A(t)) not to use the oracle.

For another interpretation, suppose a gambling house introduces the game "watch B(t) and receive S_c at time Q_c." This can be implemented since the house may know B(t), t ≤ c, in advance. Then -A(t) gives the fair charge for playing the game until time t. We note that A(t) can be calculated from B(t) without using any future information (except the fact that t ≤ Q_c, at least until time Q_c when the game is over).

ADDED REMARKS: After completing an initial draft of this paper, it was brought to our attention by Chris Rogers that this example is a special case of those treated abstractly by M. Barlow in [1] and by T. Jeulin and M. Yor in [4]. The formula for the compensator from [1, Prop. 3.7] (to which the one from [4] is equivalent) is

A_{t∧Q_c} = -∫_0^{t∧Q_c} (1 - A°_{u-})^{-1} d<B, A° - Â>_u,

where A°_u is the optional projection of 1_{[Q_c,∞)}(u) and Â_u is its dual optional projection. From this it is clear that A_{t∧Q_c} is Lebesgue-absolutely-continuous, which provided a check on our calculations. More importantly, it is not very difficult to calculate A°, and then to derive Â from A° by using Itô's formula, thus obtaining a shortened proof of Proposition 2.1 (as Professor Rogers has shown me). Indeed, we have

A°_t = E(1_{[Q_c,∞)}(t) | F_t) = P(Q_c < t | F_t) = (2/(π(c-t)))^{1/2} ∫_0^{S(t)-X(t)} exp(-y²/(2(c-t))) dy,

and it follows by Itô's formula that

A°_t = (2/π)^{1/2} ∫_0^t (c-u)^{-1/2} dS(u) - (2/π)^{1/2} ∫_0^t (c-u)^{-1/2} exp(-(S(u) - B(u))²/(2(c-u))) dB(u).

Then by optional stopping we have Â_t = (2/π)^{1/2} ∫_0^t (c-u)^{-1/2} dS(u), and Proposition 2.1 follows.
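The closed form for A° is the reflection principle: P(max_{[0,τ]} B < a) = P(|B_τ| < a) = √(2/(πτ)) ∫_0^a exp(-y²/(2τ)) dy = erf(a/√(2τ)). A numerical confirmation of this identity (our sketch):

```python
import math

tau, a, m = 0.7, 0.9, 200000
quad = sum(math.exp(-(((k + 0.5) / m) * a) ** 2 / (2.0 * tau)) for k in range(m)) * a / m
lhs = math.sqrt(2.0 / (math.pi * tau)) * quad  # the integral appearing in A°
rhs = math.erf(a / math.sqrt(2.0 * tau))
```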

Finally, an expression somewhat resembling that of Proposition 2.1, but containing an additional term, is found in [3, p. 49]. The problem considered there, in which σ(Q_c) is adjoined immediately at t = 0, is quite different from ours. The compensator of B(t) for t ≥ Q_c is also given, which would be the same as for our problem.

In view of these facts, we might not want to publish our own calculations, except for the following considerations. First, our method is in no way limited to "honest" times, as is that of [1] and [4], and it does not depend on these results, or on Itô's formula. Second, it may be of use to indicate the type of calculations which our method leads to, even though they become quite tedious in the present case. Third, since the result is now known by other methods, we can omit the final pages of checking that the three "o-terms" do not contribute to the answer.

PROOF: We continue to let F_t denote the usual augmentation of ∩_{ε>0} σ(X_s: s < t + ε). To construct λ R_λ(X_t) (= X_t for t ≥ Q_c), we need to calculate E(X_{t+s} | F_t) over {t < Q_c}. It is easy to see that the conditioning reduces to being given the pair (X_t, S_t), but to write S_t as given we need to introduce a further notation to distinguish it from the future maximum. We write S_0(t) for S_t when given in a conditional probability. Then for s ≤ c - t we have E(X_{t+s} | F_t) = E^{B(t)}(X_s | S_{c-t} > S_0(t)). Setting B(t) = x for brevity, we will need the P^x joint density of (Q_{c-t}, S_{c-t}) from L. Shepp [11, (1.6)]. In the variables (θ, y) it is

(1/π) (y - x) θ^{-3/2} (c - t - θ)^{-1/2} exp(-(y-x)²/(2θ)), 0 < θ < c - t, y > x.

Thus, for s ≤ c - t,

E^x(X_s | S_{c-t} > S_0(t)) · P^x(S_{c-t} > S_0(t))

= (1/π) ∫_0^s (∫_{S_0(t)}^∞ y (y - x) exp(-(y-x)²/(2θ)) dy) θ^{-3/2} (c - t - θ)^{-1/2} dθ

(2.1)

+ (1/π) ∫_s^{c-t} (∫_{S_0(t)}^∞ E^x(B(s) | Q_{c-t} = θ, S_{c-t} = y) (y - x) exp(-(y-x)²/(2θ)) dy) θ^{-3/2} (c - t - θ)^{-1/2} dθ.
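Shepp's density is easy to sanity-check: integrating out y first gives ∫_x^∞ (y - x) exp(-(y-x)²/(2θ)) dy = θ, after which the remaining θ-integral is the arcsine integral (1/π) ∫_0^{c-t} [θ(c-t-θ)]^{-1/2} dθ = 1. The y-step numerically (our sketch):

```python
import math

theta, cut, m = 0.3, 12.0, 400000  # cut truncates the upper limit, tail is negligible
h = cut / m
val = sum(((k + 0.5) * h) * math.exp(-((k + 0.5) * h) ** 2 / (2.0 * theta))
          for k in range(m)) * h
```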

Let us denote these two double integral terms by T_1 and T_2, respectively. We integrate by parts in T_1 to obtain

T_1 = (1/π) ∫_0^s [S_0(t) exp(-(S_0(t) - x)²/(2θ)) + ∫_{S_0(t)-x}^∞ exp(-y²/(2θ)) dy] (θ(c - t - θ))^{-1/2} dθ.

In order to find the contribution of T_1 to λ² R_λ(X_u) in (1.2), note that for s ≥ c - t the contribution of T_2 is 0, and that of T_1 is the same as for s = c - t (since X_{t+s} = X_c for s ≥ c - t). Integrating by parts twice, we obtain (2.2).

Continuing with this term, but reintroducing the variable u from (1.2) in place of t (so that T_1 depends on s, u, and x, where x = X(u) = B(u) for u ≤ Q_c), we now take t ≤ Q_c and calculate pathwise

(2.3) lim_{λ→∞} λ² ∫_0^t ∫_0^∞ e^{-λs} T_1(s, u, x) ds du

in lieu of lim_{λ→∞} ∫_0^t λ² R_λ X_u du in (1.2). Actually, from (2.1) there is also a denominator P^{X(u)}(S_{c-u} > S_0(u)) to be included in the integrand, but this term is awkward when we need L¹-limits, and it does not involve λ. Therefore, we set T_K = Q_c ∧ inf(t: P^{B(t)}{S_{c-t} > S_0(t)} ≤ K^{-1}), and (for fixed K) we replace t by t* = t ∧ T_K (note that T_K is an F_t-stopping time and T_K → Q_c as K → ∞), so

t* = t A TK (note that TK is an Ft-stopping time and TK = Qc as K -+ 00), so

Page 249: Seminar on Stochastic Processes, 1990

246 F.B. Knight

that the denominator is bounded away from O by K-I for O < u $ t*. Then it

does not aft"ect the convergence as A -+ 00 in (2.3). It is easyl to see that this will

also be unaft"ected if we restrict the ds-integral to O < s $ f, which in turn allows

us to replace the term (c - t - s)-i by (c - t)-i in (2.2) as A -+ 00, and then

again allow s -+ 00 in (2.2). This leaves

(2.4)

Now the ds integration leads to the usual resolvent kernel (2A)-! exp -v'2>:x of

Brownian motion, and (2.4) becomes

(2.5)

For u < t*, we have So( u) - X a = So( u) - Ba which is equivalent to IB" I in law, and hence has a continuous local time l+(u,x)j x ~ O. Using this, and

approximating (2.5) by Riemann sums, it becomes

I§ n-l k ~t' k Iim - Iim I)c - -t*)-! f [So( -t*).

A--+OO 7t' n-+oo k=O n J !-t. m

exp -v'2>:(So(u) - Xa ) + (2A)-i exp -v'2X(So(u) - X .. )]du (2.6) I§ n-l k 100 k = Iim - Iim ~)c - -t*)-! [So( -t*)exp -v'2Xx

>. ..... 00 7r n ..... oo k=O non

+ (2A)-i exp -v'V."x(l+( k + 1 t*, x) -l+( kt* ,x»dx. n n

For each A, this is dominated by

~(c - t*)-i(So(t*)v'2X + 1) foo exp( -v'V."x)l+(t*, x)dx y27r 10

in such a way that as A -+ 00, using continuity of So{ u), we have convergence both

pathwise and in LI to

1 We will use several times the observation that, if fooo e-).·g(s)ds < 00 for a 9 ~ O, then

lim)._oo >.k fo' e-).·g(s)ds exists if and only if lim)._oo >.k J,oo e-).·g(s)ds exists, and then the two limits are equal for every € > O. o


(2.7) (2/π)^{1/2} ∫_0^{t*} (c - u)^{-1/2} S_0(u) l⁺(du, 0).

In more detail, to interchange the limits we observe that as n → ∞ we have a Cauchy sequence in L¹, uniformly in λ ≥ 1, by reason of the uniform bound above.

Finally, to include the conditioning in (2.1) into the contribution to lim_{λ→∞} ∫_0^{t*} λ² R_λ X_u du, we also need to incorporate a denominator P^{X(u)}(S_{c-u} > S_0(u)) into (2.4). But since this is bounded away from 0 for u < t*, and in (2.7) l⁺(u,0) increases only when X(u) = S_0(u), this factor just becomes 1 in the limit, and may be ignored for u ≤ t* (= t ∧ T_K). Thus (2.7) is the limiting contribution of T_1 to the compensator A_{t*} of (1.2), except for a change of sign. (The integrand term X_u in (1.2) must thus be added to minus the contribution of λ ∫_0^{c-u} e^{-λs} T_2(s) ds, with T_2(s) from (2.1).)

The first task in evaluating T_2 is to estimate E^x(B(s) | Q_c = θ, S_c = y), s < θ < c. There is no difficulty in writing the exact expression, but it is a little complicated. We note that when x = B(0), z = B(c), θ = Q_c and y = S_c are all given, the path B(s), 0 ≤ s ≤ c, breaks into independent parts 0 ≤ s ≤ θ and θ ≤ s ≤ c. For the second part, y - B(θ + s) is just the excursion of the reflected Brownian motion S_· - B_· straddling c. It is well-known from excursion theory (see for example [5, Theorem 5.2.7 and Lemma 5.2.8]) that, conditional on the value of y - B(c), this process is equivalent to a Bessel bridge Besbr_3(s) from 0 to y - B(c), 0 ≤ s ≤ c - θ. The process needed here, however, is y - B(θ - s), 0 ≤ s ≤ θ. But if z = B(c) is given, while Q_c and S_c are unknown, B(t) in 0 ≤ t ≤ c becomes a Brownian bridge from x to z. It is equivalent in law to B(c - t) + (2t/c - 1)(z - x), 0 ≤ t ≤ c, and it follows that if θ = Q_c and y = S_c are also given, then y - B(θ - s) is also a Besbr_3(s), from 0 to y - x. This does not depend on z, so we can compute, using the Bes_3 transition density p_3(t,x,y) = (y/x)(2πt)^{-1/2}(exp(-(x-y)²/(2t)) - exp(-(x+y)²/(2t))), t > 0, x > 0:

E^x(B(s) | Q_c = θ, S_c = y) = E(y - Besbr_3(θ - s))

= y - p_3(θ, 0, y - x)^{-1} ∫_0^∞ z p_3(θ - s, 0, z) p_3(s, z, y - x) dz.


Denoting y - x = w, this gives for s > 0

(2.8) E^x(B(s) | Q_c = θ, S_c = y)

= y - θ^{3/2} exp(w²/(2θ)) / (w √(2πs) (θ - s)^{3/2}) · ∫_0^∞ z² exp(-z²/(2(θ-s))) (exp(-(w-z)²/(2s)) - exp(-(w+z)²/(2s))) dz

= y - (θ/(θ-s))^{3/2} exp((s-θ)w²/(2sθ)) / (w √(2πs)) · ∫_0^∞ z² exp(-θz²/(2s(θ-s))) (exp(wz/s) - exp(-wz/s)) dz

= y - (1/(w √(2πs))) ∫_0^∞ u² (exp(-(u - √(1 - θ^{-1}s) w)²/(2s)) - exp(-(u + √(1 - θ^{-1}s) w)²/(2s))) du.

We are concerned with the behavior of this last as s → 0+ (corresponding to λ → ∞ in (1.2)). It is trivial to see that the limit of the 2nd term concentrates to w, so that the expression has limit y − w = x, as expected, but this is the x that is subtracted in the integrand of (1.2), and (as λ → ∞) we need the other terms that are left over. Considering the 2nd term as the obvious difference, the second integral is equal to

(2.9)
\[
-\frac{1}{w\sqrt{2\pi s}}\biggl[\int_0^\infty u\Bigl(u+\sqrt{\tfrac{\theta-s}{\theta}}\,w\Bigr)
e^{-(u+\sqrt{1-\theta^{-1}s}\,w)^2/2s}\,du
-\sqrt{\tfrac{\theta-s}{\theta}}\,w\int_0^\infty\Bigl(u+\sqrt{\tfrac{\theta-s}{\theta}}\,w-\sqrt{\tfrac{\theta-s}{\theta}}\,w\Bigr)
e^{-(u+\sqrt{1-\theta^{-1}s}\,w)^2/2s}\,du\biggr]
\]
\[
=-\Bigl(\frac{s}{w\sqrt{2\pi}}\int_L^\infty e^{-v^2/2}\,dv\Bigr)
+\Bigl(\sqrt{\frac{(\theta-s)s}{2\pi\theta}}\,e^{-(\theta-s)w^2/2\theta s}\Bigr)
-\Bigl(\frac{\theta-s}{\theta}\,\frac{w}{\sqrt{2\pi}}\int_L^\infty e^{-v^2/2}\,dv\Bigr),
\]

where L = w((θ−s)/θs)^{1/2}. Introducing a notation for these 3 terms, we write (2.9) as −R₁ + R₂ − R₃. On the other hand, the first integral becomes

\[
\frac{1}{w\sqrt{2\pi s}}\int_0^\infty\Bigl[\Bigl(u-\sqrt{\tfrac{\theta-s}{\theta}}\,w\Bigr)^2
+2\sqrt{\tfrac{\theta-s}{\theta}}\,wu-\frac{\theta-s}{\theta}\,w^2\Bigr]
e^{-(u-\sqrt{1-\theta^{-1}s}\,w)^2/2s}\,du.
\]


Calculating the Compensator 249

Breaking this as a sum of two integrals as indicated, the first one is a variance if the lower limit is extended to −∞, which entails the error

\[
-\frac{1}{w\sqrt{2\pi s}}\int_{L\sqrt{s}}^\infty x^2 e^{-x^2/2s}\,dx = -(R_1+R_2).
\]

So this equals s/w − (R₁ + R₂). The second is

\[
\frac{1}{\sqrt{2\pi s}}\sqrt{\frac{\theta-s}{\theta}}\int_0^\infty
\Bigl[2\Bigl(u-w\sqrt{\tfrac{\theta-s}{\theta}}\Bigr)+w\sqrt{\tfrac{\theta-s}{\theta}}\Bigr]
e^{-(u-\sqrt{1-\theta^{-1}s}\,w)^2/2s}\,du,
\]

and writing this in turn as a sum of two integrals, the first equals

\[
2\sqrt{\frac{\theta-s}{2\pi s\,\theta}}\int_{-L\sqrt{s}}^\infty v\,e^{-v^2/2s}\,dv = 2R_2,
\]

and the second is

\[
w\Bigl(\frac{\theta-s}{\theta}\Bigr)\frac{1}{\sqrt{2\pi}}\int_{-L}^\infty e^{-v^2/2}\,dv
= w\Bigl(\frac{\theta-s}{\theta}\Bigr)-R_3.
\]

Adding all of these terms, we obtain 2R₂ − 2R₁ − 2R₃ + s/w + w(θ−s)/θ, that is,

(2.10)
\[
2\sqrt{\frac{(\theta-s)s}{2\pi\theta}}\,e^{-(\theta-s)w^2/2\theta s}
-2w\Bigl(\frac{\theta-s}{\theta}\Bigr)\frac{1}{\sqrt{2\pi}}\int_L^\infty e^{-v^2/2}\,dv
-\frac{2s}{w\sqrt{2\pi}}\int_L^\infty e^{-v^2/2}\,dv
+\frac{s}{w}+w\Bigl(\frac{\theta-s}{\theta}\Bigr).
\]

The first three terms are O(s^k) as s → 0 for any k > 0 if w > 0, and (2.8) reduces to

(2.11)
\[
E^x(B(s)\mid Q_c=\theta,\,S_c=y)
= y-\frac{s}{y-x}-(y-x)\Bigl(1-\frac{s}{\theta}\Bigr)-2R_2+2R_3+2R_1
= x-\frac{s}{y-x}+(y-x)\frac{s}{\theta}-2R_2+2R_3+2R_1.
\]

Let us return afterwards to the three R-terms, and first complete the contribution to the compensator based only on x and the O(s) terms of (2.11). Now the total contribution to T₂ up to time t* (using (2.1)) is

\[
\lim_{\lambda\to\infty}\lambda\int_0^{t^*}\Bigl[X_u-\frac{\lambda}{\pi}\int_0^{c-u}e^{-\lambda s}
\int_s^{c-u}\int_{S_0(u)}^\infty
E^{X(u)}\bigl(B(s)\mid Q_{c-u}=\theta,\,S_{c-u}=y\bigr)\,
P^{X(u)}\bigl(S_{c-u}>S_0(u)\bigr)^{-1}
(y-X_u)\,e^{-(y-X_u)^2/2\theta}\,dy\;
\theta^{-3/2}(c-u-\theta)^{-1/2}\,d\theta\,ds\Bigr]\,du.
\]


The term X(u) in E^{X(u)}(B(s) | Q_{c−u} = θ, S_{c−u} = y) from (2.11) is combined with the former X_u to contribute

\[
\lim_{\lambda\to\infty}\lambda\int_0^{t^*}X_u\Bigl(1-\lambda\int_0^{c-u}e^{-\lambda s}
\frac{P^{X(u)}(Q_{c-u}>s\ \text{and}\ S_{c-u}>S_0(u))}{P^{X(u)}(S_{c-u}>S_0(u))}\,ds\Bigr)\,du
=\lim_{\lambda\to\infty}\lambda^2\int_0^{t^*}X_u\int_0^{c-u}e^{-\lambda s}
P^{X(u)}\bigl(Q_{c-u}<s\mid S_{c-u}>S_0(u)\bigr)\,ds\,du.
\]

We now proceed much as in (2.6) to write this as

\[
\pi^{-1}\lim_{\lambda,n\to\infty}\lambda\sum_{k=0}^{n-1}\Bigl(c-\tfrac{k}{n}t^*\Bigr)^{-1/2}
\int_{\frac{k}{n}t^*}^{\frac{k+1}{n}t^*}\psi(u)X(u)
\int_0^\infty\lambda e^{-\lambda s}\int_0^s\theta^{-1/2}
e^{-(S_0(u)-X(u))^2/2\theta}\,d\theta\,ds\,du
\]
\[
=\pi^{-1}\lim_{\lambda,n\to\infty}\lambda\sum_{k=0}^{n-1}\Bigl(c-\tfrac{k}{n}t^*\Bigr)^{-1/2}
\int_{\frac{k}{n}t^*}^{\frac{k+1}{n}t^*}\psi(u)X(u)
\int_0^\infty s^{-1/2}e^{-\lambda s}
e^{-(S_0(u)-X(u))^2/2s}\,ds\,du,
\]

where ψ(u) = (P⁰{S_{c−u} > S₀(u) − X(u)})^{−1}. Continuing, this becomes an expression in terms of the local time ℓ⁺(t,x).

The justification of the interchange of limits, using also P^{X(u)}(S_{c−u} > S₀(u)) > K^{−1} for u < t*, is the same as for (2.7), and the limit is both pathwise and in L¹. Since dℓ⁺(u,0) increases only when X_u = S₀(u), this term cancels the contribution (−(2.7)) of T₁.

The remaining two terms, −s/(y−x) + (y−x)s/θ, from (2.11) contribute (with ψ(u) as before)

\[
\lim_{\lambda\to\infty}\frac{\lambda^2}{\pi}\int_0^{t^*}du\,\psi(u)\int_0^{c-u}ds\,s\,e^{-\lambda s}
\int_s^{c-u}d\theta\,\theta^{-3/2}(c-u-\theta)^{-1/2}
\int_{S_0(u)}^\infty\Bigl(\frac{1}{y-X_u}-\frac{y-X_u}{\theta}\Bigr)(y-X_u)\,
e^{-(y-X_u)^2/2\theta}\,dy
\]
\[
=\lim_{\lambda\to\infty}\frac{1}{\pi}\int_0^{t^*}du\,\psi(u)\int_0^\infty dv\,v\,e^{-v}
\int_{v/\lambda}^{c-u}d\theta\,\theta^{-3/2}(c-u-\theta)^{-1/2}
\Bigl(\int_{S_0(u)}^\infty\Bigl(1-\frac{(y-X_u)^2}{\theta}\Bigr)
e^{-(y-X_u)^2/2\theta}\,dy\Bigr)
\]
\[
=\frac{1}{\pi}\int_0^{t^*}du\,\psi(u)\int_0^{c-u}d\theta\,\theta^{-3/2}(c-u-\theta)^{-1/2}
\Bigl(\int_{S_0(u)}^\infty\Bigl(1-\frac{(y-X_u)^2}{\theta}\Bigr)
e^{-(y-X_u)^2/2\theta}\,dy\Bigr)
\]
\[
=-\frac{1}{\pi}\int_0^{t^*}du\,\psi(u)\,(S_0(u)-X_u)
\int_0^{c-u}d\theta\,\theta^{-3/2}(c-u-\theta)^{-1/2}
e^{-(S_0(u)-X_u)^2/2\theta}
\]
\[
=-\sqrt{\frac{2}{\pi}}\int_0^{t^*}(c-u)^{-1/2}
e^{-(S_0(u)-X_u)^2/2(c-u)}\,\psi(u)\,du,
\]

where we integrated out θ from the joint density in (θ,y) at y = S₀(u) − X_u for the last step. Setting

\[
H(u,v)=-\frac{\partial}{\partial v}\ln\Bigl(\int_v^\infty e^{-y^2/2(c-u)}\,dy\Bigr)
=\frac{e^{-v^2/2(c-u)}}{\int_v^\infty e^{-y^2/2(c-u)}\,dy},
\]

this term reduces to −∫₀^{t*} H(u, S₀(u) − X(u)) du, but a local time representation is foiled by the inhomogeneity in u. We note the intuitive meaning of the integrand as the P^x-conditional density of S_{c−u} at S₀(u) given that it exceeds S₀(u).
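The θ-integration in the last step rests on the classical joint law of the location Q and value S of the maximum of Brownian motion on [0,c]: integrating the joint density (w/π)θ^{−3/2}(c−θ)^{−1/2}e^{−w²/2θ} over θ ∈ (0,c) recovers the half-normal density (2/πc)^{1/2}e^{−w²/2c} of S_c. A numerical sketch (w and c are illustrative values):

```python
import numpy as np

# Marginalizing the joint density of (Q_c, S_c) over the argmax location theta
# should recover the half-normal density of the maximum S_c:
#   (w/pi) int_0^c theta^{-3/2} (c-theta)^{-1/2} exp(-w^2/(2 theta)) dtheta
#     = sqrt(2/(pi c)) exp(-w^2/(2c)).
w, c = 0.7, 2.0
n = 2_000_000
theta = (np.arange(n) + 0.5) * (c / n)        # midpoint grid avoids endpoints
f = theta**-1.5 * (c - theta)**-0.5 * np.exp(-w**2 / (2.0 * theta))
lhs = (w / np.pi) * f.sum() * (c / n)
rhs = np.sqrt(2.0 / (np.pi * c)) * np.exp(-w**2 / (2.0 * c))
print(lhs, rhs)
```

Both endpoint singularities are integrable, which is why a midpoint rule suffices here.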

It remains to show that the three R-terms in (2.11) do not contribute to A_t. This requires a further lengthy calculation, involving the same methods used already plus some rather intricate analysis. In view of the Further Remarks following Proposition 2.1, we have decided to spare the reader the details.

REFERENCES

[1]. M. T. Barlow, Study of a filtration expanded to include an honest time, Z. Wahrscheinlichkeitstheorie verw. Geb. 44 (1978), 307-323.

[2]. C. Dellacherie and P.-A. Meyer, Probabilités et Potentiel, Chap. V-VIII, Hermann, Paris.

[3]. Thierry Jeulin, Semi-Martingales et Grossissement d'une Filtration, Lect. Notes in Math., Springer-Verlag, Berlin.

[4]. T. Jeulin and M. Yor, Grossissement d'une filtration et semimartingales: Formules explicites, Séminaire de Probabilités XII, Springer-Verlag, Berlin.

[5]. F. B. Knight, Essentials of Brownian Motion and Diffusion, Math. Surveys 18 (1981), Amer. Math. Society, Providence.

[6]. F. B. Knight, A post-predictive view of Gaussian processes, Ann. Scient. Éc. Norm. Sup. 4e série, t. 16 (1983), 541-566.

[7]. F. B. Knight, Essays on the Prediction Process, Lecture Notes and Monograph Series, S. Gupta, Ed., Inst. Math. Statistics 1 (1981), Hayward, Cal.

[8]. P.-A. Meyer, Probability and Potentials, Blaisdell Pub. Co., 1966.

[9]. P.-A. Meyer, A remark on F. Knight's paper, Ann. Scient. Éc. Norm. Sup. 4e série, t. 16 (1983), 567-569.

[10]. K. M. Rao, On decomposition theorems of Meyer, Math. Scand. 24 (1969), 66-78.

[11]. L. A. Shepp, The joint density of the maximum and its location for a Wiener process with drift, J. Appl. Prob. 16 (1979), 423-427.

Professor Frank B. Knight, Department of Mathematics, University of Illinois, 1409 West Green Street, Urbana, Illinois 61801, U.S.A.


Rate of Growth of Local Times of Strongly Symmetric Markov Processes

MICHAEL B. MARCUS

Let S be a locally compact metric space with a countable base and let X = (Ω, F, F_t, X_t, P^x), t ∈ R⁺, be a strongly symmetric standard Markov process with state space S. Let m be a σ-finite measure on S. What is actually meant by "strongly symmetric" is explained in [MR], but for our purposes it is enough to note that it is equivalent to X being a standard Markov process for which there exists a symmetric transition density function p_t(x,y) (with respect to m). This implies that X has a symmetric 1-potential density

(1)  u¹(x,y) = ∫₀^∞ e^{−t} p_t(x,y) dt.

We assume that

(2)  u¹(x,y) < ∞  ∀x,y ∈ S,

which implies that there exists a local time L = {L_t^y, (t,y) ∈ R⁺ × S} for X, which we normalize by setting

(3)  E^x(∫₀^∞ e^{−t} dL_t^y) = u¹(x,y).

It is easy to see, as is shown in [MR], that u¹(x,y) is positive definite on S × S. Therefore, we can define a mean zero Gaussian process G = {G(y), y ∈ S} with covariance

E(G(x)G(y)) = u¹(x,y)  ∀x,y ∈ S.

The processes X and G, which we take to be independent, are related through the 1-potential density u¹(x,y) and are referred to as associated processes.

There is a natural metric for G,

(4)  d(x,y) = (E(G(x) − G(y))²)^{1/2},


254 M.B. Marcus

which, obviously, is a function of the 1-potential density of the Markov process associated with G. We make the following assumptions about G. Let Y ⊂ S be countable and let y₀ ∈ Y. Assume that

(5)  lim_{y→y₀, y∈Y} d(y,y₀) = 0,

(6)  sup_{d(y,y₀)≥δ, y∈Y} G(y) < ∞  a.s. ∀δ > 0,

(7)  lim_{δ→0} sup_{d(y,y₀)≥δ, y∈Y} G(y) = ∞  a.s.,

and let

(8)  a(δ) = E(sup_{d(y,y₀)≥δ, y∈Y} G(y)).

Note that by (7), lim_{δ→0} a(δ) = ∞. In Theorem 1 we present some estimates on the rate at which L_t^y goes to infinity as y goes to y₀.

THEOREM 1. Let X and G be associated processes as described above, so that, in particular, (5), (6) and (7) are satisfied on a countable subset Y of S. Let L = {L_t^y, (t,y) ∈ R⁺ × S} be the local time of X. Then

(9)  limsup_{δ→0} sup_{d(y,y₀)≥δ, y∈Y} L_t^y / a(δ) ≥ 2(L_t^{y₀})^{1/2}  ∀t ∈ R⁺ a.s.

and

(10)  limsup_{δ→0} sup_{d(y,y₀)≥δ, y∈Y} L_t^y / a²(δ) ≤ 1  ∀t ∈ R⁺ a.s.,

where a(δ) is given in (8).

Theorem 1 shows that (6) holds with G(y) replaced by L_t^y whatever the value of t, and that (7) holds with G(y) replaced by L_t^y as long as L_t^{y₀} > 0. But we know from [MR] Theorem IV that these statements are equivalent. So we could just as well have given the hypotheses in terms of the local time. However, since there is such an intimate relationship between the local time of X and the Gaussian process associated with X, and since the critical function a(δ) is given in terms of the Gaussian process, there is no reason not to give conditions on the associated Gaussian process as hypotheses for properties of the local time.

Obviously there is a big gap between (9) and (10). On the other hand, these estimates, which are a consequence of a great deal of work developed in [MR], are the best that we can obtain. We present them because we think that they are new results and hope that they will stimulate further investigation of this problem.

Equivalent upper and lower bounds for a(δ) have been obtained by Fernique and Talagrand. See [T], (7) and Theorem 1. (We say that functions f(δ) and g(δ) are equivalent, and write f(δ) ≈ g(δ), as δ → 0 (resp. as δ → ∞) if there exist constants 0 < c₁ ≤ c₂ < ∞ such that c₁f(δ) ≤ g(δ) ≤ c₂f(δ) for all δ ∈ [0,δ′], for some δ′ > 0 (resp. for all δ ∈ [A′,∞), for some A′ < ∞).) We will use a part of this result in the examples below.

Before we go on to the proof of Theorem 1 let us discuss some applications. What we are examining here is a local time which is unbounded at a point but bounded away from the point. One source of examples comes from symmetric Markov chains with a single instantaneous state in the neighborhood of which the local time blows up. Processes of this sort were considered in [MR] Section 10. In fact (9) is a general statement of a result which was obtained for special cases in [MR] Theorem 10.1, (10.11). An abundant source of Markov processes with unbounded local times are certain real valued Lévy processes. See [B] and also [MR]. However the local times of these processes are unbounded on all intervals. Still we can apply the Theorem to these processes by looking at them on a nowhere dense sequence in their state space which has a single limit point. By choosing sequences converging to the limit point at different rates we can get an idea of how quickly the local time blows up. The following Corollary of Theorem 1 gives some examples.

COROLLARY 2. Let X be a symmetric real valued Lévy process such that

(11)

where

(12)

at infinity. Let y_k = exp(−(log k)^β), k = 1,…,∞, where 0 < β < ∞, 0 < α < 1 and βα < 1. Let β̃ = β ∨ 1 and let Y = {{y_k}_{k=2}^∞, 0}. Then we have

(13)  limsup_{k→∞} L_t^{y_k} / (log k)^{(1−αβ̃)/2} ≥ C(L_t^0)^{1/2}  ∀t ∈ R⁺ a.s.


for some constant C > 0, and

(14)  limsup_{k→∞} L_t^{y_k} / (log k)^{1−αβ̃} ≤ C′  ∀t ∈ R⁺ a.s.

for some constant C′ < ∞. Equivalently, we have

(15)  limsup_{δ→0} sup_{d(y,0)≥δ, y∈Y} L_t^y / (log 1/δ)^{(1−αβ̃)/2β} ≥ C(L_t^0)^{1/2}  ∀t ∈ R⁺ a.s.

and

(16)  limsup_{δ→0} sup_{d(y,0)≥δ, y∈Y} L_t^y / (log 1/δ)^{(1−αβ̃)/β} ≤ C′  ∀t ∈ R⁺ a.s.

Writing the limits as in (15) and (16) gives a clearer idea of how these sequences blow up in the neighborhood of zero.

We will first give the proof of Theorem 1 and next, in Lemma 3, derive some

results on the suprema of Gaussian sequences. These results, along with Theorem

1, will immediately give Corollary 2.

PROOF OF THEOREM 1: The statement in (9) follows from Theorem 6.4 [MR] as modified in the proof of Theorem 10.1 of the same paper. In fact (9) is what is actually proved in Theorems 6.4 and 10.1 even though it is stated in (10.11) of [MR] in a special case. The main motivation for this note is to give a clearer statement of what is actually proved in Theorems 6.4 and 10.1 of [MR]. The only point that might be confusing is that Y ∩ {d(y,y₀) ≥ δ} is taken to be finite in the Theorems in [MR]. This is to insure that (6) of this paper is satisfied. In this note (6) is imposed as an hypothesis. (This enables us to apply Theorem 1 to a process that is defined on a countable dense set and has an isolated point at which it goes to infinity.)

The statement in (10) is not given in [MR] but follows easily along the same lines as many of the results contained in that paper. By [F] Theorem 3.2.1 we see that

(17)  lim_{δ→0} sup_{d(y,y₀)≥δ, y∈Y} |G(y)| / m(δ) = 1  a.s.,

where

(18)  m(δ) = median of (sup_{d(y,y₀)≥δ, y∈Y} |G(y)|).


Therefore, by [MR] Lemma 4.3, for almost all ω with respect to the probability space supporting G,

limsup_{δ→0} sup_{d(y,y₀)≥δ, y∈Y} (L_t^y + G²(y)/2) / m²(δ) = 1/2  for almost all t ∈ R⁺ a.s.,

where the almost sure is with respect to the probability space of X, i.e. P^x almost sure, for all x ∈ S. Recall that L and G are independent. Now if we take an ω for which (17) holds, we see that

(19)  limsup_{δ→0} sup_{d(y,y₀)≥δ, y∈Y} L_t^y / m²(δ) ≤ 1  for almost all t ∈ R⁺ a.s.

Since (2), (5) and (6) imply that sup_{y∈Y} EG²(y) < ∞, it follows from [F] Corollary 2.2.2 that we can replace m(δ) by a(δ) in (19). Finally we can replace "almost all t" by "all t" since L_t^y is monotone in t, and thus we get (10). (By much more elementary considerations we note that

(20)  m(δ) ≤ 2E sup_{d(y,y₀)≥δ, y∈Y} |G(y)| ≤ 4E sup_{d(y,y₀)≥δ, y∈Y} G(y) = 4a(δ),

which also gives (10), but with the constant 16.)

We will now give some specialized results on the rate of growth of the expected

value of some Gaussian sequences that are suited to the problem at hand.

LEMMA 3. Let {y_k}_{k=1}^∞ be a sequence of real numbers such that lim_{k→∞} y_k = 0. Let Y = {{y_k}_{k=1}^∞, 0} and let {G(y), y ∈ Y} be a mean zero Gaussian process such that for some h′ > 0

(22)  ρ₁²(h) ≤ E(G(h) − G(0))² ≤ ρ₂²(h)  ∀h ∈ [0,h′],

where the ρᵢ(h) are non-decreasing on [0,h′]. Assume that

(23)  y_{k−1} − y_k ↓ as k ↑ ∞

and

(24)

Then

(25)

for all k sufficiently large and constants 0 < C₁ ≤ C₂ < ∞ independent of {y_k}_{k=1}^∞. (C₁ is an absolute constant.) Furthermore, assume that ρ₂²(h) = E(G′(h) − G′(0))² for some mean zero Gaussian process G′ and that y₁ ≤ 1. Then we also have that

(26)  E(sup_{1≤j≤k} G(y_j)) ≤ C₃ (∫_{y_{k−1}−y_k}^{1} ρ₂(v) / (v (log 1/v)^{1/2}) dv + 1)

for some absolute constant C₃.

PROOF: To obtain the left-hand side of (25) we use the Sudakov bound for the expected value of the supremum of Gaussian processes, stated for the problem under consideration. Let Y_δ = {y ∈ Y : d(y,y₀) ≥ δ} and let N(Y_δ, ε) be the minimum number of closed balls of radius ε in the pseudo-metric d (see (4)) that covers Y_δ. Then there exists a universal constant K such that

(27)  E(sup_{y∈Y_δ} G(y)) ≥ K ε (log N(Y_δ, ε))^{1/2}.

(See [LT] for this and other results on Gaussian processes that are not given a specific reference.) Now, let Y_δ = {y ∈ Y : d(y,0) ≥ y_k}, i.e. δ = y_k. It is obvious that N(Y_δ, ρ₁(y_{k−1} − y_k)/2) = k and hence we get the left-hand side of (25).
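The flavor of the Sudakov-type lower bound can be seen in the extreme case of k independent standard Gaussians (pairwise d-distance √2), where the expected maximum grows like (2 log k)^{1/2}. A Monte Carlo sketch (sample sizes and seed are arbitrary choices, not from the text):

```python
import numpy as np

# Growth of the expected maximum of k independent standard Gaussians:
# E max_{j<=k} G_j ~ (2 log k)^{1/2}, the simplest instance of the
# Sudakov-type lower bound (sample size and seed are arbitrary).
rng = np.random.default_rng(2)
ratios = {}
for k in (10, 100, 1000):
    maxima = rng.standard_normal((10_000, k)).max(axis=1)
    ratios[k] = maxima.mean() / np.sqrt(2.0 * np.log(k))
print(ratios)
```

The ratio approaches 1 slowly, reflecting the second-order (log log k) corrections.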

To obtain the right-hand side of (25) we use the Borel-Cantelli Lemma applied to the sequence {(G(y_j) − G(0)) / (d(0,y_j)(3 log j)^{1/2})}_{j=2}^∞ to see that

sup_{2≤j<∞} (G(y_j) − G(0)) / (d(0,y_j)(3 log j)^{1/2}) < ∞  a.s.

Thus by [JM] II Corollary 4.7

E(sup_{2≤j<∞} (G(y_j) − G(0)) / (d(0,y_j)(3 log j)^{1/2})) = C

for some finite constant C. Therefore

(28)  E(sup_{2≤j≤k} G(y_j) − G(0)) ≤ C sup_{2≤j≤k} d(0,y_j)(3 log j)^{1/2}.

Using (24) and (28) we obtain the right-hand side of (25). (Note, the existence of G(0) implies that EG²(0) < ∞.)

To obtain (26) we use the following result of Fernique, stated for the problem at hand. (See [T] or [LT].) Let Y_k = {y_j}_{j=1}^k and let m be a probability measure on Y_k. Then, for {H(y), y ∈ Y_k} a Gaussian process,

(29)  E(sup_{y∈Y_k} H(y)) ≤ K′ sup_{x∈Y_k} ∫₀^∞ (log(1/m(B(x,ε))))^{1/2} dε

for some absolute constant K′, where B(x,ε) = {y ∈ Y_k : d(y,x) ≤ ε} and d is defined for H as in (4). It follows by [JM] II Lemma 4.4 that

(30)  E(sup_{1≤j≤k} G(y_j)) ≤ E(sup_{1≤j≤k} G′(y_j)).

We will show that (26) holds for G′. (We do this to use the regularity hypotheses imposed on ρ₂.) Let

m({y_j}) = (y_{j−1} − y_j)/(y₁ − y_k),  3 ≤ j ≤ k;   m({y_j}) = (y₁ − y₂)/(2(y₁ − y_k)),  j = 1, 2.

Define

(31)
\[
I_k=\int_0^\infty\Bigl(\log\frac{1}{m(B(y_k,\varepsilon))}\Bigr)^{1/2}d\varepsilon
\le \rho_2(y_{k-1}-y_k)\Bigl(\log\frac{2}{y_{k-1}-y_k}\Bigr)^{1/2}
+\sum_{j=1}^{k-2}\Bigl(\log\frac{2}{y_j-y_k}\Bigr)^{1/2}\bigl(\rho_2(y_j-y_k)-\rho_2(y_{j+1}-y_k)\bigr)
\]
\[
\le \rho_2(y_{k-1}-y_k)\Bigl(\log\frac{2}{y_{k-1}-y_k}\Bigr)^{1/2}
+\int_{y_{k-1}}^{y_1}\Bigl(\log\frac{2}{u-y_k}\Bigr)^{1/2}d\rho_2(u-y_k)
\]
\[
\le \rho_2(y_{k-1}-y_k)\Bigl(\log\frac{2}{y_{k-1}-y_k}\Bigr)^{1/2}
+\int_{y_{k-1}-y_k}^{y_1}\Bigl(\log\frac{2}{v}\Bigr)^{1/2}d\rho_2(v)
\le\int_{y_{k-1}-y_k}^{1}\frac{\rho_2(v)}{v(\log 2/v)^{1/2}}\,dv+\rho_2(1)(\log 2)^{1/2}.
\]

Using the same methods one can check that I_j ≤ 2I_k for 1 ≤ j < k. Thus we get (26) from (29), (30) and (31).
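In the situation of Corollary 2 one has ρ₂(v) = c(log 1/v)^{−α/2}, and the integral appearing in (26) can then be evaluated in closed form up to constants via ∫_a^1 (log 1/v)^{−(1+α)/2} v^{−1} dv = (2/(1−α))(log 1/a)^{(1−α)/2}. A numerical confirmation of this elementary identity (α = 1/2 is an illustrative choice):

```python
import numpy as np

# Identity used to evaluate the entropy integral in (26) for
# rho_2(v) = (log 1/v)^{-alpha/2}:
#   int_a^1 (log 1/v)^{-(1+alpha)/2} dv / v
#     = (2/(1-alpha)) * (log 1/a)^{(1-alpha)/2}.
alpha, a = 0.5, 1e-6
T = np.log(1.0 / a)
n = 5_000_000
t = (np.arange(n) + 0.5) * (T / n)            # substitution t = log(1/v)
numeric = np.sum(t ** (-(1.0 + alpha) / 2.0)) * (T / n)
closed = (2.0 / (1.0 - alpha)) * T ** ((1.0 - alpha) / 2.0)
print(numeric, closed)
```

The residual discrepancy comes from the integrable singularity at t = 0 under the midpoint rule.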

PROOF OF COROLLARY 2: Since X is a symmetric Lévy process and (1+ψ(λ))^{−1} ∈ L¹(dλ), its 1-potential density is finite and satisfies

u¹(0,h) = (1/π) ∫₀^∞ cos(λh) / (1+ψ(λ)) dλ.

Moreover

d²(0,h) = 2(u¹(0,0) − u¹(0,h)) = (2/π) ∫₀^∞ (1 − cos λh) / (1+ψ(λ)) dλ.

(See [MR] Section 9 and the references therein.) It is easy to see that d²(0,h) ≈ (log 1/h)^{−α} as h → 0.
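For Brownian motion, where ψ(λ) = λ²/2 and the 1-potential density is known in closed form to be u¹(0,h) = e^{−√2 h}/√2, the Fourier formula above can be checked numerically (this ψ is only an illustration of the formula, not one of the processes covered by Corollary 2):

```python
import numpy as np

# Check u^1(0,h) = (1/pi) int_0^inf cos(lambda h) / (1 + psi(lambda)) dlambda
# for psi(lambda) = lambda^2/2 (Brownian motion), against the closed form
# u^1(0,h) = exp(-sqrt(2) h) / sqrt(2).
h = 0.3
lam = np.linspace(0.0, 2000.0, 2_000_001)
dlam = lam[1] - lam[0]
vals = np.cos(lam * h) / (1.0 + lam**2 / 2.0)
u1 = (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dlam / np.pi
u1_closed = np.exp(-np.sqrt(2.0) * h) / np.sqrt(2.0)
print(u1, u1_closed)
```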


Therefore in order to calculate a(δ) we can use Lemma 3, applied to the Gaussian process associated with X, with ρ₁²(h) = c₁(log 1/h)^{−α} and ρ₂²(h) = c₂(log 1/h)^{−α} for some constants 0 < c₁ ≤ c₂ < ∞. Note that, under the hypotheses of Lemma 3,

a(y_k) = E(sup_{1≤i≤k} G(y_i))  and  sup_{d(y,0)≥y_k, y∈Y} L_t^y = sup_{i≤k} L_t^{y_i}.

Note also that

log(1/y_k) ≈ (log k)^β  as k → ∞

and

log(1/(y_{k−1} − y_k)) ≈ (log k)^{β̃}  as k → ∞.

When β ≥ 1 we use (25) and Theorem 1 to obtain (13) and (14), and when β ≤ 1 we use the left-hand side of (25), (26) and Theorem 1 to obtain (13) and (14). Since ρ₂²(h) is concave in h for h ∈ [0,h′] for some h′ > 0, it satisfies the requirements for (26). Also note that it doesn't matter in (13) and (14) if we write lim_{k→∞} sup_{i≤k} or simply lim_{k→∞}. We get (15) and (16) from (13) and (14) (written in the form lim_{k→∞} sup_{i≤k}) simply by taking δ = y_k and observing that log k = (log 1/δ)^{1/β}.

It is interesting that we need both upper bounds, (25) and (26), to make these simple estimates. The result in (25) is completely elementary and is well known. The bound in (26) is more subtle. But it is not as good as the one in (25) if {y_k} goes to zero quickly enough.

REFERENCES

B  Barlow, M.T. Necessary and sufficient conditions for the continuity of local time of Lévy processes, Ann. Probability 16, 1988, 1389-1427.

F  Fernique, X. Gaussian random vectors and their reproducing kernel Hilbert spaces, Technical Report Series No. 34, University of Ottawa, 1985.

JM  Jain, N.C. and Marcus, M.B. Continuity of subgaussian processes, in: Probability on Banach spaces, Advances in Probability Vol. 4, 1978, 81-196, Marcel Dekker, NY.

LT  Ledoux, M. and Talagrand, M. Probability in Banach spaces, preprint; to appear as a book published by Springer-Verlag, New York.

MR  Marcus, M.B. and Rosen, J. Sample path properties of the local times of strongly symmetric Markov processes via Gaussian processes, preprint.

T  Talagrand, M. Regularity of Gaussian processes, Acta Math. 159, 1987, 99-149.

Michael B. Marcus, Department of Mathematics, Texas A&M University, College Station, TX 77843


On the Continuity of Measure-Valued Processes

EDWIN PERKINS

Let Y = (Ω, F, F_t, Y_t, P^x) be a Hunt process with Borel semigroup P_t taking values in a topological Lusin space (E, 𝓔) (E is homeomorphic to a Borel subset of a compact metric space and 𝓔 is its Borel σ-field). M_F(E) and M_S(E) denote the spaces of finite measures and finite signed measures, respectively, on (E, 𝓔) with the weak (i.e. weak-*) topology. The (Y, −λ²/2)-superprocess is a continuous M_F(E)-valued Borel strong Markov process X. If m₀ ∈ M_F(E), the law Q_{m₀} on C([0,∞), M_F(E)) of X starting at X₀ = m₀ is uniquely determined by (write X_t(φ) for ∫φ(x)X_t(dx))

Q_{m₀}(exp(−X_t(φ))) = exp(−m₀(V_tφ)),  φ ∈ bp𝓔

(bp𝓔 is the set of bounded non-negative measurable functions on E), where V_tφ is the unique solution of

V_tφ(x) = P_tφ(x) − ∫₀^t P_s((V_{t−s}φ)²/2)(x) ds,  t ≥ 0, x ∈ E

(see Fitzsimmons (1988), (4.7), (2.3)).

Although the weak continuity of X only implies that X_t(φ) is continuous for each bounded continuous real-valued φ on E (φ ∈ C_b(E)), X_t(φ) is continuous on (0,∞) a.s. for all φ ∈ b𝓔 (the space of bounded measurable functions from E to R) for a large class of Y's. This fact seems to have first been noticed by Reimers (1989a, Thm 7.3), who proved it when Y is a Brownian motion by means of a nonstandard construction of X. He used it to show

X_t(A) = 0 for all t > 0, Q_{m₀}-a.s., if and only if A is Lebesgue null.

In this article we give an elementary standard proof of this stronger continuity for a broad class of Hunt processes Y.


262 E. Perkins

The only observation in this article which is perhaps non-trivial is Theorem 1, which is stated in the more general setting of an arbitrary M_S(E)-valued process X where (E, 𝓔) is any measurable space. Under suitable hypotheses, this result applies the Garsia-Rodemich-Rumsey (G.R.R.) theorem to establish the a.s.-continuity of X_t(φ) for φ ∈ b𝓔. The catch is that a direct application of the G.R.R. theorem only gives a continuous version of X_t(φ). The solution is that the G.R.R. theorem gives an explicit modulus of continuity which is preserved under bounded pointwise convergence and allows us to bootstrap up from the continuous functions. This result is applied to the superprocess X by considering X̄_t = X_t − X₀(P_t ·).

The existence of an elementary standard proof of this continuity result should not deter anyone from studying Reimers' nonstandard construction of X. The argument given here was motivated by the nonstandard proof, and the nonstandard perspective gives many other insights into the nature of this process and solutions of other stochastic p.d.e.'s. For example, Reimers (1989a) gives the only direct and rigorous connection between X and the stochastic p.d.e.

∂u/∂t = Δu/2 + √u Ẇ

in dimensions other than 1 (see Reimers (1989b) or Konno-Shiga (1988) for the one-dimensional case).

Notation. If {φ_n} ⊂ b𝓔 we write φ_n →_{bp} φ when φ_n converges to φ in the bounded pointwise sense. If Γ ⊂ b𝓔, Γ̄^{bp} denotes the bounded pointwise closure of Γ. Let b₁𝓔 = {φ ∈ b𝓔 : sup_{x∈E}|φ(x)| = ‖φ‖ ≤ 1}.

Theorem 1. Let (E, 𝓔) be a measurable space, {X_t : t ∈ [0,N]} be an M_S(E)-valued process and Γ ⊂ b₁𝓔. Assume Ψ: [0,∞) → [0,∞) is an increasing convex function increasing to ∞ at ∞ and p: [0,∞) → [0,∞) is an increasing function such that lim_{u→0+} p(u) = 0. We suppose:

(1)  If Γ_{φ,q} = ∫₀^N ∫₀^N Ψ(|X_u(φ) − X_t(φ)| / p(|u−t|))^q du dt, then there are c₀ > 0 and q > 1 such that P(Γ_{φ,q}) ≤ c₀ for all φ ∈ Γ.

(2)  X_t(φ) is continuous on [0,N] a.s. for all φ ∈ Γ.

(3)  ∫₀^N Ψ^{−1}(Γ r^{−2}) dp(r) < ∞ for all Γ > 0.


Then for any $ € ~bp, Xt ($) is continuous on [O,N) a.s. and lu-tl

(4) I Xu ($) - Xt ($) I S 8 i 'i'-l(r$,l r-2)dp(r) for all u,t € [O,N),

(5) p(r$,q) S ca and so r$,l < = a.s.

Proof. Let li be the set of functions $ in b§ for which (4) and (5)

hold. f Cli by (2) and the theorem of Garsia-Rodemich-Rumsey (Garsia

(1970». Assume ($ ) c: Hand $ ~p $. Then X ($ ) ~ X ($) for alI n - nun u u€[O,N) and a double application of Fatou's Lemma gives

(by (5».

p(r~ ) S ca implies ('i'(lx ($ )-Xt ($ ) IIp(lu-tl)): neR} is uniformly 'l'n,q u n n

integrable with respect to du dt dP on [O,N)2 x Q and hence N N

lim P( J J 1'i'(lx ($ )-Xt ($ ) IIp(lu-tlll-'i'(lx ($)-Xt ($) IIp(lu-tlll Idudt n~ O O u n n u

= O •

Therefore there is a subsequence (nk ) such that

(6)

Let k ~ = in

(recall $n € li) to conclude k

IXu ($) - Xt ($) I

a.s.

lu-tl J O

-1 -2 'i' (r~ 1r )dp(r)

'I'n ' k

lu-tl -1 -2 J 'i' (r$ lr )dp(r) O nk '

lu-tl -1 -2 = 8 ~ 'i' (r$,lr )dp(r)

(the last by (6». We have proved $€li and hence li is closed under

bounded pointwise convergence. This proves (4) and (5) for alI $ in -bp • f and the a.s. continuity of Xt ($) follows from this and (3).

We next state a simple special case of Theorem 1 which may be

easier to apply in practice. It's what one would have obtained by

applying the usual proof of Kolmogorov's continuity criterion rather

than the G.R.R. theorem.

Corollary 2. Let (E,~) be a measurable space, {Xt: t€[O,N)1 be an

Ms(E)-valued process and fCb1~' Assume p>l, 6,co > O satisfy 1• • p 1+6 P( Xu ($) - Xt(~) I ) s colu-ti for alI u,t€[O,N), $€f,

and Xt(~) is continuous on [O,N) a.s. for alI $€f. Then for any $€~bp

Page 267: Seminar on Stochastic Processes, 1990

264 E. Perkins

and any O<n<6/p there is a p(~,n,w) > O a.s. such that

Ix (~) - Xt(~)1 S lu-tl n for all u,tE(O,N] satisfying lu-tl < p(~,n,w). u ,

Proof. This is a simple application of Theorem 1. Take ~(r) = r P ,

pIuI = uE and q = p/p' where p'E(l,p) is sufficiently close to p, and

E < (2+6)/p is sufficiently close to (2+6)/p.1

Consider now the (y,-A2/2)-superprocess X, and the associated

Ms(E)-valued process

Xt(~) = Xt(~) - XO(Pt~) ~ E bE .

Then there is an orthogonal martingale measure (see Walsh(1986, Ch.2»

Z on E x (O,m) such that

and

(7)

~ E bE;

t Xt(~) = f f Pt_s~(x)dZ(s,x) a.s. for all t~O, ~ E bE;

O

(seeFitzsimmons (1989, (2.13». NotethatOm (Xt(~»=mo(Pt~) = o

XO(Pt~) 0mo-a . s . (e.g. by (7» and so the weak continuity of Xt implies

that of X.

It is a now a routine exercise to use (7) to verify the hypotheses

of Theorem 1 for sui table ~ and p. Let x x

h(v,6) = SUPII~IIS1I1Pv+6~ - Pv~11 = sUPXEEIP (Yv+6E.) - P (YvE.) I

where Ivi is the total variat ion of VEMS(E). Note that h(o,6) is

decreasing by the semigroup property and hs2. If N>O, let N 2 1/2

PN(r) = sUPr'sr (r'+ f h(v,r') dv) O

and if ~ E biE; let

N N 1/2 r = f f exp{qlx (~)-Xt(~) 1/6N PN(lu-tl)}dudt, q > O. N,~,q O O u

As Xt (l) is the diffusion on [O,m) with generator (x/2) d2/dx2

absorbed at O, the following lemma is a simple application of the

maximal inequality for the submartingale exp(9Xt (1» (see Knight

(1981,p.l00) for its transition density). Let X;(l)=SUPSSN Xs (l).

* Lemma 3 ° (~(l»A) S exp{-A/2N} for all A ~ 4mo (E). mo

Page 268: Seminar on Stochastic Processes, 1990

Continuity of Measure-Valued Processes

Theorem 4. As sume

(8)

(a)

(b)

(9)

particular rN,~,l < ~ for alI NER o - a.s. mo

+ (log rN,~/II~II, l)PN (Iu-t 1)]

for alI u,tE[O,N], NER Om - a.s. for each ~Eb~. o

The right-hand of (9) approaches O as lu-tl~O and therefore for each

265

~ E b~, Xt(~) is a.s. continuous on [O,~), Xt(~) is a.s. continuous on

(O,~), and Xt(~) is a.s continuous at O if and only if mo(Pt~) is.

Proof (a) If O SuS t < N and ~ E b1~' then using (7) we have for any

K > O . Om (IXu (~) - Xt (~) 1 /PN (u-t) ~ x)

o t

S O (1 f f P ~(x) - Pt ~(x)dZ(s,x) 1 ~ xPN(u-t)/2) mo O u-s -s U

+0 (1 f fP ~(x)dZ(s,x)1 ~XPN(U-t)/2) mo t u-s

2 2 S 4exp[-x PN(u-t) /8Kl

t 2 +0 (fX((p ~-Pt_s~)lds>K) mo O s u-s

U

+0 (fx((p ~)2)ds>K) mo t s u-s

(e.g. by (37.12) of Rogers-Williams (1987))

2 2 * t 2 S 4 exp[-x PN(u-t) /8Kl + Om2(~(1) ~ h(t-s,u-t) ds > K)

+ Om (~(1) (u-t) > K)

2 2 o 2 -1 S 4 exp{-x PN(u-t) /8Kl + exp{-K(2N PN(u-t)) 1

-1 + exp[-K(2N(u-t)) 1

by Lemma 3 providing that

(10) K ~ 4mo (E)PN(U-t)2.

Let K = x Nl/2PN(U-tI2/2. If x ~ 8mo (E1N-1/ 2, then (10) holds and the

above estimate implies

Om (Iiu(~) - it(~) I/PN(u-t) ~ x) S 6 exp{-x/(4N1/ 2)). o

Page 269: Seminar on Stochastic Processes, 1990

266 E. Perkins

(a) now follows by a trivial calculation.

(b) Let f denote the class of continuous functions in bl~ and

let NeN. We first check the hypotheses of Theorem 1 on [O,N] with

~(r) = exp{r/6Nl / 2}, q = 4/3 and p(r~ = PN(r). (a) implies (1). (2)

follows from the weak continuity of X. The monotonicity of h(o,r)

implies PN(r)2 S N Pl (r)2 and so (8) shows N 1

(11) f PN(r)r- dr < -. O

An integrat ion by parts shows that if 1 > O then for 0<6<N. N 2 2 2 N_l f In(1/r )dPN(r) + PN(6) In (1/6 ) = In(1/N )PN(N) + 2 f PN(r)r dr. 6 6

The right-hand side approaches a finite limit as 6~0 hence so does the

left side and (3) holds. This together with (11) and 2 6 2

PN(6)ln(1/6 ) S f In(1/r )dPN(r) O

shows that the right-hand side of (9) approaches O as lu-tl~o. We may -bp

now apply Theorem 1 to conclude that for alI f e bl~ = f and alI

u,te[O,N] , ~ ~ 1/2 Iu- tl -2

IXu(f) - Xt(f)1 S 48 N ~ In(rN,~,lr )dpN(r)

1/2 -1 = 48 N [(ln(rN,f,l) + 2In(lu-tl lIPN(lu-tl)

lu-tl 1 + 2 J r- dPN(rl].

O

This implies (9) for ~ebl~ and hence for aH ~ in b~ (consider ~/II~II).

since PN(r) approaches O as r~O and h(o,r) is decreasing it follows

that Pt~ is II II-continuous in te(O,-) for any feb~. The remaining

assertions in (b) are therefore obvious.1

Corollary 5. Assume Pt~(X) = f Pt(x,y)f(y)v(dy) for alI t>O, feb~

where v is a measure on E, and suppose

f -1 (12) supx IPu(x,y) - Pt(x,y) Iv(dy) S Cl(u-t)t for alI O<t<u.

(a) For any ~eb~ Om -a.s. Xt(~) is continuous on (0,-) and is o

continuous at O if and only if mo(ptf) is continuous at O.

(b) For alI f in b! and N in N there is a random variable K(N,f)

(finite Om -a.s.) such that

IXu(f) - X:(f)IS IIfli 96 Nl/2(4Cl+l)1/2[(log(1/lu-tlll lu-tl l / 2

+ K(N,f) lu_tI 1/2] + Im (P f)-m (Ptf) I o u o for alI u,t in [O,N] ~ -a.s.

o

Page 270: Seminar on Stochastic Processes, 1990

Continuity of Measure-Valued Processes 267

(c) If in addition Jpt (x,y)dmo(x) > O v-a.a.y for some to>O' o

then for any Ae~

Xt(A)=O for alI t>O Qm -a.s. if and only if v(A)=O. o

-1 Proof. (12) implies that h(v,r) ~ min(c1rv ,2) and hence 1/2 1/2 PN(r) ~ (4c1+1) r . (a) and (b) are now irnrnediate from Theorem 4

with K(N,cj» = log (rN,cj>/IIcj>II, 1) + 2. (c) Suppose vIA) = O. Then for each t>O Qm (Xt(A)) = mo (pt 1A) = O.

o Therefore Xt(A)=O for alI teQ>O Qm -a.s. and hence Xt(A)=O for alI t>O

o by the a.s.-continuity of Xt(A) on (O,~).

Conversely if to is as above and Xt (A)=O Qm a.s., then o o

O Qm (Xt (A)) = J f Pt (x,y)v(dy)mo(dx) o o A o

and hence vIA) O by the hypothesis on Pt .1 o

Corollary 6. If Pt is the semigroup of the syrnrnetric o-stable

process in Rd (Oe(O,2)) and hence Xt is the super-o-stable process, then

the hypotheses (and hence conclusions) of Corollary 5 hold for any

mo t- O with dv = dx and +

c = 2da-12 (d/a-1) + 1. 1

Proof. ~f Pa(y) i~ the density of the syrnrnetric a-stable

process ~n R at t~me t then Pt(y) is a strictly positive decreasing -l/a -dia function of lyl satisfying pt(yl = pt/c(YC lc for all c > O.

Using these facts it is easy to see that for t, r > O -l/a -dia

\Pt{Y) -pt+r(y)1 = 1Pt(Y) -Pt(y(l+r/t) ) (l+r/t) I -dia -l/a s pt(y) [1 - (l+r/t) ) + Pt (y(l+r/t) ) - pt(y)

(consider pt(y) > pt+r(y) or pt(y) < pt+r(y) separately). Integrate out

y to find

fIPt(y) - pt+r(y) Idy ~ 1 - {l+r/t)-d/o + (l+r/t)d/a_1 +

s 2da-1 2(d/a-1) (r/t) O<rst.

If r>t use the trivial upper bound 2 to complete the derivation of

(12). $\square$

If $X_t(\phi)$ is $Q_{m_0}$-a.s. continuous on $(0,\infty)$ for each $m_0 \in M_F(E)$ and $\phi \in b\mathcal{E}$, then, taking mean values, we see that $t \mapsto P_t\phi(x)$ is continuous on $(0,\infty)$ for each $x \in E$ and $\phi \in b\mathcal{E}$ (the necessary uniform integrability is given by Lemma 3).
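The total-variation bound behind (12) can be sanity-checked numerically in the simplest special case $d = 1$, $\alpha = 1$, where $p_t$ is the Cauchy density and the bound of the proof reads $\int|p_t - p_{t+r}|\,dy \le \min(2r/t, 2)$. The following sketch is our illustration only; the grid, truncation, and choice of case are ours, not the paper's.

```python
import numpy as np

def cauchy_density(t, y):
    # symmetric 1-stable (Cauchy) density on R: p_t(y) = t / (pi (t^2 + y^2))
    return t / (np.pi * (t * t + y * y))

def tv_distance(t, r, half_width=2000.0, step=2e-3):
    # Riemann-sum approximation of \int |p_t(y) - p_{t+r}(y)| dy;
    # the tail beyond half_width contributes only O(r / half_width)
    y = np.arange(-half_width, half_width, step)
    return np.abs(cauchy_density(t, y) - cauchy_density(t + r, y)).sum() * step

# for d = 1, alpha = 1 the constant 2 d alpha^{-1} 2^{(d/alpha - 1)^+} equals 2,
# so the proof's bound is min(2 r/t, 2)
for t, r in [(1.0, 1.0), (1.0, 0.5), (2.0, 0.25)]:
    assert tv_distance(t, r) <= min(2.0 * r / t, 2.0) + 1e-3
```

The computed distances stay well below the bound, as the proof's crude constants suggest.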

Open Problem. Is the converse true?

The hypothesis (8) of Theorem 4 implies (use the fact that $h(\cdot,r)$ is non-decreasing)

$$\int_0^1 \|P_{t+r}\phi - P_t\phi\|\,r^{-1}\,dr < \infty \quad \text{for all } \phi \in b\mathcal{E},\ t > 0,$$

and, in particular, $t \mapsto P_t\phi$ is norm-continuous on $(0,\infty)$.

References

P. Fitzsimmons (1988). Construction and regularity of measure-valued

Markov branching processes. Israel J. Math 64, 337-361.

P. Fitzsimmons (1989). Correction and addendum to: Construction and

regularity of measure-valued Markov branching processes, Israel J.

Math, to appear.

A. Garsia (1970). Gaussian processes with multidimensional time parameter, 6th Berkeley Symposium on Math. Statist. Probability, Vol. 2, 366-374, University of California Press, Berkeley.

F. Knight (1981). Essentials of Brownian Motion and Diffusion, Amer. Math. Soc., Providence.

N. Konno and T. Shiga (1988). Stochastic partial differential equations for some measure-valued diffusions, Probab. Theory Rel. Fields 79, 201-226.

M. Reimers (1989a). Hyperfinite methods applied to the critical branching diffusion, Probab. Theory Rel. Fields 81, 11-28.

M. Reimers (1989b). One dimensional stochastic partial differential equations and the branching measure diffusion, Probab. Theory Rel. Fields 81, 319-340.

L.C.G. Rogers and D. Williams (1987). Diffusions, Markov Processes and Martingales, Volume 2: Itô Calculus, Wiley, Chichester.

J.B. Walsh (1986). An introduction to stochastic partial differential

equations. Lecture Notes in Mathematics 1180, Springer-Verlag, New

York.

Edwin Perkins

Department of Mathematics

University of British Columbia

Vancouver, B.C. V6T 1Y4

Canada


A Remark on Regularity of Excessive Functions for Certain Diffusions

Z. R. POP-STOJANOVIC

In an earlier paper [4], the author has shown that a diffusion process whose potential kernel satisfies certain analytic conditions has all of its excessive harmonic functions, which are not identically infinite, continuous. In a subsequent paper [5], the same author has shown that under these conditions the excessiveness of its nonnegative harmonic functions is automatic. In this paper we show that a regularity condition for the excessive functions, introduced here, implies that the Riesz measure does not charge the semi-polar sets of the process.

Let $X = (\Omega, \mathcal{F}, \mathcal{F}_t, X_t, \theta_t, P^x)$ denote a transient diffusion, i.e., a strong Markov process with continuous sample paths on a locally compact Hausdorff state space $(E, \mathcal{E})$ with a countable base. Following [2,3], we assume the existence of a potential kernel with the following properties.

Let

$$U(x,dy) = u(x,y)\,\xi(dy)$$

denote this kernel, where $\xi$ is a Radon measure, and the potential density function $u$ is such that:

(a) For every $x$ and every $y$, the function $(x,y) \mapsto u^{-1}(x,y)$ is finite and continuous; in particular, this implies $u(x,y) > 0$ for all $(x,y)$.


(b) $u(x,y) = \infty$ if and only if $x = y$.

(Other notation and concepts throughout this paper are generally as in Blumenthal and Getoor [1].)

Remark. It can be shown that our assumptions imply the existence of a strong Markov dual. Hence, one can refer to Chapter 6 of [1] and apply the techniques developed there in order to obtain the results presented here. However, it is not clear that all the assumptions of that chapter are consequences of the assumptions made here. Thus, we choose to follow the direct path.

The next two Propositions are rather expected, and we present them without proofs.

Proposition 1. Let $s$ be an excessive function (for $X$), and $(T_n)$ a sequence of terminal times tending to infinity as $n \to \infty$. We can write

$$s = p + h$$

where $p$ and $h$ are excessive, $P_{T_n}h = h$ for all $n$, and $P_{T_n}p \downarrow 0$ almost everywhere as $n \to \infty$.

Proposition 2. Let $(T_n)$ be a sequence of terminal times increasing to infinity as $n \to \infty$. One has

$$u(x,y) = v(x,y) + w(x,y)$$

where $v$ and $w$ are excessive functions for each $y$, $P_{T_n}v(\cdot,y) \downarrow 0$ almost everywhere for each $y$ as $n \to \infty$, and

$$P_{T_n}w(x,y) = w(x,y)$$

for all $x,y$. Moreover, the set of $y$'s such that $w(\cdot,y)$ is not identically zero is a polar set.

The proof of this Proposition follows directly from Proposition 1.

Now, toward our main goal, we have the following definition.

Definition. We say that an excessive function $s$ is of class (D) if for each sequence $(T_n)$ of stopping times increasing to infinity as $n \to \infty$,

$$P_{T_n}s \downarrow 0$$

almost everywhere as $n \to \infty$. We say that an excessive function $s$ is regular if for every sequence of stopping times $(T_n)$ increasing to a stopping time $T$ as $n \to \infty$,

$$P_{T_n}s \downarrow P_T s$$

almost everywhere as $n \to \infty$.

Using the two given Propositions and the representation of excessive functions, we have the following result.

Theorem. A potential is of class (D) if and only if its Riesz measure does not charge polar sets. A potential is regular if and only if its Riesz measure does not charge semi-polar sets.
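The "representation of excessive functions" invoked here is the Riesz decomposition; a sketch of the standard statement in the present notation (our paraphrase for orientation, not a quotation from the paper):

```latex
% Riesz decomposition (standard form; see [1], Chapter 6, and [2]):
% every excessive function s, not identically infinite, can be written as
s = U\rho + h, \qquad U\rho(x) = \int u(x,y)\,\rho(dy),
% where \rho is the Riesz measure of s and h is harmonic in the sense of
% Proposition 1 (P_{T_n} h = h along terminal times T_n \uparrow \infty).
% The Theorem above concerns the potential part U\rho.
```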

Proof. We shall prove here the second statement of the theorem. The proof is presented in two steps.

Step 1. Here we show the following: if $g$ is a function such that

$$\int g(x)u(x,y)\,dx \le 1,$$

then for every Borel set $A$, the set

(1) $B = \{y :\ y \in A,\ \int g(x)P_A u(x,y)\,dx < \int g(x)u(x,y)\,dx\}$

is a semi-polar set.

To see this, it is sufficient to show that every compact subset of $B$ is a semi-polar set. Let $L$ be a compact subset of $B$. We can write $L = S + Q$, where $S$ is a semi-polar set and $Q$ is finely closed with all its points being regular points for $Q$. Let

$$s = P_Q 1.$$

The Riesz measure $\rho$ of $s$ is concentrated on $Q$ (see Corollary 3, p. 178 in [2]). In particular, it is concentrated on $L$. Clearly, $P_Q s = s$,


which implies that for every x,

(2) $P_Q u(x,y) = u(x,y)$

for $\rho$-almost all $y$. By the Fubini theorem, (2) holds for almost all $x$ and $\rho$-almost all $y$. In particular, since $P_A u(x,y) \ge P_Q u(x,y)$,

$$\int g(x)P_A u(x,y)\,dx = \int g(x)u(x,y)\,dx$$

for $\rho$-almost all $y$. But this contradicts (1), because $\rho$ is concentrated on $L \subset B$.

Step 2. Here we show that if $\rho$ does not charge semi-polar sets, then $s = U\rho$ is regular.

Indeed, since the sum of regular potentials is a regular potential and since $\rho$ is a Radon measure, we may assume $\rho$ to be a finite measure. Let $g$ be a function such that $\int g(x)\,dx = 1$ and $\int g(x)s(x)\,dx < \infty$.

Let $\varepsilon > 0$ and a sequence $(T_n)$ of stopping times increasing to a stopping time $T$ as $n \to \infty$ be given, and let $Uf_k \uparrow s$ as $k \to \infty$. By the Egorov theorem there are a compact set $K$ and a positive integer $m$ such that

(3) $s \le Uf_m + \varepsilon$ on $K$, and $\int(s - s_K)g(x)\,dx \le \varepsilon$,

where $s_K = P_K s$. Since $\rho$ does not charge semi-polar sets, it follows from the first observation that $P_K s_K = s_K$. This fact together with (3) implies

(4) $s_K \le Uf_m + \varepsilon$ everywhere, and $\int(s - s_K)g(x)\,dx \le \varepsilon$.

Now, by using the first inequality in (4), one gets:

(5) $\displaystyle\int \lim_n P_{T_n}s_K(x)g(x)\,dx \le \varepsilon + \int \lim_n P_{T_n}Uf_m(x)g(x)\,dx = \varepsilon + \int g(x)P_T Uf_m(x)\,dx \le \varepsilon + \int g(x)P_T s(x)\,dx,$

where the regularity of $Uf_m$ and the fact that $Uf_m \le s$ have been used.

On the other hand, since $s - s_K$ is an excessive function,

$$P_{T_n}(s - s_K) \le s - s_K$$

for all $n$, so by the Fatou lemma and the second inequality in (4), one gets:


$$\int g(x)\lim_n P_{T_n}s(x)\,dx \le \varepsilon + \int g(x)\lim_n P_{T_n}s_K(x)\,dx.$$

This inequality and (5) imply that

$$\int g(x)\lim_n P_{T_n}s(x)\,dx \le 2\varepsilon + \int g(x)P_T s(x)\,dx,$$

which in turn implies, with $\varepsilon$ arbitrary and $\lim_n P_{T_n}s(x) \ge P_T s(x)$, that for almost all $x$, $\lim_n P_{T_n}s(x) = P_T s(x)$, as desired.

The proof of the second statement of the Theorem now follows from the two observations just proved. $\square$

REFERENCES


[1] R. M. Blumenthal and R. K. Getoor, Markov Processes and Potential Theory, New York, Academic Press, 1968.

[2] K. L. Chung and M. Rao, A new setting for Potential Theory, Ann. Inst. Fourier 30, 1980, 167-198.

[3] K. L. Chung, Probabilistic approach in Potential Theory to the equilibrium problem, Ann. Inst. Fourier 23, 1973.

[4] Z. R. Pop-Stojanovic, Continuity of Excessive Harmonic Functions for certain Diffusions, Proc. Amer. Math. Soc., Vol. 103, No. 2, 1988, 607-611.

[5] Z. R. Pop-Stojanovic, Excessiveness of Harmonic Functions for Certain Diffusions, Journal of Theoretical Probability, Vol. 2, No. 4, 1989, 503-508.

Z. R. Pop-Stojanovic, Department of Mathematics, University of Florida, Gainesville, Florida 32611


A(t,Bt ) is not a Semimartingale

L.C.G. ROGERS and J.B. WALSH

1. Introduction. Let $(B_t)_{t \ge 0}$ be Brownian motion on $\mathbb{R}$, $B_0 = 0$, and for each real $x$ define

$$A(t,x) \equiv \int_0^t 1_{(-\infty,x]}(B_s)\,ds = \int_{-\infty}^x L(t,y)\,dy,$$

where $\{L(t,y) : t \ge 0,\ y \in \mathbb{R}\}$ is the local time process of $B$. The process $A(t,x)$ enters naturally into the study of the Brownian excursion filtration (see Rogers & Walsh [1],[2], and Walsh [4]). In [2], it was necessary to consider the occupation density of the process $Y_t \equiv A(t,B_t)$, which would have been easy if $Y$ were a semimartingale; it is not, and the aim of this paper is to prove this.
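The definition of $A(t,x)$ is easy to experiment with. The following sketch (our illustration, not from the paper) simulates a Brownian path on a grid, evaluates the Riemann-sum analogue of $Y_t = A(t,B_t)$, and checks the two facts that are exact: $0 \le Y_t \le t$, and $E\,A(1,0) = 1/2$ by symmetry.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 400, 4000, 1.0
dt = T / n
# simulate m Brownian paths on a grid of n steps
dB = rng.normal(0.0, np.sqrt(dt), size=(m, n))
B = np.cumsum(dB, axis=1)

# A(t, x) = \int_0^t 1{B_s <= x} ds, approximated by a Riemann sum;
# Y_t = A(t, B_t) is the time spent at or below the current level
def Y(B, j, dt):
    # occupation time of (-inf, B_j] up to step j, path-wise
    return dt * np.sum(B[:, : j + 1] <= B[:, [j]], axis=1)

Yn = Y(B, n - 1, dt)
assert np.all(Yn >= 0) and np.all(Yn <= T + 1e-9)

# sanity check: E[A(1, 0)] = E \int_0^1 1{B_s <= 0} ds = 1/2 by symmetry
A10 = (dt * np.sum(B <= 0.0, axis=1)).mean()
assert abs(A10 - 0.5) < 0.05
```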

To state the result, we need to set up some notation. Let $(X_t)_{0 \le t \le 1}$ be the process $A(t,B_t) - \int_0^t L(u,B_u)\,dB_u$, and define for $j,n \in \mathbb{N}$, $1 \le j \le 2^n$,

$$\Delta_j X \equiv X(j2^{-n}) - X((j-1)2^{-n})$$

and

$$V_p^n \equiv \sum_{j=1}^{2^n} |\Delta_j X|^p.$$

THEOREM 1. For any $p > 4/3$,

(1) $V_p^n \xrightarrow{\text{a.s.}} 0 \quad (n \to \infty).$

For any $p < 4/3$,

(2) $\limsup_{n \to \infty} V_p^n = +\infty$ a.s.


This proves conclusively that $X$ (and hence $Y$) cannot be a semimartingale, because if it were, it could be written as $X = M + A$, where $M$ is a local martingale and $A$ is a finite-variation process (both continuous since $X$ is; see Rogers & Williams [3], VI.40). Now since $V_2^n \to 0$, $M$ must be zero, and $X = A$; but $\limsup V_1^n = +\infty$ rules out the possibility that $X$ is finite-variation, as we shall see.

In outline, the proof runs as follows. Firstly, we estimate $E|\Delta_j X|^p$ from above and deduce from this that $EV_p^n \to 0$ for any $p > 4/3$; in fact, the $L^1$ convergence is sufficiently rapid that $V_p^n \xrightarrow{\text{a.s.}} 0$. Next we estimate $E|\Delta_j X|^p$ from below, and combine the estimates to prove that $EV_{4/3}^n$ is bounded away from $0$ and from infinity. The upper bound allows us to prove that $\{V_{4/3}^n : n \ge 1\}$ is uniformly integrable, and hence that $P(\limsup V_{4/3}^n > 0) > 0$. From this, by Hölder's inequality, we prove that for any $p < 4/3$, $P[\limsup V_p^n = +\infty] > 0$. Finally, an application of the Blumenthal 0-1 law allows us to conclude.

In a forthcoming paper, we analyse the exact $4/3$-variation of $X$ completely, and prove that it is $\gamma\int_0^1 L(s,B_s)^{2/3}\,ds$, from which the present conclusions (and more) follow. (Here, $\gamma$ is $4\pi^{-1/2}\Gamma(7/6)E(\int L(1,x)^2\,dx)^{2/3}$.) The proof of this is a great deal more intricate, however, and this paper shows how to achieve the lesser result with less effort.

2. Upper bounds. To lighten the notation, we are going to perform a scaling so that there is only one parameter involved. It is elementary to prove that for any $c > 0$, the following identities in law hold:

(3) $(L(t,x))_{t \ge 0,\,x \in \mathbb{R}} \overset{d}{=} \bigl(cL(t/c^2, x/c)\bigr)_{t \ge 0,\,x \in \mathbb{R}};$

(4) $(A(t,x))_{t \ge 0,\,x \in \mathbb{R}} \overset{d}{=} \bigl(c^2 A(t/c^2, x/c)\bigr)_{t \ge 0,\,x \in \mathbb{R}};$

(5) $(X_t)_{t \ge 0} \overset{d}{=} \bigl(c^2 X(t/c^2)\bigr)_{t \ge 0}.$
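The identity (5) for $X$ follows from (3), (4) and the effect of Brownian scaling on the stochastic-integral term; a sketch of the computation (our reconstruction, with $\tilde B_t = cB(t/c^2)$):

```latex
% with \tilde B_t = c B(t/c^2), identity (3) gives \tilde L(t,x) = c L(t/c^2, x/c), so
\int_0^t \tilde L(s,\tilde B_s)\, d\tilde B_s
   = \int_0^{t/c^2} c\,L(u,B_u)\cdot c\, dB_u
   = c^2 \int_0^{t/c^2} L(u,B_u)\, dB_u ,
% and combining with (4) for the occupation-time term yields
% (X_t)_{t \ge 0} \overset{d}{=} (c^2 X(t/c^2))_{t \ge 0}.
```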


Hence $V_p^n \overset{d}{=} N^{-p}\sum_{j=1}^N |X_j - X_{j-1}|^p$, where $N \equiv 2^n$. We can write the increment $X_{j+1} - X_j$ in the following form. Let us write

$$\int_j^{j+1} 1\{B_u \le B_{j+1}\}\,du \equiv Z_{j,1}, \qquad \int_{B_j}^{B_{j+1}} \{L(j,x) - L(j,B_j)\}\,dx \equiv Z_{j,2},$$

$$\int_j^{j+1} \{L(s,B_s) - L(j,B_s)\}\,dB_s \equiv Z_{j,3}, \qquad \int_j^{j+1} \{L(j,B_s) - L(j,B_j)\}\,dB_s \equiv Z_{j,4},$$

so that

(7) $X_{j+1} - X_j = Z_{j,1} + Z_{j,2} - Z_{j,3} - Z_{j,4}.$

We now estimate the various terms. For $p \ge 2$, with $c$ denoting a variable constant:

(i) $E|Z_{j,1}|^p \le 1$, since $0 \le Z_{j,1} \le 1$.

(ii) $E|Z_{j,3}|^p \equiv E\bigl|\int_j^{j+1}(L(j,B_s) - L(s,B_s))\,dB_s\bigr|^p$

$$\le cE\Bigl(\int_j^{j+1}(L(j,B_s) - L(s,B_s))^2\,ds\Bigr)^{p/2} \le cE\int_j^{j+1}|L(j,B_s) - L(s,B_s)|^p\,ds = c\int_0^1 EL(u,0)^p\,du,$$

by reversing the Brownian motion from $(s,B_s)$;

$$\le c.$$


(iii) By Tanaka's formula,

$$L(t,x) - L(t,0) = |B_t - x| - |x| - |B_t| - \int_0^t(\operatorname{sgn}(B_s - x) - \operatorname{sgn}(B_s))\,dB_s,$$

and the integrand is non-zero only when $B_s$ lies between $0$ and $x$, so we have the estimate

$$E|L(t,x) - L(t,0)|^p \le c|x|^p + cE\Bigl|\int_0^t 1_{(0<B_s<|x|)}\,ds\Bigr|^{p/2};$$

but

$$E\Bigl|\int_0^t 1_{(0<B_s<|x|)}\,ds\Bigr|^{p/2} = E\Bigl|\int_0^{|x|} L(t,y)\,dy\Bigr|^{p/2} = t^{p/2}E\Bigl(\int_0^{|x|/\sqrt{t}} L(1,y)\,dy\Bigr)^{p/2},$$

using the scaling relationship (3);

$$\le t^{p/2}\Bigl(\frac{|x|}{\sqrt{t}}\Bigr)^{p/2-1}E\int_0^{|x|/\sqrt{t}} L(1,y)^{p/2}\,dy \le ct^{p/2}\Bigl(\frac{|x|}{\sqrt{t}}\Bigr)^{p/2-1}\frac{|x|}{\sqrt{t}} = c|x|^{p/2}t^{p/4}.$$

Combining, for $p \ge 2$,

$$E|L(t,x) - L(t,0)|^p \le c\{(|x| \wedge \sqrt{t})^p + |x|^{p/2}t^{p/4}\}.$$

Hence for $p \ge 2$,

(iv) $E|Z_{j,2}|^p = E\bigl|\int_{B_j}^{B_{j+1}}\{L(j,x) - L(j,B_j)\}\,dx\bigr|^p = E\bigl|\int_0^{W_1}\{L(j,x) - L(j,0)\}\,dx\bigr|^p,$

where $W$ is a Brownian motion independent of $(B_s)_{0 \le s \le j}$;

$$\le E\Bigl(\int_0^\infty 1_{(x \le |W_1|)}|L(j,x) - L(j,0)|^p\,dx\,|W_1|^{p-1}\Bigr) = \int_0^\infty dx\,E|L(j,x) - L(j,0)|^p\,E(|W_1|^{p-1};\ |W_1| > x),$$


and the function $\Phi_p(x) \equiv E(|W_1|^{p-1};\ |W_1| > x)$ decreases rapidly, so

$$E|Z_{j,2}|^p \le c\int_0^\infty\bigl((|x| \wedge \sqrt{j})^p + |x|^{p/2}j^{p/4}\bigr)\Phi_p(x)\,dx \le c(1 + j^{p/4}).$$

(v) $E|Z_{j,4}|^p \equiv E\bigl|\int_j^{j+1}(L(j,B_s) - L(j,B_j))\,dB_s\bigr|^p \le cE\Bigl(\int_0^1(L(j,W_s) - L(j,0))^2\,ds\Bigr)^{p/2},$

where $W$ is a Brownian motion independent of $(B_s)_{0 \le s \le j}$;

$$\le cE\int_0^1|L(j,W_s) - L(j,0)|^p\,ds = c\int g_1(y)E|L(j,y) - L(j,0)|^p\,dy,$$

where $g_1$ is the Green function of Brownian motion on $[0,1]$;

$$\le c\int g_1(y)\{(|y| \wedge \sqrt{j})^p + |y|^{p/2}j^{p/4}\}\,dy \le c(1 + j^{p/4}),$$

by (iii).

Thus of the four terms in (7) making up $X_{j+1} - X_j$, the $p$th moments of $Z_{j,1}$ and $Z_{j,3}$ are bounded, and the $p$th moments of $Z_{j,2}$ and $Z_{j,4}$ grow at most like $1 + j^{p/4}$. (Notice that the bounds for the $p$th moments, proved only for $p \ge 2$, extend to all $p > 0$ by Hölder's inequality.) We shall soon show that this is the true growth rate. Firstly, though, we complete the upper bound estimation by replacing $X_{j+1} - X_j$ by something more tractable, namely

(9) $\xi_j \equiv \displaystyle\int_{B_j}^{B_{j+1}} L(j,x)\,dx - \int_j^{j+1} L(j,B_s)\,dB_s = \int_{B_j}^{B_{j+1}}\{L(j,x) - L(j,B_j)\}\,dx - \int_j^{j+1}\{L(j,B_s) - L(j,B_j)\}\,dB_s.$

To see that this is negligibly different from $X_{j+1} - X_j$, observe the elementary inequality, valid for all $p \ge 1$ and $a,b \in \mathbb{R}$:

(10) $\bigl||a|^p - |b|^p\bigr| \le p|a - b|\,\bigl(|a|^{p-1} \vee |b|^{p-1}\bigr).$
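Inequality (10) follows from the mean value theorem applied to $x \mapsto |x|^p$; a quick randomized check (our illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# (10):  | |a|^p - |b|^p | <= p |a - b| ( |a|^{p-1} \/ |b|^{p-1} ),  p >= 1
for _ in range(1000):
    a, b = rng.normal(scale=3.0, size=2)
    p = 1.0 + 3.0 * rng.random()
    lhs = abs(abs(a) ** p - abs(b) ** p)
    rhs = p * abs(a - b) * max(abs(a) ** (p - 1), abs(b) ** (p - 1))
    assert lhs <= rhs + 1e-12
```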


Now since $\xi_j = Z_{j,2} - Z_{j,4} = X_{j+1} - X_j - Z_{j,1} + Z_{j,3}$, we conclude from (10) that

$$E\bigl||\xi_j|^p - |X_{j+1} - X_j|^p\bigr| \le pE\bigl\{|Z_{j,1} - Z_{j,3}|\,(|\xi_j|^{p-1} \vee |X_{j+1} - X_j|^{p-1})\bigr\} \le p\bigl(E|Z_{j,1} - Z_{j,3}|^a\bigr)^{1/a}\bigl(E\{|\xi_j|^{b(p-1)} + |X_{j+1} - X_j|^{b(p-1)}\}\bigr)^{1/b}$$

for any $a,b > 1$ such that $a^{-1} + b^{-1} = 1$;

$$\le c(1 + j^{(p-1)/4}),$$

using the estimates (i), (ii), (iv) and (v). Thus, since $V_p^n \overset{d}{=} N^{-p}\sum_{j=1}^N |X_j - X_{j-1}|^p$, we have for $p > 1$

$$E\Bigl|N^{-p}\sum_{j=0}^{N-1}\bigl(|\xi_j|^p - |X_{j+1} - X_j|^p\bigr)\Bigr| \le cN^{-p}\sum_{j=0}^{N-1}\bigl(1 + j^{(p-1)/4}\bigr) \le c\bigl(N^{1-p} + N^{-3(p-1)/4}\bigr) \to 0 \quad (N \to \infty),$$

so that the scaled dyadic sums

$$\tilde V_p^n \equiv \sum_{j=1}^N\Bigl|\int_{B((j-1)2^{-n})}^{B(j2^{-n})} L((j-1)2^{-n},x)\,dx - \int_{(j-1)2^{-n}}^{j2^{-n}} L((j-1)2^{-n},B_s)\,dB_s\Bigr|^p \overset{d}{=} N^{-p}\sum_{j=1}^N |\xi_{j-1}|^p$$

have the same limiting behaviour as $V_p^n$.

Henceforth we shall concentrate on $\tilde V_p^n$, that is, on the $\xi_j$. Notice that we can say immediately that for $p > 4/3$

$$EV_p^n = N^{-p}E\sum_{j=1}^N |X_j - X_{j-1}|^p \le cN^{-p}\sum_{j=1}^N(1 + j^{p/4}) \le cN^{-p}\bigl(1 + N^{1+p/4}\bigr),$$


so that not only does $V_p^n \to 0$ in $L^1$, but the convergence is geometrically fast in $n$ (since $1 + p/4 - p < 0$), so there is even almost sure convergence. This proves statement (1) of Theorem 1.
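Theorem 1 is a $p$-variation dichotomy with critical exponent $4/3$ in place of the classical exponent $2$ for Brownian motion itself. As a runnable illustration of such a dichotomy (for $B$, not for $X$ — computing $X$ would require the stochastic integral $\int L\,dB$), the following sketch (ours) evaluates dyadic $p$-variation sums of a simulated Brownian path:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16                      # dyadic level; N = 2^n increments on [0, 1]
N = 2 ** n
dB = rng.normal(0.0, np.sqrt(1.0 / N), size=N)

def vpn(p):
    # V_p^n = sum of |increment|^p over the dyadic partition
    return np.sum(np.abs(dB) ** p)

# classical dichotomy for Brownian motion: V_p^n -> 0 for p > 2,
# V_2^n -> t = 1 (quadratic variation), V_p^n -> infinity for p < 2
assert vpn(3.0) < 0.05
assert abs(vpn(2.0) - 1.0) < 0.05
assert vpn(1.0) > 50.0
```

For the process $X$ of the paper the same pattern holds with the cut at $p = 4/3$, which is the content of (1) and (2).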

3. Lower bounds. We can compute

$$E(\xi_j \mid \mathcal{F}_j) = E\Bigl[\int_{B_j}^{B_{j+1}} L(j,x)\,dx \;\Big|\; \mathcal{F}_j\Bigr] = \int_0^\infty\{L(j,B_j + x) - L(j,B_j - x)\}\bar\Phi(x)\,dx,$$

where $\bar\Phi(x) \equiv P(B_1 > x)$ is the tail of the standard normal distribution;

$$\overset{d}{=} \int_0^\infty\{L(j,x) - L(j,-x)\}\bar\Phi(x)\,dx = \int_0^\infty\bigl(|B_j - x| - |B_j + x|\bigr)\bar\Phi(x)\,dx + 2\int_0^\infty\Bigl(\int_0^j 1_{[-x,x]}(B_s)\,dB_s\Bigr)\bar\Phi(x)\,dx,$$

by Tanaka's formula.

We estimate the $p$th moment of each piece in turn, the first being negligible in comparison with the second. Indeed, since $\bigl||B_j - x| - |B_j + x|\bigr| \le 2|x|$, the first term is actually bounded, and for the second we compute

$$2\int_0^\infty\Bigl(\int_0^j 1_{[-x,x]}(B_s)\,dB_s\Bigr)\bar\Phi(x)\,dx = 2\int_0^j f(B_s)\,dB_s, \qquad f(x) \equiv \int_{|x|}^\infty \bar\Phi(y)\,dy,$$

so that by the Burkholder-Davis-Gundy inequalities, the $p$th moment of the second term is equivalent to

$$E\Bigl(\int_0^j f(B_s)^2\,ds\Bigr)^{p/2} = E\Bigl(\int f(x)^2 L(j,x)\,dx\Bigr)^{p/2} = j^{p/4}E\Bigl(\int f(x)^2 L(1,x/\sqrt{j})\,dx\Bigr)^{p/2} \sim j^{p/4}E\Bigl(\int f(x)^2 L(1,0)\,dx\Bigr)^{p/2}$$

as $j \to \infty$. Thus we have for each $p \ge 1$ that

(11) $E|\xi_j|^p \ge c_p\,j^{p/4}$ for all $j \ge 1,$


which, combined with the bounds of §2, implies that for each $p \ge 1$ there are constants $0 < c_p < C_p < \infty$ such that for all $j \ge 0$

(12) $c_p(1 + j^{p/4}) \le E|\xi_j|^p \le C_p(1 + j^{p/4}).$

Hence in particular

(13) $0 < \inf_n E\tilde V_{4/3}^n \le \sup_n E\tilde V_{4/3}^n < \infty,$

and for each $p < 4/3$

(14) $\lim_{n \to \infty} E\tilde V_p^n = +\infty,$

making the conclusion of the Theorem look very likely.

4. The final steps. We shall begin by proving that $\{\tilde V_{4/3}^n : n \ge 0\}$ is uniformly integrable. Indeed, for each $p \ge 1$,

$$\|\tilde V_p^n\|_2 = \Bigl\|N^{-p}\sum_{j=1}^N |\xi_{j-1}|^p\Bigr\|_2 \le N^{-p}\sum_{j=1}^N\bigl\||\xi_{j-1}|^p\bigr\|_2 \le cN^{-p}\sum_{j=1}^N(1 + j^{p/4}),$$

by (12). Hence for $p = 4/3$ the sequence $(\tilde V_p^n)$ is bounded in $L^2$, therefore uniformly integrable. Hence

(15) $P(\limsup_n \tilde V_{4/3}^n > 0) > 0,$

because otherwise $\tilde V_{4/3}^n \to 0$ a.s., and hence in $L^1$ (by uniform integrability), contradicting (13). Now define

$$V_p^n(t) \equiv \sum_{j=1}^{[2^n t]} |\Delta_j X|^p,$$


and let

$$F_k \equiv \Bigl\{\limsup_{n \to \infty}\sum_{j=1}^{2^{n-k}} |\Delta_j X|^{4/3} > 0\Bigr\},$$

an event which is $\mathcal{F}(2^{-k})$-measurable. Notice that $F_{k+1} \subseteq F_k$; and by Brownian scaling, all the $F_k$ have the same probability, which is positive by (15). By the Blumenthal 0-1 law, $P(F_k) = 1$ for every $k$, and hence for each $t > 0$

(16) $P\bigl[\limsup_{n \to \infty} V_{4/3}^n(t) > 0\bigr] = 1.$

Now suppose that $X$ were of finite variation, so that there exist stopping times $T_k \uparrow 1$ such that $V_1(T_k) \equiv\,\uparrow\lim_{n \to \infty} V_1^n(T_k) \le k$. Choose $a > 1 > \alpha > 0$ such that $4a\alpha/3 = 1$, and let $b$ be the conjugate index to $a$ ($b^{-1} + a^{-1} = 1$). By Hölder's inequality,

$$V_{4/3}^n(T_k) \le \bigl(V_1^n(T_k)\bigr)^{1/a}\bigl(V_{4b(1-\alpha)/3}^n(T_k)\bigr)^{1/b},$$

and since $4b(1-\alpha)/3 > 4/3$, the second factor on the right-hand side goes to zero a.s. as $n \to \infty$. The first factor remains bounded as $n \to \infty$, by definition of $T_k$. Hence $V_{4/3}^n(T_k) \to 0$ as $n \to \infty$, which is only consistent with (16) if each $T_k$ is zero a.s., which is impossible since $T_k \uparrow 1$.

References

[1] L.C.G. ROGERS and J.B. WALSH. Local time and stochastic area integrals. To appear in Ann. Probab.

[2] L.C.G. ROGERS and J.B. WALSH. The intrinsic local time sheet of Brown­ian motion. Submitted to Probab. Th. Rel. Fields.

[3] L.C.G. ROGERS and D. WILLIAMS. Diffusions, Markov Processes and Mar­tingales, Vol.2. Wiley, Chichester, 1987.

[4] J.B. WALSH. Stochastic integrat ion with respect to local time. Seminar on Stochastic Processes 1982, pp. 237-302. Birkhiiuser, Boston, 1983.

L.C.G. Rogers, Statistical Laboratory, 16 Mill Lane, Cambridge CB2 1SB, GREAT BRITAIN

J.B. Walsh, Department of Mathematics, University of British Columbia, Vancouver, B.C. V6T 1Y4, CANADA


Self-Intersections of Stable Processes in the Plane: Local Times and Limit Theorems

JAY S. ROSEN*

1. Introduction

$X_t$ will denote the symmetric stable process of index $\beta > 1$ in $\mathbb{R}^2$, with transition density $p_t(x)$ and $\lambda$-potential

$$G_\lambda(x) = \int_0^\infty e^{-\lambda t}p_t(x)\,dt.$$

We recall that

(1.1) $G_0(x) = \dfrac{\Gamma(1-\beta/2)}{2^\beta\pi\,\Gamma(\beta/2)}\,\dfrac{1}{|x|^{2-\beta}}.$

To study the $k$-fold self-intersections of $X$ we will attempt to give meaning to the formal expression

(1.2) $\displaystyle\int\cdots\int_{0 \le t_1 \le \cdots \le t_k \le t}\delta(X_{t_2} - X_{t_1})\cdots\delta(X_{t_k} - X_{t_{k-1}})\,dt_1\cdots dt_k.$

Let $f \ge 0$ be a continuous function supported in the unit disc, and set

$$f_\varepsilon(x) = \varepsilon^{-2}f(x/\varepsilon).$$

If we think of $f_\varepsilon$ as an approximate $\delta$-function, we are led to consider

*This research supported in part by NSF DMS-8802288


(1.3) $\alpha_{k,\varepsilon}(t) = \displaystyle\int\cdots\int_{0 \le t_1 \le \cdots \le t_k \le t} dt_1\prod_{i=2}^k f_\varepsilon(X_{t_i} - X_{t_{i-1}})\,dt_i$

as an approximation to (1.2).

As $\varepsilon \to 0$, $\alpha_{k,\varepsilon}(t)$ will diverge (due to the contributions near the 'diagonals' $\{t_i = t_j\}$). To get a non-trivial limit we must 'renormalize', which in our case means subtracting from $\alpha_{k,\varepsilon}(t)$ terms involving lower order intersections. Thus, we define the approximate renormalized self-intersection local time,

(1.4) $\gamma_{k,\varepsilon}(t) = \displaystyle\sum_{j=1}^k(-h_\varepsilon)^{k-j}\binom{k-1}{j-1}\alpha_{j,\varepsilon}(t) = \int\cdots\int_{0 \le t_1 \le \cdots \le t_k \le t} dt_1\prod_{i=2}^k\bigl[f_\varepsilon(X_{t_i} - X_{t_{i-1}})\,dt_i - h_\varepsilon\,\delta_{t_{i-1}}(dt_i)\bigr],$

where

(1.5) $h_\varepsilon = \displaystyle\int f_\varepsilon(x)G_0(x)\,d^2x = \varepsilon^{\beta-2}\int G_0(x)f(x)\,d^2x.$

Note that $\gamma_{1,\varepsilon}(t) = t$.
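The $\varepsilon$-scaling of $h_\varepsilon$ in (1.5) is immediate from the homogeneity of $G_0$ in (1.1); the following numerical sketch (our choice of $\beta = 3/2$ and of a radial bump $f$; the constant in $G_0$ is dropped, since only the $\varepsilon$-dependence is at stake) confirms $h_\varepsilon \propto \varepsilon^{\beta-2}$:

```python
import numpy as np

beta = 1.5  # stable index, 1 < beta < 2

def h(eps, n=200_000):
    # h_eps = \int f_eps(x) G_0(x) d^2x with f_eps(x) = eps^{-2} f(x/eps),
    # f a radial bump supported in the unit disc, G_0(x) ~ |x|^{-(2-beta)}.
    # In polar coordinates: 2*pi * \int_0^eps eps^{-2} f(r/eps) r^{beta-2} r dr.
    r = (np.arange(n) + 0.5) * (eps / n)
    f = (1.0 - (r / eps) ** 2) ** 2          # continuous bump on the disc
    return 2.0 * np.pi * np.sum(eps ** -2 * f * r ** (beta - 1.0)) * (eps / n)

# homogeneity gives h(eps) = eps^{beta-2} * const, so h(eps)/h(2 eps) = 2^{2-beta}
ratio = h(0.1) / h(0.2)
assert abs(ratio - 2.0 ** (2.0 - beta)) < 1e-3
```

In particular $h_\varepsilon \to \infty$ as $\varepsilon \to 0$ since $\beta < 2$, which is why the subtraction in (1.4) is a genuine renormalization.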

Following Dynkin [1988B], to reduce our analysis to manageable proportions, rather than study $\gamma_{k,\varepsilon}(t)$ for fixed $t$, we study $\gamma_{k,\varepsilon}(\zeta)$, where $\zeta$ is an independent exponential random variable:

(1.6) $P(\zeta > t) = e^{-\lambda t}.$

We will find that $\gamma_{k,\varepsilon}(\zeta)$ converges, as $\varepsilon \to 0$, if and only if $\beta$ is sufficiently large. We recall that $X$ has $k$-fold self-intersections if and only if $k(2-\beta) < 2$.
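The two conditions — existence of $k$-fold self-intersections, $k(2-\beta) < 2$, and the $L^2$-convergence condition of Theorem 1 below, $(2k-1)(2-\beta) < 2$ — can be rewritten as lower thresholds on $\beta$; a small sketch of the arithmetic (ours):

```python
# k-fold self-intersections exist  iff  k(2 - beta) < 2      iff  beta > 2 - 2/k
# Theorem 1 (L^2 convergence) asks  (2k-1)(2 - beta) < 2     iff  beta > 2 - 2/(2k-1)
def existence_threshold(k):
    return 2.0 - 2.0 / k

def l2_threshold(k):
    return 2.0 - 2.0 / (2 * k - 1)

for k in (2, 3, 4):
    # Theorem 1 requires strictly more than bare existence of k-fold points
    assert existence_threshold(k) < l2_threshold(k) < 2.0

assert existence_threshold(2) == 1.0
assert abs(l2_threshold(2) - 4.0 / 3.0) < 1e-12
```

The gap between the two thresholds is exactly where the limit theorems (Theorems 2 and 3 below) operate.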


Theorem 1: If $(2k-1)(2-\beta) < 2$, then $\gamma_{k,\varepsilon}(\zeta)$ converges in $L^2$ to a non-trivial random variable, denoted by $\gamma_k(\zeta)$. Moreover, we have

$$\|\gamma_{k,\varepsilon}(\zeta) - \gamma_k(\zeta)\|_2 \le c\,\varepsilon^{\alpha/2},$$

where

$$\alpha = 2 - (2k-1)(2-\beta) > 0.$$

Aside from the intrinsic interest of $\gamma_k(\zeta)$ as a measure of $k$-fold intersections, we hope to show in future work that $\gamma_k(\zeta)$ arises naturally in the asymptotic expansion for the area of the 'stable sausage'

$$S_\varepsilon = \{x \in \mathbb{R}^2 \mid \inf_{0 \le s \le \zeta}\|X_s - x\| \le \varepsilon\},$$

generalizing the work of Le Gall [1988] for Brownian motion. We also note our previous work involving a different form of renormalization, Rosen [1986]. The simplifications arising from the present form of renormalization will be most helpful in what follows.

When the condition of Theorem 1 is not satisfied, $\gamma_{k,\varepsilon}(\zeta)$ will not converge in $L^2$. Instead, appropriately normalized, we get a central limit type theorem involving $L$, a random variable with density $\tfrac12 e^{-|x|}$ (known as Laplace's first law).

Theorem 2: If $(2k-1)(2-\beta) = 2$, then

$$\frac{\gamma_{k,\varepsilon}(\zeta)}{\sqrt{\lg(1/\varepsilon)}} \xrightarrow{\ (\mathrm{dist.})\ } \sqrt{c(\beta,k)}\,L,$$

where

$$c(\beta,k) = \frac{2\pi}{\lambda}\Bigl[\frac{\Gamma(1-\beta/2)}{2^\beta\pi\,\Gamma(\beta/2)}\Bigr]^{2k-1}.$$


Remark: (i) Compare (1.1).

(ii) If $B_t$ denotes a real Brownian motion, then $B_\zeta$ and $\frac{1}{\sqrt{2\lambda}}L$ have the same law. This provides a conceptual link between Theorem 2 and Rosen [1988], Yor [1985].
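Remark (ii) can be checked by simulation: with $\zeta$ exponential of rate $\lambda$, $B_\zeta \overset{d}{=} \sqrt{\zeta}\,N$ for $N$ standard normal, and both $B_\zeta$ and $L/\sqrt{2\lambda}$ have mean absolute value $1/\sqrt{2\lambda}$ (since $E|L| = 1$). A Monte Carlo sketch with our choice of parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, m = 1.0, 200_000

# B_zeta: Brownian motion at an independent Exp(lam) time
zeta = rng.exponential(1.0 / lam, size=m)
b_zeta = np.sqrt(zeta) * rng.normal(size=m)

# L has density (1/2) e^{-|x|} (Laplace); the claim is B_zeta =_d L / sqrt(2 lam)
ell = rng.laplace(0.0, 1.0, size=m) / np.sqrt(2.0 * lam)

# both samples should have mean absolute value 1/sqrt(2 lam)
target = 1.0 / np.sqrt(2.0 * lam)
assert abs(np.mean(np.abs(b_zeta)) - target) < 0.02
assert abs(np.mean(np.abs(ell)) - target) < 0.02
```

Matching a single moment is of course not a proof of equality in law; the densities can be compared analytically via $\int_0^\infty \lambda e^{-\lambda t}p_t(x)\,dt$.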

Theorem 3: If $(2k-1)(2-\beta) > 2$ but $(2(k-1)-1)(2-\beta) < 2$, then

$$\varepsilon^{\alpha/2}\gamma_{k,\varepsilon}(\zeta) \xrightarrow{\ (\mathrm{dist.})\ } \sqrt{c(\beta,k)}\,L,$$

where $\alpha = (2k-1)(2-\beta) - 2 > 0$ and $c(\beta,k)$ is an explicit constant.

Remark: In the proof of Theorem 3 we will find that

$$c(\beta,k) = \lim_{\varepsilon \to 0}\frac{\varepsilon^\alpha}{2}E\bigl(\gamma_{k,\varepsilon}^2(\zeta)\bigr),$$

and we will give an explicit formula for $c(\beta,k)$. For more information on self-intersection local times see the survey of Dynkin [1988A] and the references therein.

2. Preliminaries

We have formulated our theorems in terms of $\gamma_{k,\varepsilon}(t)$, an expression which does not involve $\lambda$, the parameter of the exponential time $\zeta$. In our proofs it will be more convenient to work with

(2.1) $\Gamma_{k,\varepsilon}(t) = \displaystyle\sum_{j=1}^k(-H_\varepsilon)^{k-j}\binom{k-1}{j-1}\alpha_{j,\varepsilon}(t) = \int\cdots\int_{0 \le t_1 \le \cdots \le t_k \le t} dt_1\prod_{i=2}^k\bigl[f_\varepsilon(X_{t_i} - X_{t_{i-1}})\,dt_i - H_\varepsilon\,\delta_{t_{i-1}}(dt_i)\bigr],$

which differs from $\gamma_{k,\varepsilon}(t)$, (1.4), in that $h_\varepsilon = \int f_\varepsilon(x)G_0(x)\,dx$ is replaced by


(2.2) $H_\varepsilon = \displaystyle\int f_\varepsilon(x)G_\lambda(x)\,d^2x.$

It is easily checked that

(2.3) $\gamma_{k,\varepsilon}(t) = \displaystyle\sum_{j=1}^k\bigl[-(h_\varepsilon - H_\varepsilon)\bigr]^{k-j}\binom{k-1}{j-1}\Gamma_{j,\varepsilon}(t).$

This expression will allow us to derive results about the $\gamma$'s from results on the $\Gamma$'s.

The main point of this section is to derive a useful expression for

(2.4) $E\Bigl(\displaystyle\prod_{j=1}^n\Gamma_{k,\varepsilon_j}(\zeta)\Bigr) = \sum_D I(D),$

where

(2.5) $I(D) = E\Bigl[\displaystyle\int\cdots\int_D\prod_{j=1}^n dt_1^j\prod_{i=2}^k\bigl[f_{\varepsilon_j}(X_{t_i^j} - X_{t_{i-1}^j})\,dt_i^j - H_{\varepsilon_j}\,\delta_{t_{i-1}^j}(dt_i^j)\bigr]\Bigr]$

and $D$ runs over the set of orderings of the $nk+1$ points $0, t_i^j$; $1 \le i \le k$, $1 \le j \le n$; such that $0 \le t_1^j \le t_2^j \le \cdots \le t_k^j$ for all $j$.

Fix $D$. We call a set $S$ of $t$'s elementary, relative to $D$, if

(2.6) $S = \{t_i^j, t_{i+1}^j, \ldots, t_{i+l}^j\}$

and $S$ satisfies:

a) $t_i^j \le t_{i+1}^j \le \cdots \le t_{i+l}^j$ are consecutive times of the same factor $j$;

b) no other $t$'s come between $t_i^j$ and $t_{i+l}^j$ in $D$ (except the members of $S$ themselves);

c) $S$ is maximal, in the sense that the $t$ preceding $t_i^j$ in $D$ is not $t_{i-1}^j$.

With such an elementary sequence $S$, (2.6), we associate a function $H_S(Y)$ of the $nk$ variables $Y = \{y_i^j;\ 1 \le i \le k,\ 1 \le j \le n\}$ by the formula

(2.7) $H_S(Y) = \Delta^l_{a_1,\ldots,a_l}G_\lambda(z),$

where $z$ and the increments $a_1 = y_{i+1}^j, \ldots, a_l = y_{i+l}^j$ are determined by $D$. Here

$$\Delta^l_{a_1,\ldots,a_l}F = \Delta_{a_l}\cdots\Delta_{a_1}F \quad\text{and}\quad \Delta_a F(x) = F(x+a) - F(x).$$

In particular, if $S = \{t_i^j, t_{i+1}^j\}$ has only two elements, then the above reduces to

(2.8) $H_S(Y) = \Delta_{y_{i+1}^j}G_\lambda(z) = G_\lambda(z + y_{i+1}^j) - G_\lambda(z).$

Let $\mathcal{E}(D)$ denote the elementary sequences in $D$. Our formula for $I(D)$ is

(2.9) $I(D) = \displaystyle\int\cdots\int\Bigl[\prod_{j=1}^n\prod_{i=2}^k f_{\varepsilon_j}(y_i^j)\Bigr]\prod_{S \in \mathcal{E}(D)} H_S(Y)\,dY,$

as is easily checked, using (2.2).

The following lemma, proven in Section 7, is basic.

Lemma 1: Let $\beta > 1$; then

(2.10) $0 \le G_\lambda(z) \le c\bigl[G_0(z) \wedge \tfrac{1}{|z|^2}\bigr].$

If $|z| \ge 2l\varepsilon$, then

(2.11) $\displaystyle\sup_{|a_i| \le \varepsilon}\bigl|\Delta^l_{a_1,\ldots,a_l}G_\lambda(z)\bigr| \le c\Bigl[\frac{\varepsilon}{|z|}\Bigr]^{(\beta-1)l}G_0(z)R(z)$


(2.12) $\le c\,G_0^{l+1}(z)\Bigl[\dfrac{\varepsilon}{|z|}\Bigr]^{(\beta-1)l}R(z),$

where $R(z)$ is a bounded, monotone decreasing, integrable function (in fact we can take $R(z) = \frac{1}{1+|z|^{2\beta}}$).

If $S \in \mathcal{E}(D)$ has the form (2.6), we say that $S$ has length $l$, and write $l(S) = l$. For this $S$, (2.7) and Lemma 1 imply that

(2.13) $|H_S(Y)| \le c\,G_0(Z)^{l(S)+1}\Bigl[\dfrac{\varepsilon}{|Z|}\Bigr]^{(\beta-1)l(S)}R(Z)$

whenever the argument $Z$ of $H_S$ satisfies $|Z| \ge 2l\varepsilon$.

3. Proof of Theorem 1

From now on, $\lambda$ is fixed, and $G(x)$ without a subscript will refer to $G_\lambda(x)$. Similarly, we write $\gamma_{k,\varepsilon}$ for $\gamma_{k,\varepsilon}(\zeta)$, etc.

We first show that to prove Theorem 1, it suffices to prove the following analogue for $\Gamma$.

Proposition 1: If $(2-\beta)(2k-1) < 2$, then $\Gamma_{k,\varepsilon}$ converges in $L^2$ to a non-trivial random variable, denoted by $\Gamma_k$. Moreover, we have

(3.1) $\|\Gamma_{k,\varepsilon} - \Gamma_k\|_2 \le c\,\varepsilon^{\alpha/2}$, where $\alpha = 2 - (2k-1)(2-\beta) > 0$.

To see that Proposition 1 implies Theorem 1, define

(3.2) $H(x) \equiv G_0(x) - G_\lambda(x).$


Since $\beta > 1$, $H(x)$ is continuous and bounded, and

(3.3) $|h_\varepsilon - H_\varepsilon - H(0)| = \Bigl|\displaystyle\int f_\varepsilon(x)[H(x) - H(0)]\,dx\Bigr|.$

Thus,

(3.4) $|h_\varepsilon - H_\varepsilon - H(0)| \le \begin{cases} c\,\varepsilon, & \text{if } \beta > 3/2,\\ c\,\varepsilon^{2\beta-2-\delta}, & \text{if } \beta \le 3/2,\end{cases}$ for any $\delta > 0$.

We write

$$2\beta - 2 - \delta = 2 - 2(2-\beta) - \delta = \tfrac12\bigl(2 - (2k-1)(2-\beta)\bigr) + 1 - \delta + (k - \tfrac52)(2-\beta) > \tfrac12\bigl(2 - (2k-1)(2-\beta)\bigr),$$

since $k \ge 2$, $2-\beta < 1$, and $\delta > 0$ can be chosen small. Since, obviously, $1 > \tfrac12(2 - (2k-1)(2-\beta))$, (3.4) gives

(3.5) $|h_\varepsilon - H_\varepsilon - H(0)| \le c\,\varepsilon^{(2-(2k-1)(2-\beta))/2},$

so that (2.3) and Proposition 1 now imply Theorem 1, with

(3.6) $\gamma_k \equiv \displaystyle\sum_{j=1}^k(-H(0))^{k-j}\binom{k-1}{j-1}\Gamma_j.$

Proposition 1 will follow from

Proposition 2: If $(2-\beta)(2k-1) < 2$, then for


$$0 < \varepsilon \le \bar\varepsilon \le 2\varepsilon < 1$$

we have

(3.7) $\|\Gamma_{k,\varepsilon} - \Gamma_{k,\bar\varepsilon}\|_2 \le c\,\varepsilon^{\alpha/2},$

where $\alpha = 2 - (2k-1)(2-\beta)$.

For, assume Proposition 2, and let $0 < \varepsilon < \bar\varepsilon < 1$ be given. Choose $n \ge 0$ such that

(3.8) $\dfrac{\bar\varepsilon}{2^{n+1}} < \varepsilon \le \dfrac{\bar\varepsilon}{2^n}.$

Then by (3.7),

$$\|\Gamma_{k,\varepsilon} - \Gamma_{k,\bar\varepsilon}\|_2 \le \sum_{i=0}^{n-1}\|\Gamma_{k,\bar\varepsilon/2^i} - \Gamma_{k,\bar\varepsilon/2^{i+1}}\|_2 + \|\Gamma_{k,\bar\varepsilon/2^n} - \Gamma_{k,\varepsilon}\|_2 \le c\,\bar\varepsilon^{\alpha/2}\sum_{i=0}^n 2^{-i\alpha/2} \le c\,\bar\varepsilon^{\alpha/2}.$$

This shows the $L^2$ convergence of $\Gamma_{k,\varepsilon}$, and also establishes (3.1).

Proof of Proposition 2: According to Section 2,

(3.9) $E\bigl((\Gamma_{k,\varepsilon} - \Gamma_{k,\bar\varepsilon})^2\bigr) = \displaystyle\sum_D I(D),$

where now each factor carries either $\varepsilon$ or $\bar\varepsilon$, and we write

(3.10) $F_{\varepsilon,\bar\varepsilon}(y^j) = \displaystyle\prod_{i=2}^k f_{\varepsilon_j}(y_i^j), \qquad \varepsilon_j \in \{\varepsilon, \bar\varepsilon\}.$

Fix $D$.


The ordering $D$, in a natural way, induces an ordering on $Y^1, Y^2$: if $t_i^1 \le t_{i'}^2$, we will say that $Y_i^1$ comes before $Y_{i'}^2$. This induces an order on $\mathcal{E}(D)$. We may assume that the first element in $D$ is $t_1^1$; hence our first element of $\mathcal{E}(D)$ is $\{0, t_1^1\}$, giving rise to the factor $G(Y_1^1)$. Let $S = \{t_1^2, \ldots\}$ be the next element in $\mathcal{E}(D)$, and let

$$Z = Y_1^2 - Y_1^1.$$

We first show that the contribution to $I(D)$ from the region $\{|Z| \le 4k\varepsilon\}$ is $O(\varepsilon^\alpha)$.

To see this, we first integrate the $Y$'s in reverse order; we start with the last $Y$ and integrate successively until we reach $Y_1^2$, using the bound

(3.11) $\displaystyle\sup_x\int f_\varepsilon(y)G(x-y)\,d^2y \le c\int|\hat f(\varepsilon p)|\frac{1}{\lambda + |p|^\beta}\,d^2p \le c\,\varepsilon^{-(2-\beta)}.$

For the $Y_1^2$ integral we use

(3.12) $\displaystyle\int_{|Z| \le 4k\varepsilon} G(Y_1^2 - Y_1^1)\,dY_1^2 \le c\,\varepsilon^\beta.$


The remaining $Y_k^1, Y_{k-1}^1, \ldots, Y_2^1$ integrals are handled using (3.11), and finally $\int G(y_1^1)\,dy_1^1 = \frac1\lambda$.

Since there are $2k-1$ $G$ factors in (3.9), we find that the contribution from the region $\{|Z| \le 4k\varepsilon\}$ is

$$O\bigl(\varepsilon^{\beta-(2k-2)(2-\beta)}\bigr) = O(\varepsilon^\alpha).$$

Thus, for the remainder of the proof we can assume that $|Z| \ge 4k\varepsilon$. In view of (2.13), we can bound the integral $I(D)$ over $|Z| \ge 4k\varepsilon$ by

(3.13) $\displaystyle\int_{|Z| \ge 4k\varepsilon} G_0^{2k-1}(Z)\Bigl[\frac{\varepsilon}{|Z|}\Bigr]^{(\beta-1)l(D)}R(Z)\,dZ,$

where $l(D) = \sum_{S \in \mathcal{E}(D)} l(S)$. If $l(D) \ge 2$, we can bound (3.13) by replacing $(\beta-1)l(D)$ with $2(\beta-1) = 2 - 2(2-\beta)$, giving

(3.14) $\displaystyle\int_{|Z| \ge 4k\varepsilon}\varepsilon^{2-2(2-\beta)}\frac{1}{|Z|^{2+(2k-3)(2-\beta)}}\,dZ \le c\,\varepsilon^{2-2(2-\beta)}\varepsilon^{-(2k-3)(2-\beta)} = c\,\varepsilon^\alpha,$

since $k \ge 2$.

We can thus assume that $l(D) \le 1$. If $l(D) = 0$, $D$ must be the ordering

(3.15) $D^* = t_1^1 \le t_1^2 \le t_2^1 \le t_2^2 \le t_3^1 \le \cdots \le t_k^2,$

and then

(3.16) $I(D^*) = \displaystyle\int F_{\varepsilon,\bar\varepsilon}(y^1)F_{\varepsilon,\bar\varepsilon}(y^2)\prod_{i=1}^k G(Y_i^1 - Y_{i-1}^2)\,G(Y_i^2 - Y_i^1)\,dY$

(with $Y_0^2 \equiv 0$).


We note that

$$G(Z+a+b) = G(Z) + \Delta_a G(Z) + \Delta_b G(Z) + \Delta^2_{a,b}G(Z),$$

and we use this to expand $G(Y_i^2 - Y_i^1)$, with $Z = Y_1^2 - Y_1^1$ as before and

$$a = Y_i^2 - Y_1^2 = \sum_{j=2}^i y_j^2, \qquad b = Y_1^1 - Y_i^1 = -\sum_{j=2}^i y_j^1.$$

We can thus write the product in $I(D^*)$ as a sum of monomials in $G(Z)$, $\Delta_a G(Z)$ and $\Delta^2_{a,b}G(Z)$. If any monomial contains either a $\Delta^2 G$ factor, or two $\Delta G$ factors, then we can use (2.11), in a manner similar to (3.13), (3.14), to show that the integral over $|Z| \ge 4k\varepsilon$ is $O(\varepsilon^\alpha)$.

But, because of the factor $F_{\varepsilon,\bar\varepsilon}(y^1)F_{\varepsilon,\bar\varepsilon}(y^2)$ in (3.16), it is clear that the integral will vanish if our monomial is of the form $G^{2k-1}(Z)$ or $G^{2k-1}(Z)\Delta_a G(Z)$.

A similar analysis applies to the case $l(D) = 1$, completing the proof of Proposition 2, hence of Theorem 1.

4. The second moment

In this section we calculate the asymptotics of $E(\Gamma_{k,\varepsilon}^2)$ as $\varepsilon \to 0$. If $(2k-1)(2-\beta) < 2$, then the last section shows that

(4.1) $E(\Gamma_{k,\varepsilon}^2) \to \dfrac2\lambda\displaystyle\int G^{2k-1}(z)\,d^2z.$

Consider now the case $(2k-1)(2-\beta) = 2$, so that $\alpha = 0$. It is easily checked that all estimates of the previous section which were $O(\varepsilon^\alpha)$ also hold in this case, i.e. are $O(1)$, and hence

(4.2) $E(\Gamma_{k,\varepsilon}^2) = \dfrac2\lambda\displaystyle\int_{4k\varepsilon \le |z| \le 1} G^{2k-1}(z)\,d^2z + O(1),$

since $G(z)$ is bounded and integrable for $|z| \ge 1$.

As in (3.2), we write

(4.3) $G(z) = G_0(z) - H(z)$

with $H$ bounded, and we find immediately that (using (1.1))

(4.4) $E(\Gamma_{k,\varepsilon}^2) = \dfrac2\lambda\displaystyle\int_{4k\varepsilon \le |z| \le 1} G_0^{2k-1}(z)\,d^2z + O(1) = 2c(\beta,k)\lg(1/\varepsilon) + O(1),$

where $c(\beta,k) = \dfrac{2\pi}{\lambda}\Bigl[\dfrac{\Gamma(1-\beta/2)}{2^\beta\pi\,\Gamma(\beta/2)}\Bigr]^{2k-1}$ as in Theorem 2.
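The logarithmic divergence in (4.4) is just the following computation, using (1.1) and the criticality relation $(2k-1)(2-\beta) = 2$ (our sketch, with $c_0$ denoting the constant in (1.1)):

```latex
% with G_0(z) = c_0 |z|^{\beta-2} and (2k-1)(2-\beta) = 2, polar coordinates give
\int_{4k\varepsilon \le |z| \le 1} G_0^{2k-1}(z)\,d^2z
  = c_0^{2k-1}\int_{4k\varepsilon \le |z| \le 1} |z|^{-2}\,d^2z
  = c_0^{2k-1}\, 2\pi \int_{4k\varepsilon}^{1}\frac{dr}{r}
  = c_0^{2k-1}\, 2\pi \,\lg(1/\varepsilon) + O(1).
```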

We next consider the case where $(2k-1)(2-\beta) > 2$. Here we will see that all orderings $D$ contribute a term of order $\varepsilon^{-\alpha}$ (where now $\alpha = (2k-1)(2-\beta) - 2 > 0$), plus terms of lower order.

Consider a fixed ordering $D$ as before, and

(4.5) $I(D) = \displaystyle\int\cdots\int F_\varepsilon(y^1)F_\varepsilon(y^2)\prod_{S \in \mathcal{E}(D)} H_S(Y)\,dY,$

with

(4.6) $F_\varepsilon(y^j) = \displaystyle\prod_{i=2}^k f_\varepsilon(y_i^j).$

Assume for definiteness, as in Section 3, that the first element in $\mathcal{E}(D)$ is $\{0, t_1^1\}$, so that we have a factor $G(Y_1^1)$ in (4.5). We change variables


$Y_i^1, Y_i^2 \mapsto X_i$, $i = 1, \ldots, 2k$, where $X_i$ is the argument of the $i$th $G$ factor in $I(D)$: if the $i$th interval in $D = \{0 < t_1^1 < \cdots\}$ is $t_j^m < t_{j'}^{m'}$, then $X_i = Y_{j'}^{m'} - Y_j^m$.

We integrate out $dX_1 = dY_1^1$ and write

(4.7) $I(D) = \dfrac1\lambda\displaystyle\int\cdots\int F_\varepsilon(y^1)F_\varepsilon(y^2)\prod_{S \in \bar{\mathcal{E}}(D)} H_S(Y)\,dX_2\cdots dX_{2k},$

where $\bar{\mathcal{E}}(D)$ is obtained from $\mathcal{E}(D)$ by removing the first sequence, $\{0, t_1^1\}$.

to rewrite (4.7) as the sum of many terms. One term is

(4.8) -î- ~FE(y~)FE(Y:) ~ HS(Y) dX2 ···dX2k

SEE(D) where H~ is defined by replacing each G in HS with Go . The

other terms arising from (4.7) differ from (4.8) in that at

least one G has been replaced by H. Ve first deal with

(4.8), which will turn out to be the dominant term.

Ve scale in (4.8), and obtain

(4.9) ~ -î- ~F(y~)F(Y:) ~ HS(Y)dX2 ···dX2k E

where now SEE(D)

k

F(y) = ~ f(Yi). i=2

Let us show that the integral in (4.9) converges. If

the first sequence in E(D) is {ti,t~, ... t},t~}, set 210 Z = Xt +1 = Y1 - Yt . If IZI ~ 4k, then by the HS analogue

of (2.13) we can bound our integral by

c ~ G~k-l(z) dz < w.

IZI~4k If, on the other hand, IZI ~ 4k. then alI IXil < 8k, and

Page 300: Seminar on Stochastic Processes, 1990

Self-Intersections of Stable Processes

using ∫_{|x| ≤ c} G_0(x) dx < ∞ we can bound our integral by integrating in reverse order dX_{2k}, …, dX_2.

Next, consider a term arising from the expansion of (4.7), in which at least one of the G_0 factors of (4.8) has been replaced by H(·).

If |Z| ≥ 4kε, we first bound any H(·) factor by a constant, and then scale. We obtain an integral which can be bounded as above (since now |Z| ≥ 4k), multiplied by 1/ε^λ with λ < α.

If |Z| ≤ 4kε, then by (7.10) and (7.12) we find that for any ℓ, including ℓ = 0, and |a_i| ≤ ε,

(4.10)  |Δ^ℓ_{a_1,…,a_ℓ} H(x)| ≤ c (|a_1| ⋯ |a_ℓ| / |x|^ℓ) [ε/|x|]^{δ(β−1)},  |x| ≥ 2ℓε,

for any 0 ≤ δ ≤ 1. Scaling with these bounds gives a factor 1/ε^λ with λ < α if δ < 1, and an integral which can be bounded as long as δ is chosen close enough to 1 so that (2k−1)(2−β)δ > 2.

Thus we finally have

(4.11)  E[γ_{k,ε}] = (1/ε^α) Σ_D ∫ ⋯ ∫ F(y^1) F(y^2) ∏_{S ∈ Ē(D)} H_S^0(Y) dX_2 ⋯ dX_{2k} + o(1/ε^α).



5. Proof of Theorem 2

We proceed by the method of moments. Since

(5.1)  E(L^{2n}) = (2n)!,  E(L^{2n+1}) = 0,

it suffices to show that

(5.2)  E[(γ_{k,ε} / √(lg(1/ε)))^{2n}] → (2n)! [c(β,k)]^n,  E[(γ_{k,ε} / √(lg(1/ε)))^{2n+1}] → 0,

in order to get

γ_{k,ε} / √(lg(1/ε)) → (dist) √(c(β,k)) L,

which then implies Theorem 2, by (2.3) and Theorem 1.
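The moments in (5.1) are those of the two-sided exponential (Laplace) law, and they determine the law uniquely; the following check is ours, not the paper's:

```latex
% If L has density \tfrac{1}{2} e^{-|x|} on \mathbb{R}, then by symmetry
E(L^{2n+1}) = 0,
\qquad
E(L^{2n}) = \int_{-\infty}^{\infty} x^{2n}\, \tfrac{1}{2} e^{-|x|}\, dx
          = \int_{0}^{\infty} x^{2n} e^{-x}\, dx = (2n)! .
% Carleman's condition holds: \sum_n \bigl((2n)!\bigr)^{-1/(2n)} = \infty,
% so these moments determine the law uniquely.
```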

We recall from section 2 that

(5.3)  E(γ_{k,ε}^m) = Σ_D ∫ ⋯ ∫ [∏_{j=1}^{m} F_ε(y^j)] ∏_{S ∈ E(D)} H_S(Y) dY

where D runs over all orderings of {0, t_i^j, j = 1, …, m; i = 1, …, k}.

Let

(5.4)  U(D) = ∪_{j=1}^{m} [t_1^j, t_k^j].

U(D) naturally decomposes into the union of its components, U^1, U^2, …, U^J. If, say,

U^i = ∪_{l=1}^{p} [t_1^{j_l}, t_k^{j_l}],

then we say that U^i has height p, and denote by D^i the ordering induced on

{t_n^{j_l}, l = 1, …, p; n = 1, …, k}

by D. By translation invariance we find that



(5.5)  I(D) = ∏_{i=1}^{J} I(D^i).

It is clear from this that if any component of U(D) has height 1, then I(D) = 0. Furthermore, from section 4 we know that if D^i has height 2, then

I(D^i) = c(β,k) lg(1/ε) + O(1) if D^i = D^* or D^{**}, and I(D^i) = O(1) otherwise,

where D^* is given by (3.15), and D^{**} is obtained from D^* by permuting t^1 with t^2.

If m = 2n and U(D) has n components of height 2, then the above allows us to compute I(D), and since there are (2n)! ways to permute the t^j's, we see that the contribution to (5.2) from orderings D with n components of height 2 is

(2n)! [c(β,k)]^n (lg(1/ε))^n + O((lg(1/ε))^{n−1}).

To complete the proof of (5.2) it suffices to show that if U(D) is connected and of height n > 2, then

(5.6)  I(D) = o((lg(1/ε))^{n/2}).

We will develop a three-step procedure to prove (5.6).

We will refer to Y^1, Y^2, …, Y^n as n letters, and to Y_j^i as the j'th component of the letter Y^i. If S ∈ E(D) is of the form (2.6), i.e.,

(5.7)  S = {t_i^j, …, t_{i+l}^j, t_{i'}^{j'}},

and if l > 0, then H_S(Y), see (2.7), contains factors G(y_{i+1}^j) ⋯ G(y_{i+l}^j), and we say that the letter Y^j has l isolated G-factors. This terminology refers to the fact that in these factors Y^j appears alone, without any other



letter. Let

I = {i | Y^i has isolated G-factors}.

It is the presence of isolated G-factors which complicates the proof of (5.6), and necessitates the three-step procedure which we soon describe.

For each S ∈ E(D) of the form (5.7) (even if l = 0) we write

(5.8)  H_S(Y) = H_S(Y) [1{|Y_1^j − Y_1^{j'}| ≤ 4nε} + 1{|Y_1^j − Y_1^{j'}| > 4nε}]

and expand the product in (5.3) into a sum of many terms. We work with one fixed term. We then say that Y^j and Y^{j'} are G-close or G-separated depending on whether the first or second characteristic function in (5.8) appears in our integral. If Y^j, Y^{j'} never appear together in any H_S(Y), then they are neither G-close nor G-separated. (This determination of G-close, etc. is fixed at the onset, and is not amended during the proof.)

For ease of reference we spell out two simple lemmas.

Lemma 2: Let g_i(Z) ≥ 0 be monotone decreasing in |Z|. If

(5.9)  ∫_{|Z| ≥ η} ∏_{i=1}^{p} g_i(Z) d^n Z ≤ M(η),

then for any a_1, …, a_p

(5.10)  ∫_{{|Z−a_i| ≥ η, ∀i}} ∏_{i=1}^{p} g_i(Z−a_i) d^n Z ≤ p M(η).

Proof: The integral in (5.10) is bounded by

Σ_{j=1}^{p} ∫_{{|Z−a_j| = min_i |Z−a_i| ≥ η}} ∏_{i=1}^{p} g_i(Z−a_i) d^n Z ≤ Σ_{j=1}^{p} ∫_{|Z−a_j| ≥ η} ∏_{i=1}^{p} g_i(Z−a_j) d^n Z ≤ p M(η),

by (5.9), since on the j'th region |Z−a_i| ≥ |Z−a_j| for each i, and each g_i is decreasing. []

Lemma 3:

Proof: See the discussion about (3.11), (3.12). []

If S is of the form (5.7), and if Y^j, Y^{j'} are G-separated, we recall the bound of (2.13):

(5.12)  |H_S(Y)| ≤ c G_0^{f(S)+1}(Z) [ε/|Z|]^δ R(Z)

where Z = Y_1^{j'} − Y_1^j, and 0 ≤ δ ≤ (β−1)f(S) is at our disposal.

Let

(5.13)  I_0 = {i ∈ I | Y^i is not G-close to any Y^j, j ∈ I}

(5.14)  I_1 = I − I_0.

We briefly outline our three steps, and then return to

spell out the details. We integrate out one letter at a

time, in a manner which allows us to keep track of

potential problems.



Step 1: We integrate out Y^i, i ∈ I_0, using (5.12) when applicable.

Step 2: We integrate out the letters from I_1, using (5.11) whenever possible.

Step 3: We integrate the letters from I^c, i.e. letters without isolated G-factors. This is the most straightforward case.

Before spelling out the details, we can immediately recognize a potential problem. After integrating several letters, we may, inadvertently, have integrated out all G-factors containing some other letter, not yet integrated. Its integral might then diverge. To remedy this, before integrating each letter we carry out the following.

Preservation Step: Before integrating Y, we search for any two letters, say X, Z, with components which are separated only by components of Y. Thus we may have factors of the form

(5.15)  G[X − Y_i] G[Y_{i+1} − Y_i] ⋯ G[Y_{i+l} − Y_{i+l−1}] Δ^l_{Y_{i+1},…,Y_{i+l}} G(Y_i − Z)

(if (5.12) is not applied) or (if (5.12) is applied) of the form

(5.16)  G[X − Y_i] G_0^{f(S)+1}[Y_1 − Z_1] [ε/|Y_1 − Z_1|]^δ R[Y_1 − Z_1]

(We include the case X = 0, i = 1.)

In the case of (5.15), we write out Δ^l G as a sum of many terms, and focus on one of them, say

G[Y_i + Y_{j_1} + Y_{j_2} + ⋯ + Y_{j_p} − Z].

From (5.15) we select the factors



(5.17)  G[X − Y_i] G[Y_{j_1}] ⋯ G[Y_{j_p}] G[Y_i + Y_{j_1} + ⋯ + Y_{j_p} − Z].

Now

(5.18)  |X − Z| = |[X − Y_i] − Y_{j_1} − Y_{j_2} − ⋯ − Y_{j_p} + [Y_i + Y_{j_1} + ⋯ + Y_{j_p} − Z]|
        ≤ |X − Y_i| + |Y_{j_1}| + ⋯ + |Y_{j_p}| + |Y_i + Y_{j_1} + ⋯ + Y_{j_p} − Z|.

Hence |X − Z| is less than (p+2) times the maximum of the terms on the right-hand side of (5.18), so one of the factors in (5.17) can be bounded by a constant times G(X − Z).

If we have the form (5.16), then necessarily |Y_1 − Z_1| > 4nε. If |X − Z_1| ≤ 4nε, then we can bound

V(Y_1 − Z_1) = G_0(Y_1 − Z_1) [ε/|Y_1 − Z_1|]^δ R(Y_1 − Z_1)

by V(X − Z_1). Note that V(·) is integrable. If |X − Z_1| ≥ 4nε, then we use

so that

(5.20)

so that, as before, we can replace either the first factor in (5.16) by G(X − Z_1), or a factor V(Y_1 − Z_1) by V(X − Z_1).

Note that this step actually lowers the number of G-factors involving Y prior to integrating Y. After integrating Y, we find that we have not increased the number of G-factors involving X (or Z).

One way to think of this preservation step is to suppress all Y's, and 'link up' with G or V the remaining



letters which are now adjacent. (The case X = 0 is included.) The upshot is that we never lose any letters prior to their integration.

We finally remark that in (5.15), (5.16) we took our first factor to be G(X − Y_i). If this factor is actually V(X − Y_i) the same analysis pertains.

We now give the details of our three steps.

Step 1: We apply the bound (5.12) whenever S is of the form (5.7), with j ∈ I_0 having isolated G-intervals (i.e. l(S) ≠ 0) and |Y_1^j − Y_1^{j'}| ≥ 4εn. This is the only place we will apply (5.12). Note that (5.12) does not increase the total number of G-factors in our integral (we count both G and G_0), but may increase the number of G-factors containing Y^i. Let N_i denote this latter quantity. We claim that

(5.21)  Σ_{i ∈ I_0} N_i ≤ 2k|I_0|.

To see this, let l(i) denote the number of isolated G-factors containing Y^i in the original integral, i.e., prior to applying the bound (5.12). At that stage Y^i could not have appeared in more than 2k − l(i) G-factors. The effect of (5.12) is to replace certain of the l(i) isolated G-factors, each of which had contributed 1 to N_i and zero to any N_j, j ≠ i, by G-factors which contribute 1 to N_i and, at most, 1 to one other N_j. This proves (5.21).

If some N_i ≤ 2k−1, then as in section 4 the dY^i integral is bounded. For, since i ∈ I_0, Y^i has isolated G-factors; hence either it is close to some other letter,



in which case lemma 3 shows the integral to be O(1), or else we will have applied (5.12), in which case lemma 2, with δ > 0 small, will show our integral to be O(1) as seen in section 4. (But remember, we always apply the preservation step prior to integrating!)

We proceed in this manner, integrating all Y^i with N_i ≤ 2k−1 (after each integration we update the remaining N_j's).

If all remaining N_i ≥ 2k, then, since (5.21) still holds, we see that now all N_i = 2k. The analysis of (5.21), in fact, shows that in such a case isolated G-factors containing such Y^i must be contained in factors H_S(Y) containing a remaining Y^j, j ∈ I_0, to which (5.12) has been applied; in particular, |Y_1^i − Y_1^j| ≥ 4nε. In such a case we check that Y^i, Y^j cannot be contained together in all 2k factors, hence Y^i must be contained in at least one factor with another letter, say Y^{j'}. If the preservation step does not directly reduce the number of G-factors containing Y^i, then, since |Y_1^i − Y_1^j| ≥ 4nε, we can still bound one factor by V(Y_1^i − Y_1^j), by using the same approach as in the preservation step, arguing separately for |Y_1^i − Y_1^{j'}| ≤ 4nε or > 4nε.

In this manner we integrate out all letters Y^i, i ∈ I_0.

Step 2: I_1 is naturally partitioned into equivalence classes Q_1, …, Q_q, where i ∼ j if we can find a sequence i = i_1, i_2, i_3, …, i_l = j



with Y^{i_p} G-close to Y^{i_{p+1}}.

Consider Q_1. Choose a j ∈ Q_1 such that l(j) ≥ l(i) for all i ∈ Q_1. All Y^i, i ∈ Q_1, are close to Y^j in the sense that |Y_1^i − Y_1^j| ≤ 4n²ε. We then use lemma 3 to integrate, in any order, all Y^i, i ∈ Q_1, i ≠ j. Since Q_1 ⊂ I_1, we have l(i) ≥ 1, so that the contribution from the dY^i integral is at most

(5.22)  O(ε^{2 − (2k−l(i))(2−β)}) = O(ε^{(l(i)−1)(2−β)}).

The dY^j integral, which is done last, is at most

(5.23)  O(ε^{−l(j)(2−β)})

from the l(j) ≥ 1 isolated G-factors.
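The two exponents in (5.22) agree precisely because we are in the critical case of Theorem 2; the identity behind this (our check):

```latex
% Using the Theorem 2 hypothesis (2k-1)(2-\beta) = 2:
2 - (2k - l(i))(2-\beta)
  = 2 - (2k-1)(2-\beta) + (l(i)-1)(2-\beta)
  = (l(i)-1)(2-\beta).
```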

Combining (5.22) and (5.23) with l(j) ≥ l(i) ≥ 1, we see that the total contribution from Q_1 is O(1) unless either l(i) = 1 for all i ∈ Q_1, or, if some l(i) > 1, then necessarily Q_1 = {i, j} and l(i) = l(j). In the former case we can also integrate out all i ≠ j except for one, so in both cases we can reduce ourselves to Q_1 = {i, j}, l(i) = l(j) ≥ 1. We call such a pair a twin. Twins are close to each other, and we can assume they are close to no remaining letter (otherwise (5.23) can be improved to (5.22)). We leave such twins to step three.

We handle Q_2, …, Q_q similarly.

Step 3: We begin with the remaining letter, say Y^i, which appears at the extreme right. Because of this, Y^i appears in at most 2k−1 G-factors. If Y^i were part of a twin, then it has at most 2k − l(i) − 1 G-factors, as opposed to the 2k − l(i) assumed for (5.22). This controls the twin.

If Y^i is not part of a twin, then i ∈ I^c. If Y^i



appears in 2k−1 G-factors with Y^j, then the analysis of section 4 shows that the dY^i dY^j integral is at most O(lg(1/ε)).

If Y^i appears with two letters, we already know how to reduce the number of G-factors, so that the dY^i integral is bounded. We proceed in this manner until all letters are integrated.

This analysis shows that (5.6) holds unless I = ∅ and the rightmost letter has all G-factors in common with one other letter; but then these two letters form a component, contradicting the assumption that U(D) is connected of height > 2. This completes the proof of Theorem 2.

6. Proof of Theorem 3

Taking over the notation of section 5, it suffices to show that if U(D) is connected and of height n > 2, then

(6.1)  I(D) = o(ε^{−α/2})^n

where α = (2k−1)(2−β) − 2. The situation here is more complicated than that of

Theorem 2, since typically our integrals diverge and we must control the divergence. We make two major modifications. In (5.12) we now take δ = 0, and in applying the preservation step, or any other time we bound a factor such as G or V by factors not involving X in order to reduce the number of factors involving X to ≤ 2k−2, we only bound G^γ, V^γ where γ is close to, but not equal to, one. This will not significantly affect the



order of our X integral; but when we come to integrate the other letters, a situation which would have led to O(ε^{−α}) with γ = 1 will now lead to o(ε^{−α}). These modifications will be taken for granted in what follows.

As in the last section, we will find that we can associate a factor O(ε^{−α/2}) with each letter, while at least one letter will be associated with o(ε^{−α/2}). By the remarks in the previous paragraph, and as detailed in the sequel, this will occur if any factors associated with our letter were obtained through a preservation-like step.

We will assume that (2k−2)(2−β) > 2. The other cases are similar, but simpler.
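The bookkeeping behind the target bound is just multiplication of the per-letter factors; schematically (our summary):

```latex
% n letters; n-1 of them contribute O(\epsilon^{-\alpha/2}),
% and at least one contributes o(\epsilon^{-\alpha/2}):
I(D) = \Bigl(\prod O(\epsilon^{-\alpha/2})\Bigr)\cdot o(\epsilon^{-\alpha/2})
     = o\bigl(\epsilon^{-n\alpha/2}\bigr)
     = o(\epsilon^{-\alpha/2})^{\,n}.
```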

Step 1: As in (5.21), we have

(6.2)  Σ_{i ∈ I_0} N_i ≤ 2k|I_0|

where N_i is the number of G-factors involving Y^i, after application of (5.12).

If N_i ≤ 2k−2 for any i, the dY^i integral is O(ε^{−[(2k−2)(2−β)−2]}) = o(ε^{−α/2}), since our assumption (2k−3)(2−β) < 2 implies (2k−2)(2−β) − 2 < α/2.

Now assume N_i = 2k−1. If Y^i is linked to at least

two other letters, then as in section 5 we can reduce the number of factors involving Y^i, and now the dY^i integral is o(ε^{−α/2}). If Y^i is linked to only one other letter, say Y^j, then N_i = 2k−1 is possible only if all the Y^i, Y^j's are contiguous. (We note for later that Y^j can be in I^c or I_0, but not in I − I_0.) The dY^i integral is O(ε^{−α}), while the dY^j integral will be bounded.



We can assume that all remaining N_i ≥ 2k, so that by (6.2) we actually have N_i = 2k. We recall that this can occur only if (5.12) is applied with pairs in I_0. We leave this for the next step.

Step 2: We begin integrating from the right. Let X denote the rightmost remaining letter.

If X ∈ I^c, it has no isolated factors, and being rightmost can appear in at most 2k−1 G-factors (the extra factors arising from (5.12) have either been integrated away, or involve only letters from I_0). If there were actually fewer than 2k−1 G-factors, then the dX integral would be o(ε^{−α/2}). If X is linked to two distinct letters, we can reduce the number of factors as before, while if all 2k−1 links are to the same letter, say Y, then Y is necessarily in I^c, and the dX integral is O(ε^{−α}), with the dY integral bounded.

If, as we integrate, we find the rightmost letter X = Y^i ∈ I_0, we can check that N_i = 2k is no longer possible, and we return to the analysis of step 1.

Let us now suppose that the remaining rightmost letter X ∈ I − I_0.

Then X ∈ Q_i for some i, say i = 1. Assume first that X is within 4k²ε of some letter in Q_1^c (we include 0); then automatically an analogous statement holds for all letters in Q_1. Before applying this we consider all of Q_1 as one letter and apply the preservation step to Q_1^c. This way, we do not attempt to preserve letters of Q_1 itself. By the definition of Q_1, each letter has at least one isolated



G-factor, hence at most 2k−1 G-factors, while X, being rightmost, must have at most 2k−2. We begin by integrating dX, giving O(ε^{−α/2}). Again, by the definition of Q_1, X had a G-factor in common with at least one other letter of Q_1; hence that letter now has at most 2k−2 G-factors and we can integrate it, again giving a contribution O(ε^{−α/2}). At any stage in our successive integration of the letters of Q_1, it must be that some remaining letter has had one G-factor removed, since Q_1 was defined by an equivalence relation. This gives a contribution O(ε^{−α/2}) for each letter of Q_1.

Assume now that X ∈ Q_1 is not within 4k²ε of any letter in Q_1^c, so that in fact no letter of Q_1 is within 4kε of any letter of Q_1^c. If |Q_1| ≥ 3, we integrate dX. We can use lemma 3 since X is close to the remaining letters of Q_1. Being the rightmost letter, its contribution is O(ε^{−α/2}). Prior to the dX integration we preserve all other letters, including Q_1 − X. Because of this, it is now possible that the remaining letters in Q_1 no longer form an equivalence class, but it will always be true that they are within 4kε of each other and of no letters in Q_1^c.

We continue in this fashion and can assume that X is in (an updated) Q_1, with Q_1 = {X, Y}. If l(Y) ≤ l(X), we do the dX integral using lemma 3 for a contribution O(ε^{2−(2k−l(X)−1)(2−β)}). When we reach Y, we have l(Y) isolated G-factors contributing O(ε^{−l(Y)(2−β)}), and at most 2k − 2l(Y) − 1 G-factors which give a convergent integral by lemma 2. Thus the total contribution is O(ε^{−α}) if l(Y) = l(X), and o(ε^{−α}) if in fact l(Y) < l(X).



If, on the other hand, l(X) < l(Y), we first do the dY integral using lemma 3. Y has at most 2k − l(Y) G-factors. If in fact this is ≤ 2k − l(Y) − 1 ≤ 2k − l(X) − 2, then the dY integral is

O(ε^{2−[2k−l(X)−2](2−β)}) = o(ε^{−α/2}) O(ε^{l(X)(2−β)})

and the dX integral is O(ε^{−l(X)(2−β)}) as above.

Otherwise, we preserve Q_1^c; then, if Y still has 2k − l(Y) G-factors, we first assume that at least one of these G-factors links Y with some Z ≠ X. We bound G(Y−Z) ≤ c G(X−Z), and after the dY integral there remain l(X) isolated G-factors for X and at most 2k − 2l(X) ≤ 2k − 2 G-factors linking X with other letters. Thus the dX integral is bounded by O(ε^{−l(X)(2−β)}) o(ε^{−α/2}) and altogether the dX dY integral is o(ε^{−α}).

If none of the 2k − l(Y) G-factors involving Y involve any letter Z ≠ X, then all non-isolated G-factors must link X and Y, in particular those factors to the immediate right and left. Since X occurs on the immediate left of Y, we needn't bother preserving it from the Y integration, which is

O(ε^{2−(2k−l(Y))(2−β)}) = O(ε^{2−(2k−1)(2−β)}) O(ε^{l(X)(2−β)}) = O(ε^{−α}) O(ε^{l(X)(2−β)}),

and the contribution from dX dY is O(ε^{−α}).

In this manner we see that I(D) = O(ε^{−α/2})^n.

Step 3: We must now show that in fact

(6.3)  I(D) = o(ε^{−α/2})^n.

Let us agree to call two letters X, Y totally paired if there are no other letters between them. From the above



analysis, we know that (6.3) holds unless D is such that all letters X fall into one of the following three types.

1) X ∈ I^c, and X is totally paired.

2) X ∈ I_0, and X totally paired. We recall that it cannot be paired with a letter from I − I_0.

3a) X ∈ I − I_0, and X ∈ Q_i, |Q_i| = 2. If, say, Q_1 = {X, Y}, then necessarily X, Y are G-close, hence have at least one common G-factor, and by the above we know that l(X) = l(Y) and X, Y are far (i.e. not within 4kε) from Q_1^c.

3b) Q_i = {X, Y} with X, Y totally paired.

Consider now the very first letter on the right, X. X cannot be totally paired, since that would mean we have a component of height 2, contrary to our assumption that U(D) is connected of height ≥ 3. Thus X is of type 3a, say X ∈ Q_1 = {X, Y}.

Once again, Q_1 cannot be totally paired; hence, proceeding from the right, there is a first letter, call it Z, interrupting X, Y. Following Z there may be other letters from Q_1^c; we let W be the last of these prior to the next X or Y. (Of course, we can have Z = W.)

We begin by trying to preserve this W from Q_1. If this step removes a G-factor involving X or Y, we break up the analysis into three cases.

a) If the removed G-factor contained X, then X now has at most 2k − l(X) − 2 G-factors, leading to an o(ε^{−α/2}) contribution as in step 2.

b) If the removed G-factor linked Y, but Z links X, then



bound G(X−Z) ≤ c G(Y−Z). Now preserve Q_1^c from Q_1. Once again X has at most 2k − l(X) − 2 factors, and while a priori Y has gained an extra G-factor, this gain is compensated by the loss of the G-factor which X, Y have in common. Note: we didn't have to preserve Y from the dX integration, because we have the factor G(Y−Z).

c) If both the removed G-factor and Z link to Y, then bound G(Z−Y) ≤ c G(Z−X). Preserve Q_1^c from Q_1, and do the dY integral first, since Y now has at most 2k − l(Y) − 2 factors. (In fact, the gain of G(Z−X) is compensated by the loss of a factor in common with W.) In any event the X, Y integral is o(ε^{−α}).

We can thus assume that our attempt to preserve the above W didn't remove any G-factors from X or Y. This can only happen if there is another W linked to X or Y to the left. We use step 2 to bound the X, Y integral by O(ε^{−α}), and now show that our resultant removal of two G-factors involving W will yield a proof of (6.3).

If W is of type 1), 2) or 3b), this is obvious, since these types require total pairing without any loss of G-factors. Thus W is of type 3a, hence part of a pair Q_2 = {U, W}. If W is to the right of U, the analysis of step 2 gives the desired result. Even if W is to the left of U, W has at most 2k − l(W) − 2 G-factors, so that the dW integral is

O(ε^{2−[2k−l(W)−2](2−β)}) = o(ε^{−α/2}) O(ε^{l(W)(2−β)}).

The dU integral has l(U) = l(W) isolated G-factors, and at most 2k − 2l(W) ≤ 2k − 2 others; hence the total dU, dW integral



is o(ε^{−α}). This completes the proof of Theorem 3.

7. Proof of Lemma 1

Proof of Lemma 1: We have

(7.1)  G(x) = ∫_0^∞ e^{−t} p_t(x) dt ≤ ∫_0^∞ p_t(x) dt,

which gives half of (a). We note that

(7.2)  p_t(x) = (1/(2π)²) ∫ e^{ip·x} e^{−t|p|^β} d²p

is a positive, C^∞ function of x, and

(7.3)  p_t(x) ≤ c t^{−2/β},  t > 0.
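The bound (7.3) follows from (7.2) by integrating out the momentum variable; a sketch:

```latex
p_t(x) \le \frac{1}{(2\pi)^2}\int e^{-t|p|^{\beta}}\, d^2 p
       = \frac{1}{2\pi}\int_0^{\infty} e^{-t r^{\beta}}\, r\, dr
       % substitute r = t^{-1/\beta} s, so r\,dr = t^{-2/\beta} s\,ds:
       = \frac{t^{-2/\beta}}{2\pi}\int_0^{\infty} e^{-s^{\beta}}\, s\, ds
       = c\, t^{-2/\beta}.
```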

If |x| ≠ 0, say x_1 ≠ 0, then integrating by parts in (7.2) in the dp_1 direction gives

(7.4)  p_t(x) = (c/x_1) ∫ e^{ip·x} tβ p_1 |p|^{β−2} e^{−t|p|^β} d²p.

Substituting this into (7.1) we have

(7.5)  G(x) = (c/x_1) ∫_0^∞ e^{−t} dt ∫ e^{ip·x} tβ p_1 |p|^{β−2} e^{−t|p|^β} d²p = (c/x_1) ∫ e^{ip·x} p_1 |p|^{β−2} (1+|p|^β)^{−2} d²p,

where interchanging the order of integration is easily justified by Fubini's theorem since β > 1.

We write (7.5) as

(7.6)  G(x) = (c/x_1) ∫ e^{ip·x} r_{β−1, β+1}(p) d²p,

where the notation r_{a,b}(p) will remind us that


(7.7)  r_{a,b}(p) ≤ c|p|^a for |p| ≤ 1,  r_{a,b}(p) ≤ c/|p|^b for |p| ≥ 1.

We integrate by parts twice more to find

G(x) = (c/x_1³) ∫ e^{ip·x} r_{β−3, β+3}(p) d²p,

which completes the proof of (a), since r_{β−3,β+3}(p) is integrable.

Furthermore, by (7.7)

(7.8)  ∇G(x) = (c/x_1³) ∫ e^{ip·x} p r_{β−3,β+3}(p) d²p = (c/x_1³) ∫ e^{ip·x} r_{β−2,β+2}(p) d²p,

and we can integrate by parts once more to find

(7.9)  ∇G(x) = (c/x_1⁴) ∫ e^{ip·x} r_{β−3,β+3}(p) d²p.

This procedure can be iterated, and shows that

(7.10)  |∇^ℓ G(x)| ≤ c / |x|^{ℓ+3}.

This will provide a good bound for large x. For small x, we recall (3.2):

(7.11)  G(x) = G_0(x) − H(x).

Of course, we have

(7.12)  |∇^ℓ G_0(x)| ≤ c / |x|^{2−β+ℓ},

and we intend to show that

(7.13)  |Δ^ℓ_{a_1,…,a_ℓ} H(x)| ≤ c |a_1||a_2| ⋯ |a_ℓ| / |x|^ℓ,  for |a_i| ≤ ε, |x| ≥ 4ℓε.

Altogether, this will give, for |x| ≥ 4ℓε,

(7.14)  |Δ^ℓ_{a_1,…,a_ℓ} G(x)| ≤ c |a_1| ⋯ |a_ℓ| / |x|^{2−β+ℓ}.
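That (7.12) and (7.13) combine into (7.14) on the region |x| ≤ 1 uses only 2 − β > 0 (our check):

```latex
% For |x| \le 1 and \beta < 2:
\frac{1}{|x|^{\ell}} = |x|^{2-\beta}\cdot\frac{1}{|x|^{2-\beta+\ell}}
                     \le \frac{1}{|x|^{2-\beta+\ell}},
```

so the H-bound (7.13) is dominated by the G_0-bound, and G = G_0 − H obeys (7.14).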

Combined with (7.10) we have

(7.15)  |Δ^ℓ_{a_1,…,a_ℓ} G(x)| ≤ c (|a_1| ⋯ |a_ℓ| / |x|^ℓ) r_{β−2,3}(x),

which is (2.11). We note that r_{β−2,3}(x) is integrable.

From (7.15) we have, for |x| ≥ 4ℓε,

(7.16)  sup_{|a_i| ≤ ε} ∏_{i=1}^{ℓ} G(a_i) |Δ^ℓ_{a_1,…,a_ℓ} G(x)| ≤ c G_0^ℓ(x) [ε/|x|]^{(β−1)ℓ} r_{β−2,3}(x),

which is (2.12).

We now prove (7.13) (but we first remark that if β > 3/2, then H(x) is C¹ and the following analysis can be simplified considerably). We have

H(x) = (1/(2π)²) ∫ e^{ip·x} (1 / (|p|^β (1+|p|^β))) d²p,

so that

(7.17)  Δ_a H(x) = (1/(2π)²) ∫ e^{ip·x} (e^{ip·a} − 1) (1 / (|p|^β (1+|p|^β))) d²p,

and integrating by parts in p_1 as before,

(7.18)  Δ_a H(x) = c (a_1/x_1) H(x+a) + (c/x_1) ∫ e^{ip·x} (e^{ip·a} − 1) r_{−β−1, 2β+1}(p) d²p.



Since |e^{ip·a} − 1| ≤ 2|p||a|, we obtain (7.13) for ℓ = 1. Write F(x;a) for the integral in (7.18), so that

(7.19)  Δ_a H(x) = c (a_1/x_1) H(x+a) + (c/x_1) F(x;a).

Then

(7.20)  Δ_b Δ_a H(x) = c (a_1/x_1) Δ_b H(x+a) + (c/x_1) Δ_b F(x;a).

We study the last term:

(7.21)  Δ_b F(x;a) = ∫ e^{ip·x} (e^{ip·b} − 1)(e^{ip·a} − 1) r_{−β−1, 2β+1}(p) d²p.

(7.22)

+ _c_ xl

b1 h.bF(xja) = c -- F(x+bja)

xl al

+ c -- F(x+ajb) xl

f ip'x ip'a ip'b 2 e (e -l)(e -1)r_p-2,2p+2(P)d p

and as before this establishes (7.13) for ℓ = 2. Iterating this procedure proves (7.13) for all ℓ, completing the proof of Lemma 1.



REFERENCES

[1] Dynkin, E.B. [1988A] Self-Intersection Gauge for Random Walks and for Brownian Motion. Ann. Probab., Vol. 16, No. 1, 1988.

[2] Dynkin, E.B. [1988B] Regularized Self-Intersection Local Times of the Planar Brownian Motion. Ann. Probab., Vol. 16, No. 1, 1988.

[3] Le Gall, J.-F. [1988] Wiener Sausage and Self-Intersection Local Times. Preprint.

[4] Rosen, J. [1986] A Renormalized Local Time for the Multiple Intersections of Planar Brownian Motion. Séminaire de Probabilités XX, Springer Lecture Notes in Mathematics 1204.

[5] Rosen, J. [1988] Limit Laws for the Intersection Local Time of Stable Processes in R². Stochastics, Vol. 23, 219–240.

[6] Yor, M. [1985] Renormalisation et convergence en loi pour les temps locaux d'intersection du mouvement Brownien dans R³. Séminaire de Probabilités XIX, 1983/84, J. Azéma, M. Yor, Eds., Springer-Verlag, Berlin-Heidelberg-New York-Tokyo, 350–356 (1985).

Jay S. Rosen*
Department of Mathematics
College of Staten Island
City University of New York
Staten Island, New York 10301


On Piecing Together Locally Defined Markov Processes

C.T. SHIH

Let E be a noncompact, locally compact separable metric space and let E_n be relatively compact open sets increasing to E. Suppose that for each n we are given a right process X_t^n on E_n, and assume these processes are consistent in the sense that X_t^{n+1} killed at the exit from E_n is a process that is (equivalent to) a time change of X_t^n (equivalently, has identical hitting distributions as X_t^n). We consider the problem of constructing a right process Y_t on E such that for each n the process Y_t killed at the exit from E_n is a time change of X_t^n. The problem was posed in Glover and Mitro [3].

The problem is solved here under a technical condition: any path of X_t^n must have finite lifetime if in X_t^{n+1} the corresponding time-changed path continues, i.e. still lives, after exiting from E_n. Also, we require the paths of each X_t^n to have left limits up to, but not including, their lifetime.

Actually what will be proved is somewhat more general. It is not required that the state spaces E_n be increasing, but only that they form an open covering of E (in this case the exit distributions of X_t^n will also be given); of course, the X_t^n must be consistent in an obvious way. The precise result is stated as the Main Theorem in section 1.

The problem of piecing together Markov processes that are equivalent on the common parts of their state spaces is treated in Courrège and Priouret [1] and



Meyer [4]; see the remark following theorem 1.1 below.

We remark that, with the result in this article, the theorem in [5] on construction of right processes from hitting distributions extends to the nontransient case; that is, the transience condition needs only to hold locally.

It is our pleasure to acknowledge very valuable discussions with Joe Glover

on this work.

1. Statement and Proof of the Main Result

Let E_Δ = E ∪ {Δ} be the one-point compactification of a locally compact separable metric space E, and let ℰ_Δ be its Borel σ-algebra. All (right) processes X_t considered in this article have E_Δ as the state space, with Δ as the usual adjoined death point, and have (almost) all paths right continuous, and with left limits on (0, T_Δ), where T_Δ = inf{t ≥ 0 : X_t = Δ}. X_t is said to have an open set G ⊂ E as its proper state space, and we usually say that X_t is a process on G, if each x ∈ E − G is absorbing, i.e. X_0 = x implies X_t ≡ x a.s.; the time T_{E_Δ−G} = inf{t ≥ 0 : X_t ∉ G} is called its lifetime ζ. (We remark, however, that a proper subset G′ of G can also be a proper state space of X_t; but no confusion will arise.)

Let X_t be a process on G, and let H be an open subset of G. We denote by X_t|H the process X_t stopped at the exit from H, i.e. the process X(t ∧ T_{E_Δ−H}). So X_t|H is the process obtained from X_t by changing every x ∈ G − H into an absorbing point.

Let X_t^1, X_t^2 be two processes. We write X_t^1 = X_t^2 if they are equivalent (in the usual sense), and write X_t^1 ∼ X_t^2 if they are time changes of each other.

Main Theorem. Let {E_n, n ≥ 1} be an open covering of E with (compact) closures Ē_n ⊂ E. For each n let X_t^n be a (right) process with E_n as its proper state space. Assume that the X_t^n satisfy the following consistency condition: for all m ≠ n

X_t^m|E_m ∩ E_n ∼ X_t^n|E_m ∩ E_n.

Then there exists a (right) process Y_t on E such that for all n

Y_t|E_n ∼ X_t^n.

Remark 1. Note it is assumed that we are given, for sets E_n, the stopped (rather than killed) process X_t^n at the exit from E_n, up to a time change, of a certain process on E. The stopped process contains a bit more information than the killed process, namely the exit distributions. Note also the requirement that if a path of the stopped process X_t^n reaches a point in E − E_n at the exit time from E_n, then of course this time is finite. In the case that E_n ↑ E, we need only to be given the killed processes to know the stopped processes, because the exit distributions of the stopped process X_t^n are the weak limits of the corresponding exit distributions from E_n of the killed processes X_t^m, as m → ∞. However the above mentioned condition of the exit time of X_t^n being finite if a path is to continue beyond this time (in X_t^{n+1}) is nevertheless a restriction. [This restriction is not a real one if the following conjecture is true: every right process, which may be partly transient and partly recurrent, can be time-changed so that the lifetime of almost every path is finite except possibly when the path left limit does not exist there.]

Remark 2. Another case where we know the exit distributions from the killed processes is when the X_t^n are diffusions. In general, of course, we need to be given the stopped processes (again, up to a time change) in order to be able to construct the Y_t.

Remark 3. The theorem covers the case when E is compact (where Δ is an isolated point). This is the case, for example, for a Brownian motion or diffusion on a circle or sphere.

Remark 4. If E is noncompact, the process Y_t is not necessarily unique (unique up to a time change). The process Y_t we will obtain is minimal in the sense that, with T_n = inf{t ≥ 0 : Y_t ∉ E_1 ∪ ⋯ ∪ E_n}, lim_n T_n is its lifetime.



Remark 5. In the case where E_n ↑ E, the proof is relatively short; see corollary 1.4. Actually it can be proved without theorems 1.1 and 1.2; see the remark after theorem 1.3.

Theorem 1.1. For i = 1, 2 let Z_t^i be a (right) process on an open set G_i, and assume Z_t^1|G_1 ∩ G_2 = Z_t^2|G_1 ∩ G_2. Then there exists a (right) process Ẑ_t on G_1 ∪ G_2 such that Ẑ_t|G_i = Z_t^i for both i.

A proof of theorem 1.1 can be found in Meyer [4), which derives a cert ain

general result and uses it to prove among other things (a variat ion of) the the­

orem of Courrege and Priouret [1) on piecing together Markov processes that

are equivalent on the common parts of their state spaces. For completeness we

include, in section 2, a proof, which is somewhat different from the one in [4).

The reference [4) was pointed out to us by Pat Fitzsimmons.

The process .it in theorem 1.1 is not necessarily uniquej however, we have

the following uniqueness result, which is needed later.

Theorem 1.2. Let Gi, zi and .it be as in theorem 1.1. Let G3 be open with

C 3 C G2 • Then if F is open with F c G1 U Ga and Zt is a (right) process on F

such that ZtlFno; '" Z;IFno; for i = 1,2, we have Zt '" ZtIF.

This will be proved at the end of section 2.

Theorem 1.3. For i = 1, 2 let W_t^i be a (right) process on an open set H_i with H_1 ⊂ H_2. Suppose that W_t^2|H_1 ~ W_t^1. Then for any open H whose closure is contained in H_1 there exists a (right) process Z_t on H_2 such that Z_t ~ W_t^2 and Z_t|H = W_t^1|H.

Proof. Let W_t = W_t^2|H_1 = W^2(t ∧ T_{E_Δ−H_1}); then W_t ~ W_t^1. Let A_t be a (strictly increasing continuous) additive functional whose inverse time-changes W_t into W_t^1. Define

B_t = ∫_0^t 1_H(W_s) dA_s + ∫_0^t 1_{E_Δ−H}(W_s) ds.

B_t is a well-defined strictly increasing continuous additive functional of W_t. Denote by Z_t^1 the process obtained from W_t by the time change by the inverse of B_t. Clearly

Page 326: Seminar on Stochastic Processes, 1990

Piecing Together Processes 325

Z_t^1|H = W_t^1|H. Let G_1 = H_1, G_2 = H_2 − H̄ (H̄ the closure of H) and Z_t^2 = W_t^2|G_2. Then Z_t^1, Z_t^2 satisfy the conditions of Theorem 1.1. Denote by Z_t the process Ẑ_t of Theorem 1.1. Thus Z_t is a process on G_1 ∪ G_2 = H_2, with Z_t|G_1 = Z_t^1 and Z_t|G_2 = Z_t^2 = W_t^2|G_2. The first of these equivalences implies Z_t|H = W_t^1|H. To show Z_t ~ W_t^2, let G_3 be open with H_2 − H_1 ⊂ G_3 and with the closure of G_3 contained in G_2. Note G_1 ∪ G_3 = H_2. Applying Theorem 1.2 to W_t^2 and F = H_2 = G_1 ∪ G_3, we have W_t^2 ~ Z_t. ∎
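In outline (our notation; this gloss is not in the original), the time change by the inverse of B_t acts as follows:

```latex
\tau_t = \inf\{s \ge 0 : B_s > t\}, \qquad Z^1_t = W_{\tau_t},
\qquad dB_s = 1_H(W_s)\,dA_s + 1_{E_\Delta - H}(W_s)\,ds ,
```

so while the path is in $H$ it runs on the clock $A$ (under which $W$ becomes $W^1$, giving $Z^1|_H = W^1|_H$), and off $H$ it runs at unit speed, i.e. as a time change of $W \sim W^2$.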

Remark. Theorem 1.3 can be proved directly, i.e. without using Theorems 1.1 and 1.2, as follows. Define stopping times T_n in W_t^1 by: T_0 = 0, and for

The fact that paths have left limits on (0, T_Δ) implies T_n ↑ T_Δ. Let B_t be as in the above proof, which is defined in W_t^1. Define in W_t^1

where θ denotes the shift operator. It is not difficult to rigorously show that C_t

is a strictly increasing continuous additive functional. The process obtained from W_t^1 by the time change by the inverse of C_t then satisfies Theorem 1.3. Note that, based on this, Corollary 1.4 (which establishes the special case of the main theorem where E_n ↑ E) does not have to rely on Theorems 1.1 and 1.2, as its proof uses only Theorem 1.3.

Corollary 1.4. Let E_n be relatively compact open sets with E_n ↑ E. For each n we are given a (right) process X_t^n on E_n such that X_t^{n+1}|E_n ~ X_t^n. Then there exists a (right) process Y_t on E such that Y_t|E_n ~ X_t^n for all n.

Proof. Choose open sets E′_n with the closure of E′_n contained in E_n and E′_n ↑ E. We will define processes Y_t^n on E_n such that Y_t^{n+1}|E′_n = Y_t^n|E′_n and Y_t^n ~ X_t^n. The sequence

Page 327: Seminar on Stochastic Processes, 1990


of processes Y_t^n|E′_n then admits a projective limit process Y_t on E satisfying Y_t|E′_n = Y_t^n|E′_n for all n. The property Y_t|E_m ~ X_t^m will follow because if E_m ⊂ E′_n, then Y_t|E_m = Y_t^n|E_m ~ X_t^n|E_m ~ X_t^m. To define the sequence Y_t^n, first let Y_t^1 = X_t^1, and apply Theorem 1.3 with H_1 = E_1, H_2 = E_2, H = E′_1, W_t^1 = Y_t^1 and W_t^2 = X_t^2 to get a process Y_t^2 (which is the Z_t in the theorem) on H_2 = E_2 satisfying Y_t^2|E′_1 = Y_t^1|E′_1 and Y_t^2 ~ X_t^2. In general, assuming that we have obtained a process Y_t^n on E_n satisfying Y_t^n|E′_{n−1} = Y_t^{n−1}|E′_{n−1} and Y_t^n ~ X_t^n, apply Theorem 1.3 with H_1 = E_n, H_2 = E_{n+1}, H = E′_n, W_t^1 = Y_t^n and W_t^2 = X_t^{n+1} to get a process Y_t^{n+1} on E_{n+1} satisfying Y_t^{n+1}|E′_n = Y_t^n|E′_n and Y_t^{n+1} ~ X_t^{n+1}. The existence of the sequence Y_t^n thus follows by induction. ∎

Theorem 1.5. Let J_1, J_2, J_3 be open sets with the closure of J_3 contained in J_2. For i = 1, 2 let V_t^i be a (right) process on J_i such that V_t^1|J_1∩J_2 ~ V_t^2|J_1∩J_2. Then i) there exists a (right) process V_t on J_1 ∪ J_3 such that V_t|J_1 = V_t^1 and V_t|J_3 ~ V_t^2|J_3; ii) if F is open with F ⊂ J_1 ∪ J_3 and Ṽ_t is a (right) process on F such that Ṽ_t|F∩J_i ~ V_t^i|F∩J_i for i = 1, 2, then Ṽ_t ~ V_t|F.

Proof. Let J_4 be open with the closure of J_3 contained in J_4 and the closure of J_4 contained in J_2. Applying Theorem 1.3 with H_1 = J_1 ∩ J_2, H_2 = J_2, H = J_1 ∩ J_4, W_t^1 = V_t^1|J_1∩J_2 and W_t^2 = V_t^2, we obtain a process Z_t on J_2 satisfying Z_t ~ V_t^2 and Z_t|J_1∩J_4 = W_t^1|J_1∩J_4 = V_t^1|J_1∩J_4. Next use Theorem 1.1 with G_1 = J_1, G_2 = J_4, Z_t^1 = V_t^1 and Z_t^2 = Z_t|J_4 to obtain a process Ẑ_t on J_1 ∪ J_4 such that Ẑ_t|J_1 = V_t^1 and Ẑ_t|J_4 = Z_t|J_4, the latter equivalence implying Ẑ_t|J_4 ~ V_t^2|J_4. Let V_t = Ẑ_t|J_1∪J_3. This V_t satisfies i). Part ii) follows from Theorem 1.2 with G_i, Z_t^i as above, G_3 = J_3, and Ṽ_t in the role of the process Z_t there. ∎

Proof of Main Theorem. Let {G_n, n ≥ 1} be an open covering of E with G_1 = E_1 and the closure of G_n contained in E_n for n ≥ 2. We will define for each n a process Y_t^n on F_n = G_1 ∪ … ∪ G_n such that Y_t^{n+1}|F_n = Y_t^n. The process Y_t will be the projective limit of the sequence Y_t^n; it satisfies Y_t|F_n = Y_t^n for all n and has lifetime lim_n T_{E_Δ−F_n}. Let Y_t^1 = X_t^1. Applying Theorem 1.5 with J_1 = F_1 = G_1 =

Page 328: Seminar on Stochastic Processes, 1990


E_1, J_2 = E_2, J_3 = G_2, and V_t^1 = Y_t^1, V_t^2 = X_t^2, we obtain a process Y_t^2 (which is the V_t in the theorem) on J_1 ∪ J_3 = F_1 ∪ G_2 = F_2 such that i) Y_t^2|F_1 = Y_t^1 and Y_t^2|G_2 ~ X_t^2|G_2, and ii) if F is open with F ⊂ F_1 ∪ G_2 = F_2 and V_t is a process on F with V_t|F∩F_1 ~ Y_t^1|F∩F_1 and V_t|F∩G_2 ~ X_t^2|F∩G_2, then V_t ~ Y_t^2|F. Using ii) with F = E_3 ∩ F_2 and V_t = X_t^3|F (note X_t^3|E_3∩F_1 ~ X_t^1|E_3∩F_1 = Y_t^1|E_3∩F_1 and X_t^3|E_3∩G_2 ~ X_t^2|E_3∩G_2), we get Y_t^2|E_3∩F_2 ~ X_t^3|E_3∩F_2. This permits us to apply Theorem 1.5 to J_1 = F_2, J_2 = E_3, J_3 = G_3, and V_t^1 = Y_t^2, V_t^2 = X_t^3 to obtain Y_t^3.

In general, suppose Y_t^n has been obtained as a process on F_n = F_{n−1} ∪ G_n = G_1 ∪ … ∪ G_n such that i) Y_t^n|F_{n−1} = Y_t^{n−1} and Y_t^n|G_n ~ X_t^n|G_n, and ii) if F is open with F ⊂ F_{n−1} ∪ G_n = F_n and V_t is a process on F with V_t|F∩F_{n−1} ~ Y_t^{n−1}|F∩F_{n−1} and V_t|F∩G_n ~ X_t^n|F∩G_n, then V_t ~ Y_t^n|F. Using ii) with F = E_{n+1} ∩ F_n and V_t = X_t^{n+1}|F (and an appropriate induction) we have X_t^{n+1}|E_{n+1}∩F_n ~ Y_t^n|E_{n+1}∩F_n. Now applying Theorem 1.5 with J_1 = F_n, J_2 = E_{n+1}, J_3 = G_{n+1}, and V_t^1 = Y_t^n, V_t^2 = X_t^{n+1}, we obtain Y_t^{n+1} on F_{n+1} = F_n ∪ G_{n+1} satisfying the corresponding i) and ii). Thus the existence of the sequence Y_t^n follows by induction. Finally we need to show that the projective limit process Y_t satisfies Y_t|E_m ~ X_t^m. Choose n with E_m ⊂ F_n; then Y_t|F_n ~ Y_t^n implies Y_t|E_m ~ Y_t^n|E_m. But Y_t^n|E_m ~ X_t^m, which follows by applying condition ii) of Y_t^n with F = E_m, V_t = X_t^m, and using an appropriate induction on n. ∎

2. Proofs of Theorems 1.1 and 1.2

To prove Theorem 1.1, let Ω be the space of all right continuous functions from [0, ∞) into E_Δ. Ω can serve as the sample space of both Z_t^i; of course Z_t^i(ω) = ω_t. Let P_i^x, x ∈ E_Δ, be the probability measure governing Z_t^i when it starts at x. Define

P^x = P_1^x if x ∈ G_1; = P_2^x if x ∈ G_2 − G_1; = the point mass at the ω with ω_t ≡ x if x ∈ E_Δ − (G_1 ∪ G_2).

Let Z_t(ω) = ω_t = Z_t^i(ω). With ζ_i = T_{E_Δ−G_i} = inf{t ≥ 0: Z_t^i ∉ G_i}, the lifetime

Page 329: Seminar on Stochastic Processes, 1990


of Z_t^i, let

Now set

Q(ω, dω′) = P^{Z_ζ(ω)}(dω′)

(note Z_∞ ≡ Δ by convention). Q is a (transition) kernel on (Ω, F), where F is the usual completion of σ(Z_t, t ≥ 0) with respect to the measures P^μ = ∫ μ(dx) P^x. Next define

Ω̂ = Ω × … × Ω × …,  F̂ = F × … × F × …,

and let P̂^x, x ∈ E_Δ, be the probability measure on (Ω̂, F̂) satisfying

P̂^x{(ω_1, …, ω_n, …): ω_k ∈ A_k, 1 ≤ k ≤ n}
= ∫ P^x(dω_1) ∫ Q(ω_1, dω_2) ∫ … ∫ Q(ω_{n−1}, dω_n) 1_{A_1 × … × A_n}(ω_1, …, ω_n).

With ω̂ = (ω_1, …, ω_n, …) let

Finally define

Ẑ_t(ω̂) = Z_{t−T_{n−1}(ω̂)}(ω_n) if T_{n−1}(ω̂) ≤ t < T_n(ω̂),
= Δ if t ≥ ζ̂(ω̂) and ζ̂(ω̂) > T_n(ω̂) for all n,
= Z_ζ(ω_n) if t ≥ ζ̂(ω̂) and ζ̂(ω̂) = T_n(ω̂) for some n ≥ 1.

By the construction we have an obvious Markov property of Ẑ_t at the times T_n, which reflects the Markov property of the discrete time process ω̂ → ω_n on (Ω̂, F̂); this will be used below.

In order to show that Ẑ_t is a right process on G_1 ∪ G_2, define for α > 0, f ∈ bE_Δ,

Page 330: Seminar on Stochastic Processes, 1990


U_i^α f(x) = P_i^x ∫_0^{ζ_i} e^{−αt} f(Z_t^i) dt,

U^α f(x) = P^x ∫_0^ζ e^{−αt} f(Z_t) dt = U_1^α f(x) if x ∈ G_1; = U_2^α f(x) if x ∈ G_2 − G_1; = 0 otherwise,

Û^α f(x) = P̂^x ∫_0^{ζ̂} e^{−αt} f(Ẑ_t) dt.

The Markov property of Ẑ_t at the time T_1 yields immediately

Lemma 2.1. For x ∈ E_Δ, α > 0, f ∈ bE_Δ,

Û^α f(x) = P̂^x ∫_0^{T_1} e^{−αt} f(Ẑ_t) dt + P̂^x e^{−αT_1} Û^α f(Ẑ_{T_1})
= U^α f(x) + P^x e^{−αζ} Û^α f(Z_ζ).

Lemma 2.2. For y ∈ G_1 ∩ G_2, α > 0, f ∈ bE_Δ

Proof. Define R = inf{t ≥ 0: Z_t ∉ G_1 ∩ G_2} and R̂ = inf{t ≥ 0: Ẑ_t ∉ G_1 ∩ G_2}. Then

Û^α f(y) = P̂^y ∫_0^{R̂} e^{−αt} f(Ẑ_t) dt + P̂^y[∫_{R̂}^{ζ̂} e^{−αt} f(Ẑ_t) dt; Ẑ_{R̂} ∈ G_1 − G_2] + P̂^y[e^{−αR̂} Û^α f(Ẑ_{R̂}); Ẑ_{R̂} ∈ G_2 − G_1]

= P_1^y ∫_0^R e^{−αt} f(Z_t^1) dt + P̂^y[e^{−αR̂} Û^α f(Ẑ_{R̂}); Ẑ_{R̂} ∈ G_1 − G_2] + P_1^y[e^{−αR} Û^α f(Z_R^1); Z_R^1 ∈ G_2 − G_1],

using the fact P^y = P_1^y for y ∈ G_1 on the 1st and 3rd terms; and, for the 2nd term, combining the Markov property of Z_t^1 at R with that of Ẑ_t at T_1. Since Z_t^1|G_1∩G_2 = Z_t^2|G_1∩G_2 and since P^z = P_2^z for z ∈ G_2 − G_1, the above

= P_2^y ∫_0^R e^{−αt} f(Z_t^2) dt + P_2^y[e^{−αR} Û^α f(Z_R^2); Z_R^2 ∈ G_1 − G_2] + P_2^y[e^{−αR}(U_2^α f(Z_R^2) + P_2^{Z(R)} e^{−αζ_2} Û^α f(Z_{ζ_2}^2)); Z_R^2 ∈ G_2 − G_1]

= I + II + (III + IV),

Page 331: Seminar on Stochastic Processes, 1990

330 C.T. Shih

where we have used Lemma 2.1 to obtain the third term. Now

completing the proof. ∎

Lemma 2.3. Let x ∈ E_Δ, s ≥ 0, and let Λ be of the form Λ = {Ẑ_{s_j} ∈ Γ_j, 1 ≤ j ≤ k; s < T_1}, where 0 ≤ s_1 < … < s_k ≤ s. Then for α > 0, f ∈ bE_Δ,

P̂^x[∫_0^{ζ̂} e^{−αt} f(Ẑ_{s+t}) dt; Λ] = P̂^x[Û^α f(Ẑ_s); Λ].  (2.1)

Proof. We need only prove this for x ∈ G_1 ∪ G_2. By the Markov property of Ẑ_t at T_1, the left-hand side of (2.1) equals

(2.2)

where Λ̄ = {Z_{s_j} ∈ Γ_j, 1 ≤ j ≤ k; s < ζ}. The right-hand side of (2.1) is P̂^x[Û^α f(Ẑ_s); Λ]. If x ∈ G_1, applying the Markov property of Z_t^1 at the time s and Lemma 2.1, we have that this last expression equals (2.2). If x ∈ G_2 − G_1, write this expression as

Apply the Markov property of Z_t^2 at time s, and use Lemma 2.1 on the first term above and Lemma 2.2 on the second term, to obtain (2.2). ∎

Lemma 2.4. (Ẑ_t, P̂^x) is simple Markov.

Proof. Let x ∈ G_1 ∪ G_2, u ≥ 0, and let Γ be of the form Γ = {Ẑ_{u_j} ∈ A_j, 1 ≤ j ≤ m}, where 0 ≤ u_1 < … < u_m ≤ u. We need to show that for α > 0, f ∈ bE_Δ,

P̂^x[∫_0^{ζ̂} e^{−αt} f(Ẑ_{u+t}) dt; Γ] = P̂^x[Û^α f(Ẑ_u); Γ].  (2.3)

Page 332: Seminar on Stochastic Processes, 1990


Let Γ_{nl} = Γ ∩ {u_{l−1} < T_n ≤ u_l ≤ u < T_{n+1}}, where u_0 stands for −1. Then, using the Markov property of Ẑ_t at T_n, we have

P̂^x[∫_0^{ζ̂} e^{−αt} f(Ẑ_{u+t}) dt; Γ_{nl}]
= P̂^x[P̂^{Ẑ(T_n(ω̂))}{∫_0^{ζ̂} e^{−αt} f(Ẑ_{u−T_n(ω̂)+t}) dt; Ẑ_{u_j−T_n(ω̂)} ∈ A_j, l ≤ j ≤ m, u − T_n(ω̂) < T_1}; Ẑ_{u_j}(ω̂) ∈ A_j, 1 ≤ j < l, u_{l−1} < T_n(ω̂) ≤ u_l]

(where the inner integrand is a function of ω̂′). Apply Lemma 2.3 with x = Ẑ(T_n(ω̂)) to reduce the above to

P̂^x[P̂^{Ẑ(T_n(ω̂))}{Û^α f(Ẑ_{u−T_n(ω̂)}); Ẑ_{u_j−T_n(ω̂)} ∈ A_j, l ≤ j ≤ m, u − T_n(ω̂) < T_1}; Ẑ_{u_j}(ω̂) ∈ A_j, 1 ≤ j < l, u_{l−1} < T_n(ω̂) ≤ u_l],

which by the Markov property of Ẑ_t at T_n equals

Summing over n, l we obtain (2.3). ∎

Proof of Theorem 1.1. By Lemma 2.4 and a standard theorem, to complete the proof that (Ẑ_t, P̂^x) is a right process it suffices to show that for α > 0, f ∈ bE_Δ with f ≥ 0, t → Û^α f(Ẑ_t) is right continuous a.s. P̂^x for all x. Using the Markov property of Ẑ_t at the times T_n, it suffices to show that t → Û^α f(Ẑ_t) is right continuous on [0, T_1) a.s. P̂^y for all y, i.e. that t → Û^α f(Z_t) is right continuous on [0, ζ) a.s. P^y for all y. By Lemmas 2.1 and 2.2,

for both i = 1, 2. The right-hand side is obviously α-excessive with respect to Z_t^i, and so a.s. P_i^y, t → Û^α f(Z_t^i) is right continuous on [0, ζ_i). Thus t → Û^α f(Z_t) is right continuous on [0, ζ) a.s. P^y for all y. Finally, it remains to show

Page 333: Seminar on Stochastic Processes, 1990


for both i. But this is immediate from the construction and Lemma 2.2. ∎

Proof of Theorem 1.2. Denote Z̃_t = Ẑ_t|F. We show that Z_t and Z̃_t have identical hitting distributions; thus by the Blumenthal-Getoor-McKean theorem one has Z_t ~ Z̃_t. (For a modern version of the B-G-M theorem, see [2].) Let D be a compact set in E and T_D = inf{t ≥ 0: Z_t ∈ D} or inf{t ≥ 0: Z̃_t ∈ D}. We must show that for all x, P^x(Z(T_D) ∈ ·) = P̃^x(Z̃(T_D) ∈ ·). Define stopping times S_n in Z_t by: S_0 = 0 and

S_{n+1} = inf{t ≥ S_n: Z_t ∈ D ∪ (G_1 ∩ F)^c} if Z_{S_n} ∈ G_1 ∩ F,
= inf{t ≥ S_n: Z_t ∈ D ∪ (G_2 ∩ F)^c} if Z_{S_n} ∈ G_3 ∩ F,
= inf{t ≥ S_n: Z_t ∈ D} otherwise.

The same stopping times in Z̃_t are also denoted S_n. Now, using the fact

one has by induction P^x[Z(S_n) ∈ ·] = P̃^x[Z̃(S_n) ∈ ·] for all n. The desired equality of hitting distributions will follow from this and the convergence

for B ⊂ E, and the same convergence in Z̃_t. The reason for this convergence is that if S_n < T_D for all n, then for infinitely many n we have Z(S_n) ∈ G_3 ∩ F and Z(S_{n+1}) ∈ G_2^c ∩ F, and so Z(S_n) diverges (because dist(G_3, G_2^c) > 0), which implies lim_n S_n = T_Δ (because the paths have left limits on (0, T_Δ)) and so T_D = ∞; and the same is valid for Z̃_t. ∎

REFERENCES

[1] PH. COURRÈGE et P. PRIOURET. Recollements de processus de Markov. Publ. Inst. Statist. Univ. Paris 14 (1965) 275-377.

[2] P.J. FITZSIMMONS, R.K. GETOOR and M.J. SHARPE. The Blumenthal-Getoor-McKean theorem revisited. Seminar on Stochastic Processes, 1989. Birkhäuser, Boston (1990) 35-57.

Page 334: Seminar on Stochastic Processes, 1990


[3] JOSEPH GLOVER and JOANNA MITRO. Symmetries and functions of Markov processes. Annals of Probab. 18 (1990) 655-668.

[4] P.A. MEYER. Renaissance, recollements, mélanges, ralentissement de processus de Markov. Ann. Inst. Fourier, Grenoble 23 (1975) 465-491.

[5] C.T. SHIH. Construction of right processes from hitting distributions. Seminar on Stochastic Processes, 1983. Birkhäuser, Boston (1984) 189-256.

C.T. SHIH Department of Mathematics University of Michigan Ann Arbor, Michigan 48109-1003

Page 335: Seminar on Stochastic Processes, 1990

Measurability of the Solution of a Semilinear Evolution Equation

BIJAN Z. ZANGENEH

1 Introduction Let H be a real separable Hilbert space with inner product and norm denoted by ⟨·,·⟩ and ‖·‖, respectively. Let (Ω, F, F_t, P) be a complete stochastic basis with a right continuous filtration. Let Z be an H-valued cadlag semimartingale. Consider the initial value problem for the semilinear stochastic evolution equation of the form

dX_t = A(t)X_t dt + f_t(X_t) dt + dZ_t,  X(0) = X_0,  (1)

where
• f_t(·) = f(t, ω, ·): H → H is of monotone type, and for each x ∈ H, f_t(x) is a stochastic process which satisfies certain measurability conditions;
• A(t) is an unbounded closed linear operator which generates an evolution operator U(t, s).

We say X_t is a mild solution of (1) if it is a strong solution of the integral equation

X_t = U(t,0)X_0 + ∫_0^t U(t,s) f_s(X_s) ds + ∫_0^t U(t,s) dZ_s.  (2)

Since Z is a cadlag semimartingale, the stochastic convolution integral ∫_0^t U(t,s) dZ_s is known to be a cadlag adapted process [see Kotelenez (1982)]. More generally, instead of (2) we are going to study

X_t = U(t,0)X_0 + ∫_0^t U(t,s) f_s(X_s) ds + V_t,  (3)

where V_t is a cadlag adapted process. The existence and uniqueness of the solution of equation (3) in the case in which f is independent of ω and V ≡ 0 is a well-known theorem of Browder (1964) and Kato (1964).

In Theorem 4 of this paper we will show that the solution of (3) is measurable in the appropriate sense. In addition, diverse examples which have arisen in applications are shown to satisfy the hypotheses of Theorem 4, and consequently the results can

Page 336: Seminar on Stochastic Processes, 1990

336 B.Z. Zangeneh

be applied to these examples. This solution will be shown to be a weak limit of solutions of (3) in the case when A ≡ 0, which in turn have been constructed by the Galerkin approximation of the finite-dimensional equation.

In Section 2 we prove that the solution of (3) in the case when A ≡ 0 is measurable, and in Section 3 we generalize this to the case when A is non-trivial.

In Zangeneh (1990) measurability of the solution of (3) is used to prove the existence of the solution of the stochastic semilinear integral equation

X_t = U(t,0)X_0 + ∫_0^t U(t,s) f_s(X_s) ds + ∫_0^t U(t,s) g_s(X) dW_s + V_t,  (4)

where
• g_·(·) is a uniformly-Lipschitz predictable functional with values in the space of Hilbert-Schmidt operators on H;
• {W_t, t ∈ R} is a cylindrical Brownian motion on H with respect to (Ω, F, F_t, P).

1.1 Notation and Definitions Let g be an H-valued function defined on a set D(g) ⊂ H. Recall that g is monotone if for each pair x, y ∈ D(g),

⟨g(x) − g(y), x − y⟩ ≥ 0,

and g is semi-monotone with parameter M if, for each pair x, y ∈ D(g),

⟨g(x) − g(y), x − y⟩ ≥ −M ‖x − y‖².

On the real line we can represent any semi-monotone function with parameter M as f(x) − Mx, where f is a non-decreasing function on R.
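For instance (an illustration of ours, not in the paper): on $\mathbf{R}$, $g(x) = \sin x$ is semi-monotone with parameter $M = 1$, since $\sin$ is 1-Lipschitz:

```latex
\langle g(x) - g(y),\, x - y \rangle = (\sin x - \sin y)(x - y)
  \ge -|\sin x - \sin y|\,|x - y| \ge -\|x - y\|^2 .
```

Equivalently, $f(x) = \sin x + x$ is non-decreasing, matching the representation $g(x) = f(x) - Mx$ with $M = 1$.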

We say g is bounded if there exists an increasing continuous function ψ on [0, ∞) such that ‖g(x)‖ ≤ ψ(‖x‖) for all x ∈ D(g). g is demi-continuous if, whenever (x_n) is a sequence in D(g) which converges strongly to a point x ∈ D(g), then g(x_n) converges weakly to g(x).

Let (Ω, F, F_t, P) be a complete stochastic basis with a right continuous filtration. We follow Yor (1974) and define cylindrical Brownian motion as follows.

Definition 1 A family of random linear functionals {W_t, t ≥ 0} on H is called a cylindrical Brownian motion on H if it satisfies the following conditions: (i) W_0 = 0 and W_t(x) is F_t-adapted for every x ∈ H. (ii) For every x ∈ H such that x ≠ 0, W_t(x)/‖x‖ is a one-dimensional Brownian motion.

Note that cylindrical Brownian motion is not H-valued because its covariance is not nuclear. For the properties of cylindrical Brownian motion and the definition of stochastic integrals with respect to cylindrical Brownian motion, see Yor (1974).

2 Measurability of the Solution

2.1 Integral Equations in Hilbert Space

Let (G, 𝒢) be a measurable space, i.e., G is a set and 𝒢 is a σ-field of subsets of G. Let T > 0 and let S = [0, T]. Let β be the Borel field of S. Let L²(S, H) be the set of all H-valued square integrable functions on S.

Page 337: Seminar on Stochastic Processes, 1990

A Semilinear Evolution Equation 337

Consider the integral equation

u(t,y) = ∫_0^t f(s, y, u(s,y)) ds + V(t,y),  t ∈ S, y ∈ G,  (5)

where f: S × G × H → H and V: S × G → H. The variable y is a parameter, which in practice will be an element ω of a probability space.

Our aim in this section is to show that under proper hypotheses on f and V there exists a unique solution u to (5), and that this solution is a β × 𝒢-measurable function of t and the parameter y.

We say X(·,·) is measurable if it is β × 𝒢-measurable. We will study (5) in the case where −f is demi-continuous and semi-monotone on H and V is right continuous and has left limits in t (cadlag). This has been well studied in the case in which V is continuous and f is bounded by a polynomial and does not depend on the parameter y; see for example Bensoussan and Temam (1972).

Let ℋ be the Borel field of H. Consider functions

f: S × G × H → H,
V: S × G → H.

We impose the following conditions on f and V:

Hypothesis 1 (a) f is β × 𝒢 × ℋ-measurable and V is β × 𝒢-measurable. (b) For each t ∈ S and y ∈ G, x → f(t,y,x) is demi-continuous and uniformly bounded in t. (That is, there is a function φ = φ(y, r) on G × [0, ∞), continuous and increasing in r, such that for all t ∈ S, x ∈ H, and y ∈ G, ‖f(t, y, x)‖ ≤ φ(y, ‖x‖).) (c) There exists a non-negative 𝒢-measurable function M(y) such that for each t ∈ S and y ∈ G, x → −f(t,y,x) is semi-monotone with parameter M(y). (d) For each y ∈ G, t → V(t,y) is cadlag.

Theorem 1 Suppose f and V satisfy Hypothesis 1. Then for each y ∈ G, (5) has a unique cadlag solution u(·,y), and u(·,·) is β × 𝒢-measurable. Furthermore

‖u(t,y)‖ ≤ ‖V(t,y)‖ + 2 ∫_0^t e^{M(y)(t−s)} ‖f(s, y, V(s,y))‖ ds;  (6)

‖u(·,y)‖_∞ ≤ ‖V(·,y)‖_∞ + 2 C_T φ(y, ‖V(·,y)‖_∞),  (7)

where ‖u‖_∞ = sup_{0≤t≤T} ‖u(t)‖, and C_T = (1/M(y)) e^{M(y)T} if M(y) ≠ 0, C_T = 1 otherwise.

Let us reduce this theorem to the case when M = 0 and V = 0. Define the transformation

X(t,y) = e^{M(y)t} (u(t,y) − V(t,y))  (8)

and set

g(t,y,x) = e^{M(y)t} f(t, y, V(t,y) + x e^{−M(y)t}) + M(y) x.  (9)
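The computation behind this reduction (a sketch in our words): by (5), $u(t,y) - V(t,y) = \int_0^t f(s,y,u(s,y))\,ds$ is absolutely continuous in $t$, so (8) may be differentiated:

```latex
\frac{d}{dt} X(t,y)
  = M(y)\,X(t,y) + e^{M(y)t} f\bigl(t, y, u(t,y)\bigr)
  = M(y)\,X(t,y) + e^{M(y)t} f\bigl(t, y, V(t,y) + X(t,y)\,e^{-M(y)t}\bigr)
  = g\bigl(t, y, X(t,y)\bigr) .
```

Since $X(0,y) = 0$, integrating gives exactly equation (10), and the steps reverse; this is the equivalence asserted in Lemma 1.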

Page 338: Seminar on Stochastic Processes, 1990


Lemma 1 Suppose f and V satisfy Hypothesis 1. Let X and g be defined by (8) and (9). Then g is β × 𝒢 × ℋ-measurable and −g is monotone, demi-continuous, and uniformly bounded in t. Moreover u satisfies (5) if and only if X satisfies

X(t,y) = ∫_0^t g(s, y, X(s,y)) ds,  ∀t ∈ S, y ∈ G.  (10)

Proof: The verification of this is straightforward. Suppose that V and f satisfy Hypothesis 1. We claim g satisfies the above conditions.

• g is β × 𝒢 × ℋ-measurable. Indeed, if h ∈ H then ⟨f(t,y,·), h⟩ is continuous and V(t,y) + x e^{−M(y)t} is β × 𝒢 × ℋ-measurable, so ⟨f(t, y, V(t,y) + x e^{−M(y)t}), h⟩ is β × 𝒢 × ℋ-measurable. Since H is separable, f(t, y, V(t,y) + x e^{−M(y)t}) is also β × 𝒢 × ℋ-measurable, and since e^{M(y)t} and M(y)x are β × 𝒢 × ℋ-measurable, so is g.
• g is bounded, since sup_t ‖V_t(y)‖ < ∞ and ‖g(t,y,x)‖ ≤ φ̃(y, ‖x‖), where φ̃(y, ξ) = e^{M(y)T} φ(y, ξ + sup_t ‖V_t‖) + M(y) ξ.
• g is demi-continuous.
• −g is monotone.

Furthermore, one can check directly that if X is measurable, so is u. Since X is continuous in t and V is cadlag, u must be cadlag. It is easy to see that different solutions of (10) correspond to different solutions of (5). Q.E.D.

By Lemma 1, Theorem 1 is a direct consequence of the following.

Theorem 2 Let g = g(t,y,x) be a β × 𝒢 × ℋ-measurable function on S × G × H such that for each t ∈ S and y ∈ G, x → −g(t,y,x) is demi-continuous, monotone and bounded by φ. Then for each y ∈ G the equation (10) has a unique continuous solution X(·,y), and (t,y) → X(t,y) is β × 𝒢-measurable. Furthermore X satisfies (7) with M = 0 and V = 0.

Remark that the transformation (8), u → X, is bicontinuous; in particular, if X satisfies (6) and (7) for M = 0 and V = 0, then u satisfies (6) and (7).

Note that y serves only as a nuisance parameter in this theorem: it enters only in the measurability part of the conclusion. In fact, one could restate the theorem somewhat informally as: if g depends measurably on a parameter y in (10), so does the solution.

The proof of Theorem 2 in the case in which g is independent of y is a well-known theorem of Browder (1964) and Kato (1964); one proof, covering both uniqueness and existence, can be found in Vainberg (1973), Th. (26.1), page 322. In this section we will prove the uniqueness of the solution and inequalities (6) and (7). In subsection 2.3 we will prove the measurability and outline the proof of the existence of the solution of equation (10).

Since y is a nuisance parameter, which serves mainly to clutter up our formulas, we will only indicate it explicitly in our notation when we need to do so.

Let us first prove a lemma which we will need for proof of the uniqueness and for the proof of inequalities (6) and (7).

Lemma 2 If a(·) is an H-valued integrable function on S and if X(t) = X_0 + ∫_0^t a(s) ds, then

‖X(t)‖² = ‖X_0‖² + 2 ∫_0^t ⟨X(s), a(s)⟩ ds.

Page 339: Seminar on Stochastic Processes, 1990


Proof: Since a(s) is integrable, X(t) is absolutely continuous and X′(t) = a(t) a.e. on S. Then ‖X(t)‖² is also absolutely continuous and

(d/dt) ‖X(t)‖² = 2 ⟨dX(t)/dt, X(t)⟩ = 2 ⟨a(t), X(t)⟩ a.e.,

so that, upon integration,

‖X(t)‖² − ‖X_0‖² = 2 ∫_0^t ⟨X(s), a(s)⟩ ds.

Q.E.D. Now we can prove inequalities (6) and (7) in the case M = 0 and V = 0.

Lemma 3 If M = V = 0, the solution of the integral equation (10) satisfies the inequality

‖X(t)‖ ≤ 2 ∫_0^t ‖g(s,0)‖ ds ≤ 2 T φ(0).

Proof: Since X(t) is a solution of the integral equation (10), by Lemma 2 we have

‖X(t)‖² = 2 ∫_0^t ⟨g(s, X(s)), X(s)⟩ ds
= 2 ∫_0^t ⟨g(s, X(s)) − g(s, 0), X(s)⟩ ds + 2 ∫_0^t ⟨g(s, 0), X(s)⟩ ds
≤ 2 ∫_0^t ⟨g(s, X(s)) − g(s, 0), X(s)⟩ ds + 2 ∫_0^t ‖g(s, 0)‖ ‖X(s)‖ ds.

Since −g is monotone, the first integral is nonpositive. We can therefore drop it and bound the remaining term:

‖X(t)‖² ≤ 2 ∫_0^t ‖g(s,0)‖ ‖X(s)‖ ds ≤ 2 sup_{0≤s≤t} ‖X(s)‖ ∫_0^t ‖g(s,0)‖ ds.

Thus sup_{0≤s≤t} ‖X(s)‖ ≤ 2 ∫_0^t ‖g(s,0)‖ ds. Since sup_{0≤s≤t} ‖g(s,x)‖ ≤ φ(‖x‖), the proof is complete. Q.E.D.

Proof of Uniqueness Let X and Y be two solutions of (10). Then we have

X(t,y) − Y(t,y) = ∫_0^t [g(s, y, X(s,y)) − g(s, y, Y(s,y))] ds.

Page 340: Seminar on Stochastic Processes, 1990


By Lemma 2 one has

‖X(t,y) − Y(t,y)‖² = 2 ∫_0^t ⟨g(s, y, X(s,y)) − g(s, y, Y(s,y)), X(s,y) − Y(s,y)⟩ ds.

Since −g is monotone, the right-hand side of the above equation is nonpositive, so X(t,y) = Y(t,y). Q.E.D.

2.2 Measurability of the Solution in Finite-dimensional Space

Consider the integral equation

X(t,y) = ∫_0^t h(s, y, X(s,y)) ds,  (11)

where h(·,·) satisfies the following hypothesis.

Hypothesis 2 (a) h satisfies Hypothesis 1 (a), (b). (b) For each t ∈ S and y ∈ G, −h(t,y,·) is continuous and monotone.

Since h is measurable and uniformly bounded in t, h(·, y, x) is integrable. As h(t,y,·) is continuous, the integral equation (11) is a classical deterministic integral equation in R^n, and the existence of its solution is well known. In subsection 2.1 we proved that (11) has a unique bounded solution, so we only need to prove the measurability of the solution.

The existence, uniqueness and measurability of the solution of (11) are known (see Krylov and Rozovskii (1979) for a proof in a more general situation). Since the measurability result is easy to prove in our setting, we include a proof in the following theorem for the sake of completeness.

Theorem 3 The solution of the integral equation (11) is measurable.

Proof: For the proof of measurability we are going to construct a sequence of solutions of other integral equations which converges uniformly to a solution of (11).

First: Let ψ(·) be a positive C^∞-function on H ≅ R^n with support in {‖x‖ ≤ Tφ(0) + 2} which is identically equal to one on {‖x‖ ≤ Tφ(0) + 1}. Now define h̄(t,x) = h(t,x) ψ(x).

−h̄ is semi-monotone. This can be seen because if ‖X‖ > Tφ(0) + 2 and ‖Z‖ > Tφ(0) + 2, then h̄(t,X) = h̄(t,Z) = 0 and so

⟨h̄(t,X) − h̄(t,Z), X − Z⟩ = 0.

Let ‖Z‖ ≤ Tφ(0) + 2. Then

⟨h̄(t,X) − h̄(t,Z), X − Z⟩ = ⟨h(t,X)ψ(X) − h(t,Z)ψ(X), X − Z⟩ + ⟨h(t,Z)ψ(X) − h(t,Z)ψ(Z), X − Z⟩.

Page 341: Seminar on Stochastic Processes, 1990


By the Schwarz inequality this is

≤ ψ(X) ⟨h(t,X) − h(t,Z), X − Z⟩ + ‖h(t,Z)‖ |ψ(X) − ψ(Z)| ‖X − Z‖.

Since −h is monotone and ψ is positive, the first term on the right-hand side is nonpositive. Now, as Z is bounded and ψ is C^∞ with compact support, the second term is ≤ M(y) ‖X − Z‖² for some M(y).

Since by Lemma 3 the solution of (11) is bounded by Tφ(0), it never leaves the set {‖x‖ ≤ Tφ(0) + 1}, so the unique solution of (11) is also the unique solution of the equation X(t) = ∫_0^t h̄(s, X(s)) ds. Thus without loss of generality we can assume h(t,·) has compact support.

Second: Define k(x) to be equal to C exp{(‖x‖² − 1)^{−1}} on {‖x‖ < 1} and equal to zero on {‖x‖ ≥ 1}. Then k(x) is C^∞ with support in the unit ball {‖x‖ ≤ 1}. Choose C such that ∫_{R^n} k(x) dx = 1. Introduce, for ε > 0,

J_ε u(x) = ε^{−n} ∫_{R^n} k((x − z)/ε) u(z) dz.

This is a C^∞-function called the mollification of u. Now define h_ε(t,x) = J_ε h(t,·)(x). Since for any ε the first derivatives with respect to x of J_ε u(x), and also J_ε u(x) itself, are bounded in terms of the maximum of ‖u(x)‖, both h_ε and D_x h_ε are bounded in terms of the maximum of ‖h(t,x)‖. Thus there exist K_1(y) and K_2(y) independent of ε such that
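As a concrete numerical illustration (ours, not the paper's) of the convergence J_ε u → u for continuous u, here is the one-dimensional case with the bump kernel above; the test function u(x) = |x| and all grid sizes are arbitrary choices:

```python
import math

def k(x):
    # Bump kernel C*exp(1/(x^2 - 1)) on (-1, 1), zero outside; C is fixed below.
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

N = 20000                       # quadrature nodes (midpoint rule)
h = 2.0 / N
C = 1.0 / (sum(k(-1.0 + (i + 0.5) * h) for i in range(N)) * h)  # so C*k integrates to 1

def mollify(u, x, eps):
    # J_eps u(x) = eps^{-1} * integral of C*k((x - z)/eps) * u(z) dz   (n = 1)
    dz = 2.0 * eps / N
    zs = (x - eps + (i + 0.5) * dz for i in range(N))
    return sum(C * k((x - z) / eps) * u(z) for z in zs) * dz / eps

u = abs  # continuous but not differentiable at 0
for eps in (0.5, 0.1, 0.02):
    print(eps, mollify(u, 0.0, eps), mollify(u, 0.5, eps))
```

At the smooth point x = 0.5 the mollification is already exact for small ε (u is linear and the kernel symmetric there), while at the kink x = 0 it converges to u(0) = 0 at rate O(ε).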

By the mean value theorem we have

(12)

Now consider the following integral equation:

X.(t) = l h.(s,X.(s))ds. (13)

Equation (13) can be solved by the Picard method. Since y --+ h(t,y,x) is mea­surable in (t,y), y --+ h.(t,y,x) is measurable in (t,y). Then the solution X. of equation (13) is measurable and so is lim.--+oX •. To complete the proof of Theorem 3 we need to prove the folIowing lemma.
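To make the Picard step concrete (an illustration of ours, not from the paper), here is iteration for an equation of the form (13) with an arbitrary Lipschitz right-hand side h(t, x) = 1 − x, for which X(t) = ∫_0^t h(s, X(s)) ds has the exact solution X(t) = 1 − e^{−t}:

```python
import math

def picard_solve(h, T=1.0, n_steps=1000, n_iter=25):
    """Picard iteration for X(t) = int_0^t h(s, X(s)) ds on a uniform grid;
    the integral is discretized by a left-endpoint Riemann sum."""
    dt = T / n_steps
    ts = [i * dt for i in range(n_steps + 1)]
    X = [0.0] * (n_steps + 1)            # initial guess X^(0) = 0
    for _ in range(n_iter):
        Xnew = [0.0] * (n_steps + 1)
        acc = 0.0
        for i in range(n_steps):
            acc += h(ts[i], X[i]) * dt   # accumulate int_0^{t_{i+1}} h(s, X(s)) ds
            Xnew[i + 1] = acc
        X = Xnew
    return ts, X

ts, X = picard_solve(lambda t, x: 1.0 - x)
print(X[-1], 1.0 - math.exp(-1.0))       # discrete value at t = 1 vs exact value
```

For a K-Lipschitz h the iterates contract at rate (KT)^m/m!, so 25 iterations are far more than enough here; the remaining gap to 1 − e^{−1} is the O(dt) quadrature error.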

Lemma 4 As ε → 0, the solution X_ε of (13) converges uniformly to a solution X of (11).

Proof: From (11) and (13) we have

X_ε(t) − X(t) = ∫_0^t (h_ε(s, X_ε(s)) − h(s, X(s))) ds.

Then

‖X_ε(t) − X(t)‖ ≤ ∫_0^t ‖h_ε(s, X_ε(s)) − h_ε(s, X(s))‖ ds + ∫_0^t ‖h_ε(s, X(s)) − h(s, X(s))‖ ds.

Page 342: Seminar on Stochastic Processes, 1990


By (12) we see this is

≤ K_1(y) ∫_0^t ‖X_ε(s) − X(s)‖ ds + ∫_0^t ‖h_ε(s, X(s)) − h(s, X(s))‖ ds.

By Gronwall's inequality we have

sup_{0≤t≤T} ‖X_ε(t) − X(t)‖ ≤ exp(T K_1) ∫_0^T ‖h_ε(s, X(s)) − h(s, X(s))‖ ds.

But h_ε(s, X(s)) → h(s, X(s)) pointwise and ‖h_ε(t, X(t))‖ ≤ K_2, so by the dominated convergence theorem

sup_{0≤t≤T} ‖X_ε(t) − X(t)‖ → 0 as ε → 0.

Q.E.D.
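The form of Gronwall's inequality used in the last step (standard; stated here in our words): if $\phi \ge 0$ satisfies $\phi(t) \le a + K \int_0^t \phi(s)\,ds$ on $[0,T]$ with $a, K \ge 0$, then

```latex
\phi(t) \le a\, e^{K t} \le a\, e^{K T}, \qquad 0 \le t \le T .
```

It is applied above with $\phi(t) = \|X_\varepsilon(t) - X(t)\|$, $K = K_1(y)$, and $a = \int_0^T \|h_\varepsilon(s, X(s)) - h(s, X(s))\|\,ds$.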

2.3 The Proof of the Measurability in Theorem 2

Now we shall briefly outline the proof of existence from Vainberg (1973), Th. (26.1), page 322, and give a proof of the measurability of the solution of equation (10).

Vainberg constructs a solution of this equation by first solving the finite-dimensional projections of the equation and then taking the limit. Since the solution of the infinite-dimensional case is constructed as a limit of finite-dimensional solutions, one merely needs to trace the proof and check that measurability holds at each stage. There is one extra hypothesis in [Vainberg, Th. (26.1)], namely that t → g(t,x) is demi-continuous, whereas in our case we merely assume g is measurable and uniformly bounded in t [Hypothesis 1 (a), (b)]. However, the demi-continuity of g is not used in showing the existence of the solution of the integral equation (10); it is only used to show the inequality (6) in the finite-dimensional case. We have reproved (6) in Lemma 3.

Now let (H_n) be an increasing sequence of finite-dimensional subspaces of H such that ∪_n H_n is dense in H, and let J_n be the orthogonal projection of H onto H_n, so that J_n → I strongly. Consider the integral equation

X_n(t,y) = ∫_0^t J_n g(s, y, X_n(s,y)) ds.  (14)

First let us show that J_n g satisfies Hypothesis 2.

• J_n g(t,y,·) is continuous. Since g(t,y,·) is demi-continuous, g(t, x_k) → g(t, x) weakly when ‖x_k − x‖ → 0. But J_n g takes its values in the finite-dimensional space H_n, where weak and strong convergence coincide; therefore J_n g(t, x_k) → J_n g(t, x) strongly, and J_n g(t,y,·) is continuous.

• −J_n g(t,y,·) is monotone from H_n to H_n. Let X, Z ∈ H_n. Then

⟨J_n g(t,X) − J_n g(t,Z), X − Z⟩ = ⟨g(t,X) − g(t,Z), J_n X − J_n Z⟩  (15)

Page 343: Seminar on Stochastic Processes, 1990


since J_n = J_n^*. For X, Z ∈ H_n, J_n(X − Z) = X − Z, so the left-hand side of (15) is nonpositive; hence −J_n g(t,y,·) is monotone.

• J_n g satisfies Hypothesis 1(a).
• J_n g is uniformly bounded by φ.

Now by Theorem 3, equation (14) has a unique continuous measurable solution X_n, which satisfies

‖X_n(·, y)‖_∞ ≤ 2 T φ(y, 0).  (16)

Now we are going to prove

Lemma 5 For each y, X_n(·,y) converges weakly in L²(S,H) to a solution X(·,y) of (10). Furthermore X(·,y) is continuous for each y.

Proof: Let (X_{n_k}) be an arbitrary subsequence of (X_n). By (16) and Hypothesis 1 (b) we have

so g(·, X_{n_k}(·)) is a bounded sequence in L²(S,H). Then there is a further subsequence (n_{k_l}) such that g(·, X_{n_{k_l}}(·)) → Z(·) weakly in L²(S,H) as l → ∞. Each X_n satisfies (14), and it can be proved that X_{n_{k_l}}(·) → ∫_0^· Z(s) ds weakly [see Vainberg]. We define X to be the weak limit of X_{n_{k_l}} in L²(S,H). Vainberg proved that X(·,y) is continuous and is a solution of (10) [see Vainberg, pp. 325-326].

Since the solution X(·,y) is unique, every subsequence of (X_n) has in turn a subsequence which converges weakly to X; it follows that the whole sequence X_n converges weakly to X. Q.E.D.

To complete the proof of Theorem 2 we need to show the measurability of X(·,·). Fix t ∈ S and h ∈ H. Since by Theorem 3 X_n is measurable in (t,y), ∫_0^t ⟨X_n(s,y), h⟩ ds is measurable in (t,y). But ∫_0^t ⟨X_n(s,y), h⟩ ds converges to ∫_0^t ⟨X(s,y), h⟩ ds pointwise, so ∫_0^t ⟨X(s,y), h⟩ ds is measurable in (t,y). As the integrand ⟨X(s,y), h⟩ is continuous in s,

(d/dt) ∫_0^t ⟨X(s,y), h⟩ ds = ⟨X(t,y), h⟩,

and since the integral is measurable in (t,y), the function ⟨X(t,y), h⟩ is measurable. By the separability of H, X(t,y) is measurable in (t,y). Q.E.D.

3 The Semilinear Evolution Equation

3.1 The Main theorem

Suppose $A = \{A(t),\ t \in S\}$ is a family of operators satisfying the following hypothesis.

Hypothesis 3 (a) There exists $\lambda \in R$ such that for all $s > 0$, $(A(s) - \lambda I)$ is the generator of a contraction semigroup;

(b) the operator-valued function $(-A(t) + \mu I)^{-1}$ is strongly continuously differentiable with respect to t for $t \ge 0$ and $\mu > \lambda$;


(c) there exists a fundamental solution $U(t,s)$ of the linear equation $\dot u(t) = A(t)u(t)$. Moreover, if $u_0 \in H$ and $f \in C(S,H)$, then the equation

$$\begin{cases} \dot u(t) = A(t)u(t) + f(t) \\ u(0) = u_0 \end{cases} \qquad (17)$$

has a mild solution u given by

$$u(t) = U(t,0)u_0 + \int_0^t U(t,s)f(s)\,ds. \qquad (18)$$

If $u_0 \in D(A(0))$ and $f \in C^1(S,H)$, then (18) is also a strong solution of (17).

Remark 1 Note that Hypothesis 3 holds, for example, if $\{A(t),\ t \in R^+\}$ is a family of closed operators in H with domain D independent of t, satisfying the following conditions:

(i) considered as a mapping of D (with graph norm) into H, A(t) is $C^1$ in t on $R^+$ in the strong operator topology;

(ii) if $A(t)^*$ is the adjoint of A(t), then $D(A(t)^*) \subset D$ for all t;

(iii) there exists $\lambda \in R$ such that

$$\langle A(t)x, x\rangle \le \lambda\|x\|^2, \qquad \forall x \in D(A(t)),\ \forall t \in S.$$

Proof: See Browder (1964).

In the following theorem, which is our main theorem, we will study the integral equation (3) in a more abstract setting, where $V \equiv V(t,y)$ and $f \equiv f(t,y,x)$ satisfy the hypotheses of Theorem 1.
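For concreteness, here is one family satisfying conditions (i)-(iii) of Remark 1 (our illustration, not an example given in the paper):

```latex
H = L^2(0,1), \qquad A(t) = (1+t)\,\frac{d^2}{dx^2}, \qquad
D = H^2(0,1)\cap H^1_0(0,1)\ \text{(Dirichlet boundary conditions)}.
```

The domain D is independent of t; $A(t)x = (1+t)A(0)x$ is $C^1$ in t from D (with graph norm) into H; each A(t) is self-adjoint, so $D(A(t)^*) = D$; and integration by parts gives $\langle A(t)x, x\rangle = -(1+t)\|x'\|^2 \le 0$, so (iii) holds with $\lambda = 0$.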

Theorem 4 Let $X_0(\cdot)$ be $\mathcal G$-measurable. Suppose that f and V satisfy Hypothesis 1 and suppose that A(t) and U(t,s) satisfy Hypothesis 3. Then for each $y \in G$, (3) has a unique cadlag solution $X(\cdot,y)$, and $X(\cdot,\cdot)$ is $\beta \times \mathcal G$-measurable. Furthermore

$$\|X(t)\| \le \|X_0\| + \|V(t)\| + \int_0^t e^{(\lambda+M)(t-s)}\|f(s,\,U(s,0)X_0 + V(s))\|\,ds, \qquad (19)$$

and

$$\|X\|_\infty \le \|X_0\| + \|V\|_\infty + C_T\,\varphi(\|X_0\| + \|V\|_\infty), \qquad (20)$$

where

$$C_T = \begin{cases} \dfrac{1}{M+\lambda}\,e^{(M+\lambda)T} & \text{if } M+\lambda \ne 0 \\ 1 & \text{otherwise.} \end{cases}$$
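To see where the constant $C_T$ comes from, note that the estimate (20) follows by bounding the integrand of (19) by the monotone function $\varphi$; the remaining factor is (a sketch, assuming $M + \lambda > 0$):

```latex
\int_0^t e^{(\lambda+M)(t-s)}\,ds
= \frac{e^{(\lambda+M)t}-1}{\lambda+M}
\le \frac{1}{M+\lambda}\,e^{(M+\lambda)T}, \qquad 0 \le t \le T,
```

which is the first branch in the definition of $C_T$.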

If $X_1$ and $X_2$ are solutions corresponding to different initial values $X_{01}$ and $X_{02}$, then

(21)

Proof: By using the transformations (8) and (9), we can assume by Lemma 1 that $X_0 = 0$, $M = 0$ and $V = 0$ in (3). We can also suppose $\lambda \equiv 0$ in Hypothesis 3(a) [see Zangeneh (1990), Lemma 3.1, page 30]. Thus we consider

$$X(t,y) = \int_0^t U(t,s)\,f(s,y,X(s,y))\,ds, \qquad t \in S,\ y \in G. \qquad (22)$$

Here y serves only as a nuisance parameter; it enters only in the measurability part of the conclusion. The proof of Theorem 4 in the case in which f is independent of y is a well-known theorem of Browder (1964) and Kato (1964).


The existence and uniqueness are therefore known. To establish the measurability and inequalities (19)-(21) we follow the proof of Vainberg (1973), Theorem 26.2, page 331. Let $A_n(t) := A(t)(I - n^{-1}A(t))^{-1}$, and consider the equation

(23)

$A_n$ is a bounded operator with $\|A_n(t)\|_L \le 2n$ which converges strongly to A(t). Vainberg shows that (23) has a unique solution $X_n$, and moreover that there is a subsequence $(X_{n_k})$ of $X_n$ which converges weakly in $L^2(S,H)$ to a limit X, which is a solution of (22); and for each y, $X(\cdot,y)$ is continuous.

But now by Lemma 5, $X_n$ converges weakly to X in $L^2(S,H)$. Moreover $f_n(x) := A_n x + f(x)$ satisfies the hypotheses of Theorem 2, so that $X_n(\cdot,\cdot)$ is $\beta \times \mathcal G$-measurable. It follows by the proof of Theorem 2 that $X(\cdot,\cdot)$ is $\beta \times \mathcal G$-measurable.

The proofs of the inequalities (19)-(21) in the case $M = 0$, $\lambda = 0$ and $V \equiv 0$ are in Vainberg (1973), and the extension to the general case of Theorem 4 follows immediately from transformations (8) and (9). Q.E.D.

As an application of Theorem 4 we can show the existence and uniqueness of the solution of (3) when $X_0$, f and V satisfy the following conditions.

Hypothesis 4 (a) $X_0 \in \mathcal F_0$;

(b) $f = f(t,\omega,x)$ and $V = V(t,\omega)$ are optional;

(c) there exists a set $G \subset \Omega$ such that $P(G) = 1$, and if $\omega \in G$, then f and V satisfy Hypothesis 1.

Corollary 1 Suppose that $X_0$, f and V satisfy Hypothesis 4. Suppose A and U satisfy Hypothesis 3. Then (3) has a unique adapted cadlag (continuous, if $V_t$ is continuous) solution.

Proof: The existence and uniqueness of a cadlag solution is immediate from Theorem 4. We need only prove that it is adapted. To see this, fix $s < t$, take $S = [0,s]$, and take $\mathcal G = \mathcal F_t|_G$ in Theorem 4, where G is the set of Hypothesis 4. Now $\Omega - G$ has measure 0, so it is in $\mathcal F_0 \subset \mathcal F_t$.

Theorem 4 implies $X(s,\cdot)|_G$ is $\mathcal G$-measurable; as all subsets of $\Omega - G$ are in $\mathcal F_t$ by completeness, $X(s,\cdot)$ itself is $\mathcal F_t$-measurable. By right continuity of the filtration,

$$X(s,\cdot) \in \mathcal F_s = \bigcap_{t>s} \mathcal F_t.$$

Thus $\{X(t,\cdot),\ t \in S\}$ is adapted. Note that any discontinuity of the solution in general comes from a discontinuity of V. Q.E.D.

3.2 Some Examples

Example (1) Let A be a closed, self-adjoint, negative definite unbounded operator such that $A^{-1}$ is nuclear. Let $U(t) \equiv e^{tA}$ be the semigroup generated by A. Since A is self-adjoint, U satisfies Hypothesis 3, so it satisfies all the conditions we impose on U.

Let W(t) be a cylindrical Brownian motion on H. Consider the initial-value problem:


$$\begin{cases} dX_t = AX_t\,dt + f_t(X_t)\,dt + dW(t), \\ X(0) = X_0, \end{cases} \qquad (24)$$

where $X_0$ and f satisfy Hypothesis 4. Let X be a mild solution of (24), i.e. a solution of the integral equation

$$X_t = U(t)X(0) + \int_0^t U(t-s)f_s(X_s)\,ds + \int_0^t U(t-s)\,dW(s). \qquad (25)$$

Note that since $A^{-1}$ is nuclear, $\int_0^t U(t-s)\,dW(s)$ is H-valued [see Dawson (1972)].
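The role of nuclearity can be seen from a standard Itô-isometry computation (a sketch under the assumption that $-A$ has eigenvalues $\lambda_k > 0$ with orthonormal eigenvectors $e_k$, so nuclearity of $A^{-1}$ means $\sum_k \lambda_k^{-1} < \infty$):

```latex
E\Big\|\int_0^t U(t-s)\,dW(s)\Big\|^2
= \int_0^t \operatorname{tr}\!\big(e^{2sA}\big)\,ds
= \sum_k \frac{1-e^{-2\lambda_k t}}{2\lambda_k}
\le \tfrac12 \operatorname{tr}\big((-A)^{-1}\big) < \infty,
```

so the stochastic convolution is a genuine H-valued random variable for each t.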

The existence and uniqueness of the solution of (25) have been studied in Marcus (1978). He assumed that f is independent of $\omega \in \Omega$ and $t \in S$ and that there are $M > 0$ and $p \ge 1$ for which

$$\langle f(u) - f(v),\, u - v\rangle \le -M\|u - v\|^p$$

and

He proved that this integral equation has a unique solution in $L^p(\Omega, L^p(S,H))$. As a consequence of Corollary 1 we can extend Marcus' result to more general f, and we can show the existence of a strong solution of (25) which is continuous instead of merely being in $L^p(\Omega, L^p(S,H))$.

The Ornstein-Uhlenbeck process $V_t = \int_0^t U(t-s)\,dW(s)$ has been well studied, e.g. in [Iscoe et al. (1990)], where they show that $V_t$ has a continuous version. We can rewrite (25) as

$$X_t = U(t)X(0) + \int_0^t U(t-s)f_s(X_s)\,ds + V_t,$$

where $V_t$ is an adapted continuous process. Then by Corollary 1 equation (25) has a unique continuous adapted solution.

Example (2) Let D be a bounded domain with a smooth boundary in $R^d$. Let $-A$ be a uniformly strongly elliptic second-order differential operator with smooth coefficients on D. Let B be the operator $B = d(x)D_N + e(x)$, where $D_N$ is the normal derivative on $\partial D$, and d and e are in $C^\infty(\partial D)$. Let A (with the boundary condition $Bf \equiv 0$) be self-adjoint.

Consider the initial-boundary-value problem

$$\begin{cases} u_t + Au = f_t(u) + W & \text{on } D \times [0,\infty) \\ Bu = 0 & \text{on } \partial D \times [0,\infty) \\ u(0,x) = 0 & \text{on } D, \end{cases} \qquad (26)$$

where $W = W(t,x)$ is a white noise in space-time [for the definition and properties of white noise see J.B. Walsh (1986)], and $f_t$ is a non-linear function that will be defined below. Let $p > d/2$. W can be considered as a Brownian motion $W_t$ on the Sobolev space $H_{-p}$ [see Walsh (1986), Chapter 4, page 4.11]. There is a complete orthonormal basis $\{e_k\}$ for $H_p$.

The operator A (plus boundary conditions) has eigenvalues $\{\lambda_k\}$ with eigenvectors $\{e_k\}$, i.e. $Ae_k = \lambda_k e_k$, $\forall k$. The eigenvalues satisfy $\sum_j (1+\lambda_j)^{-p} < \infty$ if $p > d/2$ [see Walsh (1986), Chapter 4, page 4.9]. Then $[A^{-1}]^p$ is nuclear and $-A$ generates a contraction semigroup $U(t) \equiv e^{-tA}$. This semigroup satisfies Hypothesis 3.

Now consider the initial-boundary-value problem (26) as a semilinear stochastic evolution equation

(27)

with initial condition $u(0) = 0$, where $f: S \times \Omega \times H_{-p} \to H_{-p}$ satisfies Hypotheses 4(b) and 4(c) relative to the separable Hilbert space $H = H_{-p}$. Now we can define the mild solution of (27) (which is also a mild solution of (26)) as the solution of

$$u_t = \int_0^t U(t-s)f_s(u_s)\,ds + \int_0^t U(t-s)\,dW_s. \qquad (28)$$

Since $W_t$ is a continuous local martingale on the separable Hilbert space $H_{-p}$, the integral $\int_0^t U(t-s)\,dW_s$ has an adapted continuous version [see Kotelenez (1982)]. If we define

$$V_t := \int_0^t U(t-s)\,dW_s,$$

then by Corollary 1, equation (28) has a unique continuous solution with values in $H_{-p}$.
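Equations (25) and (28) act coordinatewise on the eigenbasis: writing $u_t = \sum_k u_k(t)e_k$, each mode solves a scalar variation-of-constants equation. The following numerical sketch (ours, not from the paper) checks the scalar mild-solution formula against its closed form for one mode with constant forcing:

```python
import math

def mild_solution(lam, f, t, n=100000):
    """Riemann-sum (midpoint rule) approximation of the scalar mild solution
    u(t) = int_0^t exp(-lam*(t-s)) * f ds for one spectral mode with
    eigenvalue lam and constant forcing f."""
    ds = t / n
    return sum(math.exp(-lam * (t - (i + 0.5) * ds)) * f * ds for i in range(n))

lam, f, t = 4.0, 2.0, 1.5
numeric = mild_solution(lam, f, t)
exact = (f / lam) * (1.0 - math.exp(-lam * t))  # closed-form solution
assert abs(numeric - exact) < 1e-6
```

With white noise in place of the constant forcing, each mode becomes a scalar Ornstein-Uhlenbeck process, which is where the continuity results quoted above enter.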

3.3 A Second Order Equation

Let $Z_t$ be a cadlag semimartingale with values in H. Let A satisfy the following:

Hypothesis 5 A is a closed, strictly positive definite, self-adjoint operator on H with dense domain D(A), so that there is a $K > 0$ such that $\langle Ax, x\rangle \ge K\|x\|^2$, $\forall x \in D(A)$.

Consider the Cauchy problem, written formally as

$$\begin{cases} \dfrac{\partial^2 x}{\partial t^2} + Ax = \dot Z \\[4pt] x(0) = x_0, \\[2pt] \dfrac{\partial x}{\partial t}(0) = y_0. \end{cases} \qquad (29)$$

Following Curtain and Pritchard (1978), we may write (29) formally as a first-order system

$$\begin{cases} dX(t) = AX(t)\,dt + dZ_t \\ X(0) = X_0, \end{cases} \qquad (30)$$

where $X(t) = \begin{pmatrix} x(t) \\ \dot x(t) \end{pmatrix}$, $Z_t = \begin{pmatrix} 0 \\ Z_t \end{pmatrix}$, $X_0 = \begin{pmatrix} x_0 \\ y_0 \end{pmatrix}$, and $A = \begin{pmatrix} 0 & I \\ -A & 0 \end{pmatrix}$.

Introduce a Hilbert space $\mathcal K = D(A^{1/2}) \times H$ with inner product

$$\langle X, \bar X\rangle_{\mathcal K} = \langle A^{1/2}x, A^{1/2}\bar x\rangle + \langle y, \bar y\rangle,$$

and norm

$$\|X\|_{\mathcal K}^2 = \|A^{1/2}x\|^2 + \|y\|^2,$$

where $X = \begin{pmatrix} x \\ y \end{pmatrix}$, $\bar X = \begin{pmatrix} \bar x \\ \bar y \end{pmatrix}$ [see Vilenkin (1972), Chapter 4, page 93].

Now for $X \in D(A) = D(A) \times D(A^{1/2})$, we have

$$\langle X, AX\rangle_{\mathcal K} = \langle Ax, y\rangle + \langle y, -Ax\rangle = 0.$$


Thus

$$\langle (A - \lambda I)X,\, X\rangle_{\mathcal K} = \langle AX, X\rangle_{\mathcal K} - \lambda\|X\|_{\mathcal K}^2 = -\lambda\|X\|_{\mathcal K}^2.$$

Since

$$|\langle (A - \lambda I)X,\, X\rangle_{\mathcal K}| \le \|(A - \lambda I)X\|_{\mathcal K}\,\|X\|_{\mathcal K},$$

we have $\|(A - \lambda I)X\|_{\mathcal K} \ge \lambda\|X\|_{\mathcal K}$.

The adjoint $A^*$ of A is easily shown to be $-A$. By the same logic,

$$\|(A^* - \lambda I)X\|_{\mathcal K} \ge \lambda\|X\|_{\mathcal K}.$$

Then A generates a contraction semigroup $U(t) \equiv e^{tA}$ on $\mathcal K$ [see Curtain and Pritchard (1978), Theorem 2.14, page 22]. Moreover A and U(t) satisfy Hypothesis 3 with $\lambda = 0$.

Now consider the mild solution of (30):

$$V_t = U(t)X_0 + \int_0^t U(t-s)\,dZ_s. \qquad (31)$$

Since $Z_t$ is a cadlag semimartingale on $\mathcal K$, the stochastic convolution integral $\int_0^t U(t-s)\,dZ_s$ has a cadlag version [see Kotelenez (1982)], so $V_t$ is a cadlag adapted process on $\mathcal K$.
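The skew-adjointness computation $\langle X, AX\rangle_{\mathcal K} = 0$, and the norm preservation it implies, can be checked directly in the simplest case $H = R$, $Ax = ax$ with $a > 0$, where the first-order system is the harmonic oscillator and $e^{tA}$ is known in closed form (a numerical sketch of ours, not from the paper):

```python
import math

def semigroup(a, t, x0, y0):
    """Closed-form e^{tA} for A = [[0, 1], [-a, 0]] with a > 0:
    the system form of x'' + a*x = 0."""
    w = math.sqrt(a)
    x = math.cos(w * t) * x0 + math.sin(w * t) / w * y0
    y = -w * math.sin(w * t) * x0 + math.cos(w * t) * y0
    return x, y

def k_norm_sq(a, x, y):
    """Energy norm ||X||_K^2 = ||A^{1/2} x||^2 + ||y||^2 = a*x^2 + y^2."""
    return a * x * x + y * y

a, x0, y0 = 3.0, 1.0, -2.0
n0 = k_norm_sq(a, x0, y0)
for t in (0.1, 1.0, 7.5):
    x, y = semigroup(a, t, x0, y0)
    # the group is an isometry in the K-norm, hence a contraction (lambda = 0)
    assert abs(k_norm_sq(a, x, y) - n0) < 1e-9
```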

Now let us consider the semilinear Cauchy problem, written formally as

$$\begin{cases} \dfrac{\partial^2 x(t)}{\partial t^2} + Ax(t) = f\Big(x(t), \dfrac{\partial x}{\partial t}\Big) + \dot Z_t \\[4pt] x(0) = x_0, \\[2pt] \dfrac{\partial x}{\partial t}\Big|_{t=0} = y_0, \end{cases} \qquad (32)$$

where $f: D(A^{1/2}) \times H \to H$ satisfies the following conditions:

Hypothesis 6 (a) $-f(x,\cdot): H \to H$ is semi-monotone, i.e. there exists $M > 0$ such that for all $x \in D(A^{1/2})$ and all $y_1, y_2 \in H$

$$\langle f(x,y_2) - f(x,y_1),\, y_2 - y_1\rangle \le M\|y_2 - y_1\|^2;$$

(b) for all $x \in D(A^{1/2})$, $f(x,\cdot)$ is demi-continuous and there is a continuous increasing function $\varphi: R^+ \to R^+$ such that $\|f(0,y)\| \le \varphi(\|y\|)$;

(c) $f(\cdot,y): D(A^{1/2}) \to H$ is uniformly Lipschitz, i.e. there exists $M > 0$ such that $\forall y \in H$

$$\|f(x_2,y) - f(x_1,y)\| \le M\|A^{1/2}(x_2 - x_1)\|.$$

[The completeness of $D(A^{1/2})$ under the norm $\|A^{1/2}x\|$ follows from the strict positivity of $A^{1/2}$.]

Note that any uniformly Lipschitz function $f: D(A^{1/2}) \times H \to H$ satisfies Hypothesis 6.

Proposition 1 If f satisfies Hypothesis 6, then the Cauchy problem (32) has a unique mild adapted cadlag solution x(t) with values in $D(A^{1/2})$. Moreover $\frac{\partial x}{\partial t}$ is an H-valued cadlag process. If $Z_t$ is continuous, $(x, \frac{\partial x}{\partial t})$ is continuous in $\mathcal K$.


Proof: Define a mapping F from $\mathcal K$ to $\mathcal K$ by $F(x,y) = \begin{pmatrix} 0 \\ f(x,y) \end{pmatrix}$. We are going to show that F satisfies the hypotheses of Corollary 1.

• F is semi-monotone.

Let $X_1 = \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}$ and $X_2 = \begin{pmatrix} x_2 \\ y_2 \end{pmatrix}$. Then

$$\begin{aligned} \langle F(X_2) - F(X_1),\, X_2 - X_1\rangle_{\mathcal K} &= \langle f(x_2,y_2) - f(x_1,y_1),\, y_2 - y_1\rangle \\ &= \langle f(x_2,y_2) - f(x_2,y_1),\, y_2 - y_1\rangle + \langle f(x_2,y_1) - f(x_1,y_1),\, y_2 - y_1\rangle. \end{aligned}$$

By Hypothesis 6(a) and the Schwarz inequality this is

$$\le M\|y_2 - y_1\|^2 + \|f(x_2,y_1) - f(x_1,y_1)\|\,\|y_2 - y_1\|.$$

By Hypothesis 6(c) this is

$$\begin{aligned} &\le M\|y_2 - y_1\|^2 + M\|A^{1/2}(x_2 - x_1)\|\,\|y_2 - y_1\| \\ &\le M\|y_2 - y_1\|^2 + \tfrac{M}{2}\|A^{1/2}(x_2 - x_1)\|^2 + \tfrac{M}{2}\|y_2 - y_1\|^2 \\ &\le \tfrac{3M}{2}\big(\|A^{1/2}(x_2 - x_1)\|^2 + \|y_2 - y_1\|^2\big) = \tfrac{3M}{2}\|X_2 - X_1\|_{\mathcal K}^2. \end{aligned}$$

Thus $-F: \mathcal K \to \mathcal K$ is semi-monotone.

• F is demi-continuous in the pair (x,y) because it is demi-continuous in y and uniformly continuous in x.

• F is bounded, since

$$\|F(X)\|_{\mathcal K} = \|f(x,y)\| \le \|f(x,y) - f(0,y)\| + \|f(0,y)\|;$$

by Hypotheses 6(b) and 6(c) this is

$$\le M\|A^{1/2}x\| + \varphi(\|y\|),$$

and since $\|A^{1/2}x\| \le \|X\|_{\mathcal K}$ and $\|y\| \le \|X\|_{\mathcal K}$,

$$\|F(X)\|_{\mathcal K} \le M\|X\|_{\mathcal K} + \varphi(\|X\|_{\mathcal K}).$$

Thus F is bounded by the function $\psi(r) = Mr + \varphi(r)$. Then F satisfies the hypotheses of Corollary 1 on $\mathcal K$.
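The semi-monotonicity bound $\frac{3M}{2}\|X_2 - X_1\|^2_{\mathcal K}$ can be sanity-checked numerically for a concrete nonlinearity. A sketch of ours (not from the paper) with $H = R$, $Ax = ax$, $a = 1$, and $f(x,y) = \sin x - y$, which satisfies Hypothesis 6 with $M = 1$:

```python
import math
import random

a, M = 1.0, 1.0          # H = R, A x = a*x, Lipschitz/monotonicity constant M

def f(x, y):
    # illustrative nonlinearity satisfying Hypothesis 6 with M = 1
    return math.sin(x) - y

def k_inner(u, v):
    # <(x1,y1),(x2,y2)>_K = a*x1*x2 + y1*y2, since A^{1/2} x = sqrt(a)*x
    return a * u[0] * v[0] + u[1] * v[1]

random.seed(0)
for _ in range(1000):
    x1, y1, x2, y2 = (random.uniform(-5, 5) for _ in range(4))
    # F(x,y) = (0, f(x,y)), so the K-inner product in the proof reduces to:
    lhs = (f(x2, y2) - f(x1, y1)) * (y2 - y1)
    d = (x2 - x1, y2 - y1)
    # check <F(X2)-F(X1), X2-X1>_K <= (3M/2)*||X2-X1||_K^2
    assert lhs <= 1.5 * M * k_inner(d, d) + 1e-12
```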

Now as in the linear case we may write (32) as a first-order initial value problem:

$$\begin{cases} dX_t = AX_t\,dt + F(X_t)\,dt + dZ_t, \\ X(0) = X_0. \end{cases}$$

Since A generates a contraction semigroup U(t), we can write the above initial value problem as

$$X(t) = U(t)X(0) + \int_0^t U(t-s)F(X(s))\,ds + \int_0^t U(t-s)\,dZ_s.$$

By (31) we can rewrite this as

$$X(t) = \int_0^t U(t-s)F(X_s)\,ds + V_t.$$

Since $V_t$ is cadlag and adapted, F, U and V satisfy all the conditions of Corollary 1. Then there is an adapted cadlag solution on $\mathcal K$. If $Z_t$ is continuous, $V_t$ is continuous too and $X_t$ is a continuous solution of (32) on $\mathcal K$. Q.E.D.


Remark 2 We assume $f: D(A^{1/2}) \times H \to H$. We could let f depend on $\omega \in \Omega$ and $t \in S$ as well; this would not involve any essential modification of the proof.

Example (3): Let D, A, B, and W be as in Example (2). Let $p > d/2$ and consider a mixed problem of the form:

$$\begin{cases} \dfrac{\partial^2 u}{\partial t^2} + Au = f\Big(u, \dfrac{\partial u}{\partial t}\Big) + W & \text{on } D \times [0,\infty) \\[4pt] Bu = 0 & \text{on } \partial D \times [0,\infty) \\ u(x,0) = 0 & \text{on } D \\[2pt] \dfrac{\partial u}{\partial t}(x,0) = 0 & \text{on } D, \end{cases} \qquad (33)$$

where $f: H_{-p+1} \times H_{-p} \to H_{-p}$.

As in Example (2) we consider W as a Brownian motion $W_t$ on the Sobolev space $H_{-p}$. Now A is a strictly positive definite self-adjoint operator on $H_{-p}$, and $[A^{-1}]^p$ is nuclear. Since all of the eigenvalues of A are strictly positive,

$$\langle Ax, x\rangle \ge K\|x\|^2 \qquad (34)$$

for all $x \in D(A) = H_{-p+2}$.

Then we can write (33) as the following Cauchy problem on the Sobolev space $H_{-p}$:

$$\begin{cases} du_t = \dot u_t\,dt \\ d\dot u_t = -Au_t\,dt + f(u_t, \dot u_t)\,dt + dW_t \\ u(0) = 0 \\ \dot u(0) = 0. \end{cases} \qquad (35)$$

Now A satisfies (34) and it is a positive definite self-adjoint operator on $H_{-p}$. Note that if $f \in H_n$, then $A^{1/2}f \in H_{n-1}$ [see Walsh (1986), Example 3, page 4.10]. Then $D(A^{1/2}) = H_{-p+1}$. Since $W_t$ is continuous, by Proposition 1, (35) has a continuous mild solution $u_t \in C(S, H_{-p+1})$ and, moreover, $u_t \in C^1(S, H_{-p})$; i.e., the mild solution of (33) is a continuous process in $H_{-p}$ for any $p > d/2 - 1$, and it is a differentiable process in $H_{-p}$ for any $p > d/2$.

Acknowledgement This work has been part of the author's Ph.D. dissertation. He wishes to express his gratitude to his supervisor, Professor J.B. Walsh, for his guidance and encouragement.

References

[1] Bensoussan, A. & Temam, R. (1972). Equations aux dérivées partielles stochastiques non linéaires (1). Israel Journal of Mathematics, 11, 95-129.

[2] Browder, F.E. (1964). Non-linear equations of evolution. Ann. of Math., 80, 485-523.

[3] Curtain, R.F. & Pritchard, A.J. (1978). Infinite dimensional linear system theory. Lecture Notes in Control and Information Sciences, 8, Springer-Verlag, Berlin-Heidelberg, New York.

[4] Dawson, D.A. (1972). Stochastic evolution equations. Mathematical Biosci., 15, 287-316.

[5] Iscoe, I., Marcus, M.B., McDonald, D., Talagrand, M. & Zinn, J. (1990). Continuity of l²-valued Ornstein-Uhlenbeck processes. The Annals of Probability, 18(1), 68-84.

[6] Kato, T. (1964). Nonlinear evolution equations in Banach spaces. Proc. Symp. Appl. Math., 17, 50-67.

[7] Kotelenez, P. (1982). A submartingale type inequality with applications to stochastic evolution equations. Stochastics, 8, 139-151.

[8] Krylov, N.V. & Rozovskii, B.L. (1981). Stochastic evolution equations. J. Soviet Math., 16, 1233-1277.

[9] Marcus, R. (1978). Parabolic Ito equations with monotone non-linearities. J. Functional Analysis, 29, 275-286.

[10] Vainberg, M.M. (1973). Variational method and method of monotone operators in the theory of nonlinear equations. John Wiley & Sons, Ltd.

[11] Vilenkin, N.Ya. (1972). Functional analysis. Wolters-Noordhoff Publishing, Groningen, The Netherlands.

[12] Walsh, J.B. (1986). An introduction to stochastic partial differential equations. Lecture Notes in Math., 1180, 266-439.

[13] Yor, M. (1974). Existence et unicité de diffusions à valeurs dans un espace de Hilbert. Ann. Inst. H. Poincaré, 10, 55-88.

[14] Zangeneh, B.Z. (1990). Semilinear Stochastic Evolution Equations. Ph.D. thesis, University of British Columbia.

BIJAN Z. ZANGENEH
Department of Mathematics
University of British Columbia
Vancouver, B.C. V6T 1Y4
CANADA