Fractional Ornstein–Uhlenbeck processes
Tampereen teknillinen yliopisto. Julkaisu 1338 Tampere University of Technology. Publication 1338 Terhi Kaarakka Fractional Ornstein-Uhlenbeck Processes Thesis for the degree of Doctor of Philosophy to be presented with due permission for public examination and criticism in SÀhkötalo Building, Auditorium S4, at Tampere University of Technology, on the 6th of November 2015, at 12 noon. Tampereen teknillinen yliopisto - Tampere University of Technology Tampere 2015
ISBN 978-952-15-3604-5 (printed) ISBN 978-952-15-3620-5 (PDF) ISSN 1459-2045
Abstract
In this monograph, we mainly study Gaussian processes, in particular three different types of fractional Ornstein–Uhlenbeck processes. Pioneers in this field who may be mentioned include, e.g., Kolmogorov (1903–1987) and Mandelbrot (1924–2010).
The Ornstein–Uhlenbeck diffusion can be constructed from Brownian motion via a Doob transformation and also as the solution of the Langevin stochastic differential equation. Both of these processes have the same finite dimensional distributions. However, the solution of the Langevin stochastic differential equation whose driving process is fractional Brownian motion and a Doob transformation of fractional Brownian motion do not have the same finite dimensional distributions. Indeed, we verify that the covariance of the fractional Ornstein–Uhlenbeck process of the first kind (by which we mean the solution of the Langevin stochastic differential equation in which the driving process is fractional Brownian motion) behaves at infinity like a power function, while the covariance of the fractional Ornstein–Uhlenbeck process (constructed by a Doob transformation of fractional Brownian motion) behaves at infinity like an exponential function. Moreover, we study the behaviour of the covariances of these fractional Ornstein–Uhlenbeck processes. We also calculate the spectral density function for the Doob transformation of fractional Brownian motion using a Bochner theorem.
We present the Doob transformation of fractional Brownian motion via the solution of the Langevin stochastic differential equation. One of the main aims of our research is to analyse its driving process. This driving process is Y^{(α)}_t = e^{−αt} Z_{τ_t}, where τ_t = (H/α) e^{αt/H} and {Z_t : t ≄ 0} is fractional Brownian motion. We find that the process Y^{(α)} := {Y^{(α)}_t : t ≄ 0}, if scaled properly, has the same finite dimensional distributions as the process Y^{(1)} := {Y^{(1)}_t : t ≄ 0}. The main result of this monograph is that we define a stationary fractional Ornstein–Uhlenbeck process of the second kind as a process with the two-sided driving process {Y^{(1)}_t : t ∈ R}, and thus create a new family of fractional Ornstein–Uhlenbeck processes. We study many properties of the fractional Ornstein–Uhlenbeck process of the second kind. For example, we show that it is Hölder continuous of any order ÎČ < H and find the kernel representation of its covariance.
We study many properties of the processes Y^{(α)} and Y^{(1)}, since they are quite interesting in themselves. We represent these processes as stochastic integrals with respect to Brownian motion and prove that the sample paths of the process Y^{(α)} are Hölder continuous of any order ÎČ < H. In the case H ∈ (1/2, 1), we find the covariance kernel of the increment process of Y^{(α)}, and using it we investigate the covariance of Y^{(α)} and the variance of Y^{(α)} as t tends to infinity. One of our main results is that the increment process of Y^{(α)} is short-range dependent. We also study weak convergence and tightness and finally prove that (1/√a) Y^{(α)}_{at} converges weakly to scaled Brownian motion.
In the case H ∈ (1/2, 1), fractional Brownian motion and the fractional Ornstein–Uhlenbeck process of the first kind both exhibit long-range dependence, but the fractional Ornstein–Uhlenbeck process of the second kind exhibits short-range dependence. This offers more opportunities to model network traffic or economic time series via tractable fractional processes. The fractional Ornstein–Uhlenbeck processes of the first and the second kind are quite similar to simulate, since they can both be represented via stochastic differential equations.
Preface
The work presented in this thesis and the post-graduate studies, including the Licentiate thesis, were carried out in the Department of Mathematics at the University of Joensuu and the Department of Mathematics at Tampere University of Technology.
I want to express my deepest thanks to my supervisors Prof. Paavo Salminen and Prof. Sirkka-Liisa Eriksson for their skilful guidance and encouragement during my doctoral studies. I also want to thank all the people from the Finnish Graduate School in Stochastics: Paavo, Göran, Ilkka et al., not forgetting the late Esko; you have made the best place ever to study stochastics, the atmosphere is unique! I would like to thank the pre-examiners Professor Tommi Sottinen and Adjunct Professor Ehsan Azmoodeh for their encouraging comments and careful reading. In addition, I would like to thank Professor Ilkka Norros, who agreed to be the opponent in the public defence of this monograph. I thank Heikki Orelma and Osmo Kaleva for discussions, and Osmo and Simo Ali-Löytty for guidance with Matlab and LaTeX. I am also grateful to all my colleagues at the University of Joensuu and Tampere University of Technology. I also want to thank Virginia Mattila and Anu Granroth, who helped me with the language and grammar.
The financial support of the Finnish Graduate School in Stochastics, the University of Eastern Finland (Joensuu) and Tampere University of Technology, and grants from the Pohjois-Karjalan kulttuurirahasto, the VÀisÀlÀ foundation of the Tiedeakatemia, the Tampereen TiedesÀÀtiö and The Magnus Ehrnrooth Foundation are gratefully acknowledged.
Above all I want to thank my family, especially my dear husband Pasi for his love and support, and my lovely daughter Peppi and wonderful son Otso. I also thank my mother Marjatta and father Tapani, my lovely siblings Kristiina and Olli, and dear friends (to name a few) Saija, Minni, Merja, . . . thanks to all of you.
Tampere, September 2015
Terhi Kaarakka
Contents
Abstract i
Preface iii
Contents v
Mathematical notations vi
List of notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
1 Introduction 1
1.1 Gaussian processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Self-similarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Asymptotic behaviour and long-range dependence . . . . . . . . . . . . . 10
1.4 Some important Gaussian processes . . . . . . . . . . . . . . . . . . . . . 11
2 Covariances and spectral density functions 29
2.1 Covariances and stationarity . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2 On asymptotic behaviour of the Doob transformation of fBm . . . . . . . 30
2.3 Spectral density function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.4 Spectral density functions of OU and fOU . . . . . . . . . . . . . . . . . 41
3 Fractional Ornstein–Uhlenbeck processes 49
3.1 fOU(2), Fractional OU process of the second kind . . . . . . . . . . . . . . 49
3.2 Covariance kernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.3 Conclusion of the stationarity of the fOU processes . . . . . . . . . . . . . 82
3.4 fOU(2) is short-range dependent for H > 1/2 . . . . . . . . . . . . . . . . 83
4 Weak convergence 85
4.1 Weak convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.2 Weak convergence of Y^{(α)} . . . . . . . . . . . . . . . . . . . . . . . . . 89
5 Conclusion 93
5.1 Main results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.2 On some statistical studies of fOU(2) . . . . . . . . . . . . . . . . . . . . . 94
A Summary of some properties and definitions 97
A.1 Gaussian processes: conclusions . . . . . . . . . . . . . . . . . . . . . . . . 97
References 98
Mathematical notations
List of notations
B(U) Borel σ-algebra of subsets of U
F_t Filtration
N Natural numbers, 1, 2, 3, . . .
N_0 Natural numbers N ∪ {0}: 0, 1, 2, 3, . . .
R Real numbers
H ∈ (0, 1) Hurst constant
(2H choose n) = 2H(2H − 1) · · · (2H − n + 1)/n! Binomial coefficient, n > 0, H ∈ (0, 1)
τ_t = (H/α) e^{αt/H}
τ^{(1)}_t = H e^{t/H}
∆ Odd increasing density function
∆â€Č Spectral density function
Γ Gamma function
E(X) = ÎŒ Mean
E((X_t − Ό_t)(X_s − Ό_s)) = cov(X_t, X_s) Covariance
F Distribution function
P(X_s < y) Probability that X_s is less than y
P(X_s < y | F_t) Conditional probability that X_s is less than y, given F_t
ρ_X(n) = E(X_i X_{i+n}) Covariance, when the mean is zero and i is an arbitrary non-negative integer
Q(s, t) = Q(t − s) = E(X_t X_s) Covariance, when the mean is zero
f̂(γ) = ∫ f(x) e^{iγx} dx Fourier transformation of a function f
f(x) = (1/2π) ∫ f̂(γ) e^{−iγx} dγ Inverse Fourier transformation of a function f
{X_t : t ∈ R} Stochastic process, t is time
{B_t : t ≄ 0} Brownian motion, Bm
{B_t : t ∈ R} Two-sided Brownian motion
{Z_t : t ≄ 0} Fractional Brownian motion, fBm
{Z_t : t ∈ R} Two-sided fractional Brownian motion
IZ = {Z_n − Z_{n−1} : n = 0, 1, 2, . . .} Increment process of fBm
{V_t : t ∈ R} or {U_t : t ∈ R} Ornstein–Uhlenbeck process, OU
{U^{(Z,α)}_t : t ∈ R} Fractional Ornstein–Uhlenbeck process of the first kind, fOU(1)
{X^{(D,α)}_t : t ∈ R} Fractional Ornstein–Uhlenbeck process or Doob transformation of fBm, fOU
{U^{(D,γ)}_t : t ∈ R} Fractional Ornstein–Uhlenbeck process of the second kind, fOU(2)
{Y^{(α)}_t : t ∈ R} Y^{(α)}_t = ∫_0^t e^{−αs} dZ_{τ_s}
IY = {Y^{(α)}_t − Y^{(α)}_s : s, t ∈ R} Increment process of Y^{(α)}
{Y^{(1)}_t : t ∈ R} Two-sided driving process of fOU(2)
k_X(u, v) Kernel in the representation of the covariance of the process {X_t : t ≄ 0}
k_{αH}(u − v) = r_{αH}(u, v) Kernel in the representation of the covariance of {Y^{(α)}_t : t ≄ 0}
{X_t : t ≄ 0} d= {Y_t : t ≄ 0} Processes {X_t : t ≄ 0} and {Y_t : t ≄ 0} have the same finite dimensional distributions
P_n w→ P P_n converges weakly to P
{X_t : t ≄ 0} w→ {Y_s : s ≄ 0} {X_t : t ≄ 0} converges weakly to {Y_s : s ≄ 0} in the space of continuous functions
1 Introduction
The main topics of this dissertation in the field of stochastics are fractional Ornstein–Uhlenbeck processes, which are special types of Gaussian processes. This area has been studied by several authors, and there are many publications devoted to fractional Ornstein–Uhlenbeck processes, e.g., Kleptsyna and Le Breton [33], Cheridito [11], Matsui and Shieh [40], Nualart and Hu [48], Azmoodeh and Viitasaari [3] and Morlanes [2]. The Wolfram Demonstrations Project also has a demonstration of the fractional Ornstein–Uhlenbeck process [37]: an interactive simulation with options to choose, for example, the mean, the variance and the Hurst constant H.
It is well known that the Ornstein–Uhlenbeck diffusion can be constructed from Brownian motion via a Doob transformation as well as a solution of the Langevin stochastic differential equation (see Doob [16]). Both of these processes have the same finite dimensional distributions. We thought that the same would hold for the fractional Ornstein–Uhlenbeck processes, but noticed fairly soon that this is not the case, since the covariance of the fractional Ornstein–Uhlenbeck process as a solution of the Langevin stochastic differential equation (abbreviated fOU(1)) behaves at infinity like a power function, while the covariance of the fractional Ornstein–Uhlenbeck process constructed by the Doob transformation of fractional Brownian motion (abbreviated fOU) behaves at infinity like an exponential function. (A detailed discussion is presented in Chapter 2.)
We present the Doob transform of fractional Brownian motion via the Langevin stochastic differential equation. One of the main objects is to analyse the driving process of this stochastic differential equation. The driving process of the Langevin equation is Y^{(α)}_s = e^{−αs} Z_{τ_s}, where τ_s = (H/α) e^{αs/H} and {Z_t : t ≄ 0} is fractional Brownian motion (abbreviated fBm). We find that the process Y^{(α)}, if scaled properly, has the same finite dimensional distributions as the process Y^{(1)}. We define a stationary fractional Ornstein–Uhlenbeck process of the second kind (abbreviated fOU(2)) as a process whose driving process is the two-sided process {Y^{(1)}_t : t ∈ R} (see Definition 3.6):
U^{(D,γ)}_t = e^{−γt} ∫_{−∞}^{t} e^{γs} dY^{(1)}_s = e^{−γt} ∫_{−∞}^{t} e^{(γ−1)s} dZ_{τ^{(1)}_s},  γ > 0,  (1.1)
1 Leonard Ornstein (1880–1941), Dutch physicist.
2 George Uhlenbeck (1900–1988), Dutch physicist.
3 Karl F. Gauss (1777–1855), German mathematician.
4 Joseph L. Doob (1910–2004), American mathematician.
5 Paul Langevin (1872–1946), French physicist.
where τ^{(1)}_s = H e^{s/H}, H ∈ (0, 1), and Z is fractional Brownian motion. This fOU(2) coincides with fOU (the Doob transformation of fBm) when γ = 1.
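The construction above suggests a direct way to sample such processes numerically: simulate fBm on a time-changed grid and multiply by an exponential factor. The following Python sketch (the helper names fbm_cholesky and doob_transform_path are mine, not from the thesis, whose own simulations use Matlab) samples the Doob transformation X^{(D,α)}_t = e^{−αt} Z_{τ_t} with τ_t = (H/α) e^{αt/H}, drawing fBm via the Cholesky factor of its covariance (1/2)(t^{2H} + s^{2H} − |t − s|^{2H}).

```python
import numpy as np

def fbm_cholesky(times, H, rng):
    """Sample fractional Brownian motion at strictly positive times via the
    Cholesky factor of its covariance matrix (exact for a finite grid)."""
    t = np.asarray(times, dtype=float)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(t)))  # tiny jitter for numerical stability
    return L @ rng.standard_normal(len(t))

def doob_transform_path(t_grid, H, alpha, rng):
    """Doob transformation of fBm: X_t = e^{-alpha t} Z_{tau_t},
    with the time change tau_t = (H / alpha) * exp(alpha * t / H)."""
    t = np.asarray(t_grid, dtype=float)
    tau = (H / alpha) * np.exp(alpha * t / H)
    return np.exp(-alpha * t) * fbm_cholesky(tau, H, rng)

rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 2.0, 50)
x = doob_transform_path(t_grid, H=0.7, alpha=1.0, rng=rng)
```

The Cholesky approach is exact but O(n^3) in the number of grid points; it suffices for illustrative paths like the one sampled here.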
One major motivation for studying these fractional Ornstein–Uhlenbeck processes is that if H > 1/2, fractional Brownian motion and the fractional Ornstein–Uhlenbeck process of the first kind both exhibit long-range dependence, but the fractional Ornstein–Uhlenbeck process of the second kind exhibits short-range dependence. This offers more options to model network traffic or economic time series via tractable fractional processes. However, the fractional Ornstein–Uhlenbeck processes of the first and the second kind are quite similar to simulate, since they can both be represented via stochastic differential equations. In Subsection 3.1.2 of Section 3.1 we present some simulations of these processes.
The fractional Ornstein–Uhlenbeck process of the second kind, U^{(D,γ)}, can be defined via a stochastic differential equation where the driving process is the two-sided process Y^{(1)}. The process {Y^{(1)}_t : t ≄ 0} is interesting in itself, since it is a similar type of stochastic process as the other fOU, and

Y^{(α)}_t = ∫_0^t e^{−αs} dZ_{(H/α) e^{αs/H}},

scaled properly, has the same finite dimensional distributions as the process Y^{(1)} (see Chapter 3). We also study weak convergence and tightness and then prove that (1/√a) Y^{(α)}_{at} converges weakly to scaled Brownian motion.
We also study other properties of U^{(D,γ)} and Y^{(α)}. For example, we verify that they are locally Hölder continuous of any order ÎČ < H, that Y^{(α)} has stationary increments and that U^{(D,γ)} is stationary. We find the kernel representation of the covariance of the increment process of Y^{(α)} and of the process U^{(D,γ)}, and using these representations we establish many other properties of these processes. One of the main results is that both the process U^{(D,γ)} and the increment process of Y^{(α)} are short-range dependent.
In order to make this monograph reader-friendly, we recall in Chapter 1 the basic definitions and properties of Gaussian processes. We also recall stationarity and self-similarity and define some important Gaussian processes: Brownian motion, two Ornstein–Uhlenbeck processes, fractional Brownian motion, the fractional Ornstein–Uhlenbeck process as a solution of the Langevin stochastic differential equation and the fractional Ornstein–Uhlenbeck process as the Doob transformation of fractional Brownian motion. We present and prove numerous properties of these processes.
We calculate the spectral density function for the Doob transformation of fractional Brownian motion using a Bochner theorem. To make the presentation self-contained, the Bochner theorem is also given. We recall that in the Bochner theorem the covariance of the process is expressed as an integral with respect to its spectral density function (Chapter 2, Sections 2.3 and 2.4).
Collecting everything together, our aim is to write a clear, self-explanatory monograph dealing with different fOU processes and their important properties.
In mathematics it is customary to write in the "we" form, since we think of understanding as a process involving a writer and a reader together. This means that the personal pronoun "we" actually refers to me and the reader.
The brief list of the novelty value and the author's role in each chapter of the monograph is the following:
1. Introduction. In this chapter I compile the theoretical background and recall basic definitions and theorems. There are plenty of proofs of mine, but only with minor novelty value.
2. Covariance and spectral density functions. I have written this chapter using my licentiate thesis [28], which is written in Finnish. I have also developed its ideas further and improved the results. Some theorems are new, for example Corollary 2.4. The main theorem of this chapter is Theorem 2.8, where the spectral density function of the Doob transformation of fBm is given; this theorem also has novelty value.
3. Fractional Ornstein–Uhlenbeck processes. All results in this chapter are novelties. The fractional Ornstein–Uhlenbeck process of the second kind was defined for the first time in the publication [29], where I was the corresponding author and did the main mathematical work. In this chapter I have written propositions with complete proofs; some of them were already published (there are references in the titles) in [29], but with brief proofs. It was important to publish [29] fast, since the results have strong novelty value.
4. Weak convergence. In Section 4.1 I recall the main concepts of weak convergence, and in Section 4.2 I show that the driving process of fOU(2), Y^{(α)}, if scaled properly, converges weakly to scaled Brownian motion. This novelty result and its proof are also published in [29].
1.1 Gaussian processes
1.1.1 Basic properties of Gaussian processes

In this section we recall some important definitions and properties that we use the most in this dissertation. They are quite standard in the literature. There are several good references on this subject; we mention, for example, Doob [17] and Dym and McKean [18].
Definition 1.1. A real-valued stochastic process {X_t : t ∈ R} on the probability space (Ω, F, P) is called a Gaussian process if the vector

(X_{t_1}, X_{t_2}, . . . , X_{t_n})

is multivariate Gaussian for every t_1, t_2, . . . , t_n ∈ R, n ≄ 1, i.e., every finite collection of random variables has a multivariate normal distribution.
It is well known that the distribution of a Gaussian process {X_t : t ∈ R} is determined uniquely by its mean function t ↩ E(X_t) and the covariance function

(s, t) ↩ E((X_t − E(X_t))(X_s − E(X_s))).

Often in the definition of a Gaussian process it is assumed that the mean is zero. An important property is the stationarity of a process, which means that the finite dimensional distributions do not change in time. We define in Definition 1.7 when stochastic processes have the same finite dimensional distributions. We state the definition and the theorem of Dym and McKean [18]; in this definition the mean is assumed to be zero.
Definition 1.2. The Gaussian process {X_t : t ∈ R} of zero mean is called stationary if the process {X_{T+t} : t ∈ R} has the same finite dimensional distributions as the process {X_t : t ∈ R} for any T ∈ R. In other words, in the stationary case the probability

P(∩_{i=1}^{n} {a_i ≀ X_{t_i+T} ≀ b_i})

does not depend on T ∈ R for any n ∈ N.
There is also a weaker form of stationarity for all processes (not only Gaussian). If the process is not necessarily stationary but its mean and variance are constants and the covariance depends only on the time difference, then we say that the process is second order stationary. This definition is used, for example, in Cowpertwait and Metcalfe [13]. In the next theorem we actually show that every stationary Gaussian process is also second order stationary. Thus, another way to state stationarity for Gaussian processes is the following.
Theorem 1.3. The Gaussian process {X_t : t ∈ R} of zero mean is stationary if and only if the covariance E(X_{t_1+T} X_{t_2+T}) does not depend on T for any t_1, t_2 ∈ R.
Proof. Let {X_t : t ∈ R} be a Gaussian process of zero mean. If the process is stationary then obviously the covariance does not depend on T. Conversely, if we assume that

E(X_{t_1+T} X_{t_2+T}) = E(X_{t_1} X_{t_2}),

then

P(∩_{i=1}^{n} {a_i ≀ X_{t_i+T} ≀ b_i}) = P(∩_{i=1}^{n} {a_i ≀ X_{t_i} ≀ b_i})

for any n ∈ N, since in the Gaussian case the covariance function determines the distribution uniquely.
We denote the covariance function by

cov(X_t, X_s) := E((X_t − E X_t)(X_s − E X_s)),

and we define the covariance matrix (or variance–covariance matrix) of two random vectors X := (X_{t_1}, X_{t_2}, · · · , X_{t_n}) and Y := (Y_{s_1}, Y_{s_2}, · · · , Y_{s_n}), for t_i, s_j ∈ R, i, j = 1, . . . , n, as [a_{ij}]_{n×n} with the general element

a_{ij} = E((X_{t_i} − E X_{t_i})(Y_{s_j} − E Y_{s_j})).
Theorem 1.4. Let {X_t : t ∈ R} be a Gaussian process. Then for any n ∈ N and every t_1, . . . , t_n ∈ R the covariance matrix of the multivariate Gaussian random vector (X_{t_1}, . . . , X_{t_n}) is non-negative definite.
Proof. Let {X_t : t ∈ R} be a Gaussian process; then the random vector (X_{t_1}, X_{t_2}, . . . , X_{t_n}) is Gaussian for any t_1, t_2, . . . , t_n ∈ R. We write its covariance matrix as

Q := [Q_{jk}]_{j,k=1,...,n}, where Q_{jk} := E((X_{t_j} − E(X_{t_j}))(X_{t_k} − E(X_{t_k}))).
Since every covariance matrix Q is symmetric, and symmetric matrices are orthogonally diagonalizable, there exists an n × n matrix C such that the matrix C^T Q C, denoted by B, is a diagonal matrix with the eigenvalues of Q on its main diagonal. We consider the diagonal matrix B (taking the mean to be zero for brevity):

B = C^T Q C
= C^T E([X_{t_1}, X_{t_2}, . . . , X_{t_n}]^T [X_{t_1}, X_{t_2}, . . . , X_{t_n}]) C
= E(C^T [X_{t_1}, X_{t_2}, . . . , X_{t_n}]^T [X_{t_1}, X_{t_2}, . . . , X_{t_n}] C)
= E(([X_{t_1}, X_{t_2}, . . . , X_{t_n}] C)^T [X_{t_1}, X_{t_2}, . . . , X_{t_n}] C),

where Y = [X_{t_1}, X_{t_2}, · · · , X_{t_n}] C is Gaussian, each component being a linear combination of the Gaussian random variables X_{t_i}, i = 1, . . . , n, and therefore Y is a Gaussian random vector. Thus B is the covariance matrix of the Gaussian vector Y. Since B is a diagonal matrix, it consists of the variances of the components of Y, and we write B = Var(Y). Let v be the row vector v = uC^T. Applying the previous statements, we may write

vQv^T = uC^T QCu^T = uBu^T = u Var(Y) u^T = Var(u · Y) ≄ 0.

Hence vQv^T = ÎŁ_{j,k=1}^{n} Q_{jk} v_j v_k ≄ 0, and therefore the covariance matrix is non-negative definite.
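Theorem 1.4 can be checked numerically for a concrete Gaussian process: the covariance matrix of fBm, with covariance (1/2)(t^{2H} + s^{2H} − |t − s|^{2H}) as recalled in Section 1.4, evaluated at any finite set of times must have non-negative eigenvalues up to round-off. A small sketch (the time grid and the value of H are arbitrary choices of mine):

```python
import numpy as np

# Covariance matrix of fBm at a handful of time points; since fBm is
# Gaussian, Theorem 1.4 says this matrix must be non-negative definite.
H = 0.7
t = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
s, u = np.meshgrid(t, t)
Q = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))

eigvals = np.linalg.eigvalsh(Q)  # Q is symmetric, so eigvalsh applies
```

All entries of eigvals are non-negative up to floating-point round-off, in agreement with the theorem.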
We present some important definitions of continuity and equality of stochastic processes.
Definition 1.5. A process {X_t : t ∈ R} is called L2-continuous at t_0 if for any Δ > 0 there exists ÎŽ > 0 such that

E(|X_t − X_{t_0}|^2) < Δ

holds for all |t − t_0| < ÎŽ.
Dealing with stochastic processes, we often need their continuous versions (modifications). The following definition of a version may be found in Klebaner [32].
Definition 1.6. Two stochastic processes {X_t : t ∈ R} and {Y_t : t ∈ R} are called versions (modifications) of each other if

P(X_t = Y_t) = 1 for all t.
Note that two stochastic processes may be versions of each other although one of them is continuous and the other is not. However, if X is a version of Y, then X and Y have the same finite dimensional distributions. The following definition is from Karatzas and Shreve [31].
Definition 1.7. R^d-valued stochastic processes X = {X_t : t ≄ 0} and Y = {Y_t : t ≄ 0} have the same finite dimensional distributions if, for any integer n ≄ 1, real numbers 0 ≀ t_1 < t_2 < · · · < t_n < ∞ and A ∈ B(R^{nd}), where B(R^{nd}) is the smallest σ-field containing all open sets of R^{nd}, we have:

P((X_{t_1}, . . . , X_{t_n}) ∈ A) = P((Y_{t_1}, . . . , Y_{t_n}) ∈ A).

If X and Y have the same finite dimensional distributions we use the notation

{X_t : t ≄ 0} d= {Y_t : t ≄ 0}.
There is also a stricter requirement for the identity of two processes:

Definition 1.8. Two processes {X_t : t ≄ 0} and {Y_t : t ≄ 0} are indistinguishable if

P(X_t = Y_t for all t ≄ 0) = 1.
Indistinguishability of the processes means that the sample paths of the processes are almost surely equal. Indistinguishability requires slightly more than the processes being versions of each other: indistinguishable processes are versions of each other, but the converse is not necessarily true. See, for example, Capasso and Bakstein [9].
If the processes X and Y are defined on the same state space but different probability spaces, we can still define what it means for them to have the same finite dimensional distributions; see, for example, [31].
Definition 1.9. Let X = {X_t : t ≄ 0} and Y = {Y_t : t ≄ 0} be stochastic processes defined on probability spaces (Ω, F, P) and (Ω̃, F̃, P̃), respectively, having the same state space (R^d, B(R^d)). The stochastic processes X and Y have the same finite dimensional distributions if, for any integer n ≄ 1, real numbers 0 ≀ t_1 ≀ t_2 ≀ · · · ≀ t_n < ∞ and A ∈ B(R^{nd}), we have:

P((X_{t_1}, . . . , X_{t_n}) ∈ A) = P̃((Y_{t_1}, . . . , Y_{t_n}) ∈ A).
We emphasize that if there is a continuous version, we use it. For this reason we restate the special continuity theorem, modified for a 1-dimensional time parameter, from Borodin and Salminen [7, Ch. 1, Sec. 1].
Theorem 1.10 (The Kolmogorov continuity criterion). Let X = {X_t : t ∈ [0, T]} be a stochastic process. If there exist positive constants α > 0, ÎČ > 0 and M > 0 such that

E(|X_t − X_s|^α) ≀ M |t − s|^{1+ÎČ}

for every 0 ≀ s, t ≀ T, then X has a continuous version.
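As a standard illustration (not from the thesis), Brownian motion satisfies the criterion with α = 4 and ÎČ = 1: since B_t − B_s ~ N(0, |t − s|), the Gaussian fourth moment gives E|B_t − B_s|^4 = 3|t − s|^2. A quick Monte Carlo sanity check in Python (the variable names and the chosen gap dt are mine):

```python
import numpy as np

# Sanity check (not a proof): for Brownian motion B_t - B_s ~ N(0, |t - s|),
# so E|B_t - B_s|^4 = 3|t - s|^2. This is the Kolmogorov bound with
# alpha = 4 and 1 + beta = 2, i.e. beta = 1, yielding a continuous version.
rng = np.random.default_rng(42)
dt = 0.3                              # the time gap |t - s|
increments = rng.normal(0.0, np.sqrt(dt), size=200_000)
empirical = np.mean(increments ** 4)  # Monte Carlo estimate of E|B_t - B_s|^4
theoretical = 3 * dt ** 2             # exact Gaussian fourth moment
```

With 200,000 samples the empirical fourth moment agrees with the exact value to within a few percent.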
We recall the following technical lemma, and after that we consider further continuity properties of a covariance and the L2 continuity of a stationary Gaussian process. If the process {X_t : t ∈ R} of zero mean is stationary we denote

Q(t_1, t_2) = E(X_0 X_{t_2−t_1}) = Q(0, t_2 − t_1) =: Q(t_2 − t_1).

6 Andrey N. Kolmogorov (1903–1987), Russian mathematician.
Lemma 1.11. Let {X_t : t ∈ R} be a stationary Gaussian process. Then

E((X_{t_2} − X_{t_1})^2) = 2(Q(0) − Q(ÎŽ))

for ÎŽ = t_2 − t_1 and any t_1, t_2 ∈ R.
Proof. We calculate

E((X_{t_2} − X_{t_1})^2) = E(X_{t_2}^2 − 2X_{t_1}X_{t_2} + X_{t_1}^2)
= E(X_{t_2}^2) − 2E(X_{t_1}X_{t_2}) + E(X_{t_1}^2)
= Q(t_2 − t_2) − 2Q(t_2 − t_1) + Q(t_1 − t_1)
= 2(Q(0) − Q(ÎŽ)).
Applying Lemma 1.11 we obtain more properties for the covariance function.
Theorem 1.12. Let {X_t : t ∈ R} be a Gaussian process. If {X_t : t ∈ R} is stationary, then its covariance function Q is an even function. Moreover, if Q is continuous at zero, then it is continuous everywhere.
Proof. The covariance is even, since

Q(t − s) = E(X_s X_t) = E(X_t X_s) = Q(−(t − s)).

Since Q is continuous at zero, using the Cauchy–Schwarz inequality, we obtain

lim_{h→0} |Q(t + h) − Q(t)|
= lim_{h→0} |E(X_0 X_{t+h}) − E(X_0 X_t)|
= lim_{h→0} |E(X_0 (X_{t+h} − X_t))|
≀ lim_{h→0} (E(X_0^2))^{1/2} (E(X_{t+h} − X_t)^2)^{1/2}
= lim_{h→0} (Q(0))^{1/2} (2(Q(0) − Q(−h)))^{1/2}
= 0.
We are able to state the following lemma concerning L2 continuity. We first recall that the L2 continuity of the process {X_t : t ∈ R} is uniform if for any Δ > 0 there exists ÎŽ > 0 such that

E((X_{t_2} − X_{t_1})^2) < Δ

for all t_1, t_2 ∈ R with |t_2 − t_1| < ÎŽ. In other words, uniform L2 continuity means that the continuity does not depend on t_1, t_2 ∈ R.
Lemma 1.13. The stationary Gaussian process X := {X_t : t ∈ R} is uniformly L2 continuous if its covariance function is continuous at zero.
Proof. Assume that X is a stationary Gaussian process and let Q be its covariance function. We assume that Q is continuous at zero; by the previous theorem it is then continuous everywhere. Let t_1, t_2 ∈ R and Δ > 0. Since Q is continuous at zero, there exists a ÎŽ > 0 such that

|Q(0) − Q(u)| < Δ/2

for |u| < ÎŽ. Denoting u = t_2 − t_1 and applying Lemma 1.11, we infer

E((X_{t_2} − X_{t_1})^2) < Δ,

hence the process is uniformly L2 continuous.
Corollary 1.14. The covariance function Q of a stationary Gaussian process X attains its maximum at zero.

Proof. Applying the technical Lemma 1.11 and the fact that E((X_{t_2} − X_{t_1})^2) is always non-negative, we infer that Q(0) ≄ Q(ÎŽ) for all ÎŽ, and therefore the greatest value of Q is attained at zero.
Definition 1.15. Let X = {X_t : t ≄ 0} be a stochastic process. If for all T > 0 there exist some ÎČ > 0 and a finite random variable K_T(ω) satisfying the condition

sup_{s,t ≀ T, s ≠ t} |X_t(ω) − X_s(ω)| / |t − s|^ÎČ ≀ K_T(ω)

for almost all ω, then X is called locally Hölder continuous of the order ÎČ.
There is the following connection between the Kolmogorov criterion and Hölder continuity. If there exist strictly positive α and ÎČ such that

E|X_t − X_s|^α ≀ M |t − s|^{1+ÎČ},

then the process X has a Hölder continuous version of any order Îł < ÎČ/α. This remark can be found, for example, in Revuz and Yor [52, Theorem 2.1, p. 26].
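As a worked instance of this remark (a standard argument, not specific to the thesis), the bound yields the Hölder exponent of fractional Brownian motion, whose increments satisfy Z_t − Z_s ~ N(0, |t − s|^{2H}):

```latex
% Gaussian increments: Z_t - Z_s \sim N(0, |t-s|^{2H}), so for k = 1, 2, \dots
\mathbb{E}\,|Z_t - Z_s|^{2k} = (2k-1)!!\,|t-s|^{2kH},
% i.e. the moment bound holds with \alpha = 2k and 1 + \beta = 2kH
% (usable as soon as 2kH > 1), giving a H\"older continuous version of any order
\gamma < \frac{\beta}{\alpha} = \frac{2kH - 1}{2k} = H - \frac{1}{2k}.
% Letting k \to \infty yields every order \gamma < H.
```

This matches the statement, made earlier for Y^{(α)} and fOU(2), that the sample paths are Hölder continuous of any order ÎČ < H.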
1.2 Self-similarity
Sometimes a process looks the same as the original one although the scale on which it is viewed is changed from macroscopic to microscopic. This phenomenon is called self-similarity, and it is known from nature. For example, the branching of trees is a self-similar process. Another visual example is the romanesco broccoli, which helps one understand the meaning of self-similarity.
The process in Figure 1.2 is Brownian motion {B_t : t ≄ 0}, which is still perhaps the most famous example of self-similarity.
In the middle of the 20th century Hurst studied changes in the elevation of the water in the Nile (over a long period of time). He noticed that the changes did not depend on the time scale. Hurst built up a new statistical method, R/S analysis, which has connections with long-range dependent processes; see, for example, Hurst [23], [24] and [25].

Figure 1.1: Romanesco broccoli; the picture is from Walker [57]

Figure 1.2: Sample paths of one Brownian motion, t ∈ [0, 1], t ∈ [0, 0.2] and t ∈ [0, 0.1]

When Mandelbrot and Van Ness [38] started to study fractional Brownian motion, they named the constant H the Hurst constant in his honour.

7 Harold E. Hurst (1880–1978), British hydrologist.
8 Benoüt B. Mandelbrot (1924–2010), Polish-born, French and American mathematician.
Lamperti investigated mathematically the same kind of processes as Hurst. He studied the convergence of stochastic processes [34] and recognized stationarity in many of those processes. In [35] Lamperti introduced semi-stable processes that satisfy the same scaling property as self-similar processes.
Definition 1.16. A stochastic process {X_t : t ∈ R} is called H-self-similar if the stochastic processes {X_{αt} : t ∈ R} and {α^H X_t : t ∈ R} have the same finite dimensional distributions for all α > 0 and for H ∈ (0, 1), that is,

{X_{αt} : t ∈ R} d= {α^H X_t : t ∈ R}.
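To make the definition concrete: for a zero-mean Gaussian process, equality of finite dimensional distributions reduces to equality of covariance functions, so the H-self-similarity of fBm (covariance recalled in Section 1.4) can be verified numerically by comparing cov(Z_{αt}, Z_{αs}) with α^{2H} cov(Z_t, Z_s). A sketch (grid, H and the scaling factor a are choices of mine):

```python
import numpy as np

def fbm_cov(t, s, H):
    """Covariance of fractional Brownian motion (see Section 1.4)."""
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

# H-self-similarity at the covariance level:
# cov(Z_{a t}, Z_{a s}) should equal a^{2H} * cov(Z_t, Z_s).
H, a = 0.7, 3.0
grid = np.linspace(0.1, 2.0, 20)
lhs = np.array([[fbm_cov(a * t, a * s, H) for s in grid] for t in grid])
rhs = a ** (2 * H) * np.array([[fbm_cov(t, s, H) for s in grid] for t in grid])
```

The two matrices agree exactly (up to floating point), since the identity follows algebraically from the covariance formula.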
We study many processes that exhibit the property of self-similarity. The most commonself-similar process is Brownian motion studied in Section 1.4.1.
1.3 Asymptotic behaviour and long-range dependence
We also need some properties of the asymptotic behaviour of the processes when studying long-range and short-range dependence. Thus, we present the familiar "Ordo" symbol to describe the growth rates of functions.

Remark 1.17. Let the functions f and g be defined in the same neighbourhood N_0 of x_0 ∈ R ∪ {−∞, ∞}. If there exists a strictly positive k such that

|f(x)/g(x)| ≀ k

for any x ∈ N_0, then we write

f(x) = O(g(x)) as x → x_0.
The idea of this notation is that f increases more slowly, or decreases more rapidly, than some multiple of g. If there exist strictly positive k_1 and k_2 such that

k_1 ≀ |f(x)/g(x)| ≀ k_2

for any x ∈ N_0, then f has the same asymptotic behaviour as g, and they both increase or decrease at the same rate. In this case we use the notation

f(x) = Θ(g(x)) as x → x_0.
In the literature there are many definitions of long-range dependence. They all share the same idea: if the process {X_t : t ≄ 0} is long-range dependent, then its covariance vanishes slowly, in particular not exponentially.
Definition 1.18. A stationary second order process {X_t : t ∈ R} or a sequence {X_n : n ∈ N} of zero mean is called long-range dependent if

ÎŁ_{n=1}^{∞} cov(X_1, X_n) = ÎŁ_{n=1}^{∞} E(X_1 X_n)

diverges. If the sum converges, then the process or the sequence is called short-range dependent.
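For the fBm increment process IZ (see the notation list), the autocovariance ρ(n) = (1/2)((n+1)^{2H} − 2n^{2H} + (n−1)^{2H}) is standard, and Definition 1.18 can be explored numerically: the partial sums of ρ(n) diverge for H > 1/2 and converge for H < 1/2. A Python sketch (the function name is mine):

```python
import numpy as np

def fbm_increment_autocov(n, H):
    """Autocovariance rho(n) = E(I_k I_{k+n}) of unit-spaced fBm increments
    I_k = Z_k - Z_{k-1}; standard formula obtained from the fBm covariance."""
    n = np.asarray(n, dtype=float)
    return 0.5 * ((n + 1) ** (2 * H) - 2 * n ** (2 * H) + np.abs(n - 1) ** (2 * H))

lags = np.arange(1, 100_001)
partial_long = np.cumsum(fbm_increment_autocov(lags, H=0.8))   # keeps growing: long-range dependent
partial_short = np.cumsum(fbm_increment_autocov(lags, H=0.3))  # settles near a limit: short-range dependent
```

The sums telescope to ((N+1)^{2H} − N^{2H} − 1)/2, which diverges like N^{2H−1} for H > 1/2 and converges (to −1/2) for H < 1/2, matching the dichotomy discussed in the Introduction.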
9 John W. Lamperti, American mathematician.
Remark 1.19. We use Definition 1.18, since it is easy to understand and apply. However, a stationary second order stochastic process X = {X_n : n = 0, 1, 2, . . .} of mean zero is sometimes called

‱ long-range dependent if there exist α ∈ (0, 1) and a constant C > 0 such that

lim_{n→∞} ρ_X(n) / (C n^{−α}) = 1, where ρ_X(n) := E(X_i X_{i+n}) for any non-negative integer i,

and

‱ short-range dependent if lim_{k→∞} ÎŁ_{n=0}^{k} ρ_X(n) exists.

This kind of definition is stated, for example, in Beran [4, p. 6 and p. 42] and is occasionally more convenient to use than our Definition 1.18.
Definition 1.18 is not always equivalent to the above definition from [4]. Indeed, consider the case where

E(X_{n+k} X_k) = 1/n, n = 1, 2, . . . .

Then we have the harmonic series and

∑_{n=1}^∞ ρ_X(n) = lim_{k→∞} ∑_{n=1}^{k} ρ_X(n) = lim_{k→∞} ∑_{n=1}^{k} 1/n = ∞.

Hence X is long-range dependent according to Definition 1.18. But if α ∈ (0, 1), then

lim_{n→∞} ρ_X(n) / (C n^{−α}) = 0

for any C > 0. This means that the process X is not long-range dependent according to the definition of Remark 1.19.
In this dissertation we will use Definition 1.18 for long-range dependence and short-rangedependence.
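The distinction made by Definition 1.18 can be illustrated numerically. The sketch below (not part of the thesis; the value H = 0.3 for the short-range case is an assumed example) compares partial sums of the harmonic autocovariance from the counterexample above with a summable power-law autocovariance:

```python
# Sketch: partial sums of two autocovariance sequences under Definition 1.18.
# rho(n) = 1/n grows like log k (long-range dependent), while
# rho(n) = n^(2H-2) with H = 0.3 has a finite sum (short-range dependent).

def partial_sum(rho, k):
    """Sum rho(n) over n = 1..k."""
    return sum(rho(n) for n in range(1, k + 1))

harmonic = lambda n: 1.0 / n
H = 0.3
power = lambda n: n ** (2 * H - 2)          # exponent 2H - 2 = -1.4 < -1

print(partial_sum(harmonic, 10 ** 5))       # keeps growing, roughly log(k)
print(partial_sum(power, 10 ** 5))          # approaches a finite limit
```

Increasing k makes the first sum grow without bound while the second one stabilizes, which is exactly the divergence/convergence dichotomy of Definition 1.18.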
1.4 Some important Gaussian processes
1.4.1 Brownian motion and OU processes

Brownian motion was invented by the botanist Robert Brown in 1827 [8]. Calling Brown the inventor of Brownian motion may be venturesome, since there were other researchers of his time studying the same field; Brown might simply have been the first to publish something about this phenomenon. In any case, he observed pollen grains moving on the surface of water. The movement was random, and he did not understand its cause. At first he thought that the reason was connected only to organic particles, but later he also observed the same kind of movement with synthetic particles.

It took quite some time to find an explanation for this movement. Albert Einstein^10 recognized in 1905 that the reason for the movement is thermodynamic. But actually Louis Bachelier^11 was the first person to model Brownian motion: he used Brownian motion to model stock prices on the Paris Stock Exchange in 1900. In mathematics Brownian motion is also often called the Wiener process, since Norbert Wiener^12 proved the existence of Brownian motion, defined as follows.

^10 Albert Einstein (1879–1955), German theoretical physicist.
Definition 1.20. A real valued stochastic process {B_t : t ≥ 0} is called standard Brownian motion (Bm) starting at zero, if the following properties hold:

(i) B_0 = 0 a.s.;

(ii) E(B_{t_i} − B_{t_{i−1}})^2 = t_i − t_{i−1};

(iii) for any 0 = t_0 < t_1 < · · · < t_n the increments

B_{t_n} − B_{t_{n−1}}, B_{t_{n−1}} − B_{t_{n−2}}, . . . , B_{t_1} − B_{t_0}

are independent and normally distributed with

E(B_{t_i} − B_{t_{i−1}}) = 0.
Item (ii) of Definition 1.20 implies that B has continuous paths a.s. We recall the definition of a Markov process from [32] and of the Ornstein–Uhlenbeck process from [50], since they both involve Brownian motion.
Definition 1.21. If for any t and s > 0 the conditional distribution of X_{t+s} given the σ-field F_t is the same as the conditional distribution of X_{t+s} given X_t, that is, for all y ∈ R

P(X_{t+s} ≤ y | F_t) = P(X_{t+s} ≤ y | X_t) a.s.,

then X is a Markov process.
There are two different ways to construct the OU process: either via a time and space transformation, which is also called the Doob transformation, or as a solution of a stochastic differential equation whose driving process is standard Brownian motion. First we present the definition of the Doob transformation.
Definition 1.22. Let {B_t : t ≥ 0} be standard Brownian motion and α > 0. Then the process {V_t : t ∈ R},

V_t := e^{−αt} B_{e^{2αt}/(2α)},

is called the Ornstein–Uhlenbeck process (OU).
The preceding well-known construction of the OU process is due to Doob [16]; it is a deterministic time and space transformation of standard Brownian motion. The covariance of V is

E(V_t V_s) = e^{−αt−αs} E( B_{e^{2αt}/(2α)} B_{e^{2αs}/(2α)} )
           = e^{−αt−αs} min( e^{2αt}/(2α), e^{2αs}/(2α) )
           = (1/(2α)) e^{−α(t−s)}, if t ≥ s.

^11 Louis J-B. A. Bachelier (1870–1946), French mathematician.
^12 Norbert Wiener (1894–1964), American mathematician.
Since the covariance depends only on the time difference, the process is stationary. Secondly, we construct an OU process as the strong and unique solution of the Langevin stochastic differential equation. The solution of a linear first order differential equation is unique if we impose an initial value X_a = b.
Definition 1.23. Let {B_t : t ≥ 0} be standard Brownian motion and α > 0. The solution of the stochastic differential equation

dU_t = −αU_t dt + dB_t, (1.2)

is also called the Ornstein–Uhlenbeck process (OU1). The solution is

U_t = e^{−αt} ( x + ∫_0^t e^{αs} dB_s ), t ≥ 0, (1.3)

where x is the random initial value of U.
Recall the properties of a strong solution from Øksendal [49, p. 66]. We first define some terms: F_∞ is the smallest σ-algebra containing ∪_{t>0} F_t, and {B_t : t ≥ 0} is a 1-dimensional Brownian motion.
Theorem 1.24. Let T > 0 and let b : [0, T] × R^n → R^n, σ : [0, T] × R^n → R^n be measurable functions satisfying

|b(t, x)| + |σ(t, x)| ≤ C(1 + |x|); x ∈ R^n, t ∈ [0, T],

for some constant C (where |σ|^2 = ∑ |σ_{ij}|^2), and such that

|b(t, x) − b(t, y)| + |σ(t, x) − σ(t, y)| ≤ D|x − y|; x, y ∈ R^n, t ∈ [0, T],

for some constant D. Let Z be a random variable which is independent of the σ-algebra F_∞ generated by {B_s : s ≥ 0} and such that

E(|Z|^2) < ∞.

Then the stochastic differential equation

dX_t = b(t, X_t) dt + σ(t, X_t) dB_t, 0 ≤ t ≤ T, X_0 = Z,

has a unique t-continuous solution X_t(ω) with the property that X_t(ω) is adapted to the filtration F_t^Z generated by Z and {B_s : s ≤ t}, and

E( ∫_0^T |X_t|^2 dt ) < ∞.
This kind of solution X = {X_t : t ∈ [0, T]} from Theorem 1.24 is called a strong solution; see, for example, [49, p. 70]. We know that every linear stochastic differential equation with constant coefficients has a unique strong solution on every interval [0, T]; see, for example, Mikosch [42, p. 138].

In fact the solution (1.3) of the stochastic differential equation (1.2), the Ornstein–Uhlenbeck process of Definition 1.23, is strong and unique, but it is not yet stationary. First we extend it to the whole time axis and then choose the initial value so as to make it stationary.
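The Langevin equation (1.2) has Lipschitz coefficients of linear growth (b(t, x) = −αx and σ = 1), so Theorem 1.24 applies. As an illustration, here is a minimal Euler–Maruyama sketch for (1.2); the parameter values (α = 1, the time horizon and grid, the sample count) are assumptions for the example, not from the thesis:

```python
# Euler-Maruyama sketch for dU_t = -alpha * U_t dt + dB_t (equation (1.2)).
# All numeric parameters below are illustrative assumptions.
import random

def em_endpoint(alpha, T, n, rng):
    """One Euler-Maruyama path of (1.2) on [0, T] with n steps, U_0 = 0; returns U_T."""
    dt = T / n
    u = 0.0
    for _ in range(n):
        # drift b(t, u) = -alpha * u, diffusion sigma = 1
        u += -alpha * u * dt + rng.gauss(0.0, dt ** 0.5)
    return u

rng = random.Random(1)
endpoints = [em_endpoint(1.0, 5.0, 200, rng) for _ in range(2000)]
var = sum(x * x for x in endpoints) / len(endpoints)
print(var)   # close to the stationary variance 1/(2*alpha) = 0.5
```

By time T = 5 the influence of the initial value has essentially died out, so the empirical variance of the endpoints approximates the stationary variance 1/(2α) derived below.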
Definition 1.25. Let B^{(−)} = {B_t^{(−)} : t ≥ 0} be another Brownian motion, independent of B, also starting from 0. For t ∈ R, we set the two-sided Brownian motion

B_t := B_t for t ≥ 0, and B_t := B_{−t}^{(−)} for t < 0.

If in Definition 1.23 the variable x is equal to ξ,

ξ := ∫_{−∞}^0 e^{αs} dB_s,

then ξ is a normally distributed random variable with mean 0 and variance 1/(2α).
Theorem 1.26. The Ornstein–Uhlenbeck process U, defined by

U_t = e^{−αt} ∫_{−∞}^t e^{αs} dB_s, (1.4)

is the stationary solution of (1.2).

Proof. From the considerations above it is clear that {U_t : t ≥ 0} solves (1.3) with

U_0 = x = ∫_{−∞}^0 e^{αs} dB_s.
To prove stationarity, we compute the covariance of the process U as follows:

Q(t − s) = E(U_t U_s)
         = E( e^{−αt} ∫_{−∞}^t e^{αr} dB_r · e^{−αs} ∫_{−∞}^s e^{αr} dB_r )
         = e^{−αt} e^{−αs} [ E( ( ∫_{−∞}^s e^{αr} dB_r )^2 ) + E( ∫_s^t e^{αr} dB_r ∫_{−∞}^s e^{αr} dB_r ) ],

when t > s. If we use the Itô isometry in the first part of the sum and the independence of the increments of Brownian motion in the second one, we obtain

Q(t − s) = e^{−αt} e^{−αs} ∫_{−∞}^s (e^{αr})^2 dr = e^{−α(t−s)} / (2α). (1.5)

As we notice, the covariance depends only on the difference of times, so the process is stationary.
We may also note that the covariance of U is the same as the covariance of V. Since these processes are both Gaussian, they are uniquely determined by their covariances, and thus the processes have the same finite dimensional distributions.

This connection was the first and the main one to lead me to study fractional OU processes. One of the main questions was whether or not they have the same finite dimensional distributions. In this dissertation we study fractional Ornstein–Uhlenbeck processes and their properties. These processes are constructed similarly to OU processes, but Brownian motion is replaced by fractional Brownian motion.
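That both constructions share the covariance e^{−α(t−s)}/(2α) can be checked by simulation. The sketch below samples the Doob transformation V at two fixed times; the values α = 1, s = 0.5, t = 1.5 and the sample count are assumed for illustration:

```python
# Monte Carlo check of E(V_t V_s) = e^{-alpha (t-s)} / (2 alpha) for the
# Doob transformation V_t = e^{-alpha t} B_{e^{2 alpha t}/(2 alpha)}.
import math, random

def doob_pair(alpha, s, t, rng):
    """Sample (V_s, V_t) for s < t using one Brownian increment."""
    a = math.exp(2 * alpha * s) / (2 * alpha)     # transformed time of s
    b = math.exp(2 * alpha * t) / (2 * alpha)     # transformed time of t (> a)
    B_a = rng.gauss(0.0, math.sqrt(a))
    B_b = B_a + rng.gauss(0.0, math.sqrt(b - a))  # independent increment
    return math.exp(-alpha * s) * B_a, math.exp(-alpha * t) * B_b

rng = random.Random(7)
alpha, s, t = 1.0, 0.5, 1.5
pairs = [doob_pair(alpha, s, t, rng) for _ in range(20000)]
empirical = sum(v * w for v, w in pairs) / len(pairs)
exact = math.exp(-alpha * (t - s)) / (2 * alpha)
print(empirical, exact)
```

The empirical covariance agrees with (1.5) up to Monte Carlo error, in line with the two constructions having the same finite dimensional distributions.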
1.4.2 Fractional Brownian motion

In various problems, existing models are unsatisfactory. Brownian motion and the short-range dependent processes derived from it are not the best explanation for problems in data traffic or in economic time series, for example. A concept called fractional Brownian motion is an answer to many questions; it is neither a specific Brownian motion nor a contraction of it. Fractional Brownian motion is a generalization of Brownian motion in the sense that when H = 1/2 it coincides with Brownian motion. It belongs to the class of processes with long memory when H > 1/2.

Mandelbrot and Van Ness were the pioneers of studies of fractional Brownian motion [38]. Their definition of fBm is not as easy to use as the definition we propose here. Fractional Brownian motion is at the heart of the studies in this dissertation. We state two definitions of fractional Brownian motion. The first definition, by Mandelbrot and Van Ness, can be found in [38]; it defines fractional Brownian motion through an integral with respect to Brownian motion:
B_H(0, ω) = b_0,

B_H(t, ω) − B_H(0, ω) = (1/Γ(H + 1/2)) { ∫_{−∞}^0 ( (t − s)^{H−1/2} − (−s)^{H−1/2} ) dB(s, ω) + ∫_0^t (t − s)^{H−1/2} dB(s, ω) },

where Γ is the Gamma function. The definition above is not so easy to use, and we state the more common definition, see, for example, Memin, Mishura and Valkeila [41], as follows.
Definition 1.27. Let 0 < H < 1. Fractional Brownian motion (fBm) {Z_t : t ≥ 0} with Hurst parameter H is a centered Gaussian process with Z_0 = 0 and

E(Z_t Z_s) = (1/2)( t^{2H} + s^{2H} − |t − s|^{2H} ), t, s ≥ 0. (1.6)

Theorem 1.28. Fractional Brownian motion Z = {Z_t : t ≥ 0} with Hurst parameter H ∈ (0, 1) satisfies the following properties:
(i) Z is H-self-similar;

(ii) Z has stationary increments;

(iii) E(Z_t) = 0 for any t ≥ 0;

(iv) E(Z_t^2) = t^{2H} for any t ≥ 0.
Proof.
(i) Fractional Brownian motion is H-self-similar, i.e.,

{Z_{αt} : t ≥ 0} d= {α^H Z_t : t ≥ 0} for any α > 0. (1.7)

Indeed, the self-similarity of fBm follows from (1.6), since

E(Z_{αt} Z_{αs}) = (1/2)( (αt)^{2H} + (αs)^{2H} − |αt − αs|^{2H} )
               = α^{2H} (1/2)( t^{2H} + s^{2H} − |t − s|^{2H} )
               = E( α^H Z_t α^H Z_s ),

and the covariance determines the Gaussian distribution uniquely.

(ii) When s_1 < s_2 < t_1 < t_2, the relation (1.6) implies that

E( (Z_{t_2} − Z_{t_1})(Z_{s_2} − Z_{s_1}) ) = (1/2)( (t_2 − s_1)^{2H} − (t_2 − s_2)^{2H} + (t_1 − s_2)^{2H} − (t_1 − s_1)^{2H} ). (1.8)

By (1.8) and Theorem 1.3 we observe that the increments of fractional Brownian motion are stationary, since

E( (Z_{t_2+h} − Z_{t_1+h})(Z_{s_2+h} − Z_{s_1+h}) ) = (1/2)( (t_2 − s_1)^{2H} − (t_2 − s_2)^{2H} + (t_1 − s_2)^{2H} − (t_1 − s_1)^{2H} ).

(iii) E(Z_t) = 0 for any t ≥ 0, since {Z_t : t ≥ 0} is a centered process.

(iv) For any t ≥ 0, using (1.6),

E(Z_t^2) = E(Z_t Z_t) = (1/2)( t^{2H} + t^{2H} ) = t^{2H}.
In the literature there are also more axiomatic definitions of fractional Brownian motion. These definitions usually contain a collection of the same kind of properties as in Theorem 1.28. A more axiomatic definition can be found, for example, in Norros, Valkeila and Virtamo [45]. The literature also has definitions of fractional Brownian motion where the time parameter t belongs to R; see, for example, [40]. We chose our definition because it makes it simpler to construct stationary processes as stochastic integrals where fBm is the driving process, or to make time transformations with respect to fBm.
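The covariance (1.6) already encodes properties (i), (ii) and (iv) of Theorem 1.28. The following sketch checks these consequences numerically; H = 0.7 and the time points are assumed example values:

```python
# Numerical check of consequences of the fBm covariance (1.6).
def fbm_cov(t, s, H):
    """E(Z_t Z_s) from formula (1.6)."""
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

H, t, s, a = 0.7, 2.0, 0.5, 3.0
# (iv) variance: E(Z_t^2) = t^(2H)
print(fbm_cov(t, t, H), t ** (2 * H))
# (ii) stationary increments: E((Z_t - Z_s)^2) = |t - s|^(2H)
incr = fbm_cov(t, t, H) - 2 * fbm_cov(t, s, H) + fbm_cov(s, s, H)
print(incr, abs(t - s) ** (2 * H))
# (i) H-self-similarity: E(Z_{at} Z_{as}) = a^(2H) E(Z_t Z_s)
print(fbm_cov(a * t, a * s, H), a ** (2 * H) * fbm_cov(t, s, H))
```

Each printed pair agrees, mirroring the algebraic proofs given above.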
Next, we consider more properties of fractional Brownian motion.

Remark 1.29. From Theorem 1.28 (iv) we directly obtain that

E(Z_0^2) = 0 and E(Z_1^2) = 1.

Using the first identity of the proof of Lemma 1.11, property (iv) and the Kolmogorov continuity criterion (Theorem 1.10), we may notice that fractional Brownian motion Z has a continuous version when H > 1/2, since the increments of fractional Brownian motion are stationary and Z_0 = 0, and therefore

E( (Z_t − Z_s)^2 ) = E( Z_{t−s}^2 ). (1.9)
Indeed, we obtain a stronger result using the next lemma, stated, for example, in Nualart [47].

Lemma 1.30. Let Z = {Z_t : t ≥ 0} be fractional Brownian motion with Hurst parameter H ∈ (0, 1). Then

E( (Z_t − Z_s)^{2k} ) = ( (2k)! / (k! 2^k) ) |t − s|^{2Hk}

for any integer k ≥ 1.
Proof. Using (1.9) and self-similarity, we obtain

E( (Z_t − Z_s)^{2k} ) = E( Z_{t−s}^{2k} ) = E( ( |t − s|^H Z_1 )^{2k} ) = |t − s|^{2kH} E( Z_1^{2k} ),

where the second equation follows from the fact that fractional Brownian motion is H-self-similar. From the definition of fractional Brownian motion we conclude that Z_1 ∼ N(0, 1), and so its moment generating function is

m_{Z_1}(p) = ∫_{−∞}^∞ e^{px} (1/√(2π)) e^{−x^2/2} dx
           = ∫_{−∞}^∞ (1/√(2π)) e^{−(x−p)^2/2} e^{p^2/2} dx
           = e^{p^2/2}.
Since m_X^{(n)}(0) = E(X^n), we get

E(Z_1^{2k}) = d^{2k} m_{Z_1}(p) / dp^{2k} |_{p=0}
            = (2k − 1)(2k − 3)(2k − 5) · · · 1 = (2k − 1)!!
            = ( (2k − 1)(2k − 3) · · · 1 · 2k(2k − 2)(2k − 4) · · · 2 ) / ( 2k(2k − 2)(2k − 4) · · · 2 )
            = (2k)! / (2^k k!).

Hence

E( (Z_t − Z_s)^{2k} ) = |t − s|^{2kH} E( Z_1^{2k} ) = ( (2k)! / (2^k k!) ) |t − s|^{2kH}.
Theorem 1.31. Fractional Brownian motion {Z_t : t ≥ 0} has a version with continuous paths.

Proof. Applying Lemma 1.30 we notice that

E( (Z_t − Z_s)^{2k} ) = ( (2k)! / (k! 2^k) ) |t − s|^{2Hk}

for any integer k ≥ 1. Choosing k > 1/(2H), we have a continuous version by the Kolmogorov continuity criterion (Theorem 1.10), for all H ∈ (0, 1).
From now on Z is assumed to be continuous. Moreover, Z is locally Hölder continuous of order β for any β < H (see the definition of Hölder continuity in Definition 1.15). The proof of the local Hölder continuity of fractional Brownian motion can be found, for example, in Sottinen [56]. We give a brief proof.

By Lemma 1.30 we infer

E( (Z_t − Z_s)^{2k} ) = C |t − s|^{2kH}, (1.10)

and so in the Kolmogorov continuity criterion α = 2k and β = 2kH − 1.
Theorem 1.32. Fractional Brownian motion has a locally Hölder continuous version of any order β < H.

Proof. Equation (1.10) states a connection between the Kolmogorov continuity criterion and Hölder continuity (on page 8) as follows: if

E|X_t − X_s|^α ≤ M |t − s|^{1+β}

holds, then the process X is Hölder continuous of any order γ < β/α. Applying (1.10), the process Z is Hölder continuous of order H − 1/(2k). Letting k → ∞ we obtain that Z is Hölder continuous of any order γ < H.
Proposition 1.33. If a stochastic process is locally Hölder continuous of order β and γ < β, then the stochastic process is locally Hölder continuous of order γ.

Proof. We assume that a stochastic process X is locally Hölder continuous of order β and I = [a, b] ⊂ R_+. Then

sup_{s,t∈I; s≠t} |X_t − X_s| / |t − s|^γ = sup_{s,t∈I; s≠t} ( |X_t − X_s| / |t − s|^β ) |t − s|^{β−γ}
                                        ≤ sup_{s,t∈I; s≠t} ( |X_t − X_s| / |t − s|^β ) (b − a)^{β−γ} < M,

since |t − s| < b − a < ∞ and β − γ > 0.
Applying Mishura [43, Section 1.2] we have the kernel representation of the covariance of the increments of fractional Brownian motion for H ∈ (0, 1/2) ∪ (1/2, 1), that is,

E( (Z_{t_2} − Z_{t_1})(Z_{s_2} − Z_{s_1}) ) = ∫_{s_1}^{s_2} ∫_{t_1}^{t_2} H(2H − 1)(u − v)^{2H−2} du dv, (1.11)

if s_1 < s_2 < t_1 < t_2.
We state the following proposition.

Proposition 1.34. Let {Z_t : t ≥ 0} be fractional Brownian motion and H ∈ (0, 1). Then

• if H = 1/2 the increments are independent (Bm case),

• if H > 1/2 the increments are positively correlated, and

• if H < 1/2 the increments are negatively correlated.
Proof. We may prove this proposition using the covariance representation (1.8) of the increments of fractional Brownian motion and the kernel representation (1.11).

We recall that if the autocorrelation function ρ_X(n) := E(X_0 X_n) is positive, the process {X_t : t ∈ R} is positively correlated, and if it is negative, the process is negatively correlated.

Thus we consider the covariance of the increments. In the case H = 1/2,

E( (Z_{t_2} − Z_{t_1})(Z_{s_2} − Z_{s_1}) ) = (1/2)( (t_2 − s_1) − (t_2 − s_2) + (t_1 − s_2) − (t_1 − s_1) ) = 0.

Hence in the case of Brownian motion the increments are independent.
In the case H ≠ 1/2 we calculate the covariance of the increments of fractional Brownian motion using the kernel representation as follows:

E( (Z_{t_2} − Z_{t_1})(Z_{s_2} − Z_{s_1}) ) = ∫_{s_1}^{s_2} ∫_{t_1}^{t_2} H(2H − 1)(u − v)^{2H−2} du dv
                                            = H(2H − 1) ∫_{s_1}^{s_2} ∫_{t_1}^{t_2} (u − v)^{2H−2} du dv.

The double integral is positive, since s_1 < s_2 < t_1 < t_2, and therefore

H(2H − 1) ∫_{s_1}^{s_2} ∫_{t_1}^{t_2} (u − v)^{2H−2} du dv > 0, when H > 1/2,

and

H(2H − 1) ∫_{s_1}^{s_2} ∫_{t_1}^{t_2} (u − v)^{2H−2} du dv < 0, when H < 1/2,

completing the proof.
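The sign pattern of Proposition 1.34 can also be read off from formula (1.8) directly; a small sketch with assumed example intervals:

```python
# Covariance (1.8) of fBm increments over [s1, s2] and [t1, t2], s1 < s2 < t1 < t2.
def incr_cov(s1, s2, t1, t2, H):
    f = lambda x: x ** (2 * H)
    return 0.5 * (f(t2 - s1) - f(t2 - s2) + f(t1 - s2) - f(t1 - s1))

s1, s2, t1, t2 = 0.0, 1.0, 2.0, 3.0     # assumed example intervals
print(incr_cov(s1, s2, t1, t2, 0.5))    # zero: independent increments (Bm case)
print(incr_cov(s1, s2, t1, t2, 0.8))    # positive: H > 1/2
print(incr_cov(s1, s2, t1, t2, 0.2))    # negative: H < 1/2
```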
Recall that Theorem 1.28 (ii) states the stationarity of the increments of fractional Brownian motion Z = {Z_t : t ≥ 0}; however, Z itself is not stationary. Hence the natural step is to define the increment process of fractional Brownian motion.
Definition 1.35. A stationary second order stochastic process {I_Z(n) : n ∈ N_0} is called the increment process of fractional Brownian motion, or fractional Gaussian noise, if

I_Z(n) = Z_{n+1} − Z_n, n = 0, 1, 2, . . . , (1.12)

where {Z_t : t ≥ 0} is fractional Brownian motion.
The kernel representation (1.11) implies the following result. It can also be found, forexample, in [56, p. 9], but we present a brief proof.
Proposition 1.36. Let I_Z be the increment process of fractional Brownian motion. Then the autocorrelation function ρ_{I_Z}(n) satisfies

ρ_{I_Z}(n) := E( I_Z(0) I_Z(n) ) = E( Z_1 (Z_{n+1} − Z_n) )

and

ρ_{I_Z}(n) = Θ(n^{2H−2}) (1.13)

when n tends to ∞.

Proof. Note first that

ρ_{I_Z}(n) = E( I_Z(0) I_Z(n) ) = E( (Z_1 − Z_0)(Z_{n+1} − Z_n) ) = E( Z_1 (Z_{n+1} − Z_n) ).
Then we use the kernel representation (1.11) to obtain

E( Z_1 (Z_{n+1} − Z_n) ) = ∫_0^1 ∫_n^{n+1} H(2H − 1)(u − v)^{2H−2} du dv
                         = ∫_0^1 ∫_0^1 H(2H − 1)(t + n − v)^{2H−2} dt dv.

Applying the Mean Value Theorem twice, we see that there exist d ∈ (0, 1) and c ∈ (0, 1) such that

ρ_{I_Z}(n) = ∫_0^1 H(2H − 1)(d + n − v)^{2H−2} dv = H(2H − 1)(d + n − c)^{2H−2}.

Denoting a := d − c, we have

ρ_{I_Z}(n) = H(2H − 1)(n + a)^{2H−2},

where −1 < a < 1. After some approximations, we obtain

(n + a)^{2H−2} ≤ (n − 1)^{2H−2} ≤ 2 n^{2H−2}

and

(n + a)^{2H−2} ≥ (n + 1)^{2H−2} ≥ (1/2) n^{2H−2},

and therefore

ρ_{I_Z}(n) = H(2H − 1)(n + a)^{2H−2} = Θ(n^{2H−2})

when n is large and tending to infinity.
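The asymptotics (1.13) can be observed numerically: writing ρ_{I_Z}(n) via (1.8) as a second difference of x^{2H} and dividing by H(2H − 1)n^{2H−2} gives a ratio tending to 1. The value H = 0.7 is an assumed example:

```python
# rho_{IZ}(n) = E(Z_1 (Z_{n+1} - Z_n)) as a second difference of x^(2H),
# compared with the claimed rate H (2H - 1) n^(2H-2).
def rho_fgn(n, H):
    """Autocovariance of fractional Gaussian noise at lag n, from (1.8)."""
    return 0.5 * ((n + 1) ** (2 * H) - 2 * n ** (2 * H) + (n - 1) ** (2 * H))

H = 0.7
for n in (10, 100, 1000, 10000):
    ratio = rho_fgn(n, H) / (H * (2 * H - 1) * n ** (2 * H - 2))
    print(n, ratio)   # the ratio tends to 1 as n grows
```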
Since we have an asymptotic approximation of the autocorrelation function, we may consider whether the preceding increment process is long-range dependent or not. To study that, we may use Definition 1.18 and (1.13).
Proposition 1.37. The increment process I_Z of fractional Brownian motion Z is

• long-range dependent if H > 1/2,

• short-range dependent if H < 1/2.
Proof. From the proof of Proposition 1.36 we obtain

|ρ_{I_Z}(n)| ≤ H(1 − 2H)(n − 1)^{2H−2}

for H < 1/2 and n ≥ 2. We consider the sum of the absolute values of the covariances and obtain

∑_{n=2}^∞ |H(2H − 1)(n − 1)^{2H−2}| = ∑_{n=1}^∞ |H(2H − 1) n^{2H−2}| < ∞,

since 2H − 2 < −1.
Using direct comparison we infer that

∑_{n=1}^∞ ρ_{I_Z}(n) < ∞

for H < 1/2. Applying again the proof of Proposition 1.36, we obtain

ρ_{I_Z}(n) ≥ H(2H − 1)(n + 1)^{2H−2}

for H > 1/2. We consider the sum and obtain

∑_{n=1}^∞ H(2H − 1)(n + 1)^{2H−2} = ∑_{n=2}^∞ H(2H − 1) n^{2H−2} = ∞,

since 2H − 2 > −1. Thus

∑_{n=1}^∞ H(2H − 1)(n + 1)^{2H−2}

diverges for H > 1/2, and therefore the sum ∑_{n=1}^∞ ρ_{I_Z}(n) also diverges by direct comparison.

We conclude that in the case H < 1/2 the sum ∑_{n=1}^∞ ρ_{I_Z}(n) is finite, and in the case H > 1/2 the sum ∑_{n=1}^∞ ρ_{I_Z}(n) is infinite. Using Definition 1.18 we conclude that the increment process I_Z is long-range dependent if H > 1/2, and short-range dependent if H < 1/2.
1.4.3 The fractional Ornstein–Uhlenbeck process, fOU(1)
To define a fractional Ornstein–Uhlenbeck process as a solution of the Langevin SDE, we proceed similarly as in the usual OU case, but we use fractional Brownian motion instead of Brownian motion. Consider the following linear stochastic differential equation:

dU_t^{(Z,α)} = −α U_t^{(Z,α)} dt + dZ_t, (1.14)

where α > 0. Using the integrating factor e^{αt} we obtain the solution as follows:

U_t^{(Z,α)} = e^{−αt} ( x + ∫_0^t e^{αs} dZ_s ), (1.15)

where x is some random initial value. This pathwise Riemann–Stieltjes integral does indeed exist (see, for example, Cheridito, Kawaguchi and Maejima [12]). For positive values s the following integration by parts holds:

∫_0^s e^{αu} dZ_u = e^{αs} Z_s − ∫_0^s α e^{αu} Z_u du. (1.16)
We define the two-sided fractional Brownian motion next:
Definition 1.38. Let Z^{(−)} = {Z_t^{(−)} : t ≥ 0} be another fractional Brownian motion, independent of Z, also starting from 0. For t ∈ R, the two-sided fractional Brownian motion is

Z_t := Z_t for t ≥ 0, and Z_t := Z_{−t}^{(−)} for t < 0.
Lemma 1.39. Let Z = {Z_t : t ∈ R} be the two-sided fractional Brownian motion. Then the formula

ξ := ∫_{−∞}^0 e^{αs} dZ_s (1.17)

yields a well-defined random variable.
Proof. We first extend (1.16) to negative values by

∫_s^0 e^{αu} dZ_u = −e^{αs} Z_s − ∫_s^0 α e^{αu} Z_u du. (1.18)

Then we have to prove that the limit of the right-hand side of (1.18) exists when s → −∞. If we prove that the integral on the right-hand side of (1.18) converges and that e^{αs} Z_s tends to zero, then the random variable ξ in (1.17) is well defined.

If we define

Z_t^{(o)} := t^{2H} Z_{−1/t}, t > 0,

then Z^{(o)} is a centered Gaussian process with the same covariance kernel as fractional Brownian motion, since the covariance of {t^{2H} Z_{−1/t} : t > 0} is given by

E( t^{2H} Z_{−1/t} s^{2H} Z_{−1/s} ) = (1/2) t^{2H} s^{2H} ( |−1/t|^{2H} + |−1/s|^{2H} − |−1/t + 1/s|^{2H} )
                                    = (1/2) ( s^{2H} + t^{2H} − |t − s|^{2H} ),

which is actually the same as the covariance of fractional Brownian motion {Z_t : t > 0}. We obtain

{Z_t^{(o)} : t > 0} d= {Z_{−t} : t > 0}.
We may prove that both of the processes

{Z_{−t} : t > 0} and {Z_t^{(o)} : t > 0}

go to zero when t tends to zero. By Definition 1.38 we know that Z_0 = 0. Using the Borel–Cantelli lemma, we first prove lim sup_{t→0+} Z_t^{(o)} = 0. Let ε > 0. Since fractional Brownian motion has continuous paths, we have

Z_0 = lim_{t→0+} Z_{−t} = 0
and thus lim sup_{t→0+} Z_{−t} = 0. This means that

P( Z_{−t} > ε i.o. ) = 0.

If

∑_{n=1}^∞ P( Z_{1/n}^{(o)} > ε ) < ∞,

then the Borel–Cantelli lemma implies that for every ε > 0

P( Z_{1/n}^{(o)} > ε i.o. ) = 0,

which means that lim sup_{t→0+} Z_t^{(o)} = 0 almost surely.
We manipulate the sum of probabilities as follows:

∑_{n=1}^∞ P( Z_{1/n}^{(o)} > ε ) = ∑_{n=1}^∞ P( (1/n)^{2H} Z_{−n} > ε )
                                = ∑_{n=1}^∞ P( (1/n)^{2H} n^H Z_{−1} > ε )
                                = ∑_{n=1}^∞ P( (1/n)^H Z_1 > ε )
                                = ∑_{n=1}^∞ P( Z_1 > ε n^H ).
For the term inside the summation on the right-hand side, we have the upper bound

P( Z_1 > ε n^H ) ≤ E( Z_1^{2k} ) / ( ε^{2k} n^{2kH} ), (1.19)

which is obtained using the inequality

E( Z_1^{2k} ) ≥ E( Z_1^{2k} 1_{{Z_1 > εn^H}} ) ≥ ( ε n^H )^{2k} P( Z_1 > ε n^H ).
Using (1.19), we infer

∑_{n=1}^∞ P( Z_{1/n}^{(o)} > ε ) = ∑_{n=1}^∞ P( Z_1 > ε n^H ) ≤ ∑_{n=1}^∞ E( Z_1^{2k} ) / ( ε^{2k} n^{2kH} ).

This sum converges for every H > 0, since we can choose 2k such that 2kH > 1, and then

∑_{n=1}^∞ 1 / n^{2kH} < ∞.
Hence

∑_{n=1}^∞ P( Z_{1/n}^{(o)} > ε ) < ∞.

Applying the Borel–Cantelli lemma, for every ε > 0 we have

P( Z_{1/n}^{(o)} > ε i.o. ) = 0,

and therefore we deduce that lim sup_{t→0+} Z_t^{(o)} = 0 a.s. for any Z^{(o)}.

Next we prove that lim inf_{t→0+} Z_t^{(o)} = 0 a.s. This follows from the fact that {−Z_t^{(o)} : t ≥ 0} is also fractional Brownian motion and, hence,

lim inf_{t→0+} Z_t^{(o)} = − lim sup_{t→0+} ( −Z_t^{(o)} ) = 0 a.s.

Therefore the property

lim_{t→0+} Z_t^{(o)} = lim_{t→0+} Z_{−t} = 0 a.s. (1.20)
holds. We still have to prove that the integral on the right-hand side of (1.18) exists when s tends to −∞. First we show that

| ∫_{−∞}^N α e^{αu} Z_u du | < ∞

for some negative N. To see this, we deduce from (1.20) that

lim_{s→−∞} Z_s / |s|^{2H} = lim_{t→0+} t^{2H} Z_{−1/t} = lim_{t→0+} Z_t^{(o)} = 0, (1.21)

and therefore for any ε > 0 there exists N < 0 such that |Z_s / |s|^{2H}| < ε for any s < N. Thus, we infer

| ∫_{−∞}^N α e^{αu} Z_u du | = | ∫_{−∞}^N α e^{αu} |u|^{2H} ( Z_u / |u|^{2H} ) du | ≤ ε | ∫_{−∞}^N α e^{αu} |u|^{2H} du | < ∞.
Since α > 0, we can use the same argument (1.21) to prove that e^{αs} Z_s tends to zero when s tends to −∞:

lim_{s→−∞} e^{αs} Z_s = lim_{s→−∞} e^{αs} |s|^{2H} ( Z_s / |s|^{2H} ) = 0,

thereby completing the proof.
Remark 1.40. 1) Note that for Brownian motion (H = 1/2) the result

lim_{s→−∞} Z_s / |s|^{2H} = 0 (1.22)

is an application of the Strong Law of Large Numbers (SLLN), that is,

(1/n) ∑_{i=0}^{n−1} ( B_{n−i} − B_{n−(i+1)} ) = B_n / n → E(B_1) = 0.

Hence the result (1.22) appears to be a kind of strong law of large numbers for fractional Brownian motion. Note that we cannot use the Kolmogorov strong law of large numbers for fractional Brownian motion, since the Kolmogorov strong law of large numbers is valid only for independent and identically distributed random variables.
2) A different proof that ξ is well defined can be found in Maslowski and Schmalfuss [39] and in Garrido-Atienza, Kloeden and Neuenkirch [21].
We apply the previous result: setting x = ξ in (1.15), we get

U_t^{(Z,α)} = e^{−αt} ∫_{−∞}^t e^{αs} dZ_s. (1.23)

Theorem 1.41. The process {U_t^{(Z,α)} : t ∈ R} is stationary.
Proof. In the case of a Gaussian process, by Theorem 1.3 stationarity follows from the equality

E( U_{t_1+h}^{(Z,α)} U_{t_2+h}^{(Z,α)} ) = E( U_{t_1}^{(Z,α)} U_{t_2}^{(Z,α)} )

for any h ∈ R. Let h ∈ R. Changing variables r := s − h in both integrals, we obtain

E( U_{t_1+h}^{(Z,α)} U_{t_2+h}^{(Z,α)} ) = E( ( e^{−α(t_1+h)} ∫_{−∞}^{t_1+h} e^{αs} dZ_s )( e^{−α(t_2+h)} ∫_{−∞}^{t_2+h} e^{αs} dZ_s ) )
  = E( ( e^{−α(t_1+h)} ∫_{−∞}^{t_1} e^{α(r+h)} dZ_{r+h} )( e^{−α(t_2+h)} ∫_{−∞}^{t_2} e^{α(r+h)} dZ_{r+h} ) ).

Since the increments of fractional Brownian motion are stationary, that is,

{ Z_{t+h} − Z_{s+h} : s, t ∈ R } d= { Z_t − Z_s : s, t ∈ R },

and the integrals are pathwise Riemann–Stieltjes integrals, which consist of limits of sums of increments of Z, we conclude

E( U_{t_1+h}^{(Z,α)} U_{t_2+h}^{(Z,α)} ) = E( ( e^{−αt_1} ∫_{−∞}^{t_1} e^{αr} dZ_r )( e^{−αt_2} ∫_{−∞}^{t_2} e^{αr} dZ_r ) ) = E( U_{t_1}^{(Z,α)} U_{t_2}^{(Z,α)} ),

thereby completing the proof.
Definition 1.42. The process U^{(Z,α)} = {U_t^{(Z,α)} : t ∈ R} introduced in (1.23) is called the stationary fractional Ornstein–Uhlenbeck process of the first kind, abbreviated fOU(1).
1.4.4 The Doob transformation of fBm

In the Brownian motion case, there are two ways to construct stationary Ornstein–Uhlenbeck processes: one via Definition 1.22 and the other via (1.4). We are interested in studying the same kind of situation for fractional Brownian motion. We will prove that for fractional Brownian motion these two processes have different finite dimensional distributions.

The time and space scaled fractional Ornstein–Uhlenbeck process is called a Doob transformation of fractional Brownian motion (as in the OU process case), see [16, Eq. (1.2.1)], or a Lamperti transformation of fBm, as in [12].

The Doob transformation is a deterministic time and space transformation, whereas the Lamperti transformation concerns the larger class of stochastic semi-stable processes; therefore we do not use the term Lamperti transformation for this process. The concept semi-stable means the same as self-similar in Definition 1.16, self-similarity being nowadays the more commonly used term. See, for example, Chaumont, Panti and Rivero [10].

With Ornstein–Uhlenbeck processes we did not use the concept of the Lamperti transformation since, in addition to the above, Ornstein–Uhlenbeck processes were described in the middle of the 20th century, whereas the concept of the Lamperti transformation of fBm did not appear until the latter part of the 20th century. In fact the Lamperti transformation is more than mere time scaling of a process.
Definition 1.43. Let Z = {Z_t : t ≥ 0} be fractional Brownian motion. Then the process X^{(D,α)} = {X_t^{(D,α)} : t ∈ R} is the Doob transformation of fBm (abbreviated fOU) if

X_t^{(D,α)} := e^{−αt} Z_{τ_t},

where t ∈ R, H ∈ (0, 1), α > 0 and τ_t = (H/α) e^{αt/H}.
2 Covariances and spectral density functions
2.1 Covariances and stationarity
2.1.1 Covariance of the Doob transformation of fBm

The covariance of the Doob transformation X^{(D,α)} of fBm may be computed directly from the covariance of fractional Brownian motion. Using the self-similarity of fBm, we infer that the random variable X_t^{(D,α)} is normally distributed with zero mean and variance (H/α)^{2H} for all t. The calculation of the covariance concurrently implies the stationarity of the process.
Proposition 2.1. The covariance of the Doob transformation of fBm is

E( X_t^{(D,α)} X_s^{(D,α)} ) = (1/2) (H/α)^{2H} ( e^{α(t−s)} + e^{−α(t−s)} − e^{α(t−s)} ( 1 − e^{−α(t−s)/H} )^{2H} ), (2.1)

when t > s, and therefore the Doob transformation of fBm is stationary.
Proof. Using the covariance of fBm with τ_t = (H/α) e^{αt/H}, we calculate the covariance as follows:

E( X_t^{(D,α)} X_s^{(D,α)} ) = E( e^{−αt} Z_{τ_t} e^{−αs} Z_{τ_s} ) (2.2)
  = e^{−α(t+s)} (1/2) (H/α)^{2H} ( e^{2αt} + e^{2αs} − e^{2αt} ( 1 − e^{−α(t−s)/H} )^{2H} ) (2.3)
  = (1/2) (H/α)^{2H} ( e^{α(t−s)} + e^{−α(t−s)} − e^{α(t−s)} ( 1 − e^{−α(t−s)/H} )^{2H} ).

Since the process X^{(D,α)} is Gaussian and its covariance function depends only on the time difference, the process is stationary.
2.1.2 Covariance of fOU(1)

The covariance function of fOU(1) is more complicated to calculate. The asymptotic formula for the covariance of U^{(Z,α)} is given by [12, Theorem 2.3].
Proposition 2.2 ([12], Th. 2.3). Let H ∈ (0, 1/2) ∪ (1/2, 1) and N = 1, 2, . . .. Then

E( U_s^{(Z,α)} U_{s+t}^{(Z,α)} ) = (1/2) ∑_{n=1}^N α^{−2n} ( ∏_{k=0}^{2n−1} (2H − k) ) t^{2H−2n} + O( t^{2H−2N−2} ), (2.4)

for s ∈ R and as t → ∞.
2.2 On asymptotic behaviour of the Doob transformation of fBm

The OU process introduced in Definition 1.22, called the Doob transformation, is short-range dependent. Similarly, the OU process defined in (1.4) is also short-range dependent. This follows from their covariance functions. This is one of the reasons why we consider the behaviour of the covariances of fOU and fOU(1) for large values of t. Hence, without loss of generality we may assume that t ≥ 0.

Recall that both processes {X_t^{(D,α)} : t ∈ R} and {U_t^{(Z,α)} : t ∈ R} are stationary.
Theorem 2.3 ([29], Prop. 3.1). The Doob transformation of fBm {X_t^{(D,α)} : t ≥ 0} is short-range dependent for all H ∈ (0, 1).
Proof. We start by writing the covariance of X_t^{(D,α)} in a more usable form. Using the Taylor series

(1 + x)^{2H} = 1 + ∑_{n=1}^∞ ( 2H(2H − 1) · · · (2H − n + 1) / n! ) x^n, when |x| < 1, (2.5)

and setting x = −e^{−αt/H}, we infer

( 1 − e^{−αt/H} )^{2H} = 1 + ∑_{n=1}^∞ ( 2H(2H − 1) · · · (2H − n + 1) / n! ) ( −e^{−αt/H} )^n = 1 + ∑_{n=1}^∞ C(2H, n) ( −e^{−αt/H} )^n, (2.6)

since |e^{−αt/H}| < 1 and αt/H > 0 for α, t > 0 and H ∈ (0, 1); here C(2H, n) denotes the generalized binomial coefficient.

Using Proposition 2.1 and (2.6) we compute

E( X_t^{(D,α)} X_0^{(D,α)} ) = (1/2) (H/α)^{2H} ( e^{−αt} − e^{αt−αt/H} ∑_{n=1}^∞ C(2H, n) (−1)^n e^{−αt(n−1)/H} )
  = (1/2) (H/α)^{2H} ( e^{−αt} − e^{−αt(1/H−1)} ∑_{n=1}^∞ C(2H, n) (−1)^n e^{−αt(n−1)/H} ) (2.7)
  = (1/2) (H/α)^{2H} e^{−αt} − (1/2) (H/α)^{2H} e^{−αt(1/H−1)} ∑_{n=1}^∞ C(2H, n) (−1)^n e^{−αt(n−1)/H}.
Obviously, the sum arising from the first part of the previous covariance, namely

∑_{n=1}^∞ (1/2) (H/α)^{2H} e^{−αn},

converges as a geometric series. Therefore we consider the sum in the latter part. We find an upper bound for it as follows:

| ∑_{n=1}^∞ C(2H, n) (−1)^n e^{−αt(n−1)/H} | = | ∑_{n=0}^∞ C(2H, n+1) (−1)^{n+1} e^{−αtn/H} |
  = | −2H + ∑_{n=1}^∞ ( (n − 2H)/(n + 1) ) C(2H, n) (−1)^n e^{−αtn/H} |
  ≤ |−2H| + | ∑_{n=1}^∞ C(2H, n) (−1)^n e^{−αtn/H} |, (2.8)
since |(n − 2H)/(n + 1)| < 1. This bound in (2.8) is allowed, since the sum

∑_{n=1}^∞ ( (n − 2H)/(n + 1) ) C(2H, n) (−1)^n e^{−αtn/H}

is not actually alternating; the sign of C(2H, n)(−1)^n is determined by the signs of the factors of C(2H, n):

• if H < 1/2, the factor 2H is positive and the remaining n − 1 factors 2H − 1, . . . , 2H − n + 1 are all negative, so the terms have a common sign for every n;

• if H > 1/2, the factors 2H and 2H − 1 are positive and the remaining n − 2 factors are negative, so the terms have a common sign from n = 2 onwards.
Using (2.6), we obtain

| ∑_{n=1}^∞ C(2H, n) (−1)^n e^{−αt(n−1)/H} | ≤ |−2H| + | ( 1 − e^{−αt/H} )^{2H} − 1 | < 4.

Hence

| e^{−αt(1/H−1)} ∑_{n=1}^∞ C(2H, n) (−1)^n e^{−αt(n−1)/H} | < 4 e^{−αt(1/H−1)}.

Note that H ∈ (0, 1) implies that 1/H − 1 > 0. Since 0 < 1/H − 1 ≤ 1 for H ≥ 1/2, it holds that

| E( X_t^{(D,α)} X_0^{(D,α)} ) | ≤ K e^{−αt(1/H−1)}

for some K ∈ R_+, and further

E( X_t^{(D,α)} X_0^{(D,α)} ) = O( e^{−αt(1/H−1)} ) (2.9)
by Remark 1.17. In the case H < 1/2, it holds that

E( X_t^{(D,α)} X_0^{(D,α)} ) = O( e^{−αt} ). (2.10)

Thus we have

∑_{n=1}^∞ E( X_n^{(D,α)} X_0^{(D,α)} ) ≤ K ∑_{n=1}^∞ e^{−αn(1/H−1)} < ∞ (2.11)

or

∑_{n=1}^∞ E( X_n^{(D,α)} X_0^{(D,α)} ) ≤ K ∑_{n=1}^∞ e^{−αn} < ∞. (2.12)

Applying Definition 1.18, we note that the process {X_t^{(D,α)} : t ≥ 0} is short-range dependent.
We obtain the following corollary directly from the proof of the previous theorem.
Corollary 2.4. For the covariance function of the Doob transformation of fBm {X_t^{(D,α)} : t ≥ 0}, we have

E( X_t^{(D,α)} X_0^{(D,α)} ) = O( e^{−αt} ), if H < 1/2,

E( X_t^{(D,α)} X_0^{(D,α)} ) = O( e^{−αt(1/H−1)} ), if H ≥ 1/2.
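Corollary 2.4 can be observed directly from the closed form (2.1): multiplying the stationary covariance by e^{αt(1/H−1)} gives a bounded quantity for H ≥ 1/2. A sketch with the assumed values H = 0.7 and α = 1, restricted to moderate t so that the difference in (2.1) does not suffer floating-point cancellation:

```python
# The covariance (2.1) decays at the rate e^{-alpha t (1/H - 1)} for H >= 1/2:
# the rescaled values settle to a constant instead of growing or vanishing.
import math

def fou_doob_cov(t, H, alpha):
    """Stationary covariance (2.1) with s = 0, valid for t > 0."""
    return 0.5 * (H / alpha) ** (2 * H) * (
        math.exp(alpha * t) + math.exp(-alpha * t)
        - math.exp(alpha * t) * (1 - math.exp(-alpha * t / H)) ** (2 * H))

H, alpha = 0.7, 1.0            # assumed example values with H >= 1/2
rate = alpha * (1 / H - 1)
for t in (4.0, 8.0, 12.0):
    print(t, fou_doob_cov(t, H, alpha) * math.exp(rate * t))
```

The printed ratios stabilize, consistent with an exponential decay at exactly the rate α(1/H − 1).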
Theorem 2.5 ([29], Prop. 2.4). The fractional Ornstein–Uhlenbeck process of the first kind {U_t^{(Z,α)} : t ∈ R} is long-range dependent if H > 1/2, and short-range dependent if H < 1/2.
Proof. Let c ∈ R be arbitrary. Using (2.4) of Proposition 2.2 in the case H > 1/2, we obtain

∑_{n=1}^∞ |ρ_{U^{(Z,α)}}(n)| = ∑_{n=1}^∞ | E( U_c^{(Z,α)} U_{c+n}^{(Z,α)} ) |
  = ∑_{n=1}^∞ | (1/2) ∑_{k=1}^N α^{−2k} ( ∏_{i=0}^{2k−1} (2H − i) ) n^{2H−2k} + O( n^{2H−2N−2} ) |
  = ∑_{n=1}^∞ ( (1/2) ∑_{k=1}^N α^{−2k} ( ∏_{i=0}^{2k−1} (2H − i) ) n^{2H−2k} + O( n^{2H−2N−2} ) )
  = (1/2) ∑_{n=1}^∞ α^{−2} 2H(2H − 1) n^{2H−2} (2.13)
  + (1/2) ∑_{n=1}^∞ ∑_{k=2}^N α^{−2k} ( ∏_{i=0}^{2k−1} (2H − i) ) n^{2H−2k} (2.14)
  + ∑_{n=1}^∞ O( n^{2H−2N−2} ). (2.15)
2.3. Spectral density function 33
We prove that the sums (2.14) and (2.15) are finite. First, we consider the term
뱉2k
(2kâ1âi=0
(2H â i))n2Hâ2k inside the sums of (2.14). Since it is positive and finite, we
use the Fubini theorem (see Rudin [54, Theorem 7.8.]) to obtainâân=0
Nâk=2
뱉2k
(2kâ1âi=0
(2H â i))n2Hâ2k =
Nâk=2
뱉2k
(2kâ1âi=0
(2H â i)) âân=0
n2Hâ2k
< â,
sinceâân=0
n2Hâ2k converges. The sum (2.15) converges, since 2H â 2N â 2 is always
less than â1. We know that the sum of covariancesâân=0|ÏU(Z,α)(n)| converges only if
12
âân=0
αâ22H(2Hâ1)n2Hâ2 converges. Hence in the case H > 12 ,âân=0|ÏU(Z,α)(n)| diverges.
Lastly, we consider the case H < 1/2. In this case the sum $-\frac{1}{2} \sum_{n=0}^{\infty} \alpha^{-2}\, 2H(2H-1)\, n^{2H-2}$ converges, since H < 1/2, and therefore $\sum_{n=0}^{\infty} |\rho_{U^{(Z,\alpha)}}(n)|$ converges by the comparison test. Consequently the series $\sum_{n=0}^{\infty} |\rho_{U^{(Z,\alpha)}}(n)|$ converges if H < 1/2 and diverges if H > 1/2, thereby completing the proof.
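The dichotomy in the proof comes from the dominant term (2.13), essentially the series $\sum n^{2H-2}$. A small sketch of ours (not from the thesis) contrasts the tail growth of the partial sums in the two regimes; the sum is started at n = 1 to avoid the vacuous n = 0 term.

```python
def partial_sum(H, N):
    # partial sum of n^(2H-2), the dominant term (2.13)
    return sum(n ** (2 * H - 2) for n in range(1, N + 1))

# H > 1/2: exponent 2H-2 > -1, the tails keep growing (long-range dependence)
tail_lrd = partial_sum(0.8, 100000) - partial_sum(0.8, 10000)
# H < 1/2: exponent 2H-2 < -1, the tails are negligible (short-range dependence)
tail_srd = partial_sum(0.3, 100000) - partial_sum(0.3, 10000)
print(tail_lrd, tail_srd)
```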
With the next corollary we show that the processes fOU and fOU(1) do not have the same finite dimensional distributions, following the idea presented in Kaarakka [28] (see also [12]). We present this corollary and its proof here, since it follows easily from the previous theorems.
Corollary 2.6. The fractional Ornstein–Uhlenbeck process of the first kind and the Doob transformation of fBm do not have the same finite dimensional distributions.
Proof. fOU(1) is long-range dependent if H > 1/2, and short-range dependent if H < 1/2, by Theorem 2.5. The Doob transformation of fBm is short-range dependent for all H ∈ (0, 1) by Theorem 2.3. These processes are Gaussian, and the covariance of a Gaussian process determines its distribution uniquely; therefore their finite dimensional distributions are not the same when H > 1/2. If H < 1/2, both of these processes are short-range dependent, but the covariance of the Doob transformation X^{(D,α)} vanishes exponentially due to (2.10), while the covariance of fOU(1), U^{(Z,α)}, vanishes as a power function on the strength of Proposition 2.2. This establishes the desired property.
2.3 Spectral density function
Bochner 1 proved a theorem stating that every non-negative definite function can be represented as a Fourier transformation, or Riemann–Stieltjes integral, where the integrator is an odd
1Salomon Bochner (1899-1982), American mathematician (born in Poland).
increasing function. Since every stationary Gaussian process has a non-negative definite covariance function of one variable (Theorem 1.4), this theorem may be applied to find an odd increasing function Δ, whose derivative Δâ€Č is called the spectral density function of the process (see [18]). This function is used to find a trigonometric isometry between the Gaussian Hilbert space G (that is, the Hilbert space formed by the Gaussian process) and $L^2(\mathbb{R}, d\Delta)$ via the formula (from Theorem 2.7, which we will prove later in this section)
$$E(X_{t_1} X_{t_2}) = \int e^{i\gamma t_1}\, \overline{e^{i\gamma t_2}}\, d\Delta(\gamma), \qquad (2.16)$$
for all t₁, t₂ ∈ ℝ. This is the main tool of prediction, since it connects the two spaces and their elements.
The spectral density function is used in prediction. Dym and McKean described a prediction method in their book [18], and we give here a brief presentation of this idea. Lamperti also studied the same problem in his book [36].
Let G be the Gaussian Hilbert space and let $\{X_t : t \in \mathbb{R}\} \subset G$ be a random process with the spectral function Δ. Then $\{X_t : t \in \mathbb{R}\}$ is Gaussian and the conditional probability $P(X_T \le x \mid X_t : t \le 0)$ also has a Gaussian distribution. Prediction means that we solve the conditional expectation and the conditional mean square error
$$m_c = E\left( X_T \mid X_t : t \le 0 \right) \quad \text{and} \quad Q_c = E\left( (X_T - m_c)^2 \mid X_t : t \le 0 \right).$$
Actually the conditional expectation m_c is the perpendicular projection of X_T onto the space generated by $\{X_t : t \le 0\}$. In other words, it is the best linear approximation to X_T given $\{X_t : t \le 0\}$, and in the Gaussian case the linear approximation is the best possible. Dym and McKean describe a way to solve m_c and Q_c for stationary Gaussian processes.
We may calculate the spectral density function Δâ€Č(γ) using the covariance function of X. If we have the spectral density function of a stationary Gaussian process satisfying a certain condition, then using the spectral density function we obtain the connection between the spaces G and $L^2(\mathbb{R}, d\Delta)$.
Let $L'_{[a,b]}$ be the closed subspace of $L^2(\mathbb{R}, d\Delta)$ spanned by $\{e^{i\gamma t} : a \le t \le b\}$. In this particular case, the conditional expectation of X_T is actually a projection onto $L'_{(-\infty,0)}$. To calculate the projection, we apply another isometry, between a Hardy class $H^2_+$ in the upper half plane and the space of the Fourier transformations of the space $L'_{(-\infty,0)}$. Then we carry out the projection using the Hardy class and the outer function of the Hardy class. These classes and their properties are all described precisely in [18].
Prediction theory is very interesting and applicable, for example, in finance, and it might be one object of our interest in the future, but it does not belong to the main focus of this dissertation, so we concentrate on the isometry (2.16).
The next theorem and the sketch of its proof are from [18], where the writers state that "the proof is adapted from Carleman [1944]." In this chapter there are many integrals where the integration is over the whole real line; for notational simplicity, we do not indicate this in the proof. Instead of "nondecreasing", which was used in [18], we use the term "increasing" here, meaning the same.
In the proof of the Bochner Theorem we need the Fourier transformation and its inverse transformation, therefore we define these transformations. The Fourier transformation of a function f is
$$f^{\wedge}(\gamma) = \int f(x)\, e^{i\gamma x}\, dx,$$
and its inverse Fourier transformation is
$$f^{\vee}(x) = \frac{1}{2\pi} \int f(\gamma)\, e^{-i\gamma x}\, d\gamma.$$
The Fourier transformation and its inverse exist (see, for example, Kaplan [30, p. 519]) if
$$\int_{-\infty}^{\infty} |f(\gamma)|\, d\gamma < \infty. \qquad (2.17)$$
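Under this convention the transform of $f(t) = e^{-b|t|}$ is $\frac{2b}{\gamma^2 + b^2}$, the pair that reappears in (2.26) below. A rough numerical sketch of ours (a truncated Riemann sum; the grid parameters are arbitrary choices):

```python
import numpy as np

b, gamma0 = 1.5, 2.0
t = np.linspace(-60.0, 60.0, 600001)   # truncation of the real line
dt = t[1] - t[0]
# f^(gamma) = int e^{-b|t|} e^{i gamma t} dt; the imaginary part vanishes by symmetry
f_hat = np.sum(np.exp(-b * np.abs(t)) * np.cos(gamma0 * t)) * dt
print(f_hat, 2 * b / (gamma0 ** 2 + b ** 2))
```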
Theorem 2.7 (The Bochner Theorem). If the covariance function Q of a stationary Gaussian process X is continuous at zero, then Q can be expressed as
$$Q(t) = \int e^{i\gamma t}\, d\Delta(\gamma), \qquad (2.18)$$
with an odd increasing function Δ satisfying $\lim_{\gamma\to\infty} \Delta(\gamma) < \infty$.
The proof of the Bochner theorem is long and complicated. In many books it is written very briefly and details are omitted. For the sake of completeness we want to clarify its details.
In the Bochner theorem, the covariance is a function of t. Substituting t := t₂ − t₁ in (2.18), we have
$$Q(t_2 - t_1) = \int e^{i\gamma(t_2 - t_1)}\, d\Delta(\gamma) = \int e^{i\gamma t_2}\, \overline{\left( e^{i\gamma t_1} \right)}\, d\Delta(\gamma).$$
Proof (The Bochner Theorem). We first give an outline of the proof.
‱ We start by defining $u(\omega) = u(a, b) = \int e^{i(at + bi|t|)} Q(t)\, dt$.
‱ Then we prove that in the upper half plane the function u(ω) is bounded, harmonic and non-negative.
‱ Since u(ω) is bounded, harmonic and non-negative in the upper half plane, we are allowed to apply the Poisson formula and obtain $u(\omega) = \frac{b}{\pi} \int \frac{d\Delta(\gamma)}{(\gamma - a)^2 + b^2}$, where Δ is an odd increasing function.
‱ Finally we use the inverse Fourier transform to recover the function Q(t).
Note that ω = a + bi throughout this proof. Consider the Fourier transformation given by
$$u(\omega) = \left[ e^{-b|t|} Q(t) \right]^{\wedge}(a) = \int e^{iat} e^{-b|t|} Q(t)\, dt = \int e^{i(at + bi|t|)} Q(t)\, dt. \qquad (2.19)$$
First we prove that u(ω) is bounded and harmonic in the upper half plane. Let b > 0; then
$$|u(\omega)| = \left| \int e^{iat} e^{-b|t|} Q(t)\, dt \right| \le \int |e^{iat}|\, e^{-b|t|}\, |Q(t)|\, dt \le \max_{t \ge 0} |Q(t)| \int e^{-b|t|}\, dt \le \frac{2}{b} \max_{t \ge 0} |Q(t)| = \frac{2}{b} \|Q\|_{\infty} = \frac{2}{b} Q(0), \qquad (2.20)$$
where the last equality follows from Lemma 1.14, which says that the maximum value of Q is attained at zero. Hence u(ω) is bounded.
To prove that u(ω) is harmonic, we compute the Laplacian of u(ω) as follows:
$$\frac{\partial^2}{\partial a^2} u(\omega) + \frac{\partial^2}{\partial b^2} u(\omega) = \frac{\partial^2}{\partial a^2} \int e^{iat} e^{-b|t|} Q(t)\, dt + \frac{\partial^2}{\partial b^2} \int e^{iat} e^{-b|t|} Q(t)\, dt. \qquad (2.21)$$
We are allowed to change the order of the integration and the differentiation (or, to be precise, the limit)
$$\frac{\partial}{\partial x} \int f(t, x)\, dt$$
by the Lebesgue Dominated Convergence Theorem (see, for example, Royden [53] and more precisely Ash [1, p. 52]), if
‱ the absolute value of the integrand is integrable for any x: $\int |f(t, x)|\, dt < \infty$,
‱ the absolute value of the derivative of the integrand is bounded by some integrable function, i.e. $\left| \frac{\partial}{\partial x} f(t, x) \right| \le N(t)$ and $\int N(t)\, dt < \infty$.
In this case, the integrand is bounded by an integrable function, since
$$\left| e^{iat} e^{-b|t|} Q(t) \right| \le \left| e^{-b|t|} Q(t) \right| \le \left| e^{-b|t|} Q(0) \right|$$
and
$$\int \left| e^{-b|t|} Q(0) \right| dt = |Q(0)| \int e^{-b|t|}\, dt = \frac{2}{b} |Q(0)| < \infty.$$
The derivative of the integrand is also dominated by an integrable function, since
$$\left| \frac{\partial}{\partial a}\, e^{iat} e^{-b|t|} Q(t) \right| = \left| i t\, e^{iat} e^{-b|t|} Q(t) \right| \le \left| t\, e^{-b|t|} Q(0) \right|,$$
where
$$|Q(0)| \int \left| t\, e^{-b|t|} \right| dt = \frac{2 |Q(0)|}{b^2} < \infty.$$
Also in the second term of (2.21), the absolute value of the derivative of the integrand is dominated by an integrable function:
$$\left| \frac{\partial}{\partial b}\, e^{iat} e^{-b|t|} Q(t) \right| = \left| (-|t|)\, e^{iat} e^{-b|t|} Q(t) \right| \le \left| |t|\, e^{-b|t|} Q(0) \right|,$$
where
$$|Q(0)| \int \left| |t|\, e^{-b|t|} \right| dt = \frac{2 |Q(0)|}{b^2} < \infty.$$
Hence we may change the order of integration and differentiation:
$$\frac{\partial^2 u(\omega)}{\partial a^2} + \frac{\partial^2 u(\omega)}{\partial b^2} = \frac{\partial}{\partial a} \int \frac{\partial}{\partial a}\, e^{iat} e^{-b|t|} Q(t)\, dt + \frac{\partial}{\partial b} \int \frac{\partial}{\partial b}\, e^{iat} e^{-b|t|} Q(t)\, dt \qquad (2.22)$$
$$= \frac{\partial}{\partial a} \int i t\, e^{iat} e^{-b|t|} Q(t)\, dt + \frac{\partial}{\partial b} \int (-|t|)\, e^{iat} e^{-b|t|} Q(t)\, dt. \qquad (2.23)$$
Similarly we may change the order of the integration and the differentiation again, since
$$\left| i t\, e^{iat} e^{-b|t|} Q(t) \right| \le \left| t\, e^{-b|t|} Q(0) \right| \quad \text{and} \quad \left| (-|t|)\, e^{iat} e^{-b|t|} Q(t) \right| \le \left| t\, e^{-b|t|} Q(0) \right|,$$
where $\int \left| t\, e^{-b|t|} Q(0) \right| dt < \infty$. Moreover,
$$\left| \frac{\partial}{\partial a}\, i t\, e^{iat} e^{-b|t|} Q(t) \right| = \left| t^2 e^{iat} e^{-b|t|} Q(t) \right| \le \left| t^2 e^{-b|t|} Q(0) \right|$$
and
$$\left| \frac{\partial}{\partial b}\, t\, e^{iat} e^{-b|t|} Q(t) \right| = \left| t^2 e^{iat} e^{-b|t|} Q(t) \right| \le \left| t^2 e^{-b|t|} Q(0) \right|,$$
where
$$|Q(0)| \int \left| t^2 e^{-b|t|} \right| dt = \frac{4 |Q(0)|}{b^3} < \infty.$$
Hence
$$\frac{\partial^2 u(\omega)}{\partial a^2} + \frac{\partial^2 u(\omega)}{\partial b^2} = \int \frac{\partial}{\partial a}\, i t\, e^{iat} e^{-b|t|} Q(t)\, dt + \int \frac{\partial}{\partial b}\, (-|t|)\, e^{iat} e^{-b|t|} Q(t)\, dt$$
$$= -\int t^2 e^{iat} e^{-b|t|} Q(t)\, dt + \int t^2 e^{iat} e^{-b|t|} Q(t)\, dt = 0,$$
and therefore u(ω) is harmonic.
We also prove that u(ω) is non-negative. Since
$$-\frac{1}{b} \cdot \frac{1}{2} \left( \lim_{k \to -\infty} \left( -1 + e^{bk} \right) + \lim_{h \to \infty} \left( e^{-bh} - 1 \right) \right) = \frac{1}{b},$$
we infer
$$\frac{1}{b} = -\frac{1}{b} \cdot \frac{1}{2} \int_{-\infty}^{\infty} (-b)\, e^{-b|t|}\, dt.$$
Substituting this equation into (2.19):
$$\frac{1}{b}\, u(\omega) = \frac{1}{2} \left( -\frac{1}{b} \right) \int (-b)\, e^{-b|t_1|}\, dt_1 \int e^{iat_2} e^{-b|t_2|} Q(t_2)\, dt_2 = \frac{1}{2} \iint e^{-b|t_1|}\, e^{iat_2} e^{-b|t_2|} Q(t_2)\, dt_1\, dt_2. \qquad (2.24)$$
We change the variables t₁ to t₁ + t₂ and t₂ to t₁ − t₂ and obtain further
$$\frac{1}{b}\, u(\omega) = \iint e^{ia(t_1 - t_2)}\, e^{-b(|t_1 + t_2| + |t_1 - t_2|)}\, Q(t_1 - t_2)\, dt_1\, dt_2.$$
Since
$$|t_1 + t_2| + |t_1 - t_2| = |t_1| + |t_2| + \big|\, |t_1| - |t_2| \,\big| \qquad (2.25)$$
for all t₁, t₂, and
$$e^{-b|t|} = \frac{b}{\pi} \int_{-\infty}^{\infty} \frac{e^{i\gamma t}}{\gamma^2 + b^2}\, d\gamma \qquad (2.26)$$
(see, for example, Erdélyi (editor) [20, p. 118]), we obtain
$$\frac{1}{b}\, u(\omega) = \iint e^{iat_1} e^{-b|t_1|}\, e^{-iat_2} e^{-b|t_2|} \cdot \frac{b}{\pi} \int \frac{e^{i\gamma(|t_1| - |t_2|)}}{\gamma^2 + b^2}\, d\gamma\; Q(t_1 - t_2)\, dt_1\, dt_2$$
$$= \frac{b}{\pi} \iiint e^{iat_1 - b|t_1| + i\gamma|t_1|}\, e^{-iat_2 - b|t_2| - i\gamma|t_2|} \cdot \frac{1}{\gamma^2 + b^2}\, Q(t_1 - t_2)\, dt_1\, dt_2\, d\gamma$$
$$= \frac{b}{\pi} \int \frac{1}{\gamma^2 + b^2} \iint f_\gamma(t_1)\, \overline{f_\gamma(t_2)}\, Q(t_1 - t_2)\, dt_1\, dt_2\, d\gamma,$$
where $f_\gamma(t) = e^{-b|t| + i(at + \gamma|t|)}$ and $\overline{f_\gamma(t_2)}$ is its complex conjugate.
We may approximate the double integral by a limit of sums as follows:
$$\iint f_\gamma(t_1)\, \overline{f_\gamma(t_2)}\, Q(t_1 - t_2)\, dt_1\, dt_2 = \lim_{n\to\infty} \sum_{j=-n}^{n} \sum_{k=-n}^{n} f_\gamma\!\left(\frac{j}{n}\right) \overline{f_\gamma\!\left(\frac{k}{n}\right)}\, Q\!\left(\frac{j-k}{n}\right) \frac{2}{n} \cdot \frac{2}{n}$$
$$= \lim_{n\to\infty} \sum_{j=-n}^{n} \sum_{k=-n}^{n} f_\gamma\!\left(\frac{j}{n}\right) \overline{f_\gamma\!\left(\frac{k}{n}\right)}\, E\!\left( X_{\frac{j}{n}} X_{\frac{k}{n}} \right) \frac{2}{n} \cdot \frac{2}{n}$$
$$= \lim_{n\to\infty} E\left( \sum_{j=-n}^{n} f_\gamma\!\left(\frac{j}{n}\right) X_{\frac{j}{n}}\, \frac{2}{n}\; \overline{\sum_{k=-n}^{n} f_\gamma\!\left(\frac{k}{n}\right) X_{\frac{k}{n}}\, \frac{2}{n}} \right) = \lim_{n\to\infty} E\left| \sum_{j=-n}^{n} f_\gamma\!\left(\frac{j}{n}\right) X_{\frac{j}{n}}\, \frac{2}{n} \right|^2 \ge 0.$$
Consequently the function u is non-negative.
Every positive harmonic function on the upper half plane admits the Poisson representation (see, for example, [18, Section 1.2])
$$u(\omega) = kb + \frac{b}{\pi} \int \frac{d\Delta(\gamma)}{|\gamma - \omega|^2}, \qquad (2.27)$$
where k ≄ 0 is a constant and Δ is an increasing function with
$$\int \frac{d\Delta(\gamma)}{\gamma^2 + 1} < \infty. \qquad (2.28)$$
The right-hand side of (2.27) is unbounded if k ≠ 0. Since u is bounded, k must equal zero for all $b \in \mathbb{R}_+$. Hence we obtain
$$u(\omega) = \frac{b}{\pi} \int \frac{d\Delta(\gamma)}{(\gamma - a)^2 + b^2}. \qquad (2.29)$$
Formula (2.29) is valid for an increasing function Δ that is integrable in the sense of inequality (2.28).
To prove that Δ(γ) is odd, we consider the function $u_b := u(a, b)$ for b > 0:
$$u_b(a) = \int_{-\infty}^{\infty} e^{i(at + ib|t|)} Q(t)\, dt.$$
Since Q is even by Corollary 1.12, substituting t = −s we obtain
$$u_b(a) = -\int_{\infty}^{-\infty} e^{i(-as + ib|-s|)} Q(-s)\, ds = \int_{-\infty}^{\infty} e^{i(-as + ib|s|)} Q(s)\, ds = u_b(-a),$$
hence u_b is even. Applying the Poisson formula (2.29), we note that Δ(γ) must be odd.
The condition $\lim_{\gamma\to\infty} \Delta(\gamma) < \infty$ is valid, since
$$2\left( \Delta(N) - 0 \right) = 2\int_0^N d\Delta(\gamma) = 2\int_0^N \lim_{b\to\infty} \frac{b^2}{\gamma^2 + b^2}\, d\Delta(\gamma).$$
Changing the order of the integration and the limit by the Lebesgue Convergence Theorem [53, Ch. 11, Sec. 3, Theorem 16] and applying formula (2.20), we obtain further
$$2\left( \Delta(N) - 0 \right) \le \lim_{b\to\infty} 2 b^2 \int_0^{\infty} \frac{d\Delta(\gamma)}{\gamma^2 + b^2} = \lim_{b\to\infty} \pi b\, u(0, b) \le 2\pi \|Q\|_{\infty} < \infty.$$
Finally, we use the inverse formula of $u(a, b) = \left[ e^{-b|t|} Q(t) \right]^{\wedge}$. The constant b is fixed and we make the transformation with respect to a. Note that in the following calculation the equality signs hold almost everywhere:
$$A := e^{-b|t|} Q(t) = \left[ e^{-b|t|} Q(t) \right]^{\wedge\vee} = u(a, b)^{\vee} = \left[ \frac{b}{\pi} \int_{-\infty}^{\infty} \frac{d\Delta(\gamma)}{(\gamma - a)^2 + b^2} \right]^{\vee}$$
$$= \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-iat}\, \frac{b}{\pi} \int_{-\infty}^{\infty} \frac{d\Delta(\gamma)}{(\gamma - a)^2 + b^2}\, da.$$
If we make the substitution s = γ − a and then change the order of integration using the Fubini Theorem in [54, Theorem 7.8.], we obtain
$$A = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left( \frac{b}{\pi} \int_{-\infty}^{\infty} \frac{e^{ist}}{s^2 + b^2}\, ds \right) e^{-i\gamma t}\, d\Delta(\gamma).$$
Applying (2.26) we compute further
$$A = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-b|t|}\, e^{-i\gamma t}\, d\Delta(\gamma) = e^{-b|t|}\, \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\gamma t}\, d\Delta(\gamma).$$
Now we have proved that
$$Q(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\gamma t}\, d\Delta(\gamma) \quad \text{a.e., for all } t. \qquad (2.30)$$
Since Q is continuous and the right-hand side of (2.30) is a continuous function of t, we infer that equality (2.30) holds everywhere. An adjustment of the minus sign of γ and of the factor 2π completes the proof as follows:
$$Q(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\gamma t}\, d\Delta(\gamma) = \frac{1}{2\pi} \int_{\infty}^{-\infty} e^{-i(-\gamma)t}\, d\Delta(-\gamma) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\gamma t}\, d\Delta(\gamma);$$
if we embed the factor $\frac{1}{2\pi}$ into the function Δ, we obtain the assertion.
It is possible to perform the Lebesgue decomposition of the measure induced by Δ, denoted dΔ(γ), as follows:
$$d\Delta(\gamma) = d\tilde{\Delta}(\gamma) + \Delta'(\gamma)\, d\gamma,$$
see, for example, [53, Ch. 11, Sec. 6, Prop. 24]. The part $d\tilde{\Delta}(\gamma)$ is singular with respect to the Lebesgue measure dγ, while the part $\Delta'(\gamma)\, d\gamma$ is absolutely continuous with respect to dγ. We recall that
‱ $\Delta'(\gamma)\, d\gamma$ is absolutely continuous with respect to dγ, that is, if $\int_A d\gamma = 0$, then $\int_A \Delta'(\gamma)\, d\gamma = 0$ for every Lebesgue measurable set A;
‱ $d\tilde{\Delta}(\gamma)$ is singular with respect to dγ, that is, there exists a Lebesgue measurable set E such that $\int_E d\gamma = 0$ and $\int_{\Omega\setminus E} d\tilde{\Delta}(\gamma) = 0$.
We have a Riemann–Stieltjes measure Δ, which can be regarded as a Borel measure. Every Borel measure can be decomposed by the Lebesgue decomposition theory as above. Hence
$$\int e^{i\gamma t}\, d\Delta(\gamma) = \int e^{i\gamma t}\, \Delta'(\gamma)\, d\gamma + \int e^{i\gamma t}\, d\tilde{\Delta}(\gamma).$$
The function Δâ€Č is called the spectral density of the underlying Gaussian process.
2.4 Spectral density functions of OU and fOU
In this section we apply the Bochner theorem to find the spectral density functions of the Ornstein-Uhlenbeck process and the Doob transformation of fBm (fOU).
2.4.1 Spectral density function of the Ornstein–Uhlenbeck process
We calculate the spectral density function of the OU process. First we manipulate the covariance function, and using formula (2.26) we obtain the spectral density function. Let t > 0; then
$$Q(t) = E(X_t X_0) = \frac{e^{-\alpha t}}{2\alpha} = \frac{e^{-\alpha|t|}}{2\alpha} = \frac{1}{2\alpha} \int_{-\infty}^{\infty} e^{i\gamma t}\, \frac{\alpha}{\pi}\, \frac{d\gamma}{\gamma^2 + \alpha^2} = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i\gamma t}\, \frac{d\gamma}{\gamma^2 + \alpha^2}.$$
Hence the spectral density function of the Ornstein–Uhlenbeck process is
$$\Delta'(\gamma) = \frac{1}{2\pi}\, \frac{1}{\gamma^2 + \alpha^2}.$$
Note also that
$$\int_{-\infty}^{\infty} \left| \frac{1}{2\pi}\, \frac{1}{\gamma^2 + \alpha^2} \right| d\gamma = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{d\gamma}{\gamma^2 + \alpha^2} = \frac{1}{2\alpha} < \infty.$$
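This computation can be double-checked numerically: integrating the density against $e^{i\gamma t}$ over a truncated line should return $e^{-\alpha|t|}/(2\alpha)$. A sketch of ours, not from the thesis (grid and truncation are arbitrary choices):

```python
import numpy as np

alpha, t0 = 1.0, 0.5
g = np.linspace(-200.0, 200.0, 400001)   # truncation of the real line
dg = g[1] - g[0]
density = 1.0 / (2.0 * np.pi * (g ** 2 + alpha ** 2))
Q_num = np.sum(np.cos(g * t0) * density) * dg   # imaginary part cancels by symmetry
Q_exact = np.exp(-alpha * abs(t0)) / (2.0 * alpha)
print(Q_num, Q_exact)
```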
2.4.2 Spectral density function of the Doob transformation of fBm (fOU)
Next, we derive the spectral density function of the Doob transformation of fractional Brownian motion.
Theorem 2.8. The spectral density function of the Doob transformation of fBm (fOU) is
$$\Delta'(\gamma) = \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H} \left( \frac{\alpha}{\pi}\, \frac{1}{\gamma^2 + \alpha^2} - \frac{\alpha}{\pi} \sum_{n=1}^{\infty} \binom{2H}{n}\, \frac{(-1)^n \left(\frac{n}{H} - 1\right)}{\gamma^2 + \left(\frac{\alpha n}{H} - \alpha\right)^2} \right).$$
Proof. Applying Proposition 2.1, we obtain the covariance of fOU:
$$Q(t) = E\left( X^{(D,\alpha)}_t X^{(D,\alpha)}_0 \right) = \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H} e^{\alpha t}\left( 1 + e^{-2\alpha t} - \left( 1 - e^{-\frac{\alpha t}{H}} \right)^{2H} \right).$$
Using (2.7), we infer
$$Q(t) = \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H}\left( e^{-\alpha t} - e^{-\alpha t\left(\frac{1}{H}-1\right)} \sum_{n=1}^{\infty} \binom{2H}{n}(-1)^n e^{-\frac{\alpha t(n-1)}{H}} \right) \qquad (2.31)$$
$$= \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H}\left( e^{-\alpha|t|} - e^{-\alpha|t|\left(\frac{1}{H}-1\right)} \sum_{n=1}^{\infty} \binom{2H}{n}(-1)^n e^{-\frac{\alpha|t|(n-1)}{H}} \right),$$
since t ≄ 0.
Due to the Bochner Theorem (Theorem 2.7), the covariance function of a Gaussian process has the form
$$Q(t) = \int_{-\infty}^{\infty} e^{i\gamma t}\, d\Delta(\gamma).$$
Applying (2.26), we deduce
$$e^{-\alpha|t|} = \int_{-\infty}^{\infty} e^{i\gamma t}\, \frac{\alpha}{\pi}\, \frac{d\gamma}{\gamma^2 + \alpha^2}.$$
Since the Fourier transformation is additive, we can substitute (2.26) into the sum in (2.31) and obtain
$$Q(t) = \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H}\left( \int_{-\infty}^{\infty} e^{i\gamma t}\, \frac{\alpha}{\pi}\, \frac{d\gamma}{\gamma^2 + \alpha^2} - \sum_{n=1}^{\infty} \binom{2H}{n}(-1)^n \int_{-\infty}^{\infty} e^{i\gamma t}\, \frac{\frac{\alpha n}{H} - \alpha}{\pi}\, \frac{d\gamma}{\gamma^2 + \left(\frac{\alpha n}{H} - \alpha\right)^2} \right).$$
Using [54, Theorem 1.38] we can change the order of the integration and the sum, provided the following lemma holds.
Lemma 2.9. If α > 0 and H ∈ (0, 1), then
$$\sum_{n=1}^{\infty} \int_{-\infty}^{\infty} \left| \binom{2H}{n}(-1)^n e^{i\gamma t}\, \frac{\frac{\alpha n}{H} - \alpha}{\pi}\, \frac{1}{\gamma^2 + \left(\frac{\alpha n}{H} - \alpha\right)^2} \right| d\gamma < \infty. \qquad (2.32)$$
Proof. Denoting $m = \frac{\alpha n}{H} - \alpha > 0$ and noting that $|e^{i\gamma t}| = 1$, we compute as follows:
$$\int_{-\infty}^{\infty} \left| \binom{2H}{n}(-1)^n e^{i\gamma t}\, \frac{m}{\pi}\, \frac{1}{\gamma^2 + m^2} \right| d\gamma = \int_{-\infty}^{\infty} |e^{i\gamma t}| \left| \binom{2H}{n} \frac{m}{\pi}\, \frac{1}{\gamma^2 + m^2} \right| d\gamma \le \int_{-\infty}^{\infty} \left| \binom{2H}{n} \frac{m}{\pi}\, \frac{1}{\gamma^2 + m^2} \right| d\gamma =: A_n.$$
Making the change of variable $u = \frac{\gamma}{m}$ we obtain
$$A_n = \left| \binom{2H}{n} \right| \int_{-\infty}^{\infty} \frac{m}{\pi} \left| \frac{1}{\gamma^2 + m^2} \right| d\gamma = \left| \binom{2H}{n} \right| \int_{-\infty}^{\infty} \frac{1}{\pi} \left| \frac{1}{u^2 + 1} \right| du = \left| \binom{2H}{n} \right|.$$
Hence the left-hand side of (2.32) has $\sum_{n=1}^{\infty} A_n$ as an upper bound, and we have
$$\sum_{n=1}^{\infty} A_n = \sum_{n=1}^{\infty} \left| \binom{2H}{n} \right| = \sum_{n=1}^{\infty} \left| \frac{2H(2H-1)\cdots(2H-n+1)}{n!} \right| = \sum_{n=1}^{\infty} \left| (-1)^n\, \frac{(-2H)(-2H+1)\cdots(-2H+n-1)}{n!} \right| = \sum_{n=1}^{\infty} \left| \frac{\Gamma(n-2H)}{\Gamma(-2H)\, \Gamma(n+1)} \right|.$$
Using the recursion formula we may write
$$\Gamma(-2H) = \begin{cases} \dfrac{\Gamma(-2H+2)}{-2H(-2H+1)}, & \text{if } \frac{1}{2} < H < 1, \\[1mm] \dfrac{\Gamma(-2H+1)}{-2H}, & \text{if } 0 < H < \frac{1}{2}, \end{cases} \qquad \text{and} \qquad \Gamma(1-2H) = \frac{\Gamma(-2H+2)}{1-2H}.$$
By virtue of the Stirling formula (see, for example, Rudin [55, Section 8.22.]), the asymptotic behaviour of the Gamma function is
$$\Gamma(x+1) \sim x^{x+\frac{1}{2}}\, e^{-x}\, \sqrt{2\pi}, \quad \text{when } x \to \infty.$$
In other words,
$$\lim_{x\to\infty} \frac{\Gamma(x+1)}{x^{x+\frac{1}{2}}\, e^{-x}} = \sqrt{2\pi}.$$
Actually this means that there exist $d \in \mathbb{R}_+$ and $c \in \mathbb{R}_+$ such that
$$d\, x^{x+\frac{1}{2}}\, e^{-x}\, \sqrt{2\pi} \le \Gamma(x+1) \le c\, x^{x+\frac{1}{2}}\, e^{-x}\, \sqrt{2\pi},$$
when x is greater than some large N. Since
$$\Gamma(n-2H) \le c\, e^{-(n-2H-1)}\, (n-2H-1)^{n-2H-\frac{1}{2}}\, \sqrt{2\pi}$$
and
$$\Gamma(n+1) \ge d\, e^{-n}\, n^{n+\frac{1}{2}}\, \sqrt{2\pi},$$
we obtain
$$\frac{\Gamma(n-2H)}{\Gamma(n+1)} \le \frac{c\, e^{-(n-2H-1)}\, (n-2H-1)^{n-2H-\frac{1}{2}}}{d\, e^{-n}\, n^{n+\frac{1}{2}}} = \frac{c}{d}\, e^{1+2H}\, \frac{(n-2H-1)^{n-2H-\frac{1}{2}}}{n^{n+\frac{1}{2}}}$$
$$= \frac{c}{d}\, e^{1+2H} \left( \frac{n-2H-1}{n} \right)^{n-2H-\frac{1}{2}} \frac{n^{n-2H-\frac{1}{2}}}{n^{n+\frac{1}{2}}} = \frac{c}{d}\, e^{1+2H} \left( 1 - \frac{2H+1}{n} \right)^{n} \left( 1 - \frac{2H+1}{n} \right)^{-\left(2H+\frac{1}{2}\right)} \frac{1}{n^{1+2H}},$$
where
$$\left( 1 - \frac{2H+1}{n} \right)^{n} \to e^{-(2H+1)} \quad \text{and} \quad \left( 1 - \frac{2H+1}{n} \right)^{-\left(2H+\frac{1}{2}\right)} \to 1.$$
Hence the inequality
$$\frac{\Gamma(n-2H)}{\Gamma(n+1)} \le \frac{C}{n^{1+2H}}$$
holds when n is large enough. Since 2H > 0, the series $\sum_{n=1}^{\infty} \frac{1}{n^{1+2H}}$ converges, and so does the series
$$\sum_{n=1}^{\infty} \frac{\Gamma(n-2H)}{\Gamma(n+1)}, \qquad (2.33)$$
completing the proof of Lemma 2.9.
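The decay rate just proved, $\left|\binom{2H}{n}\right| \asymp n^{-(1+2H)}$, can be observed numerically using the recursion $\binom{2H}{n+1} = \binom{2H}{n}\frac{2H-n}{n+1}$. The following sketch is ours (not from the thesis); it checks that $\left|\binom{2H}{n}\right| n^{1+2H}$ has essentially stabilised by n = 1000.

```python
def abs_binom_2H(H, N):
    # |C(2H, n)| for n = 1..N via the recursion C(2H, n+1) = C(2H, n)*(2H - n)/(n + 1)
    vals, b = [], 2 * H
    for n in range(1, N + 1):
        vals.append(abs(b))
        b *= (2 * H - n) / (n + 1)
    return vals

H = 0.7
v = abs_binom_2H(H, 2000)
a1000 = v[999] * 1000 ** (1 + 2 * H)
a2000 = v[1999] * 2000 ** (1 + 2 * H)
print(a2000 / a1000)  # close to 1: |C(2H, n)| decays like n^-(1+2H)
```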
We can therefore change the order of the integration and the summation, leading to
$$Q(t) = \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H} \int_{-\infty}^{\infty} e^{i\gamma t}\, \frac{\alpha}{\pi(\gamma^2 + \alpha^2)}\, d\gamma - \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H} \int_{-\infty}^{\infty} e^{i\gamma t} \sum_{n=1}^{\infty} \binom{2H}{n}\, \frac{(-1)^n \left(\frac{\alpha n}{H} - \alpha\right)}{\pi\left(\gamma^2 + \left(\frac{\alpha n}{H} - \alpha\right)^2\right)}\, d\gamma.$$
Thus the spectral density function of the Doob transformation of fBm is
$$\Delta'(\gamma) = \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H} \frac{\alpha}{\pi} \left( \frac{1}{\gamma^2 + \alpha^2} - \sum_{n=1}^{\infty} \binom{2H}{n}\, \frac{(-1)^n \left(\frac{n}{H} - 1\right)}{\gamma^2 + \left(\frac{\alpha n}{H} - \alpha\right)^2} \right).$$
We recall that a function f has a Fourier transformation if $\int_{-\infty}^{\infty} |f(\gamma)|\, d\gamma < \infty$. In the next lemma we verify that this condition holds in our case.
Lemma 2.10. Let Δâ€Č(γ) be the spectral density function of the Doob transformation of fBm,
$$\Delta'(\gamma) = \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H} \frac{\alpha}{\pi} \left( \frac{1}{\gamma^2 + \alpha^2} - \sum_{n=1}^{\infty} \binom{2H}{n}\, \frac{(-1)^n \left(\frac{n}{H} - 1\right)}{\gamma^2 + \left(\frac{\alpha n}{H} - \alpha\right)^2} \right).$$
Then
$$\int_{-\infty}^{\infty} |\Delta'(\gamma)|\, d\gamma < \infty,$$
and the Fourier transformation of Δâ€Č(γ) exists.
Proof. We compute
$$\int_{-\infty}^{\infty} |\Delta'(\gamma)|\, d\gamma = \int_{-\infty}^{\infty} \left| \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H} \frac{\alpha}{\pi} \left( \frac{1}{\gamma^2 + \alpha^2} - \sum_{n=1}^{\infty} \binom{2H}{n}\, \frac{(-1)^n \left(\frac{n}{H} - 1\right)}{\gamma^2 + \left(\frac{\alpha n}{H} - \alpha\right)^2} \right) \right| d\gamma$$
and approximate further
$$\int_{-\infty}^{\infty} |\Delta'(\gamma)|\, d\gamma \le \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H} \frac{\alpha}{\pi} \left( \int_{-\infty}^{\infty} \left| \frac{1}{\gamma^2 + \alpha^2} \right| d\gamma + \int_{-\infty}^{\infty} \left| \sum_{n=1}^{\infty} \binom{2H}{n}\, \frac{\frac{n}{H} - 1}{\gamma^2 + \left(\frac{\alpha n}{H} - \alpha\right)^2} \right| d\gamma \right)$$
$$\le \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H} \frac{\alpha}{\pi} \left( \frac{\pi}{\alpha} + \int_{-\infty}^{\infty} \left| \sum_{n=1}^{\infty} \binom{2H}{n}\, \frac{\frac{n}{H} - 1}{\gamma^2 + \left(\frac{\alpha n}{H} - \alpha\right)^2} \right| d\gamma \right).$$
Decomposing the sum
$$\sum_{n=1}^{\infty} \binom{2H}{n}\, \frac{\frac{n}{H} - 1}{\gamma^2 + \left(\frac{\alpha n}{H} - \alpha\right)^2} = \binom{2H}{1}\, \frac{\frac{1}{H} - 1}{\gamma^2 + \left(\frac{\alpha}{H} - \alpha\right)^2} + \sum_{n=2}^{\infty} \binom{2H}{n}\, \frac{\frac{n}{H} - 1}{\gamma^2 + \left(\frac{\alpha n}{H} - \alpha\right)^2}$$
and computing
$$\int_{-\infty}^{\infty} \binom{2H}{1}\, \frac{\frac{1}{H} - 1}{\gamma^2 + \left(\frac{\alpha}{H} - \alpha\right)^2}\, d\gamma = \frac{2\pi H}{\alpha},$$
we obtain
$$\int_{-\infty}^{\infty} |\Delta'(\gamma)|\, d\gamma \le \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H} \frac{\alpha}{\pi} \left( \frac{\pi}{\alpha} + \frac{2\pi H}{\alpha} + \int_{-\infty}^{\infty} \sum_{n=2}^{\infty} \left| \binom{2H}{n} \right| \frac{\frac{n}{H} - 1}{\gamma^2 + \left(\left(\frac{n}{H} - 1\right)\alpha\right)^2}\, d\gamma \right).$$
Changing the order of the integration and summation we infer
$$\int_{-\infty}^{\infty} |\Delta'(\gamma)|\, d\gamma \le \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H} \frac{\alpha}{\pi} \left( \frac{\pi}{\alpha} + \frac{2\pi H}{\alpha} + \sum_{n=2}^{\infty} \left| \binom{2H}{n} \right| \int_{-\infty}^{\infty} \frac{\frac{n}{H} - 1}{\gamma^2 + \left(\left(\frac{n}{H} - 1\right)\alpha\right)^2}\, d\gamma \right),$$
since $\sum_{n=2}^{\infty} \left| \binom{2H}{n} \right|$ converges, as we proved before in (2.33). Computing the integral we conclude
$$\int_{-\infty}^{\infty} |\Delta'(\gamma)|\, d\gamma \le \frac{1}{2}\left(\frac{H}{\alpha}\right)^{2H} \frac{\alpha}{\pi} \left( \frac{\pi}{\alpha} + \frac{2\pi H}{\alpha} + \sum_{n=2}^{\infty} \left| \binom{2H}{n} \right| \frac{\pi}{\alpha} \right) < \infty,$$
completing the proof.
Consequently the function is absolutely integrable and the spectral density function is unique. By uniqueness all requirements of spectral density functions are valid, and we have proved Theorem 2.8. □
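Theorem 2.8 lends itself to a numerical sanity check: truncating the series at N terms, the values stabilise quickly (consistent with the absolute convergence shown in Lemma 2.10) and are positive at sample points. The sketch below is ours, with an arbitrary truncation level and sample parameters; the binomial factors are generated by the recursion $c_{n+1} = c_n \frac{n-2H}{n+1}$ for $c_n = \binom{2H}{n}(-1)^n$.

```python
import math

def fou_spectral_density(gamma, H, alpha, N):
    # Truncated (N-term) version of the spectral density in Theorem 2.8
    c = -2 * H                      # c_n = C(2H, n) * (-1)^n, starting at n = 1
    s = 0.0
    for n in range(1, N + 1):
        m = alpha * n / H - alpha
        s += c * (n / H - 1) / (gamma ** 2 + m ** 2)
        c *= (n - 2 * H) / (n + 1)  # recursion c_{n+1} = c_n (n - 2H)/(n + 1)
    pref = 0.5 * (H / alpha) ** (2 * H) * alpha / math.pi
    return pref * (1.0 / (gamma ** 2 + alpha ** 2) - s)

H, alpha, g = 0.7, 1.0, 0.8
d200 = fou_spectral_density(g, H, alpha, 200)
d400 = fou_spectral_density(g, H, alpha, 400)
print(d200, d400)
```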
3 Fractional Ornstein–Uhlenbeck processes
3.1 fOU(2), Fractional OU process of the second kind
The fractional Ornstein–Uhlenbeck process of the first kind (fOU(1)) is stationary and, in the case H > 1/2, long-range dependent, since its driving process is long-range dependent. In this chapter we study how this situation changes if we take a short-range dependent process as the driving process. Is it possible to represent the Doob transformation of fBm (fOU) as a solution of a stochastic differential equation? What is the nature of the driving process? If the driving process of a process X is stationary, is the process X then stationary? What is its variance? We answer these questions in this chapter.
First, we define the driving process Y^{(α)} of the Doob transformation of fBm. We make some changes to obtain a slightly better behaved process Y^{(1)}, which still has properties similar to those of Y^{(α)}. The two-sided process Y^{(1)} is the driving process of the fractional OU process of the second kind, called fOU(2).
3.1.1 Definition
To explain the idea, we first look at the Ornstein-Uhlenbeck process and its representation as a stochastic differential equation. Consider the OU process of Definition 1.22 given by
$$V_t = e^{-\alpha t} B_{\frac{e^{2\alpha t}}{2\alpha}}.$$
Using the chain rule we infer
$$dV_t = -\alpha e^{-\alpha t} B_{\frac{e^{2\alpha t}}{2\alpha}}\, dt + e^{-\alpha t}\, dB_{\frac{e^{2\alpha t}}{2\alpha}} = -\alpha V_t\, dt + e^{-\alpha t}\, dB_{\frac{e^{2\alpha t}}{2\alpha}}.$$
Here
$$\int_0^t e^{-\alpha s}\, dB_{\frac{e^{2\alpha s}}{2\alpha}}$$
is standard Brownian motion according to the LĂ©vy characterization theorem, which states that every centered continuous local martingale M with quadratic variation t is Brownian motion; see, for example, [32]. To see this, we look at the variance of $\int_0^t e^{-\alpha s}\, dB_{\frac{e^{2\alpha s}}{2\alpha}}$, since the quadratic variation and the variance are the same in this situation. Changing the variable $\frac{e^{2\alpha t}}{2\alpha}$ to u and using the ItĂŽ formula we
obtain
$$E\left( \int_0^T e^{-\alpha t}\, dB_{\frac{e^{2\alpha t}}{2\alpha}} \right)^2 = E\left( \int_{\frac{1}{2\alpha}}^{\frac{e^{2\alpha T}}{2\alpha}} \frac{1}{\sqrt{2\alpha u}}\, dB_u \right)^2 = \int_{\frac{1}{2\alpha}}^{\frac{e^{2\alpha T}}{2\alpha}} \frac{du}{2\alpha u} = T,$$
which was the assertion.
In the classical OU case we have H = 1/2. To extend the construction to all H ∈ (0, 1), we introduce:
Definition 3.1. For H ∈ (0, 1) and α > 0, we define
$$Y^{(\alpha)}_t := \int_0^t e^{-\alpha s}\, dZ_{\tau_s} = \int_0^t e^{-\alpha s}\, dZ_{\frac{H}{\alpha} e^{\frac{\alpha s}{H}}}, \qquad (3.1)$$
where $\tau_t = \frac{H}{\alpha} e^{\frac{\alpha t}{H}}$ and the stochastic integral is the pathwise Riemann-Stieltjes integral.
The process Y^{(α)} may be represented as a Volterra process with respect to Brownian motion, which we shall verify later in Section 3.1.4.
Proposition 3.2. The Doob transformation of fBm X^{(D,α)} is a solution of the linear stochastic differential equation
$$dX^{(D,\alpha)}_t = -\alpha X^{(D,\alpha)}_t\, dt + dY^{(\alpha)}_t, \qquad (3.2)$$
with the random initial value
$$X^{(D,\alpha)}_0 = Z_{\frac{H}{\alpha}} \sim N\left( 0, \left(\frac{H}{\alpha}\right)^{2H} \right). \qquad (3.3)$$
Proof. Using Definition 1.43 we get the form $X^{(D,\alpha)}_t = e^{-\alpha t} Z_{\tau_t}$, where Z is fractional Brownian motion and $\tau_t = \frac{H}{\alpha} e^{\frac{\alpha t}{H}}$. Hence (3.3) holds. For (3.2), consider
$$dX^{(D,\alpha)}_t = d\left( e^{-\alpha t} \right) Z_{\tau_t} + e^{-\alpha t}\, dZ_{\tau_t} = -\alpha e^{-\alpha t} Z_{\tau_t}\, dt + e^{-\alpha t}\, dZ_{\tau_t} = -\alpha X^{(D,\alpha)}_t\, dt + dY^{(\alpha)}_t.$$
The next proposition, stated in Kaarakka and Salminen [29], is the key result for defining the Langevin stochastic differential equation, where the driving process is slightly simpler than Y^{(α)}.
Proposition 3.3 ([29], Prop. 3.2.). The property
$$\left\{ \alpha^{H} Y^{(\alpha)}_{\frac{t}{\alpha}} : t \ge 0 \right\} \stackrel{d}{=} \left\{ Y^{(1)}_t : t \ge 0 \right\}$$
holds for any α > 0.
Proof. Using the integration by parts formula we obtain a different form of the process $Y^{(\alpha)}_t$ as follows:
$$Y^{(\alpha)}_t = \int_0^t e^{-\alpha s}\, dZ_{\tau_s} = e^{-\alpha t} Z_{\tau_t} - Z_{\frac{H}{\alpha}} + \alpha \int_0^t e^{-\alpha s} Z_{\tau_s}\, ds. \qquad (3.4)$$
Introducing a new variable p := αs and using the self-similarity property of fBm, we infer
$$\left\{ \alpha^{H} Y^{(\alpha)}_{\frac{t}{\alpha}} : t \ge 0 \right\} = \left\{ \alpha^{H} \left( e^{-t} Z_{\tau_{\frac{t}{\alpha}}} - Z_{\frac{H}{\alpha}} + \alpha \int_0^{\frac{t}{\alpha}} e^{-\alpha s} Z_{\tau_s}\, ds \right) : t \ge 0 \right\}$$
$$= \left\{ \alpha^{H} e^{-t} Z_{\frac{H}{\alpha} e^{\frac{t}{H}}} - \alpha^{H} Z_{\frac{H}{\alpha}} + \alpha^{H} \int_0^{t} e^{-p} Z_{\tau_{\frac{p}{\alpha}}}\, dp : t \ge 0 \right\}$$
$$\stackrel{d}{=} \left\{ \alpha^{H} \frac{1}{\alpha^{H}}\, e^{-t} Z_{H e^{\frac{t}{H}}} - \alpha^{H} \frac{1}{\alpha^{H}}\, Z_{H} + \int_0^{t} e^{-p} Z_{\tau^{(1)}_p}\, dp : t \ge 0 \right\}$$
$$= \left\{ e^{-t} Z_{\tau^{(1)}_t} - Z_{H} + \int_0^{t} e^{-p} Z_{\tau^{(1)}_p}\, dp : t \ge 0 \right\} = \left\{ \int_0^{t} e^{-p}\, dZ_{\tau^{(1)}_p} : t \ge 0 \right\} = \left\{ Y^{(1)}_t : t \ge 0 \right\},$$
where $\tau^{(1)}_t = H e^{\frac{t}{H}}$, and this proves the statement.
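The self-similarity step used above, $\{a^H Z_{t/a}\} \stackrel{d}{=} \{Z_t\}$, can be checked at the level of covariances, since a centered Gaussian law is determined by its covariance. A small sketch of ours (not from the thesis), with arbitrary sample values:

```python
def fbm_cov(s, t, H):
    # E(Z_s Z_t) = (1/2)(s^2H + t^2H - |t - s|^2H) for s, t >= 0
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))

a, H, s, t = 2.5, 0.7, 0.8, 1.9
lhs = a ** (2 * H) * fbm_cov(s / a, t / a, H)   # Cov(a^H Z_{s/a}, a^H Z_{t/a})
rhs = fbm_cov(s, t, H)
print(lhs, rhs)  # equal: scaling leaves the covariance invariant
```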
Proposition 3.3 is inspiring, since we can now define a new Langevin stochastic differential equation where the driving process is Y^{(1)}. In the solution of that Langevin equation we need the extended process Y^{(1)}, i.e. the two-sided Y^{(1)}, defined similarly to the two-sided fBm in the case of fOU(1).
Definition 3.4. Let $\tilde{Y}^{(1)} = \{\tilde{Y}^{(1)}_t : t \ge 0\}$ be an independent copy of Y^{(1)}, starting from 0. Then for t ∈ ℝ we define the two-sided process $\{Y^{(1)}_t : t \in \mathbb{R}\}$ via
$$Y^{(1)}_t := \begin{cases} Y^{(1)}_t, & t \ge 0, \\ \tilde{Y}^{(1)}_{-t}, & t < 0. \end{cases}$$
Proposition 3.5. The Langevin stochastic differential equation
$$dU^{(D,\gamma)}_t = -\gamma U^{(D,\gamma)}_t\, dt + dY^{(1)}_t, \qquad \gamma > 0, \qquad (3.5)$$
has the solution
$$U^{(D,\gamma)}_t = e^{-\gamma t} \int_{-\infty}^{t} e^{\gamma s}\, dY^{(1)}_s = e^{-\gamma t} \int_{-\infty}^{t} e^{(\gamma-1)s}\, dZ_{\tau^{(1)}_s}, \qquad \gamma > 0, \qquad (3.6)$$
where $\tau^{(1)}_t = H e^{\frac{t}{H}}$.
Proof. The existence of the integral in (3.6) can be proved in the same way as in the proof of Theorem 1.39. The most important point is to show that
$$\int_{-\infty}^{t} e^{(\gamma-1)s}\, dZ_{\tau^{(1)}_s}$$
is well defined. For γ ∈ (0, 1] we use the integration by parts as in (3.4) to obtain
$$\int_T^s e^{(\gamma-1)u}\, dZ_{\tau^{(1)}_u} = e^{(\gamma-1)s} Z_{\tau^{(1)}_s} - e^{(\gamma-1)T} Z_{\tau^{(1)}_T} - (\gamma-1) \int_T^s e^{(\gamma-1)u} Z_{\tau^{(1)}_u}\, du, \qquad (3.7)$$
where $\tau^{(1)}_u = H e^{\frac{u}{H}}$ and T < 0.
Note again that fractional Brownian motion Z is locally Hölder continuous of order ÎČ for any ÎČ < H; see Theorem 1.32. Hence, for ÎČ < H,
$$|Z_s - Z_0| \le C |s|^{\beta}.$$
For any ÎČ there exists ÎČ₀ such that ÎČ < ÎČ₀ < H and
$$|Z_s - Z_0| \le C |s|^{\beta_0 - \beta + \beta},$$
which means that
$$\frac{|Z_s|}{|s|^{\beta}} \le C |s|^{\beta_0 - \beta}.$$
Hence we obtain
$$\lim_{s \to 0} \frac{|Z_s|}{|s|^{\beta}} = 0 \quad \text{a.s. for all } \beta < H. \qquad (3.8)$$
We consider the term $e^{(\gamma-1)T} Z_{\tau^{(1)}_T}$ in (3.7) as T → −∞, as follows:
$$\lim_{T \to -\infty} e^{-(1-\gamma)T} Z_{\tau^{(1)}_T} = \lim_{r \to 0} e^{-(1-\gamma) H \log\left(\frac{r}{H}\right)} Z_r = \lim_{r \to 0} \left( \frac{r}{H} \right)^{-(1-\gamma)H} Z_r = \lim_{r \to 0} H^{(1-\gamma)H}\, \frac{Z_r}{r^{(1-\gamma)H}}.$$
Since (1 − γ)H < H for γ > 0, applying (3.8) we infer
$$\lim_{T \to -\infty} e^{-(1-\gamma)T} Z_{\tau^{(1)}_T} = 0.$$
Next we prove the convergence of the integral in (3.7). Changing variables, we obtain
$$\lim_{T \to -\infty} \left| \int_T^s e^{(\gamma-1)u} Z_{\tau^{(1)}_u}\, du \right| \le \lim_{T \to -\infty} \int_T^s e^{(\gamma-1)u} \left| Z_{\tau^{(1)}_u} \right| du \qquad (3.9)$$
$$= \lim_{\tau_T \to 0} \int_{\tau_T}^{\tau_s} \left( \frac{r}{H} \right)^{(\gamma-1)H} \frac{H}{r}\, |Z_r|\, dr \qquad (3.10)$$
$$= \lim_{\tau_T \to 0} \int_{\tau_T}^{\tau_s} H^{-(\gamma-1)H+1}\, \frac{|Z_r|}{r^{-(\gamma-1)H+1}}\, dr = \lim_{\tau_T \to 0} \int_{\tau_T}^{\tau_s} H^{-(\gamma-1)H+1}\, \frac{|Z_r|}{r^{\beta}}\, r^{(\gamma-1)H-1+\beta}\, dr$$
$$\le \lim_{\tau_T \to 0} \int_{\tau_T}^{\tau_s} H^{-(\gamma-1)H+1}\, \frac{|Z_r|}{r^{\beta}}\, r^{(\gamma-1)H-1+H}\, dr.$$
Since we have proved in (3.8) that $\lim_{r\to 0} \frac{|Z_r|}{r^{\beta}} = 0$ for any ÎČ < H, and
$$\int_0^{\varepsilon} r^{(\gamma-1)H-1+H}\, dr = \int_0^{\varepsilon} r^{\gamma H - 1}\, dr = \frac{\varepsilon^{\gamma H}}{\gamma H} < \infty,$$
then
$$\lim_{T \to -\infty} \left| \int_T^s e^{(\gamma-1)u} Z_{\tau^{(1)}_u}\, du \right| < \infty \quad \text{for } \gamma > 0.$$
For γ > 1 the existence of the integral is even easier to see than for γ ∈ (0, 1]: for T < 0 the boundary term satisfies $e^{(\gamma-1)T} Z_{\tau^{(1)}_T} \to 0$, since $e^{(\gamma-1)T} \to 0$, and we proved above that the limit of the integral
$$\lim_{T \to -\infty} \left| \int_T^s e^{(\gamma-1)u} Z_{\tau^{(1)}_u}\, du \right|$$
is bounded for any γ > 0, in particular for γ > 1. Therefore we conclude that the right-hand side of (3.7) is well defined when T tends to −∞. This completes the proof.
Now we are ready to define a new family of fractional Ornstein–Uhlenbeck processes as follows.
Definition 3.6. The process U^{(D,γ)} introduced in (3.6) is called the fractional Ornstein–Uhlenbeck process of the second kind, abbreviated fOU(2).
Remark 3.7. For γ = 1, we have
$$U^{(D,1)}_t = e^{-t} \int_{-\infty}^{t} dZ_{\tau^{(1)}_s} = e^{-t} \int_{-\infty}^{t} dZ_{H e^{\frac{s}{H}}} = e^{-t} \left( Z_{H e^{\frac{t}{H}}} - Z_0 \right),$$
since by continuity
$$\lim_{t \to -\infty} Z_{H e^{\frac{t}{H}}} = Z_0 = 0.$$
Hence
$$U^{(D,1)}_t = e^{-t} Z_{H e^{\frac{t}{H}}} = X^{(D,1)}_t.$$
3.1.2 Simulations of fractional Ornstein–Uhlenbeck processes
We have analysed the fractional Ornstein–Uhlenbeck processes mathematically. Now we present simulations of these processes. The simulations were done in MatLab using the Cholesky decomposition method (see, for example, Embrechts and Maejima [19]), where we calculated values at 270 points. In Figures 3.1, 3.2, 3.3 and 3.4 there are sample paths of all three fractional Ornstein–Uhlenbeck processes with Hurst constant H values of 0.15, 0.4, 0.6 and 0.8, and γ = α = 1. In Figures 3.5, 3.6, 3.7 and 3.8 there are sample paths of all three fractional Ornstein–Uhlenbeck processes with Hurst constant H values of 0.4 and 0.7 and with two different values of γ and α, 0.5 and 2.5.
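The Cholesky method can be sketched in a few lines of Python (our illustration; the thesis simulations were done in MatLab): factor the fBm covariance matrix on a time grid and multiply by standard Gaussian noise. As a sanity check, the sample variance of Z_T across paths should be close to T^{2H}. The function name, grid size and seed are our choices.

```python
import numpy as np

def fbm_cholesky_paths(H, n, T, n_paths, seed=0):
    # fBm covariance E(Z_s Z_t) = 0.5*(s^2H + t^2H - |t - s|^2H) on a uniform grid
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov)                  # the Cholesky factor of the covariance
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal((n, n_paths))

H, T = 0.4, 1.0
t, Z = fbm_cholesky_paths(H, 50, T, 4000)
var_T = Z[-1].var()
print(var_T)  # should be close to T^(2H) = 1
```

A path of the Doob transformation is then obtained as $X_t = e^{-\alpha t} Z_{\tau_t}$ by simulating the fBm at the transformed times $\tau_t = \frac{H}{\alpha} e^{\frac{\alpha t}{H}}$.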
Figure 3.1: Fractional Ornstein-Uhlenbeck processes with H=0.15 (sample paths of fOU, fOU(1) and fOU(2) for 0 ≀ t ≀ 100, α = 1, γ = 1).
Figure 3.2: Fractional Ornstein-Uhlenbeck processes with H=0.4 (sample paths of fOU, fOU(1) and fOU(2) for 0 ≀ t ≀ 100, α = 1, γ = 1).
In Figures 3.1 and 3.2 the fractional Ornstein-Uhlenbeck process of the second kind, fOU(2), and the Doob transformation of fBm, fOU, describe actually the same process, since α = γ = 1. All three processes are short-range dependent when H < 1/2, which is the case in Figures 3.1 and 3.2.
Figure 3.3: Fractional Ornstein-Uhlenbeck processes with H=0.6 (sample paths of fOU, fOU(1) and fOU(2) for 0 ≀ t ≀ 100, α = 1, γ = 1).
In Figures 3.3 and 3.4 the fractional Ornstein-Uhlenbeck process of the second kind and the Doob transformation of fBm are both short-range dependent, but the fractional Ornstein-Uhlenbeck process of the first kind, fOU(1), is long-range dependent, since H > 1/2. Since α = γ = 1, fOU(2) and fOU describe the same process again.
In Figures 3.5 and 3.6 we consider how the values of the parameters γ and α affect the sample path when H = 0.4, and in Figures 3.7 and 3.8 we do the same when H = 0.7. It is clear that the sample path is smoother with greater γ or α.
3.1.3 Stationarity of Y^{(α)} and U^{(D,γ)}
In this section we consider the stationarity of Y^{(α)} and U^{(D,γ)}. In both cases the proof of stationarity, or of the stationarity of increments, follows from the definitions of the fOU processes.
Proposition 3.8. The process Y^{(α)} has stationary increments, and fOU(2), that is the process U^{(D,γ)}, is stationary.
Figure 3.4: Fractional Ornstein-Uhlenbeck processes with H=0.8 (sample paths of fOU, fOU(1) and fOU(2) for 0 ≀ t ≀ 100, α = 1, γ = 1).
Figure 3.5: Fractional Ornstein-Uhlenbeck processes with H=0.4, γ = 2.5 and α = 0.5 (sample paths of fOU, fOU(1) and fOU(2) for 0 ≀ t ≀ 100).
Figure 3.6: Fractional Ornstein-Uhlenbeck processes with H=0.4, γ = 0.5 and α = 2.5 (sample paths of fOU, fOU(1) and fOU(2) for 0 ≀ t ≀ 100).
Figure 3.7: Fractional Ornstein-Uhlenbeck processes with H=0.7, γ = 2.5 and α = 0.5 (sample paths of fOU, fOU(1) and fOU(2) for 0 ≀ t ≀ 100).
Figure 3.8: Fractional Ornstein-Uhlenbeck processes with H=0.7, γ = 0.5 and α = 2.5 (sample paths of fOU, fOU(1) and fOU(2) for 0 ≀ t ≀ 100).
Proof. We use the longer form (3.4) of Y^(α), where

\[ Y^{(\alpha)}_{t_2} - Y^{(\alpha)}_{t_1} = e^{-\alpha t_2} Z_{\tau_{t_2}} - e^{-\alpha t_1} Z_{\tau_{t_1}} + \alpha\int_{t_1}^{t_2} e^{-\alpha s} Z_{\tau_s}\, ds, \]
\[ Y^{(\alpha)}_{s_2} - Y^{(\alpha)}_{s_1} = e^{-\alpha s_2} Z_{\tau_{s_2}} - e^{-\alpha s_1} Z_{\tau_{s_1}} + \alpha\int_{s_1}^{s_2} e^{-\alpha s} Z_{\tau_s}\, ds, \]

and consider

\[ E\Bigl( \bigl(Y^{(\alpha)}_{t_2} - Y^{(\alpha)}_{t_1}\bigr)\bigl(Y^{(\alpha)}_{s_2} - Y^{(\alpha)}_{s_1}\bigr) \Bigr). \tag{3.11} \]
By the self-similarity of fBm, we note that

\[ \Bigl\{ e^{-\alpha(t+h)} Z_{\frac{H}{\alpha}e^{\frac{\alpha(t+h)}{H}}} : t \in \mathbb{R} \Bigr\} = \Bigl\{ e^{-\alpha(t+h)} Z_{\frac{H}{\alpha}e^{\frac{\alpha t}{H}}e^{\frac{\alpha h}{H}}} : t \in \mathbb{R} \Bigr\} \tag{3.12} \]
\[ \overset{d}{=} \Bigl\{ e^{-\alpha(t+h)}\bigl(e^{\frac{\alpha h}{H}}\bigr)^{H} Z_{\frac{H}{\alpha}e^{\frac{\alpha t}{H}}} : t \in \mathbb{R} \Bigr\} = \Bigl\{ e^{-\alpha t} Z_{\frac{H}{\alpha}e^{\frac{\alpha t}{H}}} : t \in \mathbb{R} \Bigr\}. \]
Using (3.12) and changing the variable s to u − h, we infer further

\[ E\Bigl( \bigl(Y^{(\alpha)}_{t_2} - Y^{(\alpha)}_{t_1}\bigr)\bigl(Y^{(\alpha)}_{s_2} - Y^{(\alpha)}_{s_1}\bigr) \Bigr) = E\Bigl( \bigl(Y^{(\alpha)}_{t_2+h} - Y^{(\alpha)}_{t_1+h}\bigr)\bigl(Y^{(\alpha)}_{s_2+h} - Y^{(\alpha)}_{s_1+h}\bigr) \Bigr), \]

where

\[ Y^{(\alpha)}_{t_2+h} - Y^{(\alpha)}_{t_1+h} = e^{-\alpha(t_2+h)} Z_{\tau_{t_2+h}} - e^{-\alpha(t_1+h)} Z_{\tau_{t_1+h}} + \alpha\int_{t_1+h}^{t_2+h} e^{-\alpha(u-h)} Z_{\tau_{u-h}}\, du, \]
\[ Y^{(\alpha)}_{s_2+h} - Y^{(\alpha)}_{s_1+h} = e^{-\alpha(s_2+h)} Z_{\tau_{s_2+h}} - e^{-\alpha(s_1+h)} Z_{\tau_{s_1+h}} + \alpha\int_{s_1+h}^{s_2+h} e^{-\alpha(u-h)} Z_{\tau_{u-h}}\, du, \]

and 0 < s_1 < s_2 < t_1 < t_2 and h > 0. We deduce using Theorem 1.3 that the increment process of Y^(α) is stationary.
Next, we prove that the process U^(D,γ) is stationary. Using Proposition 3.5 we calculate the covariance

\[ E\bigl(U^{(D,\gamma)}_t U^{(D,\gamma)}_s\bigr) = E\left( e^{-\gamma t}\int_{-\infty}^{t} e^{(\gamma-1)p}\, dZ_{\tau^{(1)}_p}\; e^{-\gamma s}\int_{-\infty}^{s} e^{(\gamma-1)p}\, dZ_{\tau^{(1)}_p} \right). \]

To prove that the process U^(D,γ) is stationary it is enough to show that

\[ E\bigl(U^{(D,\gamma)}_{t+h} U^{(D,\gamma)}_{s+h}\bigr) = E\bigl(U^{(D,\gamma)}_t U^{(D,\gamma)}_s\bigr). \]
When we calculate the covariance

\[ E\bigl(U^{(D,\gamma)}_{t+h} U^{(D,\gamma)}_{s+h}\bigr) = E\left( e^{-\gamma(t+h)}\int_{-\infty}^{t+h} e^{(\gamma-1)p}\, dZ_{\tau^{(1)}_p}\; e^{-\gamma(s+h)}\int_{-\infty}^{s+h} e^{(\gamma-1)p}\, dZ_{\tau^{(1)}_p} \right), \]

changing the variable p to r + h, we obtain

\[ E\bigl(U^{(D,\gamma)}_{t+h} U^{(D,\gamma)}_{s+h}\bigr) = E\left( e^{-\gamma(t+h)}\int_{-\infty}^{t} e^{(\gamma-1)(r+h)}\, dZ_{\tau^{(1)}_{r+h}}\; e^{-\gamma(s+h)}\int_{-\infty}^{s} e^{(\gamma-1)(r+h)}\, dZ_{\tau^{(1)}_{r+h}} \right). \]

Using the self-similarity of fractional Brownian motion, we infer further

\[ E\bigl(U^{(D,\gamma)}_{t+h} U^{(D,\gamma)}_{s+h}\bigr) = E\left( e^{-\gamma(t+h)}\int_{-\infty}^{t} e^{(\gamma-1)(r+h)} e^{h}\, dZ_{\tau^{(1)}_r}\; e^{-\gamma(s+h)}\int_{-\infty}^{s} e^{(\gamma-1)(r+h)} e^{h}\, dZ_{\tau^{(1)}_r} \right) \]
\[ = E\left( e^{-\gamma t}\int_{-\infty}^{t} e^{(\gamma-1)r}\, dZ_{\tau^{(1)}_r}\; e^{-\gamma s}\int_{-\infty}^{s} e^{(\gamma-1)r}\, dZ_{\tau^{(1)}_r} \right) = E\bigl(U^{(D,\gamma)}_t U^{(D,\gamma)}_s\bigr), \]

thereby completing the proof.
3.1.4 Processes Y^(α) and Y^(1) as stochastic integrals with respect to Brownian motion

When we represent a stochastic process as an integral whose integrator is a semimartingale and whose integrand is a bounded deterministic real-valued function, we are dealing with Volterra processes; see, for example, Mytnik and Neuman [44]:

\[ M(t) = \int_0^t F(t,r)\, dX(r), \qquad t \in \mathbb{R}_+. \]
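Such a Volterra process can be discretised directly by a Riemann–Stieltjes sum; a minimal sketch (assuming numpy; the kernel F(t,r) = (t−r)^0.2, the grid and the seed are illustrative), with Brownian motion as the integrator:

```python
import numpy as np

def volterra(F, n, dt, rng):
    """Approximate M(t) = int_0^t F(t, r) dX(r) by sum_i F(t, r_i)(X_{i+1} - X_i)."""
    dX = rng.standard_normal(n) * np.sqrt(dt)   # Brownian increments on the grid
    r = dt * np.arange(n)                       # left endpoints r_i
    t = n * dt
    return float(np.sum(F(t, r) * dX))

rng = np.random.default_rng(1)
M_t = volterra(lambda t, r: (t - r) ** 0.2, 1000, 0.01, rng)
```

The kernels appearing below replace the illustrative F by the explicit kernels of the fBm representation.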
It is possible to represent the processes Y^(α) and Y^(1) as stochastic integrals with respect to Brownian motion using the representation of fractional Brownian motion.

There are some relationships between two-sided Brownian motion and two-sided fractional Brownian motion. The first representation is due to Mandelbrot and Van Ness [38]. Almost simultaneously, Molchan and Golosov published their representation, in which one-sided fractional Brownian motion is expressed as an integral with respect to Brownian motion. Proofs of both representations can be found, for example, in Jost [27].

We use the representation of Norros and Virtamo [46], from which the Mandelbrot–Van Ness representation can actually be obtained.
When H > 1/2, fractional Brownian motion may be represented as a stochastic integral with respect to Brownian motion, and Brownian motion may be represented as a stochastic integral with respect to fractional Brownian motion, as follows:

\[ Z_t = C \int_0^t K(t,s)\, dB_s, \tag{3.13} \]
\[ B_t = C' \int_0^t K_W(t,s)\, dZ_s, \tag{3.14} \]

where we use the abbreviations

\[ K(t,s) = \Bigl(H-\tfrac12\Bigr) s^{-H+\frac12} \int_s^t u^{H-\frac12}(u-s)^{H-\frac32}\, du, \]
\[ K_W(t,s) = s^{-H+\frac12}\, t^{H-\frac12}(t-s)^{-H+\frac12} - s^{-H+\frac12}\Bigl(H-\tfrac12\Bigr) \int_s^t u^{H-\frac32}(u-s)^{-H+\frac12}\, du, \]

and

\[ C = \sqrt{\frac{2H}{\bigl(H-\frac12\bigr)\,\mathrm{Beta}\bigl(H-\frac12,\,2-2H\bigr)}} = \sqrt{\frac{2H\,\Gamma\bigl(\frac32-H\bigr)}{\Gamma(2-2H)\,\Gamma\bigl(H+\frac12\bigr)}}, \]
\[ c = \frac{1}{\mathrm{Beta}\bigl(H+\frac12,\,\frac32-H\bigr)} = \frac{\sin\bigl(\bigl(H-\frac12\bigr)\pi\bigr)}{\bigl(H-\frac12\bigr)\pi}, \qquad C' = \frac{c}{C}. \]
We use the previous formulas to represent the processes Y^(α) and Y^(1) in integral form with respect to Brownian motion. The key to these representations is (3.1) in Definition 3.1. Making the substitution u = τ_s = (H/α)e^{αs/H}, we obtain

\[ Y^{(\alpha)}_t = \int_0^t e^{-\alpha s}\, dZ_{\tau_s} = \Bigl(\frac{H}{\alpha}\Bigr)^{H} \int_{H/\alpha}^{\tau_t} u^{-H}\, dZ_u = \Bigl(\frac{H}{\alpha}\Bigr)^{H}\left( \tau_t^{-H} Z_{\tau_t} - \Bigl(\frac{H}{\alpha}\Bigr)^{-H} Z_{H/\alpha} + \int_{H/\alpha}^{\tau_t} H u^{-(H+1)} Z_u\, du \right). \]
Using formula (3.13) for fractional Brownian motion, we infer that

\[ Y^{(\alpha)}_t = \Bigl(\frac{H}{\alpha}\Bigr)^{H} C\,(I_1 - I_2 + I_3), \]

where

\[ I_1 := \tau_t^{-H}\int_0^{\tau_t} K(\tau_t,s)\, dB_s, \qquad I_2 := \Bigl(\frac{H}{\alpha}\Bigr)^{-H}\int_0^{H/\alpha} K\Bigl(\frac{H}{\alpha},s\Bigr)\, dB_s, \qquad I_3 := \int_{H/\alpha}^{\tau_t} H u^{-(H+1)}\int_0^{u} K(u,s)\, dB_s\, du. \]
Next we split I_3 into two parts and change the order of integration. This is allowed by the Fubini-type theorem for stochastic integrals if we find an integrable function f(u) such that

\[ \bigl|H u^{-(H+1)} K(u,s)\bigr| \le f(u); \]

see, for example, Ikeda and Watanabe [26]. To achieve this, we make some approximations. Let s ≀ u. Then we compute

\[ H u^{-(H+1)} K(u,s) = H u^{-(H+1)}\Bigl(H-\tfrac12\Bigr) s^{-H+\frac12}\int_s^u r^{H-\frac12}(r-s)^{H-\frac32}\, dr \le H u^{-(H+1)}\Bigl(H-\tfrac12\Bigr) u^{H-\frac12}\, u^{H-\frac12}\int_s^u (r-s)^{H-\frac32}\, dr \]
\[ = H u^{-(H+1)} u^{2(H-\frac12)}(u-s)^{H-\frac12} \le H u^{-(H+1)} u^{2(H-\frac12)} u^{H-\frac12} = H u^{2H-\frac52} =: f(u) \]

and

\[ \int_{H/\alpha}^{\tau_t} u^{2H-\frac52}\, du = \frac{\tau_t^{2H-\frac32} - \bigl(\frac{H}{\alpha}\bigr)^{2H-\frac32}}{2H-\frac32} < \infty. \]
Hence we may change the order of integration, yielding

\[ I_3 = \int_{H/\alpha}^{\tau_t} H u^{-(H+1)}\int_0^{H/\alpha} K(u,s)\, dB_s\, du + \int_{H/\alpha}^{\tau_t} H u^{-(H+1)}\int_{H/\alpha}^{u} K(u,s)\, dB_s\, du \]
\[ = \int_0^{H/\alpha}\int_{H/\alpha}^{\tau_t} H u^{-(H+1)} K(u,s)\, du\, dB_s + \int_{H/\alpha}^{\tau_t}\int_s^{\tau_t} H u^{-(H+1)} K(u,s)\, du\, dB_s. \]
Then, using integration by parts and denoting K(t,s) := \int_s^t k(u,s)\, du, where

\[ k(t,s) := \Bigl(H-\tfrac12\Bigr)\Bigl(\frac{t}{s}\Bigr)^{H-\frac12}(t-s)^{H-\frac32}, \]
we obtain

\[ I_3 = \int_0^{H/\alpha}\Bigl( \Bigl(\frac{H}{\alpha}\Bigr)^{-H} K\Bigl(\frac{H}{\alpha},s\Bigr) - \tau_t^{-H} K(\tau_t,s) \Bigr)\, dB_s - \int_0^{H/\alpha}\int_{H/\alpha}^{\tau_t}\Bigl( -u^{-H}\Bigl(k(\tau_t,u) - k\Bigl(\frac{H}{\alpha},u\Bigr)\Bigr) \Bigr)\, du\, dB_s \]
\[ + \int_{H/\alpha}^{\tau_t}\bigl( s^{-H} K(s,s) - \tau_t^{-H} K(\tau_t,s) \bigr)\, dB_s - \int_{H/\alpha}^{\tau_t}\int_s^{\tau_t} u^{-H}\bigl(k(\tau_t,u) - k(s,u)\bigr)\, du\, dB_s. \]
Simplifying, we infer

\[ Y^{(\alpha)}_t = \Bigl(\frac{H}{\alpha}\Bigr)^{H} C\left( \int_0^{H/\alpha}\int_{H/\alpha}^{\tau_t} u^{-H} k(\tau_t,u)\, du\, dB_s - \int_0^{H/\alpha}\int_{H/\alpha}^{\tau_t} u^{-H} k\Bigl(\frac{H}{\alpha},u\Bigr)\, du\, dB_s \right. \]
\[ \left. - \int_{H/\alpha}^{\tau_t}\int_s^{\tau_t} u^{-H} k(\tau_t,u)\, du\, dB_s + \int_{H/\alpha}^{\tau_t}\int_s^{\tau_t} u^{-H} k(s,u)\, du\, dB_s \right). \]
Since some of the inner integrals are independent of s and \int_a^b dB_s = B_b - B_a, we conclude

\[ Y^{(\alpha)}_t = \Bigl(\frac{H}{\alpha}\Bigr)^{H} C\left( B_{H/\alpha}\int_{H/\alpha}^{\tau_t} u^{-H}\Bigl(k(\tau_t,u) - k\Bigl(\frac{H}{\alpha},u\Bigr)\Bigr)\, du + \int_{H/\alpha}^{\tau_t}\int_s^{\tau_t} u^{-H}\bigl(k(s,u) - k(\tau_t,u)\bigr)\, du\, dB_s \right). \]
If α = 1, we infer directly for Y^(1):

\[ Y^{(1)}_t = H^{H} C\left( B_H\int_H^{\tau^{(1)}_t} u^{-H}\bigl(k(\tau^{(1)}_t,u) - k(H,u)\bigr)\, du + \int_H^{\tau^{(1)}_t}\int_s^{\tau^{(1)}_t} u^{-H}\bigl(k(s,u) - k(\tau^{(1)}_t,u)\bigr)\, du\, dB_s \right). \]
Collecting the previous results, we have proved the following proposition.

Proposition 3.9. Let H > 1/2 and

\[ k^{(\alpha)}(t,s,u) := k(t,u) - k(s,u) = \Bigl(H-\tfrac12\Bigr)u^{\frac12-H}\Bigl( t^{H-\frac12}(t-u)^{H-\frac32} - s^{H-\frac12}(s-u)^{H-\frac32} \Bigr). \]

Then the Volterra representations of Y^(α)_t and Y^(1)_t are

\[ Y^{(\alpha)}_t = \Bigl(\frac{H}{\alpha}\Bigr)^{H} C\left( B_{H/\alpha}\int_{H/\alpha}^{\tau_t} u^{-H} k^{(\alpha)}\Bigl(\tau_t,\frac{H}{\alpha},u\Bigr)\, du + \int_{H/\alpha}^{\tau_t}\int_s^{\tau_t} u^{-H} k^{(\alpha)}(s,\tau_t,u)\, du\, dB_s \right), \]
\[ Y^{(1)}_t = H^{H} C\left( B_H\int_H^{\tau^{(1)}_t} u^{-H} k^{(\alpha)}\bigl(\tau^{(1)}_t,H,u\bigr)\, du + \int_H^{\tau^{(1)}_t}\int_s^{\tau^{(1)}_t} u^{-H} k^{(\alpha)}\bigl(s,\tau^{(1)}_t,u\bigr)\, du\, dB_s \right). \]
3.1.5 Hölder continuity of order ÎČ < H

Knowing that both processes Y^(α) and U^(D,γ) have the same kind of continuity properties is useful. We prove that they are both locally Hölder continuous of any order ÎČ < H. This result is strong enough for us, but a more general result can be found in ZĂ€hle [58]. We recall that the definition of local Hölder continuity is given in Definition 1.15. Azmoodeh and Viitasaari used the same kind of approximations of the fractional Ornstein–Uhlenbeck process of the second kind in [3, Lemma 2.4] as we use in the proof of Proposition 3.11.

Proposition 3.10 ([29], Prop. 3.4). The sample paths of the process Y^(α) are locally Hölder continuous of any order ÎČ < H.
Proof. Applying (3.4), we obtain

\[ Y^{(\alpha)}_t = e^{-\alpha t} Z_{\tau_t} - Z_{H/\alpha} + \alpha\int_0^t e^{-\alpha s} Z_{\tau_s}\, ds = X^{(D,\alpha)}_t - X^{(D,\alpha)}_0 + \alpha\int_0^t X^{(D,\alpha)}_s\, ds, \tag{3.15} \]

where X^(D,α)_t = e^{−αt} Z_{τ_t} is the Doob transformation of fBm and τ_t = (H/α)e^{αt/H}. We first prove that X^(D,α) is locally Hölder continuous of any order ÎČ < H. We compute

\[ \frac{\bigl|X^{(D,\alpha)}_t - X^{(D,\alpha)}_s\bigr|}{|t-s|^{\beta}} = \frac{\bigl|e^{-\alpha t}Z_{\tau_t} - e^{-\alpha s}Z_{\tau_s}\bigr|}{|t-s|^{\beta}} = \frac{\bigl|e^{-\alpha t}Z_{\tau_t} - e^{-\alpha s}Z_{\tau_t} + e^{-\alpha s}(Z_{\tau_t} - Z_{\tau_s})\bigr|}{|t-s|^{\beta}} \le \frac{\bigl|e^{-\alpha t} - e^{-\alpha s}\bigr|\,|Z_{\tau_t}|}{|t-s|^{\beta}} + e^{-\alpha s}\frac{|Z_{\tau_t} - Z_{\tau_s}|}{|t-s|^{\beta}}, \tag{3.16} \]

where ÎČ > 0 and 0 < s, t < T. For the first part of the sum (3.16) we show that

\[ \frac{\bigl|e^{-\alpha t} - e^{-\alpha s}\bigr|\,|Z_{\tau_t}|}{|t-s|^{\beta}} \le C_T \]

when ÎČ < H. By the Mean Value Theorem there exists Ο ∈ (s,t) such that

\[ \bigl|e^{-\alpha t} - e^{-\alpha s}\bigr| = \bigl|-\alpha e^{-\alpha\xi}(t-s)\bigr| \le \alpha|t-s|. \]

Hence

\[ \frac{\bigl|e^{-\alpha t} - e^{-\alpha s}\bigr|}{|t-s|^{\beta}} \le \alpha\frac{|t-s|}{|t-s|^{\beta}} = \alpha|t-s|^{1-\beta}. \]

Let S_T = \sup_{0\le u\le T}|Z_{\tau_u}|. Then, since ÎČ < H < 1, we infer

\[ \frac{\bigl|e^{-\alpha t} - e^{-\alpha s}\bigr|\,|Z_{\tau_t}|}{|t-s|^{\beta}} \le \alpha S_T|t-s|^{1-\beta} \le \alpha S_T T^{1-\beta} =: C_T. \]
Next, we consider the second part of the sum (3.16),

\[ e^{-\alpha s}\frac{|Z_{\tau_t} - Z_{\tau_s}|}{|t-s|^{\beta}} = \frac{|Z_{\tau_t} - Z_{\tau_s}|}{e^{\alpha s}|t-s|^{\beta}}. \]

Using the Mean Value Theorem we know that there exists Ο ∈ (s,t) such that

\[ e^{\frac{\alpha t}{H}} - e^{\frac{\alpha s}{H}} = \frac{\alpha}{H} e^{\frac{\alpha\xi}{H}}(t-s). \]

Let Ο be this value; then it is possible to write

\[ e^{-\alpha s}\frac{|Z_{\tau_t} - Z_{\tau_s}|}{|t-s|^{\beta}} = \frac{|Z_{\tau_t} - Z_{\tau_s}|}{e^{\alpha(s-\xi)}\bigl|e^{\frac{\alpha\xi}{\beta}}(t-s)\bigr|^{\beta}}, \]

and, approximating upwards since ÎČ < H, we obtain

\[ e^{-\alpha s}\frac{|Z_{\tau_t} - Z_{\tau_s}|}{|t-s|^{\beta}} \le \frac{|Z_{\tau_t} - Z_{\tau_s}|}{e^{\alpha(s-\xi)}\bigl|e^{\frac{\alpha\xi}{H}}(t-s)\bigr|^{\beta}}. \tag{3.17} \]

Now in (3.17) we use the already fixed Ο ∈ (s,t), found via the Mean Value Theorem, so that

\[ \frac{|Z_{\tau_t} - Z_{\tau_s}|}{e^{\alpha(s-\xi)}\bigl|e^{\frac{\alpha\xi}{H}}(t-s)\bigr|^{\beta}} = \frac{|Z_{\tau_t} - Z_{\tau_s}|}{e^{\alpha(s-\xi)}\bigl|\frac{H}{\alpha}\bigl(e^{\frac{\alpha t}{H}} - e^{\frac{\alpha s}{H}}\bigr)\bigr|^{\beta}}, \]

and we approximate further

\[ e^{-\alpha s}\frac{|Z_{\tau_t} - Z_{\tau_s}|}{|t-s|^{\beta}} \le \frac{|Z_{\tau_t} - Z_{\tau_s}|}{e^{\alpha(s-\xi)}\bigl|\frac{H}{\alpha}\bigl(e^{\frac{\alpha t}{H}} - e^{\frac{\alpha s}{H}}\bigr)\bigr|^{\beta}} \le e^{\alpha T}\frac{|Z_{\tau_t} - Z_{\tau_s}|}{\bigl|\frac{H}{\alpha}\bigl(e^{\frac{\alpha t}{H}} - e^{\frac{\alpha s}{H}}\bigr)\bigr|^{\beta}} = K_T\frac{|Z_{\tau_t} - Z_{\tau_s}|}{|\tau_t - \tau_s|^{\beta}}. \]

Hence we obtain

\[ \frac{\bigl|X^{(D,\alpha)}_t - X^{(D,\alpha)}_s\bigr|}{|t-s|^{\beta}} \le K_T\frac{|Z_{\tau_t} - Z_{\tau_s}|}{\bigl|\frac{H}{\alpha}e^{\frac{\alpha t}{H}} - \frac{H}{\alpha}e^{\frac{\alpha s}{H}}\bigr|^{\beta}} + C_T, \]
where K_T and C_T are random constants that are independent of s and t. Since fBm is locally Hölder continuous of any order ÎČ < H, we conclude that X^(D,α) is Hölder continuous of any order ÎČ < H. Lastly, we consider the integral term in (3.15). Since

\[ \Bigl|\int_s^t X^{(D,\alpha)}_u\, du\Bigr| \le \sup_{s\le u\le t}\bigl|X^{(D,\alpha)}_u\bigr|\,|t-s| \le \sup_{0\le u\le T}\bigl|X^{(D,\alpha)}_u\bigr|\,|t-s| =: C_T|t-s|, \]

the integral \int_s^t X^{(D,\alpha)}_u\, du is Hölder continuous of order 1, and therefore also of any order ÎČ < 1, since

\[ \frac{\bigl|\int_s^t X^{(D,\alpha)}_u\, du\bigr|}{|t-s|^{\beta}} = |t-s|^{1-\beta}\,\frac{\bigl|\int_s^t X^{(D,\alpha)}_u\, du\bigr|}{|t-s|} \le |t-s|^{1-\beta} C_T \le T^{1-\beta} C_T. \]

Hence the sample paths of the process Y^(α) are locally Hölder continuous of any order ÎČ < H.
Proposition 3.11 ([29], Prop. 3.4). The sample paths of the process U^(D,γ) are locally Hölder continuous of any order ÎČ < H.

Proof. Using exactly the same idea as in the proof of Proposition 3.10, we can prove that U^(D,γ) is locally Hölder continuous of any order ÎČ < H, since we have the following connection between U^(D,γ) and Y^(1):

\[ U^{(D,\gamma)}_t - e^{-\gamma t} U^{(D,\gamma)}_0 = Y^{(1)}_t - \gamma e^{-\gamma t}\int_0^t e^{\gamma s} Y^{(1)}_s\, ds, \qquad t > 0. \]
Using the previous equation we calculate the difference of U^(D,γ):

\[ \bigl|U^{(D,\gamma)}_t - U^{(D,\gamma)}_s\bigr| = \Bigl| \bigl(e^{-\gamma t} - e^{-\gamma s}\bigr)U^{(D,\gamma)}_0 + Y^{(1)}_t - Y^{(1)}_s - \Bigl(\gamma e^{-\gamma t}\int_0^t e^{\gamma r}Y^{(1)}_r\, dr - \gamma e^{-\gamma s}\int_0^s e^{\gamma r}Y^{(1)}_r\, dr\Bigr) \Bigr| \]
\[ \le \bigl|e^{-\gamma t} - e^{-\gamma s}\bigr|\,\bigl|U^{(D,\gamma)}_0\bigr| + \bigl|Y^{(1)}_t - Y^{(1)}_s\bigr| + \Bigl|\gamma\Bigl(\int_0^t e^{-\gamma p}Y^{(1)}_{t-p}\, dp - \int_0^s e^{-\gamma p}Y^{(1)}_{s-p}\, dp\Bigr)\Bigr| \]
\[ \le \bigl|e^{-\gamma t} - e^{-\gamma s}\bigr|\,\bigl|U^{(D,\gamma)}_0\bigr| + \bigl|Y^{(1)}_t - Y^{(1)}_s\bigr| + \Bigl|\gamma\Bigl(\int_0^s e^{-\gamma p}\bigl(Y^{(1)}_{t-p} - Y^{(1)}_{s-p}\bigr)\, dp + \int_s^t e^{-\gamma p}Y^{(1)}_{t-p}\, dp\Bigr)\Bigr|, \]

where
‱ using the Mean Value Theorem we know that there is Ο ∈ (s,t) such that

\[ \bigl|e^{-\gamma t} - e^{-\gamma s}\bigr| = \bigl|-\gamma e^{-\gamma\xi}(t-s)\bigr|. \]

Denoting C_{T_0} := γ|U^{(D,γ)}_0|, we infer

\[ \bigl|e^{-\gamma t} - e^{-\gamma s}\bigr|\,\bigl|U^{(D,\gamma)}_0\bigr| \le \bigl|e^{-\gamma\xi} C_{T_0}(t-s)\bigr| \le C_{T_0}|t-s|; \]

‱ using the Hölder continuity of Y^(1) we obtain

\[ \bigl|Y^{(1)}_t - Y^{(1)}_s\bigr| \le C_{T_1}|t-s|^{\beta}; \]

‱ using again the Hölder continuity of Y^(1) we deduce

\[ \Bigl|\gamma\int_0^s e^{-\gamma p}\bigl(Y^{(1)}_{t-p} - Y^{(1)}_{s-p}\bigr)\, dp\Bigr| \le \Bigl|\gamma\int_0^s e^{-\gamma p} C_{T_2}|t-s|^{\beta}\, dp\Bigr| \le C_{T_2}|t-s|^{\beta}\bigl(1-e^{-\gamma s}\bigr) \le C_{T_2}|t-s|^{\beta}; \]

‱ lastly, the continuity of Y^(1) implies that

\[ \Bigl|\gamma\int_s^t e^{-\gamma p}Y^{(1)}_{t-p}\, dp\Bigr| \le \gamma\sup_{s\le p\le t}\bigl|Y^{(1)}_{t-p}\bigr|\int_s^t e^{-\gamma p}\, dp \]

and

\[ \Bigl|\gamma\int_s^t e^{-\gamma p}Y^{(1)}_{t-p}\, dp\Bigr| \le \gamma\sup_{0\le p\le T}\bigl|Y^{(1)}_{t-p}\bigr|\, e^{-\gamma\xi}|t-s| \le C_{T_3}|t-s|. \]
Combining the previous approximations, we infer

\[ \bigl|U^{(D,\gamma)}_t - U^{(D,\gamma)}_s\bigr| \le \bigl|e^{-\gamma t} - e^{-\gamma s}\bigr|\,\bigl|U^{(D,\gamma)}_0\bigr| + \bigl|Y^{(1)}_t - Y^{(1)}_s\bigr| + \Bigl|\gamma\int_0^s e^{-\gamma p}\bigl(Y^{(1)}_{t-p} - Y^{(1)}_{s-p}\bigr)\, dp + \gamma\int_s^t e^{-\gamma p}Y^{(1)}_{t-p}\, dp\Bigr| \]
\[ \le C_{T_0}|t-s| + C_{T_1}|t-s|^{\beta} + C_{T_2}|t-s|^{\beta} + C_{T_3}|t-s|. \]

Denoting C_{T_a} := C_{T_0} + C_{T_3} and C_{T_b} := C_{T_1} + C_{T_2}, we are able to write

\[ \frac{\bigl|U^{(D,\gamma)}_t - U^{(D,\gamma)}_s\bigr|}{|t-s|^{\beta}} \le C_{T_a}|t-s|^{1-\beta} + C_{T_b} \le C_{T_a} T^{1-\beta} + C_{T_b}. \]

As we have seen above, U^(D,γ) is Hölder continuous of order ÎČ whenever Y^(1) is Hölder continuous of order ÎČ.
3.2 Covariance kernels

In this section we consider the kernel representations of the fOU processes and of their driving processes. Since fractional Brownian motion lies behind all of these processes, we recall its kernel representation here:

\[ E\bigl((Z_{t_2} - Z_{t_1})(Z_{s_2} - Z_{s_1})\bigr) = \int_{s_1}^{s_2}\int_{t_1}^{t_2} H(2H-1)\,|u-v|^{2H-2}\, du\, dv, \]

for all H ∈ (0, 1/2) âˆȘ (1/2, 1).
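For H > 1/2 this kernel representation can be checked numerically against the standard fBm covariance E(Z_t Z_s) = œ(t^{2H} + s^{2H} − |t−s|^{2H}); a small sketch with midpoint quadrature (assuming numpy; the interval endpoints and H are illustrative):

```python
import numpy as np

H = 0.7
s1, s2, t1, t2 = 0.5, 1.0, 2.0, 3.0   # s1 < s2 < t1 < t2

def cov_Z(t, s):
    """Standard fBm covariance E(Z_t Z_s) = 0.5 (t^2H + s^2H - |t-s|^2H)."""
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

# Covariance of the two increments, computed from cov_Z
lhs = cov_Z(t2, s2) - cov_Z(t2, s1) - cov_Z(t1, s2) + cov_Z(t1, s1)

# Double integral of the kernel H(2H-1)|u - v|^(2H-2), midpoint rule
n = 1000
u = t1 + (t2 - t1) * (np.arange(n) + 0.5) / n
v = s1 + (s2 - s1) * (np.arange(n) + 0.5) / n
du, dv = (t2 - t1) / n, (s2 - s1) / n
rhs = np.sum(H * (2 * H - 1) * (u[:, None] - v[None, :]) ** (2 * H - 2)) * du * dv

assert abs(lhs - rhs) < 1e-5
```

The intervals are taken disjoint so that the integrand is smooth; on overlapping intervals the diagonal singularity would need a more careful quadrature.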
3.2.1 The Doob transformation of fBm (fOU)

Our aim is to find the kernel representation of the covariance of the Doob transformation. Differentiating twice the covariance (2.3) of the Doob transformation, given in the proof of Proposition 2.1, yields
\[ \frac{\partial}{\partial t}\frac{\partial}{\partial s} E\bigl(X^{(D,\alpha)}_t X^{(D,\alpha)}_s\bigr) = \frac{\partial}{\partial t}\frac{\partial}{\partial s}\Bigl( e^{-\alpha(t+s)}\,\frac12\Bigl(\frac{H}{\alpha}\Bigr)^{2H}\Bigl( e^{2\alpha t} + e^{2\alpha s} - e^{2\alpha t}\bigl(1-e^{-\frac{\alpha(t-s)}{H}}\bigr)^{2H} \Bigr)\Bigr) \]
\[ = \frac12\Bigl(\frac{H}{\alpha}\Bigr)^{2H} e^{-\alpha(t+s)}\alpha^2\Bigl( -e^{2\alpha s} - e^{2\alpha t} + e^{2\alpha t}\bigl(1-e^{-\frac{\alpha(t-s)}{H}}\bigr)^{2H} + \Bigl(4-\frac{2}{H}\Bigr)e^{2\alpha t}\bigl(1-e^{-\frac{\alpha(t-s)}{H}}\bigr)^{2H-1} e^{-\frac{\alpha(t-s)}{H}} + \Bigl(4-\frac{2}{H}\Bigr)e^{2\alpha t}\bigl(1-e^{-\frac{\alpha(t-s)}{H}}\bigr)^{2H-2}\bigl(e^{-\frac{\alpha(t-s)}{H}}\bigr)^2 \Bigr). \]

This leads to

\[ \frac{\partial}{\partial t}\frac{\partial}{\partial s} E\bigl(X^{(D,\alpha)}_t X^{(D,\alpha)}_s\bigr) = \frac{\alpha^2}{2}\Bigl(\frac{H}{\alpha}\Bigr)^{2H}\bigl(-e^{-\alpha(t-s)} - e^{\alpha(t-s)}\bigr) \]
\[ + \frac{\alpha^2}{2}\Bigl(\frac{H}{\alpha}\Bigr)^{2H} e^{\alpha(t-s)}\bigl(1-e^{-\frac{\alpha(t-s)}{H}}\bigr)^{2H}\Bigl( 1 + \Bigl(4-\frac{2}{H}\Bigr)\frac{e^{-\frac{\alpha(t-s)}{H}}}{1-e^{-\frac{\alpha(t-s)}{H}}} + \Bigl(4-\frac{2}{H}\Bigr)\Bigl(\frac{e^{-\frac{\alpha(t-s)}{H}}}{1-e^{-\frac{\alpha(t-s)}{H}}}\Bigr)^2 \Bigr). \]
The kernel is quite complicated. Nevertheless, it is still good to know the kernel representation of the Doob transformation. We have just proved the following proposition.
Proposition 3.12. For H ∈ (0,1) and α > 0, the covariance kernel of the increments of X^(D,α) is

\[ k_{X^{(D,\alpha)}}(u,v) = \frac{\alpha^2}{2}\Bigl(\frac{H}{\alpha}\Bigr)^{2H}\bigl(-e^{-\alpha(v-u)} - e^{\alpha(v-u)}\bigr) \]
\[ + \frac{\alpha^2}{2}\Bigl(\frac{H}{\alpha}\Bigr)^{2H} e^{\alpha(v-u)}\bigl(1-e^{-\frac{\alpha(v-u)}{H}}\bigr)^{2H}\Bigl( 1 + \Bigl(4-\frac{2}{H}\Bigr)\frac{e^{-\frac{\alpha(v-u)}{H}}}{1-e^{-\frac{\alpha(v-u)}{H}}} + \Bigl(4-\frac{2}{H}\Bigr)\Bigl(\frac{e^{-\frac{\alpha(v-u)}{H}}}{1-e^{-\frac{\alpha(v-u)}{H}}}\Bigr)^2 \Bigr), \]

and the covariance of the increments of X^(D,α) is

\[ E\Bigl( \bigl(X^{(D,\alpha)}_{t_2} - X^{(D,\alpha)}_{t_1}\bigr)\bigl(X^{(D,\alpha)}_{s_2} - X^{(D,\alpha)}_{s_1}\bigr) \Bigr) = \int_{t_1}^{t_2}\int_{s_1}^{s_2} k_{X^{(D,\alpha)}}(u,v)\, dv\, du, \tag{3.18} \]

where s_1 < s_2, t_1 < t_2.
3.2.2 Covariance kernels of Y^(α) and Y^(1) when H > 1/2

We note that in the case H ∈ (1/2, 1) the increments of fBm are positively correlated and the covariance kernel has some nice integrability properties; see, for example, Pipiras and Taqqu [51, p. 16], Equation (4.1) and the discussion after it.
Proposition 3.13 ([29], Prop. 3.5). For H ∈ (1/2, 1), the covariance kernel of the increments of Y^(α) is

\[ E\Bigl( \bigl(Y^{(\alpha)}_{t_2} - Y^{(\alpha)}_{t_1}\bigr)\bigl(Y^{(\alpha)}_{s_2} - Y^{(\alpha)}_{s_1}\bigr) \Bigr) = C(\alpha,H)\int_{t_1}^{t_2}\int_{s_1}^{s_2} \frac{e^{-\frac{\alpha(1-H)(u-v)}{H}}}{\bigl|1-e^{-\frac{\alpha(u-v)}{H}}\bigr|^{2-2H}}\, du\, dv, \tag{3.19} \]

where s_1 < s_2, t_1 < t_2 and

\[ C(\alpha,H) := H(2H-1)\Bigl(\frac{\alpha}{H}\Bigr)^{2-2H}. \]
Proof. We need Proposition 2.2 of Gripenberg and Norros [22], stating that

\[ E\Bigl( \int_{\mathbb{R}} f(s)\, dZ_s \int_{\mathbb{R}} g(t)\, dZ_t \Bigr) = H(2H-1)\int_{\mathbb{R}}\int_{\mathbb{R}} f(s)g(t)|s-t|^{2H-2}\, dt\, ds, \tag{3.20} \]

when H ∈ (1/2, 1) and f, g ∈ LÂČ(ℝ) ∩ LÂč(ℝ).

Note that the integrals in (3.20) are over the real axis, but we also have a similar result for bounded intervals when using the indicator function:

\[ \int_a^b f(t)\, dt = \int_{\mathbb{R}} f(t)\,\mathbf{1}_{[a,b]}(t)\, dt. \]
We may indeed modify the process Y^(α) to a form that fits into (3.20). Substituting u := (H/α)e^{αs/H}, we obtain

\[ e^{-\alpha s} = \Bigl(\frac{1}{e^{\frac{\alpha s}{H}}}\Bigr)^{H} = \Bigl(\frac{\alpha}{H}\Bigr)^{-H} u^{-H}. \]

Therefore

\[ Y^{(\alpha)}_t = \int_0^t e^{-\alpha s}\, dZ_{\tau_s} = \Bigl(\frac{H}{\alpha}\Bigr)^{H}\int_{\frac{H}{\alpha}}^{\frac{H}{\alpha}e^{\frac{\alpha t}{H}}} u^{-H}\, dZ_u. \]

We consider the function

\[ f(s) = \begin{cases} s^{-H}, & \text{if } \tau_0 < s < \tau_t, \\ 0, & \text{otherwise}, \end{cases} \]

where τ_t = (H/α)e^{αt/H}. Since in the case H ∈ (1/2, 1) the function u^{−H} belongs to LÂČ(I) ∩ LÂč(I), where I = [τ_0, τ_t] ⊂ ℝ, we can change the variables twice to obtain

\[ E\Bigl( \bigl(Y^{(\alpha)}_{t_2} - Y^{(\alpha)}_{t_1}\bigr)\bigl(Y^{(\alpha)}_{s_2} - Y^{(\alpha)}_{s_1}\bigr) \Bigr) = E\left( \Bigl(\frac{H}{\alpha}\Bigr)^{H}\int_{\tau_{t_1}}^{\tau_{t_2}} p^{-H}\, dZ_p\; \Bigl(\frac{H}{\alpha}\Bigr)^{H}\int_{\tau_{s_1}}^{\tau_{s_2}} r^{-H}\, dZ_r \right) \]
\[ = \Bigl(\frac{H}{\alpha}\Bigr)^{2H} H(2H-1)\int_{\tau_{t_1}}^{\tau_{t_2}}\int_{\tau_{s_1}}^{\tau_{s_2}} p^{-H} r^{-H} |p-r|^{2H-2}\, dp\, dr = \Bigl(\frac{\alpha}{H}\Bigr)^{2-2H} H(2H-1)\int_{t_1}^{t_2}\int_{s_1}^{s_2} e^{-\frac{1-H}{H}\alpha(t-s)}\,\bigl|1-e^{-\frac{\alpha(t-s)}{H}}\bigr|^{2H-2}\, dt\, ds, \]

thereby completing the proof.
The next corollary is quite obvious, but its proof is a nice example of the behaviour of the exponential function.

Corollary 3.14 ([29], Prop. 3.5). In the case H ∈ (1/2, 1), the kernel in the representation of the covariance of the increments of the process Y^(α) is symmetric.

Proof. We recall the kernel in the representation of the covariance of Y^(α), given in Proposition 3.13,

\[ r_{\alpha,H}(u,v) := C(\alpha,H)\,\frac{e^{-\frac{\alpha(1-H)(u-v)}{H}}}{\bigl|1-e^{-\frac{\alpha(u-v)}{H}}\bigr|^{2-2H}}. \]
Note that r_{α,H}(u,v) is symmetric. This property follows from the computations

\[ r_{\alpha,H}(u,v) = C(\alpha,H)\frac{e^{-\frac{\alpha(1-H)(u-v)}{H}}}{\bigl|1-e^{-\frac{\alpha(u-v)}{H}}\bigr|^{2-2H}} = C(\alpha,H)\frac{e^{\frac{\alpha(1-H)(v-u)}{H}}}{\bigl|1-e^{\frac{\alpha(v-u)}{H}}\bigr|^{2-2H}} = C(\alpha,H)\frac{e^{\frac{\alpha(1-H)(v-u)}{H}}}{\Bigl|\frac{e^{-\frac{\alpha(v-u)}{H}}-1}{e^{-\frac{\alpha(v-u)}{H}}}\Bigr|^{2-2H}} \]
\[ = C(\alpha,H)\frac{e^{\frac{\alpha(1-H)(v-u)}{H} - \frac{2\alpha(1-H)(v-u)}{H}}}{\bigl|1-e^{-\frac{\alpha(v-u)}{H}}\bigr|^{2-2H}} = C(\alpha,H)\frac{e^{-\frac{\alpha(1-H)(v-u)}{H}}}{\bigl|1-e^{-\frac{\alpha(v-u)}{H}}\bigr|^{2-2H}} = r_{\alpha,H}(v,u), \]

which establish the desired property.
3.2.3 Covariance kernel of fOU(2) when H > 1/2

We assume again that H ∈ (1/2, 1), since the integrability properties of the covariance kernel (see, for example, Pipiras and Taqqu [51, p. 16]) are valid in this interval.
Proposition 3.15 ([29], Prop. 3.10). The covariance of the process U^(D,γ) (fOU(2)), for H ∈ (1/2, 1), has the kernel representation

\[ E\bigl(U^{(D,\gamma)}_t U^{(D,\gamma)}_s\bigr) = (2H-1)H^{2H-1} e^{-\gamma(t+s)} \int_{-\infty}^{t}\int_{-\infty}^{s} \frac{e^{(\gamma-1+\frac{1}{H})(u+v)}}{\bigl|e^{\frac{u}{H}} - e^{\frac{v}{H}}\bigr|^{2(1-H)}}\, du\, dv. \tag{3.21} \]
Proof. The proof has the same elements as the proof of Proposition 3.13. However, the situation is not identical, because now we consider the expectation of the processes instead of the expectation of their differences. Therefore, we do not have integrals over restricted intervals; instead we have double integrals over unrestricted intervals. To show that property (3.20) holds for the functions f and g, we need an extended version of this property, stated in [51, Eq. (4.1)]. For using this Equation (4.1) of Pipiras and Taqqu the functions f and g have to satisfy the condition

\[ \int_{\mathbb{R}}\int_{\mathbb{R}} |f(s)||g(t)||s-t|^{2H-2}\, dt\, ds < \infty. \tag{3.22} \]

We recall that in Definition 3.6 of fOU(2), the process U^(D,γ) is given by

\[ U^{(D,\gamma)}_t = e^{-\gamma t}\int_{-\infty}^{t} e^{(\gamma-1)s}\, dZ_{\tau^{(1)}_s}, \]
3.2. Covariance kernels 73
where Ï (1)t = He t
H . Changing the variable u by He sH , we infer
U(D,Îł)t = eâÎłt
Ï(1)tâ«
0
(( uH
)H)Îłâ1dZu
= eâÎłtHâH(Îłâ1)
Ï(1)tâ«
0
uH(Îłâ1)dZu.
Substitution is allowed, since the integral is the pathwise Riemann-Stieltjes integral. Westill have to check that the condition (3.22) is valid for the functions f(u) = g(u) =uH(Îłâ1)1(0,Ï(1)
t )(u). We make the following calculationsâ«R
â«R|f(p)||g(r)||pâ r|2Hâ2dpdr
=Ï
(1)tâ«
0
Ï(1)tâ«
0
pH(Îłâ1)rH(Îłâ1)|pâ r|2Hâ2dpdr
=1â«
0
1â«0
(Ï (1)t u)H(Îłâ1)(Ï (1)
t v)H(Îłâ1)|Ï (1)t uâ Ï (1)
t v|2Hâ2(Ï (1)t )2dudv
=(Ï
(1)t
)2HÎł1â«
0
1â«0
(uv)H(Îłâ1)|uâ v|2Hâ2dudv,
and by the symmetry of the integrand we inferâ«R
â«R|f(p)||g(r)||pâ r|2Hâ2dpdr
= 2(Ï
(1)t
)2HÎł1â«
0
duuH(Îłâ1)uâ«
0
vH(Îłâ1)(uâ v)2Hâ2dv
= 2(Ï
(1)t
)2HÎł1â«
0
duuH(Îłâ1)1â«
0
(uw)H(Îłâ1)(uâ uw)2Hâ2udw
= 2(Ï
(1)t
)2HÎł1â«
0
duuH(Îłâ1)uH(Îł+1)â11â«
0
wH(Îłâ1)(1â w)2Hâ2dw
= 2(Ï
(1)t
)2HÎł1â«
0
duuH(Îłâ1)uH(Îł+1)â1Beta (H(Îł â 1) + 1, 2H â 1)
=
(Ï
(1)t
)2HÎł
ÎłHBeta (H(Îł â 1) + 1, 2H â 1) <â,
where Beta stands for the Beta function Beta(a, b) =â« 1
0xaâ1(1 â x)bâ1dx, which is
defined and finite for any positive a and b.
Hence, we can calculate the covariance as follows:

\[ E\bigl(U^{(D,\gamma)}_t U^{(D,\gamma)}_s\bigr) = E\left( e^{-\gamma t} H^{-H(\gamma-1)}\int_0^{\tau^{(1)}_t} u^{H(\gamma-1)}\, dZ_u\; e^{-\gamma s} H^{-H(\gamma-1)}\int_0^{\tau^{(1)}_s} v^{H(\gamma-1)}\, dZ_v \right) \]
\[ = e^{-\gamma(t+s)} H^{-2H(\gamma-1)} H(2H-1)\int_0^{\tau^{(1)}_t}\int_0^{\tau^{(1)}_s} u^{H(\gamma-1)} v^{H(\gamma-1)} |u-v|^{2H-2}\, du\, dv. \]

Substituting u = He^{p/H} and v = He^{r/H}, we obtain

\[ E\bigl(U^{(D,\gamma)}_t U^{(D,\gamma)}_s\bigr) = e^{-\gamma(t+s)} H^{-2H(\gamma-1)} H(2H-1)\int_{-\infty}^{t}\int_{-\infty}^{s} \bigl(He^{\frac{p}{H}}\bigr)^{H(\gamma-1)}\bigl(He^{\frac{r}{H}}\bigr)^{H(\gamma-1)}\bigl|He^{\frac{p}{H}} - He^{\frac{r}{H}}\bigr|^{2H-2} e^{\frac{p}{H}} e^{\frac{r}{H}}\, dp\, dr \]
\[ = (2H-1)H^{2H-1} e^{-\gamma(t+s)}\int_{-\infty}^{t}\int_{-\infty}^{s} e^{(\gamma-1+\frac{1}{H})(p+r)}\bigl|e^{\frac{p}{H}} - e^{\frac{r}{H}}\bigr|^{2H-2}\, dp\, dr, \]

verifying the assertion.
3.2.4 Covariance and variance of Y^(α)

We recall the definition (3.1) of Y^(α):

\[ Y^{(\alpha)}_t = \int_0^t e^{-\alpha s}\, dZ_{\tau_s}, \]

where τ_s = (H/α)e^{αs/H}. In this section we consider the covariance of Y^(α) and the variance of Y^(α) and of its increments. We also examine the asymptotic behaviour of the variance and the covariance as t → ∞. For the sake of readability, we include many propositions with their brief proofs.
Corollary 3.16 ([29], Cor. 3.7). In the case H ∈ (1/2, 1), the increments of Y^(α) are positively correlated.

Proof. The coefficient in Proposition 3.13,

\[ C(\alpha,H) = H(2H-1)\Bigl(\frac{\alpha}{H}\Bigr)^{2-2H}, \]

is always positive since H ∈ (1/2, 1). The kernel in Proposition 3.13,

\[ \frac{e^{-\frac{\alpha(1-H)(u-v)}{H}}}{\bigl|1-e^{-\frac{\alpha(u-v)}{H}}\bigr|^{2-2H}}, \]

is obviously positive. Thus we conclude that the covariance of the increments of Y^(α) is positive and the increments of Y^(α) are positively correlated.
Proposition 3.17 ([29], Prop. 3.8). In the case H ∈ (1/2, 1), the variance of the increments of Y^(α) is

\[ E\bigl( (Y^{(\alpha)}_t - Y^{(\alpha)}_s)^2 \bigr) = 2\int_0^{t-s} (t-s-x)\,k_{\alpha,H}(x)\, dx. \tag{3.23} \]

Proof. Applying Proposition 3.13, we infer

\[ E\bigl( (Y^{(\alpha)}_t - Y^{(\alpha)}_s)^2 \bigr) = \int_s^t\int_s^t r_{\alpha,H}(u,v)\, dv\, du = 2\int_s^t\int_s^u r_{\alpha,H}(u,v)\, dv\, du, \]

by Corollary 3.14. Substituting x = u and y = u − v, we obtain

\[ E\bigl( (Y^{(\alpha)}_t - Y^{(\alpha)}_s)^2 \bigr) = 2\int_s^t\int_0^{x-s} k_{\alpha,H}(y)\, dy\, dx = 2\int_0^{t-s}\int_{y+s}^{t} k_{\alpha,H}(y)\, dx\, dy. \tag{3.24} \]

We are allowed to change the order of integration in (3.24) by the Fubini theorem [54, Theorem 7.8.], since k_{α,H} is positive and continuous. Note that we have to take into account that the limits of the integral change, too. Calculating the inner integral,

\[ E\bigl( (Y^{(\alpha)}_t - Y^{(\alpha)}_s)^2 \bigr) = 2\int_0^{t-s} k_{\alpha,H}(y)\bigl(t-(y+s)\bigr)\, dy, \]

verifying the statement.
The fact that the covariance in the next proposition is also positive follows from Corollary 3.16.

Proposition 3.18 ([29], Prop. 3.8). In the case H ∈ (1/2, 1), the covariance of Y^(α) is

\[ E\bigl(Y^{(\alpha)}_t Y^{(\alpha)}_s\bigr) = \int_0^t (t-x)k_{\alpha,H}(x)\, dx + \int_0^s (s-x)k_{\alpha,H}(x)\, dx - \int_0^{t-s} (t-s-x)k_{\alpha,H}(x)\, dx. \tag{3.25} \]
Proof. Using the identity ab = œ(aÂČ + bÂČ − (a−b)ÂČ) for all a, b ∈ ℝ, we observe that

\[ E\bigl(Y^{(\alpha)}_t Y^{(\alpha)}_s\bigr) = \tfrac12\Bigl( E\bigl((Y^{(\alpha)}_t)^2\bigr) + E\bigl((Y^{(\alpha)}_s)^2\bigr) - E\bigl((Y^{(\alpha)}_t - Y^{(\alpha)}_s)^2\bigr) \Bigr). \]

Now, from Proposition 3.17 we conclude the result as follows:

\[ E\bigl(Y^{(\alpha)}_t Y^{(\alpha)}_s\bigr) = \int_0^t (t-x)k_{\alpha,H}(x)\, dx + \int_0^s (s-x)k_{\alpha,H}(x)\, dx - \int_0^{t-s} (t-s-x)k_{\alpha,H}(x)\, dx. \]
In this subsection we consider the covariance and the variance of Y^(α) as the time parameter tends to infinity. We rewrite the symmetric kernel of Corollary 3.14 as

\[ r_{\alpha,H}(u,v) = k_{\alpha,H}(u-v), \]

where

\[ k_{\alpha,H}(x) = C(\alpha,H)\,\frac{e^{-\frac{\alpha(1-H)x}{H}}}{\bigl|1-e^{-\frac{\alpha x}{H}}\bigr|^{2-2H}}. \tag{3.26} \]
Proposition 3.19 ([29], Prop. 3.8). In the case H ∈ (1/2, 1), the variance of Y^(α) satisfies

\[ E\bigl((Y^{(\alpha)}_t)^2\bigr) = O(t) \quad \text{as } t\to\infty. \tag{3.27} \]

Proof. Applying Proposition 3.17 and substituting s = 0, we obtain

\[ E\bigl((Y^{(\alpha)}_t)^2\bigr) = 2\int_0^t (t-x)k_{\alpha,H}(x)\, dx = 2t\int_0^t k_{\alpha,H}(x)\, dx - 2\int_0^t x\,k_{\alpha,H}(x)\, dx. \tag{3.28} \]

Using (3.28) we prove that E((Y^(α)_t)ÂČ) = O(t). For this, it is sufficient to show the properties

\[ \int_0^{\infty} k_{\alpha,H}(x)\, dx < \infty \tag{3.29} \]

and

\[ \int_0^{\infty} x\,k_{\alpha,H}(x)\, dx < \infty. \tag{3.30} \]

We do not need to approximate the integral in (3.29), since it is possible to calculate its exact value. Substituting u = e^{−αx/H}, we have dx = −(H/α)(1/u)du, and

\[ \lim_{t\to\infty}\int_0^t k_{\alpha,H}(x)\, dx = \lim_{t\to\infty}\int_{e^{-\frac{\alpha t}{H}}}^{1} C(\alpha,H)\frac{H}{\alpha}\, u^{-H}(1-u)^{2H-2}\, du = C(\alpha,H)\frac{H}{\alpha}\lim_{t\to 0}\int_t^1 u^{-H}(1-u)^{2H-2}\, du = C(\alpha,H)\frac{H}{\alpha}\,\mathrm{Beta}(1-H,\,2H-1). \]

To prove (3.30), we have to study both limits, since there may be difficulties at zero and at infinity. We manipulate the kernel:

\[ x\,k_{\alpha,H}(x) = C(\alpha,H)\frac{x\,e^{-\frac{\alpha(1-H)x}{H}}}{\bigl|1-e^{-\frac{\alpha x}{H}}\bigr|^{2-2H}} = C(\alpha,H)\frac{x\,e^{\frac{\alpha(1-H)x}{H}}}{\bigl|e^{\frac{\alpha x}{H}}-1\bigr|^{2-2H}}. \tag{3.31} \]

We use l'HĂŽpital's rule to obtain

\[ \lim_{x\to 0}\frac{x}{\bigl|e^{\frac{\alpha x}{H}}-1\bigr|^{2-2H}} = \lim_{x\to 0}\frac{1}{(2-2H)\frac{\alpha}{H} e^{\frac{\alpha x}{H}}\bigl|e^{\frac{\alpha x}{H}}-1\bigr|^{1-2H}} = \lim_{x\to 0}\frac{H\bigl|e^{\frac{\alpha x}{H}}-1\bigr|^{2H-1}}{(2-2H)\alpha\, e^{\frac{\alpha x}{H}}} = 0, \]

since H ∈ (1/2, 1). Therefore

\[ \int_0^{\varepsilon} x\,k_{\alpha,H}(x)\, dx < \infty \]

for any Δ > 0.

Thus, we have to study the integral in (3.30) for large values of x. For all α > 0 and H ∈ (1/2, 1) we can always find n such that

\[ 1-e^{-\frac{\alpha x}{H}} > \tfrac12, \]

when x > n. Let a > n; we approximate the integral

\[ \int_a^{\infty} \frac{x\,e^{-\frac{\alpha x(1-H)}{H}}}{\bigl|1-e^{-\frac{\alpha x}{H}}\bigr|^{2-2H}}\, dx < \int_a^{\infty} 2^{2-2H}\, x\,e^{-\frac{\alpha x(1-H)}{H}}\, dx < \infty, \]

and therefore

\[ \int_a^{\infty} x\,k_{\alpha,H}(x)\, dx < \infty. \]

We have finally verified that

\[ \int_0^{\infty} k_{\alpha,H}(x)\, dx < \infty \quad\text{and}\quad \int_0^{\infty} x\,k_{\alpha,H}(x)\, dx < \infty. \]

Since E((Y^(α)_t)ÂČ) < Kt for some finite K, we conclude that E((Y^(α)_t)ÂČ) = O(t) as t → ∞, thereby completing the proof.
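The exact value obtained for (3.29) can be verified numerically; a small sketch (assuming numpy; α = 1 and H = 0.7 are illustrative) integrating k_{α,H} on a logarithmic grid and comparing with the closed form C(α,H)(H/α)Beta(1−H, 2H−1):

```python
import math
import numpy as np

H, alpha = 0.7, 1.0
C = H * (2 * H - 1) * (alpha / H) ** (2 - 2 * H)        # C(alpha, H)

def k(x):
    """Kernel k_{alpha,H}(x) from (3.26)."""
    return (C * np.exp(-alpha * (1 - H) * x / H)
            / (1 - np.exp(-alpha * x / H)) ** (2 - 2 * H))

# Midpoint rule in log x over (1e-9, 40]; both tails are negligible
g = np.linspace(math.log(1e-9), math.log(40.0), 200_001)
mid = np.exp(0.5 * (g[1:] + g[:-1]))
numeric = float(np.sum(k(mid) * mid * np.diff(g)))

def beta(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

exact = C * (H / alpha) * beta(1 - H, 2 * H - 1)
assert abs(numeric - exact) / exact < 1e-3
```

The logarithmic grid is used because the integrand has an integrable singularity of order x^{−(2−2H)} at the origin.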
Corollary 3.20 ([29], Prop. 3.8). The covariance of Y^(α) satisfies

\[ \lim_{t\to\infty} E\bigl(Y^{(\alpha)}_t Y^{(\alpha)}_s\bigr) = s\int_0^{\infty} k_{\alpha,H}(x)\, dx + \int_0^s (s-x)k_{\alpha,H}(x)\, dx. \tag{3.32} \]

Proof. The proof is a straightforward calculation. Applying Proposition 3.18, we obtain

\[ \lim_{t\to\infty} E\bigl(Y^{(\alpha)}_t Y^{(\alpha)}_s\bigr) = \lim_{t\to\infty}\left( \int_0^t (t-x)k_{\alpha,H}(x)\, dx + \int_0^s (s-x)k_{\alpha,H}(x)\, dx - \int_0^{t-s}(t-s-x)k_{\alpha,H}(x)\, dx \right) \]
\[ = \int_0^s (s-x)k_{\alpha,H}(x)\, dx + \lim_{t\to\infty}\left( \int_0^t (t-x)k_{\alpha,H}(x)\, dx - \int_0^{t-s}(t-x)k_{\alpha,H}(x)\, dx + \int_0^{t-s} s\,k_{\alpha,H}(x)\, dx \right), \]

where we can combine the integrals, yielding

\[ \lim_{t\to\infty} E\bigl(Y^{(\alpha)}_t Y^{(\alpha)}_s\bigr) = \int_0^s (s-x)k_{\alpha,H}(x)\, dx + \lim_{t\to\infty}\left( \int_{t-s}^{t}(t-x)k_{\alpha,H}(x)\, dx + \int_0^{t-s} s\,k_{\alpha,H}(x)\, dx \right) = \int_0^s (s-x)k_{\alpha,H}(x)\, dx + \int_0^{\infty} s\,k_{\alpha,H}(x)\, dx, \]

thereby completing the proof.
3.2.5 Increment process of Y^(α)

We are now ready to define the increment process of Y^(α) and to prove that it is a short-range dependent stationary process. We already proved in Proposition 3.8 that the process Y^(α) itself has stationary increments. Next we prove the same result in a more useful form.

Proposition 3.21 ([29], Cor. 3.7). In the case H ∈ (1/2, 1), the increment process of Y^(α),

IY := {IY_n : n = 0, 1, 
} = {Y^(α)_{n+1} − Y^(α)_n : n = 0, 1, 
},

is stationary for any α > 0.
Proof. According to Proposition 3.13, the covariance of the increments of Y^(α) is

\[ E\Bigl( \bigl(Y^{(\alpha)}_{n+1} - Y^{(\alpha)}_n\bigr)\bigl(Y^{(\alpha)}_{m+1} - Y^{(\alpha)}_m\bigr) \Bigr) = C(\alpha,H)\int_n^{n+1}\int_m^{m+1} \frac{e^{-\frac{\alpha(1-H)(u-v)}{H}}}{\bigl|1-e^{-\frac{\alpha(u-v)}{H}}\bigr|^{2-2H}}\, du\, dv. \tag{3.33} \]

To prove the stationarity, we need to show that for any h > 0 we have

\[ E\Bigl( \bigl(Y^{(\alpha)}_{n+1+h} - Y^{(\alpha)}_{n+h}\bigr)\bigl(Y^{(\alpha)}_{m+1+h} - Y^{(\alpha)}_{m+h}\bigr) \Bigr) = E\Bigl( \bigl(Y^{(\alpha)}_{n+1} - Y^{(\alpha)}_n\bigr)\bigl(Y^{(\alpha)}_{m+1} - Y^{(\alpha)}_m\bigr) \Bigr). \]

In (3.33) we can make the substitutions s := u − h and r := v − h:

\[ E\Bigl( \bigl(Y^{(\alpha)}_{n+1+h} - Y^{(\alpha)}_{n+h}\bigr)\bigl(Y^{(\alpha)}_{m+1+h} - Y^{(\alpha)}_{m+h}\bigr) \Bigr) = C(\alpha,H)\int_{n+h}^{n+1+h}\int_{m+h}^{m+1+h} \frac{e^{-\frac{\alpha(1-H)(u-v)}{H}}}{\bigl|1-e^{-\frac{\alpha(u-v)}{H}}\bigr|^{2-2H}}\, du\, dv \]
\[ = C(\alpha,H)\int_n^{n+1}\int_m^{m+1} \frac{e^{-\frac{\alpha(1-H)(s+h-(r+h))}{H}}}{\bigl|1-e^{-\frac{\alpha(s+h-(r+h))}{H}}\bigr|^{2-2H}}\, ds\, dr = C(\alpha,H)\int_n^{n+1}\int_m^{m+1} \frac{e^{-\frac{\alpha(1-H)(s-r)}{H}}}{\bigl|1-e^{-\frac{\alpha(s-r)}{H}}\bigr|^{2-2H}}\, ds\, dr \]
\[ = E\Bigl( \bigl(Y^{(\alpha)}_{n+1} - Y^{(\alpha)}_n\bigr)\bigl(Y^{(\alpha)}_{m+1} - Y^{(\alpha)}_m\bigr) \Bigr). \]

Hence the sequence IY is stationary according to Theorem 1.3.
Proposition 3.22 ([29], Cor. 3.7). In the case H ∈ (1/2, 1), the increment process IY defined in Proposition 3.21 is short-range dependent.

Proof. The idea of the proof is to apply Proposition 3.13. We have to show that the condition in Definition 1.18,

\[ \sum_{n=1}^{\infty} E(\mathrm{I}Y_1\,\mathrm{I}Y_n) = \sum_{n=1}^{\infty} E\Bigl( Y^{(\alpha)}_1\bigl(Y^{(\alpha)}_{n+1} - Y^{(\alpha)}_n\bigr) \Bigr) < \infty, \]

holds. We start by considering

\[ E\Bigl( Y^{(\alpha)}_1\bigl(Y^{(\alpha)}_{n+1} - Y^{(\alpha)}_n\bigr) \Bigr) = \int_n^{n+1}\int_0^1 r_{\alpha,H}(u,v)\, dv\, du = C(\alpha,H)\int_n^{n+1}\int_0^1 \frac{e^{-\frac{\alpha(1-H)(u-v)}{H}}}{\bigl|1-e^{-\frac{\alpha(u-v)}{H}}\bigr|^{2-2H}}\, dv\, du. \]

Changing the variable u to w + n, we infer that

\[ E\Bigl( Y^{(\alpha)}_1\bigl(Y^{(\alpha)}_{n+1} - Y^{(\alpha)}_n\bigr) \Bigr) = C(\alpha,H)\, e^{-\frac{\alpha(1-H)n}{H}}\int_0^1\int_0^1 \frac{e^{-\frac{\alpha(1-H)(w-v)}{H}}}{\bigl|1-e^{-\frac{\alpha n}{H}} e^{-\frac{\alpha(w-v)}{H}}\bigr|^{2-2H}}\, dw\, dv. \]
We are only interested in large values of n, and therefore we need to evaluate only the limit. To exchange the order of the integral and the limit we use the Extended Monotone Convergence Theorem [1, p. 47]. Let

\[ g_n(w,v) = \frac{e^{-\frac{\alpha(1-H)(w-v)}{H}}}{\bigl|1-e^{-\frac{\alpha n}{H}} e^{-\frac{\alpha(w-v)}{H}}\bigr|^{2-2H}}. \]

If the sequence of functions g_n is increasing with respect to n and we find an integrable minorant, or if it is decreasing with respect to n and we find an integrable majorant, then we are allowed to exchange the order of the integration and the limit. We consider two separate situations: w > v and v ≄ w.

‱ Let 1 ≄ w > v ≄ 0 and n ≄ 2. Then

\[ 1 - e^{-\frac{\alpha n}{H}} e^{-\frac{\alpha(w-v)}{H}} \]

is strictly positive for any n, and therefore it has no poles and is continuous on the bounded region [0,1] × [0,1]. It is also increasing with respect to n, and therefore it has the minorant

\[ 1 - e^{-\frac{\alpha n}{H}} e^{-\frac{\alpha(w-v)}{H}} > 1 - e^{-\frac{2\alpha}{H}} e^{-\frac{\alpha(w-v)}{H}}. \]

Consequently the sequence g_n(w,v) is decreasing, and to find its majorant we approximate

\[ g_n(w,v) < \frac{e^{-\frac{\alpha(1-H)(w-v)}{H}}}{\bigl(1-e^{-\frac{2\alpha}{H}} e^{-\frac{\alpha(w-v)}{H}}\bigr)^{2-2H}} = \frac{1}{\bigl(e^{\frac{\alpha(w-v)}{2H}} - e^{-\frac{\alpha(w-v)}{2H}-\frac{2\alpha}{H}}\bigr)^{2-2H}}, \]

which is continuous on the bounded region [0,1] × [0,1] with no poles and is therefore integrable, since w − v < 2. Hence we have an integrable majorant, and this allows us to exchange the order of the operations by the Extended Monotone Convergence Theorem.
⹠Let 0 †w †v †1 and n ℠2. Since we are only interested in large values of n, weneed only to evaluate the sequence when n ℠2. The sequence of functions
gn(w, v) = eα(1âH)(vâw)
HâŁâŁâŁ1â eâαnH eα(vâw)H
âŁâŁâŁ2â2H
= eα(1âH)(vâw)
H(1â eâαnH e
α(vâw)H
)2â2H
since αnâ α(w â v) > 0. And it is again decreasing with similar reasoning as inthe case w > v. Also for any n â„ 2 we may approximate
gn(w, v) < eα(1âH)(vâw)
H(1â eâ 2α
H eα(vâw)H
)2â2H
and also in this case a majorant which is continuous on the bounded area [0, 1]Ă[0, 1]with no poles, therefore it is integrable.
Consequently we always have an integrable majorant for the sequence of functions, and we can exchange the order of the integration and the limit. We obtain

\[ \lim_{n\to\infty}\int_0^1\int_0^1 \frac{e^{-\frac{\alpha(1-H)(w-v)}{H}}}{\bigl|1-e^{-\frac{\alpha n}{H}} e^{-\frac{\alpha(w-v)}{H}}\bigr|^{2-2H}}\, dw\, dv = \int_0^1\int_0^1 e^{-\frac{\alpha(1-H)(w-v)}{H}}\, dw\, dv = \int_0^1 e^{-\frac{\alpha(1-H)w}{H}}\, dw \int_0^1 e^{\frac{\alpha(1-H)v}{H}}\, dv = D < \infty, \]

where D is a constant. Hence we infer that

\[ E\Bigl( Y^{(\alpha)}_1\bigl(Y^{(\alpha)}_{n+1} - Y^{(\alpha)}_n\bigr) \Bigr) < D\, C(\alpha,H)\, e^{-\frac{\alpha(1-H)n}{H}}. \]

Consequently, as n → ∞, we have

\[ E\Bigl( Y^{(\alpha)}_1\bigl(Y^{(\alpha)}_{n+1} - Y^{(\alpha)}_n\bigr) \Bigr) = O\bigl(e^{-\frac{\alpha(1-H)n}{H}}\bigr). \]
Lastly, we consider the sum

\[ \sum_{n=1}^{\infty} E(\mathrm{I}Y_1\,\mathrm{I}Y_n) = \sum_{n=1}^{\infty} E\Bigl( Y^{(\alpha)}_1\bigl(Y^{(\alpha)}_{n+1} - Y^{(\alpha)}_n\bigr) \Bigr) = \lim_{N\to\infty}\sum_{n=1}^{N} E\Bigl( Y^{(\alpha)}_1\bigl(Y^{(\alpha)}_{n+1} - Y^{(\alpha)}_n\bigr) \Bigr) = \lim_{N\to\infty} E\bigl(Y^{(\alpha)}_1 Y^{(\alpha)}_{N+1}\bigr) - E\bigl(\bigl(Y^{(\alpha)}_1\bigr)^2\bigr). \]
By Proposition 3.18 and (3.28) from the proof of Proposition 3.19, we obtain

\[ \lim_{N\to\infty} E\bigl(Y^{(\alpha)}_1 Y^{(\alpha)}_{N+1}\bigr) - E\bigl(\bigl(Y^{(\alpha)}_1\bigr)^2\bigr) = \int_0^{\infty} k_{\alpha,H}(x)\, dx + \int_0^1 k_{\alpha,H}(x)\, dx - \int_0^1 x\,k_{\alpha,H}(x)\, dx - 2\int_0^1 k_{\alpha,H}(x)\, dx + 2\int_0^1 x\,k_{\alpha,H}(x)\, dx \]
\[ = \int_0^{\infty} k_{\alpha,H}(x)\, dx - \int_0^1 k_{\alpha,H}(x)\, dx + \int_0^1 x\,k_{\alpha,H}(x)\, dx < \infty, \]

due to equations (3.29) and (3.30). Applying the previous equations, we conclude

\[ \sum_{n=1}^{\infty} E(\mathrm{I}Y_1\,\mathrm{I}Y_n) = \lim_{N\to\infty} E\bigl(Y^{(\alpha)}_1 Y^{(\alpha)}_{N+1}\bigr) - E\bigl(\bigl(Y^{(\alpha)}_1\bigr)^2\bigr) < \infty, \]

thereby completing the proof.
3.3 Conclusion of the stationarity of the fOU processes

The following processes are stationary:

‱ {U^(Z,α)_t : t ∈ ℝ}, the fractional Ornstein–Uhlenbeck process of the first kind, fOU(1) (follows from Definition 1.42);
‱ {X^(D,α)_t : t ∈ ℝ}, the Doob transformation of fBm, fOU (follows from Proposition 2.1);
‱ {U^(D,γ)_t : t ∈ ℝ}, the fractional Ornstein–Uhlenbeck process of the second kind, fOU(2) (follows from Proposition 3.8);
‱ {IY_n : n = 0, 1, 
}, the increment process of {Y^(α)_t : t ∈ ℝ} (follows from Proposition 3.21).

The following processes have stationary increments:

‱ {Z_t : t ≄ 0}, fractional Brownian motion (follows from Definition 1.35);
‱ {Y^(α)_t : t ∈ ℝ} (follows from Proposition 3.8; however, it is not itself stationary);
‱ {Y^(1)_t : t ∈ ℝ} (follows from Proposition 3.8).
3.4 fOU(2) is short-range dependent for H > 1/2

We have already calculated the long- or short-range dependence properties for all our stationary processes except fOU(2). We prove that fOU(2) is short-range dependent. In this section we consider the rate of decrease of the covariance of U^(D,γ), where γ > 0, and after that deduce that the process is short-range dependent.
Proposition 3.23 ([29], Prop. 3.11). In the case H ∈ (1/2, 1), the covariance of the process {U^(D,γ)_t : t ∈ ℝ}, fOU(2), decreases exponentially:

\[ E\bigl(U^{(D,\gamma)}_t U^{(D,\gamma)}_s\bigr) = O\bigl(e^{-\min\{\gamma,\,\frac{1-H}{H}\}\,t}\bigr), \quad \text{as } t\to\infty. \tag{3.34} \]
Proof. When we calculate the asymptotic covariance of U^(D,γ), we may, without loss of generality, set s = 0. If T > 0, then

\[ E\bigl(U^{(D,\gamma)}_t U^{(D,\gamma)}_0\bigr) = (2H-1)H^{2H-1} e^{-\gamma t}\int_{-\infty}^{t}\int_{-\infty}^{0} \frac{e^{(\gamma-1+\frac{1}{H})(u+v)}}{\bigl|e^{\frac{u}{H}} - e^{\frac{v}{H}}\bigr|^{2-2H}}\, dv\, du \]
\[ = (2H-1)H^{2H-1} e^{-\gamma t}\left( \int_{-\infty}^{T}\int_{-\infty}^{0} \frac{e^{(\gamma-1+\frac{1}{H})(u+v)}}{\bigl|e^{\frac{u}{H}} - e^{\frac{v}{H}}\bigr|^{2-2H}}\, dv\, du + \int_{T}^{t}\int_{-\infty}^{0} \frac{e^{(\gamma-1+\frac{1}{H})(u+v)}}{\bigl|e^{\frac{u}{H}} - e^{\frac{v}{H}}\bigr|^{2-2H}}\, dv\, du \right) = (2H-1)H^{2H-1} e^{-\gamma t}\bigl(D_1(T) + D_2(t)\bigr). \]

Since D_1(T) does not depend on t, we obtain

\[ \lim_{t\to\infty} (2H-1)H^{2H-1} e^{-\gamma t} D_1(T) = 0. \]

Thus

\[ (2H-1)H^{2H-1} e^{-\gamma t} D_1(T) = O(e^{-\gamma t}), \quad \text{as } t\to\infty. \tag{3.35} \]

For the term D_2(t), we factorise the denominator and deduce

\[ D_2(t) = \int_T^t\int_{-\infty}^0 \frac{e^{(\gamma-1+\frac{1}{H})(u+v)}}{e^{2u(\frac{1}{H}-1)}\bigl|1-e^{\frac{v-u}{H}}\bigr|^{2-2H}}\, dv\, du = \int_T^t\int_{-\infty}^0 \frac{e^{(\gamma+1-\frac{1}{H})u+(\gamma-1+\frac{1}{H})v}}{\bigl|1-e^{\frac{v-u}{H}}\bigr|^{2-2H}}\, dv\, du. \]

In the previous integral the pair (u,v) belongs to the set (T,t) × (−∞,0), where the difference v − u is always less than or equal to −T, which implies

\[ 1 \ge \bigl(1-e^{\frac{v-u}{H}}\bigr)^{2(1-H)} \ge \bigl(1-e^{-\frac{T}{H}}\bigr)^{2(1-H)}. \]
We deduce further

∫_{T}^{t} ∫_{−∞}^{0} e^{(γ+1−1/H)u + (γ−1+1/H)v} / (1 − e^{(v−u)/H})^{2−2H} dv du
≀ ∫_{T}^{t} ∫_{−∞}^{0} e^{(γ+1−1/H)u + (γ−1+1/H)v} / (1 − e^{−T/H})^{2−2H} dv du
= (1 − e^{−T/H})^{−2+2H} ∫_{T}^{t} e^{(γ+1−1/H)u} du ∫_{−∞}^{0} e^{(γ−1+1/H)v} dv
= (1 − e^{−T/H})^{−2+2H} ( (e^{(γ+1−1/H)t} − e^{(γ+1−1/H)T}) / (γ + 1 − 1/H) ) ( 1 / (γ − 1 + 1/H) ).
Approximating, we obtain

(2H−1)H^{2H−1} e^{−γt} D_2(t) ≀ (2H−1)H^{2H−1} e^{−γt} (1 − e^{−T/H})^{−2+2H} ( (e^{(γ+1−1/H)t} − e^{(γ+1−1/H)T}) / (γ + 1 − 1/H) ) ( 1 / (γ − 1 + 1/H) )
≀ C e^{−γt + (γ+1−1/H)t} + M_2 e^{−γt}
= C e^{−((1−H)/H) t} + M_2 e^{−γt},

and therefore

(2H−1)H^{2H−1} e^{−γt} D_2(t) = O(max(e^{−((1−H)/H) t}, e^{−γt})), as t → ∞.  (3.36)
Hence, combining (3.35) and (3.36), we deduce

E(U^{(D,γ)}_t U^{(D,γ)}_s) = O(max(e^{−γt}, e^{−((1−H)/H) t}))
= O(e^{max(−γt, −((1−H)/H)t)})
= O(e^{−min(γ, (1−H)/H) t}), as t → ∞,

which is the assertion.
Theorem 3.24 ([29], Prop. 3.11). The stationary process U^{(D,γ)} is short-range dependent.

Proof. By Proposition 3.23 the leading term of the covariance of the stationary process U^{(D,γ)} is e^{−min(γ, (1−H)/H) t}. With that information and Definition 1.18 we prove that U^{(D,γ)} is short-range dependent as follows:

∑_{n=0}^{∞} ρ_{U^{(D,γ)}}(n) = ∑_{n=0}^{∞} E(U^{(D,γ)}_n U^{(D,γ)}_0) ≀ C ∑_{n=0}^{∞} e^{−min(γ, (1−H)/H) n},

where the sum converges, since (1−H)/H > 0 and γ > 0.
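The convergence of the bounding series can be checked numerically. The sketch below (not from the thesis; the parameter values are illustrative) compares the partial sums of exp(−min(γ, (1−H)/H) n) with the geometric-series closed form, confirming that the short-range dependence criterion of Definition 1.18 is satisfied:

```python
import math

# Numerical sanity check (illustrative parameters): the covariance bound
# C * exp(-min(gamma, (1-H)/H) * n) is summable, which is exactly the
# short-range dependence criterion.
gamma, H = 0.7, 0.75
r = min(gamma, (1 - H) / H)          # decay rate; positive since gamma > 0, H < 1
partial = sum(math.exp(-r * n) for n in range(2000))
closed = 1.0 / (1.0 - math.exp(-r))  # geometric series with ratio exp(-r) < 1
assert abs(partial - closed) < 1e-9
```

The same computation works for any γ > 0 and H ∈ (1/2, 1), since the ratio e^{−r} stays strictly below one.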
4 Weak convergence
4.1 Weak convergence
According to Proposition 3.19, the growth of the variance of Y^{(α)}_t is asymptotically linear as t → ∞. Recall that for the standard Brownian motion B_t : t ≥ 0 the variance of B_t equals t. This similarity of the variances can be taken further. We prove in this section that Y^{(α)}, if the time parameter is scaled properly, converges weakly to a standard Brownian motion.

4.1.1 Weak convergence and tightness

An example of convergence of finite dimensional distributions helps us to understand weak convergence. The following example is taken from Billingsley [5].
Example 4.1. Let Ω = [0, 1], let B be the collection of the Borel sets in [0, 1] and P the Lebesgue measure on B. Then the triplet (Ω, B, P) is a probability space. If we define

X_t(ω) = 0

and

Y_t(ω) = 0, if t ≠ ω;  Y_t(ω) = 1, if t = ω,

when 0 ≀ t ≀ 1 and ω ∈ Ω, then for all 0 ≀ t ≀ 1

P(X_t = 0) = P(Y_t = 0) = 1,

and the stochastic processes X_t : 0 ≀ t ≀ 1 and Y_t : 0 ≀ t ≀ 1 have the same finite dimensional distributions. This means that

P(X_{t_1} ≀ x_1, 
, X_{t_k} ≀ x_k) = P(Y_{t_1} ≀ x_1, 
, Y_{t_k} ≀ x_k)

for any choices of x_j and t_i. Note that although the finite dimensional distributions are the same, the processes behave very differently. In fact,

sup_{0≀t≀1} X_t(ω) = 0 and sup_{0≀t≀1} Y_t(ω) = 1

for all ω.
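Example 4.1 can be mimicked in a few lines. The sketch below (not from the thesis) draws one sample point ω and shows that X and Y agree on any fixed finite time grid (the event that ω lies on the grid has probability zero), while the suprema of the two paths differ:

```python
import random

# Illustrative sketch of Example 4.1.
random.seed(1)
omega = random.random()   # a sample point of Omega = [0, 1] under Lebesgue measure

def X(t):
    return 0.0            # X_t(omega) = 0 for every t

def Y(t):
    return 1.0 if t == omega else 0.0   # Y_t(omega) = 1 only at t = omega

# On a fixed finite grid the sample vectors agree almost surely, because
# P(omega hits a given grid point) = 0.
grid = [k / 10 for k in range(11)]
assert [X(t) for t in grid] == [Y(t) for t in grid] == [0.0] * 11

# The path functionals nevertheless differ for every omega:
assert max(X(t) for t in grid + [omega]) == 0.0
assert max(Y(t) for t in grid + [omega]) == 1.0
```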
Hence the convergence of finite dimensional distributions does not imply the convergence of the distribution of every functional of the process. The theory of weak convergence of probability measures in metric spaces gives conditions which guarantee the convergence.

The previous example is a good motivation to study these problems. We now briefly present some basic elements of the theory of weak convergence, following [5, p. 5–6, 9–10].
Let
‱ S be a separable complete metric space,
‱ C(S) be the class of bounded, continuous real-valued functions on S,
‱ S be the σ-algebra of Borel sets of S,
‱ P be a probability measure on S.
Definition 4.2. Let (P_n)_{n≥1} be a sequence of probability measures on (S, S). Then we say that P_n converges weakly to P, denoted by P_n →w P, if

lim_{n→∞} ∫_S f dP_n = ∫_S f dP

for all functions f in C(S).
The classical setting of weak convergence is S = ℝ and S = B, i.e. the Borel sets on the line ℝ. In this situation the probability measure P is completely determined by its distribution function F, defined by F(x) = P((−∞, x]), since F is continuous from the right. Suppose that (P_n) is a sequence of probability measures on (ℝ, B) with distribution functions F_n. Then we have
Theorem 4.3 ([5], Th. 2.3). The following statements are equivalent:
(i) P_n →w P;
(ii) F_n(x) → F(x) for all continuity points x of F.
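The restriction to continuity points in (ii) is essential. A minimal sketch (not from the thesis): the point masses P_n = ÎŽ_{1/n} converge weakly to P = ÎŽ_0, and F_n(x) → F(x) holds at every x ≠ 0, while at the discontinuity x = 0 the values F_n(0) = 0 never approach F(0) = 1:

```python
# Distribution functions of the point mass at 1/n and at 0.
def F_n(n, x):
    return 1.0 if x >= 1.0 / n else 0.0

def F(x):
    return 1.0 if x >= 0.0 else 0.0

for x in (-1.0, -0.2, 0.3, 2.0):          # continuity points of F
    assert F_n(10**6, x) == F(x)           # convergence already holds for large n
assert F_n(10**6, 0.0) == 0.0 and F(0.0) == 1.0   # fails only at the jump x = 0
```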
In a more general setting, we have a probability space (Ω, B, P) and a real-valued random variable X : Ω → ℝ. Let P_X denote the distribution of X, i.e.

P_X(A) = P X^{−1}(A) = P(ω : X(ω) ∈ A) = P(X ∈ A),

where A ∈ B. Let (X_n)_{n≥1} be a sequence of random variables defined on probability spaces (Ω_n, B_n, P_n) with distributions (P_X)_n = P_n X_n^{−1}, in other words (P_X)_n((−∞, x]) = P_n(X_n ≀ x).

Definition 4.4. If (P_X)_n →w P_X, then we say that the sequence (X_n)_{n≥1} converges in distribution to X and we write X_n →d X.
Definition 4.5. A family Π of probability measures on S is said to be relatively compact if each sequence (P_n)_{n≥1} of elements of the family Π contains a subsequence (P_{n_i})_{i≥1} converging weakly to some probability measure P.

The Prokhorov theorem, see for example [5], combines these basic elements of weak convergence and probability theory. We recall that a family Π of probability measures is tight if for every Δ > 0 there exists a compact set K_Δ such that P(K_Δ) > 1 − Δ for every P in Π; see, for example, Billingsley [6, pages 8 and 58–59].

Theorem 4.6 ([5], Th. 4.1). A family Π of probability measures is relatively compact if and only if it is tight.

From Theorem 4.6, we obtain

Corollary 4.7. If a family of probability measures (P_n)_{n≥1} is tight, then it is relatively compact and it has a subsequence converging weakly to some probability measure P.
The same subject matter was also studied by Lamperti, who proved an important result [34]. We use our own terminology in this theory, but first define the space Lip_α. Let Lip_α be the space of all real-valued functions t ↊ X_t defined for t ∈ [0, 1], with X_0 = 0 and such that

‖X‖_α = sup_{t_1,t_2 ∈ [0,1]} |X_{t_2} − X_{t_1}| / |t_2 − t_1|^α + max_{t_2 ∈ [0,1]} |X_{t_2}| < ∞.
Lamperti presented his continuity theorems, which we need when working with separable stochastic or Gaussian processes. Therefore we recall one definition of separability; see, for example, Cramér and Leadbetter [14].

Definition 4.8 ([14], p. 48). A stochastic process X_t : t ∈ [0, 1] is said to be separable if there is a countable subset S of [0, 1] such that for any open interval I ⊂ [0, 1], with probability one,

sup_{t ∈ I∩S} X_t = sup_{t ∈ I} X_t and inf_{t ∈ I∩S} X_t = inf_{t ∈ I} X_t.
Theorem 4.9 ([34], p. 432). Let X^{(n)}_t : t ∈ [0, 1], with X^{(n)}_0 = 0, be a sequence of separable stochastic processes satisfying the Kolmogorov criterion (Th. 1.10) with α, ÎČ and M independent of n. Suppose also that the finite dimensional distributions of X^{(n)}_t : t ∈ [0, 1] converge as n → ∞. Then there exists a process X_t : t ∈ [0, 1] whose finite dimensional distributions are these limits, whose path functions belong a.s. to Lip_Îł for every Îł < ÎČ/α, and such that

lim_{n→∞} ∫_S f dP_n = ∫_S f dP

for every functional f which is continuous at almost all points of Lip_Îł for some Îł < ÎČ/α (with respect to the measure induced by the process X_t : t ∈ [0, 1]).

In the case of Gaussian processes we have a more specific theorem (in our terminology).
Theorem 4.10 ([34], Cor. 2). Let X^{(n)}_t : t ∈ [0, 1], n = 1, 2, 
, be a sequence of real separable Gaussian stochastic processes such that

E(X^{(n)}_t) = ”_n(t),
E((X^{(n)}_s − ”_n(s))(X^{(n)}_t − ”_n(t))) = ρ_n(s, t).

We assume also that

lim_{n→∞} ”_n(t) = ”(t) and lim_{n→∞} ρ_n(s, t) = ρ(s, t).

Suppose also that there exist constants Ο ∈ [0, 2] and A, B < ∞ such that for t, t+Δt ∈ [0, 1]

|”_n(t+Δt) − ”_n(t)| ≀ A|Δt|^{Ο/2} and
|ρ_n(t, t) − 2ρ_n(t, t+Δt) + ρ_n(t+Δt, t+Δt)| ≀ B|Δt|^Ο.

Then there is a separable Gaussian process X_t : t ∈ [0, 1] with the mean function ”(t) and the covariance ρ(s, t), whose paths belong a.s. to Lip_γ for every γ < Ο/2, such that

lim_{n→∞} ∫_S f dP_n = ∫_S f dP

holds for every functional f which is continuous a.s. in the topology of Lip_γ (with respect to the measure induced by the process X_t : t ∈ [0, 1]) for some γ < Ο/2.
In the next section we will consider the weak convergence of

Y^{(α)}_t = ∫_0^t e^{−αs} dZ_{τ_s},

where τ_t = (H/α) e^{αt/H}. We recall that the sample paths of Y^{(α)} are locally Hölder continuous of any order ÎČ < H by Proposition 3.10, and therefore the process Y^{(α)} has a continuous modification. This guarantees that the process is also separable and we may apply Theorem 4.10. In this theorem the condition for tightness is

|ρ_n(t, t) − 2ρ_n(t, t+Δt) + ρ_n(t+Δt, t+Δt)| ≀ B|Δt|^Ο.

We rewrite the left-hand side of the previous inequality:

ρ_n(t, t) − 2ρ_n(t, t+Δt) + ρ_n(t+Δt, t+Δt)
= E(X^{(n)}_t − ”_n(t))^2 − 2E(X^{(n)}_t − ”_n(t))(X^{(n)}_{t+Δt} − ”_n(t+Δt)) + E(X^{(n)}_{t+Δt} − ”_n(t+Δt))^2
= E( (X^{(n)}_t − ”_n(t)) − (X^{(n)}_{t+Δt} − ”_n(t+Δt)) )^2.

In our case, where the processes are Gaussian with zero mean, we obtain

|ρ_n(t, t) − 2ρ_n(t, t+Δt) + ρ_n(t+Δt, t+Δt)| = E(X^{(n)}_t − X^{(n)}_{t+Δt})^2.

Hence we may conclude that a sequence of zero mean, locally Hölder continuous Gaussian processes is tight if we can prove that the variance of the increments of the processes is bounded by B|Δt|^Ο, where Δt is the increment of the time parameter and Ο ∈ [0, 2].
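The second-difference quantity is easy to evaluate in closed form for fBm itself. A sketch (not from the thesis): with the fBm covariance ρ(s, t) = (t^{2H} + s^{2H} − |t − s|^{2H})/2 from Definition 1.27, the second difference collapses exactly to |Δt|^{2H}, so the tightness bound holds with B = 1 and Ο = 2H ∈ [0, 2]:

```python
# Second difference of the fBm covariance equals |dt|^{2H} exactly.
def rho(s, t, H):
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

def second_difference(t, dt, H):
    return rho(t, t, H) - 2 * rho(t, t + dt, H) + rho(t + dt, t + dt, H)

H = 0.75
for t in (0.0, 0.3, 0.9):
    for dt in (0.01, 0.1):
        assert abs(second_difference(t, dt, H) - dt ** (2 * H)) < 1e-12
```

Expanding the three covariance terms, the t^{2H} and (t+Δt)^{2H} contributions cancel and only |Δt|^{2H} survives, which the assertions confirm numerically.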
4.2 Weak convergence of Y^{(α)}

In this section we prove that the process Z^{(a,α)}_t := (1/√a) Y^{(α)}_{at}, t ≥ 0, converges weakly to a scaled Brownian motion. This holds if we show that the finite dimensional distributions of this process converge weakly as a → ∞ and that the corresponding sequence of probability measures is tight. This is the content of the next result, which we state for arbitrary α > 0.

Proposition 4.11 ([29], Prop. 3.12). Let, for t ≥ 0,

Z^{(a,α)}_t := (1/√a) Y^{(α)}_{at},

where a > 0. If B = B_t : t ≥ 0 is a standard Brownian motion starting from 0, then

Z^{(a,α)}_t : t ≥ 0 →w σB_t : t ≥ 0, as a → ∞,

where →w stands for weak convergence in the space of continuous functions and σ(α, H) = σ is a non-random quantity depending only on α > 0 and H ∈ (1/2, 1).
Proof. First we prove that the finite dimensional distributions of Z^{(a,α)}_t : t ≥ 0 converge to the finite dimensional distributions of σB_t : t ≥ 0. Since both processes Z^{(a,α)} and σB are Gaussian processes of zero mean, the covariance functions determine their distributions uniquely. Hence it suffices to verify the convergence of the covariance functions. Applying Proposition 3.18 we infer, for t > s,

E(Z^{(a,α)}_t Z^{(a,α)}_s) = (1/a) E(Y^{(α)}_{at} Y^{(α)}_{as})
= (1/a) [ ∫_0^{at} (at − x) k_{α,H}(x) dx + ∫_0^{as} (as − x) k_{α,H}(x) dx − ∫_0^{a(t−s)} (at − as − x) k_{α,H}(x) dx ],

where the kernel

k_{α,H}(x) = C(α, H) e^{−α(1−H)x/H} / |1 − e^{−αx/H}|^{2−2H} = H(2H−1) (α/H)^{2−2H} e^{−α(1−H)x/H} / |1 − e^{−αx/H}|^{2−2H}

is as in Proposition 3.13. Letting a → ∞ we obtain

lim_{a→∞} E(Z^{(a,α)}_t Z^{(a,α)}_s) = lim_{a→∞} (1/a) [ ∫_0^{at} (at − x) k_{α,H}(x) dx + ∫_0^{as} (as − x) k_{α,H}(x) dx − ∫_0^{a(t−s)} (at − as − x) k_{α,H}(x) dx ].
Reorganizing the terms, we infer

lim_{a→∞} E(Z^{(a,α)}_t Z^{(a,α)}_s)
= lim_{a→∞} { (1/a) [ a ∫_0^{at} t k_{α,H}(x) dx + a ∫_0^{as} s k_{α,H}(x) dx − a ∫_0^{a(t−s)} t k_{α,H}(x) dx + a ∫_0^{a(t−s)} s k_{α,H}(x) dx ]   (4.1)
− (1/a) ( ∫_0^{at} x k_{α,H}(x) dx + ∫_0^{as} x k_{α,H}(x) dx − ∫_0^{a(t−s)} x k_{α,H}(x) dx ) }.

Since we have proved in (3.30) that

∫_0^∞ x k_{α,H}(x) dx < ∞,

and t − s > 0, the expression on the last line of (4.1) tends to zero as a → ∞. Hence we obtain

lim_{a→∞} E(Z^{(a,α)}_t Z^{(a,α)}_s) = 2 ∫_0^∞ s k_{α,H}(x) dx.

After changing the variables in the kernel we conclude, for t > s,

lim_{a→∞} E(Z^{(a,α)}_t Z^{(a,α)}_s) = 2H(2H−1) (α/H)^{1−2H} s ∫_0^1 u^{−H} (1 − u)^{2H−2} du
= 2H(2H−1) (α/H)^{1−2H} Beta(1−H, 2H−1) s
= 2H(2H−1) (α/H)^{1−2H} Beta(1−H, 2H−1) min(s, t)
= 2H(2H−1) (α/H)^{1−2H} Beta(1−H, 2H−1) E(B_t B_s),

since E(B_t B_s) = min(t, s). We define

Îș(α, H) := 2H(2H−1) (α/H)^{1−2H} Beta(1−H, 2H−1) = 2C(α, H) (H/α) Beta(1−H, 2H−1).

We have now proved that the finite dimensional distributions of Z^{(a,α)} converge to the finite dimensional distributions of σB with

σ = σ(α, H) = √(Îș(α, H)).
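The Beta integral behind Îș can be verified numerically. The sketch below (not from the thesis; H = 3/4 is an illustrative choice) substitutes u = sinÂČΞ to soften the endpoint singularities of ∫_0^1 u^{−H}(1−u)^{2H−2} du, compares the midpoint-rule value with Beta(1−H, 2H−1) computed via Gamma functions, and evaluates Îș(α, H) from its closed form:

```python
import math

H = 0.75

def beta(a, b):
    # B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def kappa(alpha, H):
    # kappa(alpha, H) = 2 H (2H-1) (alpha/H)^{1-2H} Beta(1-H, 2H-1)
    return 2 * H * (2 * H - 1) * (alpha / H) ** (1 - 2 * H) * beta(1 - H, 2 * H - 1)

# int_0^1 u^{-H}(1-u)^{2H-2} du with u = sin^2(theta): the integrand becomes
# 2 sin^{1-2H}(theta) cos^{4H-3}(theta) on (0, pi/2), milder at the endpoints.
n = 400_000
h = (math.pi / 2) / n
integral = 2 * h * sum(math.sin((k + 0.5) * h) ** (1 - 2 * H)
                       * math.cos((k + 0.5) * h) ** (4 * H - 3) for k in range(n))
assert abs(integral - beta(1 - H, 2 * H - 1)) < 1e-2
```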
We still have to prove the tightness. It suffices to verify (see [34] or Theorem 4.10) that for every α > 0 and H ∈ (1/2, 1) there exists a constant D(α, H), not depending on a, such that for t > s

∆ := E( (Z^{(a,α)}_t − Z^{(a,α)}_s)^2 ) ≀ D(α, H)(t − s).

Applying Proposition 3.17 we obtain

∆ = (1/a) E( (Y^{(α)}_{at} − Y^{(α)}_{as})^2 )
= (2/a) ∫_0^{a(t−s)} (a(t−s) − x) k_{α,H}(x) dx
= 2 ∫_0^{a(t−s)} (t−s) k_{α,H}(x) dx − (2/a) ∫_0^{a(t−s)} x k_{α,H}(x) dx
≀ 2(t−s) ∫_0^∞ k_{α,H}(x) dx
= D(α, H)(t−s),

thereby completing the proof.
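The tightness constant D(α, H) = 2∫_0^∞ k_{α,H}(x) dx is the same quantity as Îș(α, H): substituting u = e^{−αx/H} turns the kernel integral into (H/α) Beta(1−H, 2H−1) times C(α, H). The sketch below (not from the thesis; α = 1.3, H = 0.7 are illustrative values) checks this identity by direct quadrature of the kernel:

```python
import math

alpha, H = 1.3, 0.7
C = H * (2 * H - 1) * (alpha / H) ** (2 - 2 * H)   # C(alpha, H)

def k(x):
    # kernel k_{alpha,H}(x) from Proposition 3.13
    return (C * math.exp(-alpha * (1 - H) * x / H)
            / abs(1 - math.exp(-alpha * x / H)) ** (2 - 2 * H))

# Midpoint rule on (0, L): the kernel is integrable at 0 and decays
# exponentially, so truncating at L = 60 is harmless.
L, n = 60.0, 100_000
h = L / n
numeric = h * sum(k((j + 0.5) * h) for j in range(n))

beta = math.gamma(1 - H) * math.gamma(2 * H - 1) / math.gamma(H)
kappa = 2 * H * (2 * H - 1) * (alpha / H) ** (1 - 2 * H) * beta
assert abs(2 * numeric - kappa) < 5e-2   # D(alpha, H) agrees with kappa(alpha, H)
```

The agreement ties together the two roles the kernel plays in the proof: it controls the increment variance (tightness) and determines the limiting covariance constant.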
5 Conclusion
5.1 Main results
The main result of the dissertation is that we have represented the Doob transformation of fBm via the solution of a Langevin stochastic differential equation, see Proposition 3.2,

dX^{(D,α)}_t = −α X^{(D,α)}_t dt + dY^{(α)}_t,

and have analysed the driving process

Y^{(α)}_t := ∫_0^t e^{−αs} dZ_{(H/α)e^{αs/H}}.

Moreover,

α^H Y^{(α)}_{t/α} : t ≥ 0 =d Y^{(1)}_t : t ≥ 0

holds, which was proved in Proposition 3.3. Using this connection and the two-sided extension of Y^{(1)}_s we have defined a new family of processes

U^{(D,γ)}_t = e^{−γt} ∫_{−∞}^t e^{γs} dY^{(1)}_s,

where γ > 0, and named it the fractional Ornstein–Uhlenbeck process of the second kind, fOU(2); see Section 3.1. We have also studied many properties of U^{(D,γ)}: for example, it is stationary (Proposition 3.8) and its paths are locally Hölder continuous of any order ÎČ < H (Proposition 3.11).

We have proved that the Doob transformation of fBm X^{(D,α)}_t : t ≥ 0 is short-range dependent for all H ∈ (0, 1) (Theorem 2.3). Theorem 2.5 states that the stationary fractional Ornstein–Uhlenbeck process of the first kind is long-range dependent if H > 1/2 and short-range dependent if H < 1/2. Therefore we have deduced and proved in Corollary 2.6 that these processes do not have the same finite dimensional distributions (Chapter 2). Using the Bochner theorem we found (Theorem 2.8) that the spectral density function of the Doob transformation of fBm is
F′(γ) = (1/2) (H/α)^{2H} [ (α/π) · 1/(γ^2 + α^2) − (α/π) ∑_{n=1}^{∞} binom(2H, n) (−1)^n (n/H − 1) / ( γ^2 + (αn/H − α)^2 ) ].

This function is useful, for example, for prediction.
One quite powerful result, when H ∈ (1/2, 1), is the kernel representation of the covariance function of the increments of Y^{(α)} in Proposition 3.13, namely

E( (Y^{(α)}_{t_2} − Y^{(α)}_{t_1})(Y^{(α)}_{s_2} − Y^{(α)}_{s_1}) ) = C(α, H) ∫_{t_1}^{t_2} ∫_{s_1}^{s_2} e^{−α(1−H)(u−v)/H} / |1 − e^{−α(u−v)/H}|^{2−2H} du dv,

where s_1 < s_2, t_1 < t_2 and C(α, H) := H(2H−1)(α/H)^{2−2H}. We have generated a lot of information on the processes Y^{(α)} and U^{(D,γ)} using that kernel, for example, the covariance, the variance of the increments and their behaviour as t tends to infinity. We would like to highlight a few results on the processes Y^{(α)} and U^{(D,γ)}. The increment process of Y^{(α)} is short-range dependent (Proposition 3.22), and so is the process U^{(D,γ)} (Theorem 3.24). In the case H ∈ (1/2, 1) it holds that fBm and fOU(1) are long-range dependent processes, but fOU(2) is short-range dependent. This offers interesting opportunities to model real life applications with tractable fractional processes.

Finally, we proved the following weak convergence result:

(1/√a) Y^{(α)}_{at} : t ≥ 0 →w σB_t : t ≥ 0,

as a → ∞, where σ(α, H) = σ is a non-random quantity depending only on α and H (Proposition 4.11).
5.2 On some statistical studies of fOU(2)

We briefly present some results on the fractional Ornstein–Uhlenbeck process of the second kind defined by us. Azmoodeh and Morlanes [2] and Azmoodeh and Viitasaari [3] have obtained parameter estimation results for the fOU(2) model. Let U^{(D,γ)} = U^{(D,γ)}_t : t ∈ ℝ be the non-stationary fractional Ornstein–Uhlenbeck process of the second kind. We have defined it in (3.5), but in the publications [2] and [3] it is taken with initial value U^{(D,γ)}_0 = 0 and drift parameter γ > 0. Then

U^{(D,γ)}_t = ∫_0^t e^{−γ(t−s)} dY^{(1)}_s.

We also need the definition of the least squares estimator

γ̃_T = − ∫_0^T U^{(D,γ)}_t ÎŽU^{(D,γ)}_t / ∫_0^T (U^{(D,γ)}_t)^2 dt,   (5.1)

where the integral ∫_0^T U^{(D,γ)}_t ÎŽU^{(D,γ)}_t is a Skorokhod integral; see, for example, Di Nunno, Ăksendal and Proske [15].

Theorem 5.1 ([2], Th. 3.1). The least squares estimator γ̃_T given by (5.1) is weakly consistent, i.e.

γ̃_T → γ

in probability, as T tends to infinity.
Theorem 5.2 ([3], Th. 3.1). Let U^{(D,γ)} be a fractional Ornstein–Uhlenbeck process of the second kind given in Definition 3.6. Then, as T tends to infinity, we have

√T ( (1/T) ∫_0^T (U^{(D,γ)}_t)^2 dt − Κ(γ) ) → N(0, σ^2),

where

Κ(γ) := ( (2H−1)H^{2H} / γ ) Beta((γ−1)H + 1, 2H−1),   (5.2)

and where the variance σ^2 is given by

σ^2 = ( 2 α_H^2 H^{4H−2} / γ^2 ) ∫_{[0,∞]^3} [ e^{−γx − γ|y−z|} e^{(1 − 1/H)(x+y+z)} × (1 − e^{−y/H})^{2H−2} |e^{−x/H} − e^{−z/H}|^{2H−2} ] dz dx dy   (5.3)

and α_H = H(2H−1).
Theorem 5.3 ([3], Th. 3.2). Assume that we observe U^{(D,γ)}_t at the discrete points t_k = kΔ_N, k = 0, 
, N, and let T_N = NΔ_N. Assume that Δ_N → 0, T_N → ∞ and NΔ_N^2 → 0 as N tends to infinity. Set

”_{2,N} = (1/T_N) ∑_{k=1}^N (U^{(D,γ)}_{t_k})^2 Δ_N and γ̂_N := Κ^{−1}(”_{2,N}),

where Κ^{−1} is the inverse of the function Κ given in (5.2). Then γ̂_N is a strongly consistent estimator of the drift parameter γ in the sense that, as N tends to infinity, we have

γ̂_N → γ

almost surely. Moreover, as N tends to infinity, we have

√(T_N) (γ̂_N − γ) →d N(0, σ_γ^2),

where σ_γ^2 = σ^2 / [Κ′(γ)]^2 and σ^2 is given by (5.3).
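The inversion step γ̂_N = Κ^{−1}(”_{2,N}) can be carried out numerically, since Κ in (5.2) is strictly decreasing in γ. The sketch below (not from the thesis) implements Κ via log-Gamma functions and inverts it by bisection; the sample moment mu2 is a made-up stand-in for the observed quantity (1/T_N) ∑_k (U_{t_k})ÂČ Δ_N, chosen here so that the true drift is recovered:

```python
import math

H = 0.75

def beta(a, b):
    # Beta via lgamma, stable for very large first arguments.
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

def Psi(gamma):
    # Psi(gamma) = (2H-1) H^{2H} Beta((gamma-1)H + 1, 2H-1) / gamma, from (5.2)
    return (2 * H - 1) * H ** (2 * H) * beta((gamma - 1) * H + 1, 2 * H - 1) / gamma

def Psi_inverse(mu2, lo=1e-6, hi=1e6, tol=1e-12):
    # Psi decreases from +infinity to 0 on (0, infinity), so bisection works.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Psi(mid) > mu2 else (lo, mid)
    return 0.5 * (lo + hi)

gamma_true = 2.0
mu2 = Psi(gamma_true)   # hypothetical second moment in place of real observations
assert abs(Psi_inverse(mu2) - gamma_true) < 1e-9
```

With real data one would replace mu2 by the discretised second moment of Theorem 5.3; the inversion itself is unchanged.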
A Summary of some properties and definitions
A.1 Gaussian processes: conclusions
We have actually studied five different Gaussian processes. Since some of their properties appear at the beginning of this dissertation, some in the middle and so on, we present a summary of the definitions and some basic properties of these five processes.
fBm  (Def. 1.27)
E(Z_t Z_s) = (1/2)(t^{2H} + s^{2H} − |t − s|^{2H}), t, s ≥ 0  (Def. 1.27)
H-self-similar  (Th. 1.28)
Locally Hölder continuous of any order ÎČ < H  (Th. 1.32)
Short-range dependent, if H < 1/2  (Prop. 1.37)
Long-range dependent, if H > 1/2  (Prop. 1.37)
fOU  (Def. 1.43)
E(X^{(D,α)}_t X^{(D,α)}_s) = (1/2)(H/α)^{2H} ( e^{α(t−s)} + e^{−α(t−s)} − e^{α(t−s)} (1 − e^{−α(t−s)/H})^{2H} )  (Prop. 2.1)
E(X^{(D,α)}_t X^{(D,α)}_0) = O(e^{−αt}), if H < 1/2  (Cor. 2.4)
E(X^{(D,α)}_t X^{(D,α)}_0) = O(e^{−αt(1/H − 1)}), if H ≥ 1/2  (Cor. 2.4)
Short-range dependent  (Th. 2.3)
fOU(1)  (Def. 1.42)
U^{(Z,α)}_t = e^{−αt} ∫_{−∞}^t e^{αs} dZ_s  (Def. 1.42)
E(U^{(Z,α)}_s U^{(Z,α)}_{s+t}) = (1/2) ∑_{n=1}^N α^{−2n} ( ∏_{k=0}^{2n−1} (2H − k) ) t^{2H−2n} + O(t^{2H−2N−2}), as t → ∞  (Prop. 2.2)
Short-range dependent, if H < 1/2  (Th. 2.5)
Long-range dependent, if H > 1/2  (Th. 2.5)
Y  (Def. 3.1)
Y^{(α)}_t = ∫_0^t e^{−αs} dZ_{τ_s} = ∫_0^t e^{−αs} dZ_{(H/α)e^{αs/H}}  (Def. 3.1)
α^H Y^{(α)}_{t/α} : t ≥ 0 =d Y^{(1)}_t : t ≥ 0  (Prop. 3.3)
Locally Hölder continuous of any order ÎČ < H  (Prop. 3.10)
E(Y^{(α)}_t Y^{(α)}_s) = ∫_0^t (t − x) k_{α,H}(x) dx + ∫_0^s (s − x) k_{α,H}(x) dx − ∫_0^{t−s} (t − s − x) k_{α,H}(x) dx, if H ∈ (1/2, 1)  (Prop. 3.18)
fOU(2)  (Def. 3.6)
U^{(D,γ)}_t = e^{−γt} ∫_{−∞}^t e^{(γ−1)s} dZ_{τ^{(1)}_s} = e^{−γt} ∫_{−∞}^t e^{(γ−1)s} dZ_{He^{s/H}}  (Def. 3.6)
Locally Hölder continuous of any order ÎČ < H  (Prop. 3.11)
E(U^{(D,γ)}_t U^{(D,γ)}_s) = O(e^{−min(γ, (1−H)/H) t}), as t → ∞, if H ∈ (1/2, 1)  (Prop. 3.23)
Short-range dependent, if H ∈ (1/2, 1)  (Th. 3.24)
References
1. R. B. Ash. Measure, Integration and Functional Analysis. Academic Press, New York, London, 1970.
2. E. Azmoodeh and J. I. Morlanes. Drift parameter estimation for fractional Ornstein–Uhlenbeck process of the second kind. Statistics: A Journal of Theoretical and Applied Statistics, 49:1–18, 2015.
3. E. Azmoodeh and L. Viitasaari. Parameter estimation based on discrete observations of fractional Ornstein–Uhlenbeck process of the second kind. Statistical Inference for Stochastic Processes, 18:205–227, 2015.
4. J. Beran. Statistics for Long Memory Processes. Chapman & Hall, New York, 1994.
5. P. Billingsley. Weak Convergence of Measures: Applications in Probability. Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania, 1979.
6. P. Billingsley. Convergence of Probability Measures. John Wiley, New York, 1999.
7. A. Borodin and P. Salminen. Handbook of Brownian Motion – Facts and Formulae, 2nd edition. BirkhÀuser, Basel, Boston, Berlin, 2002.
8. R. Brown. A brief account of microscopical observations made in the months of June, July and August, 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies. Phil. Mag., 4:161–173, 1828. Copy at http://sciweb.nybg.org/science2/pdfs/dws/Brownian.pdf, seen 16.10.2012.
9. V. Capasso and D. Bakstein. An Introduction to Continuous-time Stochastic Processes: Theory, Models and Applications to Finance, Biology, and Medicine. BirkhÀuser, Boston, 2012.
10. L. Chaumont, H. Panti, and V. Rivero. The Lamperti representation of real-valued self-similar Markov processes. To appear in Bernoulli, submitted 2012.
11. P. Cheridito. Representations of Gaussian measures that are equivalent to Wiener measure. In J. AzĂ©ma, M. Émery, M. Ledoux, and M. Yor, editors, SĂ©minaire de ProbabilitĂ©s XXXVII, number 1832 in Springer Lecture Notes in Mathematics, pages 81–89, Berlin, Heidelberg, New York, 2003.
12. P. Cheridito, H. Kawaguchi, and M. Maejima. Fractional Ornstein–Uhlenbeck processes. Electr. J. Prob., 8:1–14, 2003.
13. P. Cowpertwait and A. Metcalfe. Introductory Time Series with R. Springer Science+Business Media, LLC, New York, 2009.
14. H. CramĂ©r and M. Leadbetter. Stationary and Related Stochastic Processes. John Wiley & Sons, Inc., New York, 1967.
15. G. Di Nunno, B. Øksendal, and F. Proske. Malliavin Calculus for LĂ©vy Processes with Applications to Finance. Springer, 2009.
16. J. Doob. The Brownian movement and stochastic equations. Ann. Math., 43(2):351–369, 1942.
17. J. Doob. Stochastic Processes. John Wiley and Sons, Inc., New York, 1953.
18. H. Dym and H. McKean. Gaussian Processes, Function Theory, and the Inverse Spectral Problem. Academic Press, Inc., New York, 1976.
19. P. Embrechts and M. Maejima. Selfsimilar Processes. Princeton University Press, United States, 2002.
20. A. ErdĂ©lyi, editor. Tables of Integral Transforms, Vol. 1, based, in part, on notes left by H. Bateman. McGraw-Hill, New York, 1954.
21. M. J. Garrido-Atienza, P. Kloeden, and A. Neuenkirch. Discretization of stationary solutions of stochastic systems driven by fractional Brownian motion. Appl. Math. Optim., 60:151–172, 2009.
22. G. Gripenberg and I. Norros. On the prediction of fractional Brownian motion. J. Appl. Probab., 33:400–410, 1996.
23. H. E. Hurst. Long-term storage capacity of reservoirs. Transactions of the American Society of Civil Engineers, 116:770–808, 1951.
24. H. E. Hurst. Methods of using long-term storage in reservoirs. Proceedings of the Institution of Civil Engineers, part 1, pages 519–577, 1955.
25. H. E. Hurst. Long-term Storage: an Experimental Study. Constable, London, 1965.
26. N. Ikeda and S. Watanabe. Stochastic Differential Equations and Diffusion Processes, 2nd edition. North-Holland and Kodansha, Amsterdam, Tokyo, 1981.
27. C. Jost. Integral Transformations of Volterra Gaussian Processes. Ph.D. thesis, Yliopistopaino, Helsinki, 2007.
28. T. Kaarakka. Properties of Gaussian Processes (in Finnish). Licentiate thesis, Joensuun yliopisto, Joensuu, 2002.
29. T. Kaarakka and P. Salminen. On fractional Ornstein–Uhlenbeck processes. Communications on Stochastic Analysis, 5:121–133, 2011.
30. W. Kaplan. Advanced Calculus. Addison Wesley, USA, 2003.
31. I. Karatzas and S. Shreve. Brownian Motion and Stochastic Calculus, 2nd edition. Springer-Verlag, Berlin, Heidelberg, 1991.
32. F. Klebaner. Introduction to Stochastic Calculus with Applications. Imperial College Press, London, 1998.
33. M. Kleptsyna and A. Le Breton. Statistical analysis of the fractional Ornstein–Uhlenbeck type process. Statistical Inference for Stochastic Processes, 5(3):229–248, 2002.
34. J. Lamperti. On convergence of stochastic processes. Transactions of the AMS, 104:430–435, 1962.
35. J. Lamperti. Semi-stable Markov processes. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 22:205–225, 1972.
36. J. Lamperti. Stochastic Processes. Springer-Verlag, New York, 1977.
37. C. Landroue. Fractional Ornstein–Uhlenbeck Process. Wolfram Demonstrations Project, http://demonstrations.wolfram.com/FractionalOrnsteinUhlenbeckProcess/, seen 28.5.2013, 2010.
38. B. Mandelbrot and J. Van Ness. Fractional Brownian motions, fractional noises and applications. SIAM Review, 10(4):422–437, 1968.
39. B. Maslowski and B. Schmalfuss. Random dynamical systems and stationary solutions of differential equations driven by the fractional Brownian motion. Proc. Stoc. Anal. and Appl., 22:1577–1607, 2004.
40. M. Matsui and N. Shieh. On the exponentials of fractional Ornstein–Uhlenbeck processes. Electronic Journal of Probability, 14:594–611, 2009.
41. J. Memin, Y. Mishura, and E. Valkeila. Inequalities for the moments of Wiener integrals with respect to a fractional Brownian motion. Statistics and Probability Letters, 51:197–206, 2001.
42. T. Mikosch. Elementary Stochastic Calculus with Finance in View. World Scientific, Singapore, 1998.
43. Y. Mishura. Stochastic Calculus for Fractional Brownian Motion and Related Processes. Springer, Berlin, Heidelberg, 2008.
44. L. Mytnik and E. Neuman. Sample path properties of Volterra processes. Communications on Stochastic Analysis, 6:359–377, 2012.
45. I. Norros, E. Valkeila, and J. Virtamo. An elementary approach to a Girsanov formula and other analytical results on fractional Brownian motions. Bernoulli, 5(4):571–587, 1999.
46. I. Norros and J. Virtamo. Handbook of FBM formulae. Technical report 01, COST-257, 1996.
47. D. Nualart. Fractional Brownian motion: stochastic calculus and applications. In M. Sanz-SolĂ©, J. Soria, J. L. Varona, and J. Verdera, editors, Proceedings of the Madrid ICM 2006, Volume III, pages 1541–1562. European Mathematical Society Publishing House, Madrid, 2006.
48. D. Nualart and Y. Hu. Parameter estimation for fractional Ornstein–Uhlenbeck processes. Statistics and Probability Letters, 80:1030–1038, 2010.
49. B. Øksendal. Stochastic Differential Equations: An Introduction with Applications, 5th edition, corrected printing. Springer-Verlag, Heidelberg, 2000.
50. L. Ornstein and G. Uhlenbeck. On the theory of the Brownian motion. Physical Review, 36, 1930.
51. V. Pipiras and M. Taqqu. Integration questions related to fractional Brownian motion. Probab. Theory Relat. Fields, 118(2):251–291, 2000.
52. D. Revuz and M. Yor. Continuous Martingales and Brownian Motion, 3rd edition. Springer-Verlag, Berlin, Heidelberg, 1999.
53. H. L. Royden. Real Analysis. The Macmillan Company, New York, 1968.
54. W. Rudin. Real and Complex Analysis. Tata McGraw-Hill, New Delhi, 1974.
55. W. Rudin. Principles of Mathematical Analysis. McGraw-Hill, Singapore, 1976.
56. T. Sottinen. Fractional Brownian Motion in Finance and Queueing. Ph.D. thesis, Yliopistopaino, Helsinki, 2003.
57. J. Walker. Fractal food: Self-similarity on the supermarket shelf. http://www.fourmilab.ch/images/Romanesco/, retrieved Nov 02 2012, 2005.
58. M. ZĂ€hle. Integration with respect to fractal functions and stochastic calculus, I. Probab. Theory Related Fields, 111(2):333–347, 1998.