
# Space-time adaptive finite difference method for European multi-asset options

Per Lötstedt¹, Jonas Persson¹\*, Lina von Sydow¹†, Johan Tysk²‡

¹ Division of Scientific Computing, Department of Information Technology, Uppsala University, SE-75105 Uppsala, Sweden

² Department of Mathematics, Uppsala University, SE-75106 Uppsala, Sweden

emails: perl, jonasp, lina @it.uu.se, [email protected]

Abstract

The multi-dimensional Black-Scholes equation is solved numerically for a European call basket option using a priori–a posteriori error estimates. The equation is discretized by a finite difference method on a Cartesian grid. The grid is adjusted dynamically in space and time to satisfy a bound on the global error. The discretization errors in each time step are estimated and weighted by the solution of the adjoint problem. Bounds on the local errors and the adjoint solution are obtained by the maximum principle for parabolic equations. Comparisons are made with Monte Carlo and quasi-Monte Carlo methods in one dimension and the performance of the method is illustrated by examples in one, two, and three dimensions.

Keywords: Black-Scholes equation, finite difference method, space adaptation, time adaptation, maximum principle

AMS subject classification: 65M20, 65M50

1 Introduction

We are interested in the numerical solution of the multi-dimensional Black-Scholes equation

$$\frac{\partial F}{\partial t} + \sum_{i=1}^{d} r s_i \frac{\partial F}{\partial s_i} + \frac{1}{2} \sum_{i,j=1}^{d} [\sigma\sigma^T]_{ij}\, s_i s_j \frac{\partial^2 F}{\partial s_i \partial s_j} - r F = 0, \qquad F(T, s) = \Phi(s), \tag{1}$$

Footnotes: \* Supported by FMB, the Swedish Graduate School in Mathematics and Computing Science. † Partially supported by the Swedish Research Council (VR) under contract 621-2003-5280. ‡ Partially supported by the Swedish Research Council (VR) under contract 621-2003-5267.

to determine the arbitrage-free price $F$ of a European option expiring at $T$ with contract function $\Phi(s)$. Here, $r \in \mathbb{R}_+ = \{x \mid x \ge 0\}$ is the short rate of interest and $\sigma \in \mathbb{R}^{d \times d}$ is the volatility matrix. Our numerical method allows $r$ and $\sigma$ to be both level and time dependent, but some of the theoretical estimates are restricted to time-independent interest rate and volatility.

We will consider a European call basket option where the contract function is defined by

$$\Phi(s) = \left(\frac{1}{d} \sum_{i=1}^{d} s_i - K\right)^{+}, \tag{2}$$

where $(x)^+ = \max(x, 0)$ and $K$ is the so-called strike price. Our method works just as well for any contract function with vanishing second derivative across the boundary at $s_i = 0$. This way of determining the arbitrage-free price was introduced by F. Black and M. Scholes in [1] and further developed by R. C. Merton in [2], both in 1973.
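The contract function (2) is simple to code directly; a minimal sketch (the function name is ours, not from the paper):

```python
def basket_call_payoff(s, K):
    """Equally weighted European call basket payoff (2):
    Phi(s) = ((1/d) * sum_i s_i - K)^+."""
    d = len(s)
    return max(sum(s) / d - K, 0.0)
```

For example, with two assets at 40 and 20 and strike 25, the average is 30 and the payoff is 5.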

Another way to determine this price is to solve a stochastic differential equation with a Monte Carlo method and use the Feynman-Kac formula, see e.g. [3]. This method is well known to converge slowly in the statistical error. If we denote the number of simulations by $M$, the statistical error is proportional to $M^{-1/2}$. Better convergence rates are obtained with quasi-Monte Carlo methods [4, 5]. In [6], an adaptive finite difference method is developed with full control of the local discretization error, which is shown to be very efficient. The solution with finite difference approximations on a grid suffers from the "curse of dimensionality", with an exponential growth in dimension $d$ of the number of grid points, making it impractical to use in more than four (or so) dimensions; a Monte Carlo algorithm is then the best alternative. However, we believe that the finite difference method is a better method in low dimensions due to the uncertainty in the convergence of Monte Carlo and quasi-Monte Carlo methods. Furthermore, a finite difference solution is sufficiently smooth for a simple calculation of derivatives of the solution such as the hedging parameters $\Delta_i = \partial F/\partial s_i$, $\Gamma_i = \partial^2 F/\partial s_i^2$, $\Theta = \partial F/\partial t$ ("the Greeks"). While finite difference methods are easily extended to the pricing of American options, this is not the case with Monte Carlo methods [5].

Finite difference methods for option pricing are found in the books [7, 8] and in the papers [6, 9, 10, 11]. A Fourier method is developed in [12], and an adaptive finite element discretization is devised in [13, 14] for American options. Another technique to determine a smooth solution on a grid is to use the sparse grid method [15]. For a limited number of underlying assets, sparse grids have been applied to the pricing of options in [16]. The purpose of this paper is to develop an accurate algorithm suitable for European options based on finite difference approximations, utilizing their regular error behavior to estimate and control the solution errors.

The partial differential equation (PDE) (1) is here discretized by second order accurate finite difference stencils on a Cartesian grid. The time steps and the grid sizes are determined adaptively. Adaptive methods have the advantages of providing estimates of the numerical errors and savings in computing time and storage for the same accuracy in the solution. Moreover, there is no need for a user to initially specify a constant time step and a constant grid size for the whole solution domain.

Examples of algorithms for adaptivity in space and time are found in [6, 17, 18]. The grid and the time step may change at every discrete time point in [17]. In [6], a provisional solution is computed for estimation of the errors, and then the fixed grid and the variable time step are chosen so that the local errors satisfy given tolerances in the final solution. In moving grid methods, see e.g. [18], the grid has a fixed number of points, but the points move in every time step for an optimal distribution. In this paper, the time step varies in every step, but the grid is changed only at certain time instants so that a maximal number of points are located optimally or a requirement on the error is fulfilled.

For the adaptive procedure, an error equation is derived for the global error $E(t, s)$ in the solution. The driving right-hand side in this equation is the local discretization error. This error is estimated and the grid is adapted at selected time points so that the Cartesian structure of the grid is maintained, and the time step is adjusted in every time step. The step sizes are chosen so that a linear functional of the solution error at $t = 0$ satisfies an accuracy constraint $\epsilon$,

$$\left|\int g(s) E(0, s)\, ds\right| \le \epsilon, \tag{3}$$

for a non-negative $g$ chosen to be compactly supported where the accuracy of the solution is most relevant. The weights for the local error bounds in each time interval are solutions of the adjoint equation of (1). The growth of the error in the intervals between the grid adaptations is estimated a priori by the maximum principle for parabolic equations. In the same manner, the solution of the adjoint equation is bounded. Furthermore, our algorithm automatically chooses the discretization so that bounds on the errors of the type (3) above are satisfied also for multi-dimensional equations. The emphasis is on error control and reduction of the number of grid points. Efficiency and low CPU times are also important, but these issues are very much dependent on the implementation and the computer system. The adaptation algorithm is first applied to a one-dimensional problem for comparison between the computed solution and the analytical solution. Two- and three-dimensional problems are then successfully solved with the adaptive algorithm.

An adaptive method for binomial and trinomial tree models on lattices for option pricing is found in [19]. The advantages of our method compared to that method are that there is no restriction on the variation of the spatial and temporal steps due to the method, the discretization errors are estimated and controlled, their propagation to the final solution is controlled, and the error there is below a tolerance given by the user.

The paper is organized as follows. We start by presenting a comparison between Monte Carlo methods, quasi-Monte Carlo methods, and a finite difference method to motivate the development in the rest of the paper. Then, the equation (1) is transformed by a change of variables and scaling in Section 3. The discretization in space and time is described in the following section. The adjoint equation and its relation to the discretization errors are the subject of Section 5. The adjoint solution is estimated with the maximum principle in Section 6. In Section 7, the local discretization errors are estimated and a simplification is derived based on the maximum principle. The algorithms for the space and time adaptivity are discussed in Sections 8 and 9. In Section 10, the adaptive algorithm is applied to the pricing of European call basket options with one, two, and three underlying assets. Conclusions are drawn in the final section.

2 Monte Carlo methods

In this section we are going to make a simple comparison in one dimension, $d = 1$, between a Monte Carlo method (MC), a quasi-Monte Carlo method (QMC), and a finite difference method (FD) to determine the arbitrage-free price of a European call option with one underlying asset. For a description of the Monte Carlo and quasi-Monte Carlo methods, see e.g. [5, 20, 21]. The finite difference method of second order is described in detail later in this paper.

Let $M$ be the number of simulation paths. The error in the MC method decays as $M^{-1/2}$ independently of the dimension $d$, and in an optimal QMC method as $M^{-1}(\log M)^d$ [4, 5]. Time integration with an Euler method with a weak order of convergence of one introduces an error proportional to $\Delta t$, the length of the time step. The computational work grows linearly with $M$ and is inversely proportional to $\Delta t$. Hence, the work $W$ and error $E$ fulfill

$$W_{MC} = \mathcal{O}(M \Delta t^{-1}), \qquad E_{MC} = \mathcal{O}(\Delta t) + \mathcal{O}(M^{-\gamma}), \tag{4}$$

where $\gamma = 1/2$ (MC) or $\gamma = 1$ (QMC, ignoring the logarithmic factor). The grid size $h$ in a finite difference method is of the order $N^{-1/d}$, where $N$ is the total number of grid points. The error due to the space and time discretizations is proportional to $h^2$ and $\Delta t^2$ in a second order method. Ideally, the work depends linearly on $N$ in every time step. Thus, we have

$$W_{FD} = \mathcal{O}(N \Delta t^{-1}), \qquad E_{FD} = \mathcal{O}(\Delta t^2) + \mathcal{O}(N^{-2/d}). \tag{5}$$

Suppose that the error tolerance is $\epsilon$ for the spatial and temporal errors separately. Then it follows from (4) and (5) that

$$W_{MC} \sim \epsilon^{-1 - 1/\gamma}, \qquad W_{FD} \sim \epsilon^{-(d+1)/2}. \tag{6}$$

The work depends on a decreasing $\epsilon$ in the same way for a second order FD method in 5 dimensions compared to an MC method, and in 3 dimensions compared to a QMC method. With a smaller $d$, the preferred method is the FD scheme for $\epsilon$ sufficiently small. Otherwise, choose the stochastic algorithm.

A problem with constant volatility $\sigma = 0.3$ and strike price $K = 30$ is considered in the numerical experiments. For such problems, time integration in steps with MC and QMC is not necessary to solve a European option problem.


We have considered pure MC, MC with antithetic variates (MC-anti) [5], and QMC with a Sobol sequence [5, 22, 23] generated by the code at [24]. The space and time steps in the FD method were such that the contribution to the error, obtained by comparison with the exact solution [3, 25], was equal in space and time. The methods are implemented in a straightforward manner in Matlab without any attempt to optimize the codes.
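The MC-anti estimator used in the comparison can be sketched as follows. This is our own minimal version: the interest rate $r = 0.05$, spot $s_0 = 30$, and maturity $T = 1$ are illustrative choices (the text only fixes $\sigma = 0.3$ and $K = 30$), and the function names are ours:

```python
import math
import random

def bs_call_exact(s0, K, r, sigma, T):
    """Black-Scholes price of a European call, used to measure the error."""
    d1 = (math.log(s0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * cdf(d1) - K * math.exp(-r * T) * cdf(d2)

def mc_call_antithetic(s0, K, r, sigma, T, M, seed=0):
    """Monte Carlo price with antithetic variates: each Gaussian draw z
    is paired with -z, which typically reduces the variance."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(M):
        z = rng.gauss(0.0, 1.0)
        up = max(s0 * math.exp(drift + vol * z) - K, 0.0)
        down = max(s0 * math.exp(drift - vol * z) - K, 0.0)
        total += 0.5 * (up + down)
    return math.exp(-r * T) * total / M
```

Reusing each draw with flipped sign cancels the odd part of the payoff's dependence on $z$, which is the variance-reduction mechanism behind the MC-anti curves in the figures.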

In Figure 1, the errors at $s = K$ for MC, MC-anti, and QMC are displayed as a function of the number of simulation paths $M$. We also show the error as a function of computational time for MC, MC-anti, QMC as well as FD. The FD solution is available in an interval and the maximum error in the interval is recorded. The random numbers are generated separately. The CPU time to obtain $3.2 \cdot 10^6$ pseudo-random numbers was 0.78 s, and for the same number of Sobol quasi-random numbers 51631 s. The cost of calculating the QMC numbers is several orders of magnitude higher than only computing the solution with given random numbers.

Figure 1: To the left: Error as a function of simulation paths $M$ (MC, MC-anti, QMC). To the right: Error as a function of computational time in seconds (MC, MC-anti, QMC, FD). Both panels use logarithmic axes.

From Figure 1, the conclusion is that QMC is superior compared to MC and FD in this case. The slopes of QMC and FD are as expected from (6). Since $W_{MC}$ is independent of $\Delta t$ here, we have $W_{QMC} \sim \epsilon^{-1}$ and $W_{FD} \sim \epsilon^{-1}$ in the figure and in (6). A least squares fit to the data for MC yields an exponent between $-1$ and $-2$, while (6) predicts $-2$.

Next, we are going to consider time stepping for MC and QMC. This is needed when we have to follow the simulation paths, which is the case e.g. for the constant elasticity of variance model [26]. Also, for other types of options such as barrier options, it is necessary to resolve the simulation path in order to determine whether the barrier has been hit or not. In our comparisons, we have still used the standard Black-Scholes model with constant volatility in order to be able to accurately compute the error in the solution from the exact solution.

When using time stepping in QMC, each time step corresponds to one dimension. It is well known that QMC does not perform as well when multi-dimensional quasi-random sequences are needed. To enhance the performance of QMC applied to these problems, a so-called Brownian bridge construction is often used, see e.g. [27, 28, 29], henceforth referred to as QMC-BB.
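A minimal sketch of the bisection form of the Brownian bridge construction (our own version, with our function name; `n_steps` must be a power of two):

```python
import math
import random

def brownian_bridge_path(n_steps, T, rng):
    """Sample a Brownian path W at n_steps+1 equispaced times by bridge
    bisection: the terminal value W(T) is drawn first, then midpoints are
    filled in conditionally, W(mid) ~ N((W(a)+W(b))/2, (b-a)/4) on each
    interval [a, b]. n_steps must be a power of two."""
    W = [0.0] * (n_steps + 1)
    W[n_steps] = math.sqrt(T) * rng.gauss(0.0, 1.0)
    dt = T / n_steps
    span = n_steps
    while span > 1:
        half = span // 2
        for left in range(0, n_steps, span):
            mid = left + half
            right = left + span
            mean = 0.5 * (W[left] + W[right])
            std = math.sqrt(span * dt / 4.0)  # conditional std of the midpoint
            W[mid] = mean + std * rng.gauss(0.0, 1.0)
        span = half
    return W
```

The point of the construction is that the coarse features of the path (the terminal value, then midpoints) are assigned to the first, best-distributed coordinates of the quasi-random sequence, which is why QMC-BB outperforms plain QMC when many time steps are used.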

Figure 2: MC, MC-anti, QMC, and QMC-BB use 8 time steps. To the left: Error as a function of simulation paths $M$. To the right: Error as a function of computational time. Both panels use logarithmic axes; FD is included in the right panel.

In Figures 2, 3, and 4, the error is plotted as a function of the number of simulation paths $M$ as well as a function of computational time. We have used 8, 16, and 32 time steps for the different MC and QMC methods in the figures. The time for computing the quasi-random numbers, the pseudo-random numbers, and the construction of the Brownian bridge is not included in the time measurements. Again, we have to bear in mind that the computation of the quasi-random numbers is expensive and is by far the predominant part of the computing time for QMC.

The error in the stochastic methods in the Figures 2, 3, and 4 with different $\Delta t$ has a more erratic behavior compared to the deterministic FD error. The FD error converges smoothly when the computational work increases. The error in the MC and QMC solutions decreases until the error in the time discretization dominates. This is best illustrated in Figures 2 and 4 for QMC-BB, where a plateau is reached for $M \gtrsim 10^4$. The level of this plateau is about four times higher in Figure 2, where $\Delta t$ is four times longer. With more time steps and an improved resolution in time, FD eventually becomes the most efficient method.

Figure 3: MC, MC-anti, QMC, and QMC-BB use 16 time steps. To the left: Error as a function of simulation paths $M$. To the right: Error as a function of computational time. Both panels use logarithmic axes; FD is included in the right panel.

Figure 4: MC, MC-anti, QMC, and QMC-BB use 32 time steps. To the left: Error as a function of simulation paths. To the right: Error as a function of computational time. Both panels use logarithmic axes; FD is included in the right panel.

Figure 5: MC, QMC, QMC-BB use 8, 16, and 32 time steps and $M$ is increased to keep the balance between the errors in the stochastic methods in the same way as in the FD method. Error as a function of computational time in seconds, logarithmic axes.

In Figure 5, $M$ is increased in the MC and QMC computations when $\Delta t$ is reduced, using 8, 16, and 32 steps for the whole interval. The values of $M$ are 625, 2500, and $10^4$ for MC, and 2500, 5000, and $10^4$ for QMC, to avoid an imbalance between the errors in time and space. The errors in the QMC-BB and the FD methods decay regularly, with a steeper slope for the FD algorithm. From the derivations in (6) we expect the exponents for QMC and FD to be $-2$ and $-1$, while in Figure 5 they are $-1.3$ and $-0.8$.

The conclusion of this section is that FD is the preferred method in low dimensions when pricing options accurately and rapidly with error control and time stepping is needed. For most types of options, time integration is necessary in order to resolve the simulation paths, and high accuracy is of utmost importance for compound options or for computing the Greeks. Furthermore, FD methods for European options can be extended to American options as in [30, 31, 32, 33]. Efficient MC methods for low-dimensional American options are not known.

Error control is possible also for MC and QMC algorithms. Confidence intervals for the error due to the stochastic part of the MC method can be obtained by computing the variance of the solution for different realizations. This is more difficult for QMC methods [5]. The temporal error can also be estimated, e.g. by comparing the solutions using $\Delta t$ and $\Delta t/2$ as in Richardson extrapolation [5].


3 Model problem

We start by transforming (1) from a final value problem to an initial value problem with dimensionless parameters. The transformation of the timescale has the advantage that standard texts on time integrators are applicable. The following transformations give the desired properties:

$$K x = s, \quad \bar r = r/\hat\sigma^2, \quad K P(\bar t, x) = F(t, s), \quad \bar\sigma = \sigma/\hat\sigma, \quad \bar t = \hat\sigma^2 (T - t), \quad K \Psi(x) = \Phi(s), \tag{7}$$

where $\hat\sigma$ is a constant chosen as $\max_{ij} \sigma_{ij}$ in the solution domain. These transformations result in the following linear partial differential equation

$$P_t - r \sum_{i=1}^{d} x_i P_i - \sum_{i,j=1}^{d} a_{ij} x_i x_j P_{ij} + r P = 0, \qquad P(0, x) = \Psi(x) = \left(\frac{1}{d} \sum_{i=1}^{d} x_i - 1\right)^{+}, \tag{8}$$

where $a_{ij} = \frac{1}{2}[\sigma\sigma^T]_{ij}$. The coordinates of $\mathbb{R}^d$ are called the spatial variables and are denoted by $x_1, \ldots, x_d$. The subscripts $i, j$, and later also $k, l, m$, on a dependent variable denote differentiation with respect to $x_i$ and $x_j$, e.g. $P_{ij}$. Subscripts on an independent variable denote components of a vector such as $x_i$, or entries of a matrix such as $a_{ij}$. The matrix $[a_{ij}]$ is assumed to be positive definite. Thus, (8) is a parabolic equation. The subscript $t$ denotes differentiation with respect to normalized time.

We will solve (8) in a cylinder

$$C = D \times [0, T], \tag{9}$$

where $D$ is a bounded computational domain in $\mathbb{R}^d_+$ with boundary $\partial D$.

4 Discretization

Let $\mathcal{L}$ be the operator

$$\mathcal{L} = r \sum_{i=1}^{d} x_i \frac{\partial}{\partial x_i} + \sum_{i,j=1}^{d} a_{ij} x_i x_j \frac{\partial^2}{\partial x_i \partial x_j} - r. \tag{10}$$

The partial differential equation (8) can then be written as

$$P_t = \mathcal{L} P. \tag{11}$$

We introduce a semi-discretization of (11) in space by using centered second order finite differences (FD) on a structured but non-equidistant grid, see Figure 6.


Figure 6: The $x_i$-axis, with nodes $x_{k-1,i}$, $x_{ki}$, $x_{k+1,i}$ and step sizes $h_{k-1,i}$, $h_{ki}$. Here, $x_{ki}$, $k = 1, \ldots, n_i$, denotes the $k$-th node of dimension $i$.

The number of grid points in the $i$-th dimension is $n_i$, $i = 1, \ldots, d$. If we let $P_h$ be a vector of the lexicographically ordered unknowns of length $\prod_{i=1}^{d} n_i$, then

$$\frac{dP_h}{dt} = A_h P_h, \tag{12}$$

where $A_h$ is a matrix with the second order finite difference discretization of $\mathcal{L}$. The matrix $A_h$ in (12) is a very large, sparse matrix, with the number of non-zeros of each row depending on the number of space dimensions, i.e. the number of underlying assets.

The first derivative in the $i$-direction is approximated as in [6, 34] by

$$\frac{\partial P(x_{ki})}{\partial x_i} = P_i(x_{ki}) \approx \alpha_{x_{ki}} P(x_{k+1,i}) + \beta_{x_{ki}} P(x_{ki}) + \gamma_{x_{ki}} P(x_{k-1,i}), \tag{13}$$

where

$$\alpha_{x_{ki}} = \frac{h_{k-1,i}}{h_{ki}(h_{ki} + h_{k-1,i})}, \quad \beta_{x_{ki}} = \frac{h_{ki} - h_{k-1,i}}{h_{ki} h_{k-1,i}}, \quad \gamma_{x_{ki}} = -\frac{h_{ki}}{h_{k-1,i}(h_{k-1,i} + h_{ki})},$$

and for the second derivative

$$\frac{\partial^2 P(x_{ki})}{\partial x_i^2} = P_{ii}(x_{ki}) \approx \alpha_{x_{ki}x_{ki}} P(x_{k+1,i}) + \beta_{x_{ki}x_{ki}} P(x_{ki}) + \gamma_{x_{ki}x_{ki}} P(x_{k-1,i}), \tag{14}$$

where

$$\alpha_{x_{ki}x_{ki}} = \frac{2}{h_{ki}(h_{k-1,i} + h_{ki})}, \quad \beta_{x_{ki}x_{ki}} = -\frac{2}{h_{k-1,i} h_{ki}}, \quad \gamma_{x_{ki}x_{ki}} = \frac{2}{h_{k-1,i}(h_{k-1,i} + h_{ki})}.$$

The cross-derivatives with respect to $x_i$ and $x_j$ are obtained by applying (13) once in the $i$-direction and once in the $j$-direction.

The leading terms in the discretization errors in (13) and (14) at $x_{ki}$ are as in [34]

$$\tau_{1i} = -\frac{1}{6} h_{k-1,i} h_{ki} P_{iii}(x_{ki}) + \mathcal{O}(h_i^3),$$
$$\tau_{2i} = -\frac{1}{3}(h_{ki} - h_{k-1,i}) P_{iii}(x_{ki}) - \frac{1}{12}(h_{ki}^2 - h_{ki} h_{k-1,i} + h_{k-1,i}^2) P_{iiii}(x_{ki}) + \mathcal{O}(h_i^3). \tag{15}$$

For a smooth variation of the grid such that $h_{k-1,i} = h_{ki}(1 + \mathcal{O}(h_{ki}))$, the approximations (13) and (14) are both of second order.
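The stencil coefficients in (13) and (14) are easy to check numerically. A sketch with our own function names, writing `h_left` for $h_{k-1,i}$ and `h_right` for $h_{ki}$; both stencils are exact on quadratics, so the truncation errors (15) vanish there:

```python
def d1_coeffs(h_left, h_right):
    """Stencil (13): P'(x_k) ~ a*P(x_k + h_right) + b*P(x_k) + c*P(x_k - h_left)."""
    a = h_left / (h_right * (h_right + h_left))
    b = (h_right - h_left) / (h_right * h_left)
    c = -h_right / (h_left * (h_left + h_right))
    return a, b, c

def d2_coeffs(h_left, h_right):
    """Stencil (14): P''(x_k) ~ a*P(x_k + h_right) + b*P(x_k) + c*P(x_k - h_left)."""
    a = 2.0 / (h_right * (h_left + h_right))
    b = -2.0 / (h_left * h_right)
    c = 2.0 / (h_left * (h_left + h_right))
    return a, b, c
```

Applying both stencils to $P(x) = x^2$ on an uneven pair of steps reproduces $P' = 2x$ and $P'' = 2$ to rounding error.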


There are several possible numerical boundary conditions that can be used for these problems. Here, the condition on a boundary where $x_i$ is constant is that the numerical approximation of the second derivative $P_{ii}$ is set to zero, which implies that the option price is nearly linear with respect to the spot price at the boundaries. These and other boundary conditions are discussed in [7].

For integration in time we use the backward differentiation formula of order two (BDF-2) [35], which is unconditionally stable for constant time steps. For variable time steps this method can be written

$$\alpha_0^n P_h^n = \Delta t_n \mathcal{L}(P_h^n) - \alpha_1^n P_h^{n-1} - \alpha_2^n P_h^{n-2},$$
$$\alpha_0^n = \frac{1 + 2\theta_n}{1 + \theta_n}, \quad \alpha_1^n = -(1 + \theta_n), \quad \alpha_2^n = \frac{\theta_n^2}{1 + \theta_n}, \tag{16}$$

where $\theta_n = \Delta t_n / \Delta t_{n-1}$ and $\Delta t_n = t_n - t_{n-1}$, see [17, 35].

5 Discretization errors and the adjoint equation

Let $\tilde P$ denote a smooth reconstruction of the discrete data in $P_h^n$, so that they agree at $t = t_n$ and at the grid points. The solution error $E = \tilde P - P$ approximately satisfies the following boundary value problem ("the error equation")

$$E_t - r \sum_{i=1}^{d} x_i E_i - \sum_{i,j=1}^{d} a_{ij} x_i x_j E_{ij} + r E = E_t - \mathcal{L} E = \tau, \tag{17}$$
$$E(0, x) = 0, \; x \in D, \qquad E(t, x) = 0, \; x \in \partial D,$$

where $\tau$ is the local discretization or truncation error. By solving (17) we obtain the approximate global error $E_h^n = P_h^n - P$ at $t_n$ at the grid points in $D \times [0, T]$.

The local discretization error consists of two parts, the temporal discretization error $\tau_k$ and the spatial discretization error $\tau_h$:

$$\tau = \tau_k + \tau_h. \tag{18}$$

The aim is to develop a method that estimates $\tau$ a posteriori at $t_n$ and then estimates the evolution of $\tau_h$ a priori for $t > t_n$. Then we determine computational grids to control $\tau_h$, and the time steps are selected to control $\tau_k$, in order to obtain a final solution fulfilling prescribed error conditions on a functional of the global solution error $E$. Such methods have been developed for finite element discretizations of different PDEs, see e.g. [36, 37]. For this reason we introduce the adjoint equation to (17):

$$u_t + \mathcal{L}^* u = 0, \qquad \mathcal{L}^* u = -r \sum_{i=1}^{d} (x_i u)_i + \sum_{i,j=1}^{d} a_{ij}(x_i x_j u)_{ij} - r u, \qquad u(T, x) = g(x). \tag{19}$$

The boundary condition for the adjoint equation is $u = 0$ on $\partial D$. Note that the adjoint problem is a final value problem.


Using (17) and (19) we obtain

$$\int_0^T \int_D u \tau \, dx\, dt = \int_0^T \int_D u E_t \, dx\, dt - \int_0^T \int_D u \mathcal{L} E \, dx\, dt$$
$$= \int_D g(x) E(T, x)\, dx - \int_0^T \int_D u_t E \, dx\, dt - \int_0^T \int_D (\mathcal{L}^* u) E \, dx\, dt \tag{20}$$
$$= \int_D g(x) E(T, x)\, dx.$$

The function $g(x)$ should be chosen such that it is non-negative and has compact support in the domain where one is most interested in having an accurate solution. It is normalized such that

$$\int_D g(x)\, dx = 1. \tag{21}$$

Partition the interval $[0, T]$ into $L$ subintervals $I_\ell = [t_\ell, t_{\ell+1})$ and take the absolute value of the left-hand side in (20) to arrive at

$$\left|\int_0^T \int_D u(t, x) \tau(t, x)\, dx\, dt\right| \le \sum_{\ell=0}^{L-1} \left|\int_{t_\ell}^{t_{\ell+1}} \int_D u(t, x) \tau(t, x)\, dx\, dt\right|$$
$$\le \sum_{\ell=0}^{L-1} \sup_{\substack{x \in D \\ t_\ell \le t \le t_{\ell+1}}} |\tau(t, x)| \int_{t_\ell}^{t_{\ell+1}} \int_D |u(t, x)|\, dx\, dt = \sum_{\ell=0}^{L-1} \|u\|_\ell \sup_{\substack{x \in D \\ t_\ell \le t \le t_{\ell+1}}} |\tau(t, x)|, \tag{22}$$

with the definition

$$\|u\|_\ell = \int_{t_\ell}^{t_{\ell+1}} \int_D |u(t, x)|\, dx\, dt. \tag{23}$$

Our goal now is to generate a discretization of $D$ and $[0, T]$ adaptively so that

$$\left|\int_D g(x) E(T, x)\, dx\right| \le \epsilon, \tag{24}$$

where $\epsilon$ is a prescribed error tolerance. From (20) and (22), it is clear that we can bound the integral from above by estimating $\sup |\tau|$ and $\|u\|_\ell$.

The unknown $u$ is the solution to the adjoint problem (19), and thus $\|u\|_\ell$ cannot be adjusted in order to fulfill (24). However, we are able to adjust the discretization error $\tau$ by controlling $h$ and $\Delta t$ in the spatial and temporal discretization. Thus, we will require in each interval $I_\ell$ that

$$\sup_{\substack{x \in D \\ t_\ell \le t \le t_{\ell+1}}} |\tau(t, x)| \le \tau_\ell. \tag{25}$$

We choose to equidistribute the errors in the intervals, yielding

$$\tau_\ell = \epsilon / (L \|u\|_\ell). \tag{26}$$


Then from (22), (25), and (26) we find that

$$\left|\int_D g(x) E(T, x)\, dx\right| \le \sum_{\ell=0}^{L-1} \|u\|_\ell \sup_{\substack{x \in D \\ t_\ell \le t \le t_{\ell+1}}} |\tau(t, x)| \le \epsilon. \tag{27}$$

To summarize this section, we have a strategy to obtain the prescribed tolerance $\epsilon$ in (24):

(i) Compute $\|u\|_\ell$, $\ell = 0, \ldots, L-1$, in (23).

(ii) Compute $\tau_\ell$, $\ell = 0, \ldots, L-1$, using (26).

(iii) Generate computational grids $\Gamma_\ell$, $\ell = 0, \ldots, L-1$, and choose time steps $\Delta t_n$ for all $n$ such that (25) is satisfied.

The time steps are adjusted in every step, but the grids are changed only at $L$ prespecified locations. The spatial error is estimated in the beginning of each interval with a constant grid, and its growth in the interval is estimated (see Section 7). In this way, the expensive redistribution of the grid points and interpolation of the solution to a new grid are limited to $t = t_\ell$, $\ell = 0, 1, \ldots, L-1$. When passing from grid $\Gamma_\ell$ to $\Gamma_{\ell+1}$, the solution is transferred by cubic interpolation. In Section 6 we will estimate $\|u\|_\ell$ a priori, and in Sections 7, 8, and 9 we will demonstrate how to estimate $\tau$ a priori and a posteriori, derive new computational grids, and vary the time step.
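Step (ii) of the strategy is a one-liner. The sketch below (our naming) encodes (26); the point is the telescoping identity behind (27): with $\tau_\ell = \epsilon/(L\|u\|_\ell)$, the weighted sum $\sum_\ell \|u\|_\ell \tau_\ell$ equals exactly $\epsilon$, whatever the adjoint norms are.

```python
def interval_tolerances(eps, u_norms):
    """Distribute the global tolerance eps over the L intervals as in (26):
    tau_l = eps / (L * ||u||_l), so that sum_l ||u||_l * tau_l = eps."""
    L = len(u_norms)
    return [eps / (L * n) for n in u_norms]
```

Intervals where the adjoint solution is large (large $\|u\|_\ell$) thus receive a tighter local tolerance.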

6 Maximum principle for the solution of the adjoint equation

A bound on the solution of the adjoint equation (19) is derived assuming constant $r$ and $a_{ij}$, using the maximum principle for parabolic equations, see [38]. Performing the differentiation in (19) and transforming the adjoint equation to an initial value problem by substituting $\bar t = T - t$ yields

$$u_{\bar t} - \sum_{i,j=1}^{d} \left(2 a_{ij}(1 + \delta_{ij}) - \delta_{ij} r\right) x_j u_j - \sum_{i,j=1}^{d} a_{ij} x_i x_j u_{ij} - \left(\sum_{i,j=1}^{d} a_{ij}(1 + \delta_{ij}) - (d+1) r\right) u = 0,$$
$$u(\bar t, x) = 0, \; x \in \partial D, \qquad u(0, x) = g(x), \; x \in D. \tag{28}$$

The Kronecker delta function is denoted by $\delta_{ij}$. We also have $\bar t_\ell = T - t_{L-\ell}$, $\ell = 0, \ldots, L-1$.

We introduce the standard notion of the parabolic boundary of the cylinder

$$C_\ell = D \times [\bar t_\ell, \bar t_{\ell+1}), \tag{29}$$

denoting it by $\bar\partial C_\ell$: the topological boundary of $C_\ell$ except the top $D \times \{\bar t_{\ell+1}\}$. The standard maximum principle, see [38], says that for an equation of the type (28), in the absence of zero order terms, the maximum and minimum of $u$ over $C_\ell$ are attained on $\bar\partial C_\ell$. In our case there is a zero order term $R u$, where

$$R = \sum_{i,j=1}^{d} a_{ij}(1 + \delta_{ij}) - (d + 1) r.$$

However, the function $e^{-R\bar t} u$ satisfies (28) without zero order terms. Thus, by the maximum principle,

$$\inf_{\bar\partial C_\ell} u e^{-R\bar t} \le u(\bar t, x) e^{-R\bar t} \le \sup_{\bar\partial C_\ell} u e^{-R\bar t}. \tag{30}$$

Using that $g \ge 0$ and the boundary condition on $\partial D$, the estimate

$$0 \le u(\bar t, x) \le e^{R\bar t} \sup_{\bar\partial C_\ell} u e^{-R\bar t} \le \begin{cases} e^{R(\bar t - \bar t_\ell)} \sup_{\bar\partial C_\ell} u, & R \ge 0, \\ e^{R(\bar t - \bar t_{\ell+1})} \sup_{\bar\partial C_\ell} u, & R < 0, \end{cases} \tag{31}$$

holds for all $(\bar t, x)$ in $C_\ell$. Let $\Delta_\ell = \bar t_{\ell+1} - \bar t_\ell$. From the previous section we are interested in estimating

$$\sup_{\substack{x \in D \\ t_\ell \le t \le t_{\ell+1}}} |u(t, x)| = \sup_{\substack{x \in D \\ \bar t_{L-\ell-1} \le \bar t \le \bar t_{L-\ell}}} |u(\bar t, x)| \le e^{|R| \Delta_\ell} \sup_{\bar\partial C_\ell} |u(\bar t, x)|, \quad \ell = 0, \ldots, L-1. \tag{32}$$

Since $u(\bar t, x) = 0$ on $\partial D$, $\sup_{\bar\partial C_\ell} |u(\bar t, x)|$ is reached at $\bar t = \bar t_\ell$. In particular, with the initial data $g(x)$,

$$\sup_{\substack{x \in D \\ t_\ell \le t \le t_{\ell+1}}} |u(t, x)| \le \sup_{\substack{x \in D \\ 0 \le \bar t \le \bar t_L}} |u(\bar t, x)| \le e^{|R| \bar t_L} \sup_{x \in D} g(x). \tag{33}$$

Finally, by (33) we have a bound on $\|u\|_\ell$ in (23):

$$\|u\|_\ell \le |D| \Delta_\ell \sup_{\substack{x \in D \\ t_\ell \le t \le t_{\ell+1}}} |u| \le |D| \Delta_\ell e^{|R| \bar t_L} \sup_{x \in D} g(x). \tag{34}$$

From this upper bound, $\tau_\ell$, $\ell = 0, \ldots, L-1$, can be computed using (26). The adjoint solution is bounded by the given data, and there is a non-vanishing lower bound on $\tau$ to satisfy the tolerance in (27).

These a priori estimates are in general not sufficiently sharp for the selection of $\tau_\ell$ and an efficient adaptive procedure. Instead, (28) is solved numerically on a coarse grid in order to find $\|u\|_\ell$.

7 Estimating the spatial discretization error

The spatial error is estimated a priori in this section by applying the maximum principle to equations satisfied by terms in the discretization error. A simplifying assumption concerning the spatial error, in the analysis here and in the implementation of the adaptive scheme, is

Assumption 1 The dominant error terms in the approximations of the second derivatives in (8) are due to the diagonal terms $a_{ii} x_i^2 P_{ii}$.

The assumption is valid if $a_{ij} \ll a_{kk}$ for $i \ne j$ and all $k$, i.e. the correlations between the assets are small.

The following assumption is necessary for the analysis below to be valid. The adaptive procedure works well for an $x$-dependent interest rate and volatility, but the a priori analysis is much more complicated.

Assumption 2 The interest rate $r$ and the volatility matrix $[a_{ij}]$ are level (i.e. space) independent.

If Assumption 1 is valid, then the dominant terms in the discretization error in space of the operator in (8) are

$$\tau_h = \sum_{k=1}^{d} \tau_{hk} = r \sum_{k=1}^{d} x_k \tau_{1k} + \sum_{k=1}^{d} a_{kk} x_k^2 \tau_{2k}, \tag{35}$$

where $\tau_{1k}$ is the error in the approximation of $P_k$ and $\tau_{2k}$ is the error in $P_{kk}$. Let $\partial_k h$ denote the derivative $\partial h / \partial x_k$. The grid is assumed to have a smooth variation such that

$$|\partial_k h| \le c h, \tag{36}$$

for some constant $c$ (cf. the discussion following (15)). With the centered difference schemes in Section 4 and the assumption (36), the leading terms in $\tau_{1k}$ and $\tau_{2k}$ in the step size $h_k$ in the $k$-direction in (15) are

$$\tau_{1k} = -\frac{1}{6} h_k^2 P_{kkk} + \mathcal{O}(h_k^3), \qquad \tau_{2k} = -\frac{1}{3} h_k \partial_k h\, P_{kkk} - \frac{1}{12} h_k^2 P_{kkkk} + \mathcal{O}(h_k^3). \tag{37}$$

The derivatives of $P$ satisfy parabolic equations similar to the equation (8) for $P$. These equations are derived in the following lemma.

Lemma 1 Let $G = \partial^K P / (\partial x_k)^K$ and let Assumption 2 be valid. Then $G$ fulfills

$$G_t = \sum_{i,j=1}^{d} a_{ij} x_i x_j G_{ij} + \sum_{j=1}^{d} (\alpha_K a_{jk} + r) x_j G_j + (\beta_K a_{kk} + r \gamma_K) G, \tag{38}$$

where $\alpha_K = 2K$, $\beta_K = \sum_{j=1}^{K-1} \alpha_j$, $\gamma_K = K - 1$.

Proof. The result follows from induction, starting with (8) for $K = 0$. □

In order to estimate the error terms in each separate coordinate direction in (35) and (37), a parabolic equation is derived for $fG$, where $f$ depends on only one coordinate.

Lemma 2 Let $G = \partial^K P / (\partial x_k)^K$, $f = f(x_k)$, let $[a_{ij}]$ be symmetric, and let Assumption 2 be satisfied. Then

$$(fG)_t = \sum_{i,j=1}^{d} a_{ij} x_i x_j (fG)_{ij} + \sum_{j=1}^{d} \left((\alpha_K a_{jk} + r) x_j - 2 a_{jk} x_j x_k (f_k/f)\right) (fG)_j$$
$$+ \left(\beta_K a_{kk} + r \gamma_K - a_{kk} x_k^2 (f_{kk}/f) - (\alpha_K a_{kk} + r) x_k (f_k/f) + 2 a_{kk} (x_k f_k/f)^2\right) (fG). \tag{39}$$

Proof. Multiply (38) by $f$ and replace $f G_j$ and $f G_{ij}$ by

$$f G_j = (fG)_j - \delta_{jk} (f_k/f)(fG),$$
$$f G_{ij} = (fG)_{ij} - \delta_{ik}\delta_{jk}(f_{kk}/f)(fG) + 2\delta_{ik}\delta_{jk}(f_k/f)^2 (fG) - \delta_{ik}(f_k/f)(fG)_j - \delta_{jk}(f_k/f)(fG)_i,$$

and we have the equation (39). □

We are now able to obtain a bound on the spatial discretization error in (35) by letting $f_1 = x_k h(x_k)^2$, $f_2 = x_k^2 h(x_k) \partial_k h(x_k)$, $f_3 = x_k^2 h(x_k)^2$, and $G = P_{kkk}$ and $P_{kkkk}$ in Lemma 2.

Theorem 1 Let $[a_{ij}]$ be symmetric and let Assumption 2 be satisfied. Then the spatial error $\tau_h$ in (35) in $C_\ell = D \times [t_\ell, t_{\ell+1})$ is bounded by

$$|\tau_h| \le \sum_{k=1}^{d} \frac{1}{6} r \exp(z_{1k}\Delta_\ell) \sup_{\bar\partial C_\ell} x_k h^2(x_k) |P_{kkk}|$$
$$+ \sum_{k=1}^{d} \frac{1}{3} a_{kk} \exp(z_{2k}\Delta_\ell) \sup_{\bar\partial C_\ell} x_k^2 h(x_k) \partial_k h(x_k) |P_{kkk}|$$
$$+ \sum_{k=1}^{d} \frac{1}{12} a_{kk} \exp(z_{3k}\Delta_\ell) \sup_{\bar\partial C_\ell} x_k^2 h^2(x_k) |P_{kkkk}|. \tag{40}$$

The constants $z_{pk}$ are the upper bounds

$$\beta_K a_{kk} + \gamma_K r - (\alpha_K a_{kk} + r) x_k f_{pk}/f_p + 2 a_{kk}(x_k f_{pk}/f_p)^2 - a_{kk} x_k^2 f_{pkk}/f_p \le z_{pk}, \quad p = 1, 2, 3, \quad K = 3, 3, 4. \tag{41}$$

Proof. With $f = f_1 = x_k h^2(x_k)$ and $G = P_{kkk}$, the nonconstant part of the leading term of $\tau_{1k}$ in (35) and (37) satisfies (39). By the maximum principle, see [38], applied to (39), using the same type of argument as in Section 6, we obtain

$$|x_k h^2(x_k) P_{kkk}| \le \exp(z_{1k}(t - t_\ell)) \sup_{\bar\partial C_\ell} x_k h^2(x_k) |P_{kkk}|.$$

Then the error due to the first derivatives is inferred from (35). The error $\tau_{2k}$ caused by the second derivatives is derived in the same manner. □

The upper bounds $z_{pk}$ in the theorem depend on the smoothness of the step sizes. The factors depending on $f_p = x_k^q h^2$ with $(p, q) = (1, 1)$ and $(3, 2)$ in (41) are

$$\frac{x_k f_{pk}}{f_p} = q + 2\frac{x_k \partial_k h}{h}, \qquad \frac{x_k^2 f_{pkk}}{f_p} = q(q-1) + 4q \frac{x_k \partial_k h}{h} + 2\frac{x_k^2 \partial_k^2 h}{h} + 2\left(\frac{x_k \partial_k h}{h}\right)^2.$$

For $f_2$ we have

$$\frac{x_k f_{2k}}{f_2} = 2 + \frac{x_k \partial_k h}{h} + \frac{x_k \partial_k^2 h}{\partial_k h}, \qquad \frac{x_k^2 f_{2kk}}{f_2} = 2 + 4\left(\frac{x_k \partial_k h}{h} + \frac{x_k \partial_k^2 h}{\partial_k h}\right) + 3\frac{x_k^2 \partial_k^2 h}{h} + \frac{x_k^2 \partial_k^3 h}{\partial_k h}.$$

If the successive steps vary so that $h(x_k) = h_{0k} \exp(\gamma x_k)$ for some constant $\gamma$, then

$$\frac{\partial_k h}{h} = \gamma, \quad \frac{\partial_k^2 h}{h} = \gamma^2, \quad \frac{\partial_k^2 h}{\partial_k h} = \gamma, \quad \frac{\partial_k^3 h}{\partial_k h} = \gamma^2$$

(cf. the assumption in (36)), and with a small $\gamma$, $x_k f_{pk}/f_p$ and $x_k^2 f_{pkk}/f_p$ are small in (41).

8 Space adaptivity

The computational domain $D$ is a $d$-dimensional cube $[0, x_{\max}]^d$ covered by a Cartesian grid with the step sizes $h_{ij}$, $i = 1, \ldots, n_j$, $j = 1, \ldots, d$. The grid points, the outer boundary $x_{\max}$, and the step sizes are related by (cf. Figure 6)

$$x_{ij} = x_{i-1,j} + h_{ij}, \; i = 2, \ldots, n_j, \qquad \sum_{i=1}^{n_j} h_{ij} = x_{\max}, \; j = 1, \ldots, d.$$

Suppose that the time step $\Delta t$ is constant in $[t_\ell, t_{\ell+1})$ and that the spatial step $h_j$ is constant in the $j$:th dimension of $D$. If $w_0$ is the computational work per grid point and time step, then the total computational work in $C_\ell$ is
\[
w = w_0 \frac{\Delta_\ell}{\Delta t} \prod_{j=1}^{d} \frac{x_{\max}}{h_j}. \tag{42}
\]

The discretization errors according to (25), (35), and (37) satisfy
\[
|\tau| \le |\tau_k| + |\tau_h| \le |\tau_k| + \sum_{j=1}^{d} |\tau_{hj}| \le c_t \Delta t^2 + \sum_{j=1}^{d} c_j h_j^2 \le \epsilon_\ell, \tag{43}
\]

for all $t$ and $x$ for some positive constants $c_t$ and $c_j$ in a second order method. The step sizes $\Delta t$ and $h_j$ should be chosen such that $w$ in (42) is minimized subject to the accuracy constraint (43). Since $c_t$ and $c_j$ are positive, the minimum of $w$ is attained when the rightmost inequality in (43) is satisfied as an equality. Then $w$ is
\[
w = w_0 \frac{\sqrt{c_t}\,\Delta_\ell}{\sqrt{\epsilon_\ell - \sum_{j=1}^{d} c_j h_j^2}} \prod_{j=1}^{d} \frac{x_{\max}}{h_j},
\]
and a stationary point with respect to $h_i$ is at
\[
\frac{\partial w}{\partial h_i} = w\left(\frac{c_i h_i}{\epsilon_\ell - \sum_{j=1}^{d} c_j h_j^2} - \frac{1}{h_i}\right) = 0.
\]
Hence,
\[
c_i h_i^2 = \epsilon_\ell - \sum_{j=1}^{d} c_j h_j^2, \quad i = 1, \ldots, d,
\]
with the solution
\[
c_i h_i^2 = \epsilon_\ell/(d+1), \quad i = 1, \ldots, d. \tag{44}
\]

The optimal bound on the time step is obtained from (43) and (44):
\[
c_t \Delta t^2 = \epsilon_\ell/(d+1). \tag{45}
\]

Thus, it is optimal under these conditions to equidistribute the discretization errors over time and the dimensions. Ideally, $w_0$ is constant, but the number of iterations in the iterative solver in each time step, for example, often depends on $h_j$ and $\Delta t$ in a complicated manner, such that $w_0$ grows with decreasing $h_j$ and decreases with smaller $\Delta t$.
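The equidistribution conditions (44) and (45) translate directly into a small routine: the time step and each dimension receive an equal share $\epsilon_\ell/(d+1)$ of the error budget. A minimal sketch; the function name and the numerical values of the error constants are illustrative, not from the paper:

```python
import math

def optimal_steps(c_t, c, eps):
    """Equidistribute the tolerance eps over time and the d space
    dimensions, as in (44)-(45): c[j]*h[j]**2 = c_t*dt**2 = eps/(d+1)."""
    d = len(c)
    share = eps / (d + 1)
    h = [math.sqrt(share / cj) for cj in c]   # spatial steps, eq. (44)
    dt = math.sqrt(share / c_t)               # time step, eq. (45)
    return h, dt

# two dimensions with illustrative error constants c_t = 2, c = (1, 4)
h, dt = optimal_steps(c_t=2.0, c=[1.0, 4.0], eps=0.003)
# every error term c_j*h_j^2 and c_t*dt^2 now contributes eps/3 = 0.001
```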

As in [6], the spatial error $\tau_h$ is estimated a posteriori from the numerical solution by comparing the result of the fine grid space operator $B_h$ with a coarse grid operator $B_{2h}$ using every second grid point. Both $B_h$ and $B_{2h}$ approximate $B$ to second order. Suppose that $P_{hi}$ approximates the analytical solution $P(x)$ at $x_i$ to second order in one dimension, so that
\[
P_{hi} = P(x_i) + \psi(x_i) h_i^2 + O(h_i^3),
\]
where $\psi(x)$ is a smooth function and $h_i$ has a slow variation. Then

\[
(B_h P_h)_i = (B_h P)(x_i) + (B_h \psi)(x_i) h_i^2 + O(h_i^3) = (BP)(x_i) + (B\psi)(x_i) h_i^2 + \tau_{hi} + O(h_i^3),
\]
\[
(B_{2h} P_h)_i = (B_{2h} P)(x_i) + (B_{2h} \psi)(x_i) h_i^2 + O(h_i^3) = (BP)(x_i) + (B\psi)(x_i) h_i^2 + \tau_{2h,i} + O(h_i^3).
\]

Subtract $B_h P_h$ from $B_{2h} P_h$ at every second grid point and use the second order accuracy in the discretization error to obtain $\tau_{2h,i} = 4\tau_{hi} + O(h_i^3)$ and
\[
\tau_{hi} = \frac{1}{3}\left((B_{2h} P_h)_i - (B_h P_h)_i\right) + O(h_i^3). \tag{46}
\]

The leading term in the spatial error is given by the first term in the right hand side of (46).
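The comparison behind (46) can be sketched in one dimension for a uniform grid, taking $B$ to be the second-derivative operator; the function and variable names here are mine, not the paper's. The coarse operator's error is four times the fine one's, so a third of their difference estimates $\tau_h$:

```python
import numpy as np

def second_diff(P, h, stride=1):
    """Second-order centered approximation of P'' using points spaced
    stride*h apart: B_h for stride=1, B_2h (every second point) for stride=2."""
    s = stride
    return (P[2*s:] - 2.0*P[s:-s] + P[:-2*s]) / (s*h)**2

# smooth test function on a uniform grid
h = 0.05
x = np.arange(0.0, 2.0 + h/2, h)
P = np.sin(x)

Bh  = second_diff(P, h, stride=1)   # fine operator
B2h = second_diff(P, h, stride=2)   # coarse operator

# align the stencils: B2h[i] lives at x[i+2], Bh[i] at x[i+1];
# eq. (46): tau_h ~ (B_2h P - B_h P)/3 at the common interior points
tau_h = (B2h - Bh[1:-1]) / 3.0
exact_err = Bh[1:-1] - (-np.sin(x[2:-2]))   # true error of the fine operator
```

For this test function the estimate matches the true error of $B_h$ to $O(h^4)$, which is the content of the Richardson-type argument above.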

The sequence $h_{ij}$ in each dimension $j$ is determined according to Theorem 1 and (40). Assuming that $\gamma \ll 1$ in (36), the second term in the estimate in (40) is negligible and $h_{ij}$ is chosen such that
\[
\max_i h_{ij}^2 \left(\frac{1}{6}\, r \exp(z_{1j}\Delta_\ell)\, x_{ij} |P_{jjj}| + \frac{1}{12}\, a_{jj} \exp(z_{3j}\Delta_\ell)\, x_{ij}^2 |P_{jjjj}|\right) \le \epsilon_\ell/(d+1) \tag{47}
\]

in each coordinate direction $j$, where the maximum over $i$ is taken over all the other dimensions. By changing the step size in each dimension separately, the Cartesian grid structure is maintained. The derivative $P_{jjj}$ is estimated by computing $\tau_{hi}$ in (46) with $B_h$ being the centered difference approximation of the first derivative of $P$; then $\tau_{hi} = h_i^2 P_{jjj}/6$. With $B_h$ approximating the second derivative of $P$ to second order, we have $\tau_{hi} = h_i^2 P_{jjjj}/12$.


The spatial error $\tau_h$ at $t_\ell$ is estimated as in (47) with the solution $P^\ell$ at $t_\ell$ and the step size sequences $h_{ij}$, $i = 1, \ldots, n_j$, $j = 1, \ldots, d$. The new sequence $\tilde h_{ij}$ for $t > t_\ell$ is chosen locally at $x_{ij}$ such that
\[
\tilde h_{ij} = h_{ij} \sqrt{\frac{\epsilon_\ell}{(d+1)(\epsilon_\ell \epsilon_h + |\tau_{hj}(x_{ij})|)}}. \tag{48}
\]
Then the new error $\tilde\tau_h$ is expected to be
\[
|\tilde\tau_{hj}(x_{ij})| = \tilde h_{ij}^2\, |\tau_{hj}(x_{ij})| / h_{ij}^2 = \epsilon_\ell/(d+1). \tag{49}
\]

The small parameter $\epsilon_h$ in (48) ensures that $h_{ij}$ is not too large when $\tau_h$ is very small. Since $\tau_h$ occasionally is non-smooth, we apply a filter on these approximations of the local discretization errors to avoid an oscillatory sequence $h_{ij}$.
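The update (48) and its predicted effect (49) can be sketched as follows for $d = 1$; the helper name and the sample error values are illustrative only:

```python
import numpy as np

def redistribute(h, tau_h, eps, eps_h=1e-12):
    """New local step sizes per eq. (48): shrink h where the estimated
    spatial error tau_h exceeds its share eps/(d+1) (here d = 1).
    eps_h guards against a vanishing error estimate."""
    d = 1
    return h * np.sqrt(eps / ((d + 1) * (eps * eps_h + np.abs(tau_h))))

h = np.full(5, 0.1)
tau = np.array([4e-3, 1e-3, 5e-4, 1e-3, 4e-3])   # hypothetical local errors
h_new = redistribute(h, tau, eps=1e-3)

# predicted new error per (49): h_new^2 * tau / h^2 ~ eps/(d+1) = 5e-4
pred = h_new**2 * tau / h**2
```

Since the scheme is second order, halving the error share at a point shrinks the step there by a factor $\sqrt{2}$, which is what the square root in (48) encodes.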

For multi-dimensional problems, the storage requirements may be the limiting factor, and as an option the number of grid points can be restricted to a predefined level. The grid will be optimized for a small error within the limits of the available memory. By choosing a maximum number of grid points $N_{\max}$ in each direction $j$, the method will still distribute the points so that $|\tau_{hj}(x_{ij})|$ is minimized. Suppose that the numerically computed discrete distribution of the grid points is $h(x)$, determined by $\tau_h$, and that this distribution leads to $N$ grid points being used. The new distribution will then place the grid points according to the scaled function
\[
h_{\mathrm{new}} = \frac{N}{N_{\max}}\, h(x). \tag{50}
\]

In several dimensions this simple technique can reduce the number of grid points in each interval so that larger problems can be solved, but it can also be used to ensure that not too many points are used in the first interval. Experiments have shown that limiting the number of grid points, especially in the first interval, does not destroy the end-time accuracy in (24).

9 Time adaptivity

The discretization error in space is estimated by comparing a fine grid operator with a coarse grid operator. For the adaptation of the time steps, we compare an explicit predictor and an implicit corrector (BDF-2), both of second order accuracy, to find an approximation of the local error in BDF-2 in the same way as in [17]. The predictor is the explicit method
\[
\alpha_0^n \bar P^n = \Delta t_n L(P^{n-1}) - \alpha_1^n P^{n-1} - \alpha_2^n P^{n-2}, \qquad
\alpha_0^n = 1/(1+\theta_n), \quad \alpha_1^n = \theta_n - 1, \quad \alpha_2^n = -\theta_n^2/(1+\theta_n), \tag{51}
\]

with the local discretization error
\[
P(t_n) - \bar P^n = C_p(\theta_n)(\Delta t_n)^3 P_{ttt} + O(\Delta t^4), \qquad C_p(\theta_n) = (1 + 1/\theta_n)/6, \tag{52}
\]


and $\theta_n$ defined by $\theta_n = \Delta t_n/\Delta t_{n-1}$ as in (16). The solution at $t_n$ is determined by the implicit method BDF-2 defined in (16) with the predicted value $\bar P^n$ from (51) as initial guess in an iterative solver. The local error of BDF-2 is
\[
P(t_n) - P^n = C_i(\theta_n)(\Delta t_n)^3 P_{ttt} + O(\Delta t^4), \qquad C_i(\theta_n) = -(1+\theta_n)^2/(6\theta_n(1+2\theta_n)). \tag{53}
\]

The integration is initialized at $t = 0$ with the Euler backward method, with $\alpha_0^1 = 1$, $\alpha_1^1 = -1$, and $\alpha_2^1 = 0$ in (16). The leading term $C_i(\theta_n)(\Delta t_n)^3 P_{ttt}$ in the local error in time in (53) is estimated by computing the difference between the numerical solution $P^n$ in (16) and $\bar P^n$ in (51):
\[
\tau_k(t_n) = -\alpha_0^n C_i (\Delta t_n)^2 P_{ttt} \approx \alpha_0^n C_i\, (P^n - \bar P^n)/(\Delta t_n (C_i - C_p)). \tag{54}
\]

The maximum $|\tau_k|$ of the estimate $\tau_k(t_n)$ in (54) over all grid points in $D$ is compared to the accuracy requirement $\epsilon_\ell/(d+1)$ by computing
\[
\theta_n = \sqrt{\frac{\epsilon_\ell}{(d+1)(\epsilon_\ell \epsilon_k + |\tau_k|)}}, \tag{55}
\]
where $\epsilon_k$ is a small parameter to avoid large time steps when $\tau_k$ is small (cf. (48)). If $\theta_n$ is too small, then the time step is rejected and $P^n$ is recomputed with a smaller $\Delta t$. Otherwise, $P^n$ is accepted and a new $\Delta t_{n+1}$ is determined. If $0.8 \le \theta_n \le 1.15$, then we accept the time step and let $\Delta t_{n+1} = \Delta t_n$. If $\theta_n < 0.8$, then the time step to $t_n$ is rejected and $P^n$ is recomputed with $\Delta t_n := 0.9\,\theta_n \Delta t_n$. If $\theta_n > 1.15$, then the step is accepted and the next time step is increased to $\Delta t_{n+1} = \min(0.9\,\theta_n \Delta t_n, 2\Delta t_n)$, with the upper bound $2\Delta t_n$ introduced to avoid instabilities.
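The accept/reject rules above can be collected in a single controller function. This is a sketch; the function name and return convention are mine, not the paper's:

```python
def control_step(dt, theta):
    """Time step controller following the rules of Section 9, with theta
    computed from (55). Returns (accepted, dt_next); a rejected step is
    redone with dt_next, an accepted step uses dt_next for the next step."""
    if theta < 0.8:                               # error too large: reject
        return False, 0.9 * theta * dt
    if theta <= 1.15:                             # near target: keep dt
        return True, dt
    return True, min(0.9 * theta * dt, 2 * dt)    # grow, capped at 2*dt

# examples: reject and shrink; accept and keep; accept and grow (capped)
r1 = control_step(0.01, 0.5)
r2 = control_step(0.01, 1.0)
r3 = control_step(0.01, 3.0)
```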

Since BDF-2 is an implicit method in time, we must solve large, linear, sparse systems of equations in each time step. These systems are solved with the GMRES method [39]. The GMRES iterations are terminated when the relative residual norm is sufficiently small. To be efficient and memory lean, the iterative method is restarted after six iterations. The system of equations is preconditioned by the incomplete LU factorization [40] with zero fill-in. The same factorization, computed in the first time step, is applied in all time steps after $t_\ell$ in each interval.

10 Numerical results

The transformed Black-Scholes equation (8) is solved in one, two, and three space dimensions with our adaptive method. Several different tests have been performed examining the method and its performance. Our method is compared to the standard method with a uniform grid in space and adaptivity in time, and we also study how the memory can be used efficiently by restricting the number of grid points.


Since the precision of the estimates of the derivatives was investigated in [6], we mainly focus on the estimates of the linear functional (24) in this paper. In one space dimension the true numerical error can be calculated, so that the functional (24) can be determined. In higher dimensions this is not possible. However, in all tests the upper bound (22) of the leftmost integral in (20) is computed. This estimate will be denoted by $\Upsilon^*$, and the adaptive process controls this value.

As a standard setup we have used the following parameters: the local mean rate of return $r$ has been set to $0.05$, and the volatility matrix $\sigma$ has the value $0.3$ on the diagonal and $0.05$ on the sub- and super-diagonals. All other entries are zero. In the examples that follow, the volatility matrix is neither level nor time dependent, but it could be chosen to be so without causing any difficulty in the adaptive method. In all computations we solve the transformed PDE (8) in forward time from $0$ to $T = 0.1$. The computational domain $D$ is a $d$-dimensional cube truncated at $x_{\max} = 4dK$ in every dimension, using a generalization of the common 'rule of thumb'. The reason for multiplying by $d$ is to have the far-field boundary at four times the location of the discontinuity in the derivative of the initial function $\Phi(s)$ in each dimension. The location of the outer boundary is not critical for the efficiency of the method; few grid points are placed there by the adaptive scheme.

10.1 Estimating the functional

To evaluate the method, the functional (24) is estimated in numerical experiments. In one space dimension, the exact solution for the European call option is found in [25, 3] and is used to calculate the true error $E(x, T)$. The product $g(x)E(x, T)$ is integrated numerically with the second order trapezoidal method. The integral is denoted by $\int_{\bar D} g(x)E(x, T)\,dx$. The estimate $\Upsilon^*$ defined by
\[
\Upsilon^* = \sum_{\ell=0}^{L-1} \|u\|_\ell \; \sup^{*}_{x \in D,\; t_\ell \le t \le t_{\ell+1}} |\tau(t, x)| \tag{56}
\]

has been used in one and multiple space dimensions. This is the most interesting quantity since it is used to generate the grids in space and to select the time steps, see Section 5. The supremum of $\tau$ in (56) is marked by a $*$ since it is not truly the supremum but has been estimated as follows.

The adjoint solution (19) is computed on a coarse equidistant grid with only a few time steps. Then $\|u\|_\ell$ in (23) is computed numerically. Theoretically, the supremum of $\tau_h$ should be measured on the parabolic cylinder $C_\ell$, see Theorem 1, but the errors are small on $\partial D$, and we measure $\tau$ only on $D$ after a few time steps from the start $t_\ell$ of each interval. The reason is that, when interpolating the solution from one grid to the next, additional errors are introduced, making the estimates of $\tau_h$ at $t_\ell$ unreliable. The initial condition is not sufficiently smooth for the adaptive procedure to work properly. Hence, in the first interval, we measure $\tau$ towards the end of the interval instead, since the approximations of the derivatives $P_{kkk}$ and $P_{kkkk}$ blow up close to $t = 0$, and the algorithm would then use an excessive number of grid points and very small time steps in the vicinity of $t = 0$. In Section 10.3 we show that the method can actually produce good results even with a restricted number of points in the first interval.

The a priori spatial error estimate in Theorem 1 contains the two factors $\exp(z_{1k}\Delta_\ell)$ and $\exp(z_{3k}\Delta_\ell)$. These coefficients in front of the third and fourth derivatives of $P$ are typically of size 1 to 3, indicating that the local discretization errors can grow by that much in each interval. However, all our results show that these are overestimates of the growth. The discretization errors do not increase with time in the intervals; on the contrary, they decay. This implies that $\Upsilon^*$ is overestimated in each interval.

The $d$-dimensional function $g(x)$ has been chosen as the product of Gaussian functions
\[
g(x) = c \prod_{i=1}^{d} \exp(-5(x_i - 1)^2), \tag{57}
\]
scaled by the constant $c$ to satisfy (21).

10.2 A one-dimensional numerical example

In the first one-dimensional example we have studied two different levels of $\epsilon$. The estimate $\Upsilon^*$ is compared with the numerically integrated (24) and the desired tolerance level $\epsilon$ for $L = 8$. The results are presented in Table 1.

ε       Υ*         |∫_D̄ gE dx|   # grid points
10^-3   0.001088   0.000083      [81 61 49 45 41 37 33 53]
10^-4   0.000152   0.000009      [233 173 137 121 109 101 93 157]

Table 1: The estimate Υ*, the error functional in (24), and the number of grid points used in each interval for two different tolerances.

We see that the algorithm produces a solution with a bound on the error close to the desired tolerance. As expected, the estimate $\Upsilon^*$ is larger than $\int_{\bar D} gE\,dx$. A sharper estimate is obtained by increasing the number of intervals, implying more frequent changes of the grid. We seek a balance between accurate estimates with many regridding operations (as in moving grid methods [18]) and coarser estimates with fewer changes of the grid (as we prefer here).

10.3 Restricting the number of grid points

An upper bound on the number of grid points is introduced in this one-dimensional example. Either this bound or the error tolerance determines the number of points. The distribution of points still depends on the spatial error estimate, see Section 8 and (50).

The limit has been set to unlimited, 65, and 57 grid points in Table 2, with $\epsilon = 0.001$. By restricting the number of grid points we can still achieve quite accurate results.

Nmax   Υ*         # grid points
∞      0.001088   [81 61 49 45 41 37 33 53]
65     0.001161   [65 61 49 45 41 37 33 53]
57     0.001246   [57 53 49 45 41 37 33 53]

Table 2: The bound on the number of points, the upper bound on the error functional, and the number of grid points in the eight intervals.

The method sometimes has to add a few extra points (at most 4), since the number of points $n_j$ must satisfy $n_j \bmod 4 = 1$ to be suitable for the error estimates.

10.4 Comparison with uniform grids in one dimension

A solution on an equidistant grid in space is compared to a solution with our adaptive method in Table 3. The maximal number of grid points used by the adaptive algorithm with tolerance 0.001 is distributed equidistantly.

             Υ*         ∫_D̄ gE dx   # grid points
Adap. grid   0.001088   -0.000083   [81 61 49 45 41 37 33 53]
Equi. grid   0.002229   -0.000178   [81]
Equi. grid   0.001086   -0.000075   [121]

Table 3: Estimates of the functionals compared for the adaptive grid and two uniform grids.

The results show that by redistributing the grid points adaptively, the error functional can be reduced significantly with fewer points. Counting the total number of points over the intervals, more than twice as many points are needed in one dimension to reduce the error with an equidistant grid to the level of the adapted grid (400 vs. 968). The price is more administration for the adaptivity, but this overhead cost drops quickly with an increasing number of dimensions.

10.5 Two-dimensional numerical example

In the first two-dimensional example, two tolerance levels $\epsilon = 0.01$ and $\epsilon = 0.001$ are tested. In this case, an exact solution is not available. Therefore, only the estimate $\Upsilon^*$ is presented, together with the number of grid points used in each dimension, in Table 4.

ε       Υ*         # grid points
10^-2   0.007710   [61² 45² 33² 29² 29² 25² 25² 29²]
10^-3   0.001417   [193² 113² 81² 65² 57² 53² 45² 73²]

Table 4: The error tolerances, the estimate of the functional (24), and the number of grid points in two dimensions.

As in the one-dimensional case in Table 1, we find that our method produces a result that almost fulfills the desired accuracy.

10.6 A second two-dimensional example

The one-dimensional numerical example from Section 10.4 is repeated here in two dimensions. The result on an adapted grid with $\epsilon = 0.01$ is compared to the results on two equidistant grids in space in Table 5. One uniform grid uses the same number of points in space as the largest number in an interval of the adapted grid. The other uniform grid is chosen so that $\Upsilon^*$ is approximately the same.

             Υ*         # grid points
Adap. grid   0.007710   [61² 45² 33² 29² 29² 25² 25² 29²]
Equi. grid   0.014202   [61²]
Equi. grid   0.007874   [81²]

Table 5: Estimates of the functional for two uniform grids and the adaptive grid.

From the table we observe that the adapted grid yields a lower bound on the error even though the equidistant grid uses 61² grid points in all time steps. The equidistant grid needs 81² grid points to achieve the same level of accuracy as our adaptive method. However, as remarked in Section 10.4, the adaptive method introduces a certain overhead, and the computation time is sometimes longer.

The variation of $h$ along a coordinate is plotted in Figure 7 in three consecutive intervals $[t_\ell, t_{\ell+1})$. The maximum $h$ permitted in the algorithm is 0.5. The initial singularity in the solution influences the choice of step size in the first interval.


Figure 7: The space steps and the grid points in three different time intervals in one coordinate direction.

10.7 A three-dimensional example

                  ε      Υ*         # grid points
Adap. grid        0.1    0.055996   [41³ 29³ 29³ 25³ 29³ 25³ 29³ 29²×25]
Equi. grid        -      0.082717   [41³]
Adap. grid-Nmax   0.1    0.105518   [29²×25 25³ 25³ 25³ 25³ 25³ 25³ 25³]
Adap. grid-Nmax   0.1    0.084784   [33³ 29³ 29³ 29³ 29³ 29³ 29³ 29³]
Adap. grid        0.05   0.039529   [53³ 37³ 29³ 29³ 29³ 29³ 29³ 29³]

Table 6: The estimate Υ* of the functional (24) with two adaptive grids, an equidistant grid, and two adaptive grids with a maximal number of grid points.

In this three-dimensional example we combine two of the experiments from the previous examples. First we solve with our adaptive method and the error tolerance $\epsilon = 0.1$. Then we solve with an equidistant grid with the same number of grid points as the maximal number used by the adaptive method. In the next two experiments, the number of grid points is restricted as in (50) ($N_{\max} = 29$ and $N_{\max} = 33$). Finally, the solution is computed with a halved $\epsilon$. The results are displayed in Table 6. The conclusion here, too, is that the adaptive distribution achieves a lower error bound for the same number of points compared to a uniform distribution, or the same error with fewer points. As an example, the CPU time for the second case is about three times longer than for the adaptive third case.


10.8 Time-stepping and iterations


Figure 8: The time steps as a function of time. The vertical lines are the boundaries between the eight time intervals.

The time steps are selected at every $t_n$ following Section 9, such that the estimated $\tau_k$ satisfies $\max_D |\tau_k| \le \epsilon_\ell/(d+1)$.

The time history of the time steps in the one-dimensional example with $\epsilon = 0.0001$ is displayed in Figure 8. The vertical lines indicate the interval boundaries $t_\ell$ where a new grid is determined. At $t_\ell$ the estimate of the time discretization error is not always reliable, and three steps with a constant $\Delta t$ are taken there. The time step increases rapidly after $t = 0$, where higher derivatives of $P$ are large due to the discontinuous initial data in (8).

The two-dimensional problem is solved in four intervals with $\epsilon = 0.005$. The variation of $|\tau_k|$ in the intervals is smooth in Figure 9. The error tolerance $\epsilon_\ell/(d+1)$ is not satisfied in the first steps after $t = 0$, where the integration is advanced with a minimal time step $\Delta t_{\min}$. The number of GMRES iterations in each time step is shown in Figure 10. It is about 10 in the whole interval, with a small increase at the end when $\Delta t$ is larger.



Figure 9: The measured local discretization error in time $|\tau_k|$ in four intervals.


Figure 10: The number of GMRES iterations in each time step.


11 Conclusions

An analysis of the computational work and numerical experiments in one dimension confirm that a finite difference method is more efficient than Monte Carlo and quasi-Monte Carlo methods for the accurate solution of the Black-Scholes equation for European multi-asset call options. Therefore, an adaptive method with full error control has been developed for the solution of this equation. The multi-dimensional computational grid and the time step are chosen such that a tolerance on a functional of the final global error is satisfied by the solution. The temporal discretization error is estimated a posteriori in every step, but the spatial grid is constant in intervals of the time domain. In each interval, the error due to the space discretization is first determined a posteriori based on the solution, and then its growth is estimated a priori.

The grid is adjusted in each dimension separately, so that its Cartesian structure is maintained. The user has to supply the error tolerance and a maximal number of grid points in each dimension. The algorithm automatically selects the grid and the time steps and provides an upper bound on the numerical error at the final time. The method has been tested successfully on problems with up to three dimensions, corresponding to three underlying assets. Comparisons between adapted and equidistant grids with time step control show that lower bounds on the solution error are obtained with the same number of grid points when adapting, or that the same bounds are satisfied with fewer grid points. Since the time step increases rapidly from a low level, important gains in efficiency are achieved with a variable, adapted time step compared to a fixed, small time step.

References

[1] F. Black, M. Scholes. The pricing of options and corporate liabilities, Journal of Political Economy, 81:637–659, 1973.

[2] R. C. Merton. Theory of rational option pricing, Bell Journal of Economics and Management Science, 4:141–183, 1973.

[3] M. Musiela, M. Rutkowski. Martingale Methods in Financial Modelling, Springer-Verlag, Berlin, 1997.

[4] R. E. Caflisch. Monte Carlo and quasi-Monte Carlo methods, Acta Numerica, 7:1–49, 1998.

[5] P. Glasserman. Monte Carlo Methods in Financial Engineering, Springer-Verlag, New York, NY, 2004.

[6] J. Persson, L. von Sydow. Pricing European multi-asset options using a space-time adaptive FD-method. Technical report 2003-059, Department of Information Technology, Uppsala University, Uppsala, Sweden, 2003. Accepted for publication in Computing and Visualization in Science. Available at http://www.it.uu.se/research/reports/


[7] D. Tavella, C. Randall. Pricing Financial Instruments - The Finite Difference Method, John Wiley & Sons, Chichester, 2000.

[8] P. Wilmott. Option Pricing: Mathematical Models and Computation, Oxford Financial Press, Oxford, 1993.

[9] M. Gilli, E. Kellezi, G. Pauletto. Solving finite difference schemes arising in trivariate option pricing, Journal of Economic Dynamics & Control, 26:1499–1515, 2002.

[10] B. J. McCartin, S. M. Labadie. Accurate and efficient pricing of vanilla stock options via the Crandall-Douglas scheme, Applied Mathematics and Computation, 143:39–60, 2003.

[11] E. S. Schwartz. The valuation of warrants: implementing a new approach, Journal of Financial Economics, 4:79–93, 1977.

[12] B. Engelmann, P. Schwender. The pricing of multi-asset options using a Fourier grid method, Journal of Computational Finance, 1:53–61, 1998.

[13] O. Pironneau, Y. Achdou. Computational Methods for Option Pricing, Frontiers in Applied Mathematics 30, SIAM, Philadelphia, PA, 2005.

[14] O. Pironneau, F. Hecht. Mesh adaption for the Black & Scholes equations, East-West Journal of Numerical Mathematics, 8:25–35, 2000.

[15] H.-J. Bungartz, M. Griebel. Sparse grids, Acta Numerica, 13:147–269, 2004.

[16] C. Reisinger. Numerische Methoden für hochdimensionale parabolische Gleichungen am Beispiel von Optionspreisaufgaben, PhD thesis, Universität Heidelberg, Heidelberg, Germany, 2004.

[17] P. Lötstedt, S. Söderberg, A. Ramage, L. Hemmingsson-Frandén. Implicit solution of hyperbolic equations with space-time adaptivity, BIT, 42:128–153, 2002.

[18] A. Vande Wouwer, P. Saucez, W. E. Schiesser. Adaptive Method of Lines, Chapman & Hall, Boca Raton, FL, 2001.

[19] S. Figlewski, B. Gao. The adaptive mesh model: a new approach to efficient option pricing, Journal of Financial Economics, 53:313–351, 1999.

[20] P. P. Boyle, M. Broadie, P. Glasserman. Monte Carlo methods for security pricing, Journal of Economic Dynamics & Control, 21:1267–1321, 1997.

[21] C. Joy, P. P. Boyle, K. S. Tan. Quasi-Monte Carlo methods in numerical finance, Management Science, 42:926–938, 1996.

[22] P. Bratley, B. L. Fox. Algorithm 659: Implementing Sobol's quasirandom sequence generator, ACM Transactions on Mathematical Software, 14:88–100, 1988.


[23] I. M. Sobol. On the distribution of points in a cube and the approximate evaluation of integrals, USSR Computational Mathematics and Mathematical Physics, 7:86–112, 1967.

[24] J. Burkardt. http://www.scs.fsu.edu/~burkardt/msrc/msrc.html, School of Computational Science, Florida State University, Tallahassee, FL, 2006.

[25] T. Björk. Arbitrage Theory in Continuous Time, Oxford University Press, New York, NY, 1998.

[26] J. C. Cox, S. A. Ross. The valuation of options for alternative stochastic processes, Journal of Financial Economics, 3:145–166, 1976.

[27] R. E. Caflisch, B. Moskowitz. Modified Monte Carlo methods using quasi-random sequences, in Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, H. Niederreiter and P. J. S. Shiue, eds., Lecture Notes in Statistics, 106:1–16, Springer, New York, 1995.

[28] W. J. Morokoff. Generating quasi-random paths for stochastic processes, SIAM Review, 40:765–788, 1998.

[29] W. J. Morokoff, R. E. Caflisch. Quasi-Monte Carlo simulation of random walks in finance, in Monte Carlo and Quasi-Monte Carlo Methods 1996, H. Niederreiter, P. Hellekalek, G. Larcher, and P. Zinterhof, eds., Lecture Notes in Statistics, 127:340–352, Springer, New York, 1998.

[30] N. Clarke, K. Parrott. Multigrid for American option pricing with stochastic volatility, Applied Mathematical Finance, 6:177–195, 1999.

[31] S. Ikonen, J. Toivanen. Operator splitting methods for American option pricing, Applied Mathematics Letters, 17:809–814, 2004.

[32] B. F. Nielsen, O. Skavhaug, A. Tveito. Penalty and front-fixing methods for the numerical solution of American option problems, Journal of Computational Finance, 5(4):69–97, 2002.

[33] C. W. Oosterlee. On multigrid for linear complementarity problems with application to American-style options, Electronic Transactions on Numerical Analysis, 15:165–185, 2003.

[34] T. A. Manteuffel, A. B. White, Jr. The numerical solution of second-order boundary value problems on nonuniform meshes, Mathematics of Computation, 47:511–535, 1986.

[35] E. Hairer, S. P. Nørsett, G. Wanner. Solving Ordinary Differential Equations I, Nonstiff Problems, 2nd ed., Springer-Verlag, Berlin, 1993.

[36] R. Becker, R. Rannacher. An optimal control approach to a posteriori error estimation in finite element methods, Acta Numerica, 10:1–102, 2001.


[37] K. Eriksson, D. Estep, P. Hansbo, C. Johnson. Introduction to adaptive methods for differential equations, Acta Numerica, 4:105–158, 1995.

[38] A. Friedman. Partial Differential Equations of Parabolic Type, Prentice-Hall, Englewood Cliffs, NJ, 1964.

[39] Y. Saad, M. H. Schultz. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM Journal on Scientific and Statistical Computing, 7:856–869, 1986.

[40] A. Greenbaum. Iterative Methods for Solving Linear Systems, SIAM, Philadelphia, PA, 1997.
