
  • Variance reduction techniques: Around B&S

    Christophe Chorro ([email protected])

    University Paris 1

    July 10 2008

Christophe Chorro ([email protected]), University Paris 1. Introduction to Monte Carlo Methods (Lecture 3 and 4), July 10 2008, 1 / 45.

  • Study Plan

    Variance reduction techniques:
    Introduction
    Reminder on the B&S model
    Control Variate
    Antithetic Variables
    Importance sampling
    Conditioning

    Biblio: • Stochastic Calculus and Black Scholes:

    http://christophe.chorro.fr/docs/CSangl.pdf

    • A.G.Z. Kemna and A.C.F. Vorst, A pricing method for options based on average asset values, J. Banking Finan., March 1990, 113–129.


  • Plan

    1 Variance reduction techniques:
      Introduction
      Reminder on the B&S model
      Control Variate
      Antithetic Variables
      Importance sampling
      Conditioning


  • Variance reduction techniques: Introduction

    Let (Xn)n∈N be i.i.d random variables with values in R such that E[|X1|2] < ∞.

    For large n, with a confidence of 95%,

    E[X1] ∈ [ Sn/n − 1.96 σ/√n , Sn/n + 1.96 σ/√n ]  with σ² = Var(X1) and Sn = X1 + ... + Xn.

    The magnitude of the error is given by 1.96 σ/√n: the size of σ is fundamental for the speed of convergence.

    Idea: Reduce σ

    Find Y such that E[X1] = E[Y ] and Var(X1) > Var(Y ).
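    The estimator and its 95% confidence interval are straightforward to code. A minimal Python sketch (the function name and the seed are ours, not from the lecture):

```python
import math
import random

def mc_estimate(sample, n, seed=0):
    """Crude Monte Carlo: returns the empirical mean and its 95% confidence interval."""
    rng = random.Random(seed)
    xs = [sample(rng) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)   # empirical sigma^2
    half = 1.96 * math.sqrt(var / n)                   # 1.96 sigma / sqrt(n)
    return mean, (mean - half, mean + half)

# Example: E[G^2] = 1 for G following a N(0, 1)
mean, (lo, hi) = mc_estimate(lambda rng: rng.gauss(0, 1) ** 2, 10_000)
```

    The half-width 1.96 σ/√n shrinks only like 1/√n, which is why reducing σ is the practical lever.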


  • Plan

    1 Variance reduction techniques:
      Introduction
      Reminder on the B&S model
      Control Variate
      Antithetic Variables
      Importance sampling
      Conditioning


  • Brownian motion

    We consider a probability space (Ω,A, P).

    Definition: A standard Brownian motion (B.M.) is a stochastic process (Bt)t∈[0,T] fulfilling:

    a) B0 = 0 P-a.s.

    b) B is continuous, i.e. t → Bt(ω) is continuous for P-almost all ω.

    c) B has independent increments: for t > s, Bt − Bs is independent of F^B_s = σ(Bu, u ≤ s).

    d) The increments of B are stationary and Gaussian: for t ≥ s, Bt − Bs follows a N(0, t − s).


  • Brownian motion

    We consider a subdivision 0 = t0 < ... < tn = T of [0, T ]. We want to simulate

    (Bt0 , ...Btn).

    Idea: B_{tk} = B_{t(k−1)} + ( B_{tk} − B_{t(k−1)} ), where the increment B_{tk} − B_{t(k−1)} follows a N(0, tk − t(k−1)) and is independent of (B_{t(k−1)}, ..., B_{t0}).

    Proposition: If (G1, ..., Gn) are i.i.d. N(0, 1), we define

    X0 = 0,  Xi = Σ_{j=1}^{i} √(tj − t(j−1)) Gj  for i > 0.

    Then (X0, ..., Xn) has the same distribution as (B_{t0}, ..., B_{tn}).
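    The proposition translates directly into a simulation routine. A minimal Python sketch (names are ours):

```python
import math
import random

def brownian_path(times, rng):
    """Simulate (B_{t_0}, ..., B_{t_n}) along the given subdivision via
    X_i = X_{i-1} + sqrt(t_i - t_{i-1}) * G_i with G_i i.i.d. N(0, 1)."""
    b = [0.0]                                # B_{t_0} = 0
    for j in range(1, len(times)):
        b.append(b[-1] + math.sqrt(times[j] - times[j - 1]) * rng.gauss(0, 1))
    return b

rng = random.Random(1)
times = [k / 1000 for k in range(1001)]      # regular subdivision of step 0.001
path = brownian_path(times, rng)
```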


  • Brownian motion

    Figure: 4 paths of the Brownian motion on [0, 1] generated using the preceding method with the regular subdivision of step 0.001.


  • Black Scholes model

    We consider the time interval [0, T] and r the risk-free rate (supposed to be constant) during this period.

    Non-risky asset: Its dynamic is given by

    S^0_t = e^{rt} (in particular S^0_0 = 1).

    Risky asset: Under the historical probability P its dynamic is given by the following SDE:

    dSt = µStdt + σStdBt (1)

    with initial condition S0 = x0 > 0 and where B is a standard BM under P.

    Itô formula ⇒ St = x0 e^{(µ − σ²/2)t + σBt}.
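    Since the SDE has the explicit solution above, a path of S can be simulated exactly on a grid, reusing the Brownian-increment recursion. A minimal Python sketch (names are ours):

```python
import math
import random

def gbm_path(x0, mu, sigma, times, rng):
    """Exact simulation of S_t = x0 exp((mu - sigma^2/2) t + sigma B_t) on a grid."""
    b = 0.0                                  # running Brownian motion value
    s = [x0]
    for j in range(1, len(times)):
        b += math.sqrt(times[j] - times[j - 1]) * rng.gauss(0, 1)
        s.append(x0 * math.exp((mu - 0.5 * sigma ** 2) * times[j] + sigma * b))
    return s

rng = random.Random(2)
path = gbm_path(1.0, 1.0, 0.5, [k / 1000 for k in range(1001)], rng)
```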


  • Black Scholes model

    Figure: Simulation of a path of the risky asset in the B&S model for different parameters (panels: σ = 1, µ = 1; σ = 1, µ = −1; σ = 0.5, µ = 1; σ = 0.5, µ = −1).


  • Black Scholes model

    What is, in this model, the price of a contingent claim with payoff ΦT at T?

    Proposition: In the B&S model there exists a unique probability Q ∼ P such that the price at t = 0 of a contingent claim with payoff ΦT at T is given by

    price = e−rT EQ[ΦT ].

    Moreover the dynamic of the risky asset under Q is given by

    dSt = r St dt + σ St dWt (µ replaced by r)  (2)

    where W is a standard BM under Q.


  • Black Scholes model

    Examples: The Black Scholes Formulas

    For Call options (ΦT = (ST − K )+) one has

    price = e^{−rT} EQ[(ST − K)+] = S0 N(d1) − K e^{−rT} N(d2)  (3)

    where

    d1 = ( log(S0/K) + (r + σ²/2) T ) / (σ√T)  and  d2 = ( log(S0/K) + (r − σ²/2) T ) / (σ√T)  (4)

    and where N is the distribution function of a N(0, 1).

    For Put options (ΦT = (K − ST )+) one has

    price = e^{−rT} EQ[(K − ST)+] = −S0 N(−d1) + K e^{−rT} N(−d2).  (5)
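    Formulas (3)-(5) need only the standard library, using N(x) = (1 + erf(x/√2))/2. A sketch (function names are ours):

```python
import math

def norm_cdf(x):
    """Distribution function N of a N(0, 1)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s0, k, r, sigma, t):
    """Call price (3): S0 N(d1) - K e^{-rT} N(d2)."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s0 * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def bs_put(s0, k, r, sigma, t):
    """Put price (5): K e^{-rT} N(-d2) - S0 N(-d1)."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return k * math.exp(-r * t) * norm_cdf(-d2) - s0 * norm_cdf(-d1)
```

    For instance, with S0 = K = 1, T = 1 and r = σ²/2 = 0.125, the quantity e^{rT} bs_call(...) recovers the lecture's benchmark E[(e^{σG} − 1)+] ≈ 0.28353 for σ = 0.5.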


  • Black Scholes model

    These formulas are fundamental because:

    they are easy to compute in practice;

    they are a benchmark to test numerical methods (for example variance reduction techniques).

    In the sequel, all the variance reduction methods will be tested by computing, with Monte Carlo simulations,

    E[(K − e^{σG})+] or E[(e^{σG} − K)+]

    where G ↪ N(0, 1), i.e. computing (up to some coefficients) the price of a call or a put option in a B&S model where S0 = 1 and r = 0.


  • Black Scholes model: Numerical example

    Price of a call E[(e^{σG} − K)+] with σ = 0.5 and K = 1:
    Exact value = 0.28353
    Estimated value (N=100) = 0.249, confidence interval at 95%: [0.164; 0.334]
    Estimated value (N=1000) = 0.279, confidence interval: [0.248; 0.308]
    Estimated value (N=10000) = 0.276, confidence interval: [0.267; 0.285]

    Price of a put E[(K − e^{σG})+] with σ = 0.5 and K = 1:
    Exact value = 0.15038
    Estimated value (N=100) = 0.154, confidence interval at 95%: [0.112; 0.195]
    Estimated value (N=1000) = 0.155, confidence interval: [0.143; 0.167]
    Estimated value (N=10000) = 0.149, confidence interval: [0.145; 0.152]


  • Black Scholes model

    Figure: Standard deviation of the preceding payoffs (σ = 0.5) as a function of the strike K; call in green, put in red.


  • Black Scholes model

    In practical cases, the moneyness S0/K of most liquid options (in our example S0 = 1) belongs in general to [0.7, 1.3].

    Thus, it is better to price puts than calls by Monte Carlo methods.

    To recover the price of the call we may use the Call-Put parity:

    E[(e^{σG} − K)+] = E[(K − e^{σG})+] + e^{σ²/2} − K.

    Price of a call E[(e^{σG} − K)+] with σ = 0.5 and K = 1 computed by Call-Put parity and MC:

    Exact value = 0.28353
    Estimated value (N=100) = 0.281, confidence interval at 95%: [0.240; 0.320]
    Estimated value (N=1000) = 0.285, confidence interval: [0.272; 0.297]
    Estimated value (N=10000) = 0.287, confidence interval: [0.283; 0.291]
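    The parity trick is a first example of a control variate: the put is estimated by crude MC (it has the smaller variance) and the deterministic term e^{σ²/2} − K is added back. A minimal sketch (names and seed are ours):

```python
import math
import random

def call_via_put_parity(sigma, k, n, seed=0):
    """Estimate E[(e^{sigma G} - K)+] by MC on the put, plus the deterministic
    parity term e^{sigma^2/2} - K."""
    rng = random.Random(seed)
    put = sum(max(k - math.exp(sigma * rng.gauss(0, 1)), 0.0) for _ in range(n)) / n
    return put + math.exp(0.5 * sigma ** 2) - k

est = call_via_put_parity(0.5, 1.0, 10_000)
```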


  • Plan

    1 Variance reduction techniques:
      Introduction
      Reminder on the B&S model
      Control Variate
      Antithetic Variables
      Importance sampling
      Conditioning


  • Control Variate

    We want to compute by a Monte Carlo method E[f(X)].

    Idea: E[f(X)] = E[f(X) − h(X)] + E[h(X)]

    where h is chosen to ensure that:

    E[h(X)] may be computed explicitly;

    Var(f(X) − h(X)) ≪ Var(f(X)) (intuitively we look for f(X) − h(X) small).

    Example: Call-Put parity.

    Example: Call-Put parity


  • Control Variate: Basket options (Homework 1)

    Aim: Price by MC method a Basket put in the B&S framework

    Let G1 and G2 be two independent N(0, 1) and (σ11, σ12, σ21, σ22) ∈ R⁴. We define

    S1 = eσ11G1+σ12G2

    S2 = eσ21G1+σ22G2 .

    For a1, a2 > 0 with a1 + a2 = 1, we are interested in the numerical computation (no closed-form formula) of

    E[ (K − X)+ ]  where X = a1 S1 + a2 S2.


  • Control Variate: Basket options (Homework 1)

    First Idea: Classical MC methods

    With a1 = a2 = 0.5, K = 1, σ11 = σ22 = 0.1 and σ12 = σ21 = 0.15:

    Box-Muller method to generate Gaussian random variables:
    Estimated value (N=10000) = 6.334 × 10⁻², confidence interval at 95%: [6.159 × 10⁻²; 6.508 × 10⁻²]

    Inversion method to generate Gaussian random variables:
    Estimated value (N=10000) = 6.336 × 10⁻², confidence interval at 95%: [6.161 × 10⁻²; 6.511 × 10⁻²]


  • Control Variate: Basket options (Homework 1)

    Second Idea: Control variate when (σ11, σ12, σ21, σ22) are small.

    We have X ≈ e^Y where

    Y = a1 log(S1) + a2 log(S2) ↪ N(0, σ²)  with

    σ² = (a1 σ11 + a2 σ21)² + (a1 σ12 + a2 σ22)².

    And we remark that

    E[(K − X)+] = E[ (K − X)+ − (K − e^Y)+ ] + C

    where

    C = E[(K − e^Y)+] = K N( log(K)/σ ) − e^{σ²/2} N( log(K)/σ − σ ).


  • Control Variate: Basket options (Homework 1)

    We use a classical Monte Carlo method (N = 10000) to approximate

    E[ (K − X)+ − (K − e^Y)+ ]

    and deduce E[(K − X)+].

    With a1 = a2 = 0.5, K = 1, σ11 = σ22 = 0.1 and σ12 = σ21 = 0.15:

    Estimated value (N=10000) = 6.312 × 10⁻², confidence interval at 95%: [6.311 × 10⁻²; 6.313 × 10⁻²]
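    A sketch of the whole procedure (names are ours; C is the closed form above, and e^Y is built from the same Gaussians as X so that the two payoffs are strongly correlated):

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def basket_put_cv(a1, a2, k, s11, s12, s21, s22, n, seed=0):
    """MC estimate of E[(K - X)+], X = a1 S1 + a2 S2, with the lognormal proxy
    e^Y as control variate; C = E[(K - e^Y)+] is known in closed form."""
    sig = math.sqrt((a1 * s11 + a2 * s21) ** 2 + (a1 * s12 + a2 * s22) ** 2)
    c = (k * norm_cdf(math.log(k) / sig)
         - math.exp(0.5 * sig ** 2) * norm_cdf(math.log(k) / sig - sig))
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        g1, g2 = rng.gauss(0, 1), rng.gauss(0, 1)
        l1, l2 = s11 * g1 + s12 * g2, s21 * g1 + s22 * g2   # log S1, log S2
        x = a1 * math.exp(l1) + a2 * math.exp(l2)
        y = a1 * l1 + a2 * l2
        acc += max(k - x, 0.0) - max(k - math.exp(y), 0.0)
    return acc / n + c

est = basket_put_cv(0.5, 0.5, 1.0, 0.1, 0.15, 0.15, 0.1, 10_000)
```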


  • Control Variate: Asian Options with the Kemna and Vorst Method

    Aim: Price by MC method an Asian call in the B&S model.

    Thus we have to approximate (no closed-form formula)

    e^{−rT} E[ ( (1/T) ∫_0^T Ss ds − K )+ ]

    where for all t ∈ [0, T],

    St = x0 e^{(r − σ²/2)t + σWt}.

    First Idea: (1/T) ∫_0^T Ss ds ≈ (1/p) Σ_{k=0}^{p} S_{kT/p}, then MC.

    With σ = 0.2, x0 = K = 100, r = 0.1, T = 1 and p = 50 one obtains:
    Estimated value (N=10000) = 6.93, confidence interval at 95%: [6.769; 7.101]


  • Control Variate: Asian Options with the Kemna and Vorst Method

    Second Idea: When σ and r are small, Kemna and Vorst tell us that

    (1/T) ∫_0^T Ss ds ≈ exp( (1/T) ∫_0^T log(Ss) ds ) = A_T.

    We are going to use the identity

    e^{−rT} E[ ( (1/T) ∫_0^T Ss ds − K )+ ] = e^{−rT} E[ ( (1/T) ∫_0^T Ss ds − K )+ − (A_T − K)+ ]  (1)

    + e^{−rT} E[ (A_T − K)+ ]  (2)


  • Control Variate: Asian Options with the Kemna and Vorst Method

    For (1) we use the first idea (approximation by Riemann sums).

    For (2) we just have to remark that

    A_T = exp( (1/T) ∫_0^T log(Ss) ds ) ↪ x0 exp( N( (r̃ − σ̃²/2) T , σ̃² T ) )

    where r̃ = r/2 − σ²/12 and σ̃ = σ/√3.

    Thus we may use the Black-Scholes formula to compute (2) explicitly.

    With σ = 0.2, x0 = K = 100, r = 0.1, T = 1 and p = 50 one obtains:

    Estimated value (N=10000) = 6.981, confidence interval at 95%: [6.969; 6.993].
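    A sketch of the Kemna-Vorst procedure (names are ours; as an assumption we average over the p + 1 grid points S_{kT/p}, k = 0, ..., p, one possible discretization convention):

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def asian_call_kv(x0, k, r, sigma, t, p, n, seed=0):
    """Kemna-Vorst: MC on the difference (1) plus the closed form (2), a
    Black-Scholes price for the lognormal A_T (drift r tilde, vol sigma tilde)."""
    rt = r / 2 - sigma ** 2 / 12            # r tilde
    st = sigma / math.sqrt(3)               # sigma tilde
    d1 = (math.log(x0 / k) + (rt + 0.5 * st ** 2) * t) / (st * math.sqrt(t))
    d2 = d1 - st * math.sqrt(t)
    part2 = math.exp(-r * t) * (x0 * math.exp(rt * t) * norm_cdf(d1) - k * norm_cdf(d2))
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        b, s_sum, log_sum = 0.0, x0, math.log(x0)
        for j in range(1, p + 1):
            b += math.sqrt(t / p) * rng.gauss(0, 1)
            s = x0 * math.exp((r - 0.5 * sigma ** 2) * (j * t / p) + sigma * b)
            s_sum += s
            log_sum += math.log(s)
        avg = s_sum / (p + 1)               # Riemann approximation of the average
        a_t = math.exp(log_sum / (p + 1))   # discrete geometric average
        acc += max(avg - k, 0.0) - max(a_t - k, 0.0)
    return math.exp(-r * t) * acc / n + part2

est = asian_call_kv(100.0, 100.0, 0.1, 0.2, 1.0, 50, 10_000)
```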


  • Plan

    1 Variance reduction techniques:
      Introduction
      Reminder on the B&S model
      Control Variate
      Antithetic Variables
      Importance sampling
      Conditioning


  • Antithetic Variables

    If (Xi)i∈N* are independent with the same distribution as X, we may approximate E[g(X)] by

    E[g(X)] ≈ (1/2n) Σ_{k=1}^{2n} g(Xk) = (1/2n) Σ_{k=1}^{n} [ g(X_{2k−1}) + g(X_{2k}) ].

    Suppose that there exists a transformation T: R → R such that

    T(X) has the same distribution as X.

    In this case we may use

    E[g(X)] = E[g(X) + g(T(X))] / 2 ≈ (1/2n) Σ_{k=1}^{n} [ g(Xk) + g(T(Xk)) ].

    Question: Is Var [g(X1) + g(X2)] > Var [g(X1) + g(T (X1))]?


  • Antithetic Variables

    We have

    Var[g(X1) + g(X2)] = 2 Var[g(X1)]

    and

    Var[g(X1) + g(T(X1))] = 2 Var[g(X1)] + 2 Cov[g(X1), g(T(X1))].

    So,

    Cov[g(X1), g(T(X1))] < 0 ⇒ Var[g(X1) + g(X2)] > Var[g(X1) + g(T(X1))].

    Under some simple conditions on g and T ,

    Cov [g(X1), g(T (X1))] < 0 holds.


  • Antithetic Variables

    Lemma: If X is a random variable and f, h: R → R are two non-decreasing (or non-increasing) mappings, then

    E[h(X) f(X)] ≥ E[h(X)] E[f(X)].

    Corollary: If g is monotonic and T is non-increasing,

    Cov[g(X1), g(T(X1))] ≤ 0.


  • Antithetic Variables

    Proof of the lemma: Let Y be a random variable independent of X with the same distribution.

    From

    E[(h(X) − h(Y))(f(X) − f(Y))] ≥ 0

    we deduce

    E[h(X )f (X )] + E[h(Y )f (Y )] ≥ E[h(X )f (Y )] + E[h(Y )f (X )].

    Using independence and equi-distribution

    E[h(X )f (X )] ≥ E[h(X )]E[f (X )]

    thus

    Cov [f (X ), h(X )] ≥ 0.


  • Antithetic Variables for call options in B&S

    We want to apply the preceding method to

    c = E[(eσG − K )+].

    Here we have to remark that

    g(x) = (e^{σx} − K)+ is non-decreasing (σ > 0),

    T(x) = −x is non-increasing,

    G and −G have the same distribution,

    thus we may use the following approximation:

    E[g(G)] ≈ (1/2n) Σ_{k=1}^{n} [ g(Gk) + g(T(Gk)) ].
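    A minimal sketch of the antithetic estimator for this call (names and seed are ours):

```python
import math
import random

def call_antithetic(sigma, k, n, seed=0):
    """E[g(G)] ~ (1/2n) * sum of g(G_k) + g(-G_k), with g(x) = (e^{sigma x} - K)+."""
    rng = random.Random(seed)

    def g(x):
        return max(math.exp(sigma * x) - k, 0.0)

    acc = 0.0
    for _ in range(n):
        x = rng.gauss(0, 1)
        acc += g(x) + g(-x)        # antithetic pair (G, -G)
    return acc / (2 * n)

est = call_antithetic(0.5, 1.0, 10_000)
```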


  • Antithetic Variables for call options in B&S

    Numerical results for σ = 0.5 and K = 1:

    Without antithetic variables (exact value = 0.28353):
    Estimated value (n=100) = 0.238, confidence interval at 95%: [0.151; 0.324]
    Estimated value (n=10000) = 0.285, confidence interval: [0.275; 0.295]

    With antithetic variables (exact value = 0.28353):
    Estimated value (n=100) = 0.283, confidence interval at 95%: [0.227; 0.338]
    Estimated value (n=10000) = 0.283, confidence interval: [0.276; 0.289]


  • Plan

    1 Variance reduction techniques:
      Introduction
      Reminder on the B&S model
      Control Variate
      Antithetic Variables
      Importance sampling
      Conditioning


  • Importance sampling

    Let X have the distribution fX(x)dx; we want to approximate

    E[g(X)].

    We consider another random variable Y with distribution fY(y)dy. One has

    E[g(X)] = ∫_R g(x) fX(x) dx = ∫_R g(x) (fX(x)/fY(x)) fY(x) dx = E[ g(Y) fX(Y)/fY(Y) ].

    If we are able to simulate the distribution of Y, we may use a MC method to approximate E[ g(Y) fX(Y)/fY(Y) ].

    Question: If Z = g(Y) fX(Y)/fY(Y), is Var(Z) < Var(g(X))?


  • Importance sampling

    But

    Var(Z) = ∫_R g(x)² (fX(x)²/fY(x)) dx − E[g(X)]².

    When g ≥ 0, fY(x) = g(x) fX(x)/E[g(X)] ⇒ Var(Z) = 0.

    Problem: E[g(X)] is unknown...

    Idea: Take fY(x) ≈ cste · |fX(x) g(x)|.


  • Importance sampling: example 1

    We want to compute by MC methods ∫_0^1 cos(πx/2) dx = E[cos(πU/2)] where U ↪ U([0, 1]).

    g(x) = cos(πx/2) satisfies g(0) = 1 and g(1) = 0, thus we take

    fY(x) = cste (1 − x²) 1_{[0,1]}(x).

    fY being a density ⇒ cste = 3/2.

    If Y has the density fY, ∫_0^1 cos(πx/2) dx = E[Z] where

    Z = 2 cos(πY/2) / ( 3(1 − Y²) ).

    In this case

    Var(Z) ≈ Var(cos(πU/2)) / 100...
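    A sketch of this importance-sampling estimator (names are ours). Y is drawn from fY by rejection: propose a uniform x and accept it with probability fY(x)/(3/2) = 1 − x², which is valid since fY ≤ 3/2 on [0, 1]:

```python
import math
import random

def sample_fy(rng):
    """Rejection sampling from f_Y(x) = (3/2)(1 - x^2) on [0, 1]."""
    while True:
        x = rng.random()
        if rng.random() <= 1.0 - x * x:    # accept with probability 1 - x^2
            return x

def integral_is(n, seed=0):
    """Importance-sampling estimate of int_0^1 cos(pi x / 2) dx = 2/pi."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        y = sample_fy(rng)
        acc += 2.0 * math.cos(math.pi * y / 2.0) / (3.0 * (1.0 - y * y))
    return acc / n

est = integral_is(10_000)
```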


  • Importance sampling: example 1

    Remark: In the preceding example we are able to simulate Y with density

    fY(x) = (3/2)(1 − x²) 1_{[0,1]}(x)

    using the next result, which we have already seen (rejection method...):

    Proposition: Let fY be a density on R and

    D_{fY} = {(x, u) ∈ R × R+ | 0 ≤ u ≤ fY(x)}.

    Let (Y, U) be a random variable on R × R+. Then

    (Y, U) ↪ U(D_{fY}) ⇔ Y has density fY and, given Y = x, U ↪ U([0, fY(x)]).


  • Importance sampling: Call option in B&S

    We want to use the preceding method to compute

    c = E[(eσG − 1)+]

    where G ↪→ N (0, 1).

    Idea: Since σ is small, e^{σx} − 1 ≈ σx, so a good sampling density is fY(x) ≈ cste · σx · (1/√(2π)) e^{−x²/2} on R+, where

    c = ∫_{R+} (e^{σx} − 1) (1/√(2π)) e^{−x²/2} dx.

    With the change of variable y = x²,

    c = ∫_{R+} ( (e^{σ√y} − 1)/√(2πy) ) (1/2) e^{−y/2} dy = E[ (e^{σ√Y} − 1)/√(2πY) ]  where Y ↪ E(0.5).
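    A sketch of the resulting estimator (names are ours; E(0.5) is the exponential distribution of rate 1/2, i.e. random.expovariate(0.5)):

```python
import math
import random

def call_is(sigma, n, seed=0):
    """c = E[(e^{sigma sqrt(Y)} - 1) / sqrt(2 pi Y)], Y exponential of rate 1/2."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        y = rng.expovariate(0.5)    # E(0.5): density (1/2) e^{-y/2} on R+
        acc += (math.exp(sigma * math.sqrt(y)) - 1.0) / math.sqrt(2.0 * math.pi * y)
    return acc / n

est = call_is(0.5, 10_000)
```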


  • Importance sampling: Call option in B&S

    Numerical results for σ = 0.5:

    Without importance sampling (exact value = 0.28353):
    Estimated value (N=100) = 0.249, confidence interval at 95%: [0.164; 0.334]
    Estimated value (N=10000) = 0.276, confidence interval: [0.267; 0.285]

    With importance sampling (exact value = 0.28353):
    Estimated value (N=100) = 0.287, confidence interval at 95%: [0.275; 0.300]
    Estimated value (N=10000) = 0.284, confidence interval: [0.283; 0.285]


  • Plan

    1 Variance reduction techniques:
      Introduction
      Reminder on the B&S model
      Control Variate
      Antithetic Variables
      Importance sampling
      Conditioning


  • Conditioning

    We want to compute by MC methods

    E[g(X, Y)].

    If h(X) = E[g(X, Y) | X], we have from classical properties that

    E[g(X, Y)] = E[h(X)].

    From the Jensen inequality for conditional expectation, h²(X) ≤ E[g²(X, Y) | X], so

    Var[g(X, Y)] ≥ Var[h(X)].

    Pb: MC for E[h(X)], but we have to know h explicitly...


  • Conditioning

    Suppose that the pair (X, Y) has the distribution fX,Y(x, y)dxdy. We know in this case that:

    The distribution of X is given by fX(x)dx where fX(x) = ∫_R fX,Y(x, y) dy.

    The distribution of Y is given by fY(y)dy where fY(y) = ∫_R fX,Y(x, y) dx.

    The conditional distribution of Y given X = x is given by fY|X(x, y)dy where

    fY|X(x, y) = 1_{fX(x)>0} fX,Y(x, y)/fX(x).

    Thus

    h(X) = E[g(X, Y) | X]

    where

    h(x) = ∫_R g(x, y) fY|X(x, y) dy.


  • Conditioning: Best of call option

    We want to approximate

    p = E[(Max(eσ1G1 , eσ2G2)− K )+]

    where G1 and G2 are two independent N(0, 1) random variables.

    First idea: Classical MC...

    Second idea: Conditioning by G1 + MC...

    We have p = E[h(e^{σ1 G1})] where

    h(x) = E[(Max(x, e^{σ2 G2}) − K)+]

    and h can be computed explicitly.


  • Conditioning: Best of call option

    In fact h(x) = E[(Max(x, e^{σ2 G2}) − K)+], but

    (Max(x, e^{σ2 G2}) − K)+ = (e^{σ2 G2} − K)+ 1_{x≤K} + ( x − K + (e^{σ2 G2} − x)+ ) 1_{x>K}.

    Thus

    h(x) = ( e^{σ2²/2} N(−log(K)/σ2 + σ2) − K N(−log(K)/σ2) ) 1_{x≤K}

    + ( x − K + e^{σ2²/2} N(−log(x)/σ2 + σ2) − x N(−log(x)/σ2) ) 1_{x>K}.

    h is explicit.
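    A sketch of the conditioning estimator p ≈ (1/N) Σ h(e^{σ1 G1,k}) with this explicit h (names are ours):

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def best_of_call_cond(sig1, sig2, k, n, seed=0):
    """p = E[h(e^{sig1 G1})], h being the explicit conditional expectation above."""
    def h(x):
        if x <= k:
            return (math.exp(0.5 * sig2 ** 2) * norm_cdf(-math.log(k) / sig2 + sig2)
                    - k * norm_cdf(-math.log(k) / sig2))
        return (x - k + math.exp(0.5 * sig2 ** 2) * norm_cdf(-math.log(x) / sig2 + sig2)
                - x * norm_cdf(-math.log(x) / sig2))

    rng = random.Random(seed)
    return sum(h(math.exp(sig1 * rng.gauss(0, 1))) for _ in range(n)) / n

est = best_of_call_cond(0.2, 0.5, 1.0, 10_000)
```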


  • Conditioning: Best of call option

    With σ1 = 0.2, σ2 = 0.5, K = 1:

    Without conditioning (exact value 0.338):
    Estimated value (N=100) = 0.318, confidence interval at 95%: [0.219; 0.417]
    Estimated value (N=10000) = 0.327, confidence interval at 95%: [0.318; 0.336]

    With conditioning (exact value 0.338):
    Estimated value (N=100) = 0.344, confidence interval at 95%: [0.323; 0.365]
    Estimated value (N=10000) = 0.339, confidence interval at 95%: [0.338; 0.341]

