Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation. Estimating quantiles and related risk measures. Arthur Charpentier, [email protected], Universiteit van Amsterdam, January 2008. Joint work with Abder Oulidi, IMA Angers.

Transcript of Slides univ-van-amsterdam

Page 1: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Estimating quantiles and related risk measures

Arthur Charpentier

[email protected]

Universiteit van Amsterdam, January 2008

joint work with Abder Oulidi, IMA Angers

1

Page 2: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Agenda

• General introduction

Risk measures

• Distorted risk measures
• Value-at-Risk and related risk measures

Quantile estimation: classical techniques

• Parametric estimation
• Semiparametric estimation, extreme value theory
• Nonparametric estimation

Quantile estimation: use of Beta kernels

• Beta kernel estimation
• Transforming observations

A simulation based study

2

Page 3: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Agenda

• General introduction

Risk measures

• Distorted risk measures
• Value-at-Risk and related risk measures

Quantile estimation: classical techniques

• Parametric estimation
• Semiparametric estimation, extreme value theory
• Nonparametric estimation

Quantile estimation: use of Beta kernels

• Beta kernel estimation
• Transforming observations

A simulation based study

3

Page 4: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Risk measures and price of a risk

Pascal, Fermat, Condorcet, Huygens and d'Alembert, in the XVIIIth century, proposed to evaluate the "produit scalaire des probabilités et des gains" (the scalar product of probabilities and gains),

⟨p, x⟩ = Σ_{i=1}^n p_i x_i = E_P(X),

based on the "règle des parties" (the rule for dividing the stakes).

For Quetelet, the expected value was, in the context of insurance, the price that guarantees a financial equilibrium.

From this idea, we consider in insurance the pure premium as E_P(X). As in Cournot (1843), "l'espérance mathématique est donc le juste prix des chances" (the mathematical expectation is thus the fair price of the chances), or the "fair price" mentioned in Feller (1953).

4

Page 5: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Risk measures : the expected utility approach

R_u(X) = ∫ u(x) dP = ∫ P(u(X) > x) dx

where u : [0, ∞) → [0, ∞) is a utility function.

Example: with an exponential utility, u(x) = [1 − e^{−αx}]/α,

R_u(X) = (1/α) log( E_P(e^{αX}) ).

5

Page 6: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Risk measures : Yaari's dual approach

R_g(X) = ∫ x d(g∘P) = ∫ g(P(X > x)) dx

where g : [0, 1] → [0, 1] is a distortion function.

Example:
– if g(x) = 1(x ≥ 1 − α), then R_g(X) = VaR(X, α),
– if g(x) = min{x/(1 − α), 1}, then R_g(X) = TVaR(X, α) (also called expected shortfall), R_g(X) = E_P(X | X > VaR(X, α)).

6

Page 7: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Distortion of values versus distortion of probabilities

[Figure: "Calcul de l'espérance mathématique" (computing the expected value).]

Fig. 1 – Expected value, ∫ x dF_X(x) = ∫ P(X > x) dx.

7

Page 8: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Distortion of values versus distortion of probabilities

[Figure: "Calcul de l'espérance d'utilité" (computing the expected utility).]

Fig. 2 – Expected utility, ∫ u(x) dF_X(x).

8

Page 9: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Distortion of values versus distortion of probabilities

[Figure: "Calcul de l'intégrale de Choquet" (computing the Choquet integral).]

Fig. 3 – Distorted probabilities, ∫ g(P(X > x)) dx.

9

Page 10: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Distorted risk measures and quantiles

Equivalently, note that E(X) = ∫_0^1 F_X^{-1}(1 − u) du, and

R_g(X) = ∫_0^1 F_X^{-1}(1 − u) dg(u).

A very general class of risk measures can thus be defined as

R_g(X) = ∫_0^1 F_X^{-1}(1 − u) dg(u)

where g is a distortion function, i.e. an increasing function with g(0) = 0 and g(1) = 1.

Note that g is a cumulative distribution function, so R_g(X) is a weighted sum of quantiles, where dg(1 − ·) denotes the distribution of the weights.
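As a quick illustration of this weighted-quantile view, here is a minimal sketch (not from the slides; the helper names are ours) that approximates R_g(X) by a Riemann sum over the empirical quantile function, for the two distortion functions g given earlier:

```python
import random

def empirical_quantile(xs, p):
    # linear interpolation between order statistics, p in [0, 1]
    s = sorted(xs)
    h = p * (len(s) - 1)
    i = min(int(h), len(s) - 2)
    return s[i] + (h - i) * (s[i + 1] - s[i])

def distorted_risk(xs, g, m=2000):
    # R_g(X) = int_0^1 F^{-1}(1 - u) dg(u), approximated on a grid of m intervals
    total = 0.0
    for j in range(m):
        u0, u1 = j / m, (j + 1) / m
        total += empirical_quantile(xs, 1 - (u0 + u1) / 2) * (g(u1) - g(u0))
    return total

random.seed(1)
sample = [random.gauss(0, 1) for _ in range(20000)]
alpha = 0.95
g_var = lambda u: 1.0 if u >= 1 - alpha else 0.0  # distortion giving VaR
g_tvar = lambda u: min(u / (1 - alpha), 1.0)      # distortion giving TVaR
var_95 = distorted_risk(sample, g_var)    # theoretical value: Phi^{-1}(0.95) ~ 1.645
tvar_95 = distorted_risk(sample, g_tvar)  # theoretical value: phi(1.645)/0.05 ~ 2.06
```

With the VaR distortion all the mass of dg sits at u = 1 − α, so the sum collapses to one quantile; with the TVaR distortion the weights spread uniformly over the tail quantiles.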

10

Page 11: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Figure panels: the distortion function g (cdf, top row) and its density dg (bottom row), plotted against 1 − probability level, for the VaR (quantile) distortion, the TVaR (expected shortfall) distortion, and a smooth distortion.]

Fig. 4 – Distortion function, g and dg.

11

Page 12: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Agenda

• General introduction

Risk measures

• Distorted risk measures
• Value-at-Risk and related risk measures

Quantile estimation: classical techniques

• Parametric estimation
• Semiparametric estimation, extreme value theory
• Nonparametric estimation

Quantile estimation: use of Beta kernels

• Beta kernel estimation
• Transforming observations

A simulation based study

12

Page 13: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Using a parametric approach

If F_X ∈ F = {F_θ, θ ∈ Θ} (assumed to be continuous), then q_X(α) = F_θ^{-1}(α), and thus a natural estimator is

q̂_X(α) = F_{θ̂}^{-1}(α), (1)

where θ̂ is an estimator of θ (maximum likelihood, method of moments, ...).

13

Page 14: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Using the Gaussian distribution

A natural idea (that can be found in classical financial models) is to assume Gaussian distributions: if X ∼ N(µ, σ), then the α-quantile is simply

q(α) = µ + Φ^{-1}(α) σ,

where Φ^{-1}(α) is obtained from statistical tables (or any statistical software), e.g. Φ^{-1}(α) ≈ 1.28 if α = 90%, or ≈ 1.645 if α = 95%.

Definition 1. Given an n-sample {X_1, ..., X_n}, the (Gaussian) parametric estimation of the α-quantile is

q̂_n(α) = µ̂ + Φ^{-1}(α) σ̂,

where µ̂ and σ̂ are the sample mean and standard deviation.
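Definition 1 is a one-liner in practice. A minimal sketch (not in the slides; it uses Python's statistics.NormalDist for Φ^{-1}):

```python
import random
from statistics import NormalDist, fmean

def gaussian_quantile(xs, alpha):
    # q_n(alpha) = mu_hat + Phi^{-1}(alpha) * sigma_hat  (Definition 1)
    n = len(xs)
    mu = fmean(xs)
    sigma = (sum((x - mu) ** 2 for x in xs) / (n - 1)) ** 0.5
    return mu + NormalDist().inv_cdf(alpha) * sigma

random.seed(0)
xs = [random.gauss(100, 10) for _ in range(5000)]
q95 = gaussian_quantile(xs, 0.95)  # close to 100 + 1.645 * 10 = 116.45
```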

14

Page 15: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Using a parametric model

Actually, even if the Gaussian model does not fit very well, it is still possible to use a Gaussian approximation.

If the variance is finite, (X − E(X))/σ might be close to the Gaussian distribution, and one can thus consider the so-called Cornish-Fisher approximation, i.e.

Q(X, α) ≈ E(X) + z_α √V(X), (2)

where

z_α = Φ^{-1}(α) + (ζ_1/6)[Φ^{-1}(α)^2 − 1] + (ζ_2/24)[Φ^{-1}(α)^3 − 3Φ^{-1}(α)] − (ζ_1^2/36)[2Φ^{-1}(α)^3 − 5Φ^{-1}(α)],

where ζ_1 is the skewness of X, and ζ_2 is the excess kurtosis, i.e.

ζ_1 = E([X − E(X)]^3) / E([X − E(X)]^2)^{3/2} and ζ_2 = E([X − E(X)]^4) / E([X − E(X)]^2)^2 − 3. (3)

15

Page 16: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Using a parametric model

Definition 2. Given an n-sample {X_1, ..., X_n}, the Cornish-Fisher estimation of the α-quantile is

q̂_n(α) = µ̂ + z_α σ̂, where µ̂ = (1/n) Σ_{i=1}^n X_i and σ̂ = √( (1/(n−1)) Σ_{i=1}^n (X_i − µ̂)^2 ),

and

z_α = Φ^{-1}(α) + (ζ̂_1/6)[Φ^{-1}(α)^2 − 1] + (ζ̂_2/24)[Φ^{-1}(α)^3 − 3Φ^{-1}(α)] − (ζ̂_1^2/36)[2Φ^{-1}(α)^3 − 5Φ^{-1}(α)], (4)

where ζ̂_1 is the natural estimator of the skewness of X, and ζ̂_2 is the natural estimator of the excess kurtosis, i.e.

ζ̂_1 = (√(n(n−1)) / (n−2)) · (√n Σ_{i=1}^n (X_i − µ̂)^3) / (Σ_{i=1}^n (X_i − µ̂)^2)^{3/2}

and

ζ̂_2 = ((n−1) / ((n−2)(n−3))) ((n+1) ζ̂'_2 + 6), where ζ̂'_2 = n Σ_{i=1}^n (X_i − µ̂)^4 / (Σ_{i=1}^n (X_i − µ̂)^2)^2 − 3.
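Definition 2 can be sketched directly (a toy illustration, not from the slides); on exponential data the skewness/kurtosis correction pushes the Gaussian quantile toward the true one:

```python
import random
from statistics import NormalDist

def cornish_fisher_quantile(xs, alpha):
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs)
    m3 = sum((x - mu) ** 3 for x in xs)
    m4 = sum((x - mu) ** 4 for x in xs)
    sigma = (m2 / (n - 1)) ** 0.5
    # natural estimators of skewness and excess kurtosis, as in Definition 2
    z1 = (n * (n - 1)) ** 0.5 / (n - 2) * (n ** 0.5 * m3) / m2 ** 1.5
    z2p = n * m4 / m2 ** 2 - 3
    z2 = (n - 1) / ((n - 2) * (n - 3)) * ((n + 1) * z2p + 6)
    u = NormalDist().inv_cdf(alpha)
    z = (u + z1 / 6 * (u ** 2 - 1) + z2 / 24 * (u ** 3 - 3 * u)
         - z1 ** 2 / 36 * (2 * u ** 3 - 5 * u))
    return mu + z * sigma

random.seed(0)
xs = [random.expovariate(1.0) for _ in range(10000)]
q95 = cornish_fisher_quantile(xs, 0.95)  # exact exponential quantile: -ln(0.05) ~ 3.0
```

For comparison, the plain Gaussian estimate µ̂ + 1.645 σ̂ ≈ 2.6 here, visibly below the exact value.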

16

Page 17: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Parametric estimators and model error

[Figure panels: left, density of a theoretical lognormal versus fitted lognormal and fitted gamma; right, density of a theoretical Student versus fitted Student and fitted Gaussian.]

Fig. 5 – Estimation of Value-at-Risk, model error.

17

Page 18: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Using a semiparametric model

Given an n-sample {Y_1, ..., Y_n}, let Y_{1:n} ≤ Y_{2:n} ≤ ... ≤ Y_{n:n} denote the associated order statistics.

If u is large enough, Y − u given Y > u has (approximately) a Generalized Pareto distribution with parameters ξ and β (Pickands-Balkema-de Haan theorem).

If u = Y_{n−k:n} for k large enough, and if ξ > 0, denote by β̂_k and ξ̂_k the maximum likelihood estimators of the Generalized Pareto distribution fitted to the sample {Y_{n−k+1:n} − Y_{n−k:n}, ..., Y_{n:n} − Y_{n−k:n}}; then

Q̂(Y, α) = Y_{n−k:n} + (β̂_k / ξ̂_k) ( ((n/k)(1 − α))^{−ξ̂_k} − 1 ). (5)

An alternative is to use Hill's estimator if ξ > 0,

Q̂(Y, α) = Y_{n−k:n} ((n/k)(1 − α))^{−ξ̂_k}, where ξ̂_k = (1/k) Σ_{i=1}^k (log Y_{n+1−i:n} − log Y_{n−k:n}). (6)
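A sketch of the Hill quantile estimator (6) (illustrative only, not from the slides), on simulated Pareto data with tail index ξ = 1/2:

```python
import math
import random

def hill_quantile(ys, alpha, k):
    s = sorted(ys)
    n = len(s)
    y_nk = s[n - k - 1]  # Y_{n-k:n}, the (k+1)-th largest observation
    # Hill estimator of the tail index xi from the k largest log-spacings
    xi = sum(math.log(s[n - i]) - math.log(y_nk) for i in range(1, k + 1)) / k
    return y_nk * ((n / k) * (1 - alpha)) ** (-xi)

random.seed(2)
ys = [random.paretovariate(2.0) for _ in range(20000)]  # P(Y > y) = y^{-2}, xi = 1/2
q99 = hill_quantile(ys, 0.99, k=500)  # exact 99% quantile: 0.01^{-1/2} = 10
```

The choice of k is the usual bias/variance trade-off: too small and ξ̂_k is noisy, too large and observations outside the tail contaminate the fit.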

18

Page 19: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

On nonparametric estimation for quantiles

For a continuous distribution, q(α) = F_X^{-1}(α); thus a natural idea is to consider q̂(α) = F̂_X^{-1}(α), for some nonparametric estimate F̂_X of F_X.

Definition 3. The empirical cumulative distribution function F_n, based on the sample {X_1, ..., X_n}, is

F_n(x) = (1/n) Σ_{i=1}^n 1(X_i ≤ x).

Definition 4. The kernel-based cumulative distribution function, based on the sample {X_1, ..., X_n}, is

F̂_n(x) = (1/nh) Σ_{i=1}^n ∫_{−∞}^x k((t − X_i)/h) dt = (1/n) Σ_{i=1}^n K((x − X_i)/h)

where K(x) = ∫_{−∞}^x k(t) dt, k being a kernel and h the bandwidth.
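Both definitions can be sketched in a few lines (illustration only; the Gaussian kernel, with K = Φ, is one convenient choice):

```python
import math
import random

def ecdf(xs, x):
    # Definition 3: empirical cdf
    return sum(xi <= x for xi in xs) / len(xs)

def kernel_cdf(xs, x, h):
    # Definition 4 with a Gaussian kernel: F_n(x) = (1/n) sum K((x - X_i)/h)
    Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return sum(Phi((x - xi) / h) for xi in xs) / len(xs)

random.seed(3)
xs = [random.gauss(0, 1) for _ in range(2000)]
# at x = 1, both estimates should be close to Phi(1) ~ 0.841
f1, f2 = ecdf(xs, 1.0), kernel_cdf(xs, 1.0, h=0.2)
```

The kernel version is continuous and strictly increasing, so it can be inverted numerically to get a smooth quantile estimate.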

19

Page 20: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Smoothing nonparametric estimators

Two techniques have been considered to smooth quantile estimation, either implicitly or explicitly.

• consider a linear combination of order statistics:

The classical empirical quantile estimate is simply

Q̂_n(p) = F̂_n^{-1}(i/n) = X_{i:n} = X_{[np]:n}, where [·] denotes the integer part. (7)

The estimator is simple to obtain, but depends on only one observation. A natural extension is to use at least two observations, when np is not an integer. The weighted empirical quantile estimate is then defined as

Q̂_n(p) = (1 − γ) X_{[np]:n} + γ X_{[np]+1:n}, where γ = np − [np].
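A minimal sketch of the two estimators above (illustrative; the function names are ours, and both assume 1/n ≤ p ≤ 1):

```python
def raw_quantile(xs, p):
    # Q_n(p) = X_{[np]:n}  (equation (7)), 1-indexed order statistic
    s = sorted(xs)
    return s[int(len(s) * p) - 1]

def weighted_quantile(xs, p):
    # Q_n(p) = (1 - g) X_{[np]:n} + g X_{[np]+1:n}, with g = np - [np]
    s = sorted(xs)
    h = len(s) * p
    i = int(h)        # [np]
    g = h - i         # gamma
    if i >= len(s):
        return s[-1]
    return (1 - g) * s[i - 1] + g * s[i]

xs = [3, 1, 4, 1, 5, 9, 2, 6]        # sorted: 1 1 2 3 4 5 6 9
print(raw_quantile(xs, 0.5))         # np = 4, returns X_{4:8} = 3
print(weighted_quantile(xs, 0.56))   # 0.52 * 3 + 0.48 * 4 = 3.48
```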

20

Page 21: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Figure panels: "The quantile function in R", quantile level against probability level, for type=1, type=3, type=5 and type=7.]

Fig. 6 – Several quantile estimators in R.

21

Page 22: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Smoothing nonparametric estimators

In order to increase efficiency, L-statistics can be considered, i.e.

Q̂_n(p) = Σ_{i=1}^n W_{i,n,p} X_{i:n} = Σ_{i=1}^n W_{i,n,p} F̂_n^{-1}(i/n) = ∫_0^1 F̂_n^{-1}(t) k(p, h, t) dt (8)

where F̂_n is the empirical distribution function of the sample, k is a kernel and h a bandwidth. This expression can be written equivalently as

Q̂_n(p) = Σ_{i=1}^n [ ∫_{(i−1)/n}^{i/n} (1/h) k((t − p)/h) dt ] X_{(i)} = Σ_{i=1}^n [ IK((i/n − p)/h) − IK(((i−1)/n − p)/h) ] X_{(i)} (9)

where again IK(x) = ∫_{−∞}^x k(t) dt. The idea is to give more weight to order statistics X_{(i)} such that i is close to pn.

22

Page 23: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Figure: kernel weights against the quantile (probability) level.]

Fig. 7 – Quantile estimator as weighted sum of order statistics.

23

Page 24: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Figure: kernel weights against the quantile (probability) level.]

Fig. 8 – Quantile estimator as weighted sum of order statistics.

24

Page 25: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Figure: kernel weights against the quantile (probability) level.]

Fig. 9 – Quantile estimator as weighted sum of order statistics.

25

Page 26: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Figure: kernel weights against the quantile (probability) level.]

Fig. 10 – Quantile estimator as weighted sum of order statistics.

26

Page 27: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Figure: kernel weights against the quantile (probability) level.]

Fig. 11 – Quantile estimator as weighted sum of order statistics.

27

Page 28: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Smoothing nonparametric estimators

E.g. the so-called Harrell-Davis estimator is defined as

Q̂_n(p) = Σ_{i=1}^n [ ∫_{(i−1)/n}^{i/n} ( Γ(n+1) / (Γ((n+1)p) Γ((n+1)q)) ) y^{(n+1)p−1} (1 − y)^{(n+1)q−1} dy ] X_{i:n}, where q = 1 − p;

• find a smooth estimator for F_X, and then find (numerically) the inverse:

The α-quantile is defined as the solution of F_X ∘ q_X(α) = α.

If F̂_n denotes a continuous estimate of F, then a natural estimate for q_X(α) is q̂_n(α) such that F̂_n ∘ q̂_n(α) = α, obtained using e.g. a Gauss-Newton algorithm.
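The Harrell-Davis estimator above can be sketched as follows (an illustration; the Beta weights are integrated numerically with a midpoint rule, since the incomplete Beta function is not in the Python standard library):

```python
import math

def beta_pdf(y, a, b):
    # density of the Beta(a, b) distribution, via log-gamma for stability
    if y <= 0 or y >= 1:
        return 0.0
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(y) + (b - 1) * math.log(1 - y) - log_b)

def harrell_davis(xs, p, steps=200):
    # weight X_{i:n} by the Beta((n+1)p, (n+1)(1-p)) mass of ((i-1)/n, i/n]
    s = sorted(xs)
    n = len(s)
    a, b = (n + 1) * p, (n + 1) * (1 - p)
    total = 0.0
    for i in range(1, n + 1):
        lo, hi = (i - 1) / n, i / n
        w = sum(beta_pdf(lo + (j + 0.5) * (hi - lo) / steps, a, b)
                for j in range(steps)) * (hi - lo) / steps
        total += w * s[i - 1]
    return total

xs = list(range(1, 101))        # 1, 2, ..., 100
m = harrell_davis(xs, 0.5)      # smooth median, close to 50.5
```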

28

Page 29: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Agenda

• General introduction

Risk measures

• Distorted risk measures
• Value-at-Risk and related risk measures

Quantile estimation: classical techniques

• Parametric estimation
• Semiparametric estimation, extreme value theory
• Nonparametric estimation

Quantile estimation: use of Beta kernels

• Beta kernel estimation
• Transforming observations

A simulation based study

29

Page 30: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Kernel based estimation for bounded supports

Classical symmetric kernels work well when estimating densities with unbounded support,

f̂_h(x) = (1/nh) Σ_{i=1}^n k((x − X_i)/h),

where k is a kernel function (e.g. k(ω) = 1(|ω| ≤ 1)/2).

If k is a symmetric kernel, note that at the boundary of a density supported on [0, ∞),

E(f̂_h(0)) = (1/2) f(0) + O(h).

30

Page 31: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Figure panels: kernel-based estimation of the uniform density on [0, 1].]

Fig. 12 – Density estimation of a uniform density on [0, 1].

31

Page 32: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Kernel based estimation for bounded supports

Several techniques have been introduced to get a better estimation at the boundary:

– boundary kernels (Müller (1991))
– mirror image modification (Deheuvels & Hominal (1989), Schuster (1985))
– transformed kernels (Devroye & Györfi (1981), Wand, Marron & Ruppert (1991))
– Beta kernels (Brown & Chen (1999), Chen (1999, 2000)),

see Charpentier, Fermanian & Scaillet (2006) for a survey with application to copulas.

32

Page 33: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Beta kernel estimators

A Beta kernel estimator of the density (see Chen (1999)) on [0, 1] is

f̂_b(x) = (1/n) Σ_{i=1}^n k(X_i, 1 + x/b, 1 + (1 − x)/b), x ∈ [0, 1],

where k(u, α, β) = u^{α−1} (1 − u)^{β−1} / B(α, β), u ∈ [0, 1].

If {X_1, ..., X_n} are i.i.d. variables with density f_0, and if n → ∞ and b → 0, then (Bouezmarni & Scaillet (2005))

f̂_b(x) → f_0(x), x ∈ [0, 1].

This is the Beta 1 estimator.
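The Beta 1 estimator is a few lines of code (a sketch, using math.lgamma for the Beta function B(α, β)); on uniform data it stays close to 1 even at the boundaries, where a symmetric kernel would lose half its mass:

```python
import math
import random

def beta_density(u, a, b):
    # Beta(a, b) density at u, via log-gamma for numerical stability
    if u <= 0 or u >= 1:
        return 0.0
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(u) + (b - 1) * math.log(1 - u) - log_beta)

def beta1_estimator(xs, x, b):
    # f_b(x) = (1/n) sum_i Beta(1 + x/b, 1 + (1 - x)/b) density at X_i
    return sum(beta_density(xi, 1 + x / b, 1 + (1 - x) / b) for xi in xs) / len(xs)

random.seed(4)
xs = [random.random() for _ in range(5000)]   # uniform on [0, 1], true density 1
f_at_0 = beta1_estimator(xs, 0.0, b=0.05)
f_at_half = beta1_estimator(xs, 0.5, b=0.05)
```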

33

Page 34: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Figure panels: Beta kernels at x = 0.05, x = 0.10, x = 0.20 and x = 0.45.]

Fig. 13 – Shape of Beta kernels, for different x's and b's.

34

Page 35: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Improving Beta kernel estimators

Problem: the convergence is not uniform, and there is a large second order bias at the borders, i.e. at 0 and 1.

Chen (1999) proposed a modified Beta 2 kernel estimator, based on

k_2(u; b; t) =
  k_{t/b, (1−t)/b}(u)       if t ∈ [2b, 1 − 2b],
  k_{ρ_b(t), (1−t)/b}(u)    if t ∈ [0, 2b),
  k_{t/b, ρ_b(1−t)}(u)      if t ∈ (1 − 2b, 1],

where ρ_b(t) = 2b^2 + 2.5 − √(4b^4 + 6b^2 + 2.25 − t^2 − t/b).

35

Page 36: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Non-consistency of Beta kernel estimators

Problem: k(0, α, β) = k(1, α, β) = 0. So if there are point masses at 0 or 1, the estimator becomes inconsistent, i.e.

f̂_b(x) = (1/n) Σ_i k(X_i, 1 + x/b, 1 + (1 − x)/b), x ∈ [0, 1]
       = (1/n) Σ_{X_i ≠ 0,1} k(X_i, 1 + x/b, 1 + (1 − x)/b)
       = ((n − n_0 − n_1)/n) · (1/(n − n_0 − n_1)) Σ_{X_i ≠ 0,1} k(X_i, 1 + x/b, 1 + (1 − x)/b)
       ≈ (1 − P(X = 0) − P(X = 1)) · f_0(x), x ∈ [0, 1],

where n_0 and n_1 are the numbers of observations at 0 and 1. Therefore F̂_b(x) ≈ (1 − P(X = 0) − P(X = 1)) · F_0(x), and we may have problems finding a 95% or 99% quantile since the total mass will be lower.

36

Page 37: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Non-consistency of Beta kernel estimators

Gourieroux & Monfort (2007) proposed

f̂_b^{(1)}(x) = f̂_b(x) / ∫_0^1 f̂_b(t) dt, for all x ∈ [0, 1].

It is called macro-β since the correction is performed globally.

Gourieroux & Monfort (2007) also proposed

f̂_b^{(2)}(x) = (1/n) Σ_{i=1}^n k_β(X_i; b; x) / ∫_0^1 k_β(X_i; b; t) dt, for all x ∈ [0, 1].

It is called micro-β since the correction is performed locally.

37

Page 38: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Transforming observations ?

In the context of density estimation, Devroye and Györfi (1985) suggested to use a so-called transformed kernel estimate.

Given a random variable Y, if H is a strictly increasing function, then the p-quantile of H(Y) is equal to H(q(Y; p)).

An idea is to transform the initial observations {X_1, ..., X_n} into a sample {Y_1, ..., Y_n} where Y_i = H(X_i), with H : R → [0, 1], and then to use a Beta-kernel based estimator. Then q̂_n(X; p) = H^{-1}(q̂_n(Y; p)).

In the context of density estimation, f_X(x) = f_Y(H(x)) H′(x). As mentioned in Devroye and Györfi (1985) (p. 245), "for a transformed histogram estimate, the optimal H gives a uniform [0, 1] density and should therefore be equal to H(x) = F(x), for all x".

38

Page 39: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

Transforming observations? A Monte Carlo study

Assume that the sample {X_1, ..., X_n} has been generated from F_{θ_0} (from a family F = (F_θ, θ ∈ Θ)). 4 transformations will be considered:
– H = F_{θ̂} (based on a maximum likelihood procedure)
– H = F_{θ_0} (theoretical optimal transformation)
– H = F_θ with θ < θ_0 (heavier tails)
– H = F_θ with θ > θ_0 (lighter tails)

39

Page 40: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Two scatter panels of the transformed observations.]

Fig. 14 – F(X_i) versus F_{θ̂}(X_i), i.e. PP plot.

40

Page 41: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Two panels of the estimated density.]

Fig. 15 – Nonparametric estimation of the density of the F_{θ̂}(X_i)'s.

41

Page 42: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Two panels: estimated optimal transformation, quantile against probability level.]

Fig. 16 – Nonparametric estimation of the quantile function, F_{θ̂}^{-1}(q).

42

Page 43: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Two scatter panels of the transformed observations.]

Fig. 17 – F(X_i) versus F_{θ_0}(X_i), i.e. PP plot.

43

Page 44: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Two panels of the estimated density.]

Fig. 18 – Nonparametric estimation of the density of the F_{θ_0}(X_i)'s.

44

Page 45: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Two panels: estimated optimal transformation, quantile against probability level.]

Fig. 19 – Nonparametric estimation of the quantile function, F_{θ_0}^{-1}(q).

45

Page 46: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Two scatter panels of the transformed observations.]

Fig. 20 – F(X_i) versus F_θ(X_i), i.e. PP plot, θ < θ_0 (heavier tails).

46

Page 47: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Two panels of the estimated density.]

Fig. 21 – Estimation of the density of the F_θ(X_i)'s, θ < θ_0 (heavier tails).

47

Page 48: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Two panels: estimated optimal transformation, quantile against probability level.]

Fig. 22 – Estimation of the quantile function, F_θ^{-1}(q), θ < θ_0 (heavier tails).

48

Page 49: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Two scatter panels of the transformed observations.]

Fig. 23 – F(X_i) versus F_θ(X_i), i.e. PP plot, θ > θ_0 (lighter tails).

49

Page 50: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Two panels of the estimated density.]

Fig. 24 – Estimation of the density of the F_θ(X_i)'s, θ > θ_0 (lighter tails).

50

Page 51: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

[Two panels: estimated optimal transformation, quantile against probability level.]

Fig. 25 – Estimation of the quantile function, F_θ^{-1}(q), θ > θ_0 (lighter tails).

51

Page 52: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

A universal distribution for losses

BNGB considered the generalized Champernowne distribution to model insurance claims, i.e. positive variables,

F_{α,M,c}(y) = ((y + c)^α − c^α) / ((y + c)^α + (M + c)^α − 2c^α), where α > 0, c ≥ 0 and M > 0.

The associated density is then

f_{α,M,c}(y) = α (y + c)^{α−1} ((M + c)^α − c^α) / ((y + c)^α + (M + c)^α − 2c^α)^2.
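The Champernowne cdf inverts in closed form, which is convenient when it is used as the transformation H. A sketch (the parameter values below are arbitrary):

```python
def champernowne_cdf(y, a, M, c):
    # F_{a,M,c}(y) = ((y+c)^a - c^a) / ((y+c)^a + (M+c)^a - 2 c^a)
    return ((y + c) ** a - c ** a) / ((y + c) ** a + (M + c) ** a - 2 * c ** a)

def champernowne_quantile(p, a, M, c):
    # solve F(y) = p: (y+c)^a = (c^a + p ((M+c)^a - 2 c^a)) / (1 - p)
    t = (c ** a + p * ((M + c) ** a - 2 * c ** a)) / (1 - p)
    return t ** (1 / a) - c

a, M, c = 1.5, 3.0, 0.5
half = champernowne_cdf(M, a, M, c)   # F(M) = 1/2: M is the median by construction
q95 = champernowne_quantile(0.95, a, M, c)
```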

52

Page 53: Slides univ-van-amsterdam

Arthur CHARPENTIER - Nonparametric quantile estimation.

A Monte Carlo study to compare those nonparametric estimators

As in ....., 4 distributions were considered:
– normal distribution,
– Weibull distribution,
– log-normal distribution,
– mixture of Pareto and log-normal distributions.

53

Page 54: Slides univ-van-amsterdam

[Figure: density of the 95% quantile estimators for the lognormal/Pareto mixture (benchmark R estimator, HD Harrell-Davis, PRK Park, B1 Beta 1, B2 Beta 2), with box-plots for the 11 quantile estimators: R (benchmark), E (Epanechnikov), HD (Harrell-Davis), PDG (Padgett), PRK (Park), Beta1, MACRO Beta1, MICRO Beta1, Beta2, MACRO Beta2, MICRO Beta2.]

Fig. 26 – Distribution of the 95% quantile of the mixture distribution, n = 200, and associated box-plots.

54

Page 55: Slides univ-van-amsterdam

[Figure: MSE ratio as a function of the probability level p, normal distribution, panels for HD (Harrell-Davis), B1 (Beta1), MACB1 (MACRO-Beta1) and PRK (Park), with sample sizes n = 50, 100, 200, 500.]

Fig. 27 – Comparing MSE for 6 estimators, the normal distribution case.

55

Page 56: Slides univ-van-amsterdam

[Figure: MSE ratio as a function of the probability level p, Weibull distribution, panels for HD (Harrell-Davis), MACB1 (MACRO-Beta1), MICB1 (MICRO-Beta1) and PRK (Park), with sample sizes n = 50, 100, 200, 500.]

Fig. 28 – Comparing MSE for 6 estimators, the Weibull distribution case.

56

Page 57: Slides univ-van-amsterdam

[Figure: MSE ratio as a function of the probability level p, lognormal distribution, panels for HD (Harrell-Davis), MACB1 (MACRO-Beta1), MICB1 (MICRO-Beta1), B1 (Beta1), PRK (Park), MACB2 (MACRO-Beta2), MICB2 (MICRO-Beta2) and B2 (Beta2), with sample sizes n = 50, 100, 200, 500.]

Fig. 29 – Comparing MSE for 9 estimators, the lognormal distribution case.

57

Page 58: Slides univ-van-amsterdam

[Figure: MSE ratio as a function of the probability level p, mixture distribution, panels for HD (Harrell-Davis), MACB1 (MACRO-Beta1), MICB1 (MICRO-Beta1), B1 (Beta1), PRK (Park), MACB2 (MACRO-Beta2), MICB2 (MICRO-Beta2) and B2 (Beta2), with sample sizes n = 50, 100, 200, 500.]

Fig. 30 – Comparing MSE for 9 estimators, the mixture distribution case.

58

Page 59: Slides univ-van-amsterdam


Portfolio optimal allocation

Classical problem as formulated in Markowitz (1952), Journal of Finance,

    Ο‰βˆ— ∈ argmin{ω′Σω}  u.c.  ω′μ β‰₯ Ξ· and ω′1 = 1

which is convex, hence equivalent (⇔) to

    Ο‰βˆ— ∈ argmax{ω′μ}  u.c.  ω′Σω ≀ Ξ·β€² and ω′1 = 1.

Roy (1952), Econometrica: β€œthe optimal bundle of assets (investment) for investors who employ the safety first principle is the portfolio that minimizes the probability of disaster”.

    Ο‰βˆ— ∈ argmin{VaR(ω′X, Ξ±)}  u.c.  E(ω′X) β‰₯ Ξ· and ω′1 = 1

which is nonconvex, and therefore not equivalent to

    Ο‰βˆ— ∈ argmax{E(ω′X)}  u.c.  VaR(ω′X, Ξ±) ≀ Ξ·β€² and ω′1 = 1.
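Because the VaR program is nonconvex, one practical option is brute force over a grid of allocations, in the spirit of the grid G of Figure 32. The sketch below is not the authors' procedure: it uses hypothetical synthetic Gaussian returns for two assets and the raw empirical quantile as the VaR estimator.

```python
import random

def var_alpha(losses, alpha):
    # empirical Value-at-Risk: alpha-quantile of the loss distribution
    s = sorted(losses)
    return s[min(len(s) - 1, int(alpha * len(s)))]

def optimal_allocation(returns, alpha, grid):
    # Roy-style safety first: minimise VaR of the portfolio loss -w'X
    # over a finite grid of candidate weight vectors
    best = None
    for w in grid:
        port_losses = [-sum(wi * ri for wi, ri in zip(w, row)) for row in returns]
        v = var_alpha(port_losses, alpha)
        if best is None or v < best[0]:
            best = (v, w)
    return best

rng = random.Random(0)
# two synthetic assets with equal mean; asset 2 has twice the volatility
returns = [[rng.gauss(0.001, 0.01), rng.gauss(0.001, 0.02)] for _ in range(1000)]
# one-dimensional grid of fully invested long-only weights (w, 1 - w)
grid = [(w / 20, 1 - w / 20) for w in range(21)]
v, w = optimal_allocation(returns, 0.95, grid)
print(w, round(v, 4))
```

With equal means, the VaR-minimising weights should tilt towards the less volatile asset, close to the minimum-variance split.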

59

Page 60: Slides univ-van-amsterdam

Empirical data, Eurostocks

[Figure: scatterplot matrix of daily log-returns for the four indices, DAX (1), SMI (2), CAC (3) and FTSE (4).]

Fig. 31 – Scatterplot of log-returns.

60

Page 61: Slides univ-van-amsterdam

[Figure: Value-at-Risk surface and level curves over the grid of allocations to the 3rd and 4th assets, for Ξ± = 75% (left) and Ξ± = 97.5% (right).]

Fig. 32 – Value-at-Risk for all possible allocations on the grid G (surface and level curves), with Ξ± = 75% on the left and Ξ± = 97.5% on the right.

61

Page 62: Slides univ-van-amsterdam

[Figure: estimated optimal weight of each asset (assets 1 to 4) as the probability level Ξ± varies from 75% to 97.5%.]

Fig. 33 – Optimal allocations for different probability levels (Ξ± = 75%, 77.5%, 80%, ..., 95%, 97.5%), with allocation for the first asset (top left) up to the fourth asset (bottom right).

62

Page 63: Slides univ-van-amsterdam

level       asset 1         asset 2         asset 3          asset 4
mean-var.   0.2277          0.5393          -0.2516          0.4846
75%         0.222 (0.253)   0.550 (0.141)   -0.062 (0.161)   0.289 (0.162)
77.5%       0.206 (0.244)   0.558 (0.136)   -0.083 (0.176)   0.319 (0.179)
80%         0.215 (0.259)   0.552 (0.144)   -0.106 (0.184)   0.339 (0.191)
82.5%       0.251 (0.275)   0.530 (0.152)   -0.139 (0.187)   0.357 (0.204)
85%         0.307 (0.276)   0.500 (0.154)   -0.163 (0.215)   0.357 (0.221)
87.5%       0.377 (0.241)   0.460 (0.134)   -0.196 (0.203)   0.359 (0.205)
90%         0.404 (0.243)   0.444 (0.135)   -0.228 (0.163)   0.380 (0.175)
92.5%       0.394 (0.224)   0.448 (0.124)   -0.253 (0.141)   0.410 (0.153)
95%         0.402 (0.214)   0.441 (0.121)   -0.310 (0.184)   0.466 (0.170)
97.5%       0.339 (0.268)   0.467 (0.151)   -0.532 (0.219)   0.726 (0.200)

Tab. 1 – Mean and standard deviation of estimated optimal allocation, for different quantile levels (mean-variance allocation in the first row, standard deviations in parentheses).

1. raw estimator: Q(Y, Ξ±) = Y_{[αn]:n}

2. mixture estimator: Q(Y, Ξ±) = βˆ‘_{i=1}^{n} Ξ»_i(Ξ±) Y_{i:n}, which is the standard quantile estimate in R (see [?]),

3. Gaussian estimator: Q(Y, Ξ±) = YΜ„ + z_{1βˆ’Ξ±} Β· sd(Y), where sd denotes the empirical standard deviation,

4. Hill's estimator, with k = [n/5]: Q(Y, Ξ±) = Y_{nβˆ’k:n} Β· ((n/k)(1βˆ’Ξ±))^{βˆ’ΞΎ_k}, where ΞΎ_k = (1/k) βˆ‘_{i=1}^{k} log(Y_{n+1βˆ’i:n} / Y_{nβˆ’k:n}) (assuming that ΞΎ > 0),

5. kernel based estimator, obtained as a mixture of smoothed quantiles, derived as inverse values of a kernel based estimator of the cumulative distribution function, i.e. Q(Y, Ξ±) = βˆ‘_{i=1}^{n} Ξ»_i(Ξ±) FΜ‚^{βˆ’1}(i/n).
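Three of these estimators can be sketched directly; the mixture weights Ξ»_i(Ξ±) and the kernel-smoothed inverse CDF are omitted, and the index conventions below are the ones assumed here.

```python
import math
import random
import statistics

def raw_quantile(y, alpha):
    # estimator 1: order statistic Y_{[alpha*n]:n}
    s = sorted(y)
    return s[min(len(s) - 1, int(alpha * len(s)))]

def gaussian_quantile(y, alpha):
    # estimator 3: sample mean plus a standard normal quantile
    # times the empirical standard deviation
    z = statistics.NormalDist().inv_cdf(alpha)
    return statistics.mean(y) + z * statistics.pstdev(y)

def hill_quantile(y, alpha):
    # estimator 4: Hill/Weissman extrapolation using the
    # k = [n/5] largest order statistics (positive data only)
    s = sorted(y)
    n = len(s)
    k = n // 5
    xi = sum(math.log(s[n - i] / s[n - k - 1]) for i in range(1, k + 1)) / k
    return s[n - k - 1] * ((n / k) * (1 - alpha)) ** (-xi)

# illustrative check on heavy-tailed data: Pareto with tail index 2,
# whose true 95% quantile is 0.05 ** (-1 / 2), roughly 4.47
rng = random.Random(42)
y = [rng.paretovariate(2.0) for _ in range(2000)]
print(round(raw_quantile(y, 0.95), 2), round(hill_quantile(y, 0.95), 2))
```

On such heavy-tailed samples the Gaussian estimator typically underestimates the quantile, which is the motivation for the semiparametric and kernel alternatives.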

64

Page 65: Slides univ-van-amsterdam

[Figure: estimated optimal weight of each asset (assets 1 to 4) under the five 95% quantile estimators: Est. 1 (raw), Est. 2 (mixture), Est. 3 (Gaussian), Est. 4 (Hill), Est. 5 (kernel).]

Fig. 34 – Optimal allocations for different 95% quantile estimators, with allocation for the first asset (top left) up to the fourth asset (bottom right).

65

Page 66: Slides univ-van-amsterdam


Some references

Charpentier, A. & Oulidi, A. (2007). Beta kernel estimation for Value-at-Risk of heavy-tailed distributions. In revision, Computational Statistics & Data Analysis.

Charpentier, A. & Oulidi, A. (2007). Estimating allocations for Value-at-Risk portfolio optimization. To appear in Mathematical Methods in Operations Research.

Chen, S. X. (1999). A Beta Kernel Estimator for Density Functions.Computational Statistics & Data Analysis, 31, 131-145.

Gourieroux, C. & Monfort, A. (2006). (Non) Consistency of the Beta Kernel Estimator for Recovery Rate Distribution. CREST-DP 2006-32.

66