VaR and CVaR (transcript of msekce.karlin.mff.cuni.cz/~vorisek/Seminar/1415z/1415z_Bejda.pdf)
Motivation General concept of CVaR Optimization Comparison
VaR and CVaR
Premysl Bejda
2014
Contents
1 Motivation
2 General concept of CVaR
3 Optimization
4 Comparison
Motivation General concept of CVaR Optimization Comparison
Picture of our risk measures
Figure: Artificial example of VaR.
General overview
VaR is a well-known measure among investors.
VaR is also discussed in Basel II.
CVaR takes the distribution of the tail into account.
CVaR has many advantages, but its interpretation is harder.
The previous picture is just for illustration; from the next chapter on we focus on loss instead of profit.
Outline
1 Motivation
2 General concept of CVaR
3 Optimization
4 Comparison
Model and assumptions
Define the possible loss as the function z = f(x, y).
x ∈ X represents a decision vector; X ⊂ R^n expresses the decision constraints.
y ∈ Y ⊂ R^m represents the future values (a random variable) of quantities such as interest rates, weather data, ...
y and x are independent (one cannot influence the interest rate by some policy).
The distribution function of the loss z is defined as
Ψ(x, ζ) = P(f(x, y) ≤ ζ),
where we have to make some technical assumptions (see [Rockafellar and Uryasev (2001)]).
VaR
Definition 1 (VaR). The α-VaR of the loss associated with a decision x is the value
ζα(x) = min{ζ : Ψ(x, ζ) ≥ α}.
Since Ψ(x, ζ) is nondecreasing and right continuous, the minimum is attained.
When Ψ(x, ζ) is continuous and increasing, we simply solve Ψ(x, ζ) = α.
Define α-VaR+ as ζ+α(x) = inf{ζ : Ψ(x, ζ) > α}.
When ζα and ζ+α differ, Ψ(x, ζ) is constant on (ζα, ζ+α), and a small change in α influences the result a lot. In this sense VaR is not robust enough.
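These two definitions can be evaluated directly on a discrete sample. The following sketch (not part of the original slides; the function names are made up) computes ζα and ζ+α for an equally weighted sample and exhibits a case where they differ:

```python
import numpy as np

def var_alpha(losses, alpha):
    """alpha-VaR: the smallest zeta with Psi(zeta) = P(loss <= zeta) >= alpha."""
    losses = np.sort(np.asarray(losses, dtype=float))
    cdf = np.arange(1, len(losses) + 1) / len(losses)   # right-continuous step CDF
    return losses[np.searchsorted(cdf, alpha)]          # first index with cdf >= alpha

def var_alpha_plus(losses, alpha):
    """alpha-VaR+: the infimum of zeta with Psi(zeta) > alpha."""
    losses = np.sort(np.asarray(losses, dtype=float))
    cdf = np.arange(1, len(losses) + 1) / len(losses)
    return losses[np.searchsorted(cdf, alpha, side="right")]  # first index with cdf > alpha

sample = list(range(1, 11))          # losses 1..10, each with probability 0.1
print(var_alpha(sample, 0.9))        # 9.0:  Psi(9) = 0.9 >= 0.9
print(var_alpha_plus(sample, 0.9))   # 10.0: Psi is constant on [9, 10), so VaR+ differs
```

Here Ψ is flat between the atoms 9 and 10, which is exactly the non-robust situation described above: VaR at α = 0.9 is 9, while VaR+ jumps to 10.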
Atom
Definition 2 (Atom). When the difference
Ψ(x, ζ) − Ψ(x, ζ−) = P(f(x, y) = ζ) > 0
(the minus sign denotes the limit from the left at ζ) is positive, so that Ψ(x, ·) has a jump at ζ, we say that a probability atom is present at ζ.
CVaR
Definition 3 (CVaR). The α-CVaR of the loss associated with a decision x is the value
φα(x) = ∫_{−∞}^{∞} ζ dΨα(x, ζ),
where the distribution in question is the one defined by
Ψα(x, ζ) = 0 for ζ < ζα(x),
Ψα(x, ζ) = (Ψ(x, ζ) − α)/(1 − α) for ζ ≥ ζα(x).
What to do when an atom appears at the point ζα(x)? The definition is good enough to deal with this problem. But what should really be meant by CVaR in this case?
CVaR+ and CVaR−
Definition 4 (CVaR+ and CVaR−).
The α-CVaR+ of the loss associated with a decision x is thevalue
φ+α(x) = E(f (x ,y) � f (x ,y) A ζα(x)),whereas α-CVaR− of the loss associated with a decision x isthe value
φ−α(x) = E(f (x ,y) � f (x ,y) C ζα(x)),
φ−α(x) is well defined because P(f(x, y) ≥ ζα) ≥ 1 − α > 0.
On the other hand, it can happen that P(f(x, y) > ζα) = 0 when there is a suitably large atom at ζα.
Basic CVaR relations
Proposition 5.
If there is no probability atom at ζα(x), one simply has
φ−α(x) = φα(x) = φ+α(x).
If a probability atom does exist at ζα(x) and Ψ(x, ζα(x)−) < α < Ψ(x, ζα(x)) < 1, we get
φ−α(x) < φα(x) < φ+α(x).
Atom in ζα
Figure: The picture illustrates the previous proposition.
Basic CVaR relations
Proposition 6 (CVaR as a weighted average).
Let
λα(x) = (Ψ(x, ζα(x)) − α)/(1 − α) ∈ [0, 1].
If Ψ(x, ζα(x)) < 1, so that there is a chance of a loss greater than ζα(x), then
φα(x) = λα(x) ζα(x) + (1 − λα(x)) φ+α(x),
whereas if Ψ(x, ζα(x)) = 1, so that ζα(x) is the highest loss that can occur (and thus λα(x) = 1 but φ+α(x) is not defined), then
φα(x) = ζα(x).
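The weighted-average formula is easy to check numerically. The sketch below (illustrative only, not from the slides; the helper name and the two-point loss distribution are invented) computes φα via this proposition for a distribution with an atom at the VaR:

```python
def cvar_weighted_average(zs, ps, alpha):
    """phi_alpha = lambda * zeta_alpha + (1 - lambda) * phi_alpha_plus
    for a discrete loss with atoms zs (sorted ascending) and probabilities ps."""
    cum = 0.0
    for z, p in zip(zs, ps):
        cum += p
        if cum >= alpha:                 # zeta_alpha = first atom with Psi >= alpha
            zeta, psi = z, cum
            break
    if psi >= 1.0:                       # zeta_alpha is the highest possible loss
        return zeta
    lam = (psi - alpha) / (1.0 - alpha)  # lambda = (Psi(zeta) - alpha) / (1 - alpha)
    tail_p = sum(p for z, p in zip(zs, ps) if z > zeta)
    cvar_plus = sum(p * z for z, p in zip(zs, ps) if z > zeta) / tail_p  # E[z | z > zeta]
    return lam * zeta + (1.0 - lam) * cvar_plus

# Loss 0 with prob. 0.95, loss 10 with prob. 0.05; alpha = 0.9 puts an atom at the VaR:
# lambda = (0.95 - 0.9)/0.1 = 0.5 and CVaR+ = 10, so CVaR = 0.5*0 + 0.5*10 = 5.
print(cvar_weighted_average([0.0, 10.0], [0.95, 0.05], 0.9))
```

This is exactly the atom-splitting described above: only the 0.05 probability of the atom at 0 that lies beyond level α contributes to the tail average.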
Basic CVaR relations
The proof of the previous proposition follows from the definitions of VaR and CVaR.
From the proposition we see that a probability atom can be split so that we compute the CVaR on the proper tail part, which has probability 1 − α.
It is surprising that CVaR, which is coherent (see later), can be obtained as a linear combination of two non-coherent risk measures.
Corollary 7.
α-CVaR dominates α-VaR: φα(x) ≥ ζα(x).
CVaR for scenario models
In this case we suppose that the probability measure is discrete, i.e. it is concentrated only in atoms, and there are finitely many of these atoms.
Proposition 8 (CVaR for scenario models).
Suppose the probability measure P is concentrated in finitely many points yk ∈ Y, so that for each x ∈ X the distribution of the loss z = f(x, y) is likewise concentrated in finitely many points and Ψ(x, ·) is a step function with jumps at those points. Fixing x, let the corresponding loss points be ordered as z1 < z2 < ⋯ < zN with P(f(x, y) = zk) = pk > 0. Let kα be the unique index such that
∑_{k=1}^{kα} pk ≥ α > ∑_{k=1}^{kα−1} pk.
CVaR for scenario models
Proposition 8 (CVaR for scenario models).
The α-VaR of the loss is then given by ζα(x) = zkα, whereas the α-CVaR is given by
φα(x) = [ (∑_{k=1}^{kα} pk − α) zkα + ∑_{k=kα+1}^{N} pk zk ] / (1 − α).
Further,
λα(x) = (∑_{k=1}^{kα} pk − α)/(1 − α) ≤ pkα/(pkα + ⋯ + pN).
It is enough to realize that Ψ(x, ζα(x)) = ∑_{k=1}^{kα} pk, Ψ(x, ζα(x)−) = ∑_{k=1}^{kα−1} pk, and
Ψ(x, ζα(x)) − Ψ(x, ζα(x)−) = pkα.
Then we use the previous proposition. The last inequality follows from α > ∑_{k=1}^{kα−1} pk.
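The closed-form expressions of this proposition translate directly into code. This is an illustrative sketch (not from the slides); the scenarios must be passed sorted ascending with probabilities summing to 1:

```python
def scenario_var_cvar(zs, ps, alpha):
    """alpha-VaR and alpha-CVaR for ordered scenarios z1 < ... < zN
    with probabilities p1, ..., pN."""
    cum = 0.0
    for k, p in enumerate(ps):
        cum += p
        if cum >= alpha:        # k_alpha: first index with p1 + ... + pk >= alpha
            k_alpha = k
            break
    var = zs[k_alpha]           # zeta_alpha(x) = z_{k_alpha}
    tail = sum(p * z for p, z in zip(ps[k_alpha + 1:], zs[k_alpha + 1:]))
    cvar = ((cum - alpha) * var + tail) / (1.0 - alpha)
    return var, cvar

# Two scenarios: loss 0 w.p. 0.95 and loss 10 w.p. 0.05, at alpha = 0.9:
var, cvar = scenario_var_cvar([0.0, 10.0], [0.95, 0.05], 0.9)
```

For these numbers the formula gives VaR = 0 and CVaR = (0.05·0 + 0.05·10)/0.1 = 5, matching the weighted-average proposition.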
Outline
1 Motivation
2 General concept of CVaR
3 Optimization
4 Comparison
Goal
We will show that the α-VaR and α-CVaR of the loss z = f(x, y) with a given strategy x can be computed simultaneously by solving a one-dimensional optimization problem of convex nature. Define
Fα(x, ζ) = ζ + E([f(x, y) − ζ]+)/(1 − α),
where [t]+ = max{0, t}. The following proposition is not valid for CVaR+ or CVaR−.
Fundamental minimization formula
Proposition 9 (Fundamental minimization formula).
As a function of ζ ∈ R, Fα(x, ζ) is finite and convex (hence continuous), with
φα(x) = min_ζ Fα(x, ζ),
ζα(x) = lower endpoint of argmin_ζ Fα(x, ζ),
ζ+α(x) = upper endpoint of argmin_ζ Fα(x, ζ),
where the argmin refers to the set of ζ for which the minimum is attained; this set is a nonempty, closed, bounded interval (possibly a single point). In particular, one always has
ζα(x) ∈ argmin_ζ Fα(x, ζ), φα(x) = Fα(x, ζα(x)).
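The minimization formula can be verified numerically (a sketch, not part of the slides): for an equally weighted sample, the minimum of Fα over ζ is attained at a sample point, so scanning the sample recovers VaR and CVaR at once.

```python
import numpy as np

def F_alpha(zeta, losses, alpha):
    """F_alpha(zeta) = zeta + E[(loss - zeta)^+] / (1 - alpha)."""
    losses = np.asarray(losses, dtype=float)
    return zeta + np.mean(np.maximum(losses - zeta, 0.0)) / (1.0 - alpha)

losses = np.arange(1.0, 11.0)            # losses 1..10, equally likely
alpha = 0.9
values = [F_alpha(z, losses, alpha) for z in losses]
var = losses[int(np.argmin(values))]     # lower endpoint of the argmin -> VaR
cvar = min(values)                       # minimal value of F_alpha     -> CVaR
```

For this sample the argmin is the whole interval [9, 10], its lower endpoint 9 is the VaR, and the minimal value 10 is the CVaR; here Ψ(9) = 0.9 = α exactly, so λα = 0 and CVaR = CVaR+.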
Sublinear function
Definition 10 (Sublinear function).
The function h(x) is sublinear if h(x + y) ≤ h(x) + h(y) and h(λx) = λh(x) for λ > 0. The second property is called positive homogeneity.
From the second property we know lim_{t→0+} h(tx) = lim_{t→0+} t h(x) = 0.
We know further that h(t) ≤ h(0) + h(t); this yields h(0) ≥ 0.
Finally 0 ≤ h(0) ≤ h(t) + h(−t), and h(t) + h(−t) can be made arbitrarily close to 0, which gives h(0) = 0.
Sublinearity is equivalent to the combination of convexity with positive homogeneity [Rockafellar (1997)].
Convexity of CVaR
Proposition 11 (Convexity of CVaR).
If f(x, y) is convex with respect to x, then φα(x) is also convex in x. In this case Fα(x, ζ) is jointly convex in (x, ζ).
According to the previous note, if f(x, y) is sublinear with respect to x, then φα(x) is also sublinear in x, and Fα(x, ζ) is jointly sublinear in (x, ζ).
The proof comes from the fact that Fα(x, ζ) is convex when f(x, y) is convex in x.
For the convexity of φα we need the fact that when a convex function of two vector variables is minimized with respect to one of them, the result is a convex function of the other; see [Rockafellar (1997)].
Coherence
Definition 12 (Coherence). We consider a risk measure ρ as a functional on a linear space of random variables, where z = f(x, y) is our random variable. The first requirement for coherence is that ρ be sublinear:
ρ(z + z′) ≤ ρ(z) + ρ(z′), ρ(λz) = λρ(z) for λ ≥ 0.
Further,
ρ(z) = c when z ≡ c (a constant), and
ρ(z) ≤ ρ(z′) when z ≤ z′.
We have shown that it is enough to require positive homogeneity for λ > 0 only; h(0) = 0 then follows.
The inequality z ≤ z′ refers to first-order stochastic dominance (i.e. P(z ≤ a) ≥ P(z′ ≤ a) for all possible a).
Coherence of CVaR
Proposition 13 (Coherence of CVaR).
When f(x, y) is linear with respect to x, then α-CVaR is a coherent risk measure: φα(x) is not only sublinear with respect to x, but it further satisfies
φα(x) = c when f(x, y) ≡ c
(thus accurately reflecting a lack of risk), and it obeys the monotonicity rule that
φα(x) ≤ φα(x′) when f(x, y) ≤ f(x′, y).
f(x, y) is linear when f(x, y) = x1 f1(y) + ⋯ + xn fn(y).
We do not actually need linearity in the proposition; the result is more general.
The proof is a consequence of the fundamental minimization formula.
Optimization shortcut
In problems of optimization under uncertainty, a risk measure can enter the objective, the constraints, or both.
In this context the convexity of CVaR is a big advantage.
Theorem 14 (Optimization shortcut).
Minimizing φα(x) with respect to x ∈ X is equivalent to minimizing Fα(x, ζ) over all (x, ζ) ∈ X × R, in the sense that
min_{x∈X} φα(x) = min_{(x,ζ)∈X×R} Fα(x, ζ),
where moreover
(x*, ζ*) ∈ argmin_{(x,ζ)∈X×R} Fα(x, ζ) ⇔ x* ∈ argmin_{x∈X} φα(x) and ζ* ∈ argmin_{ζ∈R} Fα(x*, ζ).
Optimization shortcut
The proof is based on the idea that minimization with respect to (x, ζ) can be carried out by first minimizing with respect to ζ for each x and then minimizing the result with respect to x.
Corollary 15 (VaR and CVaR calculation).
If (x*, ζ*) minimizes Fα over X × R, then not only does x* minimize φα over X, but also
φα(x*) = Fα(x*, ζ*), ζα(x*) ≤ ζ* ≤ ζ+α(x*),
where actually ζα(x*) = ζ* if argmin_ζ Fα(x*, ζ) reduces to a single point.
Optimization shortcut
If argmin_ζ Fα(x*, ζ) does not consist of just a single point, it is possible to have ζα(x*) < ζ*. In that case the joint minimization producing (x*, ζ*) yields the CVaR associated with x*, but it does not immediately yield the VaR associated with x*.
The fact that minimization of CVaR (when the problem is convex) does not have to proceed numerically through repeated calculations of φα(x) for various decisions x is a powerful attraction of working with CVaR.
VaR can be ill behaved and does not offer such a shortcut.
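This shortcut is what makes CVaR optimization tractable in practice: with finitely many equally likely scenarios and a loss linear in x, minimizing Fα(x, ζ) becomes a linear program in (x, ζ, u), where the auxiliary variable u_i stands for [f(x, y_i) − ζ]+. The sketch below is not from the slides; the two-asset scenario matrix is invented, and SciPy's linprog is used as the solver.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical scenario losses: rows = scenarios, columns = assets.
# The two assets are perfectly anti-correlated, so an even split hedges all risk.
L = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
alpha = 0.5
N, n = L.shape

# Variables: [x_1..x_n, zeta, u_1..u_N]; minimize zeta + sum(u_i) / ((1-alpha)N).
c = np.concatenate([np.zeros(n), [1.0], np.full(N, 1.0 / ((1.0 - alpha) * N))])

# u_i >= L[i,:] @ x - zeta, rewritten as L[i,:] @ x - zeta - u_i <= 0.
A_ub = np.hstack([L, -np.ones((N, 1)), -np.eye(N)])
b_ub = np.zeros(N)

A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(N)]).reshape(1, -1)  # sum(x) = 1
b_eq = [1.0]
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * N  # long only, zeta free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_opt, cvar_opt = res.x[:n], res.fun   # optimal weights and minimal CVaR
```

Because the two assets hedge each other, the joint minimization should return x* ≈ (0.5, 0.5) with minimal CVaR ≈ 0; and, as the corollary warns, the reported ζ* need not equal the VaR when the argmin over ζ is not a single point.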
Outline
1 Motivation
2 General concept of CVaR
3 Optimization
4 Comparison
Pros of VaR
A very easy idea behind it; easy to understand and interpret: how much you may lose at a certain confidence level.
Two distributions can be ranked by comparing their VaRs at the same confidence level.
VaR focuses on a specific part of the distribution, specified by the confidence level.
Stability of the estimation procedure: it is not affected by very high tail losses.
It is often estimated by historical or Monte Carlo simulation, or parametrically using approximations based on a second-order Taylor expansion.
Cons of VaR
VaR does not account for properties of the distribution beyond the confidence level; this can lead to taking undesirable risk.
Risk control using VaR may lead to undesirable results for skewed distributions.
VaR is a nonconvex and discontinuous function of the portfolio positions when returns have discrete distributions.
Pros of CVaR
Its interpretation is still straightforward: it measures the losses that hurt the most.
Defining CVaRα(X) for all confidence levels α in (0, 1) completely specifies the distribution of X.
CVaR is a coherent risk measure.
CVaRα(X) is continuous with respect to α.
CVaRα(w1X1 + ⋯ + wnXn) is a convex function with respect to (w1, . . . , wn).
Cons of CVaR
CVaR is more sensitive than VaR to estimation errors. For instance, historical scenarios often do not provide enough information about the tails; hence we should assume a certain model for the tail, calibrated on historical data.
In a financial setting, equally weighted portfolios may outperform CVaR-optimal portfolios when historical data have mean-reverting characteristics.
Slightly harder interpretation; not as widely used in practice.
VaR or CVaR
VaR and CVaR measure different parts of the distribution; depending on what is needed, one may be preferred over the other.
A trader may prefer VaR to CVaR, because he may like high uncontrolled risks; VaR is not as restrictive as CVaR at the same confidence level, and nothing dramatic happens to a trader in case of high losses.
A company owner will probably prefer CVaR: he has to cover large losses if they occur, hence he "really" needs to control tail events.
VaR may be better for optimizing portfolios when good models for the tails are not available.
VaR or CVaR
CVaR may not perform well out of sample when portfolio optimization is run with a poorly constructed set of scenarios.
Historical data may not give the right predictions of future tail events because of the mean-reverting characteristics of assets: high returns are typically followed by low returns, so a CVaR estimate based on history may be quite misleading.
If a good model of the tail is available, CVaR can be accurately estimated and should be used; CVaR has superior mathematical properties and can be easily handled in optimization and statistics.
Appropriate confidence levels for VaR and CVaR must be chosen separately, avoiding comparison of VaR and CVaR at the same level α, because they refer to different parts of the distribution.
Bibliography
R.T. Rockafellar, S. Uryasev. Conditional Value-at-Risk for General Loss Distributions. Research report, 2001.
R.T. Rockafellar. Convex Analysis. Princeton Landmarks in Mathematics and Physics, 1997.
S. Sarykalin, G. Serraino, S. Uryasev. Value-at-Risk vs. Conditional Value-at-Risk in Risk Management and Optimization. Tutorials in Operations Research, 2008.
A. Shapiro, D. Dentcheva, A. Ruszczynski. Lectures on Stochastic Programming: Modeling and Theory. SIAM, 2009.