Sklar's Theorem and the Rüschendorf transform revisited - An analysis of right quantiles

Frank Oertel

Copulas and Their Applications
To Commemorate the 75th Birthday of Professor Roger B. Nelsen
University of Almería
July 3-5, 2017
Content

1 Sklar's Theorem
2 Rüschendorf's proof of Sklar's Theorem revisited
3 The distribution of FV(X) for arbitrary distribution functions F
4 The distribution of FV(X) in the case of F = FX
5 A quotation - There is some truth to it!
Sklar's Theorem

Let F be an n-dimensional distribution function with marginals F1, ..., Fn. Then there exists a copula CF such that for all (x1, ..., xn) ∈ R^n we have

F(x1, ..., xn) = CF(F1(x1), ..., Fn(xn)).

If in addition F is continuous, the copula CF is unique.

Conversely, for any univariate distribution functions H1, ..., Hn and any copula C, the composition C(H1, ..., Hn) defines an n-dimensional distribution function with marginals H1, ..., Hn.
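The converse direction of Sklar's Theorem can be checked numerically. The following Python sketch is purely illustrative and uses ingredients not taken from the talk: the independence copula Π(u, v) = uv, an exponential(1) marginal H1 and a standard normal marginal H2. Letting one argument of F = C(H1, H2) tend to +∞ should recover the other marginal.

```python
import math

# Converse direction of Sklar's Theorem (numerical sketch):
# composing a copula with univariate distribution functions yields
# a bivariate distribution function with exactly those marginals.

def H1(x):                       # exponential(1) distribution function
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def H2(y):                       # standard normal distribution function
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def C_indep(u, v):               # independence copula Pi(u, v) = u * v
    return u * v

def F(x, y):                     # F = C(H1, H2): a bivariate df by Sklar
    return C_indep(H1(x), H2(y))

# Sending one argument to +infinity recovers the other marginal.
big = 1e6
assert abs(F(1.3, big) - H1(1.3)) < 1e-9
assert abs(F(big, 0.7) - H2(0.7)) < 1e-9
```

The same check works for any other copula in place of Π, since C(u, 1) = u and C(1, v) = v for every copula C.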
The Rüschendorf transform

In the following we fix an arbitrary distribution function F : R → [0, 1].

Definition. Let λ ∈ [0, 1] and x ∈ R. Put

RF(x, λ) := Fλ(x) := F(x−) + λ ∆F(x) = (1 − λ) F(x−) + λ F(x).

Given its importance in Rüschendorf's proof of Sklar's Theorem, we call the function RF : R × [0, 1] → [0, 1] the Rüschendorf transform.

In particular, for all (x, λ) ∈ R × [0, 1] the following inequality holds:

F(x−) ≤ Fλ(x) ≤ F(x).
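A minimal sketch of the definition, for a concrete step function F chosen only for illustration (the distribution function of a Bernoulli(0.3) variable, which is not an example from the talk). At a jump point the transform interpolates linearly across the jump, which makes the inequality F(x−) ≤ Fλ(x) ≤ F(x) visible:

```python
# The Rueschendorf transform R_F(x, lambda) = (1 - lambda) F(x-) + lambda F(x)
# for the Bernoulli(0.3) distribution function (illustrative choice of F).

def F(x):                        # F(x) = P(X <= x) for X ~ Bernoulli(0.3)
    if x < 0:  return 0.0
    if x < 1:  return 0.7
    return 1.0

def F_left(x):                   # left limit F(x-)
    if x <= 0: return 0.0
    if x <= 1: return 0.7
    return 1.0

def R(x, lam):                   # Rueschendorf transform F_lambda(x)
    return (1.0 - lam) * F_left(x) + lam * F(x)

# At the jump point x = 1 the transform runs linearly from F(1-) to F(1),
# so F(x-) <= F_lambda(x) <= F(x) holds for every lambda in [0, 1].
for lam in (0.0, 0.25, 0.5, 1.0):
    assert F_left(1) <= R(1, lam) <= F(1)
assert R(1, 0.5) == 0.85         # midpoint of the jump from 0.7 to 1.0
```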
Reminder on right- and left-quantile functions

Consider the generalised inverse function F∧ : (0, 1) → R, given by

F∧(α) := min{x ∈ R : F(x) ≥ α}.

Then

F∧(α) ≤ inf{x ∈ R : F(x) > α} = sup{x ∈ R : F(x) ≤ α} =: F∨(α)

and

F(F∧(α)−) ≤ α ≤ F(F∧(α)+) = F(F∧(α))

for all α ∈ (0, 1).
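The two quantile functions can be approximated by bisection for any non-decreasing F. The sketch below uses a hypothetical continuous F with a "flat piece" at level 0.5 on [1, 2] (not an example from the talk), where F∧ and F∨ visibly differ:

```python
def F(x):                        # continuous df with a flat piece at level 0.5
    if x < 0:  return 0.0
    if x <= 1: return x / 2
    if x <= 2: return 0.5        # flat piece: F(x) = 0.5 on [1, 2]
    if x <= 3: return (x - 1) / 2
    return 1.0

def F_wedge(alpha, lo=-10.0, hi=10.0, it=200):
    """Left quantile F^(alpha) = min{x : F(x) >= alpha}, by bisection."""
    for _ in range(it):
        mid = (lo + hi) / 2
        if F(mid) >= alpha: hi = mid
        else:               lo = mid
    return hi

def F_vee(alpha, lo=-10.0, hi=10.0, it=200):
    """Right quantile F^v(alpha) = inf{x : F(x) > alpha}, by bisection."""
    for _ in range(it):
        mid = (lo + hi) / 2
        if F(mid) > alpha:  hi = mid
        else:               lo = mid
    return hi

# On the flat piece {x : F(x) = 0.5} = [1, 2] the two quantiles differ:
assert abs(F_wedge(0.5) - 1.0) < 1e-6
assert abs(F_vee(0.5) - 2.0) < 1e-6
# Away from flat pieces they coincide, and F^ <= F^v always:
assert F_wedge(0.3) <= F_vee(0.3) + 1e-9
```

Bisection is only a numerical stand-in here; the point is that F∧(α) < F∨(α) exactly on the flat pieces of F.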
Rüschendorf's proof of Sklar's Theorem I

Rüschendorf's proof of Sklar's Theorem (cf. [5]) is completely built on the following result:

Theorem (Rüschendorf 2009). Let X, V be two random variables, both defined on the same probability space (Ω, F, P), such that V ∼ U(0, 1) and V is independent of X. Let F = FX be the distribution function of X. Let

FV(X) := RF(X, V) = F(X−) + V (F(X) − F(X−))

be the distributional transform of X.

(i) FV(X) ∼ U(0, 1), i.e. FV(X) is a uniformly distributed random variable.

(ii) Moreover,

X = F∧(FV(X)) = F∧(F(X−) + V (F(X) − F(X−)))  P-a.s.
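Both parts of the theorem can be checked by Monte Carlo simulation. The sketch below (an illustrative choice, not from the talk) takes X ~ Bernoulli(0.3), so that F = FX has jumps, samples an independent V ~ U(0, 1), and verifies that F∧(FV(X)) recovers X exactly while the empirical distribution of FV(X) is close to U(0, 1):

```python
import random
random.seed(0)

# Monte-Carlo sketch of Rueschendorf's theorem for X ~ Bernoulli(0.3):
# F_V(X) should be U(0,1)-distributed and F^(F_V(X)) should recover X.

def F(x):       return 0.0 if x < 0 else (0.7 if x < 1 else 1.0)
def F_left(x):  return 0.0 if x <= 0 else (0.7 if x <= 1 else 1.0)
def F_wedge(a): return 0 if a <= 0.7 else 1   # min{x : F(x) >= a}, a in (0,1)

n = 100_000
samples = []
for _ in range(n):
    x = 0 if random.random() < 0.7 else 1     # X ~ Bernoulli(0.3)
    v = random.random()                       # V ~ U(0,1), independent of X
    u = F_left(x) + v * (F(x) - F_left(x))    # distributional transform F_V(X)
    assert F_wedge(u) == x                    # part (ii): X = F^(F_V(X))
    samples.append(u)

# Part (i): the empirical df of F_V(X) should be close to the U(0,1) df.
for a in (0.2, 0.5, 0.8):
    emp = sum(s <= a for s in samples) / n
    assert abs(emp - a) < 0.01
```

Note how the randomisation works: on {X = 0} the transform spreads the jump mass uniformly over (0, 0.7], and on {X = 1} over (0.7, 1], which is exactly what makes FV(X) uniform despite the jumps of F.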
Rüschendorf's proof of Sklar's Theorem II

Proof of Sklar's Theorem. Let i ∈ {1, 2, ..., n} and Vi ∼ U(0, 1). On {0 < Vi ≤ 1} put

Ui := RFi(Xi, Vi) = Fi(Xi−) + Vi ∆Fi(Xi).

According to Rüschendorf's theorem on the distributional transform there exist null sets M1, M2, ..., Mn ∈ F such that on Ω \ Mi ⊆ {0 < Vi ≤ 1} the value Zi := Fi∧(Ui) is well-defined and satisfies Xi = Zi, for every i ∈ {1, 2, ..., n}. Thus P(M) = 0, where M := M1 ∪ M2 ∪ ... ∪ Mn. Clearly, Xi = Zi holds on Ω \ M for all i "altogether".
Rüschendorf's proof of Sklar's Theorem III

Proof (ctd). Consider the copula

CF(γ1, ..., γn) := P(U1 ≤ γ1, ..., Un ≤ γn),

where (γ1, ..., γn)⊤ ∈ [0, 1]^n. Since

{u ∈ (0, 1) : u ≤ Fi(xi)} = {u ∈ (0, 1) : Fi∧(u) ≤ xi}

for all i ∈ {1, 2, ..., n} and P(M) = 0, it follows that

CF(F1(x1), ..., Fn(xn)) = P({Z1 ≤ x1, ..., Zn ≤ xn} ∩ (Ω \ M))
                        = P({X1 ≤ x1, ..., Xn ≤ xn} ∩ (Ω \ M))
                        = F(x1, ..., xn).
The distribution of FV(X) I

Next we will see that part (i) of Rüschendorf's theorem on the distributional transform can be viewed as a corollary of a more general statement which is of interest in its own right.

To this end we fix an arbitrary distribution function F and α ∈ (0, 1) and consider the following non-empty subset of the "strip" R × (0, 1]:

Aα ≡ Aα[F] := {(x, λ) ∈ R × (0, 1] : RF(x, λ) ≤ α}.

The following specification of Aα[F] - strongly involving the right-quantile function F∨ - will help us considerably to calculate the probability P((X, V) ∈ Aα[F]).

Note that in our approach F need not coincide with the distribution function of X!
The distribution of FV(X) II

Lemma. Let α ∈ (0, 1) and let F be an arbitrary distribution function. Put ξ := F∧(α), η := F∨(α), q := F(ξ−), β := ∆F(ξ) and γ := ∆F(η). Then

Aα = (−∞, η] × (0, 1]                               if γ = 0,
Aα = (−∞, η) × (0, 1]                               if ξ < η and γ > 0,
Aα = (−∞, ξ) × (0, 1] ·∪ {ξ} × (0, (α − q)/β]       if ξ = η and γ (= β) > 0.

The proof of this lemma is built on the following accurate description of all "flat pieces" of F.
The distribution of FV(X) III

Lemma. Let 0 < λ ≤ 1 and α ∈ (0, 1). Put ξ := F∧(α) and η := F∨(α).

(i) If ξ < η, then F(η−) = F(ξ) = α = F(F∧(α)), and

∅ ≠ {x ∈ R : F(x) = α} = [ξ, η)   if F(η) > α,
∅ ≠ {x ∈ R : F(x) = α} = [ξ, η]   if F(η) = α.

(ii) If ξ = η, then

{x ∈ R : F(x) = α} = ∅     if F(η) > α,
{x ∈ R : F(x) = α} = {ξ}   if F(η) = α.

Observation. ξ < η if and only if there are x1 ≠ x2 such that F(x1) = α = F(x2).
The distribution of FV(X) IV

Let 0 < α < 1 and let X, V be two random variables, both defined on the same probability space (Ω, F, P), such that V ∼ U(0, 1) and V is independent of X. By construction

P(FV(X) ≤ α) = P(FV(X) ≤ α and V ∈ (0, 1]) = P((X, V) ∈ Aα).

Hence:

Proposition. Let F : R → [0, 1] be an arbitrary distribution function and let α ∈ (0, 1). Put ξ := F∧(α), η := F∨(α) and β := ∆F(ξ). Let X, V be two random variables, both defined on the same probability space (Ω, F, P), such that V ∼ U(0, 1) and V is independent of X.

(i) If ξ < η, then

P(FV(X) ≤ α) = P(X < η) + 1{F = α}(η) P(X = η) = P(X < ξ) + P(F(X) = α).
The distribution of FV(X) V

Proposition (ctd).

(ii) If ξ = η and β = 0, then

P(FV(X) ≤ α) = P(X ≤ ξ).

(iii) If ξ = η and β > 0, then

P(FV(X) ≤ α) = P(X ≤ ξ) − ((F(ξ) − α)/β) P(X = ξ).

Observation. In any of the three cases the right-hand side of the representation of P(FV(X) ≤ α) is completely independent of the choice of the random variable V ∼ U(0, 1).
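Case (iii) of the proposition can be tested with F ≠ FX. In the following illustrative setup (my own choice of distributions, not from the talk) F is the Bernoulli(0.3) distribution function while X ~ Bernoulli(0.5). For α = 0.4 one finds ξ = η = 0 and β = ∆F(0) = 0.7, so the proposition predicts P(FV(X) ≤ α) = P(X ≤ 0) − ((F(0) − α)/β) P(X = 0) = 0.5 − (0.3/0.7) · 0.5 = 2/7:

```python
import random
random.seed(1)

# Checking case (iii) of the proposition with F NOT equal to F_X:
# F is the Bernoulli(0.3) df, while X ~ Bernoulli(0.5). For alpha = 0.4
# we have xi = eta = 0, beta = Delta F(0) = 0.7, and the claimed value is
#   P(X <= 0) - ((F(0) - alpha) / beta) * P(X = 0) = 0.5 - (0.3/0.7)*0.5 = 2/7.

def F(x):       return 0.0 if x < 0 else (0.7 if x < 1 else 1.0)
def F_left(x):  return 0.0 if x <= 0 else (0.7 if x <= 1 else 1.0)

alpha, n, hits = 0.4, 200_000, 0
for _ in range(n):
    x = 0 if random.random() < 0.5 else 1     # X ~ Bernoulli(0.5)
    v = random.random()                       # V ~ U(0,1), independent of X
    u = F_left(x) + v * (F(x) - F_left(x))    # F_V(X) with this fixed F
    hits += (u <= alpha)

assert abs(hits / n - 2 / 7) < 0.01           # 2/7 ~ 0.2857
```

The same value also follows directly: on {X = 0} we have FV(X) = 0.7 V ≤ 0.4 with probability 4/7, and on {X = 1} we have FV(X) > 0.7.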
The special case F = FX I

Let ξ < η or β = 0. Then

F(ξ) = α = F(η−).

Hence, if F = FX coincides with the distribution function of X, it follows that

P(X ≤ ξ) = α = P(X < η).

In particular, if ξ < η:

P(F(X) = α) = P(X < η) − P(X < ξ) = P(X = ξ).

Corollary. Let α ∈ (0, 1) and let X, V be two random variables, both defined on the same probability space (Ω, F, P), such that V ∼ U(0, 1) and V is independent of X. Let F = FX be the distribution function of X. Then FV(X) is uniformly distributed.
The special case F = FX II

Corollary (ctd). Moreover, if F has at least one "flat piece", then

P(F(X) ≤ α) = α = P(X ≤ F∧(α))

on the set {α ∈ (0, 1) : F∧(α) < F∨(α)}.

Observation. Let F = FX be the distribution function of a given random variable X. TFAE:

(i) F is continuous.
(ii) F(X) is uniformly distributed over (0, 1).

Consequently, even if F has non-zero jumps, FV(X) is still uniformly distributed over (0, 1), as opposed to F(X).
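The contrast between F(X) and FV(X) is easy to see in simulation. With the illustrative choice F = FX for X ~ Bernoulli(0.3) (again my own toy example), F(X) takes only the two values 0.7 and 1, so it cannot be uniform, while FV(X) passes the uniformity check:

```python
import random
random.seed(2)

# Contrast for a df with jumps (F = F_X, X ~ Bernoulli(0.3)):
# the plain probability transform F(X) is NOT uniform, but F_V(X) is.

def F(x):       return 0.0 if x < 0 else (0.7 if x < 1 else 1.0)
def F_left(x):  return 0.0 if x <= 0 else (0.7 if x <= 1 else 1.0)

n = 100_000
plain = rand_tr = 0
for _ in range(n):
    x = 0 if random.random() < 0.7 else 1
    v = random.random()                                   # V ~ U(0,1)
    plain += (F(x) <= 0.5)                                # {F(X) <= 0.5}
    rand_tr += (F_left(x) + v * (F(x) - F_left(x)) <= 0.5)  # {F_V(X) <= 0.5}

assert plain == 0                      # F(X) only takes the values 0.7 and 1
assert abs(rand_tr / n - 0.5) < 0.01   # F_V(X) behaves like U(0,1)
```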
The special case F = FX III

Proof. By contradiction, assume that (ii) holds and (i) is false. Then there exists x0 ∈ R such that

0 < ∆F(x0) = F(x0) − F(x0−) = P(X = x0) ≤ P(F(X) = F(x0)) = 0,

which is a contradiction.
The special case F = FX IV

Theorem. Let X, V be two random variables, defined on the same probability space (Ω, F, P), such that V ∼ U(0, 1) and V is independent of X. Let F = FX be the distribution function of the random variable X. Then

X = F∧(FV(X)) = F∧(F(X−) + V ∆F(X))  P-almost surely.

If in addition P(0 < F(X) < 1) = 1 (for example, if F is continuous), then

X = F∧(F(X))  P-almost surely.
The special case F = FX V

Proof (sketch). Since V ∼ U(0, 1), the image measure µFV := P(V ∈ ·) coincides with the Lebesgue measure m on B((0, 1]).

Thus, by applying the Fubini-Tonelli Theorem to non-negative (or bounded) Borel functions on (R², B(R²), µFV ⊗ µFX), we first recognise that there exists an m-null set L ∈ B((0, 1]) such that for all λ ∈ A := (0, 1] \ L there exists a Borel set Nλ ⊆ R such that P(X ∈ Nλ) = 0 and for any x ∈ R \ Nλ the value F∧(Fλ(x)) is well-defined and satisfies F∧(Fλ(x)) = x.
The special case F = FX VI

Proof (sketch, ctd). Hence, since

P(X ∈ NV and V ∈ A) = ∫_A P(X ∈ Nλ) m(dλ) = 0,

it follows that

X = F∧(FV(X)) = F∧(F(X−) + V ∆F(X))

on the set Ω \ N ⊆ {0 < V ≤ 1}, where N := {V ∉ A} ·∪ {X ∈ NV and V ∈ A}.

The second part of the claim follows from

X(ω) = F∧(F(X(ω)−) + V(ω) ∆F(X(ω))) ≤ F∧(F(X(ω))) ≤ X(ω)

outside of a P-null set Ñ satisfying Ñ ⊆ N.
Not "confusing". Rather "quite hidden", I would suggest.

- W. F. Darsow, B. Nguyen and E. T. Olsen (1996)
A very few references I

[1] P. Embrechts and M. Hofert. A note on generalized inverses. Mathematical Methods of Operations Research, 77 (3), 423-432 (2013).

[2] T. S. Ferguson. Mathematical Statistics: A Decision Theoretic Approach. Probability and Mathematical Statistics, vol. 1, Academic Press, New York (1967).

[3] J. F. Mai and M. Scherer. Simulating Copulas. Imperial College Press, London (2012).
A very few references II

[4] F. Oertel. An analysis of the Rüschendorf transform - with a view towards Sklar's Theorem. Dependence Modeling, Vol. 3, No. 1, 113-125 (2015).

[5] L. Rüschendorf. On the distributional transform, Sklar's theorem, and the empirical copula process. Journal of Statistical Planning and Inference, 139 (11), 3921-3927 (2009).

[6] B. Schweizer and A. Sklar. Operations on distribution functions not derivable from operations on random variables. Studia Mathematica, 52, 43-52 (1974).
Thank you for your attention!
Are there any questions, comments or remarks?