On the Monge–Ampère equation for characterizing gamma-Gaussian model


Statistics and Probability Letters 83 (2013) 1692–1698


On the Monge–Ampère equation for characterizing gamma-Gaussian model

Célestin C. Kokonendji a,∗, Afif Masmoudi b

a University of Franche-Comté, Besançon, France
b University of Sfax, Tunisia

Article info

Article history:
Received 25 October 2012
Accepted 28 March 2013
Available online 11 April 2013

MSC:
primary 62H05, 60E07
secondary 62F10, 62H99

Keywords:
Covariance matrix
Determinant
Generalized variance estimator
Log-Laplace transform
Multivariate natural exponential model

Abstract

We study the k-dimensional gamma-Gaussian model (k > 1), composed of the distributions of random vectors X = (X1, X2, . . . , Xk)⊤ where X1 is univariate gamma distributed and, given X1, the components (X2, . . . , Xk) are k − 1 real independent Gaussian variables with variance X1. We first solve a particular Monge–Ampère equation which characterizes this gamma-Gaussian model through the determinant of its covariance matrix, named the generalized variance function. Then, we show that its modified Lévy measure is of the same type, which allows us to prove a conjecture on generalized variance estimators of the gamma-Gaussian model. Finally, we provide reasonable extensions of the model and corresponding problems.

© 2013 Elsevier B.V. All rights reserved.

1. Introduction

For an unknown smooth function K : Θ ⊆ R^k → R, k ≥ 2, the famous Monge–Ampère equation is defined by

det K′′(θ) = r(θ),    (1.1)

where K′′ = (D²ij K)i,j=1,...,k denotes the Hessian matrix of K and r is a given positive function (see, e.g., Gutiérrez, 2001, for some details). This class of equations (1.1) has been the source of intense investigation related to many areas of mathematics, such as geometry and optimal transportation (e.g., Villani, 2003, and the references therein). Numerical solutions have also been examined (e.g., Loeper and Rapetti, 2005). However, explicit solutions remain generally challenging. In particular, for r = 1 in (1.1), the proofs of the basic Monge–Ampère result that ‘‘any strictly convex smooth function K on R^k such that det K′′(θ) = 1 must be a quadratic form’’ were progressive, different, and dimension-dependent: Jörgens (1954) for k = 2, Calabi (1958) for k = 3, 4, 5, and Pogorelov (1972) for k ≥ 6; see also Cheng and Yau (1986) for a shorter and more analytical proof, and Caffarelli and Li (2004) for its extension to positive periodic functions.

Here, we are interested in the following form of the right-hand member r in (1.1):

det K′′(θ) = exp{a Kµ(θ) + b⊤θ + c},  ∀θ ∈ Θ ⊆ R^k,    (1.2)

∗ Correspondence to: Université de Franche-Comté, UFR Sciences et Techniques, Laboratoire de Mathématiques de Besançon - UMR 6623 CNRS-UFC, 16 route de Gray, 25030 Besançon cedex, France. Tel.: +33 381 666 341; fax: +33 381 666 623.

E-mail addresses: [email protected], [email protected] (C.C. Kokonendji), [email protected] (A. Masmoudi).

0167-7152/$ – see front matter © 2013 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.spl.2013.03.023


where K is an unknown cumulant function (or log-Laplace transform) to be determined, (a, b, c) ∈ R × R^k × R is a given triple of constants, and Kµ is a given cumulant function on the same domain Θµ = Θ for some σ-finite positive (or probability) measure µ on R^k; see, for example, Roberts and Kaufman (1966). Let us recall that, in the framework of the probability model F = F(µ) = {Pθ,µ(dx) := exp[x⊤θ − Kµ(θ)]µ(dx); θ ∈ Θµ}, named the natural exponential family (NEF) generated by µ ∈ M(R^k) not concentrated on an affine subset of R^k, if X is a random vector distributed as Pθ,µ then Eθ(X) = K′µ(θ) = (Di Kµ(θ))i=1,...,k and varθ(X) = K′′µ(θ). The function m(θ) = K′µ(θ) is a one-to-one transformation from Θµ onto MF := K′µ(Θµ), and thus m = m(θ) provides an alternative parametrization of the family F = {P(m, F); m ∈ MF}, called the mean parametrization. Note that MF depends only on F, and not on the choice of the generating measure µ of F. The covariance matrix of P(m, F) can be written as a function of the mean parameter m, VF(m) = K′′µ(θ(m)), called the variance function of F. Together with the mean domain MF, VF characterizes F within the class of all NEFs; see, e.g., Casalis (1996) and Kotz et al. (2000, Chapter 54) for more details. However, the so-called generalized variance function

det VF(m) = det K′′µ(θ(m))    (1.3)

does not characterize the NEF F = F(µ), and the corresponding Monge–Ampère equation must be solved case by case. Equations of type (1.2) first appeared in the classification of the simple quadratic variance functions VF(m) = α m m⊤ + B(m) + C, where α ∈ R, and B(m) and C are real symmetric (k × k) matrices with, respectively, linear and constant entries in m ∈ MF ⊆ R^k (Casalis, 1996). Then, as a practical use in statistics, Kokonendji and Pommeret (2007, Conjecture 1) conjectured that, given (1.3), the models from (1.2) admit a simple expression for the uniformly minimum variance unbiased (UMVU) estimator of the generalized variance. Particular solutions of (1.2), obtained by quite different approaches, can be found in Kokonendji and Seshadri (1996) for Gaussians with a = 0 and b = 0, Kokonendji and Masmoudi (2006) for Poisson–Gaussians with a = 0, and recently Ghribi and Masmoudi (2010) for a multinomial model with a < 0.

This paper focuses on one of the basic cases of (1.2), with a > 0 and such that the mean parametrization (1.3) is simply expressed as

det VF(m) = m1^{k+1},  ∀m = (m1, . . . , mk)⊤ ∈ (0, ∞) × R^{k−1}.    (1.4)

From Kokonendji and Pommeret (2007, Tables 1–3), the candidate for the characterization problem (1.4) is the gamma-Gaussian (or normal gamma) model, denoted (NM − Ga)0. Since the gamma-Gaussian model is known to be infinitely divisible (e.g., Hassairi, 1999), we also investigate its modified Lévy measure ρ(µ), satisfying

Kρ(µ)(θ) = log det K′′µ(θ),  ∀θ ∈ Θµ,    (1.5)

in order to completely solve the corresponding conjecture for this model F = F(µ). In Section 2, we show analytically the result of the gamma-Gaussian characterization by its generalized variance function (1.4). Section 3 is devoted to some concluding remarks related to the modified Lévy measure (1.5) of the gamma-Gaussian distribution, its practical use for estimating the generalized variance (thereby solving a conjecture), and a reasonable extension of the model.
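Before the formal characterization, note that the model is straightforward to simulate: X1 is gamma distributed and, given X1, the remaining k − 1 coordinates are independent N(0, X1). The following sketch is our own illustration (not from the paper); it checks by simulation that, for the base probability measure with unit power (p = 1, so m = (1, 0, . . . , 0)⊤), the generalized variance equals m1^{k+1} = 1, as in (1.4).

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 400_000

# X1 ~ Gamma(shape=1, scale=1); given X1, (X2, ..., Xk) are i.i.d. N(0, X1)
x1 = rng.gamma(shape=1.0, scale=1.0, size=n)
rest = rng.normal(size=(n, k - 1)) * np.sqrt(x1)[:, None]
X = np.column_stack([x1, rest])

m1 = X[:, 0].mean()                                  # close to 1
gen_var = np.linalg.det(np.cov(X, rowvar=False))     # close to m1**(k+1) = 1
print(m1, gen_var)
```

At this base point the covariance matrix is the identity, so its determinant is 1; other mean points of the family are reached by exponential tilting of the base measure.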

2. Result

According to Casalis (1996), for a k-dimensional gamma-Gaussian random vector X = (X1, X2, . . . , Xk)⊤ ∼ (NM − Ga)0 it must hold that X1 is univariate gamma distributed, and the components (X2, . . . , Xk) =: X1^c given X1 are k − 1 real independent Gaussian variables with variance X1. Note here that Bernardo and Smith (1993) used the bivariate case (k = 2) in Bayesian theory. The corresponding NEF is generated by

µp(dx) = [x1^{p−1} / ((2πx1)^{(k−1)/2} Γ(p))] exp{−x1 − (1/(2x1)) Σ_{j=2}^k xj²} 1_{x1>0} dx1 · · · dxk,    (2.1)

for a fixed power of convolution p > 0; the family is denoted Fp = F(µp) with µp := µ^{∗p}, where Γ(·) is the gamma function and 1A is the indicator function of the set A. Then

Kµp(θ) = −p log(−θ1 − (1/2) Σ_{j=2}^k θj²),  Θ(µp) = {θ ∈ R^k ; θ1 + Σ_{j=2}^k θj²/2 < 0},    (2.2)

and

VFp(m) = (1/p) m m⊤ + Diagk(0, m1, . . . , m1),  MFp = (0, ∞) × R^{k−1}.    (2.3)

Consequently, its generalized variance function is

det VFp(m) = (1/p) m1^{k+1},  m ∈ MFp = (0, ∞) × R^{k−1}.    (2.4)
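Both the closed form (2.4) and the block-determinant argument behind it are easy to verify numerically. A small sketch (our own illustration; the particular numbers are arbitrary):

```python
import numpy as np

p = 2.5
m = np.array([1.5, -0.7, 2.0, 0.3])   # m1 > 0, so k = 4 here
k = len(m)

# V_{Fp}(m) from (2.3)
V = np.outer(m, m) / p + m[0] * np.diag([0.0] + [1.0] * (k - 1))
det_direct = np.linalg.det(V)

# block (Schur) formula: det [[gamma, a^T], [a, A]] = gamma * det(A - a a^T / gamma)
gamma, a, A = V[0, 0], V[0, 1:], V[1:, 1:]
det_schur = gamma * np.linalg.det(A - np.outer(a, a) / gamma)

print(det_direct, det_schur, m[0] ** (k + 1) / p)   # all three agree
```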


Indeed, from (2.3) one can use the following particular Schur representation of the determinant,

det [ γ  a⊤ ; a  A ] = γ det(A − γ^{−1} a a⊤),    (2.5)

with the non-null scalar γ = m1²/p, the vector a = (m1/p)(m2, . . . , mk)⊤, and the (k − 1) × (k − 1) matrix A = γ^{−1} a a⊤ + m1 Ik−1, where Ij = Diagj(1, . . . , 1) is the j × j unit matrix. Note that for the unit power p = 1 in (2.4) one gets (1.4). Conversely, we now show our main result, which is stated as follows:

Theorem 2.1. Let k ∈ {2, 3, . . .} and p > 0. If an NEF Fp satisfies (2.4), then, up to affinity, Fp is the gamma-Gaussian model (2.3).

Let us first compile some properties relating two connected NEFs, which are needed to keep the paper as self-contained as possible.

Proposition 2.1. Let µ and µ′ be two σ-finite positive measures on R^k such that F = F(µ), F′ = F(µ′), and m ∈ MF.

(i) If there exists (d, c) ∈ R^k × R such that µ′(dx) = exp{d⊤x + c}µ(dx), then F = F′ with Θµ′ = Θµ − d and, for m′ = m ∈ MF,
det VF′(m′) = det VF(m).

(ii) If µ′ = ϕ∗µ is the image measure of µ by the affine transformation ϕ(x) = Ax + b, where A is a k × k non-degenerate matrix and b ∈ R^k, then, for m′ = Am + b ∈ ϕ(MF),
det VF′(m′) = (det A)² det VF(m).

(iii) If µ′ = µ^{∗p} is the p-th convolution power of µ for p > 0, then, for m′ = pm ∈ pMF,
det VF′(m′) = p^k det VF(m).

Parts (i) and (ii) of Proposition 2.1 show, respectively, that the generalized variance function det VF(m) of F is invariant under any change of basis (or generating measure) of F, and under affine transformations ϕ(x) = Ax + b with det A = ±1, in particular under a translation x → x + b.

Coming back to the proof of Theorem 2.1: Casalis (1996) has already characterized the gamma-Gaussian NEF by its variance function (up to affinity and to power), that is, from (2.3) to (2.2) and then (2.1). She also linked any variance function to the Monge–Ampère equation (1.2), as in the following lemma. In what follows we denote by (ei)i=1,...,k an orthonormal basis of R^k.

Lemma 2.1. For a given variance function V on M ⊆ R^k, there is equivalence between (1.2) and

Σ_{i=1}^k (V′(m)ei)ei = am + b,  ∀m ∈ M.
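Here V′(m)ei is the directional derivative of V at m along ei. For the gamma-Gaussian variance function (2.3), the identity of Lemma 2.1 can be checked by finite differences, with a = (k + 1)/p and b = 0; the sketch below is our own illustration (the helper names are ours, and the derivative is only approximated numerically).

```python
import numpy as np

def V(m, p):
    """Gamma-Gaussian variance function (2.3)."""
    k = len(m)
    return np.outer(m, m) / p + m[0] * np.diag([0.0] + [1.0] * (k - 1))

def lemma_lhs(m, p, h=1e-6):
    """sum_i (V'(m) e_i) e_i, with V'(m) e_i approximated by central differences."""
    k = len(m)
    out = np.zeros(k)
    for i in range(k):
        e = np.zeros(k)
        e[i] = 1.0
        dV = (V(m + h * e, p) - V(m - h * e, p)) / (2 * h)   # matrix dV/dm_i
        out += dV @ e
    return out

m, p = np.array([1.2, 0.4, -0.8]), 2.0
print(lemma_lhs(m, p), (len(m) + 1) / p * m)   # a*m + b with a = (k+1)/p, b = 0
```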

Applying Lemma 2.1 to the gamma-Gaussian model (2.3), one has a = (k + 1)/p and b = 0 in the Monge–Ampère equation (1.2). Without loss of generality we assume c = 0 in (1.2); then, for any fixed p > 0, proving Theorem 2.1 amounts to solving (in K, or in V = K′′)

exp{[(k + 1)/p] K(θ(m))} = (1/p) m1^{k+1},  ∀m = (m1, . . . , mk)⊤ ∈ (0, ∞) × R^{k−1}.    (2.6)

Lemma 2.2. Let Gp = G(νp) be an NEF verifying (2.6) with variance function VGp(m) = (Vij(m))i,j=1,...,k. Then

V1j(m) = (1/p) m1 mj,  ∀j = 1, . . . , k.

Proof. Writing (2.6) as [(k + 1)/p] Kνp(θ(m)) = (k + 1) log m1 − log p and taking its derivative with respect to m, one obtains

[(k + 1)/p] m⊤ [VGp(m)]^{−1} h = (k + 1) (1/m1, 0, . . . , 0) h,  ∀h ∈ R^k,    (2.7)

since one has m = K′νp(θ) and θ′(m) = [VGp(m)]^{−1}, by the definitions of the mean parametrization and of the variance function of Gp = G(νp), respectively. In particular, putting h = VGp(m)ej in (2.7) for each j = 1, 2, . . . , k, one gets the desired result. □

For k = 1 in Lemma 2.2, this is the well-known real variance function of the univariate gamma NEF.


Lemma 2.3. Under the assumption of Lemma 2.2, there exists a function g : R^{k−1} → R+ such that, up to an additive constant,

Kνp(θ1, θ1^c) = −p log(−θ1/p − g(θ1^c))

on Θνp ⊆ {(θ1, θ1^c) ∈ R × R^{k−1} ; θ1/p + g(θ1^c) < 0}, with θ1^c := (θ2, . . . , θk).

Proof. From Lemma 2.2 we have V11(m) = m1²/p, which means

D²11 Kνp(θ1, θ1^c) = (1/p) [D1 Kνp(θ1, θ1^c)]²,

and then, by integration with respect to θ1, there exists a function g : R^{k−1} → R+ such that

D1 Kνp(θ1, θ1^c) = −1/[θ1/p + g(θ1^c)].    (2.8)

Also, the rest of the conclusion of Lemma 2.2 can be written as

D²1j Kνp(θ1, θ1^c) = (1/p) D1 Kνp(θ1, θ1^c) Dj Kνp(θ1, θ1^c),  ∀j = 2, . . . , k.    (2.9)

Using now (2.8) in (2.9), one has

D²1j Kνp(θ1, θ1^c) = Dj g(θ1^c) / [θ1/p + g(θ1^c)]² = (1/p) (−1/[θ1/p + g(θ1^c)]) Dj Kνp(θ1, θ1^c),  ∀j = 2, . . . , k.

Therefore

Dj Kνp(θ1, θ1^c) = −p Dj g(θ1^c) / [θ1/p + g(θ1^c)],  ∀j = 2, . . . , k.

By integration with respect to θ1^c one gets

Kνp(θ1, θ1^c) = −p log(−θ1/p − g(θ1^c)) + b(θ1).

Comparing with (2.8), through

−1/[θ1/p + g(θ1^c)] = −1/[θ1/p + g(θ1^c)] + b′(θ1),

one obtains b′(θ1) = 0, which implies that b(θ1) = b is a real constant. Finally, writing (2.6) as [(k + 1)/p] Kνp(θ(m)) = (k + 1) log m1 − log p, one has

[(k + 1)/p] Kνp(θ1, θ1^c) = (k + 1) log D1 Kνp(θ1, θ1^c) − log p,

that is,

−(k + 1) log(−θ1/p − g(θ1^c)) + [(k + 1)/p] b = −(k + 1) log(−θ1/p − g(θ1^c)) − log p,

and, therefore, the additive constant is b = [p/(k + 1)] log(1/p). The lemma is thus proven. □

Lemma 2.4. From Lemma 2.3, one has the three following assertions:

(i) D²11 Kνp(θ1, θ1^c) = (1/p) m1²;
(ii) D²1j Kνp(θ1, θ1^c) = (1/p) m1 mj, ∀j = 2, . . . , k;
(iii) D²ij Kνp(θ1, θ1^c) = (1/p) mi mj − (p/[θ1/p + g(θ1^c)]) (g′′(θ1^c))ij, ∀i, j = 2, . . . , k,

where mi = Di Kνp(θ1, θ1^c) for all i = 1, 2, . . . , k.

Proof. Easy. □

Lemma 2.5. The function g : R^{k−1} → R+ of Lemma 2.3 has the following properties:

(i) g is a convex function;
(ii) det g′′(θ1^c) is a real constant (it does not depend on θ1^c);
(iii) g′′(θ1^c) = Σk−1 is a symmetric positive definite matrix on R^{k−1} (it does not depend on θ1^c).


Proof. Let νp = (νp,1, νp,1^c) be a generating measure of Gp on R × R^{k−1}.

(i) Since θ1^c = (θ2, . . . , θk) → Kνp(0, θ1^c) =: Kνp,1^c(θ1^c) is the cumulant function of νp,1^c, the map θ1^c → −p log{−g(θ1^c)} = Kνp,1^c(θ1^c) is convex. This implies that

θ1^c → exp[−log{−g(θ1^c)}] = −1/g(θ1^c)

is also convex. Consequently, g is convex because g > 0.

(ii) From Lemma 2.4 and the Schur formula (2.5), we can write

det K′′νp(θ1, θ1^c) = det[(D²ij Kνp(θ1, θ1^c))i,j=1,...,k] = m1² γ0 det(A0 − γ0^{−1} a0 a0⊤)

with γ0 = 1/p, a0 = (1/p)(m2, . . . , mk)⊤ =: (1/p) m1^c and

A0 = γ0^{−1} a0 a0⊤ − (p/[θ1/p + g(θ1^c)]) g′′(θ1^c).

Hence

det K′′νp(θ1, θ1^c) = (m1²/p) det(−(p/[θ1/p + g(θ1^c)]) g′′(θ1^c))
= (m1²/p) (p/[−θ1/p − g(θ1^c)])^{k−1} det g′′(θ1^c)
= p^{k−2} m1^{k+1} det g′′(θ1^c),

because m1 = −1/[θ1/p + g(θ1^c)] by (2.8). Since det K′′νp(θ1, θ1^c) = p^{−1} m1^{k+1} must hold, we finally deduce that

det g′′(θ1^c) = p^{−k+1}

does not depend on θ1^c ∈ Θνp,1^c.

(iii) This is now immediate from Part (ii) and the Jörgens–Calabi–Pogorelov result. □

In order to conclude the proof of the main result, we need to show that the model Gp = G(νp) of Lemmas 2.2–2.5 corresponds to Fp of Theorem 2.1 up to a linear transformation.

Indeed, from g′′(θ1^c) = Σk−1 in Part (iii) of Lemma 2.5, together with Lemma 2.4 and m1 = −1/[θ1/p + g(θ1^c)] ∈ (0, ∞), one has

VGp(m) = K′′νp(θ1, θ1^c) = [ γp  ap⊤ ; ap  Ap ]

with γp = m1²/p, ap = (m1/p)(m2, . . . , mk)⊤ =: (m1/p) m1^c and

Ap = (1/p) m1^c (m1^c)⊤ − (p/[θ1/p + g(θ1^c)]) g′′(θ1^c) = (1/p) m1^c (m1^c)⊤ + m1 p Σk−1.

Therefore

VGp(m) = (1/p) m m⊤ + m1 [ 0  0k−1⊤ ; 0k−1  p Σk−1 ],

where 0k−1 = (0, . . . , 0)⊤ is the null vector of R^{k−1} and p > 0. By Cholesky's decomposition, there exists a triangular matrix T such that p Σk−1 = T T⊤. From Part (ii) of Proposition 2.1, let Gp,0 be the image NEF of Gp under the linear transformation x → Bx of R^k with

B = [ 1  0k−1⊤ ; 0k−1  T^{−1} ],


where T^{−1} is the (k − 1) × (k − 1) inverse matrix of T. Then, by Formula (54.14) of Kotz et al. (2000), one can verify that Gp,0 = Fp, with variance function (2.3), as follows:

VGp,0(m) := B VGp(B^{−1}m) B⊤
= (1/p) m m⊤ + (B^{−1}m)1 B [ 0  0k−1⊤ ; 0k−1  p Σk−1 ] B⊤
= (1/p) m m⊤ + m1 [ 1  0k−1⊤ ; 0k−1  T^{−1} ] [ 0  0k−1⊤ ; 0k−1  T T⊤ ] [ 1  0k−1⊤ ; 0k−1  (T^{−1})⊤ ]
= (1/p) m m⊤ + m1 Diagk(0, 1, . . . , 1).

The proof of Theorem 2.1 is now complete.
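The final reduction can also be replayed numerically: for an arbitrary symmetric positive definite Σk−1, the transformation x → Bx with p Σk−1 = TT⊤ carries VGp into the canonical gamma-Gaussian variance (2.3). A sketch (matrix names follow the text; the particular numbers are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
k, p = 4, 2.0
m = np.array([1.3, 0.5, -0.2, 0.9])        # a mean point with m1 > 0

# arbitrary symmetric positive definite Sigma_{k-1}
S = rng.normal(size=(k - 1, k - 1))
Sigma = S @ S.T + np.eye(k - 1)

# V_{Gp}(m) = (1/p) m m^T + m1 * blockdiag(0, p * Sigma)
block = np.zeros((k, k))
block[1:, 1:] = p * Sigma
V_G = np.outer(m, m) / p + m[0] * block

# B = blockdiag(1, T^{-1}) with p * Sigma = T T^T (Cholesky)
T = np.linalg.cholesky(p * Sigma)
B = np.zeros((k, k))
B[0, 0] = 1.0
B[1:, 1:] = np.linalg.inv(T)

m_new = B @ m                               # image mean point; first component is still m1
V_canon = B @ V_G @ B.T                     # image variance, as in the display above
target = np.outer(m_new, m_new) / p + m_new[0] * np.diag([0.0] + [1.0] * (k - 1))
print(np.allclose(V_canon, target))         # prints True
```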

3. Concluding remarks

Let X1, . . . , Xn be i.i.d. gamma-Gaussian random vectors with distribution P(m, Fp), Fp = F(µp) defined from (2.1) for fixed p > 0. Denote by X̄ = (X1 + · · · + Xn)/n = (X̄1, . . . , X̄k)⊤ the sample mean, whose first component X̄1 is positive. Then the maximum likelihood (ML) estimator of the generalized variance det VFp(m) = m1^{k+1}/p is Tn,k,p = det VFp(X̄) = X̄1^{k+1}/p, which is in general biased. For n > k, Kokonendji and Pommeret (2007) conjectured that the UMVU estimator Un,k,p of the generalized variance is proportional to the ML estimator Tn,k,p, because the Monge–Ampère equation (1.2) holds for the gamma-Gaussian model. Here we directly calculate the expression of this UMVU estimator, defined by

Un,k,p = Cn,k,p(nX̄),    (3.1)

where Cn,k,p : R^k → [0, ∞) satisfies

νn,k,p(dx) = Cn,k,p(x) µnp(dx)    (3.2)

and νn,k,p is the image measure of

[1/(k + 1)!] det[ 1 1 · · · 1 ; x1 x2 · · · xk+1 ]² µp(dx1) · · · µp(dxn)

by the map (x1, . . . , xn) → x1 + · · · + xn. Since the cumulant function of νn,k,p is

Kνn,k,p(θ) = n Kµp(θ) + log det K′′µp(θ) = Kµnp(θ) + Kρ(µp)(θ)

for all θ ∈ Θµp, with ρ(µp) defined as in (1.5), the following proposition provides the closed form of the UMVU estimator (3.1).

Proposition 3.1. Let µp be as in (2.1) for fixed p > 0. Then ρ(µp) = p^k · µk+1 and, for n > k,

Cn,k,p(x) = [µnp ∗ ρ(µp)](dx) / µnp(dx) = p^k [Γ(np)/Γ(np + k + 1)] x1^{k+1}.

Proof. From (2.2) we easily check the first result, namely

Kρ(µp)(θ) = log det K′′µp(θ) = Kµk+1(θ) + k log p

for all θ ∈ Θµp. The second result then follows from (3.2) and (2.1). □
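Proposition 3.1 yields the unbiasedness of Un,k,p by a one-line gamma-moment computation: under P(m, Fp), nX̄1 is gamma distributed with shape np and scale m1/p, so E[(nX̄1)^{k+1}] = (m1/p)^{k+1} Γ(np + k + 1)/Γ(np), and hence E[Un,k,p] = m1^{k+1}/p = det VFp(m). A short check of this arithmetic (the function names are ours):

```python
import math

def umvu_constant(n, k, p):
    """Multiplier in U_{n,k,p} = C * (n * xbar1)**(k + 1), from Proposition 3.1."""
    return p**k * math.gamma(n * p) / math.gamma(n * p + k + 1)

def moment(n, k, p, m1):
    """E[(n * xbar1)**(k + 1)] when n * xbar1 ~ Gamma(shape=n*p, scale=m1/p)."""
    return (m1 / p) ** (k + 1) * math.gamma(n * p + k + 1) / math.gamma(n * p)

n, k, p, m1 = 10, 3, 2.0, 1.7
expected_U = umvu_constant(n, k, p) * moment(n, k, p, m1)
print(expected_U, m1 ** (k + 1) / p)   # equal: the estimator is unbiased
```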

However, calculations of the quadratic risks of both generalized variance estimators Tn,k,p and Un,k,p would be very interesting for assessing their practical properties. See Kokonendji and Pommeret (2007) and Bernardoff et al. (2008) for some particular cases and asymptotic behaviors. It would also be interesting to approximate Cn,k,p(·) of (3.1) numerically for general models, in the sense of Loeper and Rapetti (2005).
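As a first step in that direction, the quadratic risks can be approximated by Monte Carlo. Both Tn,k,p and Un,k,p depend on the sample only through X̄1, and nX̄1 is gamma distributed with shape np and scale m1/p, so it suffices to simulate that scalar. A sketch under these assumptions (parameter values and names are ours):

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n, k, p, m1 = 10, 3, 2.0, 1.7
reps = 400_000

target = m1 ** (k + 1) / p                            # det V_{Fp}(m) to be estimated
C = p**k * math.gamma(n * p) / math.gamma(n * p + k + 1)

s = rng.gamma(shape=n * p, scale=m1 / p, size=reps)   # s = n * xbar1
T = (s / n) ** (k + 1) / p                            # ML estimator
U = C * s ** (k + 1)                                  # UMVU estimator (3.1)

print(U.mean() - target)                              # close to 0: U is unbiased
print(T.mean() - target)                              # clearly positive: T is biased
print(((T - target) ** 2).mean(), ((U - target) ** 2).mean())   # estimated quadratic risks
```

For this configuration the UMVU estimator also shows the smaller estimated quadratic risk; the comparison for general (n, k, p) is exactly the open question raised above.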

Finally, combining the single power (q1 = k + 1) of the gamma-Gaussian generalized variance (1.4) with the unit j-multiple of the j-Poisson–Gaussian one, det VPGj(m) = m1 · · · mj for j = 0, 1, . . . , k (see Kokonendji and Masmoudi, 2006), it would be reasonable to investigate the general case of models whose generalized variance functions can be written as

det V(m) = m1^{q1} · · · mj^{qj},  ∀m = (m1, . . . , mk)⊤ ∈ M ⊆ R^k,

for given j ∈ {0, 1, . . . , k} and qs ∈ R, s = 1, 2, . . . , j. Work in this direction is in progress.


Acknowledgment

We are grateful to Salomé Friedel for her attentive reading of an earlier draft of this paper.

References

Bernardo, J.M., Smith, A.F.M., 1993. Bayesian Theory. Wiley, New York.
Bernardoff, P., Kokonendji, C.C., Puig, B., 2008. Generalized variance estimators in the multivariate gamma models. Math. Methods Statist. 17, 66–73.
Caffarelli, L., Li, Y.Y., 2004. A Liouville theorem for solutions of the Monge–Ampère equation with periodic data. Ann. Inst. H. Poincaré Anal. Non Linéaire 21, 97–120.
Calabi, E., 1958. Improper affine hyperspheres of convex type and a generalization of a theorem by K. Jörgens. Michigan Math. J. 5, 105–126.
Casalis, M., 1996. The 2d + 4 simple quadratic natural exponential families on R^d. Ann. Statist. 24, 1828–1854.
Cheng, S.Y., Yau, S.T., 1986. Complete affine hypersurfaces: I. The completeness of affine metrics. Comm. Pure Appl. Math. 39, 839–866.
Ghribi, A., Masmoudi, A., 2010. Characterization of multinomial exponential families by generalized variance. Statist. Probab. Lett. 80, 939–944.
Gutiérrez, C.E., 2001. The Monge–Ampère Equation. Birkhäuser, Boston.
Hassairi, A., 1999. Generalized variance and exponential families. Ann. Statist. 27, 374–385.
Jörgens, K., 1954. Über die Lösungen der Differentialgleichung rt − s² = 1. Math. Ann. 127, 130–134.
Kokonendji, C.C., Masmoudi, A., 2006. A characterization of Poisson–Gaussian families by generalized variance. Bernoulli 12, 371–379.
Kokonendji, C.C., Pommeret, D., 2007. Comparing UMVU and ML estimators of the generalized variance for natural exponential families. Statistics 41, 547–558.
Kokonendji, C.C., Seshadri, V., 1996. On the determinant of the second derivative of a Laplace transform. Ann. Statist. 24, 1813–1827.
Kotz, S., Balakrishnan, N., Johnson, N.L., 2000. Continuous Multivariate Distributions, Vol. 1: Models and Applications, second ed. Wiley, New York.
Loeper, G., Rapetti, F., 2005. Numerical solution of the Monge–Ampère equation by a Newton's algorithm. C. R. Acad. Sci. Paris, Ser. I 340, 319–324.
Pogorelov, A.V., 1972. On the improper convex affine hyperspheres. Geom. Dedicata 1, 33–46.
Roberts, G.E., Kaufman, H., 1966. Table of Laplace Transforms. Saunders, London.
Villani, C., 2003. Topics in Optimal Transportation. Graduate Studies in Mathematics, vol. 58. American Mathematical Society, Providence.