Linköping Studies in Science and Technology. Thesis. No. 1531

Estimation in Multivariate Linear Models with Linearly Structured

Covariance Matrices

Joseph Nzabanita

Department of Mathematics
Linköping University, SE–581 83 Linköping, Sweden

Linköping 2012


Linköping Studies in Science and Technology. Thesis. No. 1531

Estimation in Multivariate Linear Models with Linearly Structured Covariance Matrices

Joseph Nzabanita

[email protected]

Mathematical Statistics
Department of Mathematics

Linköping University
SE–581 83 Linköping

Sweden

LIU-TEK-LIC-2012:16
ISBN 978-91-7519-886-6

ISSN 0280-7971

Copyright © 2012 Joseph Nzabanita

Printed by LiU-Tryck, Linköping, Sweden 2012


To my beloved


Abstract

This thesis focuses on the problem of estimating parameters in multivariate linear models where, in particular, the mean has a bilinear structure and the covariance matrix has a linear structure. Most techniques in statistical modeling rely on the assumption that the data were generated from a normal distribution. Although real data may not be exactly normal, the normal distribution serves as a useful approximation to the true distribution. The modeling of normally distributed data relies heavily on the estimation of the mean and the covariance matrix. The interest in considering various structures for the covariance matrices in different statistical models is partly driven by the idea that altering the covariance structure of a parametric model alters the variances of the model's estimated mean parameters.

The extended growth curve model with two terms and a linearly structured covariance matrix is considered. In general there is no problem in estimating the covariance matrix when it is completely unknown. However, problems arise when one has to take into account that there exists a structure generated by a small number of parameters. An estimation procedure that handles linearly structured covariance matrices is proposed. The idea is first to estimate the covariance matrix when it is used to define an inner product in a regression space, and thereafter to re-estimate it when it is interpreted as a dispersion matrix. This idea is exploited by decomposing the residual space, the orthogonal complement to the design space, into three orthogonal subspaces. Studying residuals obtained from projections of the observations onto these subspaces yields explicit consistent estimators of the covariance matrix. An explicit consistent estimator of the mean is also proposed, and numerical examples are given.

Models based on normally distributed random matrices are also studied in this thesis. For these models the dispersion matrix has the so-called Kronecker product structure, and they can be used, for example, to model data with spatio-temporal relationships. The aim is to estimate the parameters of the model when, in addition, one covariance matrix is assumed to be linearly structured. On the basis of n independent observations from a matrix normal distribution, estimation equations in a flip-flop relation are presented and numerical examples are given.



Popular Science Summary (Populärvetenskaplig sammanfattning)

Many statistical models are based on the assumption of normally distributed data. Real data may not be exactly normally distributed, but in many cases the normal distribution is a good approximation. Normally distributed data can be modeled solely through its mean and covariance matrix, and estimating these is therefore a problem of great interest. It is often also interesting or necessary to assume some structure on the mean and/or the covariance matrix.

This thesis focuses on the problem of estimating the parameters in multivariate linear models, in particular the extended growth curve model with two terms and a linear structure for the covariance matrix. In general it is no problem to estimate the covariance matrix when it is completely unknown. Problems arise, however, when one must take into account that there is a structure generated by a small number of parameters. In many examples the maximum likelihood estimates cannot be obtained explicitly and must therefore be computed with some numerical optimization algorithm. We derive explicit estimators as a good alternative to the maximum likelihood estimators. An estimation procedure that handles covariance matrices with linear structures is proposed. The idea is first to estimate, in two steps, a covariance matrix that is used to define an inner product, and then to estimate the final covariance matrix.

Simple growth curve models with a matrix normal distribution are also studied in this thesis. For these models the covariance matrix is a Kronecker product, and such models can be used, for example, to model data with spatio-temporal relationships. The aim is to estimate the parameters of the model when, in addition, one of the covariance matrices is assumed to follow a linear structure. Based on n independent observations from a matrix normal distribution, estimation equations are derived and solved with the so-called flip-flop algorithm.



Acknowledgments

First of all, I would like to express my deep gratitude to my supervisors Professor Dietrich von Rosen and Dr. Martin Singull. Thank you, Dietrich, for guiding and encouraging me throughout my studies. The enthusiasm you constantly show makes you the right person to work with. Thank you, Martin. You are always available to help me when I am in need, and without your help this thesis would not have been completed.

My deep gratitude also goes to Bengt Ove Turesson, to Björn Textorius, and to all the administrative staff of the Department of Mathematics for their constant help in many different matters.

I also thank my colleagues at the Department of Mathematics, especially Jolanta (my office mate), for making life easier during my studies.

My studies are sponsored through the Sida/SAREC-funded NUR-LiU Cooperation, and all involved institutions are gratefully acknowledged.

Linköping, May 21, 2012

Joseph Nzabanita



Contents

1 Introduction  1
  1.1 Background  1
  1.2 Outline  2
    1.2.1 Outline of Part I  2
    1.2.2 Outline of Part II  2
  1.3 Contributions  3

I Multivariate Linear Models  5

2 Multivariate Distributions  7
  2.1 Multivariate Normal distribution  7
  2.2 Matrix Normal Distribution  8
  2.3 Wishart Distribution  10

3 Growth Curve Model and its Extensions  13
  3.1 Growth Curve Model  13
  3.2 Extended Growth Curve Model  15
  3.3 Maximum Likelihood Estimators  16

4 Concluding Remarks  21
  4.1 Conclusion  21
  4.2 Further research  22

Bibliography  23


II Papers  27

A Paper A  29
  1 Introduction  32
  2 Maximum likelihood estimators  33
  3 Estimators of the linearly structured covariance matrix  36
  4 Properties of the proposed estimators  41
  5 Numerical examples  44
  References  48

B Paper B  51
  1 Introduction  54
  2 Explicit estimators when Σ is unknown and has a linear structure and Ψ is known  55
  3 Estimators when Σ is unknown and has a linear structure and Ψ is unknown  56
  4 Numerical examples: simulated study  59
  References  60


1 Introduction

The goals of the statistical sciences are to plan experiments, to set up models to analyze experiments, and to study the properties of these models. Statistical application is about connecting statistical models to data. Statistical models are essentially for making predictions; they form the bridge between observed data and unobserved (future) outcomes (Kattan and Gönen, 2008). The general statistical paradigm consists of the following steps: (i) set up a model, (ii) evaluate the model via simulations or comparisons with data, (iii) if necessary refine the model and restart from step (ii), and (iv) accept and interpret the model. From this paradigm it is clear that the concept of a statistical model lies at the heart of Statistics. In this thesis our focus is on linear models, a class of statistical models that plays a key role in Statistics. If exact inference is not possible, then at least an approximate linear approach can often be carried out (Kollo and von Rosen, 2005). In particular, we are concerned with the problem of estimating parameters in multivariate linear models where the covariance matrices have linear structures.

1.1 Background

Linear structures for covariance matrices emerge naturally in statistical applications and have appeared in the statistical literature for many years. Examples of these structures are the uniform structure (or intraclass structure), the compound symmetry structure, the matrix with zeros, the banded matrix, and the Toeplitz or circular Toeplitz structure. The uniform structure, a linear covariance structure which consists of equal diagonal elements and equal off-diagonal elements, emerged for the first time in Wilks (1946) when dealing with measurements on k psychological tests. An extension of the uniform structure due to Votaw (1948) is the compound symmetry structure, which consists of blocks each having uniform structure. In Votaw (1948) one can find examples of psychometric and medical research problems where the compound symmetry covariance structure is applicable. The block compound symmetry covariance structure was discussed by Szatrowski (1982), who applied the model to the analysis of an educational testing problem. Ohlson et al. (2011b) proposed a procedure to obtain explicit estimators of a banded covariance matrix. The Toeplitz or circular Toeplitz structure discussed in Olkin and Press (1969) is another generalization of the intraclass structure.
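To fix ideas, the following sketch (an illustration with arbitrary parameter values, not taken from the thesis) constructs a few of the linearly structured covariance matrices mentioned above; each is generated by only a small number of parameters.

```python
import numpy as np
from scipy.linalg import toeplitz, circulant

p = 4

# Uniform (intraclass) structure: equal diagonal and equal off-diagonal elements,
# generated by only two parameters (sigma2, rho).
sigma2, rho = 2.0, 0.5
uniform = sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))

# Banded structure: entries more than one step off the diagonal are zero.
banded = toeplitz([2.0, 0.7, 0.0, 0.0])

# Circular Toeplitz structure: each row is a cyclic shift of the previous one.
circ = circulant([2.0, 0.6, 0.3, 0.6])

print(uniform, banded, circ, sep="\n\n")
```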

The interest in considering various structures for the covariance matrices in different statistical models is partly driven by the idea that altering the covariance structure of a parametric model alters the variances of the model's estimated mean parameters (Lange and Laird, 1989). In this thesis we focus on the problem of estimating parameters in multivariate linear models where, in particular, the mean has a bilinear structure as in the growth curve model (Potthoff and Roy, 1964) and the covariance matrix has a linear structure. Linearly structured covariance matrices in the growth curve model have been studied in the statistical literature. For example, Khatri (1973) considered the intraclass covariance structure, and Ohlson and von Rosen (2010) studied the classical growth curve model when the covariance matrix has some specific linear structure.

The main themes of this thesis are (i) to derive explicit estimators of the parameters in the extended growth curve model with two terms when the covariance matrix is linearly structured, and (ii) to propose estimation equations for the parameters in multivariate linear models with a mean which has a bilinear structure and a Kronecker covariance structure, where one of the covariance matrices has a linear structure.

1.2 Outline

This thesis consists of two parts and the outline is as follows.

1.2.1 Outline of Part I

In Part I the background and relevant results needed for an easy reading of this thesis are presented. Part I starts with Chapter 2, which gives a brief review of multivariate distributions. The main focus is to define the multivariate normal distribution, the matrix normal distribution and the Wishart distribution. The maximum likelihood estimators in the multivariate normal model and the matrix normal model, for the unstructured cases, are given. Chapter 3 is devoted to the growth curve model and the extended growth curve model. The maximum likelihood estimators, for the unstructured cases, are presented. Part I ends with Chapter 4, which gives some concluding remarks and suggestions for further work.

1.2.2 Outline of Part II

Part II consists of two papers. A short summary of each paper is presented below.

Paper A: Estimation of parameters in the extended growth curve model with a linearly structured covariance matrix

In Paper A, the extended growth curve model with two terms and a linearly structured covariance matrix is studied. More specifically, the model considered is defined as follows. Let X : p × n, Ai : p × qi, Bi : qi × ki, Ci : ki × n, r(C1) + p ≤ n, i = 1, 2, and C(C2′) ⊆ C(C1′), where r( · ) and C( · ) represent the rank and column space of a matrix, respectively. The extended growth curve model with two terms is given by

X = A1B1C1 + A2B2C2 + E,

where the columns of E are assumed to be independently distributed as a multivariate normal distribution with mean zero and a positive definite dispersion matrix Σ; i.e., E ∼ Np,n(0, Σ, In). The design matrices Ai and Ci are known, whereas the matrices Bi and Σ are unknown parameter matrices. Moreover, we assume that the covariance matrix Σ is linearly structured. In this paper an estimation procedure that handles linearly structured covariance matrices is proposed. The idea is first to estimate the covariance matrix when it is used to define an inner product in a regression space, and thereafter to re-estimate it when it is interpreted as a dispersion matrix. This idea is exploited by decomposing the residual space, the orthogonal complement to the design space, into three orthogonal subspaces. Studying residuals obtained from projections of the observations onto these subspaces yields explicit estimators of the covariance matrix. An explicit estimator of the mean is also proposed. Properties of these estimators are studied and numerical examples are given.

Paper B: Estimation in multivariate linear models with Kronecker product and linear structures on the covariance matrices

This paper deals with models based on normally distributed random matrices. More specifically, the model considered is X ∼ Np,q(M, Σ, Ψ) with mean M, a p × q matrix, assumed to follow a bilinear structure, i.e., E[X] = M = ABC, where A and C are known design matrices, B is an unknown parameter matrix, and the dispersion matrix of X has a Kronecker product structure, i.e., D[X] = Ψ ⊗ Σ, where both Ψ and Σ are unknown positive definite matrices. The model may be used, for example, to model data with spatio-temporal relationships. The aim is to estimate the parameters of the model when, in addition, Σ is assumed to be linearly structured. In the paper, on the basis of n independent observations on the random matrix X, estimation equations in a flip-flop relation are presented and numerical examples are given.

1.3 Contributions

The main contributions of the thesis are as follows.

• In Paper A, we studied the extended growth curve model with two terms and a linearly structured covariance matrix. A simple procedure, based on the decomposition of the residual space into three orthogonal subspaces and the study of the residuals obtained from projections of the observations onto these subspaces, yields explicit and consistent estimators of the covariance matrix. An explicit unbiased estimator of the mean is also proposed.

• In Paper B, the multivariate linear model with Kronecker and linear structures on the covariance matrices is considered. On the basis of n independent matrix observations, the estimation equations in a flip-flop relation are derived. Numerical simulations show that solving these equations with a flip-flop algorithm gives estimates which are in good agreement with the true parameters.


Part I

Multivariate Linear Models



2 Multivariate Distributions

This chapter focuses on the normal distribution, which is very important in statistical analyses. In particular, our interest here is to define the matrix normal distribution, which will play a central role in this thesis. The Wishart distribution will also be looked at for easy reading of the papers. The well-known univariate normal distribution has been used in statistics for about two hundred years, and the multivariate normal distribution, understood as a distribution of a vector, has also been used for a long time (Kollo and von Rosen, 2005). Due to the complexity of data from various fields of applied research, extensions of the multivariate normal distribution to the matrix normal distribution, or even further to the multilinear normal distribution, have become inevitable. The multilinear normal distribution will not be considered in this thesis. For relevant results about the multilinear normal distribution one can consult Ohlson et al. (2011a) and the references cited therein.

Before defining the multivariate normal distribution and the matrix normal distribution, we recall that there are many ways of defining the normal distributions. In this thesis we define the normal distributions via their density functions, assuming that they exist.

2.1 Multivariate Normal distribution

Definition 2.1 (Multivariate normal distribution). A random vector x : p × 1 is multivariate normally distributed with mean vector µ : p × 1 and positive definite covariance matrix Σ : p × p if its density is

f(x) = (2π)^{−p/2} |Σ|^{−1/2} exp{−(1/2) tr[Σ^{−1}(x − µ)(x − µ)′]},   (2.1)

where | · | and tr denote the determinant and the trace of a matrix, respectively. We usually use the notation x ∼ Np(µ, Σ).

The multivariate normal model x ∼ Np(µ, Σ), where µ and Σ are unknown parameters, has been used in the statistical literature for a long time. To find estimators of the parameters, the method of maximum likelihood is often used. Let a random sample of n observation vectors x1, x2, . . . , xn come from the multivariate normal distribution, i.e., xi ∼ Np(µ, Σ). The xi's constitute a random sample, and the likelihood function is given by the product of the densities evaluated at each observation vector:

L(x1, x2, . . . , xn, µ, Σ) = ∏_{i=1}^{n} f(xi, µ, Σ)
                          = ∏_{i=1}^{n} (2π)^{−p/2} |Σ|^{−1/2} exp{−(1/2) tr[Σ^{−1}(xi − µ)(xi − µ)′]}
                          = (2π)^{−pn/2} |Σ|^{−n/2} exp{−(1/2) ∑_{i=1}^{n} (xi − µ)′ Σ^{−1} (xi − µ)}.

The maximum likelihood estimators (MLEs) of µ and Σ resulting from the maximization of this likelihood function (for more details see, for example, Johnson and Wichern, 2007) are, respectively,

µ̂ = (1/n) ∑_{i=1}^{n} xi = (1/n) X 1n,

Σ̂ = (1/n) S,

where

S = ∑_{i=1}^{n} (xi − µ̂)(xi − µ̂)′ = X(In − (1/n) 1n 1n′) X′,

X = (x1, x2, . . . , xn), 1n is the n-dimensional vector of ones, and In is the n × n identity matrix.
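As a small illustration (not part of the thesis), the following Python sketch computes these maximum likelihood estimates for a simulated sample; the dimensions and true parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 200
mu_true = np.array([1.0, -0.5, 2.0])
Sigma_true = np.array([[2.0, 0.5, 0.3],
                       [0.5, 1.0, 0.2],
                       [0.3, 0.2, 1.5]])

# X is p x n: each column is one observation vector x_i ~ N_p(mu, Sigma).
X = rng.multivariate_normal(mu_true, Sigma_true, size=n).T

ones = np.ones(n)
mu_hat = X @ ones / n                               # (1/n) X 1_n
S = X @ (np.eye(n) - np.outer(ones, ones) / n) @ X.T
Sigma_hat = S / n                                   # MLE of Sigma

print(mu_hat)
print(Sigma_hat)
```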

2.2 Matrix Normal Distribution

Definition 2.2 (Matrix normal distribution). A random matrix X : p × q is matrix normally distributed with mean M : p × q and positive definite covariance matrices Σ : p × p and Ψ : q × q if its density is

f(X) = (2π)^{−pq/2} |Σ|^{−q/2} |Ψ|^{−p/2} exp{−(1/2) tr[Σ^{−1}(X − M)Ψ^{−1}(X − M)′]}.   (2.2)

The model based on the matrix normal distribution is usually denoted

X ∼ Np,q(M, Σ, Ψ),   (2.3)

and it can be shown that X ∼ Np,q(M, Σ, Ψ) means the same as

vec X ∼ Npq(vec M, Ψ ⊗ Σ),   (2.4)

where ⊗ denotes the Kronecker product. Since, by definition, the dispersion matrix of X is D[X] = D[vec X], we get D[X] = Ψ ⊗ Σ. For the interpretation we note that Ψ describes the covariances between the columns of X. These covariances will be the same for each row of X. The other covariance matrix, Σ, describes the covariances between the rows of X, which will be the same for each column of X. The product Ψ ⊗ Σ takes into account the covariances between columns as well as the covariances between rows. Therefore, Ψ ⊗ Σ indicates that the overall covariance consists of the products of the covariances in Ψ and in Σ, respectively, i.e., Cov[xij, xkl] = σik ψjl, where X = (xij), Σ = (σik) and Ψ = (ψjl).
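The following sketch (again only an illustration with assumed dimensions and parameter values) draws matrix normal samples through the vec relation (2.4) and checks the factorization Cov[xij, xkl] = σik ψjl empirically.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, n = 3, 2, 50_000

Sigma = np.array([[2.0, 0.5, 0.1],
                  [0.5, 1.0, 0.3],
                  [0.1, 0.3, 1.5]])        # row covariance (p x p)
Psi = np.array([[1.0, 0.4],
                [0.4, 2.0]])               # column covariance (q x q)
M = np.zeros((p, q))

# vec X ~ N_{pq}(vec M, Psi kron Sigma); column-major vec matches the Kronecker order.
Omega = np.kron(Psi, Sigma)
vecX = rng.multivariate_normal(M.flatten(order="F"), Omega, size=n)  # n x (p*q)

# Empirical check of Cov[x_ij, x_kl] = sigma_ik * psi_jl for (i, j) = (0, 0), (k, l) = (2, 1).
i, j, k, l = 0, 0, 2, 1
emp = np.cov(vecX[:, j * p + i], vecX[:, l * p + k])[0, 1]
print(emp, Sigma[i, k] * Psi[j, l])        # the two numbers should be close
```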

The following example shows one possibility of how a matrix normal distribution may arise.

Example 2.1

Let x1, . . . , xn be an independent sample of n observation vectors from a multivariate normal distribution Np(µ, Σ), and let the observation vectors xi be the columns of a matrix X = (x1, x2, . . . , xn). The distribution of the vectorization of the sample observation matrix, vec X, is given by

vec X = (x1′, x2′, . . . , xn′)′ ∼ Npn(1n ⊗ µ, Ω),

where Ω = In ⊗ Σ, 1n is the n-dimensional vector of ones, and In is the n × n identity matrix. This is written as

X ∼ Np,n(M, Σ, In),

where M = µ1n′.

The models (2.3) and (2.4) have been considered in the statistical literature. For example, Dutilleul (1999), Roy and Khattree (2005) and Lu and Zimmerman (2005) considered the model (2.4), and to obtain MLEs these authors iteratively solved the usual likelihood equations, one obtained by assuming that Ψ is given and the other obtained by assuming that Σ is given, by what was called the flip-flop algorithm in Lu and Zimmerman (2005).

Let a random sample of n observation matrices X1, X2, . . . , Xn be drawn from the matrix normal distribution, i.e., Xi ∼ Np,q(M, Σ, Ψ). The likelihood function is given by the product of the densities evaluated at each observation matrix, as it was in the multivariate case. The log-likelihood, ignoring the normalizing factor, is given by

ln L(X, M, Σ, Ψ) = −(qn/2) ln|Σ| − (pn/2) ln|Ψ| − (1/2) ∑_{i=1}^{n} tr[Σ^{−1}(Xi − M)Ψ^{−1}(Xi − M)′].


The likelihood equations for the likelihood estimators are given by (Dutilleul, 1999)

M̂ = (1/n) ∑_{i=1}^{n} Xi = X̄,   (2.5)

Σ̂ = (1/(nq)) ∑_{i=1}^{n} (Xi − M̂) Ψ̂^{−1} (Xi − M̂)′,   (2.6)

Ψ̂ = (1/(np)) ∑_{i=1}^{n} (Xi − M̂)′ Σ̂^{−1} (Xi − M̂).   (2.7)

There are no explicit solutions to these equations, and one must rely on an iterative algorithm such as the flip-flop algorithm (Dutilleul, 1999). Srivastava et al. (2008) pointed out that the estimators found in this way are not uniquely determined. Srivastava et al. (2008) also showed that, when these equations are solved with additional estimability conditions using the flip-flop algorithm, the estimates converge to the unique maximum likelihood estimators of the parameters.
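A minimal sketch of the flip-flop iteration implied by equations (2.5)–(2.7) is given below. It is only an illustration under assumed dimensions, with a simple scaling constraint (fixing Ψ[0, 0] = 1) playing the role of an estimability condition; it is not the exact procedure of the cited papers.

```python
import numpy as np

def flip_flop(Xs, n_iter=100, tol=1e-8):
    """Iterate the likelihood equations (2.5)-(2.7) for X_i ~ N_{p,q}(M, Sigma, Psi)."""
    n, p, q = Xs.shape
    M = Xs.mean(axis=0)                                              # equation (2.5)
    Sigma, Psi = np.eye(p), np.eye(q)
    for _ in range(n_iter):
        R = Xs - M
        Psi_inv = np.linalg.inv(Psi)
        Sigma_new = sum(Ri @ Psi_inv @ Ri.T for Ri in R) / (n * q)   # equation (2.6)
        Sigma_inv = np.linalg.inv(Sigma_new)
        Psi_new = sum(Ri.T @ Sigma_inv @ Ri for Ri in R) / (n * p)   # equation (2.7)
        # Resolve the scale indeterminacy (Psi, Sigma are only identified up to c and 1/c).
        c = Psi_new[0, 0]
        Psi_new, Sigma_new = Psi_new / c, Sigma_new * c
        if np.max(np.abs(Sigma_new - Sigma)) < tol and np.max(np.abs(Psi_new - Psi)) < tol:
            Sigma, Psi = Sigma_new, Psi_new
            break
        Sigma, Psi = Sigma_new, Psi_new
    return M, Sigma, Psi

# Example on simulated data with arbitrary true parameters.
rng = np.random.default_rng(2)
p, q, n = 4, 3, 500
Sigma_t = np.diag([2.0, 1.0, 1.5, 0.5])
Psi_t = np.array([[1.0, 0.3, 0.0], [0.3, 1.0, 0.3], [0.0, 0.3, 1.0]])
Ls, Lp = np.linalg.cholesky(Sigma_t), np.linalg.cholesky(Psi_t)
Xs = np.stack([Ls @ rng.standard_normal((p, q)) @ Lp.T for _ in range(n)])
print(flip_flop(Xs)[1])   # estimate of Sigma (scaled so that Psi[0, 0] = 1)
```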

The model (2.3), where the mean has a bilinear structure, was considered by Srivastava et al. (2008). In Paper B, we consider the problem of estimating the parameters in the model (2.3) where the mean has a bilinear structure (see the mean structure of the growth curve model in Section 3.1) and, in addition, the covariance matrix Σ is assumed to be linearly structured.

2.3 Wishart Distribution

In this section we present the definition and some properties of another important distribution belonging to the class of matrix distributions, the Wishart distribution. First derived by Wishart (1928), the Wishart distribution is usually regarded as a multivariate analogue of the chi-square distribution. There are many ways to define the Wishart distribution, and here we adopt the definition of Kollo and von Rosen (2005).

Definition 2.3 (Wishart distribution). The matrix W : p × p is said to be Wishart distributed if and only if W = XX′ for some matrix X, where X ∼ Np,n(M, Σ, I), Σ ≥ 0. If M = 0, we have a central Wishart distribution, which will be denoted W ∼ Wp(Σ, n), and if M ≠ 0, we have a non-central Wishart distribution, which will be denoted Wp(Σ, n, ∆), where ∆ = MM′.

The first parameter, Σ, is usually supposed to be unknown. The second parameter, n, which stands for the degrees of freedom, is usually considered to be known. The third parameter, ∆, which is used in the non-central Wishart distribution, is called the non-centrality parameter.


The following theorem contains some properties of the Wishart distribution which are to be used in the papers.

Theorem 2.1

(i) Let W1 ∼ Wp(Σ, n, ∆1) be independent of W2 ∼ Wp(Σ, m, ∆2). Then

W1 + W2 ∼ Wp(Σ, n + m, ∆1 + ∆2).

(ii) Let X ∼ Np,n(M, Σ, Ψ), where C(M′) ⊆ C(Ψ). Put W = XΨ^{−}X′. Then

W ∼ Wp(Σ, r(Ψ), ∆),

where ∆ = MΨ^{−}M′.

(iii) Let W ∼ Wp(Σ, n, ∆) and A ∈ R^{q×p}. Then

AWA′ ∼ Wq(AΣA′, n, A∆A′).

(iv) Let X ∼ Np,n(M, Σ, I) and let Q : n × n be symmetric. Then XQX′ is Wishart distributed if and only if Q is idempotent.

(v) Let X ∼ Np,n(M, Σ, I) and let Q : n × n be symmetric and idempotent, such that MQ = 0. Then XQX′ ∼ Wp(Σ, r(Q)).

(vi) Let X ∼ Np,n(M, Σ, I), and let Q1 : n × n and Q2 : n × n be symmetric. Then XQ1X′ and XQ2X′ are independent if and only if Q1Q2 = Q2Q1 = 0.

The proofs of these results can be found, for example, in Kollo and von Rosen (2005).

Example 2.2

In Section 2.1, the MLEs of µ and Σ in the multivariate normal model x ∼ Np(µ, Σ) were given. These are, respectively,

µ̂ = (1/n) X 1n,

Σ̂ = (1/n) S,

where

S = X(In − (1/n) 1n 1n′) X′,

X = (x1, x2, . . . , xn), 1n is the n-dimensional vector of ones, and In is the n × n identity matrix.

It is easy to show that the matrix Q = In − (1/n) 1n 1n′ is idempotent and that r(Q) = n − 1. Thus, S ∼ Wp(Σ, n − 1).

Moreover, we note that Q is a projector onto the space C(1n)⊥, the orthogonal complement to the space C(1n). Hence Q1n = 0, so that µ̂ and S (or Σ̂) are independent.
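As a small numerical check (an illustration only, with an arbitrary choice of n), the sketch below verifies the properties of Q used above: idempotency, rank n − 1, and Q1n = 0.

```python
import numpy as np

n = 10
ones = np.ones((n, 1))
Q = np.eye(n) - ones @ ones.T / n      # centering matrix Q = I_n - (1/n) 1_n 1_n'

print(np.allclose(Q @ Q, Q))           # idempotent: Q^2 = Q
print(np.linalg.matrix_rank(Q))        # rank n - 1
print(np.allclose(Q @ ones, 0))        # Q 1_n = 0, projector onto C(1_n)-perp
```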


3 Growth Curve Model and its Extensions

Growth curve analysis is a topic with many important applications within medicine, the natural sciences, the social sciences, etc. Growth curve analysis has a long history, and two classical papers are Box (1950) and Rao (1958). In Roy (1957) and Anderson (1958), one considered the MANOVA model

X = BC + E,   (3.1)

where X : p × n, B : p × k, C : k × n, and E ∼ Np,n(0, Σ, I). The matrix C, called the between-individuals design matrix, is known; B and the positive definite matrix Σ are unknown parameter matrices.

3.1 Growth Curve Model

In 1964 the well-known paper by Potthoff and Roy (1964) extended the MANOVA model (3.1) to the model which was later termed the growth curve model.

Definition 3.1 (Growth curve model). Let X : p × n, A : p × q, q ≤ p, B : q × k, C : k × n, and r(C) + p ≤ n, where r( · ) represents the rank of a matrix. The growth curve model is given by

X = ABC + E,   (3.2)

where the columns of E are assumed to be independently distributed as a multivariate normal distribution with mean zero and a positive definite dispersion matrix Σ; i.e., E ∼ Np,n(0, Σ, In).

The matrices A and C, often called the within-individuals and between-individuals design matrices, respectively, are known, whereas the matrices B and Σ are unknown parameter matrices.



The paper by Potthoff and Roy (1964) is often considered to be the first in which the model was presented. Several prominent authors wrote follow-up papers, e.g., Rao (1965) and Khatri (1966). Notice that the growth curve model is a special case of the matrix normal model where the mean has a bilinear structure. Therefore, we may use the notation

X ∼ Np,n(ABC, Σ, I).

Also, it is worth noting that the MANOVA model with restrictions,

X = BC + E,   (3.3)
GB = 0,

is equivalent to the growth curve model. The restriction GB = 0 is equivalent to B = (G′)^o Θ, where (G′)^o is any matrix spanning the orthogonal complement to the space generated by the columns of G′. Plugging (G′)^o Θ into (3.3) gives

X = (G′)^o Θ C + E,

which is identical to the growth curve model (3.2).
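As an illustration of this reparameterization (a sketch with an arbitrarily chosen restriction matrix G, not an example from the thesis), a matrix (G′)^o can be computed as a basis of the null space of G, since C((G′)^o) = C(G′)⊥:

```python
import numpy as np
from scipy.linalg import null_space

# Restriction G B = 0 with an arbitrary G (1 x q): here it forces the rows of B to sum to zero.
q = 3
G = np.ones((1, q))

Go = null_space(G)             # columns span C(G')-perp, i.e. a choice of (G')^o, here q x (q-1)
print(np.allclose(G @ Go, 0))  # G (G')^o = 0

# Any B of the form (G')^o Theta satisfies the restriction:
Theta = np.arange(Go.shape[1] * 2, dtype=float).reshape(Go.shape[1], 2)
B = Go @ Theta
print(np.allclose(G @ B, 0))
```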

Example 3.1: Potthoff and Roy (1964) dental data

Dental measurements on eleven girls and sixteen boys were taken at four different ages (t1 = 8, t2 = 10, t3 = 12, and t4 = 14). Each measurement is the distance, in millimeters, from the center of the pituitary to the pteryomaxillary fissure. These data are presented in Table 3.1 and plotted in Figure 3.1.

Table 3.1: Dental data

id gender   t1    t2    t3    t4     id gender   t1    t2    t3    t4
 1   F     21.0  20.0  21.5  23.0    12   M     26.0  25.0  29.0  31.0
 2   F     21.0  21.5  24.0  25.5    13   M     21.5  22.5  23.0  26.0
 3   F     20.5  24.0  24.5  26.0    14   M     23.0  22.5  24.0  27.0
 4   F     23.5  24.5  25.0  26.5    15   M     25.5  27.5  26.5  27.0
 5   F     21.5  23.0  22.5  23.5    16   M     20.0  23.5  22.5  26.0
 6   F     20.0  21.0  21.0  22.5    17   M     24.5  25.5  27.0  28.5
 7   F     21.5  22.5  23.0  25.0    18   M     22.0  22.0  24.5  26.5
 8   F     23.0  23.0  23.5  24.0    19   M     24.0  21.5  24.5  25.5
 9   F     20.0  21.0  22.0  21.5    20   M     23.0  20.5  31.0  26.0
10   F     16.5  19.0  19.0  19.5    21   M     27.5  28.0  31.0  31.5
11   F     24.5  25.0  28.0  28.0    22   M     23.0  23.0  23.5  25.0
                                     23   M     21.5  23.5  24.0  28.0
                                     24   M     17.0  24.5  26.0  29.5
                                     25   M     22.5  25.5  25.5  26.0
                                     26   M     23.0  24.5  26.0  30.0
                                     27   M     22.0  21.5  23.5  25.0

Suppose that linear growth curves describe the mean growth for both girls and boys. Then we may use the growth curve model

X ∼ Np,n(ABC, Σ, I).


[Figure 3.1: Growth profiles plot of Potthoff and Roy (1964) dental data. Growth measurements (y-axis) against age (x-axis), with separate profiles for girls and boys.]

In this model, the observation matrix is X = (x1, x2, . . . , x27), in which the first eleven columns correspond to measurements on girls and the last sixteen columns correspond to measurements on boys. The design matrices are

A′ = ( 1   1   1   1
       8  10  12  14 ),    C = ( 1′_{11} ⊗ (1, 0)′ : 1′_{16} ⊗ (0, 1)′ ),

and B is the unknown parameter matrix and Σ is the unknown positive definite covariance matrix.
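As a concrete illustration (a sketch only; the observation matrix X itself would have to be filled in from Table 3.1), the design matrices of this example can be constructed as follows:

```python
import numpy as np

ages = np.array([8, 10, 12, 14])
n_girls, n_boys = 11, 16

# Within-individuals design matrix A (p x q): a linear growth curve in age.
A = np.column_stack([np.ones(4), ages])                      # 4 x 2

# Between-individuals design matrix C (k x n): group indicators for girls and boys.
C = np.hstack([np.kron(np.ones((1, n_girls)), [[1], [0]]),
               np.kron(np.ones((1, n_boys)), [[0], [1]])])   # 2 x 27

print(A.shape, C.shape)   # (4, 2) (2, 27)
```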

3.2 Extended Growth Curve Model

One of the limitations of the growth curve model is that different individuals should follow the same growth profile. If this does not hold, there is a way to extend the model. A natural extension of the growth curve model, introduced by von Rosen (1989), is the following.

Definition 3.2 (Extended growth curve model). Let X : p × n, Ai : p × qi, Bi : qi × ki, Ci : ki × n, r(C1) + p ≤ n, i = 1, 2, . . . , m, and C(Ci′) ⊆ C(C′_{i−1}), i = 2, 3, . . . , m, where r( · ) and C( · ) represent the rank and column space of a matrix, respectively. The extended growth curve model is given by

X = ∑_{i=1}^{m} AiBiCi + E,

where the columns of E are assumed to be independently distributed as a multivariate normal distribution with mean zero and a positive definite dispersion matrix Σ; i.e., E ∼ Np,n(0, Σ, In).


The matrices Ai and Ci, often called design matrices, are known, whereas the matrices Bi and Σ are unknown parameter matrices. As for the growth curve model, the notation

X ∼ Np,n( ∑_{i=1}^{m} AiBiCi, Σ, I )

may be used for the extended growth curve model. The only difference from the growth curve model in Definition 3.1 is the presence of a more general mean structure. When m = 1, the model reduces to the growth curve model. The model without subspace conditions was considered earlier by Verbyla and Venables (1988) under the name sum of profiles model. Also observe that the subspace conditions C(Ci′) ⊆ C(C′_{i−1}), i = 2, 3, . . . , m, may be replaced by C(Ai) ⊆ C(A_{i−1}), i = 2, 3, . . . , m. This problem was considered, for example, by Filipiak and von Rosen (2011) for m = 3.

In Paper A, we consider the problem of estimating the parameters in the extended growth curve model with two terms (m = 2), where the covariance matrix Σ is linearly structured.

Example 3.2

Consider again the Potthoff and Roy (1964) classical dental data, but now assume that for both girls and boys we have a linear growth component, while for the boys there additionally exists a second order polynomial structure. Then we may use the extended growth curve model with two terms

X ∼ Np,n(A1B1C1 + A2B2C2, Σ, I),

where

A1′ = ( 1   1   1   1
        8  10  12  14 ),    C1 = ( 1′_{11} ⊗ (1, 0)′ : 1′_{16} ⊗ (0, 1)′ ),

A2′ = ( 8²  10²  12²  14² ),    C2 = ( 0′_{11} : 1′_{16} ),

are design matrices, and

B1 = ( β11  β12
       β21  β22 ),    B2 = ( β32 )

are parameter matrices, and Σ is the same as in Example 3.1.
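The corresponding design matrices for the two-term model can be set up as in the following sketch (an illustration mirroring the example above; the nesting condition C(C2′) ⊆ C(C1′) is also checked):

```python
import numpy as np

ages = np.array([8, 10, 12, 14])
n_girls, n_boys = 11, 16

# First term: linear growth for both groups.
A1 = np.column_stack([np.ones(4), ages])                        # 4 x 2
C1 = np.hstack([np.kron(np.ones((1, n_girls)), [[1], [0]]),
                np.kron(np.ones((1, n_boys)), [[0], [1]])])     # 2 x 27

# Second term: an extra quadratic component for the boys only.
A2 = (ages ** 2).reshape(-1, 1).astype(float)                   # 4 x 1
C2 = np.hstack([np.zeros((1, n_girls)), np.ones((1, n_boys))])  # 1 x 27

# Column-space condition C(C2') <= C(C1') holds since C2 = (0, 1) C1.
print(np.allclose(np.array([[0.0, 1.0]]) @ C1, C2))
```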

3.3 Maximum Likelihood Estimators

The maximum likelihood method is one of several approaches used to find estimators of the parameters in the growth curve model. The maximum likelihood estimators of the parameters in the growth curve model have been studied by many authors; see, for instance, Srivastava and Khatri (1979) and von Rosen (1989). For the extended growth curve model as in Definition 3.2, an exhaustive description of how to obtain these estimators can be found in Kollo and von Rosen (2005). Here we present some important results from which the main ideas discussed in Paper A are derived. The following result, due to von Rosen (1989), gives the MLEs of the parameters in the extended growth curve model.


Theorem 3.1
Consider the extended growth curve model as in Definition 3.2. Let

Pr = T_{r−1} T_{r−2} · · · T0,   T0 = I,   r = 1, 2, . . . , m + 1,

Ti = I − Pi Ai (Ai′ Pi′ Si^{−1} Pi Ai)^{−} Ai′ Pi′ Si^{−1},   i = 1, 2, . . . , m,

Si = ∑_{j=1}^{i} Kj,   i = 1, 2, . . . , m,

Kj = Pj X P_{C′_{j−1}} (I − P_{C′_j}) P_{C′_{j−1}} X′ Pj′,   C0 = I,

P_{C′_j} = Cj′ (Cj Cj′)^{−} Cj.

Assume that S1 is positive definite.

(i) The representations of the maximum likelihood estimators of Br, r = 1, 2, . . . , m, and Σ are

B̂r = (Ar′ Pr′ Sr^{−1} Pr Ar)^{−} Ar′ Pr′ Sr^{−1} (X − ∑_{i=r+1}^{m} Ai B̂i Ci) Cr′ (Cr Cr′)^{−} + (Ar′ Pr′)^{o} Zr1 + Ar′ Pr′ Zr2 Cr^{o}′,

nΣ̂ = (X − ∑_{i=1}^{m} Ai B̂i Ci)(X − ∑_{i=1}^{m} Ai B̂i Ci)′ = Sm + P_{m+1} X Cm′ (Cm Cm′)^{−} Cm X′ P_{m+1}′,

where Zr1 and Zr2 are arbitrary matrices and ∑_{i=m+1}^{m} Ai B̂i Ci = 0.

(ii) For the estimators B̂i,

Pr ∑_{i=r}^{m} Ai B̂i Ci = ∑_{i=r}^{m} (I − Ti) X Ci′ (Ci Ci′)^{−} Ci.

The notation C^{o} stands for any matrix of full rank spanning C(C)⊥, and G^{−} denotes an arbitrary generalized inverse in the sense that G G^{−} G = G.

A useful result is the corollary of this theorem when r = 1, which gives the estimated mean structure.

Corollary 3.1

Ê[X] = ∑_{i=1}^{m} Ai B̂i Ci = ∑_{i=1}^{m} (I − Ti) X Ci′ (Ci Ci′)^{−} Ci.

Another consequence of Theorem 3.1, which is considered in Paper A, corresponds to the case m = 2. Set m = 2 in the extended growth curve model of Definition 3.2. Then the maximum likelihood estimators of the parameter matrices B1 and B2 are given by

B̂2 = (A2′ P2′ S2^{−1} P2 A2)^{−} A2′ P2′ S2^{−1} X C2′ (C2 C2′)^{−} + (A2′ P2′)^{o} Z21 + A2′ P2′ Z22 C2^{o}′,

B̂1 = (A1′ S1^{−1} A1)^{−} A1′ S1^{−1} (X − A2 B̂2 C2) C1′ (C1 C1′)^{−} + (A1′)^{o} Z11 + A1′ Z12 C1^{o}′,

where

S1 = X(I − C1′ (C1 C1′)^{−} C1) X′,

P2 = I − A1 (A1′ S1^{−1} A1)^{−} A1′ S1^{−1},

S2 = S1 + P2 X C1′ (C1 C1′)^{−} C1 (I − C2′ (C2 C2′)^{−} C2) C1′ (C1 C1′)^{−} C1 X′ P2′,

and the Zkl are arbitrary matrices.

Assuming that the matrices Ai and Ci are of full rank and that C(A1) ∩ C(A2) = {0}, the unique maximum likelihood estimators are

B̂2 = (A2′ P2′ S2^{−1} P2 A2)^{−1} A2′ P2′ S2^{−1} X C2′ (C2 C2′)^{−1},

B̂1 = (A1′ S1^{−1} A1)^{−1} A1′ S1^{−1} (X − A2 B̂2 C2) C1′ (C1 C1′)^{−1}.

Obviously, under general settings, the maximum likelihood estimators B̂1 and B̂2 are not unique, due to the arbitrariness of the matrices Zkl. However, it is worth noting that the estimated mean

Ê[X] = A1 B̂1 C1 + A2 B̂2 C2

is always unique, and therefore Σ̂, given by

nΣ̂ = (X − A1 B̂1 C1 − A2 B̂2 C2)(X − A1 B̂1 C1 − A2 B̂2 C2)′,

is also unique.
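For completeness, a minimal sketch of these m = 2 formulas in the full rank case (using ordinary inverses, as above) is given below; it is an illustration, not code from the thesis.

```python
import numpy as np

def mle_two_terms(X, A1, C1, A2, C2):
    """MLEs of B1, B2 and Sigma in X = A1 B1 C1 + A2 B2 C2 + E (full rank case)."""
    n = X.shape[1]
    inv = np.linalg.inv
    I_n = np.eye(n)

    # S1, P2 and S2 as defined above.
    P_C1 = C1.T @ inv(C1 @ C1.T) @ C1
    S1 = X @ (I_n - P_C1) @ X.T
    P2 = np.eye(X.shape[0]) - A1 @ inv(A1.T @ inv(S1) @ A1) @ A1.T @ inv(S1)
    P_C2 = C2.T @ inv(C2 @ C2.T) @ C2
    S2 = S1 + P2 @ X @ P_C1 @ (I_n - P_C2) @ P_C1 @ X.T @ P2.T

    # Unique MLEs of B2 and B1.
    B2 = inv(A2.T @ P2.T @ inv(S2) @ P2 @ A2) @ A2.T @ P2.T @ inv(S2) @ X @ C2.T @ inv(C2 @ C2.T)
    B1 = inv(A1.T @ inv(S1) @ A1) @ A1.T @ inv(S1) @ (X - A2 @ B2 @ C2) @ C1.T @ inv(C1 @ C1.T)

    # n * Sigma_hat is the outer product of the residual matrix.
    R = X - A1 @ B1 @ C1 - A2 @ B2 @ C2
    Sigma = R @ R.T / n
    return B1, B2, Sigma
```

Applied to the dental data with the design matrices of Example 3.2, a sketch of this kind should reproduce estimates of the sort reported in Example 3.3 below.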

Example 3.3: Example 3.2 continued

Consider again the Potthoff and Roy (1964) classical dental data and the model of Example 3.2. Then the maximum likelihood estimates of the parameters are

B̂1 = ( 20.2836  21.9599
        0.9527   0.5740 ),    B̂2 = ( 0.2006 ),

Σ̂ = ( 5.0272  2.5066  3.6410  2.5099
      2.5066  3.8810  2.6961  3.0712
      3.6410  2.6961  6.0104  3.8253
      2.5099  3.0712  3.8253  4.6164 ).

The estimated mean growth curves, plotted in Figure 3.2, are, for girls and boys respectively,

µ̂g(t) = 20.2836 + 0.9527 t,
µ̂b(t) = 21.9599 + 0.5740 t + 0.2006 t².


[Figure 3.2: Estimated mean growth curves for Potthoff and Roy (1964) dental data. Growth (y-axis) against age (x-axis), with separate fitted profiles for girls and boys.]


4 Concluding Remarks

This chapter is reserved for a summary of the thesis and suggestions for further research.

4.1 Conclusion

The problem of estimating parameters in different statistical models is at the center of the statistical sciences. The main theme of this thesis is the estimation of parameters in multivariate linear models where the covariance matrices have linear structures. Linear structures for covariance matrices occur naturally in statistical applications, and many authors have been interested in these structures. It is well known that normally distributed data can be modeled by its mean and covariance matrix alone. Moreover, inference on the mean parameters depends heavily on the estimated covariance matrix, and the dispersion matrix of the estimator of the mean is a function of it. Hence, it is believed that altering the covariance structure of a parametric model alters the variances of the model's estimated mean parameters. Therefore, considering various structures for the covariance matrices in different statistical models is a problem of great interest.

In Paper A, we study the extended growth curve model with two terms and a linearly structured covariance matrix. An estimation procedure that handles linearly structured covariance matrices was proposed. The idea is first to estimate the covariance matrix when it is used to define an inner product in the regression space and thereafter to re-estimate it when it is interpreted as a dispersion matrix. This idea is exploited by decomposing the residual space, the orthogonal complement to the design space, into three orthogonal subspaces. Studying residuals obtained from projections of the observations onto these subspaces yields explicit consistent estimators of the covariance matrix. An explicit consistent estimator of the mean was also proposed. Numerical simulations show that the estimates of the linearly structured covariance matrices are very close to the true covariance matrices. However, for the banded matrix structure, it was noted that the estimate of the covariance matrix may not be positive definite for small n, whereas it is always positive definite for the circular Toeplitz structure.

In Paper B, models based on normally distributed random matrices are studied. For these models the dispersion matrix has the so-called Kronecker product structure, and they can be used, for example, to model data with spatio-temporal relationships. The aim is to estimate the parameters of the model when, in addition, one covariance matrix is assumed to be linearly structured. On the basis of n independent observations from a matrix normal distribution, estimation equations in a flip-flop relation are presented. Numerical simulations show that the estimates of the parameters are in good agreement with the true parameters.

4.2 Further research

At the completion of this thesis, some points should be mentioned as suggestions for further work.

• The proposed estimators in Paper A have good properties such as unbiasedness and/or consistency. However, to be more useful, their other properties (e.g., their distributions) have to be studied. Also, further study of the positive definiteness of the estimates of the covariance matrix is of interest.

• In Paper B, numerical simulations showed that the proposed algorithm produces estimates of the parameters that are in good agreement with the true values. The algorithm was established in a fairly heuristic manner, and more rigorous studies are needed.

• Applying the procedures developed in Paper A and Paper B to concrete real data, and comparing them with existing ones, may be useful to show their merits.


Bibliography

Anderson, T. (1958). An Introduction to Multivariate Statistical Analysis. Wiley, New York, USA.

Box, G. E. P. (1950). Problems in the analysis of growth and wear curves. Biometrics, 6:362–389.

Dutilleul, P. (1999). The MLE algorithm for the matrix normal distribution. Journal of Statistical Computation and Simulation, 64:105–123.

Filipiak, K. and von Rosen, D. (2011). On MLEs in an extended multivariate linear growth curve model. Metrika, doi: 10.1007/s00184-011-0368-2.

Johnson, R. and Wichern, D. (2007). Applied Multivariate Statistical Analysis. Pearson Education International, USA.

Kattan, W. M. and Gönen, M. (2008). The prediction philosophy in statistics. Urologic Oncology: Seminars and Original Investigations, 26:316–319.

Khatri, C. G. (1966). A note on a MANOVA model applied to problems in growth curve. Annals of the Institute of Statistical Mathematics, 18:75–86.

Khatri, C. G. (1973). Testing some covariance structures under a growth curve model. Journal of Multivariate Analysis, 3:102–116.

Kollo, T. and von Rosen, D. (2005). Advanced Multivariate Statistics with Matrices. Springer, Dordrecht, The Netherlands.

Lange, N. and Laird, N. M. (1989). The effect of covariance structure on variance estimation in balanced growth-curve models with random parameters. Journal of the American Statistical Association, pages 241–247.


Lu, N. and Zimmerman, L. D. (2005). The likelihood ratio test for a separable covariance matrix. Statistics & Probability Letters, 73:449–457.

Ohlson, M., Ahmad, M. R., and von Rosen, D. (2011a). The multilinear normal distribution: Introduction and some basic properties. Journal of Multivariate Analysis, doi:10.1016/j.jmva.2011.05.015.

Ohlson, M., Andrushchenko, Z., and von Rosen, D. (2011b). Explicit estimators under m-dependence for a multivariate normal distribution. Annals of the Institute of Statistical Mathematics, 63:29–42.

Ohlson, M. and von Rosen, D. (2010). Explicit estimators of parameters in the growth curve model with linearly structured covariance matrices. Journal of Multivariate Analysis, 101:1284–1295.

Olkin, I. and Press, S. (1969). Testing and estimation for a circular stationary model. The Annals of Mathematical Statistics, 40(4):1358–1373.

Potthoff, R. and Roy, S. (1964). A generalized multivariate analysis of variance model useful especially for growth curve problems. Biometrika, 51:313–326.

Rao, C. (1958). Some statistical methods for comparisons of growth curves. Biometrics, 14:1–17.

Rao, C. (1965). The theory of least squares when the parameters are stochastic and its application to the analysis of growth curves. Biometrika, 52:447–458.

Roy, A. and Khattree, R. (2005). On implementation of a test for Kronecker product covariance structure for multivariate repeated measures data. Statistical Methodology, 2:297–306.

Roy, S. (1957). Some Aspects of Multivariate Analysis. Wiley, New York, USA.

Srivastava, M. S. and Khatri, C. (1979). An Introduction to Multivariate Statistics. North-Holland, New York, USA.

Srivastava, M. S., von Rosen, T., and von Rosen, D. (2008). Models with Kronecker product covariance structure: estimation and testing. Mathematical Methods of Statistics, 17:357–370.

Szatrowski, T. H. (1982). Testing and estimation in the block compound symmetry problem. Journal of Educational Statistics, 7(1):3–18.

Verbyla, A. and Venables, W. (1988). An extension of the growth curve model. Biometrika, 75:129–138.

von Rosen, D. (1989). Maximum likelihood estimators in multivariate linear normal models. Journal of Multivariate Analysis, 31:187–200.

Votaw, D. F. (1948). Testing compound symmetry in a normal multivariate distribution. The Annals of Mathematical Statistics, 19(4):447–473.


Wilks, S. S. (1946). Sample criteria for testing equality of means, equality of variances, and equality of covariances in a normal multivariate distribution. The Annals of Mathematical Statistics, 17(3):257–281.

Wishart, J. (1928). The generalized product moment distribution in samples from a normal multivariate population. Biometrika, 20A:32–52.