
A Joint Modeling Approach for Longitudinal Studies

Weiping Zhang, Chenlei Leng, and Cheng Yong Tang*

October 2, 2013

Abstract

In longitudinal studies, it is of fundamental importance to understand the dynamics in the mean function, variance function, and correlations of the repeated or clustered measurements. For modeling the covariance structure, Cholesky-type decomposition based approaches have been demonstrated to be effective. However, parsimonious approaches for directly revealing the correlation structure among longitudinal measurements remain less explored, and existing joint modeling approaches may encounter difficulty in interpreting the covariation structure. In this paper, we propose a novel joint mean-variance-correlation modeling approach for longitudinal studies. By applying hyperspherical coordinates, we obtain an unconstrained parametrization of the correlation matrix that automatically guarantees its positive definiteness, and we develop a regression approach that models the correlation matrix of the longitudinal measurements by exploiting this parametrization. The proposed modeling framework is parsimonious, interpretable, and flexible for analyzing longitudinal data. Extensive data examples and simulations support the effectiveness of the proposed approach.

Some key words: Correlation matrix; Hyperspherical coordinates; Joint modeling; Longitudinal data analysis; Modified Cholesky decomposition.

* Zhang is with the Department of Statistics and Finance, University of Science and Technology of China; Zhang's research is supported by NSF of China (No. 11271347, 11171321) (Email: [email protected]). Leng is with the Department of Statistics, University of Warwick, and the Department of Statistics and Applied Probability, National University of Singapore (Email: [email protected]). Tang is with the Business School, University of Colorado Denver, and acknowledges research support from the Business School, University of Colorado Denver (Email: [email protected]). Corresponding author: Cheng Yong Tang. We thank the joint Editor, an associate editor, two referees and Prof. Paul Marriot for helpful comments.


1 Introduction

A longitudinal study involves observing the same variables repeatedly over a period of time, and is commonly encountered in psychology, the social sciences, economics, and the medical sciences. Since the collected observations on the same subject are not independent, it is fundamentally important to make effective use of the correlated measurements when analyzing data from longitudinal studies. Diggle et al. (2002) highlighted this issue and gave an excellent overview of various approaches to modeling this type of data.

Regression models for the mean and variance functions of longitudinal data have been studied extensively in the literature; see, for example, Liang and Zeger (1986), Lin and Carroll (2006), Fan et al. (2007), Fan and Wu (2008) and references therein. Building on the mean-variance modeling framework, specifying the correlation structure by using, for example, ARMA models with some index parameters has been explored; see Diggle et al. (2002), Fan et al. (2007), Fan and Wu (2008) and Qu et al. (2000). However, such approaches do not permit more general forms of the correlation structure and cannot flexibly incorporate covariates that may help explain the covariations. To overcome this limitation, joint modeling of the mean and covariance has become a popular approach to longitudinal data analysis and has received increasing interest recently; see, for example, Pourahmadi (1999, 2000, 2007), Pan and MacKenzie (2003), Ye and Pan (2006), Daniels and Pourahmadi (2009), Leng et al. (2010) and Zhang and Leng (2012). For parsimoniously modeling the covariations, a modified Cholesky decomposition was first applied by Pourahmadi (1999) to obtain an unconstrained parametrization of the covariance matrix. An interesting aspect of this decomposition is that its entries have autoregressive and log-innovation interpretations. Pourahmadi (2007) pointed out that a decomposition studied by Chen and Dunson (2003) can be understood as modeling certain moving average parameters and innovation variances. More recently, Zhang and Leng (2012) considered an alternative decomposition whose entries have moving average and log-innovation interpretations; see also Yao and Li (2013), who use the modified Cholesky decomposition in nonparametric regression for longitudinal data. A review of these approaches is presented in a later section. Though demonstrated to be parsimonious and effective, these approaches can be viewed as indirectly modeling the variances and covariances of the longitudinal measurements. More specifically, because of the decompositions, the resulting variance functions of the aforementioned approaches cannot be directly interpreted as those of the repeated measurements. The same interpretation issue arises for the covariance and correlation structures when these approaches are applied. Therefore, in practical applications, additional effort and extra care are necessary when interpreting the resulting variance and covariance functions.

Not surprisingly, the development of a general regression approach to modeling the correlation structure is hindered by the requirements on a correlation matrix. Specifically, a correlation matrix has unit diagonal entries and must be positive semidefinite, with elements taking values between $-1$ and $1$. A regression approach based on a direct Cholesky-type decomposition of the correlation matrix can hardly satisfy these requirements, and hence encounters great difficulty in this scenario. In this paper, we propose a novel joint mean-variance-correlation modeling approach that directly targets the variances and correlations in longitudinal data. Towards this end, we explore a new device for the correlation matrix by expressing it in hyperspherical coordinates using angles and trigonometric functions. Such a parametrization is very attractive because the resulting parameters are unconstrained on their support and are directly interpretable with respect to the correlations. We then propose to construct a parsimonious regression model based on those angles and apply the maximum likelihood approach for parameter estimation and statistical inference. Our approach yields a parsimonious, interpretable, efficient and flexible framework for characterizing covariations in general correlated longitudinal data. Most importantly, such a parametrization and regression model are supported by data from real applications. For a balanced longitudinal data set in Section 3, where the empirical correlation matrix can be calculated from residuals, we can clearly see from Figure 3 that there exists some functional association between the angles and the time lag as a covariate. In addition, we find that the proposed approach is preferred when comparing the Kullback-Leibler divergence and the information criteria of different approaches, in both simulations and data analysis. Along the line of exploring unconstrained parametrizations, Daniels and Pourahmadi (2009) parametrized the correlation matrix via partial autocorrelations and their recursive relationships, but they studied neither efficient maximum likelihood estimation nor models for unbalanced data, which are common in practical observational longitudinal studies. Compared with Daniels and Pourahmadi (2009), our method handles unbalanced longitudinal data more naturally.

The rest of this paper is organized as follows. Section 2 elaborates the new parametrization and its interpretations, the proposed joint modeling approach, the computational algorithm, and its theoretical properties. We provide extensive numerical examples, applying our method to real data including a balanced and an unbalanced data set, and conduct simulation comparisons in Section 3. The numerical results confirm the attractiveness of the new joint modeling approach. We conclude the paper by summarizing the main findings and outlining future research in Section 4. Technical proofs of the main results and additional interpretations of the new parametrization from a geometric viewpoint are given in the Supplementary Material.

2 Methodology

2.1 Longitudinal data modeling

We first introduce some notation. We have generic longitudinal measurements $y_i = (y_{i1}, \dots, y_{im_i})^T$ $(i = 1, \dots, n)$ collected from $n$ subjects, observed at times $t_i = (t_{i1}, \dots, t_{im_i})^T$. Here we allow the number of repeated measurements $m_i$ and the times $t_i$ to be subject specific, so that data sets observed at irregular time points and unbalanced longitudinal data can be analyzed using our framework. We denote by $\mu_{ij}$ and $\sigma^2_{ij}$ respectively the conditional mean and variance of $y_{ij}$ given the covariate information at time $t_{ij}$. Modeling the quantities $\mu_{ij}$ and $\sigma^2_{ij}$ as functions of covariates has been studied extensively in the existing literature; see, for example, Diggle et al. (2002), Lin and Carroll (2006), Fan et al. (2007), Fan and Wu (2008), Wang (2003) and references therein.

A crucial consideration in longitudinal data analysis is that the components of $y_i$ are correlated, and it has been shown that incorporating the correlation is the key to efficiently utilizing data information in statistical inference (Liang and Zeger, 1986; Diggle et al., 2002; Lin and Carroll, 2006). For simplicity and clarity of presentation, we assume hereinafter that $y_i \sim N(\mu_i, \Sigma_i)$, where $\mu_i = (\mu_{i1}, \dots, \mu_{im_i})^T$, $\Sigma_i = D_i R_i D_i$, $D_i = \mathrm{diag}(\sigma_{i1}, \dots, \sigma_{im_i})$, and $R_i = (\rho_{ijk})_{j,k=1}^{m_i}$ is the correlation matrix of $y_i$, with $\rho_{ijk} = \mathrm{corr}(y_{ij}, y_{ik})$ the correlation between the $j$th and $k$th measurements of the $i$th subject. The distributional assumption can be relaxed: we may use estimating equations approaches (Liang and Zeger, 1986) instead of the likelihood approach for inference, so that the general applicability of our method is not compromised in practice.

2.2 Existing approaches

In a class of modeling approaches, $R_i$ is specified by some correlation matrix as a function of the observation times $t_i$, i.e., $\mathrm{corr}(y_{ij}, y_{ik}) = \rho(t_{ij}, t_{ik}; \alpha)$, where $\rho(t_1, t_2; \alpha)$ is a positive definite function of $t_1$ and $t_2$ indexed by some unknown parameter $\alpha$. We refer to Liang and Zeger (1986) as the seminal work in this class of approaches; see also Lin and Carroll (2006), Fan et al. (2007), Fan and Wu (2008) and references therein for more recent developments along this line of research. The limited choice of positive definite functions imposes a severe restriction on the applicability of these approaches. To overcome the difficulty of specifying a model for $R_i$, Qu et al. (2000) proposed to model $R_i^{-1}$ by a weighted sum of basis matrices, i.e., $R_i^{-1} = \sum_{l=1}^{s} a_l M_l$ for known matrices $M_1, \dots, M_s$ and unknown constants $a_1, \dots, a_s$. They then used the quadratic inference function approach for statistical inference. Although this method often provides more efficient estimates of the mean parameters, it is unclear how the covariations are affected by covariates.

In practice, it is desirable to study adaptive modeling approaches for $R_i$, and to explore a broader range of information beyond $t_i$ that might affect the covariations in $y_i$. Pourahmadi (1999) proposed to parametrize the covariance matrix $\Sigma_i$ by a modified Cholesky decomposition $P_i \Sigma_i P_i^T = \Gamma_i^2$, where $P_i = (p_{ijk})$ $(j, k = 1, \dots, m_i)$ is a lower triangular matrix with 1's on its diagonal and $\Gamma_i = \mathrm{diag}\{\gamma_{i1}, \dots, \gamma_{im_i}\}$ is an $m_i$-dimensional diagonal matrix. Immediately, we know that the lower triangular entries $p_{ijk}$ are unconstrained. This decomposition is connected to the autoregressive model

$$y_{ij} - \mu_{ij} = -\sum_{k=1}^{j-1} p_{ijk}(y_{ik} - \mu_{ik}) + \varepsilon_{ij}, \quad (i = 1, \dots, n;\ j = 2, \dots, m_i), \tag{1}$$

where the $\varepsilon_{ij}$ are independent innovations with innovation variance $\mathrm{var}(\varepsilon_{ij}) = \gamma_{ij}^2$, and the $p_{ijk}$ are the so-called autoregressive coefficients, named for their similarity to the analogous components in time series analysis. Pourahmadi (1999) proposed to link $p_{ijk}$ to covariates via a regression model. By noting that $\Sigma_i = Q_i \Gamma_i^2 Q_i^T$, where $Q_i = (q_{ijk})$ $(j, k = 1, \dots, m_i)$ is a lower triangular matrix with 1's on its diagonal and $\Gamma_i$ is defined as above, Zhang and Leng (2012) characterized $q_{ijk}$ as moving average parameters in the model $y_{ij} - \mu_{ij} = \sum_{k=1}^{j-1} q_{ijk}\varepsilon_{ik} + \varepsilon_{ij}$ $(i = 1, \dots, n;\ j = 2, \dots, m_i)$, and proposed to specify $q_{ijk}$ as a parametrized function of covariates. Via a decomposition similar to that in Pourahmadi (1999), Chen and Dunson (2003) dealt with $\Sigma_i = \Gamma_i P_i P_i^T \Gamma_i$, where $P_i = \Gamma_i^{-1} Q_i \Gamma_i$. Pourahmadi (2007) interpreted the entries in this decomposition via a rescaled moving average model

$$\frac{y_{ij} - \mu_{ij}}{\gamma_{ij}} = \varepsilon_{ij} + \sum_{k=1}^{j-1} p_{ijk}\varepsilon_{ik}, \quad (i = 1, \dots, n;\ j = 2, \dots, m_i),$$

where the $\varepsilon_{ij}$ are independent with $\mathrm{var}(\varepsilon_{ij}) = 1$. In this class of joint mean-covariance modeling approaches, however, the components of $\Gamma_i^2$ are not the conditional variances of the longitudinal response $y_i$ given the covariates. To extract the variance information, one must transform the respective decompositions back to the original covariance matrix, which makes interpretation with respect to the covariates nontrivial. Similarly, additional steps are needed to study $R_i$, an object of practical interest for quantifying the correlations among the longitudinal measurements. Hence, extra effort and caution are required in practice when applying the aforementioned approaches to interpret features of the variances and covariations.

In work related to ours, Daniels and Pourahmadi (2009) studied an alternative unconstrained parametrization by exploiting the partial autocorrelation matrix. They parametrized an $m \times m$ correlation matrix $R$ by $\Pi = (\pi_{ij})$ $(i, j = 1, \dots, m)$, where $\pi_{ij} = \pi_{ji} = \mathrm{corr}(y_i, y_j \mid y_{i+1}, \dots, y_{j-1})$ is the partial autocorrelation coefficient between $y_i$ and $y_j$ if $j > i + 1$, and otherwise $\pi_{ij} = \rho_{ij}$, the $(i, j)$th element of $R$. In addition, the partial correlations connect to the correlations in $R$ recursively via

$$\pi_{j(j+k)} = \frac{\rho_{j(j+k)} - r_1^T(j,k)\, R_2(j,k)^{-1} r_3(j,k)}{[1 - r_1^T(j,k) R_2(j,k)^{-1} r_1(j,k)]^{1/2}\,[1 - r_3^T(j,k) R_2(j,k)^{-1} r_3(j,k)]^{1/2}}$$

for $j = 1, \dots, m-k$ and $k = 1, \dots, m-1$, where $r_1^T(j,k) = (\rho_{j(j+1)}, \dots, \rho_{j(j+k-1)})$, $r_3^T(j,k) = (\rho_{(j+k)(j+1)}, \dots, \rho_{(j+k)(j+k-1)})$, and $R_2(j,k)$ is the correlation matrix corresponding to the components $j+1, \dots, j+k-1$. A feature shared with our approach is that the elements $\pi_{ij}$ of $\Pi$ can vary freely in the interval $(-1, 1)$, and take values in the entire real line after Fisher's z transformation. Daniels and Pourahmadi (2009) focused on a Bayesian approach to inference rather than studying the efficient maximum likelihood approach. Clearly, the mapping from correlation coefficients to partial autocorrelation coefficients is complicated, and the calculation of the Fisher information can be intractable for statistical inference. How to apply this parametrization to unbalanced longitudinal data also remains underexplored.

2.3 An unconstrained parametrization for correlation matrices and its interpretations

In this study, we propose to parametrize $R_i$ via hyperspherical coordinates through the decomposition

$$R_i = T_i T_i^T, \tag{2}$$

where $T_i$ is a lower triangular matrix given by

$$T_i = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
c_{i21} & s_{i21} & 0 & \cdots & 0 \\
c_{i31} & c_{i32}s_{i31} & s_{i32}s_{i31} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_{im_i1} & c_{im_i2}s_{im_i1} & c_{im_i3}s_{im_i2}s_{im_i1} & \cdots & \prod_{l=1}^{m_i-1} s_{im_il}
\end{pmatrix} \tag{3}$$

and $c_{ijk} = \cos(\phi_{ijk})$ and $s_{ijk} = \sin(\phi_{ijk})$ are trigonometric functions of angles $\phi_{ijk}$. In other words, the nonzero entries of the lower triangular matrix $T_i$ are given by $T_{i11} = 1$, $T_{ij1} = \cos(\phi_{ij1})$ for $j = 2, \dots, m_i$, and

$$T_{ijk} = \begin{cases} \cos(\phi_{ijk}) \prod_{l=1}^{k-1} \sin(\phi_{ijl}), & 2 \le k < j \le m_i; \\ \prod_{l=1}^{k-1} \sin(\phi_{ijl}), & k = j,\ j = 2, \dots, m_i. \end{cases} \tag{4}$$

Here the total number of angles $\phi_{ijk}$ $(1 \le k < j \le m_i)$ in (3) and (4) is $m_i(m_i-1)/2$, the same as the number of free parameters in an unconstrained correlation matrix. The decomposition of $R_i$ in (2) has several merits. First, the columns of $T_i^T$ are unit vectors in $\mathbb{R}^{m_i}$, and the trigonometric expression (3) ensures that the diagonal elements of $T_i T_i^T$ equal 1 and that all other elements fall between $-1$ and $1$. Second, $T_i T_i^T$ is always non-negative definite, satisfying the requirements of a correlation matrix. On the other hand, since $R_i$ is a symmetric positive semidefinite matrix, there always exists a lower triangular matrix $T_i$ such that $R_i = T_i T_i^T$. By elementary algebra, we can take the angles in (3) and (4) as

$$\phi_{ijk} = \arccos\left( T_{ijk} \Big/ \prod_{l=1}^{k-1} \sin(\arccos(T_{ijl})) \right), \quad 1 \le k < j \le m_i, \tag{5}$$

where $\prod_{l=1}^{0}$ is taken as 1. Therefore, (5) can be viewed as a mapping from a general correlation matrix $R_i$ to the angles. In addition, the mapping (5) is unique upon restricting the range of the angles $\{\phi_{ijk}\}$ to $[0, \pi)$ (Rapisarda et al., 2007). For unique model identification, we take all angles $\phi_{ijk}$ to be in $[0, \pi)$ hereinafter. Hence a model for the angles $\{\phi_{ijk}\}$ in (3) is equivalent to a model for the correlation matrix. The most prominent advantage of (3) is that the angles $\{\phi_{ijk}\}$, as parameters, are unconstrained on the range $[0, \pi)$. If needed, a further transformation, such as one involving the arctan transform, can give an unconstrained parametrization on the entire real line $(-\infty, \infty)$. We observe in data analysis and simulation studies that it suffices to model the angles without further transformation. This provides a convenient and flexible modeling device for the correlation matrix. This parametrization was previously studied by Creal et al. (2011) in a different context, for modeling correlations among multivariate financial time series.
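To make the mapping concrete, here is a minimal Python sketch (ours, not the authors' code) of (3)-(5): angles_to_T builds the lower triangular factor from the angles, corr_to_angles inverts the map through the Cholesky factor, and a round trip on a small correlation matrix checks the two. The function names and the clipping guard are our own.

```python
import numpy as np

def angles_to_T(phi):
    """Build the lower triangular T of (3)-(4); phi[j, k] is used for k < j."""
    m = phi.shape[0]
    T = np.zeros((m, m))
    T[0, 0] = 1.0
    for j in range(1, m):
        prod = 1.0                                # running product of sines
        for k in range(j):
            T[j, k] = np.cos(phi[j, k]) * prod    # cos(phi_jk) * prod_{l<k} sin(phi_jl)
            prod *= np.sin(phi[j, k])
        T[j, j] = prod                            # diagonal: product of all sines
    return T

def corr_to_angles(R):
    """Recover the angles of (5) from a correlation matrix via its Cholesky factor."""
    T = np.linalg.cholesky(R)
    m = R.shape[0]
    phi = np.zeros((m, m))
    for j in range(1, m):
        prod = 1.0
        for k in range(j):
            phi[j, k] = np.arccos(np.clip(T[j, k] / prod, -1.0, 1.0))
            prod *= np.sin(phi[j, k])
    return phi

# Round trip on an arbitrary 3 x 3 correlation matrix:
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])
T = angles_to_T(corr_to_angles(R))
assert np.allclose(T @ T.T, R)
```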

By taking $\sum_{l=1}^{0} = 0$ and noting that $T_{ijk}$ in (4) can be recursively expressed in terms of the correlations $\rho_{ijk}$ as

$$T_{ijk} = \frac{\rho_{ijk} - \sum_{l=1}^{k-1} T_{ijl} T_{ikl}}{\sqrt{1 - \sum_{l=1}^{k-1} T_{ikl}^2}},$$

we observe from (5) that the angles can be expressed as functions of the correlations. On the other hand, from (2), $\rho_{ijk}$ can be expressed as a function of the angles $\phi_{ijk}$:

$$\rho_{ijk} = \cos(\phi_{ijk}) \prod_{l=1}^{k-1} \sin(\phi_{ijl}) \sin(\phi_{ikl}) + \sum_{l=1}^{k-1} \left[ \cos(\phi_{ijl}) \cos(\phi_{ikl}) \prod_{t=1}^{l-1} \sin(\phi_{ijt}) \sin(\phi_{ikt}) \right] \tag{6}$$

for $1 \le k < j \le m_i$. We note that (6) establishes a hierarchical connection between the correlations and the angles. More specifically, $\rho_{ijk}$ depends on $\phi_{ijk}$ only through $\cos(\phi_{ijk})$ given the precedent angles $\phi_{ist}$ $(2 \le s < j,\ 1 \le t < k)$, or equivalently, given the correlations $\rho_{ist}$ $(2 \le s < j,\ 1 \le t < k)$. Such a hierarchical connection reflects the aforementioned intrinsic structural requirements of the correlation matrix. From (6), we see that $\partial \rho_{ijk} / \partial \phi_{ijk} = -\sin(\phi_{ijk}) \prod_{l=1}^{k-1} \sin(\phi_{ijl}) \sin(\phi_{ikl}) \le 0$ because all angles are in $[0, \pi)$, implying that $\rho_{ijk}$ is monotone decreasing in $\phi_{ijk}$. Furthermore, because in practice the dependence between measurements in a longitudinal study generally decays with the time lag, a small $\rho_{ijk}$ is expected between measurements with a large time lag. Since $\cos(\phi)$ is monotone decreasing on $[0, \pi)$, it is natural to expect from (6) that $\phi_{ijk}$ increases with the time lag between the $j$th and $k$th measurements. Our empirical data analysis confirms this expectation; see Figures 3 and 6 in our numerical examples.

Figure 1: A 3-dimensional geometric representation of the correlations and the angles. Here, $e_1$, $e_2$ and $e_3$ are the canonical basis vectors; $t_1$, $t_2$, $t_3$ are three unit vectors in the unit ball whose pairwise angles have cosines equal to the respective correlations between three longitudinal measurements; $\phi_{21}$, $\phi_{31}$, $\phi_{32}$ are the angles in the parametrization (2) and (4).

We now illustrate the geometric connection between the correlations and the angles with a graph for $m = 3$. In Figure 1, $t_1$, $t_2$, $t_3$ are the three unit vectors in $T$ on the 3-dimensional unit ball, where the cosines of the pairwise angles equal the respective correlations among three longitudinal measurements; $\phi_{21}$, $\phi_{31}$, $\phi_{32}$ are the angles in the parametrization (2) and (4). Clearly, $\phi_{21}$, the angle between $t_2$ and $t_1$, and $\phi_{31}$, the angle between $t_3$ and $t_1$, directly reflect the correlations between the first measurement and the other two. Once $\phi_{21}$ and $\phi_{31}$, or equivalently the correlations between the first and second and between the first and third measurements, are specified, there is a one-to-one correspondence between $\phi_{32}$ and the correlation between the second and third measurements. Equivalently, there is a one-to-one correspondence between the three angles $\phi_{21}$, $\phi_{31}$, $\phi_{32}$ and the corresponding correlations among the three measurements. More detail on the general geometric interpretation of the new parametrization is available in the Supplementary Material.

2.4 The proposed approach

From the above discussion, $R_i$ parametrized by the angles $\phi_{ijk}$ $(i = 1, \dots, n;\ 1 \le k < j \le m_i)$ can be viewed as an equivalent expression of a correlation matrix in a hyperspherical coordinate system. Since $R_i = T_i T_i^T$ is guaranteed to be positive semidefinite and the angles in the parametrization (3) are unconstrained on $[0, \pi)$, we are free to characterize these angles via regression, as functions of covariates. In practice, this rationale can be assessed initially by examining the empirical variances and correlations of the observed longitudinal data. For a balanced longitudinal study, such as the cattle example in Section 3, an initial version of the angles $\phi_{ijk}$ can be obtained from the empirical correlation matrix of the standardized residuals after a mean-variance model fit. Examining the plot of those angles against the time lag in Figure 3, we clearly observe a curvature that supports some functional association. From there, appropriate models can be used to describe the curvature.

To illustrate the merits of the proposed parametrization more clearly, we now examine how it behaves for two commonly used correlation structures.

Example 1. For an $m \times m$ compound symmetry correlation matrix $R = (1-\rho)I_m + \rho J_m$, where $I_m$ is the $m$-dimensional identity matrix and $J_m$ is an $m \times m$ matrix of 1's, the elements of the matrix $T$ are

$$T_{11} = 1, \quad T_{j1} = \rho, \quad T_{jj} = \left(1 - \frac{(j-1)\rho^2}{1 + (j-2)\rho}\right)^{1/2},\ j \ge 2; \qquad T_{jk} = \rho\{1 + (k-1)\rho\}^{-1} T_{kk},\ 2 \le k < j \le m.$$

The angles are then specified by (5).

Example 2. For an AR(1) correlation matrix $R = (\rho^{|j-k|})_{j,k=1}^{m}$, the elements of $T$ are $T_{j1} = \rho^{j-1}$, $j = 1, \dots, m$, and $T_{jk} = \rho^{j-k}(1-\rho^2)^{1/2}$, $2 \le k \le j \le m$. The angles are then specified by (5) and are functions of $\rho$ and the times $j$ and $k$.

The above two examples show the equivalence of the two parametrizations, one under the traditional and the other under the proposed parametrization. To examine our model more closely, we plot in Figure 2 the angles $\phi_{jk}$ versus the time lag $|j-k|$ for a compound symmetry and an AR(1) correlation structure with $\rho = 0.5$. These scatter plots indicate approximate functional relationships. More specifically, the compound symmetry structure can be explained by polynomials in the time lag together with a factor corresponding to the measurement ordering, while the AR(1) structure can be well explained by polynomials in the time lag. Although not exact, a regression model constructed on these angles represents a fairly good approximation to these commonly used correlations. Thus the proposed parametrization is flexible and adaptive for capturing the dynamics in these correlation structures.
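As a quick numerical check of Examples 1 and 2 (our illustration, reusing the corr_to_angles sketch from Section 2.3, which is assumed to be in scope), the snippet below computes the angles of the $5 \times 5$ compound symmetry and AR(1) matrices with $\rho = 0.5$ and lists them against the time lag, mirroring what Figure 2 displays.

```python
import numpy as np  # assumes corr_to_angles from the earlier sketch is in scope

m, rho = 5, 0.5
R_cs = (1 - rho) * np.eye(m) + rho * np.ones((m, m))                 # compound symmetry
R_ar = rho ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))  # AR(1)

for name, R in [("CS", R_cs), ("AR(1)", R_ar)]:
    phi = corr_to_angles(R)
    for j in range(1, m):
        for k in range(j):
            print(f"{name}: row {j + 1}, lag {j - k}, phi = {phi[j, k]:.3f}")
```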

Figure 2: Plots of the angles $\phi$ versus time lag for the $5 \times 5$ (a) compound symmetry and (b) AR(1) correlation matrices with $\rho = 0.5$. Points are labeled by their row numbers in $T$, where circle, triangle, square, and cross correspond to the second, third, fourth, and fifth rows.

Motivated by the above considerations, we propose a joint regression model for the mean, the variances, and the correlations:

$$g(\mu_{ij}) = x_{ij}^T\beta, \quad \log \sigma_{ij}^2 = z_{ij}^T\lambda, \quad \phi_{ijk} = w_{ijk}^T\gamma, \tag{7}$$

earlier, for the ith subject, φijk (i = 1, . . . , n; 1 ≤ k < j ≤ mi) reflects the correlation

between the jth measurement and the kth measurement once its correlations with the

first k − 1 measurements are specified. We impose the regression model on the log-

variance so that the variance function is automatically non-negative, and the support

of λ is unconstrained. In this formulation, xij, zij and wijk are p× 1, q× 1 and d× 1

vectors of generic covariates available, g(·) is a known link function, usually taken as

an identity function as in linear models, and β, γ and λ are unknown parameters

for characterizing the mean, the variance and the correlation. The covariates xij and

zij are specified in a way similar to those in heteroscedastic regression models. The

covariate wijk should include time varying covariates that may depend on time tij and

tik as it is used for capturing the correlation between the responses at these two times.

13

Page 14: A Joint Modeling Approach for Longitudinal Studies · 2013. 12. 26. · A crucial consideration in longitudinal data analysis is that the components of y i are correlated, and it

In practice, natural candidates for wijk include (tij, tik)T and its higher order terms, or

a polynomial of the time lag (tik− tij) such that the resulting correlation is stationary

(Ye and Pan, 2006). In practice, these covariates can be specified by graphical tools

such as the technique we employed for analyzing the balanced cattle data in Section

3, or by model selection techniques where many potential covariates of interest are

fitted initially and then selected. As for the range of φijk, our experience from data

analysis and simulations is that the estimated φijk’s always fall in the range [0, π). If

its range is a concern, transformation such as arctan can be applied to ensure that

φijk falls in [0, π). Remarkably, our framework generalizes easily to nonparametric

and semiparametric models, although the focus of the paper is on parametric models

as in (7).

To fit the joint model specified by (4) and (7) under the normality assumption, we note that

$$\frac{\partial T_{ijk}}{\partial \gamma} = \begin{cases} T_{ijk}\left[ -w_{ijk}\tan(\phi_{ijk}) + \sum_{l=1}^{k-1} w_{ijl}/\tan(\phi_{ijl}) \right], & k < j; \\ T_{ijk} \sum_{l=1}^{k-1} w_{ijl}/\tan(\phi_{ijl}), & k = j, \end{cases} \tag{8}$$

where, for simplicity, the notation $\sum_{l=1}^{0}$ is understood as 0 throughout this paper. Letting $\mu_i = (\mu_{i1}, \dots, \mu_{im_i})^T$, $D_i = \mathrm{diag}(\sigma_{i1}, \dots, \sigma_{im_i})$, $\theta = (\beta^T, \gamma^T, \lambda^T)^T$, and noting that $\Sigma_i = D_i R_i D_i$, minus twice the log-likelihood is, up to an additive constant,

$$-2l(\theta) = \sum_{i=1}^n \log|D_i R_i D_i| + \sum_{i=1}^n r_i^T D_i^{-1} R_i^{-1} D_i^{-1} r_i, \tag{9}$$

where $r_i = y_i - \mu_i$. We define $\Delta_i = \Delta_i(X_i\beta) = \mathrm{diag}\{\dot g^{-1}(x_{i1}^T\beta), \dots, \dot g^{-1}(x_{im_i}^T\beta)\}$, where $\dot g^{-1}(\cdot)$ is the derivative of the inverse link function $g^{-1}(\cdot)$, and we note that $\mu(\cdot) = g^{-1}(\cdot)$. We also define $b_{ijk} = \sum_{l=k}^{j} \frac{\partial T_{ilk}}{\partial \gamma} a_{ijl}$, with $a_{ijl}$ the $(j, l)$ element of $T_i^{-1}$, and $h_i = \mathrm{diag}\{R_i^{-1} D_i^{-1} r_i r_i^T D_i^{-1}\}$. Let $\varepsilon_i = (\varepsilon_{i1}, \dots, \varepsilon_{im_i})^T = T_i^{-1} D_i^{-1} r_i$, so that $\varepsilon_{i1}, \dots, \varepsilon_{im_i}$ are independent standard normal random variables, and denote by $1_{m_i}$ the $m_i \times 1$ vector of ones. The following score equations based on the likelihood are obtained by direct calculation:

$$U_1(\beta; \gamma, \lambda) = \sum_{i=1}^n X_i^T \Delta_i \Sigma_i^{-1}(y_i - \mu_i) = 0,$$

$$U_2(\gamma; \beta, \lambda) = \sum_{i=1}^n \sum_{j=1}^{m_i} \left\{ \frac{\partial \log T_{ijj}}{\partial \gamma}(\varepsilon_{ij}^2 - 1) + \varepsilon_{ij} \sum_{k=1}^{j-1} b_{ijk} \varepsilon_{ik} \right\} = 0, \tag{10}$$

$$U_3(\lambda; \beta, \gamma) = \frac{1}{2} \sum_{i=1}^n Z_i^T (h_i - 1_{m_i}) = 0.$$

We then estimate $\theta$ by minimizing (9) via an iterative Newton-Raphson algorithm. Since the solutions satisfy equations (10), the parameters $\beta$, $\gamma$ and $\lambda$ can be solved sequentially, one at a time with the other parameters kept fixed in the optimization. More specifically, we apply the following quasi-Fisher scoring algorithm:

1. Initialize the parameters as $\beta^{(0)}$, $\gamma^{(0)}$ and $\lambda^{(0)}$. Set $k = 0$.

2. Compute $\Sigma_i$ using $\gamma^{(k)}$ and $\lambda^{(k)}$. Update $\beta$ as

$$\beta^{(k+1)} = \beta^{(k)} + \left[ I_{11}^{-1}(\theta) U_1(\beta; \gamma, \lambda) \right]\Big|_{\beta = \beta^{(k)}}. \tag{11}$$

3. Given $\beta = \beta^{(k+1)}$, update $\gamma$ and $\lambda$ using

$$\begin{pmatrix} \gamma^{(k+1)} \\ \lambda^{(k+1)} \end{pmatrix} = \begin{pmatrix} \gamma^{(k)} \\ \lambda^{(k)} \end{pmatrix} + \left[ \begin{pmatrix} I_{22}(\theta) & I_{23}(\theta) \\ I_{32}(\theta) & I_{33}(\theta) \end{pmatrix}^{-1} \begin{pmatrix} U_2(\gamma; \beta, \lambda) \\ U_3(\lambda; \beta, \gamma) \end{pmatrix} \right]\Bigg|_{\gamma = \gamma^{(k)}, \lambda = \lambda^{(k)}}. \tag{12}$$

4. Set $k \leftarrow k + 1$ and repeat Steps 2-3 until a pre-specified convergence criterion is met.

The expressions for $I_{jk}(\theta)$, $j, k = 1, 2, 3$, are given in the next subsection. Note that this algorithm updates $\gamma$ and $\lambda$ together. This choice is motivated by the asymptotic dependence of these two parameters, as can be seen from Theorem 1 in Section 2.5. Because the likelihood is not a globally convex function of the parameters on their support, the algorithm is only guaranteed to converge to a local optimum. To choose appropriate initial values, we set the $\Sigma_i$'s to identity matrices initially when solving for $\beta$ in (11), using the least squares estimator as the initial value. We then initiate $\gamma$ in (12) by assuming $R_i = I_{m_i}$, the $m_i$-dimensional identity matrix for the $i$th subject, and use the least squares estimator based on the residuals to obtain an initial value for $\lambda$. It is not difficult to see that these initial estimators for $\beta$ and $\lambda$ are $\sqrt{n}$-consistent. From the theoretical analysis in Theorem 1 in the next subsection and the proofs in the Appendix, the negative log-likelihood is asymptotically convex in a small neighborhood of the true parameters. To ensure that the optimum is global, we may try multiple initial values for $\gamma$. In our data analysis and simulation studies, the algorithm was quite stable, with no multiple optima found, and convergence was usually obtained within several iterations.
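To make the pieces above concrete, the sketch below (ours, not the authors' code) evaluates the objective (9) under model (7) with an identity link, assembling $\Sigma_i = D_i R_i D_i$ from the three linear predictors; the scoring updates (11)-(12), or a generic numerical optimizer, could then be run on it. The per-subject data layout (lists of design matrices X[i], Z[i] and an $(m_i, m_i, d)$ array W[i] holding the $w_{ijk}$) is our own convention.

```python
import numpy as np

def neg2loglik(beta, lam, gamma, y, X, Z, W):
    """-2 log-likelihood (9) under model (7) with identity link g."""
    total = 0.0
    for yi, Xi, Zi, Wi in zip(y, X, Z, W):
        mi = len(yi)
        mu = Xi @ beta                             # mu_ij = x_ij' beta
        D = np.diag(np.exp(0.5 * (Zi @ lam)))      # sigma_ij = exp(z_ij' lam / 2)
        phi = np.tensordot(Wi, gamma, axes=1)      # phi_ijk = w_ijk' gamma
        T = np.zeros((mi, mi))
        T[0, 0] = 1.0
        for j in range(1, mi):                     # lower triangular factor of (3)
            prod = 1.0
            for k in range(j):
                T[j, k] = np.cos(phi[j, k]) * prod
                prod *= np.sin(phi[j, k])
            T[j, j] = prod
        Sigma = D @ (T @ T.T) @ D                  # Sigma_i = D_i R_i D_i
        r = yi - mu
        total += np.linalg.slogdet(Sigma)[1] + r @ np.linalg.solve(Sigma, r)
    return total
```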

2.5 Theoretical properties

Let $I(\theta) = -E(\partial^2 l / \partial\theta\partial\theta^T)$ be the negative expected Hessian matrix. Our theoretical analysis assumes the following regularity conditions:

C1. The dimensions $p$, $q$ and $d$ of the covariates $x_{ij}$, $w_{ijk}$ and $z_{ij}$ are fixed; $n \to \infty$ and $\max_{1 \le i \le n} m_i$ is bounded.

C2. The parameter space $\Theta$ of $(\beta^T, \gamma^T, \lambda^T)^T$ is a compact set in $\mathbb{R}^{p+q+d}$, and the true value $\theta_0 = (\beta_0^T, \gamma_0^T, \lambda_0^T)^T$ is in the interior of $\Theta$.

C3. As $n \to \infty$, $n^{-1} I(\theta_0)$ converges to a positive definite matrix $\mathcal{I}(\theta_0)$.

Condition C1 is routinely made for longitudinal data from a practical perspective. Condition C2 is a conventional assumption for theoretical analysis of the maximum likelihood approach. Condition C3 is a natural requirement for regression analysis in unbalanced longitudinal data modeling. We establish the following asymptotic results for the maximum likelihood estimator.

Theorem 1. Under regularity conditions C1-C3, as $n \to \infty$, we have: (a) the maximum likelihood estimator $(\hat\beta^T, \hat\gamma^T, \hat\lambda^T)^T$ is strongly consistent for the true value $(\beta_0^T, \gamma_0^T, \lambda_0^T)^T$; and (b) $(\hat\beta^T, \hat\gamma^T, \hat\lambda^T)^T$ is asymptotically normally distributed, with $\sqrt{n}(\hat\theta - \theta_0) \to N[0, \{\mathcal{I}(\theta_0)\}^{-1}]$, where $\mathcal{I}(\theta_0)$ is the Fisher information matrix defined in Condition C3.

Following (10), it is shown in the Appendix that the block components of $I(\theta)$ satisfy

$$I_{11}(\theta) = \sum_{i=1}^n X_i^T \Delta_i \Sigma_i^{-1} \Delta_i X_i, \quad I_{12}(\theta) = I_{21}^T(\theta) = 0, \quad I_{13}(\theta) = I_{31}^T(\theta) = 0,$$

$$I_{23}(\theta) = I_{32}^T(\theta) = \sum_{i=1}^n \sum_{j=1}^{m_i} \left[ \frac{\partial \log T_{ijj}}{\partial \gamma} z_{ij}^T + \frac{1}{2} \sum_{k=1}^{j-1} b_{ijk} \sum_{l=k}^{j} T_{ilk} a_{ijk} z_{il}^T \right],$$

$$I_{22}(\theta) = \sum_{i=1}^n \sum_{j=1}^{m_i} \left( 2 \frac{\partial \log T_{ijj}}{\partial \gamma} \frac{\partial \log T_{ijj}}{\partial \gamma^T} + \sum_{k=1}^{j-1} b_{ijk} b_{ijk}^T \right),$$

$$I_{33}(\theta) = \frac{1}{4} \sum_{i=1}^n Z_i^T (I_{m_i} + R_i^{-1} \circ R_i) Z_i,$$

where $\circ$ denotes the Hadamard product. Since $\hat\beta$, $\hat\gamma$ and $\hat\lambda$ are consistent estimators of $\theta_0$, the asymptotic covariance matrix $\mathcal{I}$ can be consistently estimated by a matrix whose block components are

$$\hat{\mathcal{I}}_{ij} = n^{-1} I_{ij}(\hat\theta), \quad (i, j = 1, 2, 3). \tag{13}$$

From Theorem 1, $\hat\beta$ is asymptotically independent of $\hat\gamma$ and $\hat\lambda$. This is not surprising for statistical inference with normally distributed data, because $\beta$ concerns the mean function while $\gamma$ and $\lambda$ are parameters of the covariations. From the generalized estimating equations point of view and (10), we also see that the optimal efficiency for estimating $\beta$ is assured whenever the $\Sigma_i$'s, or the models for $\sigma^2_{ij}$ and $\phi_{ijk}$, are correctly specified, reminiscent of the results for generalized estimating equations in Liang and Zeger (1986). If the model for $\Sigma_i$ is mis-specified, $\hat\beta$ is still consistent and asymptotically normal by a result in Liang and Zeger (1986), although the asymptotic variance of $\hat\beta$ then takes a sandwich form. On the other hand, the two covariation parameters $\gamma$ and $\lambda$ are not asymptotically independent in general, unlike the corresponding parameters in the modified Cholesky decompositions studied by Pourahmadi (1999) and Zhang and Leng (2012).

We now discuss a more general result that only requires the mean model, in terms of $\beta$, to be correctly specified. Recall that the Kullback-Leibler divergence between a true model with density $f_0$ and mean $\mu_0 = \mu(X^T\beta_0)$ and a working model $f_c \sim N_m(\mu(\beta), \Sigma)$ is defined as

$$\begin{aligned}
\mathrm{KL}(f_0|f_c) &= E_{f_0}\log\left(\frac{f_0}{f_c}\right) = E_{f_0}\log(f_0) - E_{f_0}\log(f_c) \\
&= E_{f_0}\log(f_0) - E_{f_0}\left[ -\frac{1}{2}(Y - \mu)^T\Sigma^{-1}(Y - \mu) - \frac{1}{2}\log|\Sigma| - \frac{m}{2}\log(2\pi) \right] \\
&= \frac{1}{2}\left[ \mathrm{tr}(\Sigma_0\Sigma^{-1}) + (\mu_0 - \mu)^T\Sigma^{-1}(\mu_0 - \mu) + \log|\Sigma| \right] + E_{f_0}\log(f_0) + \frac{m}{2}\log(2\pi).
\end{aligned}$$

We define a new population parameter vector $\theta_* = (\beta_*^T, \gamma_*^T, \lambda_*^T)^T$ as the minimizer of $\mathrm{KL}(f_0|f_c)$.

Corollary 1. Under regularity conditions C1-C3 with $\theta_0$ replaced by $\theta_*$, as $n \to \infty$, the maximum likelihood estimator $\hat\theta$ is strongly consistent for $\theta_*$; in particular, if the mean model in (7) is correctly specified, then $\beta_* = \beta_0$ and $\hat\beta$ is a consistent estimator of the true parameter vector $\beta_0$.

The first part of the corollary follows directly from Theorem 2.2 of White (1982). Furthermore, when the mean structure is correctly specified, we can see from the definition of $\mathrm{KL}(f_0|f_c)$ that the minimizer $\beta_*$ must equal $\beta_0$. Thus the consistency of $\hat\beta$ neither relies on the normality assumption nor requires correct specification of the variance and correlation models in (7). This result is again similar to that for generalized estimating equations (Liang and Zeger, 1986). As long as the mean model is correctly specified, the estimation of the covariance does not affect the consistency of the mean parameter, only its efficiency.

3 Numerical Examples

3.1 Cattle data

We first apply our approach to a balanced longitudinal data set from Kenward (1987), in which cattle were randomly assigned to two treatment groups A and B and their weights were measured 11 times over a 133-day period. As in Pourahmadi (2000) and Pan and MacKenzie (2003), we focus on the 30 animals in group A, using a saturated mean model with 11 parameters. Pan and MacKenzie (2003) found it suitable to apply three polynomials to characterize the mean, the autoregressive coefficients and the log innovation variances in model (1) based on the modified Cholesky decomposition. Using the Bayesian information criterion (BIC), they found the optimal triplet of polynomial orders to be (8, 4, 3).

Here we re-examine this data set with our joint modeling approach. For balanced longitudinal data, the corresponding angles $\phi_{ijk}$ of the empirical correlation matrix can be calculated using (5). Examining the angles against the time lag between measurements in Figure 3, we see a clear curvature pattern that can be reasonably captured by a polynomial. Figure 3 also shows a curvature pattern in the log sample variances against the time of measurement.

Motivated by Figure 3, we model the angles in the proposed correlation parametrization by a polynomial of the time lag, which defines $w_{ijk}$. Following Pourahmadi (1999) and Pan and MacKenzie (2003), we apply two polynomials of time, defining $x_{ij}$ and $z_{ij}$, for modeling the mean and the log-variances. The following BIC criterion is used to select the optimal model (Pan and MacKenzie, 2003; Zhang and Leng, 2012):

$$\mathrm{BIC}(p, q, d) = -2 l_{\max}/n + (p + q + d + 3)\log(n)/n, \tag{14}$$

where $p$, $q$, $d$ are respectively the orders of the three aforementioned polynomials, and $l_{\max}$ is the maximum of the corresponding log-likelihood for the given orders.
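As a sanity check (ours), (14) is a one-line computation; with the $n = 30$ animals and the optimal triplet reported below, it reproduces the corresponding BIC value.

```python
import math

def bic(lmax, p, q, d, n):
    """BIC criterion (14)."""
    return -2 * lmax / n + (p + q + d + 3) * math.log(n) / n

print(round(bic(-755.0, 8, 2, 2, 30), 2))  # 52.03, matching Table 1
```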

Figure 3: Sample regressograms and fitted curves for the cattle data: (a) the sample $\phi_{ijk}$ versus time lag; (b) the sample log-variance versus time. The solid black lines are fitted LOWESS curves, and the solid red lines are the fits from the proposed model. Dashed curves represent asymptotic 95% pointwise confidence intervals.

The BIC criterion is known to be consistent in selecting the truth among candidate models (Shao, 1997). The optimal triplet using our approach is (8, 2, 2), with a BIC value of 52.03 and $l_{\max} = -755.0$. Comparing with the optimal triplet (8, 4, 3) and $l_{\max} = -1045.40$ for Pan and MacKenzie (2003)'s best model, we clearly see that our method produces a larger likelihood and a smaller BIC value, indicating that the proposed parametrization better fits the data with a much more parsimonious model. It is interesting that quadratic polynomials suffice for modeling the variances and the correlations, yielding a simpler model than that from the approach in Pan and MacKenzie (2003). As discussed earlier, a likely reason for the improvement is that the proposed modeling approach is more flexible and adaptive, so it better fits the longitudinal data. We note that the fitted model for the angles implies a monotone relationship with the time lag, consistent with (6).

Table 1: Kenward's cattle data. Comparison of various models using the new joint modeling approach. *: the optimal triplet of the proposed method.

(p, q, d)     No. of parameters   lmax      BIC
(8, 2, 2)*    15                  -755.00   52.03
(6, 1, 1)     11                  -788.90   53.17
(3, 3, 3)     12                  -800.27   54.71
(4, 3, 4)     14                  -772.61   53.09
(7, 2, 2)     14                  -761.67   52.37
(8, 4, 7)     22                  -749.90   52.49
(9, 3, 1)     16                  -756.92   52.28
(9, 3, 4)     19                  -752.82   52.34
(9, 5, 8)     25                  -748.29   52.72

Figure 4: Plots of the empirical correlations and variances against the fitted correlations and variances: (a) correlations; (b) variances.

The likelihood values and BIC values for a number of selected models are given in Table 1 for comparison. Figure 4 depicts the fitted correlations and variances versus the empirical correlations and variances, respectively. Clearly, there is close agreement between the sample quantities and the fitted quantities, implying that the proposed method produces a fairly good fit. To compare the accuracy of the empirical estimator and our estimator, we re-sample the subjects in the data set and re-fit the optimal regression model. For each bootstrap sample, the empirical correlations and the estimated correlations are calculated. Figure 5 shows that the standard deviations of the empirical correlations are generally larger than those of the correlations fitted by our approach. This is not surprising, since we apply a model that captures the overall trend of these correlations as a function of the time lag, and thus the noise in estimating the correlations is reduced by incorporating additional data information. We now report the time for model selection via all-subset selection. Running on a Dell R910 2.00 GHz workstation, we specify the three polynomial orders from 1 to 11 for $\mu_{ij}$, and from 1 to 10 for both $\phi_{ijk}$ and $\sigma_{ij}$. The user process time of the R code is about seven minutes for fitting $11 \times 10 \times 10$ models, so the computational cost is manageable. We have also used a computationally more efficient strategy for model selection, as discussed in Zhang and Leng (2012), fitting $11 + 10 \times 10$ models thanks to the asymptotic orthogonality of the mean parameter and the variance parameters. The computational time is reduced about tenfold, and the same optimal model is identified. Overall, we conclude that our approach is very effective for modeling the cattle data.

Figure 5: The standard deviations of the correlations fitted by the proposed approach against the standard deviations of the empirical correlations. The standard deviations are calculated using a bootstrap that re-samples the subjects in Kenward's cattle data 100 times.

3.2 CD4 cell data

We apply the proposed joint modeling approach to an unbalanced data set, the CD4 cell study, previously analyzed by Zeger and Diggle (1994) and Ye and Pan (2006). The CD4 cell counts of 369 HIV-infected men, with a total of 2,376 values, were collected for this study, covering a period of approximately eight and a half years. The data are observational, and the counts were measured at different times for each individual. The number of measurements per individual varies from 1 to 12, and the time points are not equally spaced. As in Zeger and Diggle (1994), square roots of the CD4 counts are used to bring the response variable closer to the Gaussian distribution. Clearly, this is a highly unbalanced data set, and the method of Daniels and Pourahmadi (2009), developed mainly for balanced data, is not applicable.

The objective of our analysis is to jointly model the mean, variance, and correlation structures. Using BIC for model selection, we find the optimal polynomials in our model: a degree-eight polynomial in time for $x_{ij}$ in the mean function, a linear function of the time lag for $w_{ijk}$ in the angles of the correlation model, and another linear function of time for $z_{ij}$ in the log-variances. The optimal BIC value turns out to be 26.73, with $l_{\max} = -4892.72$. Figure 6 shows the fitted curves of the mean, the angles in the correlation parametrization, and the log-variances. From Figure 6, we again observe a monotone increasing relationship of the fitted angles with the time lag. For comparison, we also apply the modified Cholesky decomposition approach of Pourahmadi (1999); the comparison is made in Table 2. We find that the best model from our approach is more parsimonious, has a larger likelihood, and is more desirable in terms of BIC than the alternative. From Table 2, we also find that our approach has larger likelihood values and smaller BICs, both at the optimal models and at models of the same complexity. Hence, we demonstrate the merits and wide applicability of our method for general unbalanced longitudinal data. When polynomial orders up to ten are used to select an optimal model for the three components in the joint modeling approach, the total CPU time for fitting $10^3$ models is about 60 hours; this is reduced to about 6 hours if the computationally thrifty strategy in Zhang and Leng (2012) is applied, with the same optimal model identified.

Figure 6: The CD4 cell data. Fitted curves of (a) the mean against time, (b) the angles against the time lag, and (c) the log-variances against time. Dashed curves represent asymptotic 95% confidence intervals.

3.3 Simulation studies

In this section, we investigate the finite sample performance of the proposed estimation and inference methods, and compare our method with the modified Cholesky decomposition approach. We conduct simulations in two studies.


Table 2: The CD4 data. Comparison of different models between our approach and the modified Cholesky decomposition (MCD) approach of Pourahmadi (2000) and Pan and MacKenzie (2003). *: the optimal triplet of the proposed method. **: the optimal triplet of the MCD method.

                                 Proposed               MCD
(p, q, d)    No. of parameters   lmax      BIC          lmax      BIC
(8, 1, 1)*   13                  -4892.72  26.72*       -5008.80  27.36
(8, 3, 1)**  15                  -4890.44  26.75        -4979.23  27.23**
(6, 1, 1)    11                  -4902.17  26.75        -5018.47  27.38
(3, 3, 3)    12                  -4919.52  26.85        -5006.18  27.33
(4, 3, 4)    14                  -4902.10  26.80        -4995.51  27.30
(8, 3, 3)    17                  -4886.36  26.76        -4974.70  27.24
(8, 4, 7)    22                  -4881.76  26.81        -4971.74  27.30
(9, 3, 1)    17                  -4888.34  26.75        -4983.73  27.27
(9, 3, 4)    19                  -4881.30  26.76        -4974.15  27.26
(9, 5, 8)    26                  -4877.16  26.84        -4968.30  27.33

Study 1. In the first study, data are generated from the proposed model and the proposed approach is then applied, to illustrate the asymptotic properties in Section 2.5. The data sets are generated from the model

$$y_{ij} = \beta_0 + x_{ij1}\beta_1 + x_{ij2}\beta_2 + e_{ij}, \quad (i = 1, \dots, n;\ j = 1, \dots, m_i),$$

$$\phi_{ijk} = \gamma_0 + w_{ijk1}\gamma_1 + w_{ijk2}\gamma_2, \qquad \log(\sigma_{ij}^2) = \lambda_0 + z_{ij1}\lambda_1 + z_{ij2}\lambda_2, \tag{15}$$

where $m_i - 1 \sim \mathrm{Binomial}(6, 0.8)$, and the measurement times $t_{ij}$ are then generated from the uniform(0,1) distribution. This setting results in different numbers of repeated measurements $m_i$ across subjects. The covariate $x_{ij} = (x_{ij1}, x_{ij2})^T$ is generated from a multivariate normal distribution with mean zero, marginal variance 1 and correlation 0.5. We take $z_{ij} = x_{ij}$ and $w_{ijk} = \{1, t_{ij} - t_{ik}, (t_{ij} - t_{ik})^2\}^T$, following the setup in Leng et al. (2010). We generate 1000 data sets and consider sample sizes $n = 50$, 100, and 200.
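A short sketch (ours, not the authors' code) of generating one replicate from (15), using the true parameter values listed in Table 3; sorting the measurement times so that lags are non-negative is our own convention.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
beta = np.array([1.0, -0.5, 0.5])    # true values as in Table 3
gamma = np.array([0.3, -0.2, 0.3])
lam = np.array([-0.5, 0.5, -0.3])

data = []
for i in range(n):
    m = rng.binomial(6, 0.8) + 1                        # m_i - 1 ~ Binomial(6, 0.8)
    t = np.sort(rng.uniform(size=m))                    # measurement times in (0, 1)
    x = rng.multivariate_normal(np.zeros(2), [[1.0, 0.5], [0.5, 1.0]], size=m)
    X = np.column_stack([np.ones(m), x])                # (1, x_ij1, x_ij2)
    D = np.diag(np.exp(0.5 * (X @ lam)))                # z_ij = x_ij (with intercept)
    T = np.zeros((m, m))
    T[0, 0] = 1.0
    for j in range(1, m):
        prod = 1.0
        for k in range(j):
            lag = t[j] - t[k]                           # w_ijk = (1, lag, lag^2)
            phi = gamma[0] + gamma[1] * lag + gamma[2] * lag ** 2
            T[j, k] = np.cos(phi) * prod
            prod *= np.sin(phi)
        T[j, j] = prod
    Sigma = D @ (T @ T.T) @ D                           # Sigma_i = D_i R_i D_i
    y = X @ beta + rng.multivariate_normal(np.zeros(m), Sigma)
    data.append((y, t, X))
```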

Table 3 shows the accuracy of the estimated parameters in terms of their mean absolute biases (MAB) and standard deviations. All the biases are small, especially when $n$ is large. Additionally, to evaluate the inference procedure, we compare the sample standard deviation (SD) of the 1,000 parameter estimates with the sample average of the 1,000 standard errors (SE) computed from formula (13). The standard deviation (Std) of the 1,000 standard errors is also reported. Table 3 shows that the SD and SE are quite close, especially for large $n$. This indicates that the standard error formula works well and demonstrates the validity of Theorem 1. We note the relatively higher variability in estimating $\lambda_0$, the baseline level of the variance function, reflecting the fact that the variance function is generally harder to estimate in practice (Leng and Tang, 2011).

Table 3: Simulation results for Study 1 (all results multiplied by a factor of 10^3). For each sample size, MAB is the mean absolute bias, SE is the average standard error with the standard deviation (Std) of the standard errors in parentheses, and SD is the sample standard deviation of the estimates.

           n = 50                        n = 100                       n = 200
     True  MAB    SE (Std)       SD      MAB    SE (Std)      SD       MAB    SE (Std)      SD
β0    1.0    7.13   7.83 (2.34)    9.22    4.49   5.13 (1.12)   5.66    2.86   3.50 (0.56)   3.69
β1   -0.5    1.67   1.84 (0.58)    2.18    1.05   1.20 (0.28)   1.32    0.67   0.81 (0.14)   0.86
β2    0.5    1.01   1.13 (0.35)    1.32    0.64   0.73 (0.17)   0.81    0.41   0.50 (0.09)   0.53
γ0    0.3    6.98   7.31 (0.69)    8.10    4.44   5.15 (0.34)   5.38    3.05   3.64 (0.16)   3.80
γ1   -0.2    8.49   8.48 (1.53)   10.44    5.29   5.89 (0.72)   6.59    3.59   4.14 (0.34)   4.50
γ2    0.3    9.46   9.38 (1.37)   11.14    5.83   6.57 (0.64)   7.17    3.92   4.63 (0.30)   4.92
λ0   -0.5  110.42 131.47 (5.15)  139.54   74.01  92.55 (2.58)  92.92   51.19  65.36 (1.26)  65.29
λ1    0.5    0.74   0.65 (0.15)    0.96    0.43   0.46 (0.08)   0.56    0.28   0.32 (0.03)   0.36
λ2   -0.3    0.71   0.66 (0.15)    0.94    0.43   0.46 (0.07)   0.55    0.28   0.32 (0.03)   0.36

Study 2. In this study, we compare the proposed approach with the modified Cholesky decomposition (MCD) approach of Pourahmadi (1999) under different settings, for sample sizes $n = 50$ and 100. More specifically, we investigate three cases in which the data models are either correctly or incorrectly specified for our approach and for the MCD approach. We fit the data using polynomials of different orders and always use a correct mean model for both approaches, as the key difference between the two approaches is how the covariance is modeled.

Case I. We take a model similar to that in Study 1 to generate the data, with the covariate $z_{ij}$ taken as $z_{ij} = (1, t_{ij}, t_{ij}^2)^T$. In this case, the covariance model for the MCD approach is mis-specified.

Case II. We generate data from model (1), following the MCD decomposition. A model structure similar to Case I is implemented by changing $\phi_{ijk}$ in (15) to $p_{ijk}$ in (1), and the variance function in (15) is used for the variance of $\varepsilon_{ij}$ in (1). In this case, the variance function and correlation structure in our approach are mis-specified.

Case III. To compare the two methods when the models are mis-specified for both approaches, we take the same mean model as in Case I, with marginal variance $\sigma^2(t) = 0.5e^t$ and ARMA(1,1) correlation structure $\mathrm{corr}(\varepsilon_t, \varepsilon_s) = \gamma \rho^{|t-s|}$ for $t \neq s$. We consider $\gamma = 0.85$ and $\rho = 0.6$, corresponding to moderately correlated errors. In this case, both approaches use mis-specified covariance models, since this correlation structure does not exactly correspond to either decomposition; the best the two approaches can do is to capture some of the signal in this correlation with their respective model specifications.

Under each setting, polynomials of degrees $q$ and $d$ are used for the angles in the correlation model and the log-variances in our approach, and for the autoregressive coefficients and log innovations in the alternative MCD approach. The same polynomial orders are applied to both approaches to make the comparison fair.

To compare the two methods, we define the following error measures:
\[
\|\mu_d\| = \frac{1}{n}\sum_{i=1}^{n}\big\|x_i^{\mathrm T}(\hat{\beta} - \beta_0)\big\|, \qquad
\|\Sigma_d\| = \frac{1}{n}\sum_{i=1}^{n}\big\|\hat{\Sigma}_i - \Sigma_{0i}\big\|, \qquad
\mathrm{KL} = \frac{1}{n}\sum_{i=1}^{n}\mathrm{KL}_i(f_{i1} \mid f_{i0}),
\]
where $\mathrm{KL}_i$ is the Kullback--Leibler divergence between the fitted model $f_{i1} = N(\hat{\mu}_i, \hat{\Sigma}_i)$ and the true model $f_{i0} = N(\mu_{0i}, \Sigma_{0i})$ for the $i$th subject.
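For two multivariate normal distributions this divergence has the closed form $\mathrm{KL}(f_1\,\|\,f_0) = \frac{1}{2}\big[\mathrm{tr}(\Sigma_0^{-1}\Sigma_1) + (\mu_0-\mu_1)^{\mathrm T}\Sigma_0^{-1}(\mu_0-\mu_1) - m + \log(|\Sigma_0|/|\Sigma_1|)\big]$. A minimal numpy sketch (our own illustration, assuming the usual ordering $\mathrm{KL}(f_{i1}\,\|\,f_{i0})$):

```python
import numpy as np

def gauss_kl(mu1, Sigma1, mu0, Sigma0):
    """KL( N(mu1, Sigma1) || N(mu0, Sigma0) ) between two m-variate normals."""
    mu1, mu0 = np.asarray(mu1, float), np.asarray(mu0, float)
    m = mu1.size
    Sigma0_inv = np.linalg.inv(Sigma0)
    diff = mu0 - mu1
    _, logdet0 = np.linalg.slogdet(Sigma0)
    _, logdet1 = np.linalg.slogdet(Sigma1)
    return 0.5 * (np.trace(Sigma0_inv @ Sigma1)  # tr(Sigma0^{-1} Sigma1)
                  + diff @ Sigma0_inv @ diff     # Mahalanobis term
                  - m                            # dimension
                  + logdet0 - logdet1)           # log|Sigma0| - log|Sigma1|

# sanity check: the divergence of a distribution from itself is zero
I2 = np.eye(2)
print(np.isclose(gauss_kl([0, 0], I2, [0, 0], I2), 0.0))
```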

Table 4 reports the norms of the biases in the mean, the norms of the biases in the covariance, and the KL divergences under the different settings. In Case I, where the data are generated from our model, our approach performs substantially better than the alternative on all comparison criteria, even for the saturated model (q = 3, d = 3) and a mis-specified reduced model (q = 1, d = 2). The KL divergences of the alternative approach are poor owing to its lack of capability to capture the dynamics in this correlation structure. In Case II, where the data are generated from the alternative MCD decomposition in (1), our approach still works reasonably well: its error measures are only slightly inflated compared to the alternative approach, which fully exploits the model information. In Case III, where the data model is mis-specified for both approaches, ours performs very promisingly. It is clear from Table 4 that our approach


substantially outperforms the alternative MCD approach in all measures. Though the KL divergences now take slightly larger values than those in Case I, they are much smaller than those of the alternative approach in all scenarios, indicating a better fit to the truth. The simulations, together with our data examples, clearly demonstrate that the proposed approach is more adaptive and flexible in capturing the dynamics in the correlations of longitudinal data, even when the model is mis-specified.

Table 4: Comparison between the proposed method and Pourahmadi's modified Cholesky decomposition model (MCD); standard deviations are given in parentheses.

                          Proposed                                  MCD
(q, d)       ‖µd‖         ‖Σd‖        KL            ‖µd‖        ‖Σd‖        KL

Case I, n = 50
(2,2)    0.05(0.04)   0.43(0.35)   0.16(0.11)    0.24(0.17)  2.26(0.23)  10.17(0.58)
(1,2)    0.23(0.18)   1.22(0.39)   2.88(0.46)    0.24(0.17)  2.26(0.23)  10.14(0.57)
(3,3)    0.05(0.04)   0.44(0.36)   0.25(0.17)    0.25(0.17)  2.26(0.23)  10.22(0.60)

Case I, n = 100
(2,2)    0.03(0.02)   0.29(0.23)   0.06(0.03)    0.18(0.12)  2.24(0.16)  10.04(0.39)
(1,2)    0.16(0.12)   1.27(0.26)   2.69(0.23)    0.18(0.12)  2.25(0.16)  10.04(0.39)
(3,3)    0.03(0.02)   0.30(0.23)   0.08(0.05)    0.18(0.12)  2.25(0.16)  10.05(0.39)

Case II, n = 50
(2,2)    0.33(0.15)   1.67(0.36)   0.18(0.07)    0.32(0.15)  1.33(0.54)   0.12(0.06)
(1,2)    0.33(0.15)   1.65(0.36)   0.16(0.06)    0.32(0.15)  1.24(0.54)   0.10(0.06)
(3,3)    0.33(0.15)   1.75(0.37)   0.23(0.21)    0.32(0.16)  1.49(0.53)   0.15(0.08)

Case II, n = 100
(2,2)    0.23(0.11)   1.45(0.21)   0.11(0.03)    0.23(0.11)  0.92(0.36)   0.06(0.03)
(1,2)    0.23(0.11)   1.44(0.21)   0.11(0.03)    0.23(0.11)  0.87(0.36)   0.05(0.02)
(3,3)    0.23(0.11)   1.49(0.21)   0.13(0.06)    0.23(0.11)  1.03(0.35)   0.07(0.03)

Case III, n = 50
(1,1)    0.21(0.12)   0.93(0.30)   0.26(0.05)    0.23(0.13)  1.61(0.19)   0.69(0.07)
(1,2)    0.21(0.12)   0.94(0.30)   0.28(0.06)    0.23(0.13)  1.62(0.19)   0.70(0.08)
(3,3)    0.22(0.12)   0.96(0.29)   0.33(0.10)    0.23(0.13)  1.65(0.18)   0.74(0.10)

Case III, n = 100
(1,1)    0.15(0.09)   0.86(0.22)   0.21(0.02)    0.16(0.09)  1.57(0.13)   0.64(0.04)
(1,2)    0.15(0.09)   0.87(0.22)   0.22(0.03)    0.16(0.09)  1.58(0.13)   0.64(0.04)
(3,3)    0.15(0.09)   0.88(0.22)   0.24(0.03)    0.17(0.10)  1.59(0.13)   0.66(0.05)

4 Discussion

We have proposed a novel joint approach for modeling the mean, the variance, and the correlation in longitudinal data analysis. Our approach permits an unconstrained parametrization, fast computation, and easy interpretation of the parameters. Unlike previous approaches, it directly targets the correlations and variances and, to the best of our knowledge, provides the most general form of the covariance structure.

Our decomposition opens many new avenues for future research. With unconstrained structures, we can model the mean, the variance, and the correlations nonparametrically and semiparametrically; along this line of research, Wang et al. (2005), Fan et al. (2007), Fan and Wu (2008), and Yao and Li (2013), for example, have developed nonparametric methods. The likelihood-based estimation procedure permits regularization-based model selection, and it would be interesting to study this further (Fan and Li, 2004; Bickel and Li, 2006). It may also be interesting to develop Bayesian inference procedures by eliciting appropriate priors. Finally, we have assumed that the longitudinal data follow a multivariate normal distribution; it is worthwhile to develop methods that are robust to this assumption.

References

Bickel, P. and Li, B. (2006). Regularization in statistics (with discussion). Test, 15:271–344.

Chen, Z. and Dunson, D. (2003). Random effects selection in linear mixed models. Biometrics, 59:762–769.

Chiu, T. Y. M., Leonard, T., and Tsui, K. W. (1996). The matrix-logarithmic covariance model. Journal of the American Statistical Association, 91:198–210.

Creal, D., Koopman, S. J., and Lucas, A. (2011). A dynamic multivariate heavy-tailed model for time-varying volatilities and correlations. Journal of Business and Economic Statistics, 29:552–563.

Daniels, M. J. and Pourahmadi, M. (2009). Modeling covariance matrices via partial autocorrelations. Journal of Multivariate Analysis, 100:2352–2363.

Diggle, P. J., Heagerty, P., Liang, K. Y., and Zeger, S. L. (2002). Analysis of Longitudinal Data. Oxford University Press, 2nd edition.

Fan, J., Huang, T., and Li, R. (2007). Analysis of longitudinal data with semiparametric estimation of covariance function. Journal of the American Statistical Association, 102:632–640.

Fan, J. and Li, R. (2004). New estimation and model selection procedures for semiparametric modeling in longitudinal data analysis. Journal of the American Statistical Association, 99:710–723.

Fan, J. and Wu, Y. (2008). Semiparametric estimation of covariance matrices for longitudinal data. Journal of the American Statistical Association, 103:1520–1533.

Kenward, M. G. (1987). A method for comparing profiles of repeated measurements. Applied Statistics, 36:296–308.

Leng, C. and Tang, C. Y. (2011). Improving variance function estimation in semiparametric longitudinal data analysis. The Canadian Journal of Statistics, 39:656–670.

Leng, C., Zhang, W., and Pan, J. (2010). Semiparametric mean-covariance regression analysis for longitudinal data. Journal of the American Statistical Association, 105:181–193.

Liang, K. Y. and Zeger, S. L. (1986). Longitudinal data analysis using generalized linear models. Biometrika, 73:13–22.

Lin, X. and Carroll, R. J. (2006). Semiparametric estimation in general repeated measures problems. Journal of the Royal Statistical Society, Series B, 68:69–88.

Pan, J. and Mackenzie, G. (2003). Model selection for joint mean-covariance structures in longitudinal studies. Biometrika, 90:239–244.

Pourahmadi, M. (1999). Joint mean-covariance models with applications to longitudinal data: unconstrained parameterisation. Biometrika, 86:677–690.

Pourahmadi, M. (2000). Maximum likelihood estimation of generalised linear models for multivariate normal covariance matrix. Biometrika, 87:425–435.

Pourahmadi, M. (2007). Cholesky decompositions and estimation of a covariance matrix: orthogonality of variance-correlation parameters. Biometrika, 94:1006–1013.

Qu, A., Lindsay, B. G., and Li, B. (2000). Improving generalised estimating equations using quadratic inference functions. Biometrika, 87:823–836.

Rapisarda, F., Brigo, D., and Mercurio, F. (2007). Parameterizing correlations: a geometric interpretation. IMA Journal of Management Mathematics, 18:55–73.

Shao, J. (1997). An asymptotic theory for linear model selection. Statistica Sinica, 7:221–264.

Wang, N. (2003). Marginal nonparametric kernel regression accounting for within-subject correlation. Biometrika, 90:43–52.

Wang, N., Carroll, R. J., and Lin, X. (2005). Efficient semiparametric marginal estimation for longitudinal/clustered data. Journal of the American Statistical Association, 100:147–157.

White, H. (1982). Maximum likelihood estimation of misspecified models. Econometrica, 50:1–25.

Yao, W. and Li, R. (2013). New local estimation procedure for a non-parametric regression function for longitudinal data. Journal of the Royal Statistical Society, Series B, 75:123–138.

Ye, H. and Pan, J. (2006). Modelling covariance structures in generalized estimating equations for longitudinal data. Biometrika, 93:927–941.

Zeger, S. L. and Diggle, P. J. (1994). Semiparametric models for longitudinal data with application to CD4 cell numbers in HIV seroconverters. Biometrics, 50:689–699.

Zhang, W. and Leng, C. (2012). A moving average Cholesky factor model in covariance modelling for longitudinal data. Biometrika, 99:141–150.


Supplementary Material to “A Joint Modeling Approach for

Longitudinal Studies”

Weiping Zhang, Chenlei Leng, and Cheng Yong Tang

This Supplementary Material contains more detail on the geometric interpretation of the angles in the new parametrization (2) and (4), and technical proofs of the asymptotic properties of the estimation procedure.

1 More detail on the geometric interpretation

1.1 Geometric interpretation of the new parametrization

Now we describe a geometric view of a correlation matrix and the interpretation of the parametrization in (2) and (4). For clarity and simplicity, we consider a general $m$-dimensional correlation matrix $R = (\rho_{jk})_{j,k=1}^{m}$. Let $T$ be the corresponding lower triangular matrix defined in (2) such that $R = TT^{\mathrm T}$, and write $T^{\mathrm T} = (t_1, \dots, t_m)$, where $t_i$ $(i = 1, \dots, m)$ are the column vectors of $T^{\mathrm T}$. Therefore, we have $\rho_{jk} = \langle t_j, t_k\rangle$ $(j, k = 1, \dots, m)$, where $\langle\cdot,\cdot\rangle$ denotes the inner product of two vectors. Observing that the $t_i$ are all unit vectors, the correlation $\rho_{jk}$ is in fact the cosine of the angle between the vectors $t_j$ and $t_k$. This suggests a geometric representation of an $m$-dimensional correlation matrix by $m$ vectors $t_1, \dots, t_m$ in an $m$-dimensional space with pairwise angles $\arccos(\rho_{jk})$. Indeed, such a geometric representation exists for any correlation matrix $R$ and is unique upon restricting the range of the angles to $[0, \pi)$ (Rapisarda et al., 2007).
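To make the parametrization concrete, the following sketch (our own illustration, assuming the standard hyperspherical form $T_{j1} = \cos\phi_{j1}$, $T_{jk} = \cos\phi_{jk}\prod_{l<k}\sin\phi_{jl}$ for $1 < k < j$, and $T_{jj} = \prod_{l<j}\sin\phi_{jl}$; cf. (2)–(4)) builds $T$ from unconstrained angles and confirms that $R = TT^{\mathrm T}$ is a valid correlation matrix:

```python
import numpy as np

def lower_T(phi):
    """Lower-triangular T from angles phi[j, k] (used for k < j), assuming
    the hyperspherical form above; the rows of T are the vectors t_j."""
    m = phi.shape[0]
    T = np.zeros((m, m))
    T[0, 0] = 1.0
    for j in range(1, m):
        s = 1.0                              # running product of sines
        for k in range(j):
            T[j, k] = np.cos(phi[j, k]) * s  # cos(phi_jk) * prod_{l<k} sin(phi_jl)
            s *= np.sin(phi[j, k])
        T[j, j] = s                          # prod_{l<j} sin(phi_jl)
    return T

rng = np.random.default_rng(0)
phi = rng.uniform(0.1, np.pi - 0.1, size=(4, 4))  # unconstrained angles in (0, pi)
R = lower_T(phi) @ lower_T(phi).T
print(np.allclose(np.diag(R), 1.0))               # unit diagonal
print(np.linalg.eigvalsh(R).min() > 0)            # positive definite
```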

The geometric representation of a correlation matrix provides a natural way of interpreting the angles $\phi_{jk}$ $(1 \le k < j \le m)$ in (3) and (4) with respect to the correlations among longitudinal measurements. Let $e_j = (\underbrace{0, \dots, 0}_{j-1}, 1, \underbrace{0, \dots, 0}_{m-j})^{\mathrm T}$ $(j = 1, \dots, m)$ be the $j$th canonical basis vector of $\mathbb{R}^m$. Then it is clear from (3) that $t_1 = e_1$ and $\langle t_j, e_1\rangle = \cos(\phi_{j1}) = \rho_{j1}$ $(j = 2, \dots, m)$. This implies that $\phi_{j1}$ is simply the angle


between $t_j$ and $t_1$, which reflects the correlation between the first measurement and the $j$th one. Further, let $P_k = \mathrm{diag}(\underbrace{0, \dots, 0}_{k-1}, \underbrace{1, \dots, 1}_{m-k+1})$ be the matrix such that $P_k t$ is the projection of the vector $t$ onto the subspace spanned by $\{e_k, \dots, e_m\}$, for $k = 1, \dots, m$. For example, $P_2 t_3 = (0, \cos(\phi_{32})\sin(\phi_{31}), \sin(\phi_{32})\sin(\phi_{31}), 0, \dots, 0)^{\mathrm T}$, and therefore $\|P_2 t_3\| = \sqrt{\langle P_2 t_3, P_2 t_3\rangle} = \sin(\phi_{31})$. Hence, in this case, $\langle P_2 t_3, e_2\rangle / \|P_2 t_3\| = \cos(\phi_{32})$, implying that $\phi_{32}$ is the angle between $P_2 t_3$ and $e_2$. More generally, it can be shown analogously that $\phi_{jk}$ is the angle between $P_k t_j$ and $e_k$ $(1 \le k < j \le m)$. On the other hand, since each $t_j$ admits the orthogonal decomposition $t_j = \sum_{i=1}^{m}\langle t_j, e_i\rangle e_i$, the term $P_k t_j = t_j - \sum_{i=1}^{k-1}\langle t_j, e_i\rangle e_i$ represents the components remaining in $t_j$ after excluding the first $k-1$ coordinates of $\mathbb{R}^m$. Noting again the equivalence between a correlation and the cosine of an angle, we see that $\phi_{jk}$ is the angle between the basis vector $e_k$ and the component remaining in $t_j$ after eliminating the contributions of the first $k-1$ coordinates.

Recalling the correspondence between the vectors $t_1, \dots, t_m$ in the geometric representation and the correlation matrix of $m$ longitudinal measurements, we summarize the practical interpretations of the angles in our parametrization as follows. As seen from the hierarchical connection (6), for the $j$th measurement $(j = 2, \dots, m)$, its correlations with the preceding $j-1$ measurements are reflected by the angles $\phi_{j1}, \dots, \phi_{j(j-1)}$. Specifically, $\cos(\phi_{j1})$ is its correlation with the first measurement. Once the correlations between the $j$th measurement and the first $k-1$ measurements $(2 \le k < j)$ are specified, the additional contribution from $\phi_{jk}$ properly reflects the correlation between the $j$th and $k$th measurements through the connection (6).

Another viewpoint on our parametrization is that a series of rotations of $t_1$ constructs a geometric representation of the correlations among the first three measurements. In particular, we first rotate $t_1$ anti-clockwise by angle $\phi_{21}$ in the space spanned by $e_1$ and $e_2$ to obtain $t_2$. If we rotate $t_1$ anti-clockwise by angle $\phi_{31}$ in the same space, we obtain an intermediate vector $w_3$ in Figure 1. If we further rotate


$w_3$ anti-clockwise by angle $\phi_{32}$ in the space parallel to the subspace spanned by $e_2$ and $e_3$, we obtain $t_3$. Because the rotation of $w_3$ is taken in a subspace orthogonal to $t_1$ and $t_2$, the new contribution to $t_3$ from the additional angle $\phi_{32}$ is orthogonal to $t_1$ and $t_2$, which represent the prior measurements. This process involves a series of Givens rotations and has been carefully studied by Rapisarda et al. (2007) in the context of understanding a correlation matrix $R$ in an $m$-dimensional space. A brief description of this constructive interpretation is provided below.

1.2 Givens rotation for interpreting the angles in (2) and (4)

The connection between the correlations and the angles can be made through a constructive view of the vectors in the columns of $T^{\mathrm T}$. The following shows how to construct $t_1, \dots, t_m$ sequentially so that the cosines of their pairwise angles are the correlations between the corresponding measurements. To this end, define the $m$-dimensional Givens rotation matrix

\[
G(i, j; \phi) =
\begin{pmatrix}
1 & \cdots & 0 & \cdots & 0 & \cdots & 0\\
\vdots & \ddots & \vdots & & \vdots & & \vdots\\
0 & \cdots & \cos(\phi) & \cdots & -\sin(\phi) & \cdots & 0\\
\vdots & & \vdots & \ddots & \vdots & & \vdots\\
0 & \cdots & \sin(\phi) & \cdots & \cos(\phi) & \cdots & 0\\
\vdots & & \vdots & & \vdots & \ddots & \vdots\\
0 & \cdots & 0 & \cdots & 0 & \cdots & 1
\end{pmatrix},
\]

where $G(i, j; \phi)$ differs from the $m$-dimensional identity matrix only in its $(i, i)$, $(i, j)$, $(j, i)$, and $(j, j)$ elements. For any vector $t \in \mathbb{R}^m$, $\|G(i, j; \phi)t\| = \|t\|$; that is, the rotated vector and the original vector have the same length. Geometrically, $G(i, j; \phi)t$ rotates the vector $t$ anti-clockwise by angle $\phi$ in the subspace spanned by $e_i$ and $e_j$, while keeping all other components of $t$ fixed. This property of the Givens rotation makes it an ideal device for constructing the $m$ vectors $t_1, \dots, t_m$ representing a correlation matrix $R = (\rho_{ij})_{i,j=1}^{m}$. Following our earlier discussion in Subsection 2.3, there


is a one-to-one correspondence between $R$ and the angles $\phi_{ji}$ $(1 \le i < j \le m)$.
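As a quick numerical illustration (our own sketch, 0-indexed rather than the 1-indexing of the text), the following verifies the length-preserving property of a Givens rotation:

```python
import numpy as np

def givens(m, i, j, phi):
    """m-dimensional Givens rotation G(i, j; phi), 0-indexed: rotates
    anti-clockwise by phi in the plane spanned by e_i and e_j."""
    G = np.eye(m)
    G[i, i] = G[j, j] = np.cos(phi)
    G[i, j], G[j, i] = -np.sin(phi), np.sin(phi)
    return G

t = np.random.default_rng(1).normal(size=5)
Gt = givens(5, 1, 2, 0.7) @ t
print(np.isclose(np.linalg.norm(Gt), np.linalg.norm(t)))   # length preserved
# components outside the rotated plane are untouched
print(np.allclose(np.delete(Gt, [1, 2]), np.delete(t, [1, 2])))
```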

The procedure for constructing the vectors from the angles is as follows. First, $t_1$ is set to $e_1$, and $t_2$ is specified by rotating $e_1$ anti-clockwise in the subspace of $e_1$ and $e_2$ by angle $\phi_{21}$ such that $\cos(\phi_{21}) = \rho_{12}$. With the Givens rotation matrix, we can write $t_2 = G(1, 2; \phi_{21})e_1$, which is exactly the second column of $T^{\mathrm T}$. In general, the vector $t_j$, given $t_1, \dots, t_{j-1}$, can be constructed by a total of $j-1$ anti-clockwise rotations of $e_1$. The first rotation needs to satisfy $\langle G(1, 2; \phi_{j1})e_1, t_1\rangle = \rho_{j1}$, which is achieved easily by choosing the angle $\phi_{j1}$ appropriately. The second rotation needs to satisfy $\langle G(2, 3; \phi_{j2})G(1, 2; \phi_{j1})e_1, t_2\rangle = \rho_{j2}$. Because a Givens rotation only affects components in a two-dimensional subspace, we have $\langle G(2, 3; \phi_{j2})G(1, 2; \phi_{j1})e_1, t_1\rangle = \rho_{j1}$ for any $\phi_{j2}$. Therefore, $\phi_{j2}$ can be identified given the $\phi_{j1}$ specified in the first rotation. More generally, for the $k$th $(k < j)$ rotation,
\[
\Big\langle \prod_{i=1}^{k} G(i, i+1; \phi_{ji})\, e_1, \; t_k \Big\rangle = \rho_{jk}
\]
is satisfied by appropriately choosing the angle $\phi_{jk}$ given $\phi_{j1}, \dots, \phi_{j(k-1)}$. Performing the rotations $j-1$ times following the above specification identifies the vector $t_j$ as
\[
t_j = \prod_{i=1}^{j-1} G(i, i+1; \phi_{ji})\, e_1.
\]
The construction by Givens rotations ensures that $\langle t_j, t_k\rangle = \rho_{jk}$ for all $k = 1, \dots, j-1$. It can also be verified that $t_j$ is exactly the $j$th column of $T^{\mathrm T}$. By sequentially applying the above procedure for $j = 2, \dots, m$, the vectors $t_1, \dots, t_m$ can be obtained so that they form a geometric representation of the desired correlation matrix $R$.
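The equivalence between this rotation-based construction and the hyperspherical coordinates can be checked numerically; the following self-contained sketch (our own illustration, 0-indexed) compares the two for a single row of angles:

```python
import numpy as np

def givens(m, i, j, phi):
    G = np.eye(m)
    G[i, i] = G[j, j] = np.cos(phi)
    G[i, j], G[j, i] = -np.sin(phi), np.sin(phi)
    return G

def t_by_rotations(angles, m):
    """t_j = G(j-1, j; phi_{j,j-1}) ... G(1, 2; phi_{j1}) e_1, applied in order."""
    t = np.zeros(m)
    t[0] = 1.0                           # start from e_1
    for i, phi in enumerate(angles):     # i-th rotation acts in plane (e_{i+1}, e_{i+2})
        t = givens(m, i, i + 1, phi) @ t
    return t

def t_hyperspherical(angles, m):
    """The same vector from the closed-form hyperspherical coordinates."""
    t = np.zeros(m)
    s = 1.0
    for k, phi in enumerate(angles):
        t[k] = np.cos(phi) * s           # cos(phi_jk) * prod of earlier sines
        s *= np.sin(phi)
    t[len(angles)] = s                   # final product of sines
    return t

angles = [0.9, 0.4, 1.2]  # e.g., (phi_41, phi_42, phi_43) in a 4-dimensional space
print(np.allclose(t_by_rotations(angles, 4), t_hyperspherical(angles, 4)))  # True
```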

2 Technical proofs


The score vector and the expectation of the Hessian

The computations of $U_1(\beta; \gamma, \lambda)$ and $I_{11}$ are trivial. Since $\Sigma_i$ depends only on $\gamma$ and $\lambda$, it is easy to see that
\[
I_{12}(\theta) = -E\Big(\frac{\partial^2 l}{\partial\beta\,\partial\gamma^{\mathrm T}}\Big)
= -E\Big[\sum_{i=1}^{n} X_i^{\mathrm T}\Delta_i \frac{\partial \Sigma_i^{-1}}{\partial\gamma^{\mathrm T}}\{y_i - \mu(X_i\beta)\}\Big] = 0.
\]

Similarly, $I_{13}(\theta) = 0$. With $R_i = T_i T_i^{\mathrm T}$, we have
\[
-2l(\theta) = \sum_{i=1}^{n}\sum_{j=1}^{m_i}\big(\log \sigma_{ij}^2 + \log(T_{ijj}^2) + \varepsilon_{ij}^2\big).
\]
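This decomposition of $-2l(\theta)$ can be checked numerically: with $\Sigma_i = D_i R_i D_i$, $R_i = T_i T_i^{\mathrm T}$, and $\varepsilon_i = T_i^{-1} D_i^{-1} r_i$, the sum above equals $\log|\Sigma_i| + r_i^{\mathrm T}\Sigma_i^{-1} r_i$ for each subject. A minimal sketch (our own illustration; a Cholesky factor of a random correlation matrix stands in for the model-based $T_i$):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4
A = rng.normal(size=(m, m))
R = A @ A.T                                   # positive-definite matrix ...
d = np.sqrt(np.diag(R))
R = R / np.outer(d, d)                        # ... rescaled to a correlation matrix
T = np.linalg.cholesky(R)                     # lower triangular with R = T T'
sigma = rng.uniform(0.5, 2.0, m)              # marginal standard deviations
Sigma = np.diag(sigma) @ R @ np.diag(sigma)   # Sigma = D R D
r = rng.normal(size=m)                        # residual y - mu

eps = np.linalg.solve(T, r / sigma)           # solves T eps = D^{-1} r

lhs = np.sum(np.log(sigma**2) + np.log(np.diag(T)**2) + eps**2)
_, logdet = np.linalg.slogdet(Sigma)
rhs = logdet + r @ np.linalg.solve(Sigma, r)  # log|Sigma| + r' Sigma^{-1} r
print(np.isclose(lhs, rhs))                   # True
```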

Thus, the derivative of $l(\theta)$ with respect to $\gamma$ can be expressed as
\[
U_2(\gamma; \beta, \lambda) = -\sum_{i=1}^{n}\sum_{j=1}^{m_i}\Big(\frac{\partial \log T_{ijj}}{\partial\gamma} + \frac{\partial \varepsilon_{ij}}{\partial\gamma}\,\varepsilon_{ij}\Big). \tag{A.1}
\]

Since $T_i\varepsilon_i = D_i^{-1} r_i$, it is easy to see that $\varepsilon_{ij} = \frac{1}{T_{ijj}}\big(r_{ij}/\sigma_{ij} - \sum_{k=1}^{j-1} T_{ijk}\varepsilon_{ik}\big)$. Therefore, we have
\[
\sum_{k=1}^{j} T_{ijk}\frac{\partial \varepsilon_{ik}}{\partial\gamma} = -\sum_{k=1}^{j} \varepsilon_{ik}\frac{\partial T_{ijk}}{\partial\gamma}, \qquad j = 1, \dots, m_i,
\]
or, equivalently, in matrix form, $\partial\varepsilon_i/\partial\gamma = -(\varepsilon_i^{\mathrm T}\otimes I_q)\,\frac{\partial T_i^{\mathrm T}}{\partial\gamma}\, T_i^{-\mathrm T}$. We then have

\[
\frac{\partial \varepsilon_{ij}}{\partial\gamma} = -\frac{\partial \log T_{ijj}}{\partial\gamma}\,\varepsilon_{ij} - \sum_{k=1}^{j-1} b_{ijk}\,\varepsilon_{ik}, \tag{A.2}
\]
where $b_{ijk} = \sum_{l=k}^{j}\frac{\partial T_{ilk}}{\partial\gamma}\, a_{ijl}$, with $a_{ijl}$ being the $(j, l)$th element of $T_i^{-1}$. Combining (A.1) and (A.2) gives the second estimating equation in (10).

From (A.1) and (A.2), it is easy to see that
\[
I_{22}(\theta) = -E\Big(\frac{\partial^2 l}{\partial\gamma\,\partial\gamma^{\mathrm T}}\Big)
= \sum_{i=1}^{n}\sum_{j=1}^{m_i} E\Big(\frac{\partial^2 \log T_{ijj}}{\partial\gamma\,\partial\gamma^{\mathrm T}} + \frac{\partial^2 \varepsilon_{ij}}{\partial\gamma\,\partial\gamma^{\mathrm T}}\,\varepsilon_{ij} + \frac{\partial \varepsilon_{ij}}{\partial\gamma}\frac{\partial \varepsilon_{ij}}{\partial\gamma^{\mathrm T}}\Big)
= \sum_{i=1}^{n}\sum_{j=1}^{m_i}\Big(2\,\frac{\partial \log T_{ijj}}{\partial\gamma}\frac{\partial \log T_{ijj}}{\partial\gamma^{\mathrm T}} + \sum_{k=1}^{j-1} b_{ijk}\, b_{ijk}^{\mathrm T}\Big). \tag{A.3}
\]


Similarly, we have
\[
U_3(\lambda; \beta, \gamma) = -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{m_i}\Big(z_{ij} + 2\,\frac{\partial \varepsilon_{ij}}{\partial\lambda}\,\varepsilon_{ij}\Big)
= \frac{1}{2}\sum_{i=1}^{n}\Big(-\sum_{j=1}^{m_i} z_{ij} + \sum_{k=1}^{m_i} z_{ik}\sum_{j=k}^{m_i}\frac{a_{ijk}\, r_{ik}}{\sigma_{ik}}\sum_{l=1}^{j}\frac{a_{ijl}\, r_{il}}{\sigma_{il}}\Big)
= \frac{1}{2}\sum_{i=1}^{n} Z_i^{\mathrm T}(h_i - 1_{m_i}), \tag{A.4}
\]
where the $m_i\times 1$ vector $h_i = \mathrm{diag}\{R_i^{-1} D_i^{-1} r_i r_i^{\mathrm T} D_i^{-1}\}$ and
\[
\frac{\partial \varepsilon_{ij}}{\partial\lambda} = -\frac{1}{2}\sum_{k=1}^{j}\frac{r_{ik}}{\sigma_{ik}}\, a_{ijk}\, z_{ik}, \tag{A.5}
\]
or, in matrix form, $\partial\varepsilon_i/\partial\lambda = -\frac{1}{2} Z_i^{\mathrm T}\,\mathrm{diag}(D_i^{-1} r_i)\, T_i^{-\mathrm T}$.

From (A.1), (A.2), and (A.5), and the fact that $E(\varepsilon_i r_i^{\mathrm T}) = T_i^{\mathrm T} D_i$, i.e.,
\[
E(\varepsilon_{ij} r_{ik}) =
\begin{cases}
0, & k < j,\\
\sigma_{ik} T_{ikj}, & k \ge j,
\end{cases}
\qquad j, k = 1, \dots, m_i,
\]
we have
\[
I_{23}(\theta) = -E\Big(\frac{\partial^2 l}{\partial\gamma\,\partial\lambda^{\mathrm T}}\Big)
= -\sum_{i=1}^{n}\sum_{j=1}^{m_i} E\Big[\frac{\partial \log T_{ijj}}{\partial\gamma}\frac{\partial \varepsilon_{ij}^2}{\partial\lambda^{\mathrm T}} + \sum_{k=1}^{j-1} b_{ijk}\Big(\varepsilon_{ik}\frac{\partial \varepsilon_{ij}}{\partial\lambda^{\mathrm T}} + \frac{\partial \varepsilon_{ik}}{\partial\lambda^{\mathrm T}}\,\varepsilon_{ij}\Big)\Big]
= \sum_{i=1}^{n}\sum_{j=1}^{m_i}\Big[\frac{\partial \log T_{ijj}}{\partial\gamma}\, z_{ij}^{\mathrm T} + \frac{1}{2}\sum_{k=1}^{j-1} b_{ijk}\sum_{l=k}^{j} T_{ilk}\, a_{ijl}\, z_{il}^{\mathrm T}\Big]. \tag{A.6}
\]

Finally, from (A.4) we have
\[
I_{33}(\theta) = -E\Big(\frac{\partial^2 l}{\partial\lambda\,\partial\lambda^{\mathrm T}}\Big)
= \sum_{i=1}^{n}\sum_{j=1}^{m_i} E\Big(\frac{\partial^2 \varepsilon_{ij}}{\partial\lambda\,\partial\lambda^{\mathrm T}}\,\varepsilon_{ij} + \frac{\partial \varepsilon_{ij}}{\partial\lambda}\frac{\partial \varepsilon_{ij}}{\partial\lambda^{\mathrm T}}\Big)
= \frac{1}{4}\sum_{i=1}^{n} Z_i^{\mathrm T}\big[I_{m_i} + R_i^{-1}\circ R_i\big] Z_i, \tag{A.7}
\]
where $A\circ B$ denotes the Hadamard product of matrices $A$ and $B$.

Proof of Theorem 1. The proof is essentially the same as those of Theorem 1 in Pourahmadi (2000) and Theorems 1 and 2 in Chiu et al. (1996).


(a) Let $\theta = (\beta^{\mathrm T}, \gamma^{\mathrm T}, \lambda^{\mathrm T})^{\mathrm T}$ and $l_i = \log f_i(y_i, \theta)$ $(i = 1, \dots, n)$. Then, ignoring the constant $\frac{m_i}{2}\log(2\pi)$, we obtain
\[
l_i = -\frac{1}{2}\log(|\Sigma_i|) - \frac{1}{2}\{y_i - \mu(X_i\beta)\}^{\mathrm T}\Sigma_i^{-1}\{y_i - \mu(X_i\beta)\}.
\]

Thus the mean and the variance of $l_i$ when $\theta = \theta_0$ are, respectively,
\[
E_0(l_i) = -\frac{1}{2}\log(|\Sigma_i|) - \frac{1}{2}\mathrm{tr}(\Sigma_i^{-1}\Sigma_{0i}) - \frac{1}{2}\{\mu(X_i\beta) - \mu(X_i\beta_0)\}^{\mathrm T}\Sigma_i^{-1}\{\mu(X_i\beta) - \mu(X_i\beta_0)\},
\]
\[
\mathrm{var}_0(l_i) = \frac{1}{2}\Big[\mathrm{tr}\big\{(\Sigma_i^{-1}\Sigma_{0i})^2\big\} + 2\,\{\mu(X_i\beta) - \mu(X_i\beta_0)\}^{\mathrm T}\Sigma_i^{-1}\Sigma_{0i}\Sigma_i^{-1}\{\mu(X_i\beta) - \mu(X_i\beta_0)\}\Big],
\]
where $\Sigma_i = D_i R_i D_i^{\mathrm T}$ and $\Sigma_{0i} = D_{0i} R_{0i} D_{0i}^{\mathrm T}$.

It follows from the compactness of the parameter space and the boundedness of the covariates that $\mathrm{var}_0(l_i) \le K$ for all $i$, where $K$ is a constant. Therefore, by Kolmogorov's strong law of large numbers, we have
\[
\frac{1}{n}\sum_{i=1}^{n} l_i - \frac{1}{n}\sum_{i=1}^{n} E_0(l_i) \to 0 \quad \text{a.s.} \tag{A.8}
\]

Notice that the above constant $K$ is independent of $\theta$, and it can be shown that $\frac{1}{n}\sum_{i=1}^{n} E_0(l_i(\theta))$ is equicontinuous in $\theta$; then, following the proof of Theorem 1 in Chiu et al. (1996), the consistency of $\hat{\theta}$ follows easily.

The proof of (b) is essentially the same as that of Theorem 2 in Chiu et al. (1996).
