
• Applied Multivariate Analysis, Notes originally for the course of Lent 2004,

MPhil in Statistical Science, gradually updated

    P.M.E.Altham, Statistical Laboratory, University of Cambridge.

    September 23, 2013

  • Contents

1 Properties of the multivariate normal distribution

2 Estimation and Testing for the multivariate normal distribution
  2.1 Maximum Likelihood Estimation (mle)
  2.2 The distribution of the mles
  2.3 The multivariate analysis of variance
  2.4 Applications to Linear Discriminant Analysis

3 Principal components analysis

4 Cluster Analysis

5 Tree-based methods, ie decision trees/classification trees

6 Classical Multidimensional Scaling

7 Applied Multivariate Analysis Exercises


  • Preface

Note added 2013: these are essentially my original notes, but I have just done a little tidying up, and have included a couple of extra graphs for clarity. All of the statistical techniques described may be implemented in R: see http://www.statslab.cam.ac.uk/~pat/misc.pdf for examples. I have also appended the Exercises sheet at the end of this set of notes, for convenience. There are 6 Chapters in all, intended for a 16-hour course, of which about 8 hours should be practical classes: I used R or S-plus for these.


  • Chapter 1

Properties of the multivariate normal distribution

The multivariate normal distribution is the basis for many of the classical techniques in multivariate analysis. It has many beautiful properties. Here we mention only a few of these properties, with an eye to the statistical inference that will come in subsequent Chapters.

Definition and Notation. We write
$X \sim N_p(\mu, V)$
if the $p$-dimensional random vector $X$ has the pdf
$f(x|\mu, V) \propto \exp[-(x-\mu)^T V^{-1}(x-\mu)/2]$
for $x \in R^p$. The constant of proportionality is $1/\sqrt{(2\pi)^p |V|}$, and we use the notation $|V|$ for the determinant of the matrix $V$.
Then this pdf has ellipsoidal contours, ie
$f(x|\mu, V) = \text{constant}$
is the equation
$(x-\mu)^T V^{-1}(x-\mu) = \text{constant},$
which is an ellipse (for $p = 2$) or an ellipsoid (for $p > 2$) centred on the point $\mu$, with shape determined by the matrix $V$.
The characteristic function of $X$ is say $\phi(t) = E(\exp(it^T X))$, and you can check that
$\phi(t) = \exp(it^T\mu - t^T V t/2)$
(using the fact that $\int f(x|\mu, V)\,dx = 1$).
Furthermore, by differentiating $\phi(t)$ with respect to $t$ and setting $t = 0$, you can see that
$E(X) = \mu,$


similarly, differentiating again and setting $t = 0$ shows you that
$E(X-\mu)(X-\mu)^T = V.$
$V$ is the covariance matrix of $X$. By definition, $u^T V u \ge 0$ for any vector $u$, ie the matrix $V$ is positive semi-definite.
Here is one possible characterisation of the multivariate normal distribution: $X$ is multivariate normal if and only if for any fixed vector $a$, $a^T X$ is univariate normal.

Partitioning the normal vector $X$
Take $X_1$ as the first $p_1$ elements of $X$, and $X_2$ as the last $p_2$ elements, where $p = p_1 + p_2$. Assume as before that $X \sim N(\mu, V)$, and now suppose that $\mu^T = (\mu_1^T, \mu_2^T)$, with $V$ partitioned in a corresponding fashion,
$V = \begin{pmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{pmatrix};$
then, for $i = 1, 2$, $X_i \sim N(\mu_i, V_{ii})$, and
$\mathrm{cov}(X_1, X_2) = E(X_1 - \mu_1)(X_2 - \mu_2)^T = V_{12} = V_{21}^T$
so that $X_1, X_2$ are independent iff $V_{12}$ is a matrix with every element 0.

Linear transformation of a normal $X$
If $X \sim N(\mu, V)$ and $C$ is an $m \times p$ constant matrix, then $CX \sim N(C\mu, CVC^T)$.

Diagonalisation
Suppose $X \sim N(\mu, V)$ and $V$ is a positive-definite matrix, with eigen-values $\lambda_1, \ldots, \lambda_p$ say (which are then $> 0$, since $V$ is positive-definite). Let $u_1, \ldots, u_p$ be the corresponding eigen-vectors of $V$, thus
$V u_i = \lambda_i u_i, \text{ for } 1 \le i \le p,$
and
$u_i^T u_j = 0 \text{ for } i \ne j, \quad 1 \text{ for } i = j,$
ie the eigen-vectors are mutually orthogonal, and each is of length 1.
Define $U$ as the $p \times p$ matrix whose columns are $(u_1, \ldots, u_p)$. Then
$U^T X \sim N_p(U^T\mu, U^T V U).$
But
$u_j^T V u_i = \lambda_i u_j^T u_i$


and this is $\lambda_i$ for $i = j$, 0 otherwise. Hence
$U^T V U = \mathrm{diag}(\lambda_1, \ldots, \lambda_p).$
Thus, given $V$, we can always construct an orthogonal matrix $U$ such that if $Y = U^T X$, then $Y_1, \ldots, Y_p$ are independent normal variables (with variances $\lambda_1, \ldots, \lambda_p$ in fact).
Exercises.
i) Given $X \sim N(\mu, V)$, modify the above proof to construct a matrix $D$ such that
$DX \sim N(D\mu, I_p)$
where $I_p$ is the $p \times p$ identity matrix. Hence
ii) show that $(X-\mu)^T V^{-1}(X-\mu)$ is distributed as $\chi^2$, with $p$ df.
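As a small numerical illustration in R (a sketch, not part of the original notes: the matrix V below and its dimension are invented purely for illustration), we can check the diagonalisation, and a candidate D for Exercise (i) is diag(1/sqrt(lambda)) %*% t(U):

V <- matrix(c(2, 1, 1, 2), 2, 2)     # an arbitrary positive-definite V
e <- eigen(V)                        # eigen-values and eigen-vectors of V
U <- e$vectors                       # columns are u_1, ..., u_p
lambda <- e$values
round(t(U) %*% V %*% U, 6)           # should be diag(lambda_1, ..., lambda_p)
D <- diag(1/sqrt(lambda)) %*% t(U)   # candidate matrix for Exercise (i)
round(D %*% V %*% t(D), 6)           # should be the identity matrix I_p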

Less familiar facts about the normal distribution

Conditioning.
Take $X \sim N(\mu, V)$ and partition the vector $X$ as before, so that $X^T = (X_1^T, X_2^T)$. We will prove that
$X_1|(X_2 = x_2) \sim N(\mu_{1.2}, V_{11.2})$ say,
where
$\mu_{1.2} = \mu_1 + V_{12}V_{22}^{-1}(x_2 - \mu_2)$, the conditional mean vector,
and
$V_{11.2} = V_{11} - V_{12}V_{22}^{-1}V_{21}$, the conditional covariance matrix.
(Observe that $V_{11.2}$ is free of $x_2$.)
Proof
Note that we can always derive a conditional pdf as
$f(x_1|x_2) = f(x_1, x_2)/f(x_2)$
(ie joint pdf divided by marginal pdf), but in the current proof we employ a MORE CUNNING argument. (Fine if you know how to get started.)
Suppose the vector $Y$ has components $Y_1, Y_2$ say, where
$Y_1 = X_1 - V_{12}V_{22}^{-1}X_2, \quad Y_2 = X_2.$
Thus we have written $Y = CX$ say, where
$C = \begin{pmatrix} I_{p_1} & -V_{12}V_{22}^{-1} \\ 0 & I_{p_2} \end{pmatrix}$
and so $Y \sim N(C\mu, CVC^T)$, and you can check that
$C\mu = \begin{pmatrix} \mu_1 - V_{12}V_{22}^{-1}\mu_2 \\ \mu_2 \end{pmatrix}$
and
$CVC^T = \begin{pmatrix} V_{11.2} & 0 \\ 0 & V_{22} \end{pmatrix}$


(Multiply out as if we had $2 \times 2$ matrices.)
Hence $Y_1$ and $Y_2$ are independent vectors, ie $X_1 - V_{12}V_{22}^{-1}X_2$ is independent of $X_2$.
Thus the distribution of $(X_1 - V_{12}V_{22}^{-1}X_2)|(X_2 = x_2)$ is the same as the distribution of $(X_1 - V_{12}V_{22}^{-1}X_2)$, which is Normal with covariance matrix $V_{11.2}$.
Hence $(X_1 - V_{12}V_{22}^{-1}X_2)|(X_2 = x_2)$ is Normal with covariance matrix $V_{11.2}$.
Now $E(Y_1) = \mu_1 - V_{12}V_{22}^{-1}\mu_2$. Hence $X_1|(X_2 = x_2)$ has distribution $N(\mu_1 + V_{12}V_{22}^{-1}(x_2 - \mu_2), V_{11.2})$, which is the required result.
Note
i) $E(X_1|X_2 = x_2) = \mu_1 + V_{12}V_{22}^{-1}(x_2 - \mu_2)$, a linear function of $x_2$, as we should expect,
ii) $\mathrm{var}(X_1|X_2 = x_2) = V_{11} - V_{12}V_{22}^{-1}V_{21} \le V_{11} = \mathrm{var}(X_1)$
(ie conditional variance $\le$ marginal variance), in the sense that we take $A \le B$ for matrices $A, B$ if $B - A$ is a positive semi-definite matrix. Here
$\mathrm{var}(X_1|X_2 = x_2) = \mathrm{var}(X_1)$
iff $V_{12} = 0$, in which case $X_1, X_2$ are independent.

The correlation coefficient.
We take
$X = (X_1, \ldots, X_p)^T$
as before, and take $V$ as the covariance matrix of $X$.
Definition. The Pearson correlation coefficient between $X_i, X_j$ is
$\rho_{ij} = \mathrm{corr}(X_i, X_j) = v_{ij}/\sqrt{v_{ii}v_{jj}}.$
$(\rho_{ij})$ is the correlation matrix of $X$.
Check: by definition, $\rho_{ij}^2 \le 1$, with equality iff $X_i$ is a linear function of $X_j$.
With
$X = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix}$
with $X_1, X_2$ now of dimensions $p_1, p_2$ respectively, we know that
$\mathrm{var}(X_1|X_2 = x_2) = V_{11} - V_{12}V_{22}^{-1}V_{21}.$
We could use this latter matrix to find the conditional correlation of, say, $X_{1i}, X_{1j}$, conditional on $X_2 = x_2$.
Exercise.
Suppose $X_1 = \epsilon_1 + aY$ and $X_2 = Y$, where $Y \sim N(0, V_{22})$ and $\epsilon_1 \sim N(0, I_{p_1})$, independently of $Y$, and $a$ is a $p_1 \times p_2$ constant matrix. Show that, conditional on $X_2 = x_2$, the components of $X_1$ are independent.
Clearly, for this example
$\mathrm{var}(X) = \begin{pmatrix} I + aV_{22}a^T & aV_{22} \\ V_{22}a^T & V_{22} \end{pmatrix}.$

Two useful expressions from $V^{-1}$

First we find an expression for the conditional correlation, say of $X_1, X_2$ conditional on the values of the remaining variables.
Suppose $X \sim N(0, V)$, and write $X_i$ as the $i$th component of $X$. Then
$f(x) \propto \exp(-x^T a x/2),$
where we have defined $a = V^{-1}$. Thus, expanding out the quadratic expression, we see that
$f(x_1, x_2, z) \propto \exp(-(a_{11}x_1^2 + 2a_{12}x_1x_2 + a_{22}x_2^2 + \text{terms linear in } x_1, x_2)/2),$
where $z^T = (x_3, \ldots, x_p)$. Thus
$f(x_1, x_2|z) = \frac{f(x_1, x_2, z)}{f(z)} \propto \exp(-(a_{11}(x_1-\nu_1)^2 + 2a_{12}(x_1-\nu_1)(x_2-\nu_2) + a_{22}(x_2-\nu_2)^2)/2)$
where we have defined $\nu_1 = E(X_1|z)$ and $\nu_2 = E(X_2|z)$; thus $\nu_1, \nu_2$ are linear functions of $z$, but not of interest to us at present. We compare the above expression for $f(x_1, x_2|z)$ with the bivariate normal density to find an expression for $\mathrm{corr}(X_1, X_2|z)$ in terms of elements of $a$.
Suppose $Y$ is bivariate normal, with $E(Y_i) = m_i$, $\mathrm{var}(Y_i) = \sigma_i^2$, and $\mathrm{cor}(Y_1, Y_2) = \rho$; thus $(Y_1, Y_2)^T$ has pdf
$g(y_1, y_2) \propto \exp(-(b_1^2 - 2\rho b_1 b_2 + b_2^2)/2(1-\rho^2))$
where we have written $b_i = (y_i - m_i)/\sigma_i$ for $i = 1, 2$.
Look at these two expressions for a density function and compare coefficients in the quadratic. You will then see that
$\rho = -a_{12}/\sqrt{a_{11}a_{22}},$
ie $\mathrm{corr}(X_1, X_2|Z = z) = -a_{12}/\sqrt{a_{11}a_{22}}$, where $a$ is the inverse of $V$, the covariance matrix.
Similarly, if we now define
$z = (x_2, \ldots, x_p)^T,$
you will now see, by a similar argument, that
$f(x_1, z) \propto \exp(-(a_{11}x_1^2 + \ldots)/2)$
and hence $\mathrm{var}(X_1|Z = z) = 1/a_{11}$, where $a = V^{-1}$ as before.
If $1/a_{11}$ is small (compared with $\mathrm{var}(X_1)$), then $X_1$ will be (almost) a linear function of $X_2, \ldots, X_p$.
In R, the matrix $V$ has inverse


    solve(V)

(recall that the original use of matrix inverse is to SOLVE a system of linear equations).
Check. If we write
$V = \begin{pmatrix} v_{11} & b^T \\ b & V_{22} \end{pmatrix},$
then
$\mathrm{var}(X_1|X_2 = x_2, \ldots, X_p = x_p) = v_{11} - b^T V_{22}^{-1} b.$
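Here is a small numerical check of these two expressions in R (a sketch, not part of the original notes; the particular 3 by 3 matrix V is invented purely for illustration):

V <- matrix(c(4, 2, 1,
              2, 3, 1,
              1, 1, 2), 3, 3, byrow = TRUE)   # an invented covariance matrix
a <- solve(V)                                  # a = V^{-1}
1/a[1, 1]                                      # var(X1 | X2, X3)
V[1, 1] - V[1, 2:3] %*% solve(V[2:3, 2:3]) %*% V[2:3, 1]  # same, via v11 - b^T V22^{-1} b
-a[1, 2]/sqrt(a[1, 1]*a[2, 2])                 # corr(X1, X2 | X3)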

Exercises
i) Suppose
$\begin{pmatrix} X_1 \\ X_2 \\ X_3 \end{pmatrix} \sim N(\cdot, \cdot),$
with $\rho_{ij} = \mathrm{corr}(X_i, X_j)$. Show that
$\mathrm{Corr}(X_1, X_2|X_3 = x_3) = (\rho_{12} - \rho_{13}\rho_{23})/\sqrt{(1 - \rho_{13}^2)(1 - \rho_{23}^2)}.$
Hence show that if $\rho_{12} - \rho_{13}\rho_{23} = 0$, then we may represent the dependence graph of $(X_1, X_2, X_3)$ as

X1----X3----X2

ie $X_1$ and $X_2$ are linked only through $X_3$. (This is rather a poor-man's graphic: doubtless you can do it in a better way.)

This would be the case if, for example,
$X_1 = \beta_1 X_3 + \epsilon_1, \quad X_2 = \beta_2 X_3 + \epsilon_2,$
where $\epsilon_1, \epsilon_2, X_3$ are independent random variables.
ii) Suppose $X \sim N(0, V)$ and $X^T = (X_1, X_2^T)$, where $X_1, X_2$ are of dimensions $1, p-1$ respectively.
The multiple correlation coefficient between $X_1, X_2$ is defined as the maximum value of $\mathrm{corr}(X_1, \alpha^T X_2)$, maximising wrt the vector $\alpha$.
Show that this maximising $\alpha$ is given by
$\alpha^T \propto V_{12}V_{22}^{-1}$
where we have partitioned $V$ in the obvious way, and find the resulting multiple correlation coefficient.
Hint: $\mathrm{cov}(X_1, \alpha^T X_2) = \alpha^T V_{21}$, and $\mathrm{var}(\alpha^T X_2) = \alpha^T V_{22}\alpha$. So the problem is equivalent to: maximise $\alpha^T V_{21}$ subject to $\alpha^T V_{22}\alpha = 1$. We write down the corresponding Lagrangian.

  • Chapter 2

Estimation and Testing for the multivariate normal distribution

    2.1 Maximum Likelihood Estimation (mle)

Let $x_1, \ldots, x_n$ be a random sample (rs) from $N_p(\mu, V)$. Then
$f(x_1, \ldots, x_n|\mu, V) \propto \frac{1}{|V|^{n/2}}\exp\left(-\sum_1^n (x_i - \mu)^T V^{-1}(x_i - \mu)/2\right).$
Now
$(x_i - \mu)^T V^{-1}(x_i - \mu) = (x_i - \bar{x} + \bar{x} - \mu)^T V^{-1}(x_i - \bar{x} + \bar{x} - \mu)$
where $\bar{x} = \sum x_i/n$. Thus
$\sum(x_i - \mu)^T V^{-1}(x_i - \mu) = \sum(x_i - \bar{x})^T V^{-1}(x_i - \bar{x}) + n(\bar{x} - \mu)^T V^{-1}(\bar{x} - \mu).$
(Check this.) Hence
$-2\log f(x_1, \ldots, x_n|\mu, V) = n\log|V| + \sum(x_i - \bar{x})^T V^{-1}(x_i - \bar{x}) + n(\bar{x} - \mu)^T V^{-1}(\bar{x} - \mu).$
Recall, the trace of a square matrix is the sum of its diagonal elements. Now, for any vector $u$, $u^T V^{-1} u$ is a scalar quantity, and hence is equal to its trace, ie
$u^T V^{-1} u = \mathrm{tr}(u^T V^{-1} u)$
and this in turn is equal to
$\mathrm{tr}(V^{-1} u u^T).$
Further, for any matrices, $\mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B)$. Hence we may write $-2$ log-likelihood as
$n\log|V| + \mathrm{tr}\left(V^{-1}\sum(x_i - \bar{x})(x_i - \bar{x})^T\right) + n(\bar{x} - \mu)^T V^{-1}(\bar{x} - \mu)$
$= n\log|V| + \mathrm{tr}(V^{-1}nS) + n(\bar{x} - \mu)^T V^{-1}(\bar{x} - \mu),$
where
$S = (1/n)\sum(x_i - \bar{x})(x_i - \bar{x})^T,$


the sample covariance matrix.
Thus we see that
(i) $(\bar{x}, S)$ is sufficient for $(\mu, V)$, by the factorisation theorem, and
(ii) it is easy to minimise $-2$ loglikelihood, with respect to $\mu$, for $V$ a fixed positive definite matrix. Clearly,
$(\bar{x} - \mu)^T V^{-1}(\bar{x} - \mu) \ge 0,$
with equality if and only if $\mu = \bar{x}$.
Hence $\hat{\mu} = \bar{x}$ (whether or not $V$ is known).
Finally, we wish to minimise the expression
$l(V) = n\log|V| + n\,\mathrm{tr}(V^{-1}S),$
with respect to $V$, for $V$ a positive-definite matrix. (Of course, experience with the univariate normal tells us to expect the answer $V = S$.)
Observe that, at $V = S$,
$l(V) = n\log|S| + n\,\mathrm{tr}(I_p) = n\log|S| + np.$
a) Here is the slick way to achieve the minimisation.
Note that we may write
$l(V) = -n\log|V^{-1}| + n\,\mathrm{tr}(V^{-1}S),$
and hence
$l(V) = -n\log|V^{-1}S| + n\,\mathrm{tr}(V^{-1}S) + \text{constant}.$
But each of $V, S$ is symmetric and positive-definite. Suppose $\lambda$ is an eigen-value of $V^{-1}S$, with
$V^{-1}Su = \lambda u.$
Thus
$Su = \lambda Vu = \lambda LL^T u,$
say, where $L$ is a real non-singular matrix (with $V = LL^T$). Hence
$L^{-1}Su = \lambda L^T u$
and so
$(L^{-1}S(L^{-1})^T)(L^T u) = \lambda(L^T u).$
Thus, $\lambda$ is an eigen-value of the symmetric positive-definite matrix $L^{-1}S(L^{-1})^T$, and hence is real and positive. Let $\lambda_1, \ldots, \lambda_p$ be the eigen-values of $L^{-1}S(L^{-1})^T$. Further we may write
$|V^{-1}S| = |(L^T)^{-1}L^{-1}S| = |L^{-1}S(L^{-1})^T| = \prod\lambda_i,$


similarly,
$\mathrm{tr}(V^{-1}S) = \mathrm{tr}((L^T)^{-1}L^{-1}S) = \mathrm{tr}(L^{-1}S(L^{-1})^T) = \sum\lambda_i.$
Thus, our problem reduces to the problem: find $\lambda_1, \ldots, \lambda_p \ge 0$ to minimise (up to a constant not involving $V$)
$l(V) = -n\sum\log\lambda_i + n\sum\lambda_i.$
Thus,
$l(V) = n\sum_i(\lambda_i - \log\lambda_i).$
Now find
$\frac{\partial l(\lambda)}{\partial\lambda_i} \quad\text{and}\quad \frac{\partial^2 l(\lambda)}{\partial\lambda_i^2}$
in order to show that $l(V)$ is minimised with respect to $\lambda_1, \ldots, \lambda_p > 0$ by $\lambda_i = 1$ for $i = 1, \ldots, p$, thus
$V^{-1}S = I_p, \text{ and so } \hat{V} = S,$
as we would expect, from the 1-dimensional case.
We write $\hat{V} = S$; we shall show later that this is not an unbiased estimator of $V$.
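As a quick illustration in R (a sketch, not from the original notes: the data here are simulated, and mvrnorm comes from the MASS package), the mles are just the sample mean vector and the covariance matrix with divisor n rather than n - 1:

library(MASS)                    # for mvrnorm
set.seed(1)
n <- 100
mu <- c(0, 0); V <- matrix(c(2, 1, 1, 2), 2, 2)
x <- mvrnorm(n, mu, V)           # simulated random sample from N_2(mu, V)
muhat <- colMeans(x)             # mle of mu
Vhat <- var(x) * (n - 1)/n       # mle of V: note var() uses divisor (n - 1)
muhat; Vhat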

b) The brute force method.
Put $\Omega = V^{-1}$; our problem is to choose $\Omega$ to minimise
$f(\Omega) = -n\log|\Omega| + \sum_k z_k^T\Omega z_k,$
where we have written $z_k = x_k - \bar{x}$ for $k = 1, \ldots, n$. Heroic feats (well, perhaps mildly heroic) of calculus (to be demonstrated by your lecturer) enable us to write down the equations
$\frac{\partial f(\Omega)}{\partial\Omega_{ij}} = 0$
and hence find the solution
$\Omega^{-1} = S,$
as above. Here is how to get started.
Note that we consider the more general problem: choose a matrix $\Omega$ to minimise $f(\Omega)$. We do not include the constraint that $\Omega$ is a symmetric positive definite matrix.
$\frac{\partial f(\Omega)}{\partial\Omega_{ij}} = -(n/\det(\Omega))\frac{\partial\det(\Omega)}{\partial\Omega_{ij}} + \sum_k z_{ki}z_{kj}.$
Hence
$\frac{\partial f(\Omega)}{\partial\Omega_{ij}} = 0$
is equivalent to
$(1/\det(\Omega))\frac{\partial\det(\Omega)}{\partial\Omega_{ij}} = S_{ij},$
using the fact that $S_{ij} = (1/n)\sum_k z_{ki}z_{kj}$.


    2.2 The distribution of the mles

Notation, for reminder. Take $X_1, \ldots, X_n$ a random sample from $N_p(\mu, V)$, and write $\hat{\mu} = \bar{X}$, $\hat{V} = S$. Recall, if $p = 1$,
$\bar{X} \sim N(\mu, V/n)$ and $nS/\sigma^2 \sim \chi^2_{n-1}$, independently (where $V = \sigma^2$). We seek the multivariate version of this result, which we will then apply.
Here is the easy part of the result: clearly $X_1, \ldots, X_n \sim NID(\mu, V)$ implies that $\bar{X} \sim N(\mu, V/n)$.
For the rest, we first need a multivariate version of the $\chi^2$ distribution: this is the Wishart distribution, defined as follows. Take $Z_\alpha$, $1 \le \alpha \le n-1$, as $NID(0, V)$ where of course $V$ is a $p \times p$ matrix. Then we say that
$\sum_1^{n-1} Z_\alpha Z_\alpha^T$
has the Wishart distribution, parameters $n-1, V$.
Fact
$nS = \sum_1^n (X_i - \bar{X})(X_i - \bar{X})^T,$
which is of course a random matrix, is distributed as
$\sum_1^{n-1} Z_\alpha Z_\alpha^T,$
independently of $\bar{X}$.
Proof is omitted (but see eg Seber 1984).
Exercise. Note that $nS = \sum X_i X_i^T - n\bar{X}\bar{X}^T$. Hence show that $E(nS) = (n-1)V$.
Testing hypotheses for $\mu$ when $V$ is known, for example, to test $H_0: \mu = \mu_0$.
Now since $\sqrt{n}(\bar{X} - \mu_0) \sim N(0, V)$ if $H_0$ is true, we refer $n(\bar{X} - \mu_0)^T V^{-1}(\bar{X} - \mu_0)$ to $\chi^2_p$ to test $H_0$.
Similarly, we can construct a 95% confidence region for $\mu$ from $\bar{X}$.
Testing hypotheses for $\mu$ when $V$ is unknown:
We know that $\bar{X} \sim N(\mu, V/n)$, and hence $\sqrt{n}(\bar{X} - \mu) \sim N(0, V)$, independently of our estimate $S$ of $V$; we know that $nS$ has the Wishart distribution given above.
It therefore seems obvious that our test statistic, eg of $H_0: \mu = \mu_0$, must be
$\text{constant} \times (\bar{X} - \mu_0)^T S^{-1}(\bar{X} - \mu_0),$
having distribution (on $H_0$) which is free of the unknown $V$, and is the multivariate version of $t_{n-1}$. This is actually true, but is surprisingly lengthy to prove: see standard texts, eg Seber, for proof.
For the present, we merely note that Hotelling's $T^2$ is defined as
$T^2 = n(\bar{x} - \mu_0)^T(nS/(n-1))^{-1}(\bar{x} - \mu_0)$
and the exact distribution of $T^2$ on $H_0$ is known; it is obviously $F_{1,n-1}$ if $p = 1$, and for general $p$,
$\frac{n-p}{p}\,\frac{T^2}{n-1} \sim F_{p,n-p}.$
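For example, a one-sample Hotelling T^2 test can be computed directly in R as follows (a sketch, not part of the original notes; the data and mu0 are invented, and S is taken with divisor n as in these notes, so that nS/(n-1) is just var(x)):

set.seed(2)
n <- 30; p <- 2
x <- matrix(rnorm(n * p), n, p)          # invented data playing the role of a sample
mu0 <- c(0, 0)                           # hypothesised mean
xbar <- colMeans(x)
T2 <- n * t(xbar - mu0) %*% solve(var(x)) %*% (xbar - mu0)
Fstat <- (n - p)/p * T2/(n - 1)          # refer to F_{p, n-p}
pval <- 1 - pf(Fstat, p, n - p)
c(T2, Fstat, pval)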


Again, we omit the proof of this distributional result. But we note that it can be used to find the multivariate version of the 2-sample t-test: eg, take $X_1, \ldots, X_m \sim NID(\mu_1, V)$ and $Y_1, \ldots, Y_n \sim NID(\mu_2, V)$, and assume that the $X$s are independent of the $Y$s. Then
$\bar{X} - \bar{Y} \sim N(\mu_1 - \mu_2, (1/m + 1/n)V)$
independently of
$(m+n)S = \sum_i(X_i - \bar{X})(X_i - \bar{X})^T + \sum_j(Y_j - \bar{Y})(Y_j - \bar{Y})^T,$
from which we could construct a Hotelling's $T^2$ statistic, of known distribution, to test $H_0: \mu_1 = \mu_2$.

Exercise.
Let $X_1, \ldots, X_n$ be a random sample from $N(\mu, V)$, where $\mu, V$ are unknown. Show that $T^2$, as defined above, is equivalent to the generalized likelihood ratio test of $H_0: \mu = \mu_0$.
Hint: you will need the matrix identity
$|A + uu^T| = |A|(1 + u^T A^{-1} u)$
where $A$ is a positive-definite $p \times p$ matrix, and $u$ is a $p$-dimensional vector.
Here's how the matrix identity is proved:
Suppose $A = LL^T$ where $L$ is real and non-singular. Then we see that
$A + uu^T = LL^T + uu^T = L(I + vv^T)L^T$
where we have defined the vector $v$ to be $L^{-1}u$. Thus we see that
$\det(A + uu^T) = |L|\det(I + vv^T)|L^T| = \det(LL^T)\det(I + vv^T) = |A|\det(I + vv^T).$
But $\det(I + vv^T)$ is the product of the eigen values of this matrix. Now,
$(I + vv^T)v = (1 + v^Tv)v,$
hence $(1 + v^Tv)$ is an eigen-value of this matrix, and it corresponds to eigen vector $v$. Take $x$ any vector orthogonal to $v$; then
$(I + vv^T)x = 1x,$
hence every other eigen value of $(I + vv^T)$ is 1. Thus the product of the eigen values is $(1 + v^Tv)$, and so we see that
$\det(A + uu^T) = |A|(1 + u^T A^{-1} u)$
as required.
Discussion of the derivation of a test criterion via the generalized likelihood ratio method leads naturally to the next topic:


    2.3 The multivariate analysis of variance

    manova()

in R. First an example, from

    library(MASS)

    of the famous painters data, for which we may wish to apply

    manova(cbind(Composition,Drawing, Colour, Expression) ~ School, painters)

An 18th century art critic called de Piles rated each of 54 painters, on a scale of 0 to 20, on each of the following 4 variables: Composition, Drawing, Colour and Expression. The painters are also classified according to their School: these are
A = Renaissance, B = Mannerist, C = Seicento, D = Venetian, E = Lombard, F = 16thC, G = 17thC, and finally H = French. Here are the data

    Composition Drawing Colour Expression School

    Da Udine 10 8 16 3 A

    Da Vinci 15 16 4 14 A

    Del Piombo 8 13 16 7 A

    Del Sarto 12 16 9 8 A

    Fr. Penni 0 15 8 0 A

    Guilio Romano 15 16 4 14 A

    Michelangelo 8 17 4 8 A

    Perino del Vaga 15 16 7 6 A

    Perugino 4 12 10 4 A

    Raphael 17 18 12 18 A

    F. Zucarro 10 13 8 8 B

    Fr. Salviata 13 15 8 8 B

    Parmigiano 10 15 6 6 B

    Primaticcio 15 14 7 10 B

    T. Zucarro 13 14 10 9 B

    Volterra 12 15 5 8 B

    Barocci 14 15 6 10 C

    Cortona 16 14 12 6 C

    Josepin 10 10 6 2 C

    L. Jordaens 13 12 9 6 C

    Testa 11 15 0 6 C

    Vanius 15 15 12 13 C

    Bassano 6 8 17 0 D

    Bellini 4 6 14 0 D

    Giorgione 8 9 18 4 D

    Murillo 6 8 15 4 D

    Palma Giovane 12 9 14 6 D

    Palma Vecchio 5 6 16 0 D


    Pordenone 8 14 17 5 D

    Tintoretto 15 14 16 4 D

    Titian 12 15 18 6 D

    Veronese 15 10 16 3 D

    Albani 14 14 10 6 E

    Caravaggio 6 6 16 0 E

    Corregio 13 13 15 12 E

    Domenichino 15 17 9 17 E

    Guercino 18 10 10 4 E

    Lanfranco 14 13 10 5 E

    The Carraci 15 17 13 13 E

    Durer 8 10 10 8 F

    Holbein 9 10 16 13 F

    Pourbus 4 15 6 6 F

    Van Leyden 8 6 6 4 F

    Diepenbeck 11 10 14 6 G

    J. Jordaens 10 8 16 6 G

    Otho Venius 13 14 10 10 G

    Rembrandt 15 6 17 12 G


    Rubens 18 13 17 17 G

    Teniers 15 12 13 6 G

    Van Dyck 15 10 17 13 G

    Bourdon 10 8 8 4 H

    Le Brun 16 16 8 16 H

    Le Suer 15 15 4 15 H

    Poussin 15 17 6 15 H

You could argue that the format of this dataset is typical of a pattern recognition problem: suppose that a newly discovered old master, say Patriziani, has a set of scores (Composition=15, Drawing=19, Colour=17, Expression=3); can we use the above Training Set to assign this new painter to the correct School?
You could also reasonably argue that any analysis of the above data-set must start by suitable plots: this is what we will do in the practical classes. Try

    attach(painters)

    plot(Composition,Drawing, type="n")

    text(Composition,Drawing, c("A","B","C","D","E","F","G","H")[School])

But here we will restrict ourselves to a much more specific problem (and this will turn out to give the generalization of the well-known 1-way anova).
The model: assume that we have independent observations from $g$ different groups,
$x_j^{(\nu)} \sim NID(\mu^{(\nu)}, V), \text{ for } j = 1, \ldots, n_\nu, \ \nu = 1, \ldots, g,$


where $\sum n_\nu = n$. We wish to test
$H_0: \mu^{(1)} = \ldots = \mu^{(g)} = \mu$ say, where $\mu, V$ unknown,
against
$H_1: \mu^{(1)}, \ldots, \mu^{(g)}, V$ all unknown.
We maximise the log-likelihood under each of $H_0, H_1$ respectively, in order to find the likelihood ratio criterion (which of course must turn out to be the matrix version of (ss between groups)/(ss within groups)).
Let $S_\nu$ be the sample covariance matrix for the $\nu$th group.
Now $-2$ loglikelihood for all $n$ observations
$= 2l(\mu^{(1)}, \ldots, \mu^{(g)}, V)$ say
$= \sum_\nu n_\nu\left(\log|V| + \mathrm{tr}(V^{-1}S_\nu)\right) + \sum_\nu n_\nu(\bar{x}^{(\nu)} - \mu^{(\nu)})^T V^{-1}(\bar{x}^{(\nu)} - \mu^{(\nu)})$
$= n\log|V| + \sum_\nu n_\nu\,\mathrm{tr}(V^{-1}S_\nu) + \sum_\nu n_\nu(\bar{x}^{(\nu)} - \mu^{(\nu)})^T V^{-1}(\bar{x}^{(\nu)} - \mu^{(\nu)}).$
We have already done the hard work that enables us to minimise this expression. Verify that this is minimised under $H_1$ by
$\hat{\mu}^{(\nu)} = \bar{x}^{(\nu)}$, for each $\nu$, and $\hat{V} = (1/n)\sum n_\nu S_\nu = (1/n)W$ say,
giving
$\min_{H_1} 2l(\mu^{(1)}, \ldots, \mu^{(g)}, V) = n\log|\hat{V}| + n\,\mathrm{tr}(\hat{V}^{-1}\hat{V}) = n\log|\hat{V}| + np.$
Now, under $H_0$, $-2$ loglikelihood $= 2l(\mu, V)$ say
$= n\log|V| + \sum n_\nu\,\mathrm{tr}(V^{-1}S_\nu) + \sum n_\nu(\bar{x}^{(\nu)} - \mu)^T V^{-1}(\bar{x}^{(\nu)} - \mu).$
We may write the last term as
$\sum n_\nu(\bar{x}^{(\nu)} - \bar{x} + \bar{x} - \mu)^T V^{-1}(\bar{x}^{(\nu)} - \bar{x} + \bar{x} - \mu)$
$= \sum n_\nu(\bar{x}^{(\nu)} - \bar{x})^T V^{-1}(\bar{x}^{(\nu)} - \bar{x}) + n(\bar{x} - \mu)^T V^{-1}(\bar{x} - \mu),$
where we have defined $\bar{x} = \sum n_\nu\bar{x}^{(\nu)}/n$, the mean of all the observations.
Hence, under $H_0$, $-2$ loglikelihood is minimised with respect to $\mu, V$ by $\hat{\mu} = \bar{x}$ and
$\hat{V} = \frac{1}{n}\left(\sum n_\nu S_\nu + \sum n_\nu(\bar{x}^{(\nu)} - \bar{x})(\bar{x}^{(\nu)} - \bar{x})^T\right) = \frac{1}{n}(W + B)$ say,
where $W$ = within-groups ss, $B$ = between-groups ss, and
$\min_{H_0} 2l(\mu, V) = n\log|(1/n)(W + B)| + np.$
So we see that the l.r. test of $H_0$ against $H_1$ is to reject $H_0$ in favour of $H_1$ iff
$\log(|W + B|/|W|) > \text{constant}.$
(Compare this with the traditional 1-way anova.) Define $\Lambda = |W|/|W + B|$. We simplify this expression, as follows.


Now $W, B$ are symmetric matrices, with $W > 0$, $B \ge 0$. Put $W = LL^T$ say. Suppose
$W^{-1}Bv_j = \lambda_j v_j.$
Then
$Bv_j = \lambda_j Wv_j = \lambda_j LL^T v_j,$
and so
$L^{-1}B(L^{-1})^T(L^T v_j) = \lambda_j L^T v_j;$
put $u_j = L^T v_j$, thus
$L^{-1}B(L^{-1})^T u_j = \lambda_j u_j.$
Now $L^{-1}B(L^{-1})^T$ is a symmetric matrix, $\ge 0$, hence $\lambda_j$ is real, and $\ge 0$. Further,
$\frac{|W + B|}{|W|} = \frac{|LL^T + B|}{|W|} = \frac{|L||L^T|\det(I + L^{-1}B(L^T)^{-1})}{|W|} = \det(I + L^{-1}B(L^T)^{-1}) = \prod(1 + \lambda_j).$
This final quantity has known distribution, under $H_0$.
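In R this test is carried out by summary.manova(); for instance, for the painters data above, something like the following gives the Wilks criterion (a sketch, assuming library(MASS) has been loaded so that the painters data frame is available):

library(MASS)   # for the painters data frame
painters.manova <- manova(cbind(Composition, Drawing, Colour, Expression) ~ School, data = painters)
summary(painters.manova, test = "Wilks")   # Wilks' Lambda = |W|/|W + B|
summary(painters.manova)                   # the default test is Pillai's trace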

2.4 Applications to Linear Discriminant Analysis

    discrim()

    lda() # in library(MASS)

Here is the problem: given data $(x_j^{(\nu)}, 1 \le j \le n_\nu, 1 \le \nu \le g)$ from our $g$ different groups, choose a $p$-dimensional vector $a$ such that the between-groups ss for $a^T x$ is as large as possible for a given value of the within-groups ss for $a^T x$, ie maximise $a^T B a$ subject to $a^T W a = 1$. Clearly, the corresponding Lagrangian is
$L(a, \lambda) = a^T B a + \lambda(1 - a^T W a),$
giving (by differentiating wrt $a$)
$Ba = \lambda Wa,$
ie we choose $a$ as the eigen vector corresponding to the largest eigen value of $W^{-1}B$; equivalently, $L^T a$ is the eigen vector corresponding to the largest eigen value of $L^{-1}B(L^{-1})^T$.
An alternative approach is to use a Bayesian decision rule: this is what lies behind the R function lda(). Our explanation below follows Venables and Ripley, 2nd edition, p397 (with some minor modifications).


Suppose we have observations from a set of $g$ classes, which correspond respectively to observations from pdfs $f_1(x), \ldots, f_c(x), \ldots, f_g(x)$. Initially we assume that all these pdfs are known. We take a new observation, $x$ say, and we wish to assign it to the correct class out of $C_1, \ldots, C_g$, which have known prior probabilities $\pi_1, \ldots, \pi_g$, adding to 1. Clearly, by Bayes' theorem, we see that
$P(\text{new obsn belongs to class } c|x) \propto \pi_c f_c(x)$
and if we have a symmetric loss-function, then we should assign the new observation to class $c$ if $\pi_c f_c(x)$ is the largest over the $g$ classes, ie we assign to class $c$ if $Q_c$ is smallest, where
$Q_c = -2\log(f_c(x)) - 2\log\pi_c.$
Take the special case of $f_c(x)$ as the pdf of $N(\mu_c, V_c)$. Then it is easily checked that
$Q_c = (x - \mu_c)^T V_c^{-1}(x - \mu_c) + \log\det(V_c) - 2\log\pi_c.$
The quantity $(x - \mu_c)^T V_c^{-1}(x - \mu_c)$ is called the Mahalanobis distance of $x$ from the class centre of $C_c$. Since the difference between any pair $Q_c, Q_d$ say is a quadratic function of $x$, this method will give quadratic boundaries of the decision region for $x$, and the method is known as quadratic discriminant analysis.
Take the special case $V_c = V$ for all $c$. Then you can easily check that the quadratic rule given above simplifies to a linear discriminant rule: we assign $x$ to class $c$ if $L_c$ is the largest, where we define
$L_j = x^T V^{-1}\mu_j - \mu_j^T V^{-1}\mu_j/2 + \log(\pi_j).$
In real life, of course, we do not know $\mu_1, \ldots, \mu_g$ and we do not know $V$: so in practice we use the obvious estimates for them, namely the within-class means to estimate the means, and we use $W$ to estimate $V$.
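As an illustration of lda() in practice (a sketch, not part of the original notes: the newpainter scores below are those of the hypothetical Patriziani mentioned earlier), one might try:

library(MASS)
painters.lda <- lda(School ~ Composition + Drawing + Colour + Expression, data = painters)
newpainter <- data.frame(Composition = 15, Drawing = 19, Colour = 17, Expression = 3)
predict(painters.lda, newpainter)$class                  # predicted School
round(predict(painters.lda, newpainter)$posterior, 3)    # posterior probabilities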

  • Chapter 3

    Principal components analysis.

    (pca)

    princomp()

Suppose our observation vector $X \sim N(\mu, V)$, so that for example, for $p = 2$, $\mu = (0, 0)^T$,
$V = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.$
Then for the special case $\rho = .9$, the contours of the density function will be very elongated ellipses, and a random sample of 500 observations from this density function has the scatter plot given here as Figure 3.1.
Using $y_1, y_2$ as the principal axes of the ellipse, we see that most of the variation in the data can be expected to be in the $y_1$ direction. Indeed, the 2-dimensional picture Figure 3.1 could be quite satisfactorily collapsed into a 1-dimensional picture.
Clearly, if we had $n$ points in $p$ dimensions as our original data, it would be a great saving if we could adequately represent these $n$ points in say just 2 or 3 dimensions.
Formally: with $X \sim N(\mu, V)$, our problem is to choose a direction $a_1$ say to maximise $\mathrm{var}(a_1^T X)$, subject to $a_1^T a_1 = 1$, ie to choose $a_1$ to maximise $a_1^T V a_1$ subject to $a_1^T a_1 = 1$.
This gives the Lagrangian
$L(a_1, \lambda_1) = a_1^T V a_1 + \lambda_1(1 - a_1^T a_1).$
$\frac{\partial L}{\partial a_1} = 0$ implies $Va_1 = \lambda_1 a_1$
and hence $a_1$ is an eigenvector of $V$, corresponding to eigenvalue $\lambda_1$. Further
$a_1^T V a_1 = \lambda_1 a_1^T a_1 = \lambda_1,$
hence we should take $\lambda_1$ as the largest eigen value of $V$. Then the first principal component is said to be $a_1^T X$: it has variance $\lambda_1$.
Our next problem is to choose $a_2$ to maximise $\mathrm{var}(a_2^T X)$ subject to $\mathrm{cov}(a_1^T X, a_2^T X) = 0$ and $a_2^T a_2 = 1$.


Figure 3.1: A random sample from a bivariate normal distribution (scatter plot of x2 against x1)

Now, using the fact that $Va_1 = \lambda_1 a_1$, we see that $\mathrm{cov}(a_1^T X, a_2^T X) = 0$ is equivalent to the condition $a_2^T a_1 = 0$. Hence we take as the Lagrangian
$L(a_2, \nu, \lambda_2) = a_2^T V a_2 + \nu a_2^T a_1 + \lambda_2(1 - a_2^T a_2).$
Now find $\frac{\partial L}{\partial a_2}$ and apply the constraints to show that $a_2$ is the eigen vector of $V$ corresponding to its second largest eigen value, $\lambda_2$.
And so on. Let us denote
$\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_p\ (> 0)$
as the eigen values of $V$, and let
$a_1, \ldots, a_p$
be the corresponding eigen vectors.
(Check that for the given example, with $p = 2$, the eigen-values are $1 + \rho$, $1 - \rho$.)
Define $Y_i = a_i^T X$; these are the $p$ principal components, and you can see that
$Y_i \sim NID(a_i^T\mu, \lambda_i), \text{ for } i = 1, \ldots, p.$
The practical application
Of course, in practice the matrix $V$ is unknown, so we must replace it by say $S$, its sample estimate. Suppose our original $n \times p$ data matrix is $X$, so that
$X = \begin{pmatrix} x_1^T \\ \vdots \\ x_n^T \end{pmatrix}$


corresponding to $n$ points in $p$ dimensions. Then
$nS = X^T X - n\bar{x}^T\bar{x}$
(correcting apparent error in Venables and Ripley) where $\bar{x} = 1^T X/n$ is the row vector of the means of the $p$ variables: we have already proved that $S$ is the mle of $V$. (Warning: you may find that $(n-1)$ is used as the divisor, in some software.) Here, for example, what the R function

princomp()

does is to choose $a_1$ to maximise $a_1^T S a_1$ subject to $a_1^T a_1 = 1$, obtaining
$Sa_1 = \lambda_1 a_1,$
$\lambda_1$ being the largest eigenvalue of $S$. Then, for example, $a_1^T x_1, \ldots, a_1^T x_n$ show the first principal component for each of the original $n$ data points.
Interpretation of the principal components (where possible) is very important in practice.

The scree-plot
Plot $(\lambda_1 + \cdots + \lambda_i)/(\lambda_1 + \cdots + \lambda_p)$ against $i$. This gives us an informal guide as to how many components are needed for an acceptable representation of the original data. Clearly $(\lambda_1 + \cdots + \lambda_i)/(\lambda_1 + \cdots + \lambda_p)$ increases with $i$, and is 1 for $i = p$, but we may hope that 3 dimensions represents the overall picture satisfactorily, ie
$(\lambda_1 + \lambda_2 + \lambda_3)/(\lambda_1 + \cdots + \lambda_p) > 3/4$, say.

The difficulty of scaling
Suppose, eg, $x_i$ has 2 components, namely $x_{1i}$ a length, measured in feet, and $x_{2i}$ a weight, measured in pounds. Suppose that these variables have covariance matrix $S$.
We consider the effect of rescaling these variables, to inches and ounces respectively. (1 ft = 12 inches, 1 pound = 16 ounces.) The covariance matrix for these new units is say $SS$, and
$SS = \begin{pmatrix} 12^2 S_{11} & 12\cdot 16\, S_{12} \\ 12\cdot 16\, S_{21} & 16^2 S_{22} \end{pmatrix}.$

There is no simple relationship between the eigenvalues/vectors of $S$ and those of $SS$. So the picture we get of the data by applying pca to $SS$ might be quite different from what we found by pca on $S$.
Note, if one column of the original data matrix
$\begin{pmatrix} x_1^T \\ \vdots \\ x_n^T \end{pmatrix}$
has a particularly large sample variance, this column will tend to dominate the pca picture, although it may be scientifically uninteresting. For example


in a biological dataset, this column may simply correspond to the size of the animal.
For example, suppose our observation vector is $X$, and
$X = lZ + \epsilon$
where $Z, \epsilon$ are independent, with $Z \sim N_1(0, v)$ and $\epsilon_i \sim NID(0, \sigma_i^2)$, for $i = 1, \ldots, p$. Then
$V = \mathrm{var}(X) = ll^T v + \mathrm{diag}(\sigma_1^2, \ldots, \sigma_p^2).$
Suppose $v >> \sigma_1^2, \ldots, \sigma_p^2$. Then you will find that the first principal component of $V$ is approximately $l^T X$; in other words $Z$ (which may correspond to the size of the animal) dominates the pca picture.
Further, if $\sigma_1^2 = \ldots = \sigma_p^2 = \sigma^2$, then
$V = ll^T v + \sigma^2 I$
and hence
$Vl = l(l^T l)v + \sigma^2 l.$
Suppose (wlog) that $l^T l = 1$. Then we see that $l$ is an eigen-vector of $V$ corresponding to eigen value $v + \sigma^2$. If $u$ is any vector orthogonal to $l$, then
$Vu = l(l^T u)v + \sigma^2 u,$
hence $Vu = \sigma^2 u$, and so $u$ is an eigen vector of $V$ corresponding to eigenvalue $\sigma^2$. This is clearly an extreme example: we have constructed $V$ to have as its eigenvalues $v + \sigma^2$ (once) and $\sigma^2$, repeated $p - 1$ times.
The standard fixup, if there is no natural scale for each of the columns of the data matrix $X$, is to standardise so that each of the columns of this matrix has sample variance 1. Thus, instead of finding the eigen values of $S$, the sample covariance matrix, we find the eigen values of $R$, the sample correlation matrix. This has the following practical consequences:
i) $\sum\lambda_i = \mathrm{tr}(R) = p$ since $R_{ii} = 1$ for each $i$.
ii) the original measurements (eg height, weight, price) are all given equal weight in the pca.
So this is quite a draconian correction.
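In R this choice is just the cor argument of princomp() (a sketch, not part of the original notes; USArrests is a standard R dataset used here purely as an example of variables on very different scales):

pca.S <- princomp(USArrests)              # pca on the sample covariance matrix S
pca.R <- princomp(USArrests, cor = TRUE)  # pca on the sample correlation matrix R
summary(pca.S)                            # the large-variance column dominates
summary(pca.R)                            # a much more balanced picture
screeplot(pca.R)                          # informal guide to the number of components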

Factor analysis, which is a different technique, with a different model, attempts to remedy this problem, but at the same time introduces a whole new raft of difficulties.

factanal()

For this reason, we discuss factor analysis only very briefly, below. It is well covered by R, and includes some thoughtful warnings.

Formal definition of Factor analysis.
We follow the notation of Venables and Ripley, who give further details and an example of the technique.


The model, for a single underlying factor: Suppose that the observation vector $X$ is given by
$X = \mu + \lambda f + u$
where $\mu$ is fixed, $\lambda$ is a fixed vector of loadings, $f \sim N_1(0, 1)$ and $u \sim N_p(0, \Psi)$ are independent, and $\Psi$ is an unknown diagonal matrix. Our random sample consists of
$X_i = \mu + \lambda f_i + u_i$
for $i = 1, \ldots, n$. This gives $X_1, \ldots, X_n$ a rs from $N(\mu, V)$ where
$V = \lambda\lambda^T + \Psi.$
For $k < p$ common factors, we have
$X_i = \mu + \Lambda f_i + u_i,$
for $i = 1, \ldots, n$, with $f \sim N_k(0, I_k)$ and $u \sim N_p(0, \Psi)$ independent, with $\Lambda$ a fixed, unknown $p \times k$ matrix of loadings, and with $\Psi$ an unknown diagonal matrix as before, so that $X_1, \ldots, X_n$ is a rs from $N(\mu, V)$ where
$V = \Lambda\Lambda^T + \Psi.$
We have a basic problem of unidentifiability, since
$X_i = \mu + \Lambda f_i + u_i$
is the same model as
$X_i = \mu + (\Lambda G)(G^T f_i) + u_i$
for any orthogonal matrix $G$.
Choosing an appropriate $G$ is known as choosing a rotation. In the ml fit of the factor analysis model above, we choose $\Lambda, \Psi$ (subject to a suitable constraint) to maximise
$-\mathrm{tr}(V^{-1}S) + \log|V^{-1}S|.$
Note that the number of parameters in $V$ is $p + p(p-1)/2 = p(p+1)/2$. Define
$s = p(p+1)/2 - [p(k+1) - k(k-1)/2] = (p-k)^2/2 - (p+k)/2$
as the degrees of freedom of this ml problem. Then
$s$ = (number of parameters in $V$) $-$ (number of parameters in $\Lambda, \Psi$), taking account of the rotational freedom in $\Lambda$, since only $\Lambda\Lambda^T$ is determined.
We assume $s \ge 0$ for a solution (otherwise we have non-identifiability).
If $s > 0$, then in effect, factor analysis chooses the scaling of the variables via $\Psi$, whereas in pca, the user must choose the scaling.


Lastly, here is another view of pca on $S$, the sample covariance matrix, as the solution to a minimisation problem. This is a data-analytic rather than an mle approach.
Suppose we have observations $y_1, \ldots, y_n \in R^p$; take $\sum y_i = 0$ for simplicity, so that $\bar{y} = 0$.
Take $k < p$ and consider the problem of finding the best-fitting $k$-dimensional linear subspace for $(y_1, \ldots, y_n)$, in the following sense: take $k$ orthonormal vectors $a_1, \ldots, a_k$ (ie such that $a_i^T a_j = \delta_{ij}$ for $i, j = 1, \ldots, k$) to minimise
$\sum(y_i - Py_i)^T(y_i - Py_i)$
where $Py_i$ is the projection of $y_i$ onto the linear subspace spanned by $a_1, \ldots, a_k$.
Solution.
Any orthonormal set $a_1, \ldots, a_k \in R^p$ may be extended to
$a_1, \ldots, a_k, \ldots, a_p,$
an orthonormal basis of $R^p$. Furthermore, any $y \in R^p$ may then be rewritten as
$y = \sum_1^p (y^T a_i)a_i$
and then
$Py = \sum_1^k (y^T a_i)a_i,$
and so
$y - Py = \sum_{k+1}^p (y^T a_i)a_i;$
and
$(y - Py)^T(y - Py) = \sum_{k+1}^p (y^T a_i)^2.$
Hence, our problem is to choose $a_1, \ldots, a_k$ to minimise
$\sum_{j=1}^n\sum_{i=k+1}^p (y_j^T a_i)^2.$
But
$\sum_{j=1}^n y_j^T y_j = \sum_{j=1}^n\sum_{i=1}^p (y_j^T a_i)^2$
is fixed, so our problem is therefore to maximise
$\sum_{j=1}^n\sum_{i=1}^k (y_j^T a_i)^2,$
and this last term is
$\sum_{i=1}^k a_i^T\left(\sum_{j=1}^n y_j y_j^T\right)a_i.$
Thus our problem is to choose $a_1, \ldots, a_k$ subject to $a_i^T a_j = \delta_{ij}$ to maximise $\sum_1^k a_i^T S a_i$, where
$S \propto \sum y_j y_j^T.$


We have already shown how to solve this problem, at the beginning of this Chapter. The solution is to take $a_1$ as the eigenvector of $S$ corresponding to the largest eigen value $\lambda_1$, and so on.

Note added September 2013. I have now come across the term multinomial PCA, mainly in the context of text-mining analysis and bioinformatics. What has this to do with the pca described above?
One answer is that both techniques are concerned with reducing a large data matrix. Non-negative Matrix Factorisation (NMF) decomposes a positive matrix, say $V$, into a product of non-negative factors, thus
$V \approx W.H$
where $V, W, H$ are of orders $m \times n$, $m \times p$, $p \times n$, respectively, and for a useful representation $p$

  • Chapter 4

    Cluster Analysis

Here we seek to put $n$ objects into (disjoint) groups, using data on $d$ variables.
The data consist of points $x_1, \ldots, x_n$ say, giving rise to an $n \times d$ data matrix. $x_i$ may consist of continuous, or discrete variables, or a mixture of the two, eg red hair, blue eyes, 1.78 m, 140 kg, 132 iq and so on.
There are NO probability models involved in cluster analysis: in this sense the method is said to be data-analytic.
We start by constructing from $x_1, \ldots, x_n$ the DISSIMILARITY matrix $(d_{rs})$ between all the individuals, or equivalently the SIMILARITY matrix, say $(c_{rs})$.
For example, we might take $d_{rs} = |x_r - x_s|$, Euclidean distance.
There are 3 types of clustering available:
i) hierarchical clustering

    hclust()

in which we form a dendrogram of clusters (like a tree, upside down) by grouping points into clusters, and then grouping the clusters themselves into bigger clusters, and so on, until all $n$ points form one big cluster. See the Swiss cantons data-set as an example.
ii) we could partition the $n$ points into a given number, eg 4, of non-overlapping clusters

    kmeans()

    iii) we could partition the n points into overlapping clusters.

How to construct the dissimilarity matrix
Take any 3 objects, say $A, B, C$. Let $d(A, B)$ be the dissimilarity between $A$ and $B$. It is reasonable to require the following of the function $d(\cdot, \cdot)$:
$d(A, B) \ge 0,\quad d(A, B) = 0 \text{ iff } A = B,\quad d(A, C) \le d(A, B) + d(B, C).$
The R function


    dist()

produces, for $n$ points, the $n(n-1)/2$ distances, eg with
Euclidean, $|x_r - x_s|$, as default,
or manhattan (also called city-block), ie $\sum_i|x_{ri} - x_{si}|$,
or maximum, ie $\max_i|x_{ri} - x_{si}|$.
It also allows for binary variables; for example if
$x_r = (1, 0, 0, 1, 1, 0, 0)$ and $x_s = (1, 0, 0, 0, 0, 0, 0)$ then the simple matching coefficient gives
$d(x_r, x_s) = 1 - 5/7$ (this counts all matches)
but the Jaccard coefficient gives
$d(x_r, x_s) = 1 - 1/3$ (just one positive match in the 3 places where matching matters).

What do we do with the distance matrix when we've got it?
The R function hclust() has 3 possible methods, our problem being that we have to decide how to define the distance or dissimilarity between any 2 clusters.
We now describe the agglomerative technique used in hierarchical clustering.
Begin with $n$ clusters, each consisting of just 1 point, and $(d_{rs}) = D$ say, a dissimilarity matrix, and a measure of dissimilarity between any 2 clusters, say $d(C_1, C_2)$ for clusters $C_1, C_2$.
Fuse the 2 nearest points into a single cluster.
Now we have $n - 1$ clusters.
Fuse the 2 nearest such clusters into a single cluster.
Now we have $n - 2$ clusters.
And so on. Finally we have one cluster containing all $n$ objects.
This now gives us a hierarchical clustering, and for example we could sort the $n$ objects into 3 groups by drawing a horizontal line across the resulting dendrogram.

The possible definitions for $d(C_1, C_2)$:
i) compact (complete linkage): $d(C_1, C_2) = \max d(i, j)$
ii) average: $d(C_1, C_2) = \text{average } d(i, j)$
iii) connected (single linkage): $d(C_1, C_2) = \min d(i, j)$
(this one tends to produce long straggly clusters).

In all of i), ii), iii) we take $i$ in $C_1$, $j$ in $C_2$.
Warning: we repeat, this method is entirely data-analytic. We can't reasonably ask for significance tests: eg, do 3 clusters explain the data significantly better than 2? (Simulation, or using random subsets of the whole data-set, may go some way towards answering this last type of question.)
Here is a small scale example, from a subset of recent MPhil students. Each student was asked to reply Yes or No (coded here as 1, 0 respectively) to each of 10 (rather boring, but not embarrassing personal) questions. These were


Do you eat eggs? Do you eat meat? Do you drink coffee? Do you drink beer? Are you a UK resident? Are you a Cambridge Graduate? Are you Female? Do you play sports? Are you a car driver? Are you Left-handed?
The answers for this subset of students form the dataset given below.

    eggs meat coffee beer UKres Cantab Fem sports driver Left.h

    Philip 1 1 1 0 1 1 0 0 1 1

    Chad 1 1 1 0 0 0 0 1 1 0

    Graham 1 1 1 1 1 1 0 1 1 0

    Tim 1 1 1 1 1 1 0 1 0 0

    Mark 1 1 0 1 1 1 0 0 0 1

    Juliet 0 1 1 0 1 0 1 0 0 0

    Garfield 0 1 1 1 0 0 0 1 0 0

    Nicolas 1 1 1 1 0 0 0 1 1 0

    Frederic 1 1 0 1 0 0 0 1 1 0

    John 1 1 1 1 0 0 0 0 1 0

    Sauli 1 1 0 0 1 0 0 1 1 0

    Fred 1 1 1 0 0 0 0 1 0 0

    Gbenga 1 1 1 0 0 0 0 1 0 0

# taking a as the data-matrix above, we compute d, the appropriate
# set of 13*12/2 interpoint distances, and present the corresponding
# 13 by 13 distance matrix

    > d = dist(a, metric="binary") ; round(dist2full(d), 2)

    [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13]

    [1,] 0.00 0.50 0.33 0.44 0.38 0.62 0.78 0.56 0.67 0.50 0.50 0.62 0.62

    [2,] 0.50 0.00 0.38 0.50 0.78 0.71 0.50 0.17 0.33 0.33 0.33 0.20 0.20

    [3,] 0.33 0.38 0.00 0.12 0.44 0.67 0.50 0.25 0.38 0.38 0.38 0.50 0.50

    [4,] 0.44 0.50 0.12 0.00 0.38 0.62 0.43 0.38 0.50 0.50 0.50 0.43 0.43

    [5,] 0.38 0.78 0.44 0.38 0.00 0.75 0.75 0.67 0.62 0.62 0.62 0.75 0.75

    [6,] 0.62 0.71 0.67 0.62 0.75 0.00 0.67 0.75 0.88 0.71 0.71 0.67 0.67

    [7,] 0.78 0.50 0.50 0.43 0.75 0.67 0.00 0.33 0.50 0.50 0.71 0.40 0.40

    [8,] 0.56 0.17 0.25 0.38 0.67 0.75 0.33 0.00 0.17 0.17 0.43 0.33 0.33

    [9,] 0.67 0.33 0.38 0.50 0.62 0.88 0.50 0.17 0.00 0.33 0.33 0.50 0.50

    [10,] 0.50 0.33 0.38 0.50 0.62 0.71 0.50 0.17 0.33 0.00 0.57 0.50 0.50

    [11,] 0.50 0.33 0.38 0.50 0.62 0.71 0.71 0.43 0.33 0.57 0.00 0.50 0.50

    [12,] 0.62 0.20 0.50 0.43 0.75 0.67 0.40 0.33 0.50 0.50 0.50 0.00 0.00

    [13,] 0.62 0.20 0.50 0.43 0.75 0.67 0.40 0.33 0.50 0.50 0.50 0.00 0.00

    > h = hclust(d, method="compact"); h

and here is a resulting dendrogram (obtained using the method compact) as Figure 4.1.


Figure 4.1: Example of a dendrogram (leaves labelled 1 to 13, corresponding to the 13 students listed above)
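To cut this tree into, say, 3 groups one could use cutree() (a sketch, not in the original notes, re-using the distance matrix d computed above; note that in current R the linkage method is called "complete" rather than "compact"):

h <- hclust(d, method = "complete")  # "complete" is the current R name for compact linkage
groups <- cutree(h, k = 3)           # label each of the 13 students with a group number
table(groups)
plot(h)                              # draw the dendrogram itself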

  • Chapter 5

Tree-based methods, ie decision trees / classification trees

(Regression trees are also a possibility, but we do not discuss them here.)
We take as our example the shuttle autolander data. (These data are taken from D. Michie, 1989, and are discussed in Venables and Ripley.)
For the shuttle autolander data, we seek to base our decision about the desired level of a particular factor, here use, which has possible values auto and noauto, on the levels of certain other explanatory variables, here stability, error, wind, visibility, and so on. (As it happens, all the variables in this problem are factors, with two or more levels, but this need not be the case for this method to work.)
We show how to construct a decision tree, or classification tree, using the R library

    rpart()

    or the previously favoured method

    tree()

    post.tree(shuttle.tree, file="pretty")

Here's how it works. The decision tree provides a probability model: at each node of the classification tree, we have a probability distribution $(p_{ik})$ over the classes $k$; here $k$ has values just 1, 2, corresponding to auto, noauto respectively (note that R works alphabetically, by default), and
$\sum_k p_{ik} = 1,$
for each node $i$.
The partition is given by the leaves of the tree, which are also known as the terminal nodes, denoted as * in the R output. (Confusingly, the convention is that the tree grows upside-down, ie with its root at the top of the page.)
In the current example, each of the total of $n = 256$ cases in the training set is assigned to a leaf, and so at each leaf we have a sample, say $(n_{ik})$, from a multinomial distribution, say $Mn(n_i, (p_{ik}))$ (in the shuttle example these are actually binomial distributions).


Condition on the observed variables $(x_i)$ in the training set (ie the observed covariate values).
Then we know the numbers $(n_i)$ of cases assigned to each node of the tree, and in particular to each of the leaves. The conditional likelihood is thus
$\prod_{\text{cases } j} p_{[j]y_j}$
where $[j]$ is the leaf assigned to case $j$. Hence this conditional likelihood is
$\prod_{\text{leaves } i}\ \prod_{\text{classes } k} p_{ik}^{n_{ik}}$
and we can define a deviance for the tree as
$D = \sum_{\text{leaves } i} D_i,$
where
$D_i = -2\sum_k n_{ik}\log p_{ik}.$
Note that a perfect classification would result in each $(p_{i1}, p_{i2})$ as a $(1, 0)$ or $(0, 1)$ (zero-entropy) distribution, and in this case each $D_i = 0$ (recall that $0\log 0 = 0$) and so $D = 0$.
Our general aim is to construct a tree with $D$ as small as possible, but without too many leaves.
How does the algorithm decide how to split a given node?
Consider splitting node $s$ into nodes $t, u$ say:

t <--- s ---> u

This will change the probability distribution/model within node $s$. The total reduction in deviance for the tree will be
$D_s - D_t - D_u = 2\sum\left(n_{tk}\log(p_{tk}/p_{sk}) + n_{uk}\log(p_{uk}/p_{sk})\right).$
We do not actually know the probabilities $(p_{tk})$ etc, so the best we can do is to estimate them from the sample proportions in the split node, thus obtaining
$\hat{p}_{tk} = n_{tk}/n_t, \quad \hat{p}_{uk} = n_{uk}/n_u,$
and
$\hat{p}_{sk} = (n_t\hat{p}_{tk} + n_u\hat{p}_{uk})/n_s = n_{sk}/n_s,$
and correspondingly, the estimated reduction in deviance is
$\hat{D}_s - \hat{D}_t - \hat{D}_u = 2\sum\left(n_{tk}\log(\hat{p}_{tk}/\hat{p}_{sk}) + n_{uk}\log(\hat{p}_{uk}/\hat{p}_{sk})\right).$
This gives us a measure of the value of a split at node $s$.
NB: since this depends on $n_s, n_t, n_u$, there will be more value in splitting leaves with a larger number of cases.
The algorithm for the tree construction is designed to take the MAXIMUM reduction in deviance over all allowed splits of all leaves, to choose the next split.
Usually, tree construction continues until


either, the number of cases reaching each leaf is small enough (default, $< 10$ in tree()),
or, a leaf is homogeneous enough (eg, its deviance is $< (1/100)$ of the deviance of the root node).
Remarks
1. In classifying new cases, missing values of some $x_1, x_2, \ldots$ values can easily be handled (unlike, say, in logistic regression, where we may need to infer the missing covariate values in order to find a corresponding fitted value). (For example, in the final tree construction, it may turn out that $x_1$, eg wind speed, is not even used.)
2. Cutting trees down to size
The function

tree()

may produce some useless nodes. The function

prune.tree()

will prune the tree to something more useful, eg by reducing
deviance $+\ \alpha \times$ size;
thus, we have a tradeoff between the overall cost and the complexity (ie number of terminal nodes). (The idea is similar to that of the use of the AIC in regression models.) This is not explained further here, but is best understood by experimenting.

    > library(MASS) ; library(rpart)

    >

    >shuttle[120:130,]

    stability error sign wind magn vis use

    120 stab XL nn tail Out no auto

    121 stab MM pp head Out no auto

    122 stab MM pp tail Out no auto

    123 stab MM nn head Out no auto

    124 stab MM nn tail Out no auto

    125 stab SS pp head Out no auto

    126 stab SS pp tail Out no auto

    127 stab SS nn head Out no auto

    128 stab SS nn tail Out no auto

    129 xstab LX pp head Light yes noauto

    130 xstab LX pp head Medium yes noauto

    > table(use)

    auto noauto

    145 111

    >fgl.rp = rpart(use ~ ., shuttle, cp = 0.001)

    > fgl.rp


    n= 256

    node), split, n, loss, yval, (yprob)

    * denotes terminal node

    1) root 256 111 auto (0.5664062 0.4335938)

    2) vis=no 128 0 auto (1.0000000 0.0000000) *

    3) vis=yes 128 17 noauto (0.1328125 0.8671875)

    6) error=SS 32 12 noauto (0.3750000 0.6250000)

    12) stability=stab 16 4 auto (0.7500000 0.2500000) *

    13) stability=xstab 16 0 noauto (0.0000000 1.0000000) *

    7) error=LX,MM,XL 96 5 noauto (0.0520833 0.9479167) *

    node), split, n, deviance, yval, (yprob)

    * denotes terminal node

    > plot(fgl.rp, uniform=T)

    > text(fgl.rp, use.n = T)

    # see graph attached

    # heres another way,

    >shuttle.tree = tree(use ~ ., shuttle); shuttle.tree

check that the root deviance is
$350.4 = -2[145\log(145/256) + 111\log(111/256)]$
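A quick check of this arithmetic in R (a sketch, not in the original notes):

-2 * (145*log(145/256) + 111*log(111/256))   # root deviance: about 350.4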

    1) root 256 350.400 auto ( 0.5664 0.4336 )

    2) vis:no 128 0.000 auto ( 1.0000 0.0000 ) *

    3) vis:yes 128 100.300 noauto ( 0.1328 0.8672 )

    6) stability:stab 64 74.090 noauto ( 0.2656 0.7344 )

    12) error:MM,SS 32 44.240 auto ( 0.5312 0.4688 )

    24) magn:Out 8 0.000 noauto ( 0.0000 1.0000 ) *

    25) magn:Light,Medium,Strong 24 28.970 auto ( 0.7083 0.2917 )

    50) error:MM 12 16.300 noauto ( 0.4167 0.5833 )

    100) sign:nn 6 0.000 noauto ( 0.0000 1.0000 ) *

    101) sign:pp 6 5.407 auto ( 0.8333 0.1667 ) *

    51) error:SS 12 0.000 auto ( 1.0000 0.0000 ) *

    13) error:LX,XL 32 0.000 noauto ( 0.0000 1.0000 ) *

    7) stability:xstab 64 0.000 noauto ( 0.0000 1.0000 ) *

    Classification tree:

    tree(formula = use ~ ., data = shuttle)

    Variables actually used in tree construction:

    [1] "vis" "stability" "error" "magn" "sign"

    Number of terminal nodes: 7

    Residual mean deviance: 0.02171 = 5.407 / 249

    Misclassification error rate: 0.003906 = 1 / 256

    tree.rp = rpart(use ~., shuttle)


Figure 5.1: Tree for shuttle data drawn by rpart(): the splits are on vis, error and stability, with leaves labelled auto 128/0, auto 12/4, noauto 0/16 and noauto 5/91.

    plot(tree.rp, compress=T) ; text(tree.rp, use.n=T)

    and here is the graph drawn by rpart()

  • Chapter 6

Classical Multidimensional Scaling

Let $D = (\delta_{rs})$ be our dissimilarity/distance matrix, computed from given data points $x_1, \ldots, x_n$ in $R^d$, for example by

dist(X, metric="binary")

Take $p$ given, assume $p < d$.
When does a given $D$ correspond to a configuration $y_1, \ldots, y_n$ in $R^p$, in terms of Euclidean distances?
Notation and definitions
Given $D$, define $A = (a_{rs})$, where $a_{rs} = -(1/2)\delta_{rs}^2$, and define $B = (b_{rs})$, where
$b_{rs} = a_{rs} - a_{r+} - a_{+s} + a_{++}$
where $a_{r+} = (1/n)\sum_s a_{rs}$, etc. Thus
$B = (I_n - (1/n)11^T)A(I_n - (1/n)11^T)$
as you can check.
We say that $D$ is Euclidean if there exists a $p$-dimensional configuration of points $y_1, \ldots, y_n$ for some $p$, such that $\delta_{rs} = |y_r - y_s|$ for all $r, s$.
Theorem.
Given the matrix $D$ of interpoint distances, then $D$ is Euclidean iff $B$ is positive-semidefinite.
(The proof follows Seber, 1984, p236.)
Proof
Suppose $D$ corresponds to the configuration of points $y_1, \ldots, y_n$ in $R^p$ and
$-2a_{rs} = \delta_{rs}^2 = (y_r - y_s)^T(y_r - y_s).$
Hence, as you may check,
$b_{rs} = a_{rs} - a_{r+} - a_{+s} + a_{++} = (y_r - \bar{y})^T(y_s - \bar{y}).$
Define the matrix $X$ by
$X^T = (y_1 - \bar{y} : y_2 - \bar{y} : \ldots : y_n - \bar{y}).$


Hence we see that
$B = XX^T$
and clearly $XX^T \ge 0$ (just look at $u^T XX^T u$ for any $u$). Hence $D$ Euclidean implies $B$ positive semidefinite.
b) Conversely, given $D$, with $A, B$ defined as above, suppose that $B$ is positive semidefinite, of rank $p$. Then $B$ has eigen-values say $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_p > 0$, with all remaining eigen-values $= 0$, and there exists an orthonormal matrix $V$, constructed as usual from the eigen-vectors of $B$, such that
$V^T B V = \begin{pmatrix} \Lambda & 0 \\ 0 & 0 \end{pmatrix}$
where the matrix $\Lambda$ is diagonal, with diagonal entries $\lambda_1, \ldots, \lambda_p$. Thus,
$B = V\begin{pmatrix} \Lambda & 0 \\ 0 & 0 \end{pmatrix}V^T.$
Now partition $V$ as $V = (V_1 : V_2)$ so that the columns of $V_1$ are the first $p$ eigen vectors of $B$. Then you can see that
$B = \begin{pmatrix} V_1 & V_2 \end{pmatrix}\begin{pmatrix} \Lambda & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} V_1^T \\ V_2^T \end{pmatrix}$
and hence
$B = V_1\Lambda V_1^T.$
This last equation may be rewritten as
$B = (V_1\Lambda^{1/2})(V_1\Lambda^{1/2})^T = YY^T$
say, where we have defined $Y$ as $V_1\Lambda^{1/2}$; thus $Y$ is an $n \times p$ matrix.
Define $y_1^T, \ldots, y_n^T$ as the rows of the matrix $Y$; thus $y_1, \ldots, y_n$ are vectors in $R^p$, and you may check that since $B = YY^T$, it follows that with $B = (b_{rs})$,
$b_{rs} = y_r^T y_s.$
Hence
$|y_r - y_s|^2 = y_r^T y_r - 2y_r^T y_s + y_s^T y_s = b_{rr} - 2b_{rs} + b_{ss} = a_{rr} + a_{ss} - 2a_{rs} = \delta_{rs}^2$
since $a_{rr} = a_{ss} = 0$. Thus the $y_i$ give the required configuration, and so we see that $D$ is Euclidean.
This concludes the proof. It also indicates to us how we may use, for example, the first 2 eigen vectors of $B$ to construct the best 2-dimensional configuration $y_i$, $i = 1, \ldots, n$, corresponding to a given distance matrix $D$, as in

cmdscale()


Figure 6.1: Classical Multidimensional Scaling for the students of Lent 2003 (the 16 students' names plotted at their first two coordinates)

An example plot is given as Figure 6.1.

Note the classical solution is not unique: a shift of origin and a rotation or reflection of axes does not change interpoint distances.
Observe
(i) Given $(\delta_{rs})$, our original distance/dissimilarity matrix, and $(y_1, \ldots, y_n)$ our derived points (eg forced into $R^2$) with interpoint distances $(d_{rs})$ say, then
$\sum_{r \ne s}(d_{rs} - \delta_{rs})^2\big/\sum_{r \ne s}\delta_{rs}^2 = E(d, \delta)$
is a measure of the stress involved in forcing our original $n$ points into a 2-dimensional configuration.
(ii) If $(\delta_{rs})$ has actually been derived as an exact Euclidean distance matrix from the original points $x_1, \ldots, x_n$, then

cmdscale(D, k=2)

will give us (exactly) these $n$ points, plotted by their first 2 principal component scores.
Thus if $(\delta_{rs})$ is just a Euclidean distance matrix, we do not gain any extra information by using

cmdscale()

as compared with principal components on the sample covariance matrix.


Figure 6.2: Students' Faces (Chernoff faces, one per student)

Finally, Chernoff's faces for the class of Lent 2003, or how to insult your students. Chernoff's faces represent the $n \times p$ data matrix ($p \le 15$) by $n$ faces, of different features, eg size of face, shape of face, length of nose, etc, according to the elements of the $p$ columns. Warning: appearances can be deceptive: you will get a very different picture by just changing the order of the columns.

Figures 6.1 and 6.2 were obtained from the data from the class of Lent 2003, which is given below.

    MPhil/Part III, app mult. analysis, Feb 2003

    ...........................................................................

    eggs meat coffee beer UKres Cantab Fem sports driver Left-h

    Vivienne y n y n y n y y y n

    Taeko y y y n y y y n n n

    Luitgard y n y n n y y y y n

    Alet y y y y n n y n y n

    Tom y y y y y y n y y n

    LinYee y y y n n n n y y n

    Pio y y y n n n n y n n

    LingChen y y n n n n y y n n

    HuiChin y y y n n n y y y n

    Martin y y y y y n n y y n

    Nicolas y y y y n n n y y y

    Mohammad y y y n n n n n y n

    Meg y y y n n n y y n n

    Cindy y y y y n n y y y n


    Peter y y y y n n n y y n

    Paul y y n y y y n y n n

Note, the first column turns out to be unhelpful, so you may prefer to omit it before trying, eg, dist() for use with hclust() or cmdscale().
The above data set is of course based on rather trivial questions.
By way of complete contrast, here is a data set from The Independent, Feb 13, 2002 on 'Countries with poor human rights records where firms with British links do business'. It occurs under the headline 'CORPORATE RISK: COUNTRIES WITH A BRITISH CONNECTION.'

    1 2 3 4 5 6 7 8 9 10 11 12 13 14

    SaudiArabia 1 0 0 0 0 1 0 1 0 0 1 1 0 1

    Turkey 1 0 1 0 1 1 0 0 0 1 0 1 0 1

    Russia 1 0 1 0 1 1 1 0 0 0 0 1 0 1

    China 1 1 1 0 1 1 1 0 0 0 0 1 0 1

    Philippines 1 1 1 0 0 0 0 0 1 0 0 1 1 0

    Indonesia 1 1 1 0 0 1 1 1 0 0 0 1 0 0

    India 1 0 1 0 1 0 0 1 1 0 1 1 0 0

    Nigeria 0 0 1 0 0 0 1 0 0 0 0 1 1 0

    Brazil 1 0 1 1 1 0 1 0 0 1 0 1 0 0

    Colombia 1 1 1 1 1 0 0 0 0 1 0 1 0 0

    Mexico 0 1 1 0 0 1 0 0 0 0 0 1 0 1

    Key to the questions (1 for yes, 0 for no)

    Violation types occurring in the countries listed

    1 Torture

    2 Disappearance

    3 Extra-judicial killing

    4 Hostage taking

    5 Harassment of human rights defenders

    6 Denial of freedom of assembly & association

    7 Forced labour

    8 Bonded labour

    9 Bonded child labour

    10 Forcible relocation

11 Systematic denial of women's rights

    12 Arbitrary arrest and detention

    13 Forced child labour

    14 Denial of freedom of expression

    What happens when you apply

    cmdscale()

    to obtain a 2-dimensional picture of these 11 countries?
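One way to find out (a sketch, not part of the original notes: it assumes the 0/1 table above has been saved in a file called countries.dat, with the country names in the first column and no header row) is:

rights <- read.table("countries.dat", header = FALSE, row.names = 1)  # hypothetical file of the 0/1 table
d <- dist(rights, method = "binary")       # Jaccard-type distances between countries
coords <- cmdscale(d, k = 2)               # classical MDS in 2 dimensions
plot(coords, type = "n", xlab = "", ylab = "")
text(coords, labels = row.names(rights))   # label each point with its country name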

  • Chapter 7

Applied Multivariate Analysis Exercises

by P.M.E.Altham, Statistical Laboratory, University of Cambridge. See also recent MPhil/Part III examination papers for this course.
The questions below marked with a * are harder, and represent interesting material that I probably will not have time to explain in lectures.

0. (i) Given the $p \times p$ positive-definite symmetric matrix $V$, with eigen-values $\lambda_1, \ldots, \lambda_p$ say, we may write $V$ as
$V = \sum\lambda_i u_i u_i^T,$
equivalently
$V = U\Lambda U^T$
where $U$ is a $p \times p$ orthonormal matrix, and $\Lambda$ is the $p \times p$ diagonal matrix with diagonal elements $\lambda_i$. Hence show
$\mathrm{trace}(V) = \sum\lambda_i, \quad \det(V) = \prod\lambda_i.$
0. (ii) Given $S, V$ both $p \times p$ positive-definite symmetric matrices, show that the eigen values of $V^{-1}S$ are real and positive.
0. (iii) Suppose that $Y \sim N(0, V)$ and that $A$ is a symmetric matrix, of the same dimensions as $V$. Prove that
$E(Y^T A Y) = \mathrm{tr}(AV).$
1.(i) Given a random sample of 2-dimensional vectors $x_1, \ldots, x_n$ from a bivariate normal distribution with correlation coefficient $\rho$ (and the other 4 parameters unknown) show, elegantly, that the distribution of the sample correlation coefficient depends only on $\rho, n$.
1.(ii) Given a random sample of vectors $x_1, \ldots, x_n$ from $N(\mu, V)$, show, elegantly, that the distribution of
$(\bar{X} - \mu)^T S^{-1}(\bar{X} - \mu)$



is free of μ, V (with the usual notation for X̄, S).

1. (iii) Suppose X is p-variate normal, with E(X_i) = μ for all i, var(X_i) = σ² for all i, and cov(X_i, X_j) = ρσ² for all i ≠ j. Show that this distribution is what we get if we write

X_i = Y + ε_i, for all i,

where Y, ε_1, …, ε_p are NID random variables, with

E(Y) = μ, var(Y) = v_1, E(ε_i) = 0, var(ε_i) = v_2,

where v_1, v_2 are suitable functions of σ², ρ.

Hence show that

Σ_i (X_i − X̄)² / σ²(1 − ρ)

has the chi-squared distribution with p − 1 degrees of freedom.

2. The distribution of the sample correlation coefficient r_n. You may assume that, for ρ = 0,

r_n √(n − 2) / √(1 − r_n²)  ~  t_{n−2}.

This enables us to do an exact test of H_0 : ρ = 0. Further, for large n,

r_n ≈ N(ρ, v(ρ)/n),

where

v(ρ) = (1 − ρ²)².

Show that if we define the transformation g(ρ) by

2 g(ρ) = log[(1 + ρ)/(1 − ρ)],

then, for large n,

g(r_n) ≈ N(g(ρ), 1/n),

hence having variance free of ρ. Hence show how to construct a large-sample confidence interval for ρ.
Note: g(ρ) = tanh^{-1}(ρ). This is also called Fisher's z-transformation.
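For example, a minimal sketch in R of the resulting 95% interval (the values of r and n here are purely illustrative):

r <- 0.56; n <- 88                 # illustrative sample correlation and sample size
z <- atanh(r)                      # g(r) = tanh^{-1}(r), Fisher's z
half <- qnorm(0.975)/sqrt(n)       # using var(g(r_n)) = 1/n, as above
tanh(c(z - half, z + half))        # back-transform to an interval for rho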

3. Let y_1, …, y_n be a random sample from the p-variate normal distribution N(μ, V). Construct the likelihood ratio test of the null hypothesis

H_0 : V is a diagonal matrix.

Answer: the likelihood-ratio criterion is: reject H_0 if log det(R) < constant, where R is the sample correlation matrix.

4. Suppose Y_ν = U + Z_ν for ν = 1, …, p, where

U ~ N(0, v_0) and Z_1, …, Z_p ~ N(0, v_1),

and U, Z_1, …, Z_p are all independent. Show that the covariance matrix of the vector Y is V, say, where

V = σ²((1 − ρ)I + ρ 11^T)


for suitably defined σ², ρ. Write

V^{-1} = aI + b 11^T.

Find the eigenvalues of V^{-1} in terms of a, b.

5. Given a random sample y_1, …, y_n from N(μ, V), with V as in question 4, write down the log-likelihood as a function of μ, a and b, and maximise it with respect to these parameters. Hence write down the mle of μ.

6. Whittaker, Graphical Methods in Applied Multivariate Statistics (1990), discusses a dataset of five different mathematics exams for 88 students, for which the sample covariance matrix is

    mech 302.29

    vect 125.78 170.88

    alg 100.43 84.19 111.60

    anal 105.07 93.60 110.84 217.88

    stat 116.07 97.89 120.49 153.77 294.37

Use R to compute the corresponding sample correlation matrix, and discuss. By inverting the covariance matrix given above, show that

var(mech | remaining 4 variables) = 0.62 var(mech).

Can you interpret this in terms of prediction? Show also that

cor(stat, anal | remaining 3 variables) = 0.25.

Note added March 2009: if we use R, we can make use of G. Marchetti's ggm library to do the hard work for us, as follows

    > library(ggm) # for Marchettis library

    > data(marks)

    > S = var(marks) # here is the sample covariance matrix

    > round(S,2)

    mechanics vectors algebra analysis statistics

    mechanics 305.69 127.04 101.47 106.32 117.49

    vectors 127.04 172.84 85.16 94.67 99.01

    algebra 101.47 85.16 112.89 112.11 121.87

    analysis 106.32 94.67 112.11 220.38 155.54

    statistics 117.49 99.01 121.87 155.54 297.76

    # and here is the sample partial correlation matrix

    > round(parcor(S),2)

    mechanics vectors algebra analysis statistics

    mechanics 1.00 0.33 0.23 0.00 0.03

    vectors 0.33 1.00 0.28 0.08 0.02

    algebra 0.23 0.28 1.00 0.43 0.36

    analysis 0.00 0.08 0.43 1.00 0.25

    statistics 0.03 0.02 0.36 0.25 1.00
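The conditional variance asked for above can also be read off directly from the inverse covariance (precision) matrix, since var(mechanics | remaining 4 variables) = 1/(S^{-1})_{11}; a minimal sketch, continuing the session above:

P <- solve(S)                        # the precision (inverse covariance) matrix
1/P[1, 1]                            # var(mechanics | remaining 4 variables)
(1/P[1, 1])/S[1, 1]                  # ratio to var(mechanics); compare with the 0.62 quoted above
-P[5, 4]/sqrt(P[5, 5] * P[4, 4])     # cor(stat, anal | rest); compare with the parcor(S) output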


7. Simulate from the bivariate normal to verify Sheppard's formula

Pr(X_1 > 0, X_2 > 0) = 1/4 + (1/2π) sin^{-1}(ρ),

where (X_1, X_2) have the bivariate normal distribution with E(X_i) = 0, var(X_i) = 1 and cor(X_1, X_2) = ρ.
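A minimal simulation sketch, using mvrnorm() from MASS, with ρ = 0.5 as an illustrative value:

library(MASS)
rho <- 0.5
V <- matrix(c(1, rho, rho, 1), 2, 2)
x <- mvrnorm(n = 100000, mu = c(0, 0), Sigma = V)   # 100000 bivariate normal draws
mean(x[, 1] > 0 & x[, 2] > 0)                       # simulated Pr(X1 > 0, X2 > 0)
0.25 + asin(rho)/(2*pi)                             # Sheppard's formula, for comparison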

8. * Canonical correlations. Suppose X, Y are vectors of dimension p, q respectively, and Z^T = (X^T, Y^T) has mean vector 0 and covariance matrix V. We consider the problem: choose a, b to maximise the correlation between a^T X and b^T Y, equivalently, to maximise

ρ = a^T V_{12} b / √(a^T V_{11} a · b^T V_{22} b),

where V has been partitioned in the obvious way. This is equivalent to maximising a^T V_{12} b subject to a^T V_{11} a = 1 and b^T V_{22} b = 1. By considering the corresponding Lagrangian, show that a is a solution of the equation

V_{12} V_{22}^{-1} V_{21} a − λ V_{11} a = 0.

We could use this technique to relate one set of variables to another set of variables. Or, as in classification, we could consider one set of variables as defining membership or otherwise of each of g groups, and the other set as our original observations, e.g. length, height, weight etc. Then our problem is to see what linear function of the original variables best discriminates into groups. This is one way of deriving standard discriminant analysis, nowadays seen as part of the pattern-recognition set of techniques.

9. An example of a classification tree (taken from Venables and Ripley, 1999, p. 321). Fisher's Iris data-set consists of 50 observations on each of 3 species, here labelled s, c and v. For each of the 150 flowers, the Petal Length, Petal Width, Sepal Length and Sepal Width are measured. Can we predict the Species from these 4 measurements? Experiment with the following:

    ird
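A minimal sketch in the same spirit as the (truncated) snippet above, using R's built-in iris data frame and the rpart library (this choice of library is an assumption; Venables and Ripley's own code may differ), is

library(rpart)
ir.rp <- rpart(Species ~ ., data = iris, method = "class")   # classification tree
print(ir.rp)
plot(ir.rp); text(ir.rp)                                     # draw the tree
table(predicted = predict(ir.rp, type = "class"),
      true = iris$Species)                                   # training-set confusion matrix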


For simplicity we also remove the intercept from the problem; thus assume that y^T 1 = 0 and that the matrix X has been centered and scaled, so that, for each j, the vector (x_{ij}) has mean 0 and variance 1. Now choose β to minimise (y − Xβ)^T (y − Xβ) subject to Σ_j β_j² ≤ s: you should be able to show that this gives

β̂_ridge = (X^T X + λI)^{-1} X^T y,

where λ corresponds to the Lagrange multiplier. Thus λ (≥ 0) is a complexity parameter that controls the amount of shrinkage. Assume further that the n × p matrix X has singular value decomposition

X = U D V^T,

where U, V are n × p and p × p orthogonal matrices, with the columns of U spanning the column space of X and the columns of V spanning its row space, and D is a p × p diagonal matrix with diagonal entries d_1 ≥ d_2 ≥ … ≥ d_p ≥ 0. Show that, for the ordinary LS estimator,

Ŷ = X β̂_ls = U U^T y,

while for the ridge regression estimator,

Ŷ_ridge = X β̂_ridge = Σ_j u_j [d_j² / (d_j² + λ)] u_j^T y,

where the u_j are the columns of U.

Note that we can use, say, lm() to obtain the ridge regression estimator for y = Xβ + ε by doing ordinary least-squares regression with the vector y replaced by the augmented vector

(y^T, 0_p^T)^T

(that is, y with p zeros appended) and the matrix X replaced by the augmented matrix

(X^T, √λ I_p)^T

(that is, X with the p × p matrix √λ I_p stacked below it).
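A minimal sketch of this device (the simulated data and the value of lambda are purely illustrative):

set.seed(1)
n <- 50; p <- 5; lambda <- 2                 # illustrative sizes and penalty
X <- scale(matrix(rnorm(n * p), n, p))       # centred and scaled design matrix
y <- drop(scale(rnorm(n), scale = FALSE))    # centred response
ystar <- c(y, rep(0, p))                     # y with p zeros appended
Xstar <- rbind(X, sqrt(lambda) * diag(p))    # X with sqrt(lambda) I_p stacked below
coef(lm(ystar ~ Xstar - 1))                  # ridge estimate via ordinary lm(), no intercept
drop(solve(t(X) %*% X + lambda * diag(p), t(X) %*% y))  # explicit formula, for comparison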

Choice of the right λ for a given y, X is an art: Hastie et al. use the technique of cross-validation; see their book, or Venables and Ripley, for a description of cross-validation. Interestingly, 'Penalised Discriminant Methods for the Classification of Tumors from Gene Expression Data' by D. Ghosh, Biometrics 2003, p. 992, discusses ridge regression, and other methods, in the context of gene expression data, where the ith row of the matrix X corresponds to the gene expression profile for the ith sample, and we have G possible tumours, with g_i as the known tumour class of the ith sample: thus g_i takes possible values 1, …, G.

11. (new for 2006) Many of the techniques used in Multivariate Analysis can be seen as examples of constrained optimization, using appropriate matrix algebra. Here is another such, which goes under the name of correspondence analysis, for which the R function


    library(MASS)

    corresp()

is written. Consider the R × C contingency table with known probabilities (p_{ij}); thus in matrix notation u^T p v = 1, where u, v are unit vectors of dimension R, C respectively. We wish to choose scores (x_i), (y_j) such that the random variables X, Y have joint frequency function given by

P(X = x_i, Y = y_j) = p_{ij}

for all i, j, and (x_i), (y_j) maximise the covariance between X, Y subject to the following constraints:

E(X) = 0 = E(Y), var(X) = 1 = var(Y).

    Show that this problem reduces to that of solving the 2 sets of equations

E(Y | X = x_i) = ρ x_i, for all i,

E(X | Y = y_j) = ρ y_j, for all j.

In essence this means we focus on the singular value decomposition of the R × C matrix (p_{ij}/p_{i+}p_{+j}). The largest eigenvalue of this matrix is always 1 (can you see why?) and the next largest eigenvalue is ρ, our desired correlation coefficient. The reason why I had another look at this technique was its use in the article 'Food buying habits of people who buy wine or beer: cross-sectional study' by Johansen et al, British Medical Journal, March 4, 2006. The authors use correspondence analysis to give a 2-dimensional representation (a biplot) of the data from the 40 × 4 contingency table obtained from the 40 categories of food types and the 4 categories of alcohol buyers. For example, a person who buys both wine (but not beer) and olives contributes to the (olives, wine but not beer) cell. The resulting graph is constructed from the approximation

p_{ij} = p_{i+} p_{+j} (1 + ρ_1 x_{i1} y_{j1} + ρ_2 x_{i2} y_{j2})

    and you can see it for yourself at http://bmj.bmjjournals.com
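A minimal sketch of corresp() in action on a small illustrative table (the counts below are invented for illustration, not the BMJ data):

library(MASS)
tab <- matrix(c(12, 5,  8, 3,
                 7, 9,  4, 6,
                 2, 8, 10, 5), nrow = 3, byrow = TRUE,
              dimnames = list(c("r1", "r2", "r3"), c("c1", "c2", "c3", "c4")))
fit <- corresp(tab, nf = 2)   # first two pairs of row and column scores
fit$cor                       # the correlations rho_1, rho_2
biplot(fit)                   # the 2-dimensional biplot representation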
