Transcript of "Clustering and Testing in High-Dimensional Data", M. Radavičius, G. Jakimauskas, J. Sušinskas (Institute of Mathematics and Informatics, Vilnius, Lithuania)

Page 1:

Clustering and Testing in High-Dimensional Data

M. Radavičius, G. Jakimauskas, J. Sušinskas (Institute of Mathematics and Informatics, Vilnius, Lithuania)

Page 2:

The problem

Let X = X^N be a sample of size N assumed to follow a d-dimensional Gaussian mixture model (d is supposed to be large).

Because of the large dimension it is natural to project the sample onto k-dimensional (k = 1, 2, …) linear subspaces using the projection pursuit method (Huber (1985), Friedman (1987)), which gives the best selection of these subspaces. If the distribution of the standardized sample on the complement of a subspace H is standard Gaussian, then H is called a discriminant subspace. E.g., if we have q Gaussian mixture components with equal covariance matrices, then the dimension of the discriminant subspace is q − 1.

Having an estimate of the discriminant subspace, we can perform classification much more easily using the projected sample.

Page 3:

The sequential procedure applied to the standardized sample is the following (for k = 1, 2, …, until the hypothesis of the discriminant subspace holds for some k):

1. Find the best k-dimensional linear subspace using the projection pursuit method (Rudzkis and Radavičius (1999)).

2. Fit a Gaussian mixture model to the sample projected onto the k-dimensional linear subspace (Rudzkis and Radavičius (1995)).

3. Test the goodness-of-fit of the estimated d-dimensional model, assuming that the distribution on the complement space is standard Gaussian. If the test fails, increase k and go to step 1.

The problem in step 1 is to find the basis vectors in a high-dimensional space (we do not cover this problem here). The problem in step 3 (in the common approach) is comparing a nonparametric density estimate with a parametric one in a high-dimensional space. A sketch of the overall loop follows.
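The loop can be sketched as follows, assuming the standardized sample is a NumPy array. Every helper name here is a hypothetical stand-in: the subspace search uses plain principal components instead of the projection pursuit method of Rudzkis and Radavičius (1999), step 2 is left as a comment (see the EM sketch below), and the null test is a crude kurtosis check rather than the partition-based test developed later.

```python
# A minimal sketch of the sequential procedure (steps 1-3), with stand-ins.
import numpy as np

def principal_subspace(X, k):
    """Stand-in for step 1: top-k principal directions of the standardized sample."""
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[:k].T                                  # d x k basis matrix

def passes_null_test(X, V):
    """Stand-in for step 3: crude Gaussianity check on the complement space,
    comparing the excess kurtosis of the complement coordinates with 0."""
    U, _, _ = np.linalg.svd(V, full_matrices=True)
    B = U[:, V.shape[1]:]                            # basis of the complement of span(V)
    R = (X - X.mean(axis=0)) @ B
    z = (R / R.std(axis=0)).ravel()
    kurt = np.mean(z ** 4) - 3.0                     # 0 for a Gaussian
    return abs(kurt) < 2.0 * np.sqrt(24.0 / z.size)  # ~2-sigma band under H

def sequential_dimension(X, k_max):
    for k in range(1, k_max + 1):
        V = principal_subspace(X, k)                 # step 1 (stand-in)
        # step 2: fit a Gaussian mixture to the projected sample X @ V
        if passes_null_test(X, V):                   # step 3
            return k, V
    return k_max, V
```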

Page 4:

We present a simple, data-driven and computationally efficient procedure for testing goodness-of-fit. The procedure is based on the well-known interpretation of goodness-of-fit testing as a classification problem, a special sequential data-partition procedure, randomization and resampling, and elements of sequential testing. Monte-Carlo simulations are used to assess the performance of the procedure.

This procedure can also be applied to testing the independence of components in high-dimensional data.

We present some preliminary computer simulation results.

Page 5:

Introduction

Let

$$X = Y_\nu, \qquad \mathbf{P}\{\nu = i\} = p_i, \qquad Y_i \sim N_d(M_i, R_i), \quad i = 1, 2, \dots, q,$$

so that

$$X \sim f(x) = \sum_{i=1}^{q} p_i f_i(x),$$

where f_i is the density of N_d(M_i, R_i).

Consider the general classification problem of estimating the a posteriori probabilities

$$\pi(i, x) = \mathbf{P}\{\nu = i \mid X = x\}$$

from the sample

$$X^N = \{X_1, X_2, \dots, X_N\}.$$
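As a concrete illustration, here is a small sampler for this model; the parameter values in the usage lines are arbitrary.

```python
# Sampler for the mixture model above: X = Y_nu, P{nu = i} = p_i, Y_i ~ N_d(M_i, R_i).
import numpy as np

def sample_mixture(N, p, means, covs, seed=0):
    rng = np.random.default_rng(seed)
    nu = rng.choice(len(p), size=N, p=p)             # latent component labels
    X = np.array([rng.multivariate_normal(means[i], covs[i]) for i in nu])
    return X, nu

# e.g. q = 2 components in d = 3 dimensions:
X, nu = sample_mixture(1000, p=[0.4, 0.6],
                       means=[np.zeros(3), 2.0 * np.ones(3)],
                       covs=[np.eye(3), np.eye(3)])
```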

Page 6:

Under these assumptions we have

$$\pi(i, x) = \frac{p_i f_i(x)}{f(x)}.$$

Usually the EM algorithm is used to estimate the a posteriori probabilities. Denote

$$\pi_N = \{\pi(i, X),\ i = 1, 2, \dots, q,\ X \in X^N\};$$

then the EM algorithm is the following iterative procedure:

$$\cdots \to \hat{\pi}_N \to \hat{\theta} \to \hat{\pi}_N \to \hat{\theta} \to \cdots$$

The EM algorithm converges to some local maximum of the likelihood function

Page 7:

which is usually not equal to the global maximum:

$$l(\theta) = \sum_{j=1}^{N} \log f(X_j, \theta), \qquad \theta^* = \arg\max_{\theta}\, l(\theta).$$
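A compact textbook EM iteration for a q-component Gaussian mixture is sketched below to make the alternation π̂_N → θ̂ → π̂_N → … concrete; it is not the specific estimator of Rudzkis and Radavičius (1995), and the initialization and regularization choices are ours.

```python
# EM for a q-component Gaussian mixture: alternates the E step (a posteriori
# probabilities pi(i, X_j)) and the M step (weights, means, covariances).
import numpy as np

def em_gmm(X, q, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    N, d = X.shape
    mu = X[rng.choice(N, q, replace=False)]               # initial means M_i
    cov = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * q)  # initial covariances R_i
    p = np.full(q, 1.0 / q)                               # initial weights p_i
    for _ in range(n_iter):
        # E step: pi(i, X_j) = p_i f_i(X_j) / f(X_j), computed in log scale
        logf = np.empty((N, q))
        for i in range(q):
            diff = X - mu[i]
            _, logdet = np.linalg.slogdet(cov[i])
            maha = np.sum(diff * np.linalg.solve(cov[i], diff.T).T, axis=1)
            logf[:, i] = np.log(p[i]) - 0.5 * (d * np.log(2 * np.pi) + logdet + maha)
        w = np.exp(logf - logf.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)                 # posteriors pi_hat, N x q
        # M step: update theta_hat = (p_i, M_i, R_i)
        nk = w.sum(axis=0)
        p = nk / N
        mu = (w.T @ X) / nk[:, None]
        for i in range(q):
            diff = X - mu[i]
            cov[i] = (w[:, i, None] * diff).T @ diff / nk[i] + 1e-6 * np.eye(d)
    return p, mu, cov, w
```

Each pass maps the current parameter estimate θ̂ to posteriors π̂_N and back, and l(θ̂) is non-decreasing along the iterations; restarting from several seeds is the usual guard against poor local maxima.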

Suppose that for some subspace

$$H = \operatorname{span}(v_1, v_2, \dots, v_k), \qquad k < d,$$

the following equality holds:

$$\mathbf{P}\{\nu = i \mid X = x\} = \mathbf{P}\{\nu = i \mid X_H = x_H\},$$

where

Page 8:

$$\langle u, h \rangle_V = u^{T} V h, \qquad V = \operatorname{cov}(X, X),$$

and x_H denotes the projection of x onto H with respect to this inner product. If, in addition, the subspace H has the minimal dimension, then it is called the discriminant subspace. We lose no information on the a posteriori probabilities when we project the sample onto the discriminant subspace.

We can get an estimate of the discriminant subspace,

$$\hat H = \operatorname{span}(\hat v_1, \hat v_2, \dots, \hat v_k), \qquad k < d,$$

using a projection pursuit procedure (see, e.g., J. H. Friedman (1987), S. A. Aivazyan (1996), R. Rudzkis and M. Radavičius (1998)).

Page 9:

Test statistics

Let X = {X(1), X(2), …, X(N)} be a sample of size N of i.i.d. random vectors with a common distribution function F on R^d. Let $\mathcal{F}_H$ and $\mathcal{F}_A$ be two disjoint classes of d-dimensional distributions. Consider a nonparametric hypothesis testing problem:

$$H: F \in \mathcal{F}_H \qquad \text{vs.} \qquad A: F \in \mathcal{F}_A.$$

Let $\mathcal{F} = \mathcal{F}_H \cup \mathcal{F}_A$. Consider a mixture model

$$F_p = (1 - p)\,F_H + p\,F, \qquad p \in (0, 1),$$

Page 10:

of two populations, H and A, with d.f. F_H and F, respectively. Fix p and let Y = Y(p) denote a random vector with the mixture distribution F_p. Let Z = Z(p) be the posterior probability of the population A given Y, i.e.

$$Z = \mathbf{P}\{A \mid Y\} = \frac{p\,f(Y)}{(1 - p)\,f_H(Y) + p\,f(Y)}.$$

Here f and f_H denote the distribution densities of F and F_H, respectively. Let us introduce the loss function $l(F, F_H) = \mathbf{E}(Z - p)^2$.
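The quantities Z(p) and l(F, F_H) translate directly into code; the sketch below estimates the loss by Monte Carlo in the univariate case, with f, f_H assumed to be vectorized density callables and sample_F, sample_FH hypothetical samplers for F and F_H.

```python
# Z = p f(Y) / ((1 - p) f_H(Y) + p f(Y)), and l(F, F_H) = E(Z - p)^2 under F_p.
import numpy as np

def posterior_Z(y, p, f, f_H):
    return p * f(y) / ((1.0 - p) * f_H(y) + p * f(y))

def loss(p, f, f_H, sample_F, sample_FH, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    from_F = rng.random(n) < p                        # mixture component indicator
    y = np.where(from_F, sample_F(n, rng), sample_FH(n, rng))
    return np.mean((posterior_Z(y, p, f, f_H) - p) ** 2)
```

Note that under H (when F = F_H) we have Z ≡ p, so the loss vanishes; departures of Z from p are what the test statistics below are built to detect.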

Page 11:

Let $\mathcal{P} = \{P_k,\ k = 0, 1, \dots, K\}$, with $P_0 = \{\mathbf{R}^d\}$, be a sequence of partitions of R^d, possibly dependent on Y, and let

$$\{\mathcal{F}_k,\ k = 0, 1, \dots, K\}$$

be the corresponding sequence of σ-algebras generated by these partitions.

A computationally efficient choice of $\mathcal{P}$ is the sequential dyadic coordinate-wise partition minimizing at each step the mean square error.

Let X^(H) = {X^(H)(1), X^(H)(2), …, X^(H)(M)} be a sample of size M of i.i.d. vectors from the population H. It is also supposed that X^(H) is independent of X. Set

$$Y = X \cup X^{(H)}.$$

Page 12:

In view of the definition of the loss function, a natural choice of the test statistic would be a χ²-type statistic

$$T_k = N\,\mathbf{E}_{MN}(Z_k - p)^2, \qquad \text{where } Z_k = \mathbf{E}_{MN}[Z \mid \mathcal{F}_k],$$

for some k ∈ {1, 2, …, K}, which can be treated as a smoothing parameter. Here $\mathbf{E}_{MN}$ stands for the expectation with respect to the empirical distribution $\hat F$ of Y.

However, since the optimal value of k is unknown, we prefer the following definition of the test statistic:

$$T = \max_{1 \le k \le K} \frac{T_k - a_k}{b_k},$$

where a_k and b_k are centering and scaling parameters to be specified.

Page 13:

We have selected the following test statistics:

$$T_k = \sqrt{\frac{k + 1}{2}}\,\bigl(S_k - 1\bigr), \qquad k = 1, 2, \dots, K,$$

where

$$S_k = \frac{1}{2N} \sum_{j} \bigl(a_{kj}(1) - a_{kj}(2)\bigr)^2;$$

here a_{kj}(1) and a_{kj}(2) are the numbers of elements of the sample X (resp. of the sample X^(H)) in the jth area of the kth partition P_k.
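Given the nested partitions P_1, …, P_K (see the partitioning sketch on the next page), the statistics can be computed as below. The bookkeeping names are ours, and the standardization √((k + 1)/2)(S_k − 1) mirrors the reconstruction above, so it should be read as an assumption rather than the authors' exact choice.

```python
# S_k = (1 / 2N) * sum_j (a_kj(1) - a_kj(2))^2 over the cells of P_k, and the
# standardized T_k for k = 1, ..., K. level_cells[k] lists the cells of P_k as
# integer index arrays into the pooled sample Y; z[i] = 1 if pooled point i
# comes from X, 0 if it comes from X^(H); N = M is assumed.
import numpy as np

def T_statistics(level_cells, z, N):
    z = np.asarray(z)
    T = np.empty(len(level_cells) - 1)
    for k in range(1, len(level_cells)):
        a1 = np.array([z[idx].sum() for idx in level_cells[k]])             # from X
        a2 = np.array([len(idx) - z[idx].sum() for idx in level_cells[k]])  # from X^(H)
        S_k = ((a1 - a2) ** 2).sum() / (2.0 * N)
        T[k - 1] = np.sqrt((k + 1) / 2.0) * (S_k - 1.0)
    return T    # the final statistic is the maximum of these values over k
```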

Page 14:

Illustration of the sequential dyadic partitioning procedure

Here we have an example (at some step) of the sequential partitioning procedure with two samples of two-dimensional data. The next split is selected over all current cells and all divisions along each dimension (in this case d = 2) so as to achieve the minimum mean square error of grouping. A sketch of the splitting rule follows.
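The sketch below follows our reading that the "mean square error of grouping" is the within-cell sum of squares of the population labels (z = 1 for points of X, z = 0 for points of X^(H)); each step splits one current cell at its midpoint along one coordinate, choosing the split with the largest error reduction.

```python
# Sequential dyadic coordinate-wise partitioning of the pooled sample.
import numpy as np

def sse(z):
    """Within-cell sum of squares of the labels; 0 for an empty cell."""
    return float(((z - z.mean()) ** 2).sum()) if z.size else 0.0

def dyadic_partition(Y, z, K):
    """Y: (n, d) pooled sample, z: 0/1 labels, K: number of splits.
    Returns cells as (lower bounds, upper bounds, member indices)."""
    lo, hi = Y.min(axis=0), Y.max(axis=0)
    cells = [(lo.copy(), hi.copy(), np.arange(len(Y)))]
    for _ in range(K):
        best = None
        for c, (clo, chi, idx) in enumerate(cells):
            for dim in range(Y.shape[1]):
                mid = 0.5 * (clo[dim] + chi[dim])          # dyadic midpoint
                left = idx[Y[idx, dim] <= mid]
                right = idx[Y[idx, dim] > mid]
                gain = sse(z[idx]) - sse(z[left]) - sse(z[right])
                if best is None or gain > best[0]:
                    best = (gain, c, dim, mid, left, right)
        _, c, dim, mid, left, right = best
        clo, chi, _ = cells.pop(c)
        lhi, rlo = chi.copy(), clo.copy()
        lhi[dim], rlo[dim] = mid, mid                      # halve cell c along dim
        cells += [(clo, lhi, left), (rlo, chi, right)]
    return cells
```

Recording the index sets after each of the K splits yields the nested partitions P_1, …, P_K consumed by the test-statistic sketch above.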

Page 15:

Preliminary simulation results

The computer simulations were performed by the Monte-Carlo method (typically 100 independent simulations). The sample sizes of X and X^(H) were chosen to be equal (typically N = M = 1000).

The first problem is to evaluate, by computer simulation, the test statistics T_k in the case when the hypothesis H holds. The centering and scaling parameters of the test statistics were selected in such a way that the distribution of the test statistic is approximately standard Gaussian for each k not very close to 1 or K.
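If the centering and scaling are taken from simulation rather than from an analytic formula, the calibration just described reduces to estimating the null mean and standard deviation of each T_k; a sketch, with a hypothetical simulate_null_Tk supplied by the user:

```python
# Estimate centering a_k and scaling b_k from R independent null replications,
# so that (T_k - a_k) / b_k is approximately N(0, 1) under the hypothesis H.
import numpy as np

def calibrate(simulate_null_Tk, R=100):
    """simulate_null_Tk() -> 1-d array of raw T_k values, one null replication."""
    sims = np.stack([simulate_null_Tk() for _ in range(R)])
    return sims.mean(axis=0), sims.std(axis=0)     # a_k, b_k for each k
```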

The computer simulation results show that, for a very wide range of dimensions, sample sizes and distributions, the behaviour of the test statistics when the hypothesis H holds is very similar.

Page 16:

Fig. 1. Behaviour of Tk when the hypothesis holds

Here the sample size is N = 1000, the dimension d = 100, and we have two samples from the d-dimensional standard Gaussian distribution. Shown are the maxima and minima over 100 realizations, and the corresponding maxima and minima after excluding the 5 per cent largest values at each point.

[Plot omitted: y-axis -4 to 4, x-axis 0 to 2000.]

Page 17:

Fig. 2. Behaviour of Tk when the hypothesis does not hold

Here the sample size is N = 1000, the dimension d = 10, q = 3, and a Gaussian mixture with means (-4, -3, 0, 0, 0, …), (0, 6, 0, 0, 0, …), (4, -3, 0, 0, 0, …). The sample is projected onto a one-dimensional subspace. This is an extremely ill-fitting situation.

[Plot omitted: y-axis -10 to 80, x-axis 0 to 500.]

Page 18:

Fig. 3. Behaviour of Tk (control data)

This is a control example for the data in Fig. 2, assuming that we project the data onto the true two-dimensional discriminant subspace.

[Plot omitted: y-axis -4 to 6, x-axis 0 to 500.]

Page 19:

Fig. 4. Behaviour of Tk when the hypothesis does not hold

Here the sample size is N = 1000, the dimension d = 10, q = 3, and a Gaussian mixture with means (-4, -1, 0, 0, 0, …), (0, 2, 0, 0, 0, …), (4, -1, 0, 0, 0, …). The sample is projected onto a one-dimensional subspace.

[Plot omitted: y-axis -5 to 20, x-axis 0 to 500.]

Page 20:

Fig. 5. Behaviour of Tk (control data)

This is a control example for the data in Fig. 4, assuming that we project the data onto the true two-dimensional discriminant subspace.

[Plot omitted: y-axis -4 to 8, x-axis 0 to 500.]

Page 21:

Fig. 6. Behaviour of Tk when the hypothesis does not hold

Here the sample size is N = 1000, the dimension d = 10, q = 3, and a Gaussian mixture with means (-4, -0.5, 0, 0, 0, …), (0, 1, 0, 0, 0, …), (4, -0.5, 0, 0, 0, …). The sample is projected onto a one-dimensional subspace.

[Plot omitted: y-axis -4 to 12, x-axis 0 to 500.]

Page 22:

Fig. 7. Behaviour of Tk (control data)

This is a control example for the data in Fig. 6, assuming that we project the data onto the true two-dimensional discriminant subspace.

[Plot omitted: y-axis -4 to 8, x-axis 0 to 500.]

Page 23:

Fig. 8. Behaviour of Tk when the hypothesis does not hold

Here the sample size is N = 1000, the dimension d = 20, and the standard Cauchy distribution. The sample X^(H) is simulated with two independent blocks of components, of sizes d_1 = d/2 and d_2 = d/2.

[Plot omitted: y-axis -2 to 8, x-axis 0 to 1200.]

Page 24:

Fig. 9. Behaviour of Tk (control data)

This is a control example for the data in Fig. 8, assuming that the sample X^(H) is simulated with the same distribution as the sample X.

[Plot omitted: y-axis -2 to 8, x-axis 0 to 1200.]

Page 25:

Fig. 10. Behaviour of Tk when the hypothesis does not hold

Here the sample size is N = 1000, the dimension d = 10, and the Student distribution with 3 degrees of freedom. The numbers of independent components are d_1 = 1, d_2 = d − 1.

[Plot omitted: y-axis -4 to 8, x-axis 0 to 1200.]

Page 26:

Fig. 11. Behaviour of Tk (control data)

This is a control example for the data in Fig. 10, assuming that the sample X^(H) is simulated with the same distribution as the sample X.

[Plot omitted: y-axis -4 to 8, x-axis 0 to 1200.]

Page 27:

end.