Transcript of: Clustering and Object Similarity Evaluation

Slide 1

Clustering and Object Similarity Evaluation

©Jiawei Han and Micheline Kamber, with Additions and Modifications by Ch. Eick

Organization for COSC 6340:
1. What is Clustering?
2. Object Similarity Measurement
3. K-Means Clustering Algorithm
4. Hierarchical Clustering


Slide 3

Examples of Clustering Applications

Marketing: Help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs

Land use: Identification of areas of similar land use in an earth observation database

Insurance: Identifying groups of motor insurance policy holders with a high average claim cost

City-planning: Identifying groups of houses according to their house type, value, and geographical location

Earthquake studies: Observed earthquake epicenters should be clustered along continental faults


Slide 4

Requirements of Clustering in Data Mining

Scalability
Ability to deal with different types of attributes
Discovery of clusters with arbitrary shape
Minimal requirements for domain knowledge to determine input parameters
Ability to deal with noise and outliers
Insensitivity to the order of input records
High dimensionality
Incorporation of user-specified constraints
Interpretability and usability


Slide 5

Data Structures for Clustering

Data matrix (n objects, p attributes):

\[
\begin{bmatrix}
x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\
\vdots &        & \vdots &        & \vdots \\
x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\
\vdots &        & \vdots &        & \vdots \\
x_{n1} & \cdots & x_{nf} & \cdots & x_{np}
\end{bmatrix}
\]

(Dis)similarity matrix (n x n), where d(i,j) is the distance between objects i and j:

\[
\begin{bmatrix}
0 \\
d(2,1) & 0 \\
d(3,1) & d(3,2) & 0 \\
\vdots & \vdots & \vdots & \ddots \\
d(n,1) & d(n,2) & \cdots & \cdots & 0
\end{bmatrix}
\]


Slide 6

Quality Evaluation of Clusters

Dissimilarity/similarity metric: similarity is expressed in terms of a normalized distance function d, which is typically a metric; typically: sim(o_i, o_j) = 1 - d(o_i, o_j).

There is a separate “quality” function that measures the “goodness” of a cluster.

The definitions of similarity functions are usually very different for interval-scaled, boolean, categorical, ordinal and ratio variables.

Weights should be associated with different variables based on applications and data semantics.

It is hard to define “similar enough” or “good enough”; the answer is typically highly subjective.


Slide 7

Challenges in Obtaining Object Similarity Measures

Many types of variables:
Interval-scaled variables
Binary variables and nominal variables
Ordinal variables
Ratio-scaled variables

Objects are characterized by variables belonging to different types (mixture of variables)


Slide 8

Case Study: Patient Similarity

The following relation is given (with 10000 tuples):
Patient(ssn, weight, height, cancer-sev, eye-color, age)

Attribute domains:
ssn: 9 digits
weight: between 30 and 650; m_weight = 158, s_weight = 24.20
height: between 0.30 and 2.20 meters; m_height = 1.52, s_height = 19.2
cancer-sev: 4=serious, 3=quite_serious, 2=medium, 1=minor
eye-color: {brown, blue, green, grey}
age: between 3 and 100; m_age = 45, s_age = 13.2

Task: Define Patient Similarity


Slide 9

Generating a Global Similarity Measure from Single-Variable Similarity Measures

Assumption: A database may contain up to six types of variables: symmetric binary, asymmetric binary, nominal, ordinal, interval and ratio.

1. Standardize each variable, associate a similarity measure sim_f with the standardized f-th variable, and determine the weight w_f of the f-th variable.

2. Create the following global (dis)similarity measure:

\[
\operatorname{sim}(o_i, o_j) = \frac{\sum_{f=1}^{p} w_f \cdot \operatorname{sim}_f(o_i, o_j)}{\sum_{f=1}^{p} w_f}
\]
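A minimal Python sketch of this weighted combination; the per-variable similarity functions sim_f and the weights w_f are assumed to be supplied by the caller:

```python
def global_similarity(o1, o2, sims, weights):
    """Weighted global similarity: sum_f w_f * sim_f(o1[f], o2[f]) / sum_f w_f.

    sims[f] is a per-variable similarity function returning a value in [0, 1];
    weights[f] is the weight w_f of the f-th variable."""
    num = sum(w * sim(x, y) for sim, w, x, y in zip(sims, weights, o1, o2))
    return num / sum(weights)
```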


Slide 10

A Methodology to Obtain a Similarity Matrix

1. Understand the variables
2. Remove (non-relevant and redundant) variables
3. (Standardize and) normalize variables (typically using z-scores, or by transforming variable values to numbers in [0,1])
4. Associate a (dis)similarity measure d_f / sim_f with each variable
5. Associate a weight (measuring its importance) with each variable
6. Compute the (dis)similarity matrix
7. Apply a similarity-based data mining technique (e.g. clustering, nearest neighbor, multi-dimensional scaling, …)


Slide 11

Interval-Scaled Variables

Standardize data using z-scores:

Calculate the mean of the f-th variable:

\[ m_f = \tfrac{1}{n}\,(x_{1f} + x_{2f} + \cdots + x_{nf}) \]

Calculate the mean absolute deviation:

\[ s_f = \tfrac{1}{n}\,(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|) \]

Calculate the standardized measurement (z-score):

\[ z_{if} = \frac{x_{if} - m_f}{s_f} \]

Using the mean absolute deviation is more robust than using the standard deviation.
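A small numpy sketch of this standardization, assuming each column of X holds one interval-scaled variable:

```python
import numpy as np

def standardize(X):
    """Z-score each column of X, using the mean absolute deviation s_f
    rather than the standard deviation (more robust to outliers)."""
    X = np.asarray(X, dtype=float)
    m = X.mean(axis=0)               # m_f for each variable
    s = np.abs(X - m).mean(axis=0)   # mean absolute deviation s_f
    return (X - m) / s
```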


Slide 12

Normalization in [0,1]

Problem: If non-normalized variables are used, the maximum distance between two values can be greater than 1.

Solution: Normalize interval-scaled variables using

\[ z_{if} = \frac{x_{if} - \min_f}{(\max_f - \min_f)\cdot \delta} \]

where min_f denotes the minimum value and max_f the maximum value of the f-th attribute in the data set, and δ is a constant that is chosen depending on the similarity measure (e.g. if Manhattan distance is used, δ is chosen to be 1).
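The corresponding numpy sketch (with δ = 1, i.e. plain min-max scaling):

```python
import numpy as np

def normalize_01(X, delta=1.0):
    """Min-max scale each column of X; for delta = 1 values land in [0, 1]."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / ((hi - lo) * delta)
```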


Slide 13

Other Normalizations

Goal: limit the maximum distance to 1.
Start with a distance measure d_f(x, y).
Determine the maximum distance dmax_f that can occur between two values of the f-th attribute (e.g. dmax_f = max_f - min_f).
Define sim_f(x, y) = 1 - d_f(x, y)/dmax_f.

Advantage: negative similarities cannot occur.


Slide 14

Similarity Between Objects

Distances are normally used to measure the similarity or dissimilarity between two data objects.

Some popular ones include the Minkowski distance:

\[ d(i,j) = \left( |x_{i1} - x_{j1}|^q + |x_{i2} - x_{j2}|^q + \cdots + |x_{ip} - x_{jp}|^q \right)^{1/q} \]

where i = (x_{i1}, x_{i2}, …, x_{ip}) and j = (x_{j1}, x_{j2}, …, x_{jp}) are two p-dimensional data objects, and q is a positive integer.

If q = 1, d is the Manhattan distance:

\[ d(i,j) = |x_{i1} - x_{j1}| + |x_{i2} - x_{j2}| + \cdots + |x_{ip} - x_{jp}| \]
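A one-function numpy sketch covering the whole family:

```python
import numpy as np

def minkowski(x, y, q=2):
    """Minkowski distance between p-dimensional objects x and y;
    q = 1 gives the Manhattan distance, q = 2 the Euclidean distance."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float((np.abs(x - y) ** q).sum() ** (1.0 / q))
```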


Slide 15

Similarity Between Objects (Cont.)

If q = 2, d is the Euclidean distance:

\[ d(i,j) = \sqrt{\,|x_{i1} - x_{j1}|^2 + |x_{i2} - x_{j2}|^2 + \cdots + |x_{ip} - x_{jp}|^2\,} \]

Properties:
d(i,j) ≥ 0
d(i,i) = 0
d(i,j) = d(j,i)
d(i,j) ≤ d(i,k) + d(k,j)

One can also use a weighted distance, the parametric Pearson product-moment correlation, or other dissimilarity measures.


Slide 16

Similarity with respect to a Set of Binary Variables

A contingency table for binary data, counting agreements and disagreements over the p binary variables of objects i and j:

                 Object j
                 1      0      sum
Object i   1     a      b      a+b
           0     c      d      c+d
          sum   a+c    b+d      p

Simple matching coefficient (for symmetric binary variables; considers agreements in 0’s and 1’s to be equivalent):

\[ \operatorname{sim}_{sym}(i,j) = \frac{a + d}{a + b + c + d} \]

Jaccard coefficient (for asymmetric binary variables; ignores agreements in 0’s):

\[ \operatorname{sim}_{Jaccard}(i,j) = \frac{a}{a + b + c} \]
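A short sketch computing both coefficients for 0/1 vectors:

```python
import numpy as np

def binary_similarities(i, j):
    """Return (simple matching, Jaccard) for two equal-length 0/1 vectors."""
    i, j = np.asarray(i), np.asarray(j)
    a = int(((i == 1) & (j == 1)).sum())   # 1-1 agreements
    b = int(((i == 1) & (j == 0)).sum())
    c = int(((i == 0) & (j == 1)).sum())
    d = int(((i == 0) & (j == 0)).sum())   # 0-0 agreements
    return (a + d) / (a + b + c + d), a / (a + b + c)
```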


Slide 17

Similarity between Binary Variable Sets

Example:

Name  Gender  Fever  Cough  Test-1  Test-2  Test-3  Test-4
Jack  M       Y      N      P       N       N       N
Mary  F       Y      N      P       N       P       N
Jim   M       Y      P      N       N       N       N

gender is a symmetric attribute
the remaining attributes are asymmetric binary
let the values Y and P be set to 1, and the value N be set to 0

Using the asymmetric attributes only:

\[ \operatorname{Jacc}(jack, mary) = \frac{2}{2+0+1} = 0.67 \]
\[ \operatorname{Jacc}(jack, jim) = \frac{1}{1+1+1} = 0.33 \]
\[ \operatorname{Jacc}(jim, mary) = \frac{1}{1+1+2} = 0.25 \]


Slide 18

Nominal Variables

A generalization of the binary variable in that it can take more than 2 states, e.g. red, yellow, blue, green.

Method 1: simple matching (m: # of matches, p: total # of variables):

\[ d(o_i, o_j) = \frac{p - m}{p} \]

Method 2: use a large number of binary variables, creating a new binary variable for each of the M nominal states.
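The simple-matching dissimilarity as a tiny sketch:

```python
def nominal_dissimilarity(o1, o2):
    """d(o1, o2) = (p - m) / p over two equal-length tuples of nominal values."""
    p = len(o1)
    m = sum(a == b for a, b in zip(o1, o2))   # number of matching variables
    return (p - m) / p
```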


Slide 19

Ordinal Variables

An ordinal variable can be discrete or continuous; order is important (e.g. UH grade, hotel rating).

Can be treated like interval-scaled variables:
replace x_{if} by its rank r_{if} ∈ {1, …, M_f}
map the range of each variable onto [0, 1] by replacing the f-th variable of the i-th object by

\[ z_{if} = \frac{r_{if} - 1}{M_f - 1} \]

compute the dissimilarity using methods for interval-scaled variables
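The rank mapping as a one-liner; note it is also exactly the cancer-sev transformation used in the case study below:

```python
def rank_to_unit(r, M):
    """Map a rank r in {1, ..., M} onto [0, 1]: z = (r - 1) / (M - 1)."""
    return (r - 1) / (M - 1)
```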


Slide 20

Ratio-Scaled Variables

Ratio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as Ae^{Bt} or Ae^{-Bt}.

Methods:
treat them like interval-scaled variables (not a good choice! why?)
apply a logarithmic transformation: y_{if} = log(x_{if})
treat them as continuous ordinal data and treat their ranks as interval-scaled


Slide 21

Case Study --- Normalization

Patient(ssn, weight, height, cancer-sev, eye-color, age)

Attribute relevance: ssn no; eye-color minor; all others major.

Attribute normalization:
ssn: remove!
weight: between 30 and 650; m_weight = 158, s_weight = 24.20; transform to z_weight = (x_weight - 158)/24.20 (alternatively, z_weight = (x_weight - 30)/620)
height: normalize like weight!
cancer-sev: 4=serious, 3=quite_serious, 2=medium, 1=minor; transform 4 to 1, 3 to 2/3, 2 to 1/3, 1 to 0, and then normalize like weight!
age: normalize like weight!


Slide 22

Case Study --- Weight Selection and Similarity Measure Selection

Patient(ssn, weight, height, cancer-sev, eye-color, age)

For the normalized weight, height, cancer-sev, and age values, use a Manhattan distance function, e.g.:
sim_weight(w1, w2) = 1 - | (w1 - 158)/24.20 - (w2 - 158)/24.20 |

For eye-color use: sim_eye-color(c1, c2) = if c1 = c2 then 1 else 0

Weight assignment: 0.2 for eye-color; 1 for all others.

Final solution --- the chosen similarity measure:
Let o1 = (s1, w1, h1, cs1, e1, a1) and o2 = (s2, w2, h2, cs2, e2, a2); then
sim(o1, o2) := ( sim_weight(w1, w2) + sim_height(h1, h2) + sim_cancer-sev(cs1, cs2) + sim_age(a1, a2) + 0.2 * sim_eye-color(e1, e2) ) / 4.2
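A Python sketch of this chosen measure; the statistics come from the slides, the helper names are mine, and cancer-sev is assumed to be already mapped onto [0, 1] as on Slide 21:

```python
def sim_z(x1, x2, m, s):
    """1 minus the Manhattan distance between the z-scores (x - m) / s."""
    return 1 - abs((x1 - m) / s - (x2 - m) / s)

def sim_eq(c1, c2):
    """0/1 similarity for nominal values such as eye color."""
    return 1.0 if c1 == c2 else 0.0

def patient_similarity(o1, o2):
    """o = (ssn, weight, height, cancer_sev, eye_color, age); ssn is ignored."""
    _, w1, h1, cs1, e1, a1 = o1
    _, w2, h2, cs2, e2, a2 = o2
    return (sim_z(w1, w2, 158, 24.20)      # weight
            + sim_z(h1, h2, 1.52, 19.2)    # height
            + (1 - abs(cs1 - cs2))         # cancer severity, already in [0, 1]
            + sim_z(a1, a2, 45, 13.2)      # age
            + 0.2 * sim_eq(e1, e2)) / 4.2  # eye color, weighted 0.2
```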


Slide 23

Major Clustering Approaches

Partitioning algorithms: construct various partitions and then evaluate them by some criterion
Hierarchical algorithms: create a hierarchical decomposition of the set of data (or objects) using some criterion
Density-based: based on connectivity and density functions
Grid-based: based on a multiple-level granularity structure
Model-based: a model is hypothesized for each of the clusters, and the idea is to find the best fit of the data to the given models


Slide 24

Partitioning Algorithms: Basic Concept

Partitioning method: construct a partition of a database D of n objects into a set of k clusters.

Given k, find the partition into k clusters that optimizes the chosen partitioning criterion:
Global optimum: exhaustively enumerate all partitions
Heuristic methods: the k-means and k-medoids algorithms
k-means (MacQueen ’67): each cluster is represented by the center of the cluster
k-medoids or PAM (Partitioning Around Medoids) (Kaufman & Rousseeuw ’87): each cluster is represented by one of the objects in the cluster


Slide 25

The K-Means Clustering Method

Given k, the k-means algorithm is implemented in four steps:
1. Partition the objects into k nonempty subsets.
2. Compute seed points as the centroids of the clusters of the current partition (the centroid is the center, i.e. mean point, of the cluster).
3. Assign each object to the cluster with the nearest seed point.
4. Go back to Step 2; stop when no assignments change.
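A minimal numpy sketch of these four steps (random initial partition, Euclidean distance, and no guard against empty clusters):

```python
import numpy as np

def k_means(X, k, seed=0):
    """Basic k-means on an (n, p) data matrix X; returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(X))        # step 1: random initial partition
    while True:
        centroids = np.array([X[labels == c].mean(axis=0) for c in range(k)])  # step 2
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)        # step 3: nearest seed point
        if (new_labels == labels).all():         # step 4: stop when stable
            return labels, centroids
        labels = new_labels
```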


Slide 26

The K-Means Clustering Method: Example

[Figure: four scatter plots on a 0-10 by 0-10 grid, illustrating successive k-means iterations on a small point set.]


Slide 27

Comments on the K-Means Method

Strengths:
Relatively efficient: O(tkn), where n is the number of objects, k the number of clusters, and t the number of iterations; normally, k, t << n.
Often terminates at a local optimum. The global optimum may be found using techniques such as deterministic annealing and genetic algorithms.

Weaknesses:
Applicable only when the mean is defined; what about categorical data?
Need to specify k, the number of clusters, in advance
Unable to handle noisy data and outliers
Not suitable for discovering clusters with non-convex shapes


Slide 28

Hierarchical Clustering

Uses the distance matrix as the clustering criterion. This method does not require the number of clusters k as an input, but needs a termination condition.

[Figure: objects a, b, c, d, e are merged step by step (Step 0 through Step 4) into {a,b}, {d,e}, {c,d,e}, and finally {a,b,c,d,e}. Read left to right this is agglomerative clustering (AGNES); read right to left it is divisive clustering (DIANA).]


Slide 29

A Dendrogram Shows How the Clusters are Merged Hierarchically

Decompose the data objects into several levels of nested partitionings (a tree of clusters), called a dendrogram.

A clustering of the data objects is obtained by cutting the dendrogram at the desired level, then each connected component forms a cluster.
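A short sketch using SciPy's agglomerative clustering, which builds the merge tree (AGNES-style) and then cuts the dendrogram at a desired height:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[1.0, 1.1], [1.2, 0.9], [5.0, 5.2], [5.1, 4.9], [3.0, 8.0]])
Z = linkage(X, method="single")                     # agglomerative merge tree
labels = fcluster(Z, t=2.0, criterion="distance")   # cut the dendrogram at height 2
print(labels)   # connected components below the cut form the clusters
```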


Slide 30

CAL-FU/UH Database Clustering and Similarity Assessment Environments

[Figure: architecture diagram with the following components: a Data Extraction Tool connecting the DBMS to an Object View; a Similarity Measure Tool drawing on a library of similarity measures, default choices and domain information, and type and weight information; a Learning Tool using training data; the resulting similarity measure; a Clustering Tool drawing on a library of clustering algorithms; and a User Interface through which a set of clusters is returned.]


Slide 31

Prototypes of Similarity Assessment Tools

Prototype 1 (Cal State Fullerton): supported the interactive definition of similarity measures; the knowledge representation format does not rely on modular units; provides a nearest-neighbor clustering algorithm for database clustering; functions were supported outside a DBMS.

Prototype 2 (UH): Similarity measures are defined using a special language (not interactively); tool supports modular units and functions are provided using a Java/SQL-Server 2000 framework; functions were partially moved inside a DBMS (although some are still inside Java); analysis results are stored in the database and therefore available for further analysis.

Prototype 3 (UH): Supports the learning of similarity measures, functions are provided inside the DBMS, other capabilities???


Slide 32

Self-Organizing Feature Maps (SOMs)

SOMs employ neural network techniques for clustering.
Clustering is performed by having several units compete for the current object.
The unit whose weight vector is closest to the current object wins.
The winner and its neighbors learn by having their weights adjusted.
SOMs are believed to resemble processing that can occur in the brain.
Useful for visualizing high-dimensional data in 2- or 3-D space.
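A toy numpy sketch of one SOM training step on a 2-D grid of units; the learning rate and Gaussian neighborhood width are illustrative choices, not values from the slides:

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One update: find the winning unit for object x, then pull the winner
    and its grid neighbors toward x. weights has shape (rows, cols, p)."""
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)               # unit-to-x distances
    wi, wj = np.unravel_index(dists.argmin(), (rows, cols))   # winning unit
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    grid_d2 = (ii - wi) ** 2 + (jj - wj) ** 2                 # squared grid distance
    h = np.exp(-grid_d2 / (2 * sigma**2))                     # neighborhood function
    weights += lr * h[:, :, None] * (x - weights)             # move toward x
    return weights
```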


Slide 33

Problems and Challenges

Considerable progress has been made in scalable clustering methods:
Partitioning: k-means, k-medoids, CLARANS
Hierarchical: BIRCH, CURE
Density-based: DBSCAN, CLIQUE, OPTICS
Grid-based: STING, WaveCluster
Model-based: AutoClass, DENCLUE, COBWEB

Current clustering techniques do not address all the requirements adequately.

Constraint-based clustering analysis: constraints exist in data space (e.g. bridges and highways) or in user queries.


Slide 34

Summary: Object Similarity & Clustering

Cluster analysis groups objects based on their similarity and has wide applications

Appropriate similarity measures have to be chosen for various types of variables and combined into a global similarity measure.

Clustering algorithms can be categorized into partitioning methods, hierarchical methods, density-based methods, grid-based methods, and model-based methods

Methods to measure, compute, and learn object similarity are quite important, not only for clustering, but also for nearest neighbor approaches, information retrieval in general, and for data visualization.


Slide 35

References (1)

R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. SIGMOD'98.
M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.
M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. OPTICS: Ordering points to identify the clustering structure. SIGMOD'99.
P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996.
M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases. KDD'96.
M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing techniques for efficient class identification. SSD'95.
D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-172, 1987.
D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamic systems. VLDB'98.
S. Guha, R. Rastogi, and K. Shim. CURE: An efficient clustering algorithm for large databases. SIGMOD'98.
A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.


Slide 36

References (2)

L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons, 1990.
E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB'98.
G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John Wiley & Sons, 1988.
P. Michaud. Clustering techniques. Future Generation Computer Systems, 13, 1997.
R. Ng and J. Han. Efficient and effective clustering method for spatial data mining. VLDB'94.
E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets. Proc. 1996 Int. Conf. on Pattern Recognition, 101-105.
G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering approach for very large spatial databases. VLDB'98.
W. Wang, J. Yang, and R. Muntz. STING: A statistical information grid approach to spatial data mining. VLDB'97.
T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An efficient data clustering method for very large databases. SIGMOD'96.