
Page 1:

BBS654 Data Mining

Pinar Duygulu

Slides are adapted from Nazli Ikizler

Page 2:

Classification

• Classification systems:
  • Supervised learning
  • Make a rational prediction given evidence
  • There are several methods for this
  • Useful when you have labeled data (or can get it)

Page 3:

Clustering

• Clustering systems:
  • Unsupervised learning
  • Detect patterns in unlabeled data
  • Useful when you don't know what you're looking for
  • Requires data, but no labels

• Typical applications:
  • As a stand-alone tool to get insight into the data distribution
  • As a preprocessing step for other algorithms

Page 4:

What is Cluster Analysis?

• Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups

[Figure: two groups of points, annotated: intra-cluster distances are minimized, inter-cluster distances are maximized]

Page 5:

Clustering

• Basic idea: group together similar instances

• Example: 2D point patterns

• What could “similar” mean?
  • One option: small (squared) Euclidean distance
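For points stored as NumPy arrays, this notion of similarity is a one-liner; a minimal illustration (the points are made up):

```python
import numpy as np

a, b = np.array([1.0, 2.0]), np.array([4.0, 6.0])
sq_dist = ((a - b) ** 2).sum()   # squared Euclidean distance: 9 + 16 = 25.0
```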

Page 6:

Notion of a Cluster can be Ambiguous

[Figure: the same set of points grouped as two, four, or six clusters. How many clusters?]

Page 7:

Quality: What Is Good Clustering?

• A good clustering method will produce high quality clusters

• high intra-class similarity: cohesive within clusters

• low inter-class similarity: distinctive between clusters

• The quality of a clustering method depends on

• the similarity measure used by the method

• its implementation, and

• its ability to discover some or all of the hidden patterns

Page 8:

Types of Clusterings

• A clustering is a set of clusters

• Important distinction between hierarchical and partitional sets of clusters

• Partitional Clustering
  • A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset

• Hierarchical clustering
  • A set of nested clusters organized as a hierarchical tree

Page 9:

Partitional Clustering

[Figure: the original points and a partitional clustering of them]

Page 10:

Hierarchical Clustering

[Figure: points p1 to p4 clustered by traditional and non-traditional hierarchical clustering, with the corresponding traditional and non-traditional dendrograms]

Page 11:

Clustering Algorithms

• K-means and its variants

• Hierarchical clustering

• Density-based clustering

Page 12:

K-means Clustering

• Partitional clustering approach

• Each cluster is associated with a centroid (center point)

• Each point is assigned to the cluster with the closest centroid

• Number of clusters, K, must be specified

• The basic algorithm is very simple
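The two alternating steps (assign each point to the nearest centroid, then recompute each centroid as the mean of its points) fit in a few lines of NumPy. A minimal sketch, not the deck's own code; the function name and defaults are illustrative:

```python
import numpy as np

def kmeans(X, k, max_iters=100, seed=0):
    """Basic K-means (Lloyd's algorithm) on an (n, d) data matrix X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # random initial centroids
    for _ in range(max_iters):
        # Assignment step: each point joins the cluster with the closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of its assigned points.
        new_centroids = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):  # converged: assignments are stable
            break
        centroids = new_centroids
    return labels, centroids
```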

Page 13:

K-Means

Page 14:

Solution of K-Means

• Iterations

Page 15:

K-means Clustering – Details

• Initial centroids are often chosen randomly.
  • Clusters produced vary from one run to another.

• The centroid is (typically) the mean of the points in the cluster.

• ‘Closeness’ is measured by Euclidean distance, cosine similarity, correlation, etc.

• K-means will converge for common similarity measures

• Most of the convergence happens in the first few iterations.
  • Often the stopping condition is changed to ‘Until relatively few points change clusters’

• Complexity is O(n * K * I * d)
  • n = number of points, K = number of clusters, I = number of iterations, d = number of attributes

Page 16:

Two different K-means Clusterings

[Figure: the same original points clustered two ways: an optimal clustering and a sub-optimal clustering]

Page 17:

Importance of Choosing Initial Centroids

[Figure: K-means progress at iterations 1 through 6 for one choice of initial centroids]

Page 18:

Importance of Choosing Initial Centroids

[Figure: K-means progress at iterations 1 through 5 for a different choice of initial centroids]

Page 19:

Evaluating K-means Clusters

• Most common measure is Sum of Squared Error (SSE)
  • For each point, the error is the distance to the nearest cluster centroid
  • To get SSE, we square these errors and sum them.
  • x is a data point in cluster C_i and m_i is the representative point for cluster C_i
  • One can show that m_i corresponds to the center (mean) of the cluster

• Given two clusters, we can choose the one with the smallest error

• One easy way to reduce SSE is to increase K, the number of clusters

• A good clustering with smaller K can have a lower SSE than a poor clustering with higher K

$$\mathrm{SSE} = \sum_{i=1}^{K} \sum_{x \in C_i} \mathrm{dist}^2(m_i, x)$$
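Translated directly into code (a sketch; it assumes labels and centroids as produced by a K-means run like the one sketched earlier):

```python
import numpy as np

def sse(X, labels, centroids):
    """Sum of squared distances of each point to its cluster's centroid."""
    diffs = X - centroids[labels]     # offset of every point from its own centroid
    return float((diffs ** 2).sum())
```

Given two clusterings with the same K, the one with the smaller SSE is the better of the two by this measure.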

Page 20:

10 Clusters Example

[Figure: K-means on data with 10 natural clusters, iterations 1 through 4]

Starting with two initial centroids in one cluster of each pair of clusters

Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they don’t.

Page 21:

10 Clusters Example

Starting with some pairs of clusters having three initial centroids, while others have only one.

[Figure: K-means on the 10-cluster data, iterations 1 through 4, with unevenly spread initial centroids]

Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they don’t.

Page 22:

Solutions to Initial Centroids Problem

• Multiple runs
  • Helps, but probability is not on your side

• Sample and use hierarchical clustering to determine initial centroids

• Select more than k initial centroids and then select among these initial centroids
  • Select most widely separated

• Postprocessing

• Bisecting K-means
  • Not as susceptible to initialization issues
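The ‘multiple runs’ remedy is simple to express: repeat K-means with different random initializations and keep the run with the lowest SSE. A sketch reusing the illustrative kmeans and sse helpers from earlier:

```python
def kmeans_best_of(X, k, n_runs=10):
    """Run K-means n_runs times and keep the lowest-SSE clustering."""
    best = None
    for seed in range(n_runs):                      # a different random init per run
        labels, centroids = kmeans(X, k, seed=seed)
        err = sse(X, labels, centroids)
        if best is None or err < best[0]:
            best = (err, labels, centroids)
    return best[1], best[2]                         # labels, centroids of the best run
```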

Page 23:

Bisecting K-means

• Bisecting K-means algorithm
  • Variant of K-means that can produce a partitional or a hierarchical clustering
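One common formulation: start with a single all-inclusive cluster and repeatedly split the cluster with the largest SSE using plain 2-means. The sketch below assumes that formulation and reuses the earlier illustrative helpers:

```python
import numpy as np

def cluster_sse(points):
    """SSE of one cluster around its own mean."""
    return float(((points - points.mean(axis=0)) ** 2).sum())

def bisecting_kmeans(X, k):
    """Split the worst cluster with 2-means until k clusters remain."""
    clusters = [np.arange(len(X))]                  # index arrays, one per cluster
    while len(clusters) < k:
        worst = max(range(len(clusters)),
                    key=lambda i: cluster_sse(X[clusters[i]]))
        idx = clusters.pop(worst)
        labels, _ = kmeans(X[idx], 2)               # bisect the chosen cluster
        clusters += [idx[labels == 0], idx[labels == 1]]
    return clusters
```

Recording the sequence of splits, rather than only the final partition, yields the hierarchical clustering the slide mentions.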

Page 24:

Bisecting K-means Example

Page 25:

Limitations of K-means

• K-means has problems when clusters are of differing
  • Sizes
  • Densities
  • Non-globular shapes

• K-means has problems when the data contains outliers.

Page 26:

Limitations of K-means: Differing Sizes

[Figure: original points and the K-means result with 3 clusters]

Page 27:

Limitations of K-means: Differing Density

[Figure: original points and the K-means result with 3 clusters]

Page 28:

Limitations of K-means: Non-globular Shapes

[Figure: original points and the K-means result with 2 clusters]

One solution is to use many clusters: find parts of clusters, then put the parts together.

Page 29:

Overcoming K-means Limitations

[Figure: original points and the K-means clusters]

One solution is to use many clusters: find parts of clusters, then put the parts together.

Page 30:

K-means is sensitive to outliers

• The k-means algorithm is sensitive to outliers!
  • An object with an extremely large value may substantially distort the distribution of the data

• K-Medoids: instead of taking the mean value of the objects in a cluster as a reference point, a medoid can be used, which is the most centrally located object in a cluster
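The medoid itself is easy to compute; a sketch of just that update step (the rest of K-medoids mirrors K-means):

```python
import numpy as np

def medoid(points):
    """The cluster member minimizing total distance to all other members."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    return points[d.sum(axis=1).argmin()]   # the most centrally located object
```

Because the representative must be an actual data object, a single extreme outlier cannot drag it far from the bulk of the cluster.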

[Figure: two scatter plots on a 0-10 grid]

Page 31:

Hierarchical Clustering

• Produces a set of nested clusters organized as a hierarchical tree

• Can be visualized as a dendrogram
  • A tree-like diagram that records the sequences of merges or splits

[Figure: a nested clustering of points 1 to 6 and its dendrogram; merge heights range from 0 to 0.2]
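SciPy can produce such a dendrogram directly; a minimal sketch on made-up data:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

X = np.random.default_rng(0).random((6, 2))   # six random 2-D points
Z = linkage(X, method="single")               # the sequence of merges and their heights
dendrogram(Z)                                 # leaves are points, y-axis is merge distance
plt.show()
```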

Page 32:

Strengths of Hierarchical Clustering

• Do not have to assume any particular number of clusters
  • Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level

• They may correspond to meaningful taxonomies
  • Examples in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)

Page 33:

Hierarchical Clustering

• Two main types of hierarchical clustering

  • Agglomerative:
    • Start with the points as individual clusters
    • At each step, merge the closest pair of clusters until only one cluster (or k clusters) is left

  • Divisive:
    • Start with one, all-inclusive cluster
    • At each step, split a cluster until each cluster contains a point (or there are k clusters)

• Traditional hierarchical algorithms use a similarity or distance matrix
  • Merge or split one cluster at a time

Page 34:

Agglomerative Clustering Algorithm

• More popular hierarchical clustering technique

• Basic algorithm is straightforward:
  1. Compute the proximity matrix
  2. Let each data point be a cluster
  3. Repeat
  4.   Merge the two closest clusters
  5.   Update the proximity matrix
  6. Until only a single cluster remains

• Key operation is the computation of the proximity of two clusters
  • Different approaches to defining the distance between clusters distinguish the different algorithms
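Steps 1 to 6 translate almost literally into code. A deliberately naive sketch (single link as the proximity; cubic time, so for exposition only):

```python
import numpy as np

def agglomerative(X, k=1):
    """Naive single-link agglomerative clustering, merging down to k clusters."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # proximity matrix
    clusters = [[i] for i in range(len(X))]                    # each point is a cluster
    while len(clusters) > k:                                   # repeat until k remain
        best = (np.inf, 0, 1)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Single link: closest pair of points across the two clusters.
                link = d[np.ix_(clusters[a], clusters[b])].min()
                if link < best[0]:
                    best = (link, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)                         # merge the closest pair
    return clusters
```

Updating d incrementally instead of rescanning all pairs is exactly the ‘update the proximity matrix’ step the next slides discuss.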

Page 35:

Starting Situation

• Start with clusters of individual points and a proximity matrix

[Figure: individual points p1 through p12, each its own cluster, and the initial proximity matrix indexed by p1…p5]

Page 36:

Intermediate Situation

• After some merging steps, we have some clusters

[Figure: clusters C1 through C5 and the proximity matrix indexed by C1…C5]

Page 37:

Intermediate Situation

• We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.

[Figure: clusters C1 through C5 with C2 and C5 about to be merged, and the proximity matrix indexed by C1…C5]

Page 38:

After Merging

• The question is “How do we update the proximity matrix?”

[Figure: after merging, the proximity-matrix row and column for C2 U C5 are marked ‘?’]

Page 39:

How to Define Inter-Cluster Similarity

[Figure: two candidate clusters among points p1 through p5 and the proximity matrix. Similarity?]

• MIN
• MAX
• Group Average
• Distance Between Centroids
• Other methods driven by an objective function
  • Ward’s Method uses squared error

Page 40:

How to Define Inter-Cluster Similarity

[Figure: same setup as the previous slide, highlighting one of the inter-cluster similarity definitions]

Page 41:

How to Define Inter-Cluster Similarity

[Figure: same setup as above, highlighting one of the inter-cluster similarity definitions]

Page 42:

How to Define Inter-Cluster Similarity

[Figure: same setup as above, highlighting one of the inter-cluster similarity definitions]

Page 43:

How to Define Inter-Cluster Similarity

[Figure: same setup as above, highlighting one of the inter-cluster similarity definitions]

Page 44:

Hierarchical Clustering: MIN or Single Link

• Similarity of two clusters is based on the two most similar (closest) points in the different clusters
  • Determined by one pair of points, i.e., by one link in the proximity graph

[Figure: nested single-link clusters over points 1 to 6 and the corresponding dendrogram]

Page 45:

Strength of MIN

[Figure: original points and the two clusters found by MIN]

• Can handle non-elliptical shapes

Page 46:

Limitations of MIN

[Figure: original points and the two clusters found by MIN]

• Sensitive to noise and outliers

Page 47:

Cluster Similarity: MAX or Complete Linkage

• Similarity of two clusters is based on the two least similar (most distant) points in the different clusters
  • Determined by all pairs of points in the two clusters

      I1   I2   I3   I4   I5
I1  1.00 0.90 0.10 0.65 0.20
I2  0.90 1.00 0.70 0.60 0.50
I3  0.10 0.70 1.00 0.40 0.30
I4  0.65 0.60 0.40 1.00 0.80
I5  0.20 0.50 0.30 0.80 1.00

Page 48:

Strength of MAX

[Figure: original points and the two clusters found by MAX]

• Less susceptible to noise and outliers

Page 49:

Limitations of MAX

[Figure: original points and the two clusters found by MAX]

• Tends to break large clusters

• Biased towards globular clusters

Page 50:

Cluster Similarity: Group Average

• Proximity of two clusters is the average of pairwise proximity between points in the two clusters.

• Need to use average connectivity for scalability since total proximity favors large clusters

• Compromise between Single and Complete Link

• Strengths
  • Less susceptible to noise and outliers

• Limitations
  • Biased towards globular clusters

$$\mathrm{proximity}(Cluster_i, Cluster_j) = \frac{\displaystyle\sum_{p_i \in Cluster_i,\; p_j \in Cluster_j} \mathrm{proximity}(p_i, p_j)}{|Cluster_i| \cdot |Cluster_j|}$$
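In code, with Euclidean distance standing in for the proximity measure (an assumption; the formula works for any proximity), the group average is:

```python
import numpy as np

def group_average_proximity(A, B):
    """Average pairwise distance between clusters A and B (arrays of points)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # |A| x |B| pairwise distances
    return d.sum() / (len(A) * len(B))
```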

Page 51:

Cluster Similarity: Ward’s Method

• Similarity of two clusters is based on the increase in squared error when two clusters are merged
  • Similar to group average if distance between points is distance squared

• Less susceptible to noise and outliers

• Biased towards globular clusters

• Can be used to initialize K-means
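All four linkage schemes are available in SciPy, so they can be compared on the same data; a sketch on made-up points (fcluster cuts the tree into a requested number of clusters):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.default_rng(0).random((20, 2))         # toy data
for method in ("single", "complete", "average", "ward"):
    Z = linkage(X, method=method)                    # MIN, MAX, Group Average, Ward
    labels = fcluster(Z, t=3, criterion="maxclust")  # cut into 3 clusters
    print(method, labels)
```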

Page 52:

Hierarchical Clustering: Comparison

[Figure: the same six points clustered by MIN, MAX, Group Average, and Ward’s Method, shown side by side]

Page 53:

CHAMELEON: Hierarchical Clustering Using Dynamic Modeling (1999)

• CHAMELEON: G. Karypis, E. H. Han, and V. Kumar, 1999

• Measures the similarity based on a dynamic model

• Two clusters are merged only if the interconnectivity and closeness (proximity) between the two clusters are high relative to the internal interconnectivity of the clusters and the closeness of items within the clusters

• Graph-based, and a two-phase algorithm

  1. Use a graph-partitioning algorithm: cluster objects into a large number of relatively small sub-clusters
  2. Use an agglomerative hierarchical clustering algorithm: find the genuine clusters by repeatedly combining these sub-clusters

Page 54:

Overall Framework of CHAMELEON

[Diagram: Data Set → construct a sparse K-NN graph → partition the graph → merge partitions → final clusters]

K-NN graph: p and q are connected if q is among the top k closest neighbors of p

Relative interconnectivity: connectivity of c1 and c2 over their internal connectivity

Relative closeness: closeness of c1 and c2 over their internal closeness
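The first phase’s sparse K-NN graph can be built with scikit-learn (a sketch; CHAMELEON’s partitioning and merging phases are not shown):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.random.default_rng(0).random((100, 2))
# Sparse adjacency: point i is connected to its 5 nearest neighbors.
G = kneighbors_graph(X, n_neighbors=5, mode="connectivity")
print(G.shape, G.nnz)   # (100, 100) matrix with 500 directed edges
```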

Page 55:

CHAMELEON (Clustering Complex Objects)

Page 56:

Hierarchical Clustering: Problems and Limitations

• Once a decision is made to combine two clusters, it cannot be undone

• No objective function is directly minimized

• Different schemes have problems with one or more of the following:
  • Sensitivity to noise and outliers
  • Difficulty handling different-sized clusters and convex shapes
  • Breaking large clusters