Clustering Basic Concepts and Algorithms 1

Jeff Howbert, Introduction to Machine Learning, Winter 2012

Transcript of Clustering Basic Concepts and Algorithms 1

Page 1: Clustering Basic Concepts and Algorithms 1

Clustering

Basic Concepts and Algorithms 1

Jeff Howbert, Introduction to Machine Learning, Winter 2012

Page 2: Clustering Basic Concepts and Algorithms 1

Clustering definition

Given:
– Set of data points
– Set of attributes on each data point
– A measure of similarity between data points

Find clusters such that:
– Data points within a cluster are more similar to one another
– Data points in separate clusters are less similar to one another

Similarity measures:
– Euclidean distance if attributes are continuous
– Other problem-specific measures
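For continuous attributes, the Euclidean distance mentioned above is easy to compute directly; a minimal sketch (Python with numpy is assumed for all code sketches in this transcript; none of them are from the original lecture):

import numpy as np

def euclidean(a, b):
    # distance between two points described by continuous attributes
    return np.sqrt(np.sum((a - b) ** 2))

print(euclidean(np.array([1.0, 2.0, 3.0]), np.array([2.0, 0.0, 3.0])))  # ~2.236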


Page 3: Clustering Basic Concepts and Algorithms 1

Clustering definition

Find groups (clusters) of data points such that data points in a group will be similar (or related) to one another and different from (or unrelated to) the data points in other groups

Inter-cluster distances are maximized; intra-cluster distances are minimized.


Page 4: Clustering Basic Concepts and Algorithms 1

Approaches to clustering

A clustering is a set of clusters

Important distinction between hierarchical and partitional clustering

– Partitional: data points divided into a finite number of partitions (non-overlapping subsets)
  – each data point is assigned to exactly one subset
– Hierarchical: data points placed into a set of nested clusters, organized into a hierarchical tree
  – tree expresses a continuum of similarities and clusterings

Page 5: Clustering Basic Concepts and Algorithms 1

Partitional clustering

[Figure: original points; a partitional clustering of the same points]


Page 6: Clustering Basic Concepts and Algorithms 1

Hierarchical clustering

[Figure: hierarchical clustering of points p1–p4, shown both as nested clusters and as a dendrogram]


Page 7: Clustering Basic Concepts and Algorithms 1

Partitional clustering illustrated

Euclidean distance-based clustering in 3D space

Points are assigned to clusters so that intracluster distances are minimized and intercluster distances are maximized.

[Figure: 3D scatter plot of points assigned to clusters]

Page 8: Clustering Basic Concepts and Algorithms 1

Hierarchical clustering illustrated

Driving distances between Italian cities


Page 9: Clustering Basic Concepts and Algorithms 1

Applications of clustering

Understanding– Group related documents

Discovered Clusters Industry Group

1 Applied-Matl-DOWN,Bay-Network-Down,3-COM-DOWN, Cabletron-Sys-DOWN,CISCO-DOWN,HP-DOWN,

DSC-Comm-DOWN,INTEL-DOWN,LSI-Logic-DOWN, Micron-Tech-DOWN,Texas-Inst-Down,Tellabs-Inc-Down,

Natl-Semiconduct-DOWN,Oracl-DOWN,SGI-DOWN,

Technology1-DOWN pfor browsing

– Group genes and proteins that have similar

Sun-DOWN

2 Apple-Comp-DOWN,Autodesk-DOWN,DEC-DOWN, ADV-Micro-Device-DOWN,Andrew-Corp-DOWN,

Computer-Assoc-DOWN,Circuit-City-DOWN, Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN,

Motorola-DOWN,Microsoft-DOWN,Scientific-Atl-DOWN

Technology2-DOWN

3 Fannie-Mae-DOWN,Fed-Home-Loan-DOWN, MBNA-Corp-DOWN,Morgan-Stanley-DOWN

Financial-DOWN t at a e s a

functionality– Group stocks with similar

price fluctuations

34 Baker-Hughes-UP,Dresser-Inds-UP,Halliburton-HLD-UP,

Louisiana-Land-UP,Phillips-Petro-UP,Unocal-UP, Schlumberger-UP

Oil-UP

price fluctuations

SummarizationSummarization– Reduce the size of large

data setsClustering precipitation

Jeff Howbert Introduction to Machine Learning Winter 2012 9

C uste g p ec p tat oin Australia

Page 10: Clustering Basic Concepts and Algorithms 1

Clustering application 1

Market segmentation
– Goal: subdivide a market into distinct subsets of customers, such that each subset is conceivably a submarket which can be reached with a customized marketing mix.
– Approach:
  Collect different attributes of customers based on their geographical and lifestyle-related information.
  Find clusters of similar customers.
  Measure the clustering quality by observing buying patterns of customers in the same cluster vs. those from different clusters.

Page 11: Clustering Basic Concepts and Algorithms 1

Clustering application 2

Document clustering
– Goal: find groups of documents that are similar to each other based on the important terms appearing in them.
– Approach: identify frequently occurring terms in each document. Form a similarity measure based on the frequencies of different terms. Use it to cluster.
– Benefit: information retrieval can utilize the clusters to relate a new document or search term to clustered documents.

Page 12: Clustering Basic Concepts and Algorithms 1

Document clustering example

Items to cluster: 3204 articles of the Los Angeles Times.
Similarity measure: number of words in common between a pair of documents (after some word filtering).

Category | Total Articles | Correctly Placed
Financial | 555 | 364
Foreign | 341 | 260
National | 273 | 36
Metro | 943 | 746
Sports | 738 | 573
Entertainment | 354 | 278
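A minimal sketch of this similarity measure, assuming documents arrive as plain strings and a caller-supplied stopword set stands in for the "word filtering" step (both are illustrative assumptions, not details from the study):

def word_overlap(doc_a, doc_b, stopwords=frozenset()):
    # similarity = number of distinct words the two documents share
    words_a = set(doc_a.lower().split()) - stopwords
    words_b = set(doc_b.lower().split()) - stopwords
    return len(words_a & words_b)

print(word_overlap("stocks fell on wall street", "tech stocks fell sharply"))  # 2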


Page 13: Clustering Basic Concepts and Algorithms 1

Clustering application 3

Image segmentation with the mean-shift algorithm. Allows clustering of pixels in combined (R, G, B) plus (x, y) space.
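A sketch of the feature construction this implies, with scikit-learn's MeanShift standing in for the mean-shift step (an assumed dependency; the bandwidth value is purely illustrative, and in practice the color and spatial coordinates may need separate scaling):

import numpy as np
from sklearn.cluster import MeanShift

img = np.random.randint(0, 256, size=(40, 60, 3))  # placeholder H x W RGB image
h, w, _ = img.shape

# one 5-D feature vector per pixel: (R, G, B, x, y)
ys, xs = np.mgrid[0:h, 0:w]
features = np.column_stack([img.reshape(-1, 3), xs.ravel(), ys.ravel()]).astype(float)

labels = MeanShift(bandwidth=60.0).fit(features).labels_
segments = labels.reshape(h, w)  # cluster id per pixel = segmentation map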


Page 14: Clustering Basic Concepts and Algorithms 1

Clustering application 4

Genetic demography


Page 15: Clustering Basic Concepts and Algorithms 1

What is not clustering?

Supervised classification
– Have class label information

Simple segmentation
– Dividing students into different registration groups alphabetically, by last name

Results of a query
– Groupings are a result of an external specification


Page 16: Clustering Basic Concepts and Algorithms 1

Notion of a cluster can be ambiguous

How many clusters?

[Figure: the same set of points shown grouped as two clusters, four clusters, or six clusters]


Page 17: Clustering Basic Concepts and Algorithms 1

Other approaches to clustering

Exclusive versus non-exclusive
– In non-exclusive clusterings, points may belong to multiple clusters.
– Can represent multiple classes or ‘border’ points

Fuzzy versus non-fuzzy
– In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1
– Weights must sum to 1
– Probabilistic clustering has similar characteristics

Partial versus complete
– In some cases, we only want to cluster some of the data

Heterogeneous versus homogeneous
– Clusters of widely different sizes, shapes, and densities

Page 18: Clustering Basic Concepts and Algorithms 1

Types of clusters

Well-separated clusters
Center-based clusters
Contiguous clusters
Density-based clusters
Property or conceptual
Described by an objective function

Page 19: Clustering Basic Concepts and Algorithms 1

Types of clusters: well-separated

Well-separated clusters
– A cluster is a set of points such that any point in a cluster is closer (or more similar) to every other point in the cluster than to any point not in the cluster.

[Figure: 3 well-separated clusters]

Page 20: Clustering Basic Concepts and Algorithms 1

Types of clusters: center-based

Center-based clusters
– A cluster is a set of points such that a point in a cluster is closer (more similar) to the “center” of that cluster than to the center of any other cluster.
– The center of a cluster can be:
  the centroid, the average position of all the points in the cluster
  a medoid, the most “representative” point of a cluster

[Figure: 4 center-based clusters]
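The two notions of “center” differ in a way a short sketch makes concrete (the centroid need not be an actual data point, while the medoid always is):

import numpy as np

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])

# centroid: the average position of all the points
centroid = points.mean(axis=0)                  # [1.5, 1.5], not a data point

# medoid: the data point with the smallest total distance to all the others
dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
medoid = points[dists.sum(axis=1).argmin()]     # [1.0, 0.0]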

Page 21: Clustering Basic Concepts and Algorithms 1

Types of clusters: contiguity-based

Contiguous clusters (nearest neighbor or transitive)
– A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster.

[Figure: 8 contiguous clusters]

Page 22: Clustering Basic Concepts and Algorithms 1

Types of clusters: density-based

Density-based clusters
– A cluster is a dense region of points, which is separated by low-density regions from other regions of high density.
– Used when the clusters are irregular or intertwined, and when noise and outliers are present.

[Figure: 6 density-based clusters]

Page 23: Clustering Basic Concepts and Algorithms 1

Types of clusters: conceptual clusters

Shared property or conceptual clusters
– A cluster is a set of objects that share some common property or represent a particular concept.
– The most general notion of a cluster; in some ways includes all other types.

[Figure: 2 overlapping concept clusters]

Page 24: Clustering Basic Concepts and Algorithms 1

Types of clusters: objective function

Clusters defined by an objective function
– Set of clusters minimizes or maximizes some objective function.
– Enumerate all possible ways of dividing the points into clusters and evaluate the ‘goodness’ of each potential set of clusters by using the given objective function. (NP-hard)
– Can have global or local objective function.
  Hierarchical clustering algorithms typically have a local objective function.
  Partitional algorithms typically have a global objective function.
– A variation of the global objective function approach is to fit the data to a parameterized model.
  Parameters for the model are determined from the data.
  Example: Gaussian mixture models (GMM) assume the data is a ‘mixture’ of a fixed number of Gaussian distributions.
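A minimal sketch of the GMM example using scikit-learn's GaussianMixture (an assumed dependency, not part of the lecture; the synthetic data is illustrative):

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(100, 2)),   # two synthetic Gaussian blobs
               rng.normal(5, 1, size=(100, 2))])

# fit a mixture of two Gaussians; each point's cluster is its most likely component
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
print(gmm.means_)  # estimated component centers, near (0, 0) and (5, 5)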


Page 25: Clustering Basic Concepts and Algorithms 1

Characteristics of input data are important

Type of similarity or density measure
– This is a derived measure, but central to clustering

Sparseness
– Dictates type of similarity
– Adds to efficiency

Attribute type
– Dictates type of similarity

Domain of data
– Dictates type of similarity
– Other characteristics, e.g., autocorrelation

Dimensionality
Noise and outliers
Type of distribution

Page 26: Clustering Basic Concepts and Algorithms 1

Clustering algorithms

k-Means and its variants

Hierarchical clustering

Density-based clustering


Page 27: Clustering Basic Concepts and Algorithms 1

k-Means clustering

Partitional clustering approach.
Each cluster is associated with a centroid (center point).
Each point is assigned to the cluster whose centroid it is closest to.
The number of clusters, k, must be specified.
The basic algorithm is very simple; a sketch follows.
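A minimal numpy sketch of the basic algorithm described above, for illustration only (initialization here is a plain random sample of the data; refinements from later slides are omitted):

import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # 1. select k points as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # 2. form k clusters by assigning each point to its closest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 3. recompute the centroid of each cluster as the mean of its points
        #    (assumes no cluster goes empty; see 'Handling empty clusters' below)
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # 4. repeat until the centroids no longer change
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels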


Page 28: Clustering Basic Concepts and Algorithms 1

k-Means clustering: details

Initial centroids are often chosen randomly.
– Clusters produced can vary from one run to another.
The centroid is (typically) the mean of the points in the cluster.
Similarity is measured by Euclidean distance, cosine similarity, correlation, etc.
k-Means will converge for the common similarity measures mentioned above.
Most of the convergence happens in the first few iterations.
– Often the stopping condition is changed to ‘until relatively few points change clusters’.
Complexity is O( n * K * I * d )
– n = number of points, K = number of clusters, I = number of iterations, d = number of attributes

Page 29: Clustering Basic Concepts and Algorithms 1

Demo: k-means clustering

..\videos\k-means.mp4

on web: http://www.youtube.com/watch?v=74rv4snLl70


Page 30: Clustering Basic Concepts and Algorithms 1

Evaluating k-means clusterings

Most common measure is Sum of Squared Error (SSE)
– For each point, the error is the distance to the nearest centroid.
– To get SSE, we square these errors and sum them:

  SSE = \sum_{i=1}^{K} \sum_{x \in C_i} \mathrm{dist}^2(m_i, x)

  where x is a data point in cluster C_i and m_i is the centroid of C_i.
– Given two clusterings, we choose the one with the smallest SSE.
– One easy way to reduce SSE is to increase k, the number of clusters.
  But a good clustering with smaller k can have a lower SSE than a poor clustering with higher k.
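A matching sketch of the SSE computation, assuming centroids and labels in the form returned by the kmeans sketch on page 27:

import numpy as np

def sse(X, centroids, labels):
    # each point's error is its distance to the centroid of its cluster;
    # SSE is the sum of the squared errors
    diffs = X - centroids[labels]
    return float((diffs ** 2).sum())

Given two clusterings of the same data, the one with the smaller sse(X, centroids, labels) is preferred.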


Page 31: Clustering Basic Concepts and Algorithms 1

Two different k-means clusterings

[Figure: the same original points clustered two ways by k-means: an optimal clustering and a sub-optimal clustering]

Page 32: Clustering Basic Concepts and Algorithms 1

Impact of initial choice of centroids

[Figure: iterations 1–6 of k-means for one choice of initial centroids]

Page 33: Clustering Basic Concepts and Algorithms 1

Impact of initial choice of centroids

[Figure: iterations 1–6 of k-means]

A good outcome: clusters found by the algorithm correspond to natural clusters in the data.

Page 34: Clustering Basic Concepts and Algorithms 1

Impact of initial choice of centroids

[Figure: iterations 1–5 of k-means for another choice of initial centroids]

Page 35: Clustering Basic Concepts and Algorithms 1

Impact of initial choice of centroids

[Figure: iterations 1–5 of k-means]

A bad outcome: clusters found by the algorithm do not correspond to natural clusters in the data.

Page 36: Clustering Basic Concepts and Algorithms 1

Problems with selecting initial centroids

If there are k ‘real’ clusters, then the chance of selecting one centroid from each cluster is small.
– Chance is really small when k is large.
– If clusters are the same size, n, then

  P(\text{one centroid from each cluster}) = \frac{k! \, n^k}{(kn)^k} = \frac{k!}{k^k}

– For example, if k = 10, then probability = 10!/10^10 = 0.00036.
– Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they don’t.
– Consider an example of five pairs of clusters.
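The arithmetic behind the example, as a short check:

from math import factorial

def prob_one_per_cluster(k):
    # k! * n^k favorable selections out of (k*n)^k total = k!/k^k
    return factorial(k) / k ** k

print(prob_one_per_cluster(10))  # 0.00036288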


Page 37: Clustering Basic Concepts and Algorithms 1

Ten clusters example

[Figure: iterations 1–4 of k-means on the ten-cluster data]

Starting with two initial centroids in one cluster of each pair of clusters.

Page 38: Clustering Basic Concepts and Algorithms 1

Ten clusters example

[Figure: iterations 1–4 of k-means on the ten-cluster data]

Starting with two initial centroids in one cluster of each pair of clusters.

Page 39: Clustering Basic Concepts and Algorithms 1

Ten clusters example

[Figure: iterations 1–4 of k-means on the ten-cluster data]

Starting with some pairs of clusters having three initial centroids, while others have only one.

Page 40: Clustering Basic Concepts and Algorithms 1

Ten clusters example

[Figure: iterations 1–4 of k-means on the ten-cluster data]

Starting with some pairs of clusters having three initial centroids, while others have only one.

Page 41: Clustering Basic Concepts and Algorithms 1

Solutions to initial centroids problem

Multiple runs
– Helps, but probability is not on your side (see the sketch below)
Sample and use hierarchical clustering to determine initial centroids
Select more than k initial centroids and then select among these initial centroids
– Select the most widely separated
Postprocessing
Bisecting k-means
– Not as susceptible to initialization issues
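A sketch of the multiple-runs strategy, reusing the kmeans (page 27) and sse (page 30) sketches above: run with several initializations and keep the clustering with the smallest SSE.

def kmeans_multi(X, k, n_runs=10):
    best = None
    for seed in range(n_runs):
        centroids, labels = kmeans(X, k, seed=seed)  # sketch from page 27
        err = sse(X, centroids, labels)              # sketch from page 30
        if best is None or err < best[0]:
            best = (err, centroids, labels)
    return best  # (sse, centroids, labels) of the best run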


Page 42: Clustering Basic Concepts and Algorithms 1

Handling empty clusters

Basic k-means algorithm can yield empty clusters

Several strategies for choosing a replacement centroid:
– Choose the point that contributes most to SSE
– Choose a point from the cluster with the highest SSE
– If there are several empty clusters, the above can be repeated several times.
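A sketch of the first strategy, in the terms of the earlier kmeans sketch: replace an empty cluster's centroid with the point that contributes most to SSE, i.e., the point farthest from its assigned centroid.

import numpy as np

def fix_empty_cluster(X, centroids, labels, j):
    # j is the index of an empty cluster
    errs = np.linalg.norm(X - centroids[labels], axis=1)
    worst = errs.argmax()            # point contributing most to SSE
    centroids[j] = X[worst]
    labels[worst] = j
    return centroids, labels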


Page 43: Clustering Basic Concepts and Algorithms 1

Pre-processing and Post-processing

Pre-processing
– Normalize the data
– Eliminate outliers

Post-processing
– Eliminate small clusters that may represent outliers
– Split ‘loose’ clusters, i.e., clusters with relatively high SSE
– Merge clusters that are ‘close’ and that have relatively low SSE
– Can use these steps during the clustering process (ISODATA)

Page 44: Clustering Basic Concepts and Algorithms 1

Bisecting k-means

Bisecting k-means algorithm
– Variant of k-means that can produce a partitional or a hierarchical clustering
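A minimal sketch under the standard formulation of bisecting k-means (repeatedly split the cluster with the highest SSE using 2-means), reusing the kmeans sketch from page 27; production versions typically try several trial bisections of each cluster and keep the best:

import numpy as np

def bisecting_kmeans(X, k):
    clusters = [np.arange(len(X))]   # start with all points in one cluster
    while len(clusters) < k:
        # choose the cluster with the highest SSE to split next
        def cluster_sse(idx):
            c = X[idx].mean(axis=0)
            return ((X[idx] - c) ** 2).sum()
        worst = max(range(len(clusters)), key=lambda i: cluster_sse(clusters[i]))
        idx = clusters.pop(worst)
        _, labels = kmeans(X[idx], 2)           # bisect with 2-means
        clusters.append(idx[labels == 0])
        clusters.append(idx[labels == 1])
    return clusters  # one array of point indices per cluster

Stopping at k clusters gives a partitional clustering; recording the sequence of splits instead yields a hierarchical one.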


Page 45: Clustering Basic Concepts and Algorithms 1

Limitations of k-means

k-Means can have problems when clusters have:
– Variable sizes
– Variable densities
– Non-globular shapes

k-Means does not deal with outliers gracefully.

k-Means will always find k clusters, no matter what the actual structure of the data is (even randomly distributed).


Page 46: Clustering Basic Concepts and Algorithms 1

Limitations of k-means: variable sizes

[Figure: original points; k-means (3 clusters)]


Page 47: Clustering Basic Concepts and Algorithms 1

Limitations of k-means: variable densities

[Figure: original points; k-means (3 clusters)]


Page 48: Clustering Basic Concepts and Algorithms 1

Limitations of k-means: non-globular shapes

[Figure: original points; k-means (2 clusters)]


Page 49: Clustering Basic Concepts and Algorithms 1

Demo: k-means clustering

..\videos\Visualizing k Means Algorithm.mp4

on web: http://www.youtube.com/watch?v=gSt4_kcZPxE

Note that well-defined clusters are formed even though the data is uniformly and randomly distributed.


Page 50: Clustering Basic Concepts and Algorithms 1

Overcoming k-means limitations

One solution is to use many clusters: this finds pieces of the natural clusters, which then need to be put together.

[Figure: original points; k-means (10 clusters)]

Page 51: Clustering Basic Concepts and Algorithms 1

Overcoming k-means limitations

[Figure: original points; k-means (10 clusters)]


Page 52: Clustering Basic Concepts and Algorithms 1

Overcoming k-means limitations

[Figure: original points; k-means (10 clusters)]


Page 53: Clustering Basic Concepts and Algorithms 1

MATLAB interlude

matlab_demo_10.m
