Transcript of “Today’s Topic: Clustering”

Page 1: Today’s Topic: Clustering

Document clustering
  Motivations
  Document representations
  Success criteria
Clustering algorithms
  Partitional
  Hierarchical

Page 2: What is clustering?

Clustering: the process of grouping a set of objects into classes of similar objects.
  The commonest form of unsupervised learning.
  Unsupervised learning = learning from raw data, as opposed to supervised data where a classification of examples is given.
A common and important task that finds many applications in IR and other places.

Page 3: Why cluster documents?

Whole corpus analysis/navigation
  Better user interface
For improving recall in search applications
  Better search results
For better navigation of search results
  Effective “user recall” will be higher
For speeding up vector space retrieval
  Faster search

Page 4: Yahoo! Hierarchy

[Figure: a portion of the www.yahoo.com/Science directory hierarchy. Top-level categories such as agriculture, biology, physics, CS, and space branch into subtopics such as agronomy, forestry, crops, dairy, botany, cell biology, evolution, magnetism, relativity, AI, HCI, courses, craft, and missions.]

Page 5: Scatter/Gather: Cutting, Karger, and Pedersen

Page 6: For visualizing a document collection and its themes

Wise et al., “Visualizing the non-visual”: PNNL ThemeScapes, Cartia
[Mountain height = cluster size]

Page 7: For improving search recall

Cluster hypothesis: documents with similar text are related.
Therefore, to improve search recall:
  Cluster docs in the corpus a priori.
  When a query matches a doc D, also return other docs in the cluster containing D.
Hope if we do this: the query “car” will also return docs containing “automobile”, because clustering grouped together docs containing “car” with those containing “automobile”.
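A toy sketch of this recall trick (illustrative only; it assumes the corpus has already been clustered and that doc_to_cluster and cluster_to_docs are precomputed lookup tables, names of my choosing):

def expand_with_clusters(matching_docs, doc_to_cluster, cluster_to_docs):
    # Return the docs matching the query plus all other docs from their clusters.
    expanded = set(matching_docs)
    for d in matching_docs:
        expanded.update(cluster_to_docs[doc_to_cluster[d]])
    return expanded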

Page 8: For better navigation of search results

For grouping search results thematically: clusty.com / Vivisimo

Page 9: Issues for clustering

Representation for clustering
  Document representation: vector space? Normalization?
  Need a notion of similarity/distance.
How many clusters?
  Fixed a priori? Completely data driven?
  Avoid “trivial” clusters, too large or too small. In an application, if a cluster's too large, then for navigation purposes you've wasted an extra user click without whittling down the set of documents much.

Page 10: What makes docs “related”?

Ideal: semantic similarity. Practical: statistical similarity.
We will use cosine similarity.
Docs as vectors.
For many algorithms, it is easier to think in terms of a distance (rather than a similarity) between docs.
We will use cosine similarity.
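As a concrete illustration (a minimal sketch, not from the slides), cosine similarity between two term-weight vectors can be computed as:

import math

def cosine_similarity(x, y):
    # dot product divided by the product of the Euclidean norms
    dot = sum(xi * yi for xi, yi in zip(x, y))
    norm_x = math.sqrt(sum(xi * xi for xi in x))
    norm_y = math.sqrt(sum(yi * yi for yi in y))
    return dot / (norm_x * norm_y) if norm_x and norm_y else 0.0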

Page 11: Clustering Algorithms

Partitional algorithms
  Usually start with a random (partial) partitioning
  Refine it iteratively: K-means clustering, model-based clustering
Hierarchical algorithms
  Bottom-up, agglomerative
  Top-down, divisive

Page 12: Partitioning Algorithms

Partitioning method: construct a partition of n documents into a set of K clusters.
Given: a set of documents and the number K.
Find: a partition into K clusters that optimizes the chosen partitioning criterion.
  Globally optimal: exhaustively enumerate all partitions.
  Effective heuristic methods: K-means and K-medoids algorithms.

Page 13: K-Means

Assumes documents are real-valued vectors.
Clusters based on centroids (aka the center of gravity or mean) of points in a cluster c:

  \mu(c) = \frac{1}{|c|} \sum_{x \in c} \vec{x}

Reassignment of instances to clusters is based on distance to the current cluster centroids. (Or one can equivalently phrase it in terms of similarities.)

Page 14: K-Means Algorithm

Select K random docs {s_1, s_2, ..., s_K} as seeds.
Until clustering converges (or another stopping criterion is met):
  For each doc d_i:
    Assign d_i to the cluster c_j such that dist(d_i, s_j) is minimal.
  (Update the seeds to the centroid of each cluster)
  For each cluster c_j:
    s_j = \mu(c_j)
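A minimal runnable sketch of the algorithm above (illustrative only: it assumes dense NumPy document vectors and uses Euclidean distance for the assignment step; the function and variable names are mine, not the slides'):

import numpy as np

def k_means(docs, K, max_iter=100, seed=0):
    """docs: (n, m) NumPy array of document vectors. Returns (assignments, centroids)."""
    rng = np.random.default_rng(seed)
    # Select K random docs as seeds.
    centroids = docs[rng.choice(len(docs), size=K, replace=False)].astype(float)
    assignments = None
    for _ in range(max_iter):
        # Assign each doc to the cluster whose centroid is nearest.
        dists = np.linalg.norm(docs[:, None, :] - centroids[None, :, :], axis=2)
        new_assignments = dists.argmin(axis=1)
        if assignments is not None and np.array_equal(new_assignments, assignments):
            break  # doc partition unchanged: converged
        assignments = new_assignments
        # Update the seeds to the centroid of each cluster.
        for j in range(K):
            members = docs[assignments == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return assignments, centroids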

Page 15: K-Means Example (K = 2)

[Figure: points being reclustered over successive iterations.]
Pick seeds
Reassign clusters
Compute centroids
Reassign clusters
Compute centroids
Reassign clusters
Converged!

Page 16: Termination conditions

Several possibilities, e.g.:
  A fixed number of iterations.
  Doc partition unchanged.
  Centroid positions don’t change.

Page 17: Convergence

Why should the K-means algorithm ever reach a fixed point, a state in which clusters don’t change?
K-means is a special case of a general procedure known as the Expectation Maximization (EM) algorithm.
  EM is known to converge.
  Number of iterations could be large.

Page 18: Convergence of K-Means

Define the goodness measure of cluster k as the sum of squared distances from the cluster centroid:

  G_k = \sum_i (d_i - \mu(c_k))^2   (sum over all d_i in cluster k)
  G = \sum_k G_k

Reassignment monotonically decreases G since each vector is assigned to the closest centroid.

Page 19: Convergence of K-Means

Recomputation monotonically decreases each G_k, since (with m_k the number of members in cluster k) \sum_i (d_i - a)^2 reaches its minimum for:

  \sum_i -2(d_i - a) = 0
  \sum_i d_i = \sum_i a
  m_k a = \sum_i d_i
  a = \frac{1}{m_k} \sum_i d_i = \mu(c_k)

K-means typically converges quickly.

Page 20: Time Complexity

Computing the distance between two docs is O(m), where m is the dimensionality of the vectors.
Reassigning clusters: O(Kn) distance computations, i.e. O(Knm).
Computing centroids: each doc gets added once to some centroid, O(nm).
Assume these two steps are each done once per iteration, for I iterations: O(IKnm).
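For a rough sense of scale (illustrative numbers, not from the slides): with n = 100{,}000 docs, m = 10{,}000 dimensions, K = 100 clusters, and I = 10 iterations, the reassignment work is on the order of

  I \cdot K \cdot n \cdot m = 10 \cdot 100 \cdot 10^{5} \cdot 10^{4} = 10^{12}

basic operations.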

Page 21: Seed Choice

Results can vary based on random seed selection.
Some seeds can result in a poor convergence rate, or convergence to sub-optimal clusterings.
  Select good seeds using a heuristic (e.g., a doc least similar to any existing mean).
  Try out multiple starting points.
  Initialize with the results of another method.

Example showing sensitivity to seeds: in the accompanying figure, if you start with B and E as centroids you converge to {A,B,C} and {D,E,F}; if you start with D and F you converge to {A,B,D,E} and {C,F}.

Page 22: How Many Clusters?

Number of clusters K is given
  Partition n docs into a predetermined number of clusters.
Finding the “right” number of clusters is part of the problem
  Given docs, partition into an “appropriate” number of subsets.
  E.g., for query results the ideal value of K is not known up front, though the UI may impose limits.
Can usually take an algorithm for one flavor and convert it to the other.

Page 23: K not specified in advance

Say, the results of a query.
Solve an optimization problem: penalize having lots of clusters.
  Application dependent, e.g., a compressed summary of the search results list.
Tradeoff between having more clusters (better focus within each cluster) and having too many clusters.

Page 24: K not specified in advance

Given a clustering, define the Benefit for a doc to be the cosine similarity to its centroid.
Define the Total Benefit to be the sum of the individual doc Benefits.

Page 25: Penalize lots of clusters

For each cluster, we have a Cost C. Thus for a clustering with K clusters, the Total Cost is KC.
Define the Value of a clustering to be Total Benefit - Total Cost.
Find the clustering of highest Value, over all choices of K.
  Total Benefit increases with increasing K, but we can stop when it doesn’t increase by “much”. The Cost term enforces this.
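A small sketch of this selection procedure (illustrative only; it reuses the k_means and cosine_similarity helpers sketched earlier, and treats the per-cluster cost C as a tunable constant):

def clustering_value(docs, assignments, centroids, C):
    # Total Benefit: each doc's cosine similarity to its own centroid.
    total_benefit = sum(cosine_similarity(d, centroids[assignments[i]])
                        for i, d in enumerate(docs))
    total_cost = C * len(centroids)   # Total Cost = K * C
    return total_benefit - total_cost

def choose_K(docs, candidate_Ks, C=1.0):
    # Return the clustering of highest Value over the candidate values of K.
    best = None
    for K in candidate_Ks:
        assignments, centroids = k_means(docs, K)
        value = clustering_value(docs, assignments, centroids, C)
        if best is None or value > best[0]:
            best = (value, K, assignments, centroids)
    return best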

Page 26: K-means issues, variations, etc.

Recomputing the centroid after every assignment (rather than after all points are reassigned) can improve the speed of convergence of K-means.
Assumes clusters are spherical in vector space
  Sensitive to coordinate changes, weighting, etc.
Disjoint and exhaustive
  Doesn’t have a notion of “outliers”.

Page 27: Hierarchical Clustering

Build a tree-based hierarchical taxonomy (dendrogram) from a set of documents.
One approach: recursive application of a partitional clustering algorithm.

[Figure: example taxonomy. animal splits into vertebrate (fish, reptile, amphib., mammal) and invertebrate (worm, insect, crustacean).]

Page 28: Dendrogram: Hierarchical Clustering

Clustering obtained by cutting the dendrogram at a desired level: each connected component forms a cluster.

Page 29: Hierarchical Agglomerative Clustering (HAC)

Starts with each doc in a separate cluster,
  then repeatedly joins the closest pair of clusters, until there is only one cluster.
The history of merging forms a binary tree or hierarchy.
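A naive sketch of HAC (illustrative only, O(n^3); the cluster-similarity function is left pluggable so any of the linkage variants on the following slide can be used):

def hac(n_docs, cluster_sim):
    """cluster_sim(a, b): similarity of two clusters, each a list of doc indices.
    Returns the sequence of merges, which forms the binary merge tree."""
    clusters = [[i] for i in range(n_docs)]
    merges = []
    while len(clusters) > 1:
        # Find the closest (most similar) pair of clusters.
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                s = cluster_sim(clusters[a], clusters[b])
                if best is None or s > best[0]:
                    best = (s, a, b)
        _, a, b = best
        merges.append((clusters[a], clusters[b]))
        # Join the closest pair into one cluster.
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges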

Page 30: Closest pair of clusters

Many variants to defining the closest pair of clusters:
  Single-link: similarity of the most cosine-similar pair of points (single link).
  Complete-link: similarity of the “furthest” points, the least cosine-similar pair.
  Centroid: clusters whose centroids (centers of gravity) are the most cosine-similar.
  Average-link: average cosine between pairs of elements.

Page 31: Single Link Agglomerative Clustering

Use maximum similarity of pairs:

  sim(c_i, c_j) = \max_{x \in c_i,\, y \in c_j} sim(x, y)

Can result in “straggly” (long and thin) clusters due to the chaining effect.
After merging c_i and c_j, the similarity of the resulting cluster to another cluster c_k is:

  sim(c_i \cup c_j, c_k) = \max\big(sim(c_i, c_k),\, sim(c_j, c_k)\big)

Page 32: Single Link Example

Page 33: Complete Link Agglomerative Clustering

Use minimum similarity of pairs:

  sim(c_i, c_j) = \min_{x \in c_i,\, y \in c_j} sim(x, y)

Makes “tighter,” spherical clusters that are typically preferable.
After merging c_i and c_j, the similarity of the resulting cluster to another cluster c_k is:

  sim(c_i \cup c_j, c_k) = \min\big(sim(c_i, c_k),\, sim(c_j, c_k)\big)

[Figure: clusters C_i, C_j, C_k.]
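A one-function sketch of the two update rules above (a hypothetical helper, not from the slides):

def merged_similarity(sim_ik, sim_jk, linkage):
    # Similarity of (c_i union c_j) to c_k, given sim(c_i, c_k) and sim(c_j, c_k).
    if linkage == "single":
        return max(sim_ik, sim_jk)    # governed by the most similar pair
    if linkage == "complete":
        return min(sim_ik, sim_jk)    # governed by the least similar pair
    raise ValueError("unknown linkage: " + linkage)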

Page 34: Complete Link Example

Page 35: Computational Complexity

In the first iteration, all HAC methods need to compute the similarity of all pairs of n individual instances, which is O(n^2).
In each of the subsequent n-2 merging iterations, compute the distance between the most recently created cluster and all other existing clusters.
In order to maintain an overall O(n^2) performance, computing the similarity to each other cluster must be done in constant time.
  Otherwise O(n^2 log n), or O(n^3) if done naively.

Page 36: Group Average Agglomerative Clustering

Similarity of two clusters = average similarity of all pairs within the merged cluster:

  sim(c_i, c_j) = \frac{1}{|c_i \cup c_j|\,(|c_i \cup c_j| - 1)} \sum_{x \in c_i \cup c_j} \; \sum_{y \in c_i \cup c_j,\, y \neq x} sim(x, y)

Compromise between single and complete link.
Two options:
  Averaged across all ordered pairs in the merged cluster
  Averaged over all pairs between the two original clusters
No clear difference in efficacy.

Page 37: Computing Group Average Similarity

Always maintain the sum of vectors in each cluster:

  s(c_j) = \sum_{x \in c_j} \vec{x}

Compute the similarity of clusters in constant time:

  sim(c_i, c_j) = \frac{\big(s(c_i) + s(c_j)\big) \cdot \big(s(c_i) + s(c_j)\big) - (|c_i| + |c_j|)}{(|c_i| + |c_j|)\,(|c_i| + |c_j| - 1)}
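A minimal sketch of the constant-time computation (illustrative; it assumes unit-length document vectors, which is what makes the subtracted self-similarity term equal to |c_i| + |c_j|):

import numpy as np

def group_average_sim(sum_i, size_i, sum_j, size_j):
    # sum_i, sum_j: cached sum vectors s(c_i), s(c_j); size_i, size_j: |c_i|, |c_j|.
    s = sum_i + sum_j          # sum vector of the merged cluster
    n = size_i + size_j        # size of the merged cluster
    # s . s sums sim(x, y) over all ordered pairs, including the n self-pairs (each 1).
    return (np.dot(s, s) - n) / (n * (n - 1))

When two clusters are merged, the cached sum of the new cluster is just sum_i + sum_j, so maintaining these sums costs only O(m) per merge.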

Page 38: What Is A Good Clustering?

Internal criterion: a good clustering will produce high-quality clusters in which:
  the intra-class (that is, intra-cluster) similarity is high
  the inter-class similarity is low
The measured quality of a clustering depends on both the document representation and the similarity measure used.

Page 39: External criteria for clustering quality

Quality measured by its ability to discover some or all of the hidden patterns or latent classes in gold-standard data.
Assesses a clustering with respect to ground truth.
Assume documents with C gold-standard classes, while our clustering algorithm produces K clusters ω_1, ω_2, ..., ω_K, each with n_i members.

Page 40: External Evaluation of Cluster Quality

Simple measure: purity, the ratio between the size of the dominant class in cluster ω_i and the size of cluster ω_i:

  Purity(\omega_i) = \frac{1}{n_i} \max_j (n_{ij}), \quad j \in C

where n_{ij} is the number of members of class j in cluster ω_i.
Others are entropy of classes in clusters (or mutual information between classes and clusters).

Page 41: Purity example

[Figure: three clusters of points drawn from three gold-standard classes; the per-class counts are (5, 1, 0) in Cluster I, (1, 4, 1) in Cluster II, and (2, 0, 3) in Cluster III.]

Cluster I: Purity = 1/6 · max(5, 1, 0) = 5/6
Cluster II: Purity = 1/6 · max(1, 4, 1) = 4/6
Cluster III: Purity = 1/5 · max(2, 0, 3) = 3/5
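A quick sketch that reproduces these numbers (illustrative code; each cluster is given as its per-class member counts):

def cluster_purity(counts):
    # Purity of one cluster: dominant class size over cluster size.
    return max(counts) / sum(counts)

clusters = [[5, 1, 0], [1, 4, 1], [2, 0, 3]]
print([cluster_purity(c) for c in clusters])       # [5/6, 4/6, 3/5], as above
# Overall purity, weighting each cluster by its size: (5 + 4 + 3) / 17 ≈ 0.71
print(sum(max(c) for c in clusters) / sum(sum(c) for c in clusters))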

Page 42: Rand Index

Number of points                     Same cluster        Different clusters
                                     in clustering       in clustering
Same class in ground truth                A                     C
Different classes in ground truth         B                     D

Page 43: Rand index: symmetric version

  RI = \frac{A + D}{A + B + C + D}

Compare with standard Precision and Recall:

  P = \frac{A}{A + B}, \qquad R = \frac{A}{A + C}

Page 44: Rand Index example: 0.68

Number of points                     Same cluster        Different clusters
                                     in clustering       in clustering
Same class in ground truth               20                    24
Different classes in ground truth        20                    72
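A quick check of the 0.68 figure (illustrative code):

def rand_index(A, B, C, D):
    # A: same class & same cluster;       B: different classes & same cluster;
    # C: same class & different clusters; D: different classes & different clusters.
    return (A + D) / (A + B + C + D)

print(rand_index(A=20, B=20, C=24, D=72))   # 92 / 136 ≈ 0.68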