Transcript of "Data Mining: Concepts and Techniques (3rd ed.) — Chapter 11" — Jiawei Han, Micheline Kamber, and Jian Pei

Page 1

Data Mining: Concepts and Techniques (3rd ed.)
— Chapter 11 —
Jiawei Han, Micheline Kamber, and Jian Pei
University of Illinois at Urbana-Champaign & Simon Fraser University
©2009 Han, Kamber & Pei. All rights reserved.


Page 3

Chapter 11. Cluster Analysis: Advanced Methods

- Statistics-Based Clustering
- Clustering High-Dimensional Data
- Semi-Supervised Learning and Active Learning
- Constraint-Based Clustering
- Bi-Clustering and Co-Clustering
- Collaborative Filtering
- Spectral Clustering
- Evaluation of Clustering Quality
- Summary

Page 4

Model-Based Clustering

What is model-based clustering?
- Assumption: a cluster is generated by a model such as a probability distribution
- A model (e.g., a Gaussian distribution) is determined by a set of parameters
- Task: optimize the fit between the given data and some mathematical model by learning the parameters of the model

Typical methods
- Statistical approach: EM (Expectation-Maximization), AutoClass
- Neural network approach: SOM (Self-Organizing Feature Map)

Page 5

Mixture Models

- A cluster can be modeled as a probability distribution
- In practice, each cluster's distribution is often approximated well by a multivariate normal distribution
- Multiple clusters form a mixture of different probability distributions
- A data set is a set of observations drawn from such a mixture of models

Page 6

Object Probability

Suppose there are k clusters and a set X of m objects. Let the j-th cluster have parameters \theta_j = (\mu_j, \sigma_j). The probability that a point belongs to the j-th cluster is w_j, where w_1 + \cdots + w_k = 1.

The probability of an object x is

  \mathrm{prob}(x \mid \Theta) = \sum_{j=1}^{k} w_j \, p_j(x \mid \theta_j)

The probability of the data set X is

  \mathrm{prob}(X \mid \Theta) = \prod_{i=1}^{m} \mathrm{prob}(x_i \mid \Theta) = \prod_{i=1}^{m} \sum_{j=1}^{k} w_j \, p_j(x_i \mid \theta_j)
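To make the mixture density concrete, here is a minimal Python sketch (NumPy only; the weights, means, and standard deviations below are illustrative assumptions, not values from the slides):

```python
import numpy as np

def mixture_prob(x, weights, mus, sigmas):
    """Density of x under a 1-D Gaussian mixture: sum_j w_j * N(x; mu_j, sigma_j)."""
    comps = [w / (np.sqrt(2 * np.pi) * s) * np.exp(-(x - m) ** 2 / (2 * s ** 2))
             for w, m, s in zip(weights, mus, sigmas)]
    return sum(comps)

def dataset_prob(X, weights, mus, sigmas):
    """Probability of an i.i.d. data set: product over objects of mixture_prob."""
    return np.prod([mixture_prob(x, weights, mus, sigmas) for x in X])

# illustrative parameters: two clusters with equal weight
print(mixture_prob(0.0, [0.5, 0.5], [-4.0, 4.0], [2.0, 2.0]))
```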

Page 7

Example

For a univariate Gaussian cluster,

  \mathrm{prob}(x \mid \theta) = \frac{1}{\sqrt{2\pi}\,\sigma} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}

With two clusters \theta_1 = (-4, 2) and \theta_2 = (4, 2), the mixture density is

  \mathrm{prob}(x \mid \Theta) = \frac{1}{2\sqrt{2\pi}} \, e^{-\frac{(x+4)^2}{8}} + \frac{1}{2\sqrt{2\pi}} \, e^{-\frac{(x-4)^2}{8}}

Page 8

Maximum Likelihood Estimation

Maximum likelihood principle: if we know that a set of objects comes from one distribution but do not know its parameters, we choose the parameters that maximize the probability of the data.

Maximize

  \mathrm{prob}(X \mid \theta) = \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi}\,\sigma} \, e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}

Equivalently, maximize the log-likelihood

  \log \mathrm{prob}(X \mid \theta) = -\sum_{i=1}^{m} \frac{(x_i-\mu)^2}{2\sigma^2} - 0.5\, m \log 2\pi - m \log \sigma
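For a single Gaussian, setting the derivatives of this log-likelihood to zero gives the sample mean and the (biased) sample standard deviation in closed form; a minimal sketch (the data values are made up):

```python
import numpy as np

def gaussian_mle(X):
    """MLE for a univariate Gaussian: mu = sample mean, sigma^2 = mean squared deviation."""
    X = np.asarray(X, dtype=float)
    mu = X.mean()
    sigma = np.sqrt(((X - mu) ** 2).mean())  # note: divides by m, not m-1
    return mu, sigma

mu, sigma = gaussian_mle([3.9, 4.2, 4.1, 3.8, 4.0])
print(mu, sigma)
```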

Page 9

The EM (Expectation-Maximization) Algorithm

- Select an initial set of model parameters
- Repeat
  - Expectation step: for each object x_i, calculate the probability that it belongs to each distribution j, i.e., prob(x_i | \theta_j)
  - Maximization step: given the probabilities from the expectation step, find the new parameter estimates that maximize the expected likelihood
- Until the parameters are stable
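A minimal NumPy sketch of EM for a 1-D Gaussian mixture, assuming the simplest possible initialization and a fixed iteration count (the data below are synthetic, mimicking the two-cluster example on Page 7):

```python
import numpy as np

def em_gmm_1d(X, k=2, iters=100, seed=0):
    """A minimal EM sketch for a 1-D Gaussian mixture with k components."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    w = np.full(k, 1.0 / k)                    # mixing weights w_j
    mu = rng.choice(X, size=k, replace=False)  # initial means: random data points
    sigma = np.full(k, X.std() + 1e-6)         # initial standard deviations
    for _ in range(iters):
        # E-step: responsibility r[i, j] = P(cluster j | x_i)
        d = (X[:, None] - mu[None, :]) / sigma[None, :]
        dens = np.exp(-0.5 * d ** 2) / (np.sqrt(2 * np.pi) * sigma[None, :])
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate w, mu, sigma from the responsibilities
        n = r.sum(axis=0)
        w = n / len(X)
        mu = (r * X[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (X[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-9
    return w, mu, sigma

X = np.concatenate([np.random.default_rng(1).normal(-4, 2, 200),
                    np.random.default_rng(2).normal(4, 2, 200)])
print(em_gmm_1d(X))
```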

Page 10

Advantages and Disadvantages

Advantages
- Mixture models are more general than k-means and fuzzy c-means
- Clusters can be characterized by a small number of parameters
- The results may satisfy the statistical assumptions of the generative models

Disadvantages
- Computationally expensive
- Needs large data sets
- Hard to estimate the number of clusters

Page 11

Neural Network Approaches

- Represent each cluster as an exemplar, acting as a "prototype" of the cluster
- New objects are assigned to the cluster whose exemplar is the most similar, according to some distance measure

Typical methods
- SOM (Self-Organizing Feature Map)
- Competitive learning
  - Involves a hierarchical architecture of several units (neurons)
  - Neurons compete in a "winner-takes-all" fashion for the object currently being presented

Page 12

Self-Organizing Feature Map (SOM)

- SOMs are also called topologically ordered maps or Kohonen Self-Organizing Feature Maps (KSOMs)
- A SOM maps all the points in a high-dimensional source space onto a 2-D or 3-D target space, such that the distance and proximity relationships (i.e., the topology) are preserved as much as possible
- Similar to k-means: cluster centers tend to lie in a low-dimensional manifold in the feature space
- Clustering is performed by having several units compete for the current object
  - The unit whose weight vector is closest to the current object wins
  - The winner and its neighbors learn by having their weights adjusted
- SOMs are believed to resemble processing that can occur in the brain
- Useful for visualizing high-dimensional data in 2-D or 3-D space
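A minimal sketch of the competitive update at the heart of SOM training: find the best-matching unit, then pull it and its grid neighbors toward the input. Grid size, learning rate, and neighborhood radius are illustrative assumptions:

```python
import numpy as np

def train_som(X, grid=(10, 10), iters=1000, lr0=0.5, radius0=3.0, seed=0):
    """Train a 2-D SOM on data X (n_samples x dim)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, X.shape[1]))
    # grid coordinates of every unit, used for neighborhood distances
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        lr = lr0 * np.exp(-t / iters)          # decaying learning rate
        radius = radius0 * np.exp(-t / iters)  # shrinking neighborhood
        # winner: the unit whose weight vector is closest to x
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
        # influence falls off with grid distance from the winner
        d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        influence = np.exp(-d2 / (2 * radius ** 2))
        weights += lr * influence[..., None] * (x - weights)
    return weights

X = np.random.default_rng(1).random((500, 3))  # toy 3-D data
som = train_som(X)
```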

Page 13

Web Document Clustering Using SOM

- The result of SOM clustering of 12,088 Web articles
- The picture on the right: drilling down on the keyword "mining"
- Based on the WEBSOM system (websom.hut.fi)

Page 14

Chapter 11. Cluster Analysis: Advanced Methods

- Statistics-Based Clustering
- Clustering High-Dimensional Data
- Semi-Supervised Learning and Active Learning
- Constraint-Based Clustering
- Bi-Clustering and Co-Clustering
- Collaborative Filtering
- Spectral Clustering
- Evaluation of Clustering Quality
- Summary

Page 15

Clustering High-Dimensional Data

- Many applications: text documents, DNA microarray data
- Major challenges:
  - Many irrelevant dimensions may mask clusters
  - Distance measures become meaningless due to near-equi-distance
  - Clusters may exist only in some subspaces
- Methods
  - Feature transformation: only effective if most dimensions are relevant
    - PCA and SVD are useful only when features are highly correlated/redundant
  - Feature selection: wrapper or filter approaches
    - Useful for finding a subspace where the data form nice clusters
  - Subspace clustering: find clusters in all possible subspaces
    - CLIQUE, ProClus, and frequent pattern-based clustering

Page 16

The Curse of Dimensionality (graphs adapted from Parsons et al., SIGKDD Explorations 2004)

- Data in only one dimension are relatively packed
- Adding a dimension "stretches" the points across that dimension, pushing them further apart
- Adding more dimensions pushes the points still further apart: high-dimensional data are extremely sparse
- Distance measures become meaningless due to near-equi-distance
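The equi-distance effect is easy to demonstrate empirically: as the dimensionality grows, the gap between the nearest and farthest point shrinks relative to the distances themselves. A small sketch with synthetic uniform data:

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (1, 10, 100, 1000):
    X = rng.random((500, d))          # 500 points in the unit d-cube
    q = rng.random(d)                 # a query point
    dist = np.linalg.norm(X - q, axis=1)
    # relative contrast (max - min) / min shrinks as d grows
    print(d, (dist.max() - dist.min()) / dist.min())
```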

Page 17

Why Subspace Clustering? (adapted from Parsons et al., SIGKDD Explorations 2004)

- Clusters may exist only in some subspaces
- Subspace clustering: find clusters in all the subspaces

Page 18

CLIQUE (Clustering In QUEst)

- Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD'98)
- Automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space
- CLIQUE can be considered both density-based and grid-based
  - It partitions each dimension into the same number of equal-length intervals
  - It partitions an m-dimensional data space into non-overlapping rectangular units
  - A unit is dense if the fraction of the total data points contained in it exceeds an input model parameter
  - A cluster is a maximal set of connected dense units within a subspace

Page 19

CLIQUE: The Major Steps

1. Partition the data space and find the number of points that lie inside each cell of the partition
2. Identify the subspaces that contain clusters, using the Apriori principle
3. Identify clusters
   - Determine dense units in all subspaces of interest
   - Determine connected dense units in all subspaces of interest
4. Generate a minimal description for the clusters
   - Determine the maximal regions that cover each cluster of connected dense units
   - Determine a minimal cover for each cluster
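A minimal sketch of the first two steps for 1-D and 2-D subspaces. The grid resolution and density threshold are illustrative assumptions, and the Apriori pruning shown is only the candidate-generation idea, not the full CLIQUE algorithm:

```python
import numpy as np
from collections import Counter
from itertools import combinations

def dense_units(X, bins=10, tau=0.05):
    """Find dense cells in every 1-D and 2-D subspace of X.
    A cell is dense if it holds more than tau * len(X) points."""
    n, d = X.shape
    # discretize each dimension into equal-length intervals
    span = X.max(0) - X.min(0) + 1e-12
    cells = ((X - X.min(0)) / span * bins).astype(int).clip(0, bins - 1)
    min_count = tau * n
    dense1 = {dim: {c for c, k in Counter(cells[:, dim]).items() if k > min_count}
              for dim in range(d)}
    dense2 = {}
    for i, j in combinations(range(d), 2):
        counts = Counter(zip(cells[:, i], cells[:, j]))
        # Apriori-style pruning: a dense 2-D cell must project onto dense 1-D cells
        dense2[(i, j)] = {c for c, k in counts.items()
                          if k > min_count and c[0] in dense1[i] and c[1] in dense1[j]}
    return dense1, dense2

X = np.random.default_rng(0).random((1000, 3))
d1, d2 = dense_units(X)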

Page 20

[Figure: CLIQUE example over attributes age (20–60), Salary (×$10,000), and Vacation (weeks). Dense units found with respect to age in the (age, salary) and (age, vacation) planes — e.g., salary 30–50 — are intersected to form a candidate search space for dense units in higher-dimensional subspaces; density threshold τ = 3.]

Page 21

Strengths and Weaknesses of CLIQUE

Strengths
- Automatically finds the subspaces of highest dimensionality in which high-density clusters exist
- Insensitive to the order of records in the input, and does not presume any canonical data distribution
- Scales linearly with the size of the input, and scales well as the number of dimensions grows

Weaknesses
- The accuracy of the clustering result may be degraded; this is the price of the method's simplicity

Page 22

Frequent Pattern-Based Approach

- Clustering high-dimensional data (e.g., text documents, microarray data)
  - Projected subspace clustering: which dimensions should be projected on? CLIQUE, ProClus
  - Feature extraction: costly, and possibly not effective
  - Alternative: use frequent patterns as "features"
- Clustering by pattern similarity in microarray data (p-clustering) [H. Wang, W. Wang, J. Yang, and P. S. Yu. Clustering by pattern similarity in large data sets, SIGMOD'02]

Page 23

Clustering by Pattern Similarity (p-Clustering)

- Left figure: microarray "raw" data shows 3 genes and their values in a multi-dimensional space; it is difficult to find their patterns
- Right two figures: some subsets of dimensions form nice shift and scaling patterns
- No globally defined similarity/distance measure
- Clusters may not be exclusive: an object can appear in multiple clusters

Page 24

Why p-Clustering?

Microarray data analysis may need
- Clustering on thousands of dimensions (attributes)
- Discovery of both shift and scaling patterns

Candidate approaches
- Clustering with a Euclidean distance measure? Cannot find shift patterns
- Clustering on derived attributes A_{ij} = a_i - a_j? Introduces N(N-1) dimensions
- Bi-cluster (Y. Cheng and G. Church. Biclustering of expression data. ISMB'00): uses the mean-squared residue score of a submatrix (I, J)

  H(I, J) = \frac{1}{|I||J|} \sum_{i \in I,\, j \in J} (d_{ij} - d_{iJ} - d_{Ij} + d_{IJ})^2

  where

  d_{iJ} = \frac{1}{|J|} \sum_{j \in J} d_{ij}, \quad d_{Ij} = \frac{1}{|I|} \sum_{i \in I} d_{ij}, \quad d_{IJ} = \frac{1}{|I||J|} \sum_{i \in I,\, j \in J} d_{ij}

  A submatrix is a δ-cluster if H(I, J) ≤ δ for some δ > 0

Problems with bi-clusters
- No downward closure property
- Due to averaging, a bi-cluster may contain outliers yet still stay within the δ-threshold
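A minimal sketch of the mean-squared residue score H(I, J) for a submatrix (NumPy; the example matrix is made up, chosen so its rows form a perfect shift pattern):

```python
import numpy as np

def msr(D, rows, cols):
    """Mean-squared residue H(I, J) of the submatrix of D given by rows I and cols J."""
    S = D[np.ix_(rows, cols)]
    row_mean = S.mean(axis=1, keepdims=True)   # d_iJ
    col_mean = S.mean(axis=0, keepdims=True)   # d_Ij
    all_mean = S.mean()                        # d_IJ
    residue = S - row_mean - col_mean + all_mean
    return (residue ** 2).mean()

D = np.array([[1.0, 2.0, 3.0],
              [2.0, 3.0, 4.0],
              [5.0, 6.0, 7.0]])
print(msr(D, [0, 1, 2], [0, 1, 2]))  # perfect shift pattern -> residue 0
```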

Page 25

p-Clustering: Clustering by Pattern Similarity

pScore: the similarity between two objects r_x, r_y on two attributes a_u, a_v

  pScore\left(\begin{bmatrix} d_{xa} & d_{xb} \\ d_{ya} & d_{yb} \end{bmatrix}\right) = |(d_{xa} - d_{xb}) - (d_{ya} - d_{yb})|

δ-pCluster: a submatrix such that for any 2-by-2 submatrix X in it, pScore(X) ≤ δ (δ > 0)

Properties of δ-pClusters
- Downward closure
- Clusters are more homogeneous than bi-clusters (hence the name: pair-wise Cluster)

MaPle (Pei et al. 2003): efficient mining of maximal p-clusters

For scaling patterns, taking the logarithm of

  \frac{d_{xa} / d_{ya}}{d_{xb} / d_{yb}}

leads to the pScore form
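A minimal sketch of pScore and a brute-force δ-pCluster check over all 2-by-2 submatrices (for illustration only; MaPle avoids this enumeration):

```python
from itertools import combinations

def p_score(m):
    """pScore of a 2x2 matrix [[dxa, dxb], [dya, dyb]]."""
    (dxa, dxb), (dya, dyb) = m
    return abs((dxa - dxb) - (dya - dyb))

def is_delta_pcluster(D, delta):
    """True if every 2x2 submatrix of D has pScore <= delta."""
    rows, cols = range(len(D)), range(len(D[0]))
    return all(
        p_score([[D[x][a], D[x][b]], [D[y][a], D[y][b]]]) <= delta
        for x, y in combinations(rows, 2)
        for a, b in combinations(cols, 2)
    )

# a shift pattern: row 2 is row 1 plus a constant, so every pScore is 0
D = [[1.0, 5.0, 2.0],
     [3.0, 7.0, 4.0]]
print(is_delta_pcluster(D, delta=0.0))  # True
```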

Page 26

Chapter 11. Cluster Analysis: Advanced Methods

- Statistics-Based Clustering
- Clustering High-Dimensional Data
- Semi-Supervised Learning and Active Learning
- Constraint-Based Clustering
- Bi-Clustering and Co-Clustering
- Collaborative Filtering
- Spectral Clustering
- Evaluation of Clustering Quality
- Summary

Page 27

Why Constraint-Based Cluster Analysis?

- Need user feedback: users know their applications best
- Fewer parameters but more user-desired constraints, e.g., an ATM allocation problem: obstacles and desired clusters

Page 28

A Classification of Constraints in Cluster Analysis

- Clustering in applications: it is desirable to have user-guided (i.e., constrained) cluster analysis
- Different constraints in cluster analysis:
  - Constraints on individual objects (do selection first), e.g., cluster only houses worth over $300K
  - Constraints on distance or similarity functions, e.g., weighted functions, obstacles (rivers, lakes)
  - Constraints on the selection of clustering parameters, e.g., number of clusters, MinPts
  - User-specified constraints, e.g., each cluster must contain at least 500 valued customers and 5000 ordinary ones
  - Semi-supervised: small training sets given as "constraints" or hints

Page 29

Clustering With Obstacle Objects

- Tung, Hou, and Han. Spatial Clustering in the Presence of Obstacles, ICDE'01
- K-medoids is preferable here, since k-means may place an ATM center in the middle of a lake
- Visibility graph and shortest path
- Triangulation and micro-clustering
- Two kinds of shortest-path join indices are worth pre-computing:
  - VV index: for any pair of obstacle vertices
  - MV index: for any pair of micro-cluster and obstacle vertex

Page 30

An Example: Clustering With Obstacle Objects

[Figure: two clustering results side by side — one not taking obstacles into account, one taking obstacles into account.]

Page 31

User-Guided Clustering

[Figure: schema of a CS department database — Professor(name, office, position), Open-course(course, semester, instructor), Course(course-id, name, area), Student(name, office, position), Register(student, course, semester, unit, grade), Advise(professor, student, degree), Group(name, area), Work-In(person, group), Publication(title, year, conf), Publish(author, title). The target of clustering is Student; the user hint marks the research-area attribute.]

- X. Yin, J. Han, P. S. Yu, "Cross-Relational Clustering with User's Guidance", KDD'05
- The user usually has a clustering goal, e.g., clustering students by research area
- The user specifies this clustering goal to CrossClus

Page 32

Comparing with Classification

- The user-specified feature (in the form of an attribute) is used as a hint, not as class labels
- The attribute may contain too many or too few distinct values; e.g., a user may want to cluster students into 20 clusters instead of 3
- Additional features need to be included in the cluster analysis

[Figure: all tuples participate in clustering; the user hint is one attribute among them.]

Page 33

Comparing with Semi-Supervised Clustering

- Semi-supervised clustering: the user provides a training set consisting of "similar" (must-link) and "dissimilar" (cannot-link) pairs of objects
- User-guided clustering: the user specifies an attribute as a hint, and more relevant features are found for clustering

[Figure: in semi-supervised clustering, pairwise must-link/cannot-link constraints are given over the tuples to be clustered; in user-guided clustering, a single attribute of the tuples is marked as the hint.]

Page 34

Why Not Semi-Supervised Clustering?

- Much information (in multiple relations) is needed to judge whether two tuples are similar
- A user may not be able to provide a good training set
- It is much easier for a user to specify an attribute as a hint, such as a student's research area

[Figure: two tuples to be compared — (Tom Smith, SC1211, TA) and (Jane Chang, BI205, RA) — with the user hint marking the relevant attribute.]

Page 35

CrossClus: An Overview

- Measures similarity between features by how they group objects into clusters
- Uses a heuristic method to search for pertinent features
  - Starts from the user-specified feature and gradually expands the search range
- Uses tuple ID propagation to create feature values
  - Features can easily be created during the expansion of the search range by propagating IDs
- Explores three clustering algorithms: k-means, k-medoids, and hierarchical clustering

Page 36

Multi-Relational Features

A multi-relational feature is defined by:
- A join path, e.g., Student → Register → OpenCourse → Course
- An attribute, e.g., Course.area
- (For a numerical feature) an aggregation operator, e.g., sum or average

Example: categorical feature f = [Student → Register → OpenCourse → Course, Course.area, null]

Areas of the courses taken by each student:

  Tuple   DB   AI   TH
  t1       5    5    0
  t2       0    3    7
  t3       1    5    4
  t4       5    0    5
  t5       3    3    4

Values of feature f (per-tuple normalization of the counts above):

  Tuple   DB   AI   TH
  t1      0.5  0.5  0
  t2      0    0.3  0.7
  t3      0.1  0.5  0.4
  t4      0.5  0    0.5
  t5      0.3  0.3  0.4

Page 37

Representing Features

- Similarity between tuples t1 and t2 w.r.t. a categorical feature f: the cosine similarity between the vectors f(t1) and f(t2)

  sim_f(t_1, t_2) = \frac{\sum_{k=1}^{L} f(t_1).p_k \cdot f(t_2).p_k}{\sqrt{\sum_{k=1}^{L} f(t_1).p_k^2} \cdot \sqrt{\sum_{k=1}^{L} f(t_2).p_k^2}}

- The most important information of a feature f is how f groups tuples into clusters
- f is represented by the similarities between every pair of tuples indicated by f: the similarity vector V_f (plotted with tuple indices on the horizontal axes and similarity on the vertical axis), which can be considered a vector of N × N dimensions
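A minimal sketch of this per-feature tuple similarity, using the toy feature values from Page 36:

```python
import numpy as np

f = {"t1": [0.5, 0.5, 0.0], "t2": [0.0, 0.3, 0.7], "t3": [0.1, 0.5, 0.4],
     "t4": [0.5, 0.0, 0.5], "t5": [0.3, 0.3, 0.4]}

def sim_f(u, v):
    """Cosine similarity between two tuples' feature-value vectors."""
    u, v = np.asarray(u), np.asarray(v)
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(sim_f(f["t1"], f["t2"]))
```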

Page 38

Similarity Between Features

Values of features f (course area) and g (research group):

  Tuple   DB   AI   TH  |  Info sys  Cog sci  Theory
  t1      0.5  0.5  0   |  1         0        0
  t2      0    0.3  0.7 |  0         0        1
  t3      0.1  0.5  0.4 |  0         0.5      0.5
  t4      0.5  0    0.5 |  0.5       0        0.5
  t5      0.3  0.3  0.4 |  0.5       0.5      0

Similarity between two features: the cosine similarity of their similarity vectors

  sim(f, g) = \frac{V_f \cdot V_g}{|V_f|\,|V_g|}

Page 39

Computing Feature Similarity

Computing V_f · V_g directly requires the similarities between all pairs of tuples, which are hard to compute. It can be rewritten in terms of feature-value similarities, which are easy to compute:

  V_f \cdot V_g = \sum_{i=1}^{N} \sum_{j=1}^{N} sim_f(t_i, t_j)\, sim_g(t_i, t_j) = \sum_{k=1}^{l} \sum_{q=1}^{m} sim(f_k, g_q)^2

where the similarity between feature values f_k and g_q w.r.t. the tuples (e.g., between DB and Info sys) is

  sim(f_k, g_q) = \sum_{i=1}^{N} f(t_i).p_k \cdot g(t_i).p_q

This way, the similarity between each pair of feature values can be computed with one scan over the data.
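A minimal sketch of this trick: with the per-tuple feature vectors stacked as matrices, the feature-value similarities sim(f_k, g_q) are just the entries of FᵀG, computable in one pass, and V_f · V_g is the sum of their squares (inner-product tuple similarity, as in the identity above; values from the table on Page 38):

```python
import numpy as np

# rows = tuples t1..t5; columns = feature values
F = np.array([[0.5, 0.5, 0.0],      # feature f: DB, AI, TH
              [0.0, 0.3, 0.7],
              [0.1, 0.5, 0.4],
              [0.5, 0.0, 0.5],
              [0.3, 0.3, 0.4]])
G = np.array([[1.0, 0.0, 0.0],      # feature g: Info sys, Cog sci, Theory
              [0.0, 0.0, 1.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

# sim(f_k, g_q) = sum_i f(t_i).p_k * g(t_i).p_q  -> one scan over the data
S = F.T @ G                     # l x m matrix of feature-value similarities
vf_dot_vg = (S ** 2).sum()      # = sum over all tuple pairs of sim_f * sim_g

# brute-force check over all N^2 tuple pairs (inner-product similarity)
brute = sum((F[i] @ F[j]) * (G[i] @ G[j]) for i in range(5) for j in range(5))
print(np.isclose(vf_dot_vg, brute))  # True
```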

Page 40

Searching for Pertinent Features

- Different features convey different aspects of information
- Features conveying the same aspect of information usually cluster tuples in more similar ways, e.g., research group areas vs. conferences of publications
- Given the user-specified feature, find pertinent features by computing feature similarity

[Figure: features grouped by the aspect of information they convey — research area (research group area, advisor, conferences of papers), academic performance (GPA, number of papers, GRE score), demographic info (nationality, permanent address).]

Page 41

Heuristic Search for Pertinent Features

Overall procedure:
1. Start from the user-specified feature
2. Search in the neighborhood of existing pertinent features
3. Expand the search range gradually

[Figure: the CS department schema again — Professor, Open-course, Course, Student, Register, Advise, Work-In, Group, Publication, Publish — with Student as the target of clustering, the user hint on the course area, and arrows 1 and 2 marking successive expansions of the search range.]

- Tuple ID propagation is used to create multi-relational features
- IDs of the target tuples can be propagated along any join path, from which we can find the tuples joinable with each target tuple

Page 42

Clustering with Multi-Relational Features

Given a set of L pertinent features f_1, …, f_L, the similarity between two tuples is the weighted sum of the per-feature similarities:

  sim(t_1, t_2) = \sum_{i=1}^{L} sim_{f_i}(t_1, t_2) \cdot f_i.weight

The weight of a feature is determined during feature search by its similarity with the other pertinent features.

Clustering methods:
- CLARANS [Ng & Han '94], a scalable clustering algorithm for non-Euclidean spaces
- K-means
- Agglomerative hierarchical clustering
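A minimal sketch of the weighted tuple similarity, reusing the per-feature cosine similarity from Page 37 (the feature values and weights below are illustrative assumptions):

```python
import numpy as np

def cos(u, v):
    u, v = np.asarray(u), np.asarray(v)
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def tuple_sim(t1_feats, t2_feats, weights):
    """Weighted sum of per-feature similarities: sum_i sim_fi(t1, t2) * weight_i."""
    return sum(w * cos(u, v) for (u, v), w in zip(zip(t1_feats, t2_feats), weights))

# two tuples, each described by two features (values assumed for illustration)
t1 = [[0.5, 0.5, 0.0], [1.0, 0.0, 0.0]]   # feature f values, feature g values
t2 = [[0.0, 0.3, 0.7], [0.0, 0.0, 1.0]]
print(tuple_sim(t1, t2, weights=[0.7, 0.3]))
```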

Page 43

Experiments: Comparing CrossClus with

- Baseline: uses only the user-specified feature
- PROCLUS [Aggarwal et al. '99]: a state-of-the-art subspace clustering algorithm
  - Uses a subset of features for each cluster
  - We convert the relational database into a single table by propositionalization
  - The user-specified feature is forced to be used in every cluster
- RDBC [Kirsten and Wrobel '00]: a representative ILP clustering algorithm
  - Uses neighbor information of objects for clustering
  - The user-specified feature is forced to be used

Page 44

Measure of Clustering Accuracy

- Accuracy is measured against manually labeled data: we manually assign tuples to clusters according to their properties (e.g., professors in different research areas)
- Accuracy of clustering: the percentage of pairs of tuples in the same cluster that share a common label
- This measure favors many small clusters, so we let each approach generate the same number of clusters

Page 45

DBLP Dataset

[Figure: clustering accuracy on the DBLP dataset (y-axis from 0 to 1) for seven feature settings — Conf, Word, Coauthor, Conf+Word, Conf+Coauthor, Word+Coauthor, All three — comparing CrossClus K-Medoids, CrossClus K-Means, CrossClus Agglomerative, Baseline, PROCLUS, and RDBC.]

Page 46

Summary

- Cluster analysis groups objects based on their similarity and has wide applications
- Measures of similarity can be computed for various types of data
- Clustering algorithms can be categorized into partitioning methods, hierarchical methods, density-based methods, grid-based methods, and model-based methods
- Outlier detection and analysis are very useful for fraud detection, etc., and can be performed by statistical, distance-based, or deviation-based approaches
- There are still many open research issues in cluster analysis

Page 47

Problems and Challenges

Considerable progress has been made on scalable clustering methods:
- Partitioning: k-means, k-medoids, CLARANS
- Hierarchical: BIRCH, ROCK, CHAMELEON
- Density-based: DBSCAN, OPTICS, DenClue
- Grid-based: STING, WaveCluster, CLIQUE
- Model-based: EM, Cobweb, SOM
- Frequent pattern-based: pCluster
- Constraint-based: COD, constrained clustering

Current clustering techniques do not address all the requirements adequately; this is still an active area of research.

Page 48

References

- G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John Wiley & Sons, 1988.
- P. Michaud. Clustering Techniques. Future Generation Computer Systems, 13, 1997.
- A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
- L. Parsons, E. Haque, and H. Liu. Subspace Clustering for High Dimensional Data: A Review. SIGKDD Explorations, 6(1), June 2004.
- E. Schikuta. Grid Clustering: An Efficient Hierarchical Clustering Method for Very Large Data Sets. Proc. 1996 Int. Conf. on Pattern Recognition.
- A. K. H. Tung, J. Han, L. V. S. Lakshmanan, and R. T. Ng. Constraint-Based Clustering in Large Databases. ICDT'01.
- A. K. H. Tung, J. Hou, and J. Han. Spatial Clustering in the Presence of Obstacles. ICDE'01.
- H. Wang, W. Wang, J. Yang, and P. S. Yu. Clustering by Pattern Similarity in Large Data Sets. SIGMOD'02.
- X. Yin, J. Han, and P. S. Yu. Cross-Relational Clustering with User's Guidance. Proc. 2005 Int. Conf. on Knowledge Discovery and Data Mining (KDD'05), Chicago, IL, Aug. 2005.


Page 50

Slides Not to Be Used in Class

Page 51

Chapter 11. Cluster Analysis: Advanced Methods

- Statistics-Based Clustering
  - Model-Based Clustering: The Expectation-Maximization Method
  - Neural Network Approach (SOM)
  - Fuzzy and Non-Crisp Clustering
- Clustering High-Dimensional Data
  - Why Subspace Clustering? Challenges of Clustering High-Dimensional Data
  - PROCLUS: A Dimension-Reduction Subspace Clustering Method
  - Frequent Pattern-Based Clustering Methods
- Semi-Supervised Learning and Active Learning
  - Semi-Supervised Clustering
  - Classification of Partially Labeled Data
- Constraint-Based and User-Guided Cluster Analysis
  - Clustering with Obstacle Objects
  - User-Constrained Cluster Analysis
  - User-Guided Cluster Analysis
- Bi-Clustering and Co-Clustering
- Collaborative Filtering
  - Clustering-Based Approach
  - Classification-Based Approach
  - Frequent Pattern-Based Approach
- Spectral Clustering
- Evaluation of Clustering Quality
- Summary

Page 52

Conceptual Clustering

- A form of clustering in machine learning
- Produces a classification scheme for a set of unlabeled objects
- Finds a characteristic description for each concept (class)

COBWEB (Fisher '87)
- A popular and simple method of incremental conceptual learning
- Creates a hierarchical clustering in the form of a classification tree
- Each node refers to a concept and contains a probabilistic description of that concept

Page 53

COBWEB Clustering Method

[Figure: a classification tree.]

Page 54

More on Conceptual Clustering

Limitations of COBWEB
- The assumption that attributes are independent of each other is often too strong, because correlations may exist
- Not suitable for clustering large database data: skewed trees and expensive probability distributions

CLASSIT
- An extension of COBWEB for incremental clustering of continuous data
- Suffers from problems similar to COBWEB's

AutoClass (Cheeseman and Stutz, 1996)
- Uses Bayesian statistical analysis to estimate the number of clusters
- Popular in industry