Frequent Pattern Mining
Slides by Shree Jaswal
Source: ssjaswal.com/wp-content/uploads/2016/03/DMBI_chp7.pdf

Topics to be covered


Market Basket Analysis; Frequent Itemsets, Closed Itemsets, and Association Rules; Frequent Pattern Mining; Efficient and Scalable Frequent Itemset Mining Methods; the Apriori algorithm for finding frequent itemsets using candidate generation; generating association rules from frequent itemsets; improving the efficiency of Apriori; a pattern-growth approach for mining frequent itemsets; mining frequent itemsets using vertical data formats; mining closed and maximal patterns; introduction to mining multilevel and multidimensional association rules; from association mining to correlation analysis; pattern evaluation measures; introduction to constraint-based association mining.

What Is Frequent Pattern Analysis?

Frequent pattern: a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set.

First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context of frequent itemsets and association rule mining.

Motivation: Finding inherent regularities in data

What products were often purchased together?— Beer and diapers?!

What are the subsequent purchases after buying a PC?

What kinds of DNA are sensitive to this new drug?

Can we automatically classify web documents?

Applications

Basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis.


Why Is Freq. Pattern Mining Important?

Discloses an intrinsic and important property of data sets

Forms the foundation for many essential data mining tasks

Association, correlation, and causality analysis

Sequential, structural (e.g., sub-graph) patterns

Pattern analysis in spatiotemporal, multimedia, time-series, and stream data

Classification: associative classification

Cluster analysis: frequent pattern-based clustering

Data warehousing: iceberg cube and cube-gradient

Semantic data compression: fascicles

Broad applications


Association Rule Mining

Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction

Market-Basket transactions

TID Items

1 Bread, Milk

2 Bread, Diaper, Beer, Eggs

3 Milk, Diaper, Beer, Coke

4 Bread, Milk, Diaper, Beer

5 Bread, Milk, Diaper, Coke

Example of Association Rules

{Diaper} → {Beer}, {Milk, Bread} → {Eggs, Coke}, {Beer, Bread} → {Milk}

Implication means co-occurrence, not causality!


Definition: Frequent Itemset

Itemset

A collection of one or more items

Example: {Milk, Bread, Diaper}

k-itemset

An itemset that contains k items

Support count (σ)

Frequency of occurrence of an itemset

E.g. σ({Milk, Bread, Diaper}) = 2

Support

Fraction of transactions that contain an itemset

E.g. s({Milk, Bread, Diaper}) = 2/5

Frequent Itemset

An itemset whose support is greater than or equal to a minsup threshold

TID Items

1 Bread, Milk

2 Bread, Diaper, Beer, Eggs

3 Milk, Diaper, Beer, Coke

4 Bread, Milk, Diaper, Beer

5 Bread, Milk, Diaper, Coke
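These definitions can be checked directly; a minimal Python sketch over the table above (the function names are mine):

```python
# Market-basket data from the table above (TIDs 1-5).
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    """sigma(X): number of transactions that contain every item of X."""
    return sum(1 for t in transactions if set(itemset) <= t)

def support(itemset, transactions):
    """s(X): fraction of transactions that contain X."""
    return support_count(itemset, transactions) / len(transactions)

print(support_count({"Milk", "Bread", "Diaper"}, transactions))  # 2
print(support({"Milk", "Bread", "Diaper"}, transactions))        # 0.4
```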


Definition: Association Rule

Example:

{Milk, Diaper} → {Beer}

s = σ({Milk, Diaper, Beer}) / |T| = 2/5 = 0.4

c = σ({Milk, Diaper, Beer}) / σ({Milk, Diaper}) = 2/3 = 0.67

Association Rule

– An implication expression of the form X → Y, where X and Y are itemsets

– Example: {Milk, Diaper} → {Beer}

Rule Evaluation Metrics

– Support (s)

Fraction of transactions that contain both X and Y

– Confidence (c)

Measures how often items in Y appear in transactions that contain X

TID Items

1 Bread, Milk

2 Bread, Diaper, Beer, Eggs

3 Milk, Diaper, Beer, Coke

4 Bread, Milk, Diaper, Beer

5 Bread, Milk, Diaper, Coke
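Both metrics reduce to support counts over the table; a small sketch (helper names are mine):

```python
# The five market-basket transactions from the table above.
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def sigma(itemset):
    """Support count: transactions containing every item of the itemset."""
    return sum(1 for t in transactions if set(itemset) <= t)

def rule_metrics(X, Y):
    """Support and confidence of the rule X -> Y."""
    s = sigma(set(X) | set(Y)) / len(transactions)
    c = sigma(set(X) | set(Y)) / sigma(X)
    return s, c

s, c = rule_metrics({"Milk", "Diaper"}, {"Beer"})
print(round(s, 2), round(c, 2))  # 0.4 0.67
```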


Association Rule Mining Task

Given a set of transactions T, the goal of association rule mining is to find all rules having

support ≥ minsup threshold

confidence ≥ minconf threshold

Brute-force approach:

List all possible association rules

Compute the support and confidence for each rule

Prune rules that fail the minsup and minconf thresholds

Computationally prohibitive!


Mining Association Rules

Example of Rules:

{Milk, Diaper} → {Beer} (s=0.4, c=0.67)
{Milk, Beer} → {Diaper} (s=0.4, c=1.0)
{Diaper, Beer} → {Milk} (s=0.4, c=0.67)
{Beer} → {Milk, Diaper} (s=0.4, c=0.67)
{Diaper} → {Milk, Beer} (s=0.4, c=0.5)
{Milk} → {Diaper, Beer} (s=0.4, c=0.5)

TID Items

1 Bread, Milk

2 Bread, Diaper, Beer, Eggs

3 Milk, Diaper, Beer, Coke

4 Bread, Milk, Diaper, Beer

5 Bread, Milk, Diaper, Coke

Observations:

• All the above rules are binary partitions of the same itemset: {Milk, Diaper, Beer}

• Rules originating from the same itemset have identical support but can have different confidence

• Thus, we may decouple the support and confidence requirements


Mining Association Rules

Two-step approach:

1. Frequent Itemset Generation

– Generate all itemsets whose support ≥ minsup

2. Rule Generation

– Generate high confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset

Frequent itemset generation is still computationally expensive


Closed Patterns and Max-Patterns

A long pattern contains a combinatorial number of sub-patterns; e.g., {a1, …, a100} contains C(100,1) + C(100,2) + … + C(100,100) = 2^100 − 1 ≈ 1.27×10^30 sub-patterns!

Solution: Mine closed patterns and max-patterns instead

An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X (proposed by Pasquier, et al. @ ICDT'99)

An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X (proposed by Bayardo @ SIGMOD'98)

Closed patterns are a lossless compression of frequent patterns

Reducing the # of patterns and rules
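On a database small enough to enumerate, both notions can be computed by brute force; a sketch over a toy three-transaction database (my own, chosen small enough to enumerate):

```python
from itertools import combinations

def all_frequent(transactions, min_count):
    """Brute-force support of every itemset with support >= min_count."""
    items = sorted(set().union(*transactions))
    freq = {}
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            sup = sum(1 for t in transactions if set(cand) <= t)
            if sup >= min_count:
                freq[cand] = sup
    return freq

def closed_and_max(freq):
    """Closed: no superset with the same support. Max: no frequent superset."""
    closed, maximal = set(), set()
    for x, sup in freq.items():
        supers = [y for y in freq if set(x) < set(y)]
        if all(freq[y] < sup for y in supers):
            closed.add(x)
        if not supers:
            maximal.add(x)
    return closed, maximal

freq = all_frequent([{"a", "b", "c"}, {"a", "b"}, {"a"}], min_count=1)
closed, maximal = closed_and_max(freq)
print(sorted(closed))   # [('a',), ('a', 'b'), ('a', 'b', 'c')]
print(sorted(maximal))  # [('a', 'b', 'c')]
```

Note that every max-pattern is closed, but not vice versa, which is why closed patterns are the lossless choice.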


Closed Patterns and Max-Patterns

Exercise. DB = {<a1, …, a100>, < a1, …, a50>}

Min_sup = 1.

What is the set of closed itemsets?

<a1, …, a100>: 1

< a1, …, a50>: 2

What is the set of max-patterns?

<a1, …, a100>: 1


Frequent Itemset Generation

[Figure: the lattice of all itemsets over items A–E, from the empty (null) set down to ABCDE.]

Given d items, there are 2^d possible candidate itemsets.


Scalable Methods for Mining Frequent Patterns

The downward closure property of frequent patterns

Any subset of a frequent itemset must be frequent

If {beer, diaper, nuts} is frequent, so is {beer, diaper}

i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper}

Scalable mining methods: Three major approaches

Apriori (Agrawal & Srikant@VLDB’94)

Freq. pattern growth (FPgrowth—Han, Pei & Yin @SIGMOD’00)

Vertical data format approach (Charm—Zaki & Hsiao @SDM’02)


Apriori: A Candidate Generation-and-Test Approach

Apriori pruning principle: if any itemset is infrequent, its supersets should not be generated or tested! (Agrawal & Srikant @VLDB'94; Mannila, et al. @KDD'94)

Equivalently: if an itemset is frequent, then all of its subsets must also be frequent.

Apriori principle holds due to the following property of the support measure:

Support of an itemset never exceeds the support of its subsets

This is known as the anti-monotone property of support


∀X, Y: (X ⊆ Y) ⇒ s(X) ≥ s(Y)


Method:

Initially, scan the DB once to get the frequent 1-itemsets.

Generate length-(k+1) candidate itemsets from the length-k frequent itemsets.

Test the candidates against the DB.

Terminate when no frequent or candidate set can be generated.
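The four steps above can be sketched end to end; an unoptimized rendering over a toy database (a sketch, not production code):

```python
from itertools import combinations

def apriori(transactions, min_sup):
    """Level-wise Apriori following the steps above (sketch, not optimized)."""
    # Scan once for the frequent 1-itemsets (L1)
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    Lk = {s: c for s, c in counts.items() if c >= min_sup}
    result = dict(Lk)
    k = 1
    while Lk:
        # Generate C(k+1): join Lk with itself, prune by the subset rule
        prev = list(Lk)
        Ck = set()
        for i in range(len(prev)):
            for j in range(i + 1, len(prev)):
                union = prev[i] | prev[j]
                if len(union) == k + 1 and all(
                    frozenset(sub) in Lk for sub in combinations(union, k)
                ):
                    Ck.add(union)
        # Test the candidates against the database
        counts = {c: 0 for c in Ck}
        for t in transactions:
            for c in Ck:
                if c <= t:
                    counts[c] += 1
        Lk = {s: n for s, n in counts.items() if n >= min_sup}
        result.update(Lk)
        k += 1  # terminates once no candidate or frequent set remains
    return result

freq = apriori(
    [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}],
    min_sup=2,
)
print(freq[frozenset({"B", "C", "E"})])  # 2
```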



Illustrating Apriori Principle

[Figure: the itemset lattice over A–E, shown twice. Once an itemset is found to be infrequent, all of its supersets are pruned from the search space.]


Illustrating Apriori Principle

Items (1-itemsets):

Item    Count
Bread   4
Coke    2
Milk    4
Beer    3
Diaper  4
Eggs    1

Pairs (2-itemsets), with no need to generate candidates involving Coke or Eggs:

Itemset          Count
{Bread, Milk}    3
{Bread, Beer}    2
{Bread, Diaper}  3
{Milk, Beer}     2
{Milk, Diaper}   3
{Beer, Diaper}   3

Triplets (3-itemsets), Minimum Support = 3:

Itemset                Count
{Bread, Milk, Diaper}  3

If every subset is considered: 6C1 + 6C2 + 6C3 = 6 + 15 + 20 = 41 candidates. With support-based pruning: 6 + 6 + 1 = 13.


The Apriori Algorithm—An Example

min_sup = 2

Database TDB:

Tid  Items
10   A, C, D
20   B, C, E
30   A, B, C, E
40   B, E

1st scan → C1: {A}: 2, {B}: 3, {C}: 3, {D}: 1, {E}: 3
L1: {A}: 2, {B}: 3, {C}: 3, {E}: 3

C2 (from L1): {A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E}
2nd scan → counts: {A,B}: 1, {A,C}: 2, {A,E}: 1, {B,C}: 2, {B,E}: 3, {C,E}: 2
L2: {A,C}: 2, {B,C}: 2, {B,E}: 3, {C,E}: 2

C3 (from L2): {B,C,E}
3rd scan → L3: {B,C,E}: 2


The Apriori Algorithm

Pseudo-code (Ck: candidate itemsets of size k; Lk: frequent itemsets of size k):

    L1 = {frequent items};
    for (k = 1; Lk != ∅; k++) do begin
        Ck+1 = candidates generated from Lk;
        for each transaction t in database do
            increment the count of all candidates in Ck+1 that are contained in t;
        Lk+1 = candidates in Ck+1 with min_support;
    end
    return ∪k Lk;


Important Details of Apriori

How to generate candidates?

Step 1: self-joining Lk

Step 2: pruning

How to count supports of candidates?

Example of Candidate-generation

L3={abc, abd, acd, ace, bcd}

Self-joining: L3*L3

abcd from abc and abd

acde from acd and ace

Pruning:

acde is removed because ade is not in L3

C4={abcd}
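The L3 → C4 example can be traced in code; a sketch assuming itemsets are kept as lexicographically sorted tuples:

```python
from itertools import combinations

def gen_candidates(Lk):
    """Self-join Lk (tuples of sorted items), then prune by the Apriori rule."""
    Lk = sorted(tuple(sorted(x)) for x in Lk)
    Lset = set(Lk)
    k = len(Lk[0])
    Ck1 = set()
    for i in range(len(Lk)):
        for j in range(i + 1, len(Lk)):
            a, b = Lk[i], Lk[j]
            if a[:-1] == b[:-1]:            # join: the first k-1 items agree
                cand = a + (b[-1],)
                # prune: every k-subset of the candidate must be in Lk
                if all(sub in Lset for sub in combinations(cand, k)):
                    Ck1.add(cand)
    return Ck1

L3 = [("a", "b", "c"), ("a", "b", "d"), ("a", "c", "d"),
      ("a", "c", "e"), ("b", "c", "d")]
print(gen_candidates(L3))  # {('a', 'b', 'c', 'd')} -- acde pruned (no ade)
```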


What’s Next ?


Having found all of the frequent itemsets, the Apriori algorithm is complete.

These frequent itemsets are then used to generate strong association rules, i.e., rules that satisfy both minimum support and minimum confidence.


Generating Association Rules from Frequent Itemsets


Procedure:

For each frequent itemset l, generate all nonempty proper subsets of l.

For every nonempty proper subset s of l, output the rule "s → (l − s)" if support_count(l) / support_count(s) >= min_conf, where min_conf is the minimum confidence threshold.

Let's take l = {I1, I2, I5}. Its nonempty proper subsets are {I1, I2}, {I1, I5}, {I2, I5}, {I1}, {I2}, {I5}.
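The procedure can be sketched as follows; the support counts below are illustrative values assumed for the example, not taken from these slides:

```python
from itertools import combinations

def generate_rules(l, sigma, min_conf):
    """Output every rule s -> (l - s) with confidence >= min_conf."""
    l = frozenset(l)
    rules = []
    for k in range(1, len(l)):          # nonempty proper subsets s of l
        for s in map(frozenset, combinations(l, k)):
            conf = sigma[l] / sigma[s]  # support_count(l) / support_count(s)
            if conf >= min_conf:
                rules.append((set(s), set(l - s), conf))
    return rules

# Illustrative support counts (assumed values for this sketch)
fs = frozenset
sigma = {fs({"I1", "I2", "I5"}): 2, fs({"I1", "I2"}): 4, fs({"I1", "I5"}): 2,
         fs({"I2", "I5"}): 2, fs({"I1"}): 6, fs({"I2"}): 7, fs({"I5"}): 2}

rules = generate_rules({"I1", "I2", "I5"}, sigma, min_conf=0.7)
print(len(rules))  # 3 strong rules, each with confidence 1.0
```

No extra database scan is needed here: every subset of a frequent itemset is frequent, so all the support counts were already recorded while mining.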


Methods to Improve Apriori’s Efficiency


1. A hash-based technique can be used to reduce the size of the candidate k-itemsets, Ck, for k > 1. For example, when scanning each transaction in the database to generate the frequent 1-itemsets, L1, from the candidate 1-itemsets in C1, we can also generate all of the 2-itemsets for each transaction, hash them into the buckets of a hash table structure, and increment the corresponding bucket counts:

a. H(x, y) = ((order of x)*10 + (order of y)) mod 7, e.g. H(I1, I4) = ((1*10) + 4) mod 7 = 0

b. A 2-itemset whose corresponding bucket count in the hash table is below the threshold cannot be frequent and thus should be removed from the candidate set.
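A sketch of the bucket counting; the item ordering I1 < … < I5 and the toy transactions are assumptions for illustration:

```python
from itertools import combinations

order = {f"I{n}": n for n in range(1, 6)}   # assumed item ordering

def bucket(x, y):
    """H(x, y) = ((order of x)*10 + (order of y)) mod 7, per the slide."""
    x, y = sorted((x, y), key=order.get)
    return (order[x] * 10 + order[y]) % 7

transactions = [{"I1", "I2", "I5"}, {"I2", "I4"}, {"I1", "I4"}]  # assumed
buckets = [0] * 7
for t in transactions:
    for x, y in combinations(sorted(t, key=order.get), 2):
        buckets[bucket(x, y)] += 1   # count every 2-itemset per transaction

print(bucket("I1", "I4"))  # ((1*10) + 4) mod 7 = 0
# Any 2-itemset whose bucket count is below min_sup cannot be frequent.
```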



Methods to Improve Apriori’s Efficiency contd.


2. Transaction reduction: a transaction that does not contain any frequent k-itemsets cannot contain any frequent (k+1)-itemsets. Therefore, such a transaction can be marked or removed from further consideration, because subsequent scans of the database for j-itemsets, where j > k, will not require it.
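This reduction is a one-liner; a sketch with a hypothetical L2 and toy transactions (names are mine):

```python
def reduce_db(transactions, Lk):
    """Keep only transactions containing at least one frequent k-itemset,
    since the others cannot contain any frequent (k+1)-itemset."""
    return [t for t in transactions if any(s <= t for s in Lk)]

D = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"D", "F"}]
L2 = [frozenset({"B", "E"}), frozenset({"A", "C"})]
print(len(reduce_db(D, L2)))  # 3 -- {"D", "F"} is dropped
```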


Methods to Improve Apriori’s Efficiency contd.


3. Partitioning (partitioning the data to find candidate itemsets): A partitioning technique can be used that requires just two database scans to mine the frequent itemsets. It consists of two phases.

In phase I, the algorithm subdivides the transactions of D into n non-overlapping partitions. If the minimum support threshold for transactions in D is min_sup, then the minimum support count for a partition is min_sup * the number of transactions in that partition. For each partition, all frequent itemsets within the partition are found. These are referred to as local frequent itemsets.

A local frequent itemset may or may not be frequent with respect to the entire database, D. However, any itemset that is potentially frequent with respect to D must occur as a frequent itemset in at least one of the partitions. Therefore, all local frequent itemsets are candidate itemsets with respect to D. The collection of frequent itemsets from all partitions forms the global candidate itemsets with respect to D.

In phase II, a second scan of D is conducted in which the actual support of each candidate is assessed in order to determine the global frequent itemsets
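Both phases can be sketched as follows; the brute-force local miner and the round-robin split are stand-ins for what a real implementation would use:

```python
from itertools import combinations

def local_frequent(part, min_count):
    """Brute-force Phase-I miner: local frequent itemsets in one partition."""
    items = sorted(set().union(*part))
    found = set()
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            if sum(1 for t in part if set(cand) <= t) >= min_count:
                found.add(cand)
    return found

def partition_mine(D, n, min_sup_frac):
    # Phase I: local frequent itemsets per partition -> global candidates
    parts = [D[i::n] for i in range(n)]          # n non-overlapping partitions
    candidates = set()
    for p in parts:
        candidates |= local_frequent(p, min_sup_frac * len(p))
    # Phase II: one full scan of D checks each candidate's actual support
    return {c for c in candidates
            if sum(1 for t in D if set(c) <= t) >= min_sup_frac * len(D)}

D = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}]
result = partition_mine(D, n=2, min_sup_frac=0.5)
print(("B", "C", "E") in result)  # True
```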



Methods to Improve Apriori’s Efficiency contd.


4. Sampling (mining on a subset of a given data): The basic idea of the sampling approach is to pick a random sample S of the given data D, and then search for frequent itemsets in S instead of D. In this way, we trade off some degree of accuracy against efficiency.

The sample size of S is such that the search for frequent itemsets in S can be done in main memory, and so only one scan of the transactions in S is required overall.

Because we are searching for frequent itemsets in S rather than in D, it is possible that we will miss some of the global frequent itemsets.

To lessen this possibility, we use a support threshold lower than the minimum support to find the frequent itemsets local to S.


Methods to Improve Apriori’s Efficiency contd.


5. Dynamic itemset counting (adding candidate itemsets at different points during a scan): A dynamic itemset counting technique was proposed in which the database is partitioned into blocks marked by start points.

In this variation, new candidate itemsets can be added at any start point, unlike in Apriori, which determines new candidate itemsets only immediately before each complete database scan.

The resulting algorithm requires fewer database scans than Apriori.


Mining Frequent Patterns Without Candidate Generation


Compress a large database into a compact Frequent-Pattern tree (FP-tree) structure

–highly condensed, but complete for frequent pattern mining

–avoid costly database scans

Develop an efficient, FP-tree-based frequent pattern mining method

–A divide-and-conquer methodology: decompose mining tasks into smaller ones

–Avoid candidate generation: sub-database test only!


FP-Growth Method: Construction of FP-Tree


First, create the root of the tree, labeled "null".

Scan the database D a second time. (The first scan produced the frequent 1-itemsets and the ordered item list L.)

The items in each transaction are processed in L order (i.e., descending support order).

A branch is created for each transaction, with each item carrying its support count separated by a colon.

Whenever the same node is encountered in another transaction, we simply increment the support count of the common node or prefix.

To facilitate tree traversal, an item header table is built so that each item points to its occurrences in the tree via a chain of node-links.

The problem of mining frequent patterns in the database is thus transformed into mining the FP-tree.
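A compact sketch of the construction (the header table and node-links are omitted; support ties are broken alphabetically, which may order items differently than the slides):

```python
class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_fp_tree(transactions, min_sup):
    # First scan: count item supports; keep frequent items ordered by
    # descending support (ties broken alphabetically -- my choice)
    sup = {}
    for t in transactions:
        for item in t:
            sup[item] = sup.get(item, 0) + 1
    L = [i for i in sorted(sup, key=lambda i: (-sup[i], i)) if sup[i] >= min_sup]
    rank = {item: r for r, item in enumerate(L)}
    root = FPNode(None, None)
    # Second scan: insert each transaction's frequent items in L order,
    # incrementing counts along shared prefixes
    for t in transactions:
        node = root
        for item in sorted((i for i in t if i in rank), key=rank.get):
            child = node.children.get(item)
            if child is None:
                child = FPNode(item, node)
                node.children[item] = child
            child.count += 1
            node = child
    return root, L

txns = [set("acdfgimp"), set("abcflmo"), set("bfhjo"),
        set("bcknp"), set("aceflmnp")]
root, L = build_fp_tree(txns, min_sup=3)
print(L)  # ['c', 'f', 'a', 'b', 'm', 'p']
```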


Mining the FP-Tree by Creating Conditional (sub) pattern bases


Steps:
1. Start from each frequent length-1 pattern (as an initial suffix pattern).
2. Construct its conditional pattern base, which consists of the set of prefix paths in the FP-tree co-occurring with the suffix pattern.
3. Construct its conditional FP-tree and perform mining on that tree.
4. Pattern growth is achieved by concatenating the suffix pattern with the frequent patterns generated from its conditional FP-tree.
5. The union of all frequent patterns (generated by step 4) gives the required frequent itemsets.

FP-Tree Example Continued

Out of these, only I1 and I2 are selected for the conditional FP-tree, because I3 does not satisfy the minimum support count:

For I1, the support count in the conditional pattern base = 1 + 1 = 2.

For I2, the support count in the conditional pattern base = 1 + 1 = 2.

For I3, the support count in the conditional pattern base = 1, which is less than the required min_sup of 2.

With the conditional FP-tree in hand, all frequent patterns corresponding to suffix I5 are generated by considering all possible combinations of I5 with the itemsets in the conditional FP-tree.

The same procedure is applied to suffixes I4, I3, and I1.

Note: I2 is not considered as a suffix because it has no prefix at all.


Why Is Frequent Pattern Growth Fast?


Performance study shows

–FP-growth is an order of magnitude faster than Apriori, and is also faster than tree-projection

Reasoning

–No candidate generation, no candidate test

–Use compact data structure

–Eliminate repeated database scan

–Basic operation is counting and FP-tree building


Example 1

Transaction Items

T1 a, c, d, f, g, i, m, p

T2 a, b, c, f, l, m, o

T3 b, f, h, j, o

T4 b, c, k, n, p

T5 a, c, e, f, l, m, n, p

Min support = 60%


Step 1

Item  Support
a     3
b     3
c     4
d     1
e     1
f     4
g     1
h     1
i     1
j     1
k     1
l     2
m     3
n     2
o     2
p     3

Find support of every item.


Step 2

Choose items that are above min support and arrange them in descending order.

item support

f 4

c 4

a 3

b 3

m 3

p 3


Step 3

Rewrite transactions with only those items that have more than min support, in descending order

Transaction  Items  Frequent items

T1 a, c, d, f, g, i, m, p f, c, a, m, p

T2 a, b, c, f, l, m, o f, c, a, b, m

T3 b, f, h, j, o f, b

T4 b, c, k, n, p c, b, p

T5 a, c, e, f, l, m, n, p f, c, a, m, p


Step 4

Introduce a Root Node

Root = NULL

Give the first transaction to the root.

Transaction T1: f, c, a, m, p

Tree after T1:

Root (null)
  f:1
    c:1
      a:1
        m:1
          p:1


Transaction T2: f, c, a, b, m

Tree after T2:

Root (null)
  f:2
    c:2
      a:2
        m:1
          p:1
        b:1
          m:1


Transaction T3 : f, b

Tree after T3:

Root (null)
  f:3
    c:2
      a:2
        m:1
          p:1
        b:1
          m:1
    b:1


Transaction T4: c, b, p

Tree after T4:

Root (null)
  f:3
    c:2
      a:2
        m:1
          p:1
        b:1
          m:1
    b:1
  c:1
    b:1
      p:1


Transaction T5: f, c, a, m, p

Final tree after T5:

Root (null)
  f:4
    c:3
      a:3
        m:2
          p:2
        b:1
          m:1
    b:1
  c:1
    b:1
      p:1


Mining the FP Tree

Item  Conditional pattern base  Conditional FP-tree
p     {fcam: 2, cb: 1}          c:3
m     {fca: 2, fcab: 1}         f:3, c:3, a:3
b     {fca: 1, f: 1, c: 1}      empty
a     {fc: 3}                   f:3, c:3
c     {f: 3}                    f:3
f     empty                     empty


Example 2

Transaction Id Items Purchased

t1 b, e

t2 a, b, c, e

t3 b, c, e

t4 a, c, d

t5 a

Given: min support = 40%


Step 1/2: find the support of every item and keep those above min support (40% of 5 transactions = 2).

Items  Support
a      3
b      3
c      3
d      1
e      3

Step 3: rewrite the transactions with only the frequent items, in descending order.

Transactions  Items in desc order
t1            b, e
t2            b, e, c, a
t3            b, e, c
t4            a, c
t5            a


Step 4: build the FP-tree.

Root (null)
  b:3
    e:3
      c:2
        a:1
  a:2
    c:1

Step 5: mining the FP-tree.

Item  Conditional pattern base  Conditional FP-tree
a     {bec: 1}                  empty
b     empty                     empty
c     {be: 2, a: 1}             b:2, e:2
e     {b: 3}                    b:3


Mining frequent itemsets using vertical data format


The Apriori and the FP-growth methods mine frequent patterns from a set of transactions in TID-itemset format, where TID is a transaction id and itemset is the set of items bought in transaction TID. This data format is known as horizontal data format.

Alternatively data can also be presented in item-TID_set format, where item is an item name and TID_set is the set of transaction identifiers containing the item. This format is known as vertical data format.


Procedure


First we transform the horizontally formatted data to the vertical format by scanning the data set once.

The support count of an itemset is simply the length of the TID_set of the itemset.

Starting with k=1, the frequent k-itemsets are used to construct the candidate (k+1)-itemsets: two frequent k-itemsets are joined, and the TID_set of the result is obtained by intersecting their TID_sets.

This process repeats with k incremented by 1 each time until no frequent itemsets or no candidate itemsets can be found.
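A minimal sketch of this procedure (Eclat-style; the transactions and min_sup are illustrative, not from the slides). Support counts come from TID_set lengths, and candidate (k+1)-itemsets are built by intersecting the TID_sets of frequent k-itemsets:

```python
from itertools import combinations

# Vertical format: item -> TID_set (illustrative)
vertical = {
    "a": {1, 2, 3}, "b": {1, 2, 3, 4},
    "c": {2, 3, 4}, "d": {4},
}
min_sup = 2

def mine_vertical(vertical, min_sup):
    # Frequent 1-itemsets: the TID_set length is the support count
    freq = {frozenset([i]): tids for i, tids in vertical.items()
            if len(tids) >= min_sup}
    all_freq = dict(freq)
    while freq:
        # Join pairs of frequent k-itemsets, intersect their TID_sets
        cand = {}
        for (x, tx), (y, ty) in combinations(freq.items(), 2):
            u = x | y
            if len(u) == len(x) + 1:      # they differ in exactly one item
                tids = tx & ty            # support without a database scan
                if len(tids) >= min_sup:
                    cand[u] = tids
        all_freq.update(cand)
        freq = cand                       # repeat with k incremented by 1
    return all_freq

patterns = mine_vertical(vertical, min_sup)
```

Here {a, b, c} is found with TID_set {2, 3}, while the infrequent item d is pruned at k=1.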

Page 58:

Advantages


Advantages of this algorithm:

Better than Apriori in the generation of candidate (k+1)-itemsets from frequent k-itemsets

There is no need to scan the database to find the support of (k+1)-itemsets (for k >= 1), because the TID_set of each k-itemset carries the complete information required for counting that support.

The disadvantage of this algorithm is that the TID_sets can be very long, taking substantial memory space as well as computation time for intersecting the long sets.

Page 59:

Multilevel AR


It is difficult to find interesting patterns at too primitive a level of abstraction:
 high support = too few rules
 low support = too many rules, most uninteresting

Approach: reason at a suitable level of abstraction
 A common form of background knowledge is that an attribute may be generalized or specialized according to a hierarchy of concepts
 Dimensions and levels can be efficiently encoded in transactions
 Multilevel Association Rules: rules which combine associations with a hierarchy of concepts

Page 60:

Hierarchy of concepts


Concept hierarchy (levels, top to bottom: Department, Sector, Family, Product):

Department: Food Stuff
Sector: Frozen, Refrigerated, Fresh, Bakery, Etc.
Family (under Fresh): Vegetable, Fruit, Dairy, Etc.
Product (under Fruit): Banana, Apple, Orange, Etc.

Page 61:

Multilevel AR


Fresh [support = 20%]
 Dairy [support = 6%]
 Fruit [support = 1%]
 Vegetable [support = 7%]

Fresh → Bakery [20%, 60%]
Dairy → Bread [6%, 50%]
Fruit → Bread [1%, 50%] is not valid

Page 62:

Support and Confidence of Multilevel Association Rules


Generalizing/specializing values of attributes affects support and confidence

from specialized to general: support of rules increases (new rules may become valid)

from general to specialized: support of rules decreases (rules may become invalid when their support falls under the threshold)

Page 63:

Mining Multilevel AR


Hierarchical attributes: age, salary
Association Rule: (age, young) → (salary, 40k)

age: young (18 … 29), middle-aged (30 … 60), old (61 … 80)
salary: low (10k … 40k), medium (50k … 70k), high (80k … 100k)

Candidate Association Rules:
(age, 18 … 29) → (salary, 40k)
(age, young) → (salary, low)
(age, 18 … 29) → (salary, low)

Page 64:

Mining Multilevel AR


Calculate frequent itemsets at each concept level, until no more frequent itemsets can be found
For each level use Apriori

A top-down, progressive deepening approach:
 First find high-level strong rules: fresh → bakery [20%, 60%]
 Then find their lower-level "weaker" rules: fruit → bread [6%, 50%]

Variations at mining multiple-level association rules
 Level-crossed association rules: fruit → wheat bread

Page 65:


Mining Multiple-Level Association Rules

Items often form hierarchies

Flexible support settings

Items at the lower level are expected to have lower support

Exploration of shared multi-level mining (Agrawal & Srikant@VLDB'95, Han & Fu@VLDB'95)

uniform support: Level 1 min_sup = 5%, Level 2 min_sup = 5%
reduced support: Level 1 min_sup = 5%, Level 2 min_sup = 3%

Example: Milk [support = 10%], 2% Milk [support = 6%], Skim Milk [support = 4%]
Under uniform support, Skim Milk (4% < 5%) is pruned; under reduced support it is still frequent at Level 2 (4% >= 3%).


Page 66:

Multi-level Association: Uniform Support vs. Reduced Support


Uniform Support: the same minimum support for all levels

One minimum support threshold. No need to examine itemsets containing any item whose ancestors do not have minimum support.

If the support threshold is
 too high → low-level associations are missed
 too low → too many high-level associations are generated

Reduced Support: reduced minimum support at lower levels; different strategies possible.
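The two strategies can be sketched on the Milk example (supports and thresholds taken from the earlier slide; the level encoding is illustrative):

```python
# Supports from the slide: Milk 10% at level 1; 2% Milk 6% and Skim Milk 4% at level 2
supports = {("Milk", 1): 0.10, ("2% Milk", 2): 0.06, ("Skim Milk", 2): 0.04}

def frequent(strategy):
    """Return the items that pass the per-level minimum support."""
    min_sup = {"uniform": {1: 0.05, 2: 0.05},
               "reduced": {1: 0.05, 2: 0.03}}[strategy]
    return {item for (item, level), s in supports.items()
            if s >= min_sup[level]}

# Under uniform support Skim Milk (4% < 5%) is pruned;
# under reduced support it survives at level 2 (4% >= 3%).
```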

Page 67:


Multi-level Association: Redundancy Filtering

Some rules may be redundant due to "ancestor" relationships between items.

Example
 milk → wheat bread [support = 8%, confidence = 70%]
 2% milk → wheat bread [support = 2%, confidence = 72%]

We say the first rule is an ancestor of the second rule.

A rule is redundant if its support is close to the "expected" value, based on the rule's ancestor.
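This test can be sketched as follows (the 25% share of 2% milk among milk sales and the tolerance are assumed figures, not from the slides):

```python
def expected_support(ancestor_support, descendant_share):
    """Expected support of a specialized rule, derived from its ancestor rule."""
    return ancestor_support * descendant_share

def is_redundant(actual_support, ancestor_support, descendant_share, tol=0.25):
    """Redundant if actual support is close to the value expected from the ancestor."""
    expected = expected_support(ancestor_support, descendant_share)
    return abs(actual_support - expected) <= tol * expected

# "milk -> wheat bread" has support 8%; suppose 2% milk is 25% of milk sales,
# so the expected support of "2% milk -> wheat bread" is 8% * 0.25 = 2%.
# Its actual support is also 2%, so the specialized rule is redundant.
redundant = is_redundant(actual_support=0.02, ancestor_support=0.08,
                         descendant_share=0.25)
```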


Page 68:


Mining Multi-Dimensional Association

Single-dimensional rules:
 buys(X, "milk") → buys(X, "bread")

Multi-dimensional rules: 2 or more dimensions or predicates
 Inter-dimension assoc. rules (no repeated predicates):
  age(X, "19-25") ∧ occupation(X, "student") → buys(X, "coke")
 Hybrid-dimension assoc. rules (repeated predicates):
  age(X, "19-25") ∧ buys(X, "popcorn") → buys(X, "coke")

Categorical attributes: finite number of possible values, no ordering among values: data cube approach

Quantitative attributes: numeric, implicit ordering among values: discretization, clustering, and gradient approaches
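A common reduction (a sketch; the attribute names and records are illustrative) treats each (predicate, value) pair as a single item, after which any single-dimensional frequent-itemset algorithm applies:

```python
# Each record has several dimensions (predicates) describing one customer
records = [
    {"age": "19-25", "occupation": "student", "buys": "coke"},
    {"age": "19-25", "occupation": "student", "buys": "popcorn"},
]

def to_transactions(records):
    """Encode each (predicate, value) pair as one item."""
    return [{f"{pred}({val})" for pred, val in r.items()} for r in records]

transactions = to_transactions(records)
# Frequent-itemset mining over these transactions can now yield
# inter-dimension rules such as
# age(19-25) AND occupation(student) => buys(coke)
```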


Page 69:

Interestingness Measure: Correlations (Lift)

play basketball → eat cereal [40%, 66.7%] is misleading
 The overall % of students eating cereal is 75% > 66.7%.

play basketball → not eat cereal [20%, 33.3%] is more accurate, although with lower support and confidence

Measure of dependent/correlated events: lift

lift(A, B) = P(A ∪ B) / (P(A) P(B))

If the value is less than 1, the events are negatively correlated; if greater than 1, positively correlated; if equal to 1, they are independent.

           Basketball   Not basketball   Sum (row)
Cereal        2000          1750           3750
Not cereal    1000           250           1250
Sum (col.)    3000          2000           5000

lift(B, C) = (2000/5000) / ((3000/5000) × (3750/5000)) = 0.89
lift(B, ¬C) = (1000/5000) / ((3000/5000) × (1250/5000)) = 1.33


Page 70:


Are lift and χ2 Good Measures of Correlation?

"buy walnuts → buy milk [1%, 80%]" is misleading if 85% of customers buy milk

Support and confidence are not good to represent correlations

So many interestingness measures (Tan, Kumar, Srivastava @KDD'02)

           Milk     No Milk   Sum (row)
Coffee     m, c     ~m, c     c
No Coffee  m, ~c    ~m, ~c    ~c
Sum (col.) m        ~m

lift(A, B) = P(A ∪ B) / (P(A) P(B))
all_conf(X) = sup(X) / max_item_sup(X)
coh(X) = sup(X) / |universe(X)|

DB   m, c    ~m, c   m, ~c    ~m, ~c    lift   all-conf   coh    χ2
A1   1000    100     100      10,000    9.26   0.91       0.83   9055
A2   100     1000    1000     100,000   8.44   0.09       0.05   670
A3   1000    100     10,000   100,000   9.18   0.09       0.09   8172
A4   1000    1000    1000     1000      1      0.5        0.33   0
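The lift, all-confidence and coherence columns can be reproduced from the four cell counts (a sketch; row A1 shown):

```python
def measures(mc, not_m_c, m_not_c, not_m_not_c):
    """lift, all-confidence and coherence from the four contingency cells."""
    n = mc + not_m_c + m_not_c + not_m_not_c
    sup_m = mc + m_not_c           # transactions containing m
    sup_c = mc + not_m_c           # transactions containing c
    lift = (mc / n) / ((sup_m / n) * (sup_c / n))
    all_conf = mc / max(sup_m, sup_c)   # sup(X) / max_item_sup(X)
    coh = mc / (sup_m + sup_c - mc)     # sup(X) / |universe(X)|
    return lift, all_conf, coh

# Row A1 of the table: m,c = 1000; ~m,c = 100; m,~c = 100; ~m,~c = 10,000
lift_a1, all_conf_a1, coh_a1 = measures(1000, 100, 100, 10_000)
```

Rounded to two decimals these give 9.26, 0.91 and 0.83, matching row A1.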


Page 71:


Which Measures Should Be Used?

lift and χ2 are not good measures for correlations in large transactional DBs

all-conf or coherence could be good measures (Omiecinski@TKDE’03)

Both all-conf and coherence have the downward closure property

Efficient algorithms can be derived for mining (Lee et al. @ICDM’03sub)


Page 72:


Constraint-based (Query-Directed) Mining

Finding all the patterns in a database autonomously? — unrealistic!

The patterns could be too many but not focused!

Data mining should be an interactive process

User directs what to be mined using a data mining query language (or a graphical user interface)

Constraint-based mining

User flexibility: provides constraints on what to be mined

System optimization: explores such constraints for efficient mining—constraint-based mining


Page 73:


Constraints in Data Mining

Knowledge type constraint:

classification, association, etc.

Data constraint — using SQL-like queries

find product pairs sold together in stores in Mumbai in Dec.’15

Dimension/level constraint

in relevance to region, price, brand, customer category

Rule (or pattern) constraint

small sales (price < $10) triggers big sales (sum > $200)

Interestingness constraint

strong rules: min_support ≥ 3%, min_confidence ≥ 60%
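A sketch of how such a rule constraint can be evaluated (the item prices are illustrative; in practice anti-monotone constraints are pushed into the mining process rather than applied only as a post-filter):

```python
# Illustrative item prices
price = {"pen": 2, "notebook": 5, "printer": 250, "laptop": 900}

def satisfies_rule_constraint(antecedent, consequent):
    """Rule constraint: small sales (price < $10) trigger big sales (sum > $200)."""
    return (all(price[i] < 10 for i in antecedent)
            and sum(price[i] for i in consequent) > 200)

ok = satisfies_rule_constraint({"pen", "notebook"}, {"printer"})   # True
bad = satisfies_rule_constraint({"printer"}, {"pen"})              # False
```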


Page 74:


Constrained Mining vs. Constraint-Based Search

Constrained mining vs. constraint-based search/reasoning

Both are aimed at reducing search space

Finding all patterns satisfying constraints vs. finding some (or one) answer in constraint-based search in AI

Constraint-pushing vs. heuristic search

It is an interesting research problem on how to integrate them

Constrained mining vs. query processing in DBMS

Database query processing requires finding all matching answers

Constrained pattern mining shares a similar philosophy as pushing selections deeply in query processing
