Association Rules
Mining Association Rules between Sets of Items in Large Databases (R. Agrawal, T. Imielinski & A. Swami) 1993.
Fast Algorithms for Mining Association Rules
(R. Agrawal & R. Srikant) 1994.
Basket Data
Retail organizations, e.g., supermarkets, collect and store massive amounts of sales data, called basket data. A record typically consists of the transaction date and the items bought.
Alternatively, basket data may consist of the items bought by a customer over a period of time.
Example Association Rule
90% of transactions that purchase bread and butter also purchase milk
Antecedent: bread and butter
Consequent: milk
Confidence factor: 90%
Example Queries
Find all rules that have “Uludağ Gazozu” as consequent.
Find all rules that have “Diet Coke” in the antecedent.
Find all rules that have “sausage” in the antecedent and “mustard” in the consequent.
Find all rules relating items located on shelves A and B in the store.
Find the “best” (most confident) k rules that have “Uludağ Gazozu” in the consequent.
Formal Model
I = {i1, i2, …, im}: set of literals (items)
D: database of transactions
T ∈ D: a transaction, T ⊆ I
TID: unique identifier, associated with each T
X: a subset of I
T contains X if X ⊆ T.
Formal Model (Cont.)
Association rule: X ⇒ Y, where X ⊂ I, Y ⊂ I and X ∩ Y = ∅.
Rule X ⇒ Y has confidence c in D if c% of the transactions in D that contain X also contain Y.
Rule X ⇒ Y has support s in D if s% of the transactions in D contain X ∪ Y.
Example
I: itemset
{cucumber, parsley, onion, tomato, salt, bread, olives, cheese, butter}
D: set of transactions
1 {cucumber, parsley, onion, tomato, salt, bread}
2 {tomato, cucumber, parsley}
3 {tomato, cucumber, olives, onion, parsley}
4 {tomato, cucumber, onion, bread}
5 {tomato, salt, onion}
6 {bread, cheese}
7 {tomato, cheese, cucumber}
8 {bread, butter}
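Support and confidence from the definitions above can be computed directly on this example. A minimal Python sketch (the names `support` and `confidence` are my own):

```python
# The eight transactions of the example database D.
D = [
    {"cucumber", "parsley", "onion", "tomato", "salt", "bread"},
    {"tomato", "cucumber", "parsley"},
    {"tomato", "cucumber", "olives", "onion", "parsley"},
    {"tomato", "cucumber", "onion", "bread"},
    {"tomato", "salt", "onion"},
    {"bread", "cheese"},
    {"tomato", "cheese", "cucumber"},
    {"bread", "butter"},
]

def support(X, D):
    """Fraction of transactions in D that contain every item of X."""
    return sum(X <= t for t in D) / len(D)

def confidence(X, Y, D):
    """Confidence of the rule X => Y: support(X u Y) / support(X)."""
    return support(X | Y, D) / support(X, D)
```

For instance, tomato appears in 6 of the 8 transactions, so support({tomato}) = 75%, and every transaction containing cucumber also contains tomato, so confidence(cucumber ⇒ tomato) = 100%.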
Problem
Given a set of transactions, generate all association rules whose support and confidence are greater
than the user-specified minimum support (minsup) and minimum confidence (minconf).
Problem decomposition
1. Find all itemsets that have transaction support above minimum support.
2. Use the large itemsets to generate the association rules:
2.1. For every large itemset l, find all its non-empty subsets.
2.2. For every subset a, output the rule a ⇒ (l − a) if
support(l) / support(a) ≥ minconf.
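Step 2 above can be sketched in Python. This assumes step 1 has produced a `supports` map from each large itemset (as a frozenset) to its support; `gen_rules` is a hypothetical name:

```python
from itertools import combinations

def gen_rules(large, supports, minconf):
    """For every large itemset l, emit a => (l - a) whenever
    support(l) / support(a) >= minconf.
    `large`: iterable of frozensets; `supports`: frozenset -> support."""
    rules = []
    for l in large:
        for r in range(1, len(l)):                    # all proper non-empty subsets
            for a in map(frozenset, combinations(l, r)):
                conf = supports[l] / supports[a]
                if conf >= minconf:
                    rules.append((a, l - a, conf))
    return rules
```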
Discovering Large Itemsets
Apriori and AprioriTid algorithms:
Basic intuition:
Any subset of a large itemset must be large
An itemset having k items can be generated by joining large itemsets having k-1 items, and deleting those that contain any subset that is not large.
Def. k-itemset: an itemset with k items.
Apriori Algorithm
L1 = { large 1-itemsets }
for ( k = 2; Lk-1 ≠ ∅; k++ ) do begin
    Ck = apriori-gen(Lk-1)   // new candidates
    forall transactions t ∈ D do begin
        Ct = subset(Ck, t)   // candidates contained in t
        forall candidates c ∈ Ct do
            c.count++
    end
    Lk = { c ∈ Ck | c.count ≥ minsup }
end
Return ∪k Lk
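One pass of the loop above can be sketched in Python; `apriori_pass` is a hypothetical name, and the naive scan over all candidates stands in for the subset()/hash-tree machinery described later:

```python
def apriori_pass(Ck, D, minsup_count):
    """One pass of the Apriori loop: count each candidate's occurrences
    in D and keep those meeting the minimum support count.
    Ck: set of frozensets; D: list of transaction sets."""
    counts = {c: 0 for c in Ck}
    for t in D:
        for c in Ck:            # naive subset test over all candidates
            if c <= t:
                counts[c] += 1
    return {c for c in Ck if counts[c] >= minsup_count}
```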
Apriori Candidate Generation
apriori-gen(Lk-1):
Returns a superset of the set of all large k-itemsets.
First, select two itemsets p, q from Lk-1 such that the first k-2 items of p and q are the same; form a new candidate k-itemset c as the
k-2 common items + the 2 differing items.
Then prune those c such that some (k-1)-subset of c is not in Lk-1.
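The join and prune steps can be sketched as follows (`apriori_gen` is a hypothetical name; itemsets are frozensets):

```python
from itertools import combinations

def apriori_gen(L_prev, k):
    """Generate candidate k-itemsets from the large (k-1)-itemsets:
    join pairs agreeing on their first k-2 items, then prune any
    candidate with a (k-1)-subset that is not large."""
    sorted_prev = [tuple(sorted(l)) for l in L_prev]
    cands = set()
    for p in sorted_prev:                       # join step
        for q in sorted_prev:
            if p[:k - 2] == q[:k - 2] and p[k - 2] < q[k - 2]:
                cands.add(frozenset(p) | {q[k - 2]})
    # prune step: every (k-1)-subset must itself be large
    return {c for c in cands
            if all(frozenset(s) in L_prev for s in combinations(c, k - 1))}
```

For example, from L2 = {{1,3}, {2,3}, {2,5}, {3,5}} the join yields {2,3,5} (from {2,3} and {2,5}); it survives pruning because {2,3}, {2,5} and {3,5} are all large.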
Apriori Algorithm (cont.)
Go through all transactions in D, incrementing the counts of the candidates in Ck contained in each transaction.
Lk is the set of all large itemsets in Ck.
For minsup s = 30%,
L = {{bread}, {cheese}, {cucumber}, {onion}, {parsley}, {salt}, {tomato},
{cucumber, onion}, {cucumber, parsley}, {cucumber, tomato}, {onion, tomato}, {parsley, tomato},
{cucumber, parsley, tomato}}
Subset Function
Subset(Ck, t): candidate itemsets contained in t.
Candidate itemsets in Ck are stored in a hash-tree:
Leaf node: contains a list of itemsets
Interior node: contains a hash table; each bucket points to another node
Depth of root = 1; buckets of a node at depth d point to nodes at depth d+1
Subset Function (cont.)
Construction of hash-tree for Ck
To add itemset c: start from the root and go down until reaching a leaf node. At an interior node at depth d, to choose the
branch to follow, apply a hash function to the d-th item of c.
All nodes are initially created as leaves. A leaf is converted into an interior node when the
number of itemsets it holds exceeds a threshold.
Subset Function (cont.)
After constructing the hash-tree for Ck, the subset function finds the candidates contained in t as follows:
At a leaf, find the itemsets contained in t.
At an interior node reached by hashing on item i, hash on each item that comes after i in t, and recursively apply the procedure to the node in the corresponding bucket.
At the root, hash on every item in t.
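The hash-tree described above might be sketched as follows. This is a simplified illustration, not the paper's implementation: the class and function names, the leaf capacity, and the bucket count are all assumed values, and itemsets are sorted tuples:

```python
class HashTreeNode:
    """Node of the hash-tree holding candidate itemsets (sorted tuples)."""
    def __init__(self):
        self.itemsets = []    # filled while the node is a leaf
        self.buckets = None   # item-hash -> child node, once interior

LEAF_CAPACITY = 3             # split threshold (an assumed value)
NUM_BUCKETS = 7               # hash-table width (an assumed value)

def insert(node, c, depth=0):
    """Add candidate c, splitting a full leaf into an interior node."""
    if node.buckets is None:                       # still a leaf
        node.itemsets.append(c)
        if len(node.itemsets) > LEAF_CAPACITY and depth < len(c):
            stored, node.itemsets, node.buckets = node.itemsets, [], {}
            for s in stored:                       # redistribute contents
                insert(node, s, depth)
    else:                                          # interior: hash on the depth-th item
        h = hash(c[depth]) % NUM_BUCKETS
        insert(node.buckets.setdefault(h, HashTreeNode()), c, depth + 1)

def subset(node, t, start=0):
    """Return the candidates stored in the tree that are contained in
    transaction t (a sorted tuple of items)."""
    if node.buckets is None:                       # leaf: check containment
        return [c for c in node.itemsets if set(c) <= set(t)]
    out, seen = [], set()
    for i in range(start, len(t)):                 # hash on each remaining item
        h = hash(t[i]) % NUM_BUCKETS
        if h in node.buckets and h not in seen:    # visit each child once
            seen.add(h)
            out += subset(node.buckets[h], t, i + 1)
    return out
```

The containment check at the leaves filters out false positives, so the hashing only has to narrow down which candidates are worth checking.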
AprioriTid Algorithm
Uses apriori-gen to generate candidates.
The database D is not used for counting support after the first pass; the set C̄k is used for this purpose.
Elements of C̄k are of the form <TID, {Xk}>, where
each Xk is a potentially large k-itemset present in the transaction with identifier TID.
The member of C̄k corresponding to transaction t is
<t.TID, {c ∈ Ck | c contained in t}>
AprioriTid Algorithm (cont.)
L1 = { large 1-itemsets }
C̄1 = database D
for ( k = 2; Lk-1 ≠ ∅; k++ ) do begin
    Ck = apriori-gen(Lk-1)   // new candidates
    C̄k = ∅
    forall entries t ∈ C̄k-1 do begin
        // determine candidates in Ck contained in the transaction t.TID
        Ct = { c ∈ Ck | the two (k-1)-subsets of c obtained by deleting
               its last or its next-to-last item are both in t.set-of-itemsets }
        forall candidates c ∈ Ct do
            c.count++
        if ( Ct ≠ ∅ ) then C̄k += <t.TID, Ct>
    end
    Lk = { c ∈ Ck | c.count ≥ minsup }
end
Return ∪k Lk
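The whole algorithm can be sketched in Python. This is a simplified illustration (`apriori_tid` is a hypothetical name; the join step simply takes unions of size k rather than the prefix-based join, and the membership test checks all (k-1)-subsets against the previous pass's entry):

```python
from itertools import combinations

def apriori_tid(D, minsup_count):
    """Sketch of AprioriTid.  After pass 1 the database is no longer
    scanned; each entry of Ck_bar pairs a TID with the candidate
    itemsets present in that transaction."""
    # Pass 1: large 1-itemsets; C1_bar is the database itself, item by item.
    counts = {}
    for t in D:
        for i in t:
            counts[frozenset([i])] = counts.get(frozenset([i]), 0) + 1
    large = [{c for c, n in counts.items() if n >= minsup_count}]
    Ck_bar = [(tid, {frozenset([i]) for i in t}) for tid, t in enumerate(D)]
    k = 2
    while large[-1]:
        prev = large[-1]
        # Candidate generation: join (unions of size k) + prune.
        Ck = {p | q for p in prev for q in prev if len(p | q) == k}
        Ck = {c for c in Ck
              if all(frozenset(s) in prev for s in combinations(c, k - 1))}
        counts = {c: 0 for c in Ck}
        new_bar = []
        for tid, itemsets in Ck_bar:
            # c is contained in the transaction iff all its (k-1)-subsets
            # appear in the previous pass's entry for this TID.
            Ct = {c for c in Ck
                  if all(frozenset(s) in itemsets
                         for s in combinations(c, k - 1))}
            for c in Ct:
                counts[c] += 1
            if Ct:
                new_bar.append((tid, Ct))
        Ck_bar = new_bar
        large.append({c for c in Ck if counts[c] >= minsup_count})
        k += 1
    return set().union(*large)
```

Transactions whose entry becomes empty drop out of Ck_bar entirely, which is why later passes of AprioriTid touch less data than rescanning D.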
Example
minsup = 2 transactions (s = 50%)

D:                L1:             C̄1:
TID  Items        Itemset  Sup    TID  Set-of-Itemsets
100  1 3 4        {1}      2      100  {{1},{3},{4}}
200  2 3 5        {2}      3      200  {{2},{3},{5}}
300  1 2 3 5      {3}      3      300  {{1},{2},{3},{5}}
400  2 5          {5}      3      400  {{2},{5}}

C̄2 = {<100, {{1,3}}>,
      <200, {{2,3},{2,5},{3,5}}>,
      <300, {{1,2},{1,3},{1,5},{2,3},{2,5},{3,5}}>,
      <400, {{2,5}}>}
L2 = {{1,3}, {2,3}, {2,5}, {3,5}}
Performance
Example:
HW: IBM RS/6000, 33 MHz
Dataset:
Number of items: 1000
Avg. size of transactions: 10
Avg. size of maximal potentially large itemsets: 4
Number of transactions: 100K
Data size: 4.4 MBytes
Apriori vs. AprioriTid
Per pass execution times of Apriori and AprioriTid
Average size of transactions: 10
Average size of maximal potentially large itemsets: 4
Number of transactions: 100K
minsup=0.75%
AprioriHybrid Algorithm
Uses Apriori in the initial passes and switches to AprioriTid when it expects that the set C̄k at the end of the pass will fit in memory.
Conclusions and Future Work
The Apriori, AprioriTid and AprioriHybrid algorithms were presented.
Future work: use is-a hierarchies
(e.g., beef is-a red-meat is-a meat); use quantities of items bought.
This work is in the context of the Quest project at IBM.