Association Rule Mining on Web
What Is Association Rule Mining?
Association rule mining: finding interesting relationships among items (or objects, events) in a given data set.
Example: basket data analysis. Given a database of transactions, where each transaction is a list of items (purchased by a customer in one store visit, or browsed by a user in one visit to a web site), you may wonder: which groups of items are customers likely to purchase together on a given trip to the store, or which groups of pages are users likely to browse together in a given visit to the site?
You may find: computer ⇒ financial_management_software [support = 2%, confidence = 60%]
Action: place financial management software close to the computer display to increase sales of both items.
Data Mining Techniques - Association Rules
Supermarket example
Transaction ID   Items Purchased
1                butter, bread, milk, beer, diaper
2                bread, milk, beer, egg, diaper
3                Coke, film, bread, butter, milk
…                …

An association rule will look like:
“If a customer buys diapers, in 60% of cases he/she also buys beer. This happens in 3% of all transactions.”
60%: confidence
3%: support
Association Rule: Basic Concepts
Association rule:
Rule form: “A ⇒ B [support, confidence]”, where A ⊂ I, B ⊂ I, A ∩ B = ∅, and I is a set of items (objects or events).
support: the probability that a transaction contains both A and B, i.e., P(AB).
confidence: the conditional probability that a transaction containing A also contains B, i.e., P(B|A).
Examples:
buys(x, “diapers”) ⇒ buys(x, “beers”) [0.5%, 60%]
major(x, “CS”) ^ takes(x, “DB”) ⇒ grade(x, “A”) [1%, 75%]
Applications
? ⇒ a particular product (what the store should do to boost sales of that product)
Home Electronics ⇒ ? (what other products the store should stock up on)
Attached mailing in direct marketing
Association Rule: Basic Concepts
Given a minimum support threshold (min_sup) and a minimum confidence threshold (min_conf), an association rule is strong if it satisfies both min_sup and min_conf.
Itemset: a set of items. k-itemset: an itemset that contains k items.
{computer, financial_management_software} is a 2-itemset.
Support count (frequency, count) of an itemset: the number of transactions that contain the itemset.
Frequent itemset: an itemset that satisfies the minimum support count, where minimum support count = min_sup × total number of transactions in the data set.
Rule Measures: Support and Confidence

Transaction ID   Items Bought
2000             A, B, C
1000             A, C
4000             A, D
5000             B, E, F

support(A ⇒ B) = P(A ∧ B)
confidence(A ⇒ B) = P(B|A)

With min_sup = min_conf = 50%:
A ⇒ C       (support 50%, confidence 66.6%)   strong
C ⇒ A       (support 50%, confidence 100%)    strong
A & C ⇒ B   (support 25%, confidence 50%)
A & B ⇒ E   (support 0%, confidence 0%)
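As a quick check on the numbers above, here is a minimal Python sketch (the function names support and confidence are ours, not from the slides) that recomputes the four rules from the transaction table:

# Transaction database from the slide above.
transactions = {
    2000: {"A", "B", "C"},
    1000: {"A", "C"},
    4000: {"A", "D"},
    5000: {"B", "E", "F"},
}

def support(itemset, db):
    """Fraction of transactions containing every item in itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in db.values()) / len(db)

def confidence(antecedent, consequent, db):
    """P(consequent | antecedent) = support(A and B) / support(A)."""
    a = support(antecedent, db)
    return support(set(antecedent) | set(consequent), db) / a if a else 0.0

for lhs, rhs in [({"A"}, {"C"}), ({"C"}, {"A"}),
                 ({"A", "C"}, {"B"}), ({"A", "B"}, {"E"})]:
    s = support(lhs | rhs, transactions)
    c = confidence(lhs, rhs, transactions)
    print(f"{lhs} => {rhs}: support={s:.0%}, confidence={c:.1%}")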
How to Mine Association Rules
A two-step process:
1. Find all frequent itemsets.
2. Generate strong association rules from the frequent itemsets.
Example: given min_sup = 50% and min_conf = 50%

Transaction ID   Items Bought
2000             A, B, C
1000             A, C
4000             A, D
5000             B, E, F

Frequent Itemset   Support
{A}                75%
{B}                50%
{C}                50%
{A, C}             50%

Generate strong rules:
{A} → {C} [support = 50%, confidence = 66.6%]
{C} → {A} [support = 50%, confidence = 100%]
Finding Frequent Itemsets: the Key Step
The Apriori principle: any subset of a frequent itemset must be frequent.
I.e., if {A, B} is a frequent itemset, both {A} and {B} must be frequent itemsets.
Find the frequent itemsets: the sets of items that have minimum support. Iteratively find frequent itemsets with cardinality from 1 to k (k-itemsets), that is:
find all frequent 1-itemsets;
find all frequent 2-itemsets using the frequent 1-itemsets;
…
find all frequent k-itemsets using the frequent (k−1)-itemsets.
The Apriori Algorithm
Based on the Apriori principle, use an iterative, level-wise approach with candidate generation.
Pseudo-code (Ck: the set of candidate itemsets of size k; Lk: the set of frequent itemsets of size k):

L1 = {frequent 1-itemsets};    // scan database to find all frequent 1-itemsets
for (k = 1; Lk ≠ ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        // scan database to calculate support for each itemset in Ck+1
        increment the count of all candidates in Ck+1 that are contained in t;
    Lk+1 = candidates in Ck+1 with min_support;
end
return ∪k Lk;

Generation process: Lk → Ck+1 → Lk+1
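This pseudo-code translates almost directly into Python. Below is a minimal sketch (the function name apriori and all variable names are ours); it performs the join and prune steps naively, which is fine for small examples:

from itertools import combinations

def apriori(transactions, min_sup):
    """Return {itemset: count} for all frequent itemsets.

    transactions: a list of sets of items; min_sup: a fraction in [0, 1].
    """
    min_count = min_sup * len(transactions)
    items = {i for t in transactions for i in t}
    # L1: frequent 1-itemsets (first database scan).
    Lk = {frozenset([i]): c
          for i, c in ((i, sum(i in t for t in transactions)) for i in items)
          if c >= min_count}
    frequent = dict(Lk)
    k = 1
    while Lk:
        # Join step: Ck+1 = unions of pairs from Lk that have size k+1 ...
        candidates = {a | b for a in Lk for b in Lk if len(a | b) == k + 1}
        # ... prune step: drop candidates having an infrequent k-subset
        # (the Apriori principle).
        candidates = {c for c in candidates
                      if all(frozenset(s) in Lk for s in combinations(c, k))}
        # Scan the database to count each surviving candidate.
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        Lk = {c: n for c, n in counts.items() if n >= min_count}
        frequent.update(Lk)
        k += 1
    return frequent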
The Apriori Algorithm – Example (minimum support = 0.5)

Database D:
TID   Items
100   1 3 4
200   2 3 5
300   1 2 3 5
400   2 5

Scan D → C1:
itemset   sup.
{1}       2
{2}       3
{3}       3
{4}       1
{5}       3

L1:
itemset   sup.
{1}       2
{2}       3
{3}       3
{5}       3

C2 (from L1): {1 2}, {1 3}, {1 5}, {2 3}, {2 5}, {3 5}

Scan D → C2 with counts:
itemset   sup
{1 2}     1
{1 3}     2
{1 5}     1
{2 3}     2
{2 5}     3
{3 5}     2

L2:
itemset   sup
{1 3}     2
{2 3}     2
{2 5}     3
{3 5}     2

C3 (from L2): {2 3 5}
Scan D → {2 3 5} has sup 2, so L3 = { {2 3 5} }.
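Running the apriori sketch above on this database reproduces the example (minimum support 0.5, i.e., a count of at least 2):

D = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
print(apriori(D, 0.5))
# Frequent itemsets (printing order may vary):
# {1}: 2, {2}: 3, {3}: 3, {5}: 3,
# {1, 3}: 2, {2, 3}: 2, {2, 5}: 3, {3, 5}: 2, and {2, 3, 5}: 2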
Apriori Algorithm (Flow Chart)
(Ck: set of candidate k-itemsets; Lk: set of frequent k-itemsets)

1. L1 = set of frequent 1-itemsets (scan DB); k = 1.
2. If Lk = ∅, output L1, …, Lk−1 and stop; otherwise continue.
3. Compute candidate set Ck+1: join Lk with Lk, then prune Ck+1 based on the Apriori principle.
4. Scan DB to get Lk+1 from Ck+1.
5. k = k + 1; go to step 2.

Each iteration thus consists of candidate set generation (Ck+1 from Lk), pruning of the candidate set, and a DB scan to get Lk+1 from Ck+1.
Generate Association Rules from Frequent Itemsets

Naïve algorithm:

for each frequent itemset l do
    for each nonempty proper subset c of l do
        if support(l) / support(l − c) >= min_conf then
            output the rule (l − c) → c,
            with support = support(l) and confidence = support(l) / support(l − c)

Here support(l) = freq(l) / total number of transactions, and the confidence of (l − c) → c is P(c | l − c).

Just an example: for a frequent itemset l = {I1, I2, I5}, the nonempty proper subsets of l are {I1, I2}, {I1, I5}, {I2, I5}, {I1}, {I2}, {I5}. The resulting association rules are:

I1 ∧ I2 → I5   (confidence 50%)
I1 ∧ I5 → I2   (confidence 100%)
I2 ∧ I5 → I1   (confidence 100%)
I1 → I2 ∧ I5   (confidence 33%)
I2 → I1 ∧ I5   (confidence 29%)
I5 → I1 ∧ I2   (confidence 100%)

If the minimum confidence threshold is 70%, only 3 rules are output.
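A direct Python rendering of the naïve algorithm, as a sketch (the name generate_rules is ours; it expects a map from frozenset itemsets to relative supports, such as the output of the apriori sketch above divided by the number of transactions):

from itertools import combinations

def generate_rules(supports, min_conf):
    """Yield (antecedent, consequent, support, confidence) for strong rules."""
    for l, sup_l in supports.items():
        if len(l) < 2:
            continue
        for k in range(1, len(l)):              # sizes of proper subsets of l
            for c in map(frozenset, combinations(l, k)):
                # confidence of (l - c) -> c is support(l) / support(l - c)
                conf = sup_l / supports[l - c]
                if conf >= min_conf:
                    yield (l - c, c, sup_l, conf)

For the A, B, C example earlier, calling generate_rules({frozenset("A"): 0.75, frozenset("B"): 0.5, frozenset("C"): 0.5, frozenset("AC"): 0.5}, 0.5) yields exactly {C} → {A} [50%, 100%] and {A} → {C} [50%, 66.6%].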
Is Apriori Fast Enough? Performance Bottlenecks
The core of the Apriori algorithm:
Use frequent k-itemsets to generate candidate frequent (k+1)-itemsets.
Use database scans and pattern matching to collect counts for the candidate itemsets, generating the frequent (k+1)-itemsets from the (k+1)-candidate set.
The bottleneck of Apriori: candidate generation.
Huge candidate sets: 10^4 frequent 1-itemsets will generate more than 10^7 candidate 2-itemsets.
To discover a frequent pattern of size 100, e.g., {a1, a2, …, a100}, one needs to generate 2^100 ≈ 10^30 candidates.
Multiple scans of the database: Apriori needs n scans, where n is the length of the longest pattern.
Web Usage Mining - Clustering
User clusters: discover groups of users exhibiting similar browsing patterns.
Page clusters: discover groups of pages having related content.
Web Usage Mining - Classification
Classification examples:
Clients from state or government agencies who visit the site tend to be interested in the page /company/product1.
50% of clients who placed an online order in /company/product2 were in the 20-25 age group and lived on the West Coast.
Web Usage Mining - Sequential Patterns
Sequential pattern examples:
30% of clients who visited /company/products had done a search in Yahoo on keyword w within the past week.
60% of clients who placed an online order in /company/product1 also placed an online order in /company/product4 within 15 days.
Pattern Analysis
Web Usage Mining - Pattern Analysis
Not all patterns are interesting
Some are downright misleading
The goal of pattern analysis is to filter out patterns that are not useful or interesting.
Web Mining Applications
E-commerce
Generating user profiles
Targeted advertising
Fraud detection
Similar image retrieval
Building adaptive web sites by user profiling
Pattern Analysis Example 1
Interestingness measures:
Use interestingness measures to rank discovered rules or sequential patterns.
Pruning:
Prune discovered rules and patterns if they are contained by others with higher or comparable interestingness values.
Prune uninteresting rules according to domain background knowledge.
Interestingness of Discovered Patterns
Interestingness:
Different people define interestingness differently.
What is interesting depends on the type of knowledge discovered and on the user's beliefs.
One definition (not necessarily suitable for all situations): a pattern is interesting if it is easily understood by humans, valid on new or test data with some degree of certainty, potentially useful, and novel, or if it validates a hypothesis that the user seeks to confirm.
Interestingness measures:
Objective: based on statistics and depending on the type of pattern, e.g., support and confidence for association rules, classification accuracy for classification rules.
Subjective: based on the user's beliefs about the data, e.g., unexpectedness, novelty, actionability, or confirmation of a hypothesis.
Interestingness Measures
The following measures are used to evaluate:
an association rule A ⇒ B, where A and B are itemsets;
a sequential pattern ⟨A B⟩, where B is the last element in the sequence and A is the subsequence in front of B.
Support:      P(AB)
Confidence:   P(B|A)
IS:           P(AB) / √(P(A)P(B))
RI:           P(AB) − P(A)P(B)
CV:           P(A)P(¬B) / P(A¬B)
MI:           log2( P(AB) / (P(A)P(B)) )
MD:           log( [P(A|B)(1 − P(A|¬B))] / [P(A|¬B)(1 − P(A|B))] )
C2:           [(P(B|A) − P(B)) / (1 − P(B))] × [(1 + P(A|B)) / 2]
IM:           −Support(A ⇒ B) × log2 P(AB)
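A small Python helper, as a sketch (the names are ours, and no guards are included for degenerate probabilities), that computes these measures from P(A), P(B), and P(AB):

import math

def measures(p_a, p_b, p_ab):
    """Interestingness measures for a rule A => B, given P(A), P(B), P(AB)."""
    conf = p_ab / p_a                    # confidence: P(B|A)
    p_a_b = p_ab / p_b                   # P(A|B)
    p_nb = 1 - p_b                       # P(not B)
    p_anb = p_a - p_ab                   # P(A and not B)
    p_a_nb = p_anb / p_nb                # P(A | not B)
    return {
        "support": p_ab,
        "confidence": conf,
        "IS": p_ab / math.sqrt(p_a * p_b),
        "RI": p_ab - p_a * p_b,
        "CV": p_a * p_nb / p_anb,
        "MI": math.log2(p_ab / (p_a * p_b)),
        "MD": math.log(p_a_b * (1 - p_a_nb) / (p_a_nb * (1 - p_a_b))),
        "C2": ((conf - p_b) / p_nb) * ((1 + p_a_b) / 2),
        "IM": -p_ab * math.log2(p_ab),
    }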
Pattern Analysis Example 2

Problem of association rule mining (the rule quantity problem):
− A large number of rules are often generated, and many of them are similar or redundant.

Solutions:
− constraint-based mining
− post-pruning rules
− grouping rules

Two grouping algorithms:
− Objective grouping
  ♦ group rules according to rule structure; no domain knowledge is used
− Subjective grouping
  ♦ domain knowledge is used
Objective Grouping
Basic idea:
− Recursively group rules that have common items in their antecedents and consequents.
− At each level of the recursion, select the cluster with the biggest size.
Result: a tree of clusters
[Figure: a tree of clusters. The root “All” is split into clusters such as a→d, b→a, d→c, and “other”; the a→d cluster is split further into sub-clusters such as ab→d (covering rules like ab→cd and abe→d), ac→d, and “other” (e.g., bcd→ae).]
Subjective Grouping
Basic idea:
− Group rules according to the semantic distance between rules.
Domain knowledge used: a tree-structured semantic network of objects, i.e., a taxonomy or is-a hierarchy of objects.
An association rule can relate objects at both leaf and non-leaf levels.
[Figure: an is-a hierarchy of products. Cloth and Footwear sit below the root; Cloth covers Outerwear and Shirts, with Jackets and Ski Pants under Outerwear; Footwear covers Shoes and Hiking Boots. The example rule Cloth → Shoes relates objects at non-leaf levels.]
Tagging the Semantic Tree
Objectives:
− Enable calculation of the semantic distance between rules by assigning a Relative Semantic Position (RSP) to each node of the tree.
− Two objects that are semantically closer to each other should be assigned two closer RSPs.
Definition of the RSP of a node: (hpos, vpos), where
hpos: the horizontal position of the node (its position in the in-order traversal sequence of the balanced tree);
vpos: the vertical position of the node (its level in the tree).
[Figure: the taxonomy tree tagged with RSPs: the root at (8, 1); (4, 2) and (12, 2) on the second level; (2, 3), (6, 3), (10, 3), and (14, 3) on the third level; (1, 4), (3, 4), and (9, 4) on the fourth level.]
Representing Rules with RSPs
Replace the objects in a rule with their RSPs: {Jacket, Shirt} → {Shoes} can be represented as {(1, 4), (6, 3)} → {(10, 3)}.
Calculate the mean RSPs of the antecedent and the consequent: the above rule becomes (3.5, 3.5) → (10, 3).
[Figure: the tagged taxonomy tree with Jacket at (1, 4), Shirt at (6, 3), and Shoes at (10, 3) highlighted.]
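A minimal Python sketch of tagging and rule representation (the tree encoding, the root name, and the function names are ours). It exploits the fact that in a balanced binary tree of height H, the node at depth d (root = 1) occupying left-to-right slot i on its level has in-order position 2^(H−d) × (2i + 1), which reproduces all the RSPs in the figures above:

# Taxonomy as {node: (left_child, right_child)}; leaves are absent.
TREE = {
    "Products":  ("Cloth", "Footwear"),      # hypothetical root name
    "Cloth":     ("Outerwear", "Shirts"),
    "Outerwear": ("Jackets", "Ski Pants"),
    "Footwear":  ("Shoes", "Hiking Boots"),
}
HEIGHT = 4  # number of levels in the balanced tree

def assign_rsps(node="Products", depth=1, slot=0, rsps=None):
    """Assign (hpos, vpos) to every node via the balanced-tree embedding."""
    if rsps is None:
        rsps = {}
    rsps[node] = (2 ** (HEIGHT - depth) * (2 * slot + 1), depth)
    left, right = TREE.get(node, (None, None))
    if left:
        assign_rsps(left, depth + 1, 2 * slot, rsps)
    if right:
        assign_rsps(right, depth + 1, 2 * slot + 1, rsps)
    return rsps

def rule_segment(antecedent, consequent, rsps):
    """Represent a rule by (mean RSP of antecedent, mean RSP of consequent)."""
    def mean(points):
        return (sum(p[0] for p in points) / len(points),
                sum(p[1] for p in points) / len(points))
    return (mean([rsps[o] for o in antecedent]),
            mean([rsps[o] for o in consequent]))

rsps = assign_rsps()
print(rule_segment(["Jackets", "Shirts"], ["Shoes"], rsps))
# ((3.5, 3.5), (10.0, 3.0))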
Representing Rules with Line Segments
A rule can then be represented by a directed line segment in a two-dimensional space. For example, (3.5, 3.5) → (10, 3) is represented as:
[Figure: the rule (3.5, 3.5) → (10, 3) drawn as a directed line segment in the hpos-vpos plane, alongside the RSPs of the tree nodes.]
Grouping Rules
The problem of grouping rules thus becomes the problem of grouping directed line segments.
Objective of clustering: group line segments that are close to each other and have similar length and orientation.
A standard clustering algorithm can be used with the distance function defined as:
Distance(s1, s2) = 1 − cos(s1, s2) + Ndist(c1, c2) + Ndiff(length(s1), length(s2))
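The slides do not define c1, c2, Ndist, or Ndiff, so the following Python sketch makes its own assumptions: c1 and c2 are the segment midpoints, Ndist is the Euclidean distance between them divided by a scale constant, and Ndiff is the absolute length difference divided by the same constant:

import math

def _vec(s):
    """Direction vector of a segment s = ((x1, y1), (x2, y2))."""
    return (s[1][0] - s[0][0], s[1][1] - s[0][1])

def _length(s):
    return math.hypot(*_vec(s))

def _mid(s):
    return ((s[0][0] + s[1][0]) / 2, (s[0][1] + s[1][1]) / 2)

def distance(s1, s2, scale=16.0):
    """1 - cos(s1, s2) + Ndist(c1, c2) + Ndiff(length(s1), length(s2)).

    scale: assumed normalization constant, e.g. the hpos range of the tree.
    """
    v1, v2 = _vec(s1), _vec(s2)
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (_length(s1) * _length(s2))
    c1, c2 = _mid(s1), _mid(s2)
    ndist = math.hypot(c1[0] - c2[0], c1[1] - c2[1]) / scale
    ndiff = abs(_length(s1) - _length(s2)) / scale
    return 1 - cos + ndist + ndiff

# Example: the rule segment from the previous slide vs. another segment.
print(distance(((3.5, 3.5), (10, 3)), ((4, 2), (10, 3))))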
http://www.computer.org/portal/web/csdl/doi/10.1109/ICDM.2002.1184048
Time and Location
Time: Monday, December 11th, from 14:00 to 16:00
Location: DB 1004 (or TEL 1004)
Coverage of Final
Week 1 (Objectives and introduction)
Week 2 (CGI, forms, HTML and XML)
Week 3 (DTD, XML, XSL and servlets)
Week 4 (Tomcat, servlets and their life cycle)
Week 6 (Course project presentation week)
Week 7 (Servlets and JSP)
Week 8 (Recommendation systems and JDBC)
Week 9 (E-commerce and digital signatures)
Week 10 (Web crawlers, Web search engines and their algorithms; indexer and inverted file)
Week 11 (Information retrieval and its models, probabilistic information retrieval)
Week 12 (System evaluation, Web mining and association rule learning)
Types of Final Exam Questions
The exam is not a programming-based exam. It will last 2 hours.
Types of questions include: multiple choice, true or false, and problem solving.
You should memorize some basic concepts and measures, and understand all the material taught in class.
You should focus on the lecture notes.
Data Preprocessing
Data Preprocessing Example
Session identification:
A session on the Web can be defined as a group of user activities that are related to each other and serve a common purpose.
Session identification divides the object accesses of each user into individual sessions.

A sample session log:

UID    Time                ObjectIDs
4570   4/29/2002-8:11:7    o14655738
4570   4/29/2002-8:11:10   o15199366, o2541625, o8272639
4570   4/29/2002-8:11:13   t12, t14, t18
4571   4/17/2002-7:37:14   o6234980
4571   4/17/2002-7:37:45   o6234980
4571   4/17/2002-7:37:52   o6234980, o8735468
4571   4/17/2002-7:37:56   o15291602
4571   4/17/2002-7:38:14   o6330745, o8759058
4571   4/17/2002-7:38:24   o13972781
4571   4/17/2002-7:38:29   o15322672
Data Preprocessing Example (Cont'd)
Session identification methods:
Standard timeout method (with thresholds of 5, 10, 15, 20, 25 or 30 minutes): a session consists of a sequence of sets of objects requested by a single user such that no two consecutive requests are separated by an interval longer than the predefined threshold (a minimal sketch follows this list).
N-gram language modeling method: a statistical method originally used in speech recognition for predicting the probability of a word sequence.
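A minimal Python sketch of the timeout method (the function name and the record format are ours, modeled on the sample log above):

def timeout_sessions(requests, threshold_minutes=30):
    """Split one user's requests into sessions using the timeout method.

    requests: a list of (timestamp_in_seconds, objects) pairs sorted by time.
    """
    gap = threshold_minutes * 60
    sessions, current, last_t = [], [], None
    for t, objects in requests:
        if last_t is not None and t - last_t > gap:
            sessions.append(current)      # interval too long: close session
            current = []
        current.append(objects)
        last_t = t
    if current:
        sessions.append(current)
    return sessions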
Language Model in Web Session Identification
To predict the probability of an object sequence s = o1 o2 … ol:

Language model (chain rule):
P(s) = P(o1) P(o2|o1) P(o3|o1 o2) … P(ol | o1 … o(l−1)) = ∏(i=1..l) P(oi | o1 … o(i−1))

N-gram model:
P(s) = ∏(i=1..l) P(oi | o(i−n+1) … o(i−1))

Perplexity(s) = [ ∏(i=1..l) P(oi | o(i−n+1) … o(i−1)) ]^(−1/l)
Entropy(s) = log2 Perplexity(s)

Session boundary detection: consider o1 o2 … ol o(l+1). If the difference between Entropy(o1 o2 … ol) and Entropy(o1 o2 … ol o(l+1)) is significant, there is a session boundary between ol and o(l+1).
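A sketch of entropy-based boundary detection using a bigram model (n = 2). The probability function and the significance threshold are ours; in practice the model is trained on historical logs and smoothed (see the next slide):

import math

def entropy(seq, bigram_prob):
    """Entropy(s) = -(1/l) * sum of log2 P(o_i | o_{i-1}) over the sequence.

    The unconditioned first term P(o_1) is ignored for simplicity.
    """
    logp = sum(math.log2(bigram_prob(prev, cur))
               for prev, cur in zip(seq, seq[1:]))
    return -logp / len(seq)

def boundaries(stream, bigram_prob, threshold=1.0):
    """Flag a session boundary after o_l when entropy jumps by > threshold."""
    cuts = []
    for l in range(2, len(stream)):
        before = entropy(stream[:l], bigram_prob)
        after = entropy(stream[:l + 1], bigram_prob)
        if after - before > threshold:
            cuts.append(l)
    return cuts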
Smoothing Technique: Good-Turing Estimate
A maximum likelihood estimate of an n-gram probability from a corpus is given by:

P(oi | o(i−n+1) … o(i−1)) = #(o(i−n+1) … oi) / #(o(i−n+1) … o(i−1))

For any n-gram that occurs r times, we should pretend that it occurs r* times, as follows:

r* ≈ (r + 1) N(r+1) / Nr

where Nr is the number of n-grams that occur exactly r times in the training data.
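A minimal sketch of the Good-Turing adjusted counts in Python (the names are ours; production implementations also smooth the Nr table itself for large r, which is skipped here):

from collections import Counter

def good_turing_counts(ngram_counts):
    """Map each raw count r to the adjusted count r* = (r+1) * N_{r+1} / N_r."""
    n_r = Counter(ngram_counts.values())  # N_r: n-grams occurring exactly r times
    return {r: (r + 1) * n_r.get(r + 1, 0) / n_r[r] for r in n_r}

# Example with hypothetical bigram counts:
counts = {("a", "b"): 1, ("b", "c"): 1, ("a", "c"): 2, ("c", "d"): 3}
print(good_turing_counts(counts))   # {1: 1.0, 2: 3.0, 3: 0.0}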
Entropy Evolution
[Figure: entropy (log2 perplexity, as defined above) plotted against the sequence of log entries, with the beginning and the end of a session marked on the curve.]
Empirical Evaluation
Objectives:
Evaluate the effectiveness of the language-modeling-based session detection method.
Investigate the optimal order of the n-gram language models and the influence of different smoothing techniques.
Evaluation:
Ask domain experts to evaluate the discovered association rules according to the unexpectedness and actionability of the rules.
Analyze the entropy evolution curves of different smoothing methods.
Comparisons of Language Modeling and Timeout Methods for Association Rule Learning

[Figure: average precision (%) of the top 10, top 20, and top 30 rules for the timeout method as a function of the timeout threshold (5 to 40 minutes).]

[Figure: average precision (%) of the top 10, top 20, and top 30 rules for the timeout method versus language models with different smoothing techniques (ABS, GT, LIN, WB).]