07. CLOSET - An Efficient Algorithm for Mining Frequent Closed Itemsets


VIGNAN INSTITUTE OF INFORMATION TECHNOLOGY

    DUVVADA, VISAKHAPATNAM.

    TITLE: CLOSET- An Efficient Algorithm for Mining Frequent Closed Itemsets

    PRESENTED BY :

    P.BHAKTHAVATHSALANAIDU K.K.S.SWAROOP

    [email protected]

    [email protected]

CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets

    ABSTRACT

    Association mining may often derive an undesirably large set of frequent itemsets and

    association rules. Recent studies have proposed an interesting alternative: mining frequent closed

    itemsets and their corresponding rules, which has the same power as association mining but

    substantially reduces the number of rules to be presented.

This paper presents CLOSET, an efficient algorithm for mining frequent closed itemsets, developed with three techniques:

(1) Applying a compressed frequent-pattern tree (FP-tree) structure for mining closed itemsets without candidate generation,

(2) Developing a single prefix path compression technique to identify frequent closed itemsets quickly, and

(3) Exploring a partition-based projection mechanism for scalable mining in large databases.

CLOSET is efficient and scalable over large databases and is faster than the previously

    proposed methods.

    INTRODUCTION:

    It has been well recognized that frequent pattern mining plays an essential role in many

important data mining tasks, e.g., associations, sequential patterns, episodes, partial periodicity,

    etc. However, it is also well known that frequent pattern mining often generates a very large

  • 8/14/2019 07. Closet - An Efficient Algorithm for Mining Frequent

    2/9

    number of frequent itemsets and rules, which reduces not only efficiency but also effectiveness

    of mining since users have to sift through a large number of mined rules to find useful ones.

There is an interesting alternative: instead of mining the complete set of frequent itemsets and their associations, mine only the frequent closed itemsets and their corresponding rules. An important implication is that mining frequent closed itemsets has the same power as mining the complete set of frequent itemsets, but it substantially reduces the redundant rules to be generated and increases both the efficiency and the effectiveness of mining.

    Definition: Association rule mining searches for interesting relationships among items in a given

    data set.

    Interestingness Measures:

Certainty: Each discovered pattern should have a measure of certainty associated with it that assesses the validity or trustworthiness of the pattern. A certainty measure for association rules of the form A => B, where A and B are sets of items, is confidence. Given a set of task-relevant data tuples, the confidence of A => B is defined as

Confidence(A => B) = (# tuples containing both A and B) / (# tuples containing A)

Utility: The potential usefulness of a pattern is a factor defining its interestingness. It can be estimated by a utility function, such as support. The support of an association pattern refers to the percentage of task-relevant data tuples for which the pattern is true. For association rules of the form A => B, where A and B are sets of items, it is defined as

Support(A => B) = (# tuples containing both A and B) / (total # of tuples)

Association rules that satisfy both a user-specified minimum confidence threshold and a user-specified minimum support threshold are referred to as strong association rules.
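As a minimal sketch, both measures can be computed directly from a list of transactions; the transaction data and function names below are made up purely for illustration:

```python
# Minimal sketch: computing support and confidence for a rule A => B
# over a tiny, made-up list of transactions (illustrative data only).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset, db):
    # Fraction of transactions that contain every item in `itemset`.
    return sum(itemset <= t for t in db) / len(db)

def confidence(A, B, db):
    # (# tuples containing both A and B) / (# tuples containing A)
    return support(A | B, db) / support(A, db)

print(support({"bread", "milk"}, transactions))       # 0.5 (2 of 4)
print(confidence({"bread"}, {"milk"}, transactions))
```

A rule is then "strong" when both printed values clear their respective user-specified thresholds.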

Association Rule Mining is a two-step process:

1. Find all frequent itemsets: each of these itemsets will occur at least as frequently as a predetermined minimum support count.

    2. Generate strong association rules from the frequent itemsets: These rules must satisfy

    minimum support and minimum confidence.

    The Apriori Algorithm: Finding Frequent Itemsets Using Candidate Generation:

Apriori is an influential algorithm for mining frequent itemsets for Boolean association rules. The name of the algorithm is based on the fact that the algorithm uses prior knowledge of frequent itemset properties.

  • 8/14/2019 07. Closet - An Efficient Algorithm for Mining Frequent

    3/9

  • 8/14/2019 07. Closet - An Efficient Algorithm for Mining Frequent

    4/9

2) Suppose that the minimum transaction support count required is 2. The set of frequent 1-itemsets, L1, can then be determined. It consists of the candidate 1-itemsets satisfying minimum support.

3) To discover the set of frequent 2-itemsets, L2, the algorithm uses the join L1 ⋈ L1 (L1 joined with itself) to generate a candidate set of 2-itemsets, C2.

In this way, candidate sets are generated until a candidate set is empty.
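The level-wise procedure above can be sketched as follows; `apriori` is an illustrative helper name, and the join and prune steps are simplified relative to the full algorithm:

```python
from itertools import combinations

def apriori(db, min_count):
    # Level-wise search: L1 from item counts; each Lk is built by
    # joining L(k-1) with itself (join step), pruning candidates that
    # have an infrequent (k-1)-subset (prune step), and checking
    # support, until the candidate set is empty.
    items = {i for t in db for i in t}
    Lk = {frozenset([i]) for i in items
          if sum(i in t for t in db) >= min_count}
    frequent = set(Lk)
    k = 2
    while Lk:
        candidates = {a | b for a in Lk for b in Lk if len(a | b) == k}
        candidates = {c for c in candidates
                      if all(frozenset(s) in Lk
                             for s in combinations(c, k - 1))}
        Lk = {c for c in candidates
              if sum(c <= t for t in db) >= min_count}
        frequent |= Lk
        k += 1
    return frequent

db = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}]
print(sorted(map(sorted, apriori(db, 2))))
```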

    Generating Association Rules from Frequent Itemsets

    Once the frequent itemsets from transactions in a database D have been found, it is

straightforward to generate strong association rules from them. This can be done using the following equation for confidence, where the conditional probability is expressed in terms of itemset support count:

Confidence(A => B) = support_count(A U B) / support_count(A)

where support_count(A U B) is the number of transactions containing the itemset A U B, and support_count(A) is the number of transactions containing the itemset A. Based on this equation,

    association rules can be generated as follows:

    for each frequent itemset l, generate all nonempty subsets of l.

  • 8/14/2019 07. Closet - An Efficient Algorithm for Mining Frequent

    5/9

for every nonempty proper subset s of l, output the rule s => (l - s) if support_count(l) / support_count(s) >= min_conf,

where min_conf is the minimum confidence threshold.

    Since the rules are generated from frequent itemsets, each one automatically satisfies minimum

    support. Frequent itemsets can be stored ahead of time in hash tables along with their counts so

    that they can be accessed quickly.

E.g.: Suppose the data contain the frequent itemset l = {I1, I2, I5}. What are the association rules that can be generated from l?

    The nonempty subsets of l are {I1, I2}, {I1, I5}, {I2, I5}, {I1}, {I2} and {I5}. The resulting

    association rules are as shown below, each listed with its confidence.

I1 ∧ I2 => I5, confidence = 2/4 = 50%

I1 ∧ I5 => I2, confidence = 2/2 = 100%

I2 ∧ I5 => I1, confidence = 2/2 = 100%

I1 => I2 ∧ I5, confidence = 2/6 = 33%

I2 => I1 ∧ I5, confidence = 2/7 = 29%

I5 => I1 ∧ I2, confidence = 2/2 = 100%

    If the minimum confidence threshold is, say, 70% then only the second, third and last rules

    above are output, since these are the only ones generated that are strong.
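The procedure can be sketched in code. The nine-transaction database below is reconstructed to match the support counts quoted in the example (an assumption, since the original transaction table is not reproduced here):

```python
from itertools import combinations

# Transaction database reconstructed to match the support counts used
# in the example above (assumed; the original table is not shown).
D = [
    {"I1", "I2", "I5"}, {"I2", "I4"}, {"I2", "I3"},
    {"I1", "I2", "I4"}, {"I1", "I3"}, {"I2", "I3"},
    {"I1", "I3"}, {"I1", "I2", "I3", "I5"}, {"I1", "I2", "I3"},
]

def count(itemset):
    return sum(itemset <= t for t in D)

def rules_from(l, min_conf):
    # s => (l - s) for every nonempty proper subset s of l, kept when
    # count(l) / count(s) >= min_conf.
    l = frozenset(l)
    rules = []
    for r in range(1, len(l)):
        for s in combinations(sorted(l), r):
            s = frozenset(s)
            conf = count(l) / count(s)
            if conf >= min_conf:
                rules.append((sorted(s), sorted(l - s), conf))
    return rules

for lhs, rhs, conf in rules_from({"I1", "I2", "I5"}, 0.7):
    print(lhs, "=>", rhs, f"{conf:.0%}")  # the three 100% rules survive
```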

Mining Frequent Itemsets without Candidate Generation

The Apriori algorithm suffers from two non-trivial costs:

    1) It may need to generate a huge number of candidate sets.

2) It may need to repeatedly scan the database and check large sets of candidates by pattern matching.

An interesting method that mines the complete set of frequent itemsets without candidate generation is called frequent-pattern growth, or simply FP-growth. It adopts a divide-and-conquer strategy as follows: compress the database representing frequent items into a frequent-pattern tree, divide the compressed database into a set of conditional databases, each associated with one frequent item, and mine each such database separately.

Let us reexamine the mining of the transaction database using the frequent-pattern growth approach. The first scan of the database is the same as for Apriori: it derives the set of frequent items (1-itemsets) and their support counts. Let the minimum support count be 2. The set of frequent items is sorted

  • 8/14/2019 07. Closet - An Efficient Algorithm for Mining Frequent

    6/9

in the order of descending support count. This resulting set or list is denoted L. Thus, we have L = [I2:7, I1:6, I3:6, I4:2, I5:2].

An FP-tree is then constructed as follows. First, create the root of the tree, labeled with null. Scan database D a second time. The items in each transaction are processed in L order and a branch is created for each transaction. For example, the scan of the first transaction, T100: I1, I2, I5, which contains three items (I2, I1, I5 in L order), leads to the construction of the first branch of the tree with three nodes, <I2:1>, <I1:1>, and <I5:1>, where I2 is linked as a child of the root, I1 is linked to I2, and I5 is linked to I1. The second transaction, T200, contains the items I2 and I4 in L order, which would result in a branch where I2 is linked to the root and I4 is linked to I2. However, this branch would share a common prefix, I2, with the existing path for T100. Therefore, we instead increment the count of the I2 node by 1. In general, when considering the branch to be added for a transaction, the count of each node along a common prefix is incremented by 1, and nodes for the items following the prefix are created and linked accordingly.
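The insertion rule just described can be sketched with a dict-based trie; this representation is an illustrative assumption, not the data structure used in the original implementation:

```python
# Sketch: FP-tree insertion as a dict-based trie. Items are inserted
# in L order (descending support); counts along a shared prefix are
# incremented instead of creating duplicate nodes.
L_ORDER = ["I2", "I1", "I3", "I4", "I5"]

def new_node():
    return {"count": 0, "children": {}}

def insert(root, transaction):
    node = root
    for item in sorted(transaction, key=L_ORDER.index):
        child = node["children"].setdefault(item, new_node())
        child["count"] += 1
        node = child

root = new_node()
insert(root, {"I1", "I2", "I5"})  # T100: creates branch I2 -> I1 -> I5
insert(root, {"I2", "I4"})        # T200: shares prefix I2, adds I4
print(root["children"]["I2"]["count"])  # 2
```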

    To facilitate tree traversal, an item header table is built so that each item points to its

    occurrence in the tree via a chain of node-links. The tree obtained after scanning all of the

    transactions is

Item | Conditional pattern base        | Conditional FP-tree | Frequent patterns generated
I5   | {(I2 I1: 1), (I2 I1 I3: 1)}     | <I2:2, I1:2>        | I2 I5:2, I1 I5:2, I2 I1 I5:2
I4   | {(I2 I1: 1), (I2: 1)}           | <I2:2>              | I2 I4:2

    The mining of the FP-tree proceeds as follows. Start from each frequent length-1 pattern,

    construct its conditional pattern base (a sub database which consists of the set of prefix paths in

    the FP-tree co-occurring with the suffix pattern), then construct its (conditional) FP-tree and

  • 8/14/2019 07. Closet - An Efficient Algorithm for Mining Frequent

    7/9

perform mining recursively on such a tree. The pattern growth is achieved by the concatenation of the suffix pattern with the frequent patterns generated from a conditional FP-tree.

Let us first consider I5, which is the last item in L, rather than the first. The reasoning behind this will become apparent as we explain the FP-tree mining process. I5 occurs in two branches of the FP-tree. The paths formed by these branches are <(I2, I1, I5: 1)> and <(I2, I1, I3, I5: 1)>. Therefore, considering I5 as a suffix, its corresponding two prefix paths are <(I2, I1: 1)> and <(I2, I1, I3: 1)>, which form its conditional pattern base. Its conditional FP-tree contains only a single path, <I2:2, I1:2>; I3 is not included because its support count of 1 is less than the minimum support count. The single path generates all the combinations of frequent patterns: I2 I5:2, I1 I5:2, I2 I1 I5:2.

In the same way, find the frequent itemsets for all other items. The FP-growth method transforms the problem of finding long frequent patterns into looking for shorter ones recursively and then concatenating the suffix. It uses the least frequent items as suffixes, offering good selectivity. The method substantially reduces the search costs.
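The recursive suffix-growing idea can be sketched without a physical FP-tree by projecting the database for each frequent item. This is a simplification of FP-growth (correct, but less efficient than mining actual conditional FP-trees), and the nine-transaction database is reconstructed to match the counts in the text (an assumption):

```python
from collections import Counter

def pattern_growth(db, min_count, suffix=frozenset()):
    # For each frequent item, grow the current suffix by that item and
    # recurse on the item-conditional (projected) database. This mirrors
    # FP-growth's divide-and-conquer, using plain projected transaction
    # lists in place of conditional FP-trees.
    counts = Counter(i for t in db for i in t)
    found = {}
    for item, c in counts.items():
        if c < min_count:
            continue
        pattern = suffix | {item}
        found[pattern] = c
        conditional = [t - {item} for t in db if item in t]
        found.update(pattern_growth(conditional, min_count, pattern))
    return found

# Database reconstructed to match the example's support counts (assumed).
D = [
    {"I1", "I2", "I5"}, {"I2", "I4"}, {"I2", "I3"},
    {"I1", "I2", "I4"}, {"I1", "I3"}, {"I2", "I3"},
    {"I1", "I3"}, {"I1", "I2", "I3", "I5"}, {"I1", "I2", "I3"},
]
result = pattern_growth(D, 2)
print(result[frozenset({"I1", "I2", "I5"})])  # 2
```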

    CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets

    PROBLEM DEFINITION:

An itemset X is contained in a transaction <tid, Y> if X ⊆ Y. Given a transaction database TDB, the support of an itemset X, denoted sup(X), is the number of transactions in TDB which contain X. An association rule R: X => Y is an implication between two itemsets X and Y, where X, Y ⊆ I and X ∩ Y = ∅. The support of the rule, denoted sup(X => Y), is defined as sup(X ∪ Y). The confidence of the rule, denoted conf(X => Y), is defined as sup(X ∪ Y)/sup(X).

    The requirement of mining the complete set of association rules leads to two problems:

    1) There may exist a large number of frequent itemsets in a transaction database, especially

    when the support threshold is low.

    2) There may exist a huge number of association rules. It is hard for users to comprehend

    and manipulate a huge number of rules.

An interesting alternative to this problem is the mining of frequent closed itemsets and their

    corresponding association rules.

Frequent closed itemset: An itemset X is a closed itemset if there exists no itemset X' such that (1) X' is a proper superset of X and (2) every transaction containing X also contains X'. A closed itemset X is frequent if its support passes the given support threshold.

  • 8/14/2019 07. Closet - An Efficient Algorithm for Mining Frequent

    8/9

The problem of finding the complete set of frequent closed itemsets efficiently from a large database is called the frequent closed itemset mining problem.

For the transaction database in Table 1 with min_sup = 2, the divide-and-conquer method mines frequent closed itemsets as follows.

1) Find frequent items. Scan TDB to find the set of frequent items and derive a global frequent item list, called f_list: f_list = {c:4, e:4, f:4, a:3, d:2}, where the items are sorted in support-descending order; any infrequent item, such as b, is omitted.

2) Divide the search space. All the frequent closed itemsets can be divided into 5 non-overlapping subsets based on the f_list: (1) the ones containing item d, (2) the ones containing item a but not d, (3) the ones containing item f but neither a nor d, (4) the ones containing e but none of f, a, or d, and (5) the ones containing only c. Once all subsets are found, the complete set of frequent closed itemsets is obtained.

3) Find subsets of frequent closed itemsets. The subsets of frequent closed itemsets can be mined by constructing the corresponding conditional databases and mining each recursively.

  • 8/14/2019 07. Closet - An Efficient Algorithm for Mining Frequent

    9/9

Find frequent closed itemsets containing d. Only transactions containing d are needed. The d-conditional database, denoted TDB|d, contains all the transactions having d, which is {cefa, cfa}. Notice that item d is omitted in each transaction since it appears in every transaction of the d-conditional database.

The support of d is 2. Items c, f and a each appear twice in TDB|d. Therefore, cfad:2 is a frequent closed itemset. Since this itemset covers every frequent item in TDB|d, the mining of TDB|d finishes.

    In the same way find the frequent closed itemsets for a, f, e, and c.

4) The set of frequent closed itemsets found is {acdf:2, a:3, ae:2, cf:4, cef:3, e:4}.
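The quoted result can be checked with a brute-force miner that applies the closedness definition directly. The five transactions below are reconstructed to be consistent with the f_list and the result set above (an assumption, since Table 1 is not reproduced here):

```python
from itertools import combinations

# Transactions reconstructed to be consistent with the f_list and the
# quoted result set (an assumption; Table 1 is not reproduced here).
TDB = [set("acdef"), set("abe"), set("cef"), set("acdf"), set("cef")]

def frequent_closed(db, min_count):
    # Brute force: enumerate every frequent itemset, then keep X only
    # if no proper superset of X has the same support, which is the
    # closedness condition from the definition above.
    items = sorted({i for t in db for i in t})
    sup = {}
    for r in range(1, len(items) + 1):
        for c in combinations(items, r):
            s = sum(set(c) <= t for t in db)
            if s >= min_count:
                sup[frozenset(c)] = s
    return {x: s for x, s in sup.items()
            if not any(x < y and sy == s for y, sy in sup.items())}

fc = frequent_closed(TDB, 2)
for x, s in sorted(fc.items(), key=lambda kv: "".join(sorted(kv[0]))):
    print("".join(sorted(x)), ":", s)
```

On this database the function returns exactly the six itemsets listed in step 4; real implementations like CLOSET avoid the exponential enumeration via the conditional-database recursion described above.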

Optimization 1: Compress transactional and conditional databases using FP-tree structures. The FP-tree compresses databases for frequent itemset mining, and conditional databases can be derived from an FP-tree efficiently.

    Optimization 2: Extract items appearing in every transaction of conditional databases.

    Optimization 3: Directly extract frequent closed itemsets from FP-tree.

    Optimization 4: Prune search branches.

    PERFORMANCE STUDY

A comparison of A-close, CHARM, and CLOSET shows that CLOSET outperforms both CHARM and A-close. CLOSET is efficient and scalable in mining frequent closed itemsets in large databases. It is much faster than A-close, and also faster than CHARM.

    CONCLUSION

CLOSET leads to fewer and more interesting association rules than the other previously proposed methods.