Transcript of DATA MINING II - 1DL460 · Kjell Orsborn - UDBL - IT - UU 17-03-09 2 Introduction to Spatial Mining...

17-03-09 1 Kjell Orsborn - UDBL - IT - UU

DATA MINING II - 1DL460

Spring 2017

A second course in data mining

http://www.it.uu.se/edu/course/homepage/infoutv2/vt17

Kjell Orsborn
Uppsala Database Laboratory

Department of Information Technology, Uppsala University, Uppsala, Sweden

Introduction to Spatial Mining

•  Slides from the book by Dr. M. H. Dunham, Data Mining: Introductory and Advanced Topics, Prentice Hall, 2002.

Spatial Mining Outline

Goal: Provide an introduction to some spatial mining techniques.

•  Introduction
•  Spatial Data Overview
•  Spatial Data Mining Primitives
•  Generalization/Specialization
•  Spatial Rules
•  Spatial Classification
•  Spatial Clustering (covered in clustering lectures)
•  Spatial outliers (covered in outlier detection)

Spatial Object

•  Contains both spatial and nonspatial attributes.
•  Must have a location-type attribute:
–  Coordinates
–  Latitude/longitude
–  Zip code
–  Street address
•  May retrieve objects using spatial attributes, nonspatial attributes, or both.

Spatial Data Mining Applications

•  Geology
•  GIS systems
•  Environmental science
•  Agriculture
•  Medicine
•  Robotics
•  Engineering
•  May involve both spatial and temporal aspects

Spatial Queries

•  Spatial selection may involve specialized comparison operations:
–  Near
–  North, South, East, West
–  Contained in
–  Overlap/intersect

•  Region (Range) Query – find objects that intersect a given region.
•  Nearest Neighbor Query – find the object closest to an identified object.
•  Distance Scan – find objects within a certain distance of an identified object, where the distance is made increasingly larger.

Spatial Data Structures

•  Data structures designed specifically to store or index spatial data.

•  Often based on the B-tree or binary search tree.
•  Cluster data on disk based on spatial/geographic location.
•  May represent a complex spatial structure by placing the spatial object in a containing structure of a specific spatial/geographic shape.

•  Techniques:
–  Quad Tree
–  R-Tree
–  k-D Tree
–  X-Tree

MBR

•  Minimum Bounding Rectangle
•  Smallest rectangle that completely contains the object
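The MBR of a set of 2-D points can be computed directly; a minimal sketch (the `mbr` helper is illustrative, not from the slides):

```python
def mbr(points):
    """Minimum bounding rectangle of 2-D points: ((xmin, ymin), (xmax, ymax))."""
    xs = [x for x, y in points]
    ys = [y for x, y in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

# The smallest axis-aligned rectangle containing every point:
print(mbr([(2, 3), (5, 1), (4, 7)]))  # ((2, 1), (5, 7))
```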

MBR Examples

Quad Tree

•  Hierarchical decomposition of the space into quadrants (MBRs)

•  Each level in the tree represents the object as the set of quadrants which contain any portion of the object.

•  Each level is a more exact representation of the object.
•  The number of levels is determined by the degree of accuracy desired.
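The decomposition described above can be sketched as a recursive function (an illustrative simplification; the names and the one-point-per-leaf stopping rule are assumptions):

```python
def quad_decompose(points, box, depth, max_depth):
    """Recursively split a region into four quadrants until each cell
    holds at most one point or the accuracy limit (max_depth) is reached.
    Returns the list of leaf cells as (box, points-in-box) pairs."""
    x0, y0, x1, y1 = box
    inside = [(x, y) for (x, y) in points if x0 <= x < x1 and y0 <= y < y1]
    if len(inside) <= 1 or depth == max_depth:
        return [(box, inside)]
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    leaves = []
    for quad in [(x0, y0, mx, my), (mx, y0, x1, my),
                 (x0, my, mx, y1), (mx, my, x1, y1)]:
        leaves += quad_decompose(inside, quad, depth + 1, max_depth)
    return leaves

cells = quad_decompose([(1, 1), (6, 6), (7, 5)], (0, 0, 8, 8), 0, 3)
```

Deeper `max_depth` gives a more exact (finer-grained) representation, matching the slide's accuracy remark.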

Quad Tree Example

R-Tree

•  As with Quad Tree the region is divided into successively smaller rectangles (MBRs).

•  Rectangles need not be of the same size or number at each level.

•  Rectangles may actually overlap.
•  Lowest-level cell has only one object.
•  Tree maintenance algorithms are similar to those for B-trees.

R-Tree Example

K-D Tree

•  Designed for multi-attribute data, not necessarily spatial.
•  Variation of the binary search tree.
•  Each level is used to index one of the dimensions of the spatial object.
•  Lowest-level cell has only one object.
•  Divisions are not based on MBRs but on successive divisions of the dimension range.
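A minimal sketch of k-d tree construction along these lines (median splits are one common choice; the slides do not fix a split rule, so treat this as an assumption):

```python
def build_kd(points, depth=0):
    """Build a k-d tree: each level splits on one dimension in turn,
    dividing the dimension range at the median; leaves hold one point."""
    if len(points) <= 1:
        return points[0] if points else None
    k = len(points[0])
    axis = depth % k                      # dimension indexed at this level
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {"point": pts[mid], "axis": axis,
            "left": build_kd(pts[:mid], depth + 1),
            "right": build_kd(pts[mid + 1:], depth + 1)}

tree = build_kd([(3, 6), (17, 15), (13, 15), (6, 12), (9, 1)])
```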

k-D Tree Example

X-tree

•  An X-tree (for eXtended node tree) is an index tree structure based on the R-tree used for storing data in many dimensions.

•  It appeared in 1996.
•  Differs from R-trees (1984), R+-trees (1987) and R*-trees (1990) because it emphasizes prevention of overlap in the bounding boxes, which increasingly becomes a problem in high dimensions.

•  In cases where nodes cannot be split without preventing overlap, the node split will be deferred, resulting in super-nodes.

Topological Relationships

•  Disjoint
•  Overlaps or intersects
•  Equals
•  Covered by, inside, or contained in
•  Covers or contains

Distance between spatial objects

•  Distance between points
–  E.g. Euclidean and Manhattan distance

•  Extensions for spatial objects (compare clustering):
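The basic point distances above, as plain functions (a minimal sketch):

```python
def euclidean(p, q):
    """Straight-line distance between two points of any dimension."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def manhattan(p, q):
    """City-block distance: sum of per-coordinate absolute differences."""
    return sum(abs(a - b) for a, b in zip(p, q))

print(euclidean((0, 0), (3, 4)))  # 5.0
print(manhattan((0, 0), (3, 4)))  # 7
```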

Progressive refinement

•  Make approximate answers prior to more accurate ones.
•  Filter out data not part of the answer.
•  Hierarchical view of data based on spatial relationships.
•  Coarse predicate recursively refined.

Progressive Refinement

Spatial Data Dominant Algorithm

STING

•  STatistical Information Grid-based
•  Hierarchical technique to divide the area into rectangular cells
•  Grid data structure contains summary information about each cell
•  Hierarchical clustering
•  Similar to a quad tree

STING

STING Build Algorithm

STING Algorithm

Spatial Rules

•  Characteristic Rule – describes the data.
–  The average family income in Dallas is $50,000.
•  Discriminant Rule – describes the differences between different classes of the data.
–  The average family income in Dallas is $50,000, while in Plano the average income is $75,000.
•  Association Rule – implication of one set of data by another.
–  The average family income in Dallas for families living near White Rock Lake is $100,000.

Spatial Association Rules

•  Either antecedent or consequent must contain spatial predicates.

•  View the underlying database as a set of spatial objects.
–  Compare with a set of transactions in conventional association analysis.
•  May be created using a type of progressive refinement.
–  First filter on a coarser level using e.g. concept hierarchies or MBRs.

Spatial Association Rules Variants:

•  Examples of nonspatial and spatial combinations in spatial association rules:
–  Nonspatial antecedent and spatial consequent:
•  All elementary schools are located close to single-family housing developments.
–  Spatial antecedent and nonspatial consequent:
•  If a house is located in Highland Park, it is expensive.
–  Spatial antecedent and spatial consequent:
•  Any house that is near downtown is south of Plano.

Spatial Association Rule Algorithm (Koperski, “A progressive refinement approach to spatial data mining”, 1999.)

Spatial Classification

•  Partitions spatial objects
•  May use nonspatial attributes and/or spatial attributes
•  May use concept hierarchies
•  As in other types of spatial mining, generalization and progressive refinement may be used to improve efficiency.

ID3 Extension (Ester, Kriegel and Sander, “Spatial data mining: a database approach”, 1997.)

•  The concept of neighborhood graphs has been applied to perform classification of spatial objects using an ID3 extension

•  A neighborhood graph is constructed for nodes in space
–  Nodes – objects
–  Edges – connect neighbors
•  Definition of neighborhood can vary
–  e.g. distance between objects, topological relationships between objects, and direction relationships
•  The idea is to take objects in the neighborhood of a given object into account in the classification.
•  ID3 then considers nonspatial attributes of all objects in a neighborhood (not just one) for classification.

A Spatial Decision Tree algorithm (Koperski, Han and Stefanovic, “An efficient two-step method for classifying spatial data”, 1998)

•  Approach similar to that used for spatial association rules.
•  Spatial objects can be described based on objects close to them by aggregating the values of the most relevant predicates.
•  The area around an object is called its buffer.
•  A circular shape of the buffer is used.
•  Information gain is used for discriminating objects based upon a set of generalized predicates.
•  Description of a class is based on aggregation of nearby objects.
•  A training set with known classification is used for building a model based on the most relevant predicates (spatial and nonspatial).

Spatial Decision Tree Algorithm

Spatial Clustering

•  Detect clusters of irregular shapes.
•  Use of centroids and simple distance approaches may not work well.
•  Clusters should be independent of the order of input.

Spatial Clustering

CLARANS Extensions (Ester, Kriegel, Xu, “Knowledge discovery in large spatial databases: focusing techniques for efficient class identification”, 1995.)

•  CLARANS – clustering large applications based upon randomized search

•  Remove the main-memory assumption of the original CLARANS.
•  Use spatial index techniques.
•  Use sampling and the R*-tree to identify central objects.
•  Find clusters for these central objects.
•  Change cost calculations by reducing the number of objects examined.
•  A Voronoi diagram is applied to define clusters.

Voronoi

Spatial Dominant CLARANS – SD(CLARANS) (Han, Cai, Cercone, “Knowledge discovery in databases: an attribute-based approach”, 1992.)

•  Spatial dominant SD(CLARANS)
•  An initial filtering extracts relevant data based on nonspatial data.
•  First clusters spatial components using CLARANS.
•  Examines the nonspatial data within each cluster to derive a description of that cluster
–  e.g. to decide upon the major type of vegetation within a region
•  Uses a learning tool to derive descriptions of clusters, in this case DBLEARN.
•  There is also a nonspatial dominant CLARANS, NSD(CLARANS)
–  Find nonspatial clusters and then their spatial extensions.

SD(CLARANS) Algorithm

DBCLASD (Xu, Ester, Kriegel, Sander, “A distribution based clustering algorithm for mining large spatial databases”, 1998.)

•  DBCLASD - Distribution Based Clustering of LArge Spatial Databases

•  Extension of DBSCAN.
•  Assumes items in a cluster are uniformly distributed.
•  Identifies the distribution satisfied by the distances between nearest neighbors.
•  Clusters are created around a target element.
•  Objects are added to a cluster if the distribution is uniform, i.e. as long as their nearest-neighbour distance set fits the uniform distribution.

DBCLASD Algorithm

Aggregate proximity

•  Approximation can be used to identify the characteristics of clusters by determining features that are close.
–  Features are spatial objects such as rivers, oceans, schools, etc.
•  Aggregate proximity – a measure of how close a cluster is to a feature.
•  An aggregate proximity relationship finds the k closest features to a cluster.
•  The CRH algorithm uses different shapes (with increasing accuracy and decreasing efficiency) to identify these closest features:
–  Encompassing circle
–  Isothetic rectangle
–  Convex hull

CRH

Introduction to Temporal Mining

•  Slides from the book by Dr. M. H. Dunham, Data Mining: Introductory and Advanced Topics, Prentice Hall, 2002.

Temporal Mining Outline

Goal: examine some temporal data mining issues and approaches.

•  Introduction
•  Modeling Temporal Events
•  Time Series
•  Pattern Detection
•  Sequences
•  Temporal Association Rules

Temporal Database

•  Snapshot – traditional database
•  Temporal – multiple time points
•  Ex:

Temporal Queries

•  Query

•  Database

•  Intersection query

•  Inclusion query

•  Containment query

•  Point query – tuple retrieved is valid at a particular point in time.

(Figure: each query type compares the query's time interval [tsq, teq] against a database tuple's interval [tsd, ted].)

Types of Databases

•  Snapshot – No temporal support
•  Transaction Time – Supports the time when the transaction inserted the data
–  Timestamp
–  Range
•  Valid Time – Supports the time range when the data values are valid
•  Bitemporal – Supports both transaction and valid time.

Modeling Temporal Events

•  Techniques to model temporal events.
•  Often based on earlier approaches
•  Finite State Recognizer (Machine) (FSR)
–  Each event recognizes one character
–  Temporal ordering indicated by arcs
–  May recognize a sequence
–  Requires precisely defined transitions between states
•  Additional approaches
–  Markov Model
–  Hidden Markov Model
–  Recurrent Neural Network

FSR - Finite State Recognizer (Machine)

•  Start state 0
•  Recognizing t shifts to state 1
•  Recognizing h in state 1 shifts to state 2
•  Recognizing e in state 2 shifts to state 3 (end node)
•  Recognizing any other character (*) shifts back to state 0
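The machine above (which recognizes the pattern "the") can be transcribed directly; a minimal sketch, noting that the slide's machine is simplified in that any non-matching character, even a 't' seen mid-pattern, returns to state 0:

```python
def fsr_the(text):
    """FSR from the slide: state 0 -t-> 1 -h-> 2 -e-> 3 (accept);
    any other character (*) returns to state 0."""
    delta = {(0, "t"): 1, (1, "h"): 2, (2, "e"): 3}
    state = 0
    for ch in text:
        state = delta.get((state, ch), 0)   # undefined transitions reset to 0
        if state == 3:
            return True
    return False

print(fsr_the("in the end"))  # True
print(fsr_the("tea him"))     # False
```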

Markov Model (MM)

•  Directed graph
–  Vertices represent states
–  Arcs show transitions between states
–  Each arc has a probability of transition
–  At any time, one state is designated as the current state.

•  Markov Property – Given a current state, the transition probability is independent of any previous states.

•  Applications: speech recognition (nodes-sounds), natural language processing (nodes-words), system availability (nodes-system states)

Markov Model

•  A transition (arc) is associated with a transition probability, pij, i.e. the probability on an arc that a transition will be made from state i to state j.

•  Probabilities can be combined to determine the probability that a pattern will be produced by the MM.

•  Transition probabilities are learned during a training phase
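Combining transition probabilities as described can be sketched as follows; the two-state weather matrix is a made-up illustration, not from the slides:

```python
def sequence_probability(start, seq, P):
    """Probability that a Markov model in state `start` follows the
    transition sequence `seq`.  By the Markov property, each factor
    depends only on the current state."""
    prob, state = 1.0, start
    for nxt in seq:
        prob *= P[state][nxt]
        state = nxt
    return prob

# Hypothetical two-state model (transition probabilities are illustrative).
P = {"sun": {"sun": 0.8, "rain": 0.2},
     "rain": {"sun": 0.4, "rain": 0.6}}
p = sequence_probability("sun", ["sun", "rain", "rain"], P)  # 0.8 * 0.2 * 0.6
```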

Hidden Markov Model (HMM)

•  Like an MM, but states need not correspond to observable states.
•  An HMM models processes that produce as output a sequence of observable symbols.
–  Given a sequence of symbols, the HMM can be constructed to produce and output these symbols.
–  What is hidden is the state sequence that produced these symbols.
•  Associated with each node is the probability of the observation of an event.
•  Train the HMM to recognize a sequence.
•  Transition and observation probabilities are learned from a training set.

Hidden Markov Model

•  The two states in the example represent two different coins.
•  It is equally likely to toss either of the coins in the next toss.
•  The hidden part of the model is associated with the observable output from each state.
–  The left coin is fair (head = 0.5, tail = 0.5); the right coin is unfair (head = 0.3, tail = 0.7).
•  The hidden probabilities are used to determine what the output from that state will be (H/T), while the public (or transition) probabilities determine the next state that will occur (tossing the fair/unfair coin).

Modified from [RJ86]

HMM Algorithm

HMM Applications

•  Given a sequence of events and an HMM, what is the probability that the HMM produced the sequence?

•  Given a sequence and an HMM, what is the most likely state sequence which produced this sequence?
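The first question is classically answered with the forward algorithm; a sketch using a two-coin model in the spirit of the earlier slide (the start and transition probabilities are assumptions chosen to make both coins equally likely):

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: probability that the HMM produced `obs`,
    summing over all hidden state sequences."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans_p[p][s] for p in states) * emit_p[s][o]
                 for s in states}
    return sum(alpha.values())

# Hypothetical two-coin HMM: a fair and an unfair coin, equally likely next toss.
states = ["fair", "unfair"]
start_p = {"fair": 0.5, "unfair": 0.5}
trans_p = {"fair": {"fair": 0.5, "unfair": 0.5},
           "unfair": {"fair": 0.5, "unfair": 0.5}}
emit_p = {"fair": {"H": 0.5, "T": 0.5},
          "unfair": {"H": 0.3, "T": 0.7}}
p = forward(["H", "T"], states, start_p, trans_p, emit_p)  # 0.24
```

The second question (most likely state sequence) is answered by the Viterbi algorithm, which replaces the sum with a max.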

Recurrent Neural Network (RNN)

•  Extension to the basic NN
•  A neuron can obtain input from any other neuron (including the output layer).
•  Can be used for both recognition and prediction applications.
•  Time to produce output unknown
•  Temporal aspect added by backlinks.

RNN

Time Series

•  Set of attribute values over time
•  Time series analysis – finding patterns in the values:
–  Trends
–  Cycles
–  Seasonal
–  Outliers

Analysis techniques

•  Trend analysis – detecting trends in time series
–  Smoothing – moving average of attribute values.
–  Autocorrelation – provides relationships between different subseries with lag intervals
•  yearly, seasonal
•  lag – time difference between related items
•  the autocorrelation coefficient rk measures the correlation between time series values a certain distance, lag k, apart
•  e.g. Pearson’s correlation coefficient r
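A minimal sketch of both techniques: smoothing as a moving average, and the lag-k autocorrelation coefficient computed as Pearson's r between the series and itself shifted k steps:

```python
def moving_average(series, w):
    """Smoothing: mean of each window of w consecutive values."""
    return [sum(series[i:i + w]) / w for i in range(len(series) - w + 1)]

def autocorr(series, k):
    """Lag-k autocorrelation: Pearson's r between series[t] and series[t+k]."""
    x, y = series[:-k], series[k:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

s = [1, 2, 3, 1, 2, 3, 1, 2, 3]
print(moving_average(s, 3))   # every window sums to 6, so all values are 2.0
print(autocorr(s, 3))         # 1.0 – the series repeats with period 3
```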

Smoothing

Correlation with Lag of 3

Similarity between sequences

•  Determine similarity between a target pattern, X, and sequence, Y: sim(X,Y)

•  Similar to Web usage mining
•  Similar to earlier word processing and spelling corrector applications.
•  Issues:
–  Length
–  Scale
–  Gaps
–  Outliers
–  Baseline

Longest Common Subseries

•  Find the longest subseries they have in common.
•  Ex:
–  X = <10,5,6,9,22,15,4,2>
–  Y = <6,9,10,5,6,22,15,4,2>
–  Output: <22,15,4,2>
–  Sim(X,Y) = l/n = 4/9
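The example can be reproduced with the standard longest-common-substring dynamic program (a sketch; n is taken as the length of Y, matching the slide's 4/9):

```python
def longest_common_subseries(X, Y):
    """Longest contiguous run of values shared by X and Y."""
    best_len, best_end = 0, 0
    prev = [0] * (len(Y) + 1)
    for i in range(1, len(X) + 1):
        cur = [0] * (len(Y) + 1)
        for j in range(1, len(Y) + 1):
            if X[i - 1] == Y[j - 1]:
                cur[j] = prev[j - 1] + 1          # extend the common run
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return X[best_end - best_len:best_end]

X = [10, 5, 6, 9, 22, 15, 4, 2]
Y = [6, 9, 10, 5, 6, 22, 15, 4, 2]
common = longest_common_subseries(X, Y)   # [22, 15, 4, 2]
sim = len(common) / len(Y)                # 4/9, as on the slide
```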

Similarity based on Linear Transformation

•  Linear transformation function f
–  Converts a value from one series to a value in the second
–  Algorithms for finding f in “Time-series similarity problems and well-separated geometric sets”, Bollobás et al., Proceedings SCG ’97.
•  εf – tolerated difference in results
•  δ – time value difference allowed

Prediction

•  Predict future values for a time series
•  Regression may not be sufficient
•  Stationary vs nonstationary
•  Statistical techniques
–  ARMA (Autoregressive Moving Average)
–  ARIMA (Autoregressive Integrated Moving Average)
•  Special techniques might be needed for online infinite streams
–  Algorithms for moving averages are discussed in the stream mining section

Pattern Detection

•  Identify patterns of behavior in time series
•  Speech recognition, signal processing
•  Can make use of FSR, MM, HMM

Pattern detection – string matching

•  Find a given pattern in a sequence
•  Knuth-Morris-Pratt: construct FSM
•  Boyer-Moore: construct FSM
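A sketch of the Knuth-Morris-Pratt approach, where the precomputed failure function plays the role of the FSM's transitions on mismatch:

```python
def kmp_search(text, pattern):
    """Knuth-Morris-Pratt string matching.
    Returns the index of the first occurrence of pattern, or -1."""
    # failure[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it
    failure = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = failure[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        failure[i] = k
    # scan the text once, falling back via the failure function on mismatch
    k = 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = failure[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1
    return -1

print(kmp_search("ababcabcab", "abcab"))  # 2
```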

Pattern detection - distance between strings

•  Cost to convert one to the other
•  Transformations
–  Match: current characters in both strings are the same
–  Delete: delete the current character in the input string
–  Insert: insert the current character in the target string into the input string
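A sketch of this distance using exactly the three transformations above (match is free; delete and insert each cost 1, so a substitution is modeled as delete + insert):

```python
def edit_distance(source, target):
    """Minimum number of delete/insert transformations to convert
    source into target, computed by dynamic programming."""
    m, n = len(source), len(target)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                          # delete all remaining input
    for j in range(n + 1):
        d[0][j] = j                          # insert all remaining target
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if source[i - 1] == target[j - 1]:
                d[i][j] = d[i - 1][j - 1]            # match
            else:
                d[i][j] = 1 + min(d[i - 1][j],       # delete
                                  d[i][j - 1])       # insert
    return d[m][n]

print(edit_distance("time", "tame"))  # 2: delete 'i', insert 'a'
```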

Pattern detection - distance between strings

Frequent Sequence

Frequent Sequence Example

•  Purchases made by customers
•  s(<{A},{C}>) = 1/3
•  s(<{A},{D}>) = 2/3
•  s(<{B,C},{D}>) = 2/3
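The purchase table behind these numbers is in a figure not reproduced here; the three customer sequences below are a hypothetical dataset chosen to be consistent with the stated supports:

```python
def contains(sequence, pattern):
    """True if `pattern` (a list of itemsets) occurs in order within
    `sequence`, each pattern itemset a subset of some later itemset."""
    i = 0
    for itemset in sequence:
        if i < len(pattern) and pattern[i] <= itemset:
            i += 1
    return i == len(pattern)

def support(customers, pattern):
    """Fraction of customer sequences that contain the pattern."""
    return sum(contains(c, pattern) for c in customers) / len(customers)

# Hypothetical purchase sequences consistent with the slide's supports.
customers = [
    [{"A"}, {"B", "C"}, {"D"}],
    [{"B", "C"}, {"D"}],
    [{"A"}, {"B"}, {"D"}],
]
print(support(customers, [{"A"}, {"C"}]))       # 1/3
print(support(customers, [{"A"}, {"D"}]))       # 2/3
print(support(customers, [{"B", "C"}, {"D"}]))  # 2/3
```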

Frequent Sequence Lattice

SPADE

•  Sequential Pattern Discovery using Equivalence classes
•  Identifies patterns by traversing the lattice in a top-down manner.
•  Divides the lattice into equivalence classes and searches each separately.
•  ID-list: associates customers and transactions with each item.

SPADE Example

•  ID-List for Sequences of length 1:

•  Count for <{A}> is 3
•  Count for <{A},{D}> is 2

Θ1 Equivalence Classes

SPADE Algorithm

Temporal Association Rules

•  Transaction has time: <TID, CID, I1, I2, …, Im, ts, te>

•  [ts, te] is the range of time the transaction is active.
•  Types:
–  Inter-transaction rules
–  Episode rules
–  Trend dependencies
–  Sequence association rules
–  Calendric association rules

Temporal Association Rules Inter-transaction rules

•  Intra-transaction association rules – traditional association rules
•  Inter-transaction association rules
–  Rules across transactions
–  Sliding window – how far apart (in time or number of transactions) to look for related itemsets.

Temporal Association Rules Episode Rules

•  Association rules applied to sequences of events.
•  Episode – a set of event predicates and a partial ordering on them

Temporal Association Rules Trend Dependencies

•  Association rules across two database states based on time.
•  Ex: (SSN, =) ⇒ (Salary, ≤)
–  Confidence = 4/5
–  Support = 4/36

Temporal Association Rules Sequence Association Rules

•  Association rules involving sequences
•  Ex:
<{A},{C}> ⇒ <{A},{D}>
Support = 1/3
Confidence = 1

Temporal Association Rules Calendric Association Rules

•  Each transaction has a unique timestamp.
•  Group transactions based on the time interval within which they occur.
•  Identify large itemsets by looking at transactions only in this predefined interval.