
Chapter 9 – Classification and Regression Trees
Roger Bohn, April 2017

Notes based on: Data Mining for Business Intelligence, by Shmueli, Patel & Bruce


II. Results and Interpretation
● There are 1183 auction results in the observations used in the model. The four most important rules are as follows:
● Rule 1: IF (OpenPrice < 1.23), THEN COMPETITIVE, with 91% accuracy
● Rule 2: IF (1.23 <= OpenPrice < 3.718) AND (570.5 <= sellerRating < 2572), THEN COMPETITIVE, with 82% accuracy
● Rule 3: IF (OpenPrice >= 1.23) AND (sellerRating < 570.5), THEN COMPETITIVE, with 65% accuracy
● Rule 4: IF (1.23 <= OpenPrice < 3.718) AND (sellerRating >= 2572), THEN NOT COMPETITIVE, with 33% accuracy


Trees and Rules
Goal: Classify or predict an outcome based on a set of predictors. The output is a set of rules, shown as a tree.
Example:
● Goal: classify a record as “will accept credit card offer” or “will not accept”
● A rule might be: IF (Income > 92.5) AND (Education < 1.5) AND (Family <= 2.5) THEN Class = 0 (nonacceptor)
● Also called CART, Decision Trees, or just Trees
● Rules are represented by tree diagrams

Key Ideas
● Recursive partitioning: repeatedly split the records into two parts so as to achieve maximum homogeneity within the new parts
● Pruning the tree: simplify the tree by pruning peripheral branches to avoid overfitting

Recursive Partitioning


Recursive Partitioning Steps
● Pick one of the predictor variables, x_i
● Pick a value of x_i, say s_i, that divides the training data into two (not necessarily equal) portions
● Measure how “pure” or homogeneous each of the resulting portions is (“pure” = containing records of mostly one class)
● The algorithm tries different values of x_i and s_i to maximize purity in the initial split
● After you get a “maximum purity” split, repeat the process for a second split, and so on

Example: Riding Mowers

● Goal: Classify 24 households as owning or not owning riding mowers

● Predictors = Income, Lot Size


Income   Lot_Size   Ownership
60.0     18.4       owner
85.5     16.8       owner
64.8     21.6       owner
61.5     20.8       owner
87.0     23.6       owner
110.1    19.2       owner
108.0    17.6       owner
82.8     22.4       owner
69.0     20.0       owner
93.0     20.8       owner
51.0     22.0       owner
81.0     20.0       owner
75.0     19.6       non-owner
52.8     20.8       non-owner
64.8     17.2       non-owner
43.2     20.4       non-owner
84.0     17.6       non-owner
49.2     17.6       non-owner
59.4     16.0       non-owner
66.0     18.4       non-owner
47.4     16.4       non-owner
33.0     18.8       non-owner
51.0     14.0       non-owner
63.0     14.8       non-owner
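For readers who want to follow along in R (the language used for the code examples later in these notes), the table above can be typed in as a data frame. The name mowers is just a placeholder used by the later sketches; Income and Lot_Size appear to be in thousands, matching the split values quoted on later slides.

# Riding-mower data from the table above
mowers <- data.frame(
  Income   = c(60.0, 85.5, 64.8, 61.5, 87.0, 110.1, 108.0, 82.8, 69.0, 93.0, 51.0, 81.0,
               75.0, 52.8, 64.8, 43.2, 84.0, 49.2, 59.4, 66.0, 47.4, 33.0, 51.0, 63.0),
  Lot_Size = c(18.4, 16.8, 21.6, 20.8, 23.6, 19.2, 17.6, 22.4, 20.0, 20.8, 22.0, 20.0,
               19.6, 20.8, 17.2, 20.4, 17.6, 17.6, 16.0, 18.4, 16.4, 18.8, 14.0, 14.8),
  Ownership = factor(rep(c("owner", "non-owner"), each = 12))
)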

How to split
● Order records according to one variable, say Lot_Size
● Find midpoints between successive values, e.g. the first midpoint is 14.4 (halfway between 14.0 and 14.8)
● Divide records into those with Lot_Size > 14.4 and those < 14.4
● After evaluating that split, try the next one, which is 15.4 (halfway between 14.8 and 16.0); a small R sketch of the midpoint calculation follows below
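A minimal R sketch of the midpoint idea, assuming the mowers data frame entered above:

# Candidate split points for Lot_Size: midpoints between successive (distinct) values
v <- sort(unique(mowers$Lot_Size))
midpoints <- (head(v, -1) + tail(v, -1)) / 2
midpoints   # the first candidate is 14.4, halfway between 14.0 and 14.8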

Note: Categorical Variables
● Examine all possible ways in which the categories can be split
● E.g., categories A, B, C can be split 3 ways: {A} and {B, C}; {B} and {A, C}; {C} and {A, B}
● With many categories, the number of splits becomes huge (see the sketch below)
● XLMiner supports only binary categorical variables
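A small sketch of the enumeration for a hypothetical three-category variable. Each binary split is identified by the side that contains the first category, which is why there are 2^(m-1) - 1 splits for m categories:

# List all binary splits of the categories A, B, C
cats <- c("A", "B", "C")
subsets <- unlist(lapply(1:(length(cats) - 1),
                         function(k) combn(cats, k, simplify = FALSE)),
                  recursive = FALSE)
# keep only the subsets containing the first category, so each split is listed once
splits <- Filter(function(s) cats[1] %in% s, subsets)
for (s in splits)
  cat("{", paste(s, collapse = ", "), "} vs {",
      paste(setdiff(cats, s), collapse = ", "), "}\n")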


The first split: Lot Size = 19,000


Second Split: Income = $84,000


After All Splits


Measuring Impurity


Gini Index
Gini index for rectangle A (classes k = 1, …, m):

I(A) = 1 − Σ p_k²   (sum over classes k = 1, …, m)

p_k = proportion of cases in rectangle A that belong to class k
● I(A) = 0 when all cases belong to the same class
● Maximum value when all classes are equally represented (= 0.50 in the binary case)
Note: XLMiner uses a variant called the “delta splitting rule”

Entropy

entropy(A) = − Σ p_k log₂(p_k)   (sum over classes k = 1, …, m)

p_k = proportion of cases in rectangle A that belong to class k
● Entropy ranges between 0 (most pure) and log2(m) (equal representation of all m classes)
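A minimal sketch of both impurity measures as R functions, taking a vector of class labels for the records in one rectangle (the p_k are computed as class proportions):

gini <- function(y) {
  p <- table(y) / length(y)   # p_k: proportion of records in each class
  1 - sum(p^2)
}
entropy <- function(y) {
  p <- table(y) / length(y)
  p <- p[p > 0]               # drop empty classes to avoid log2(0)
  -sum(p * log2(p))
}

gini(c("owner", "owner", "non-owner", "non-owner"))   # 0.5: maximally impure two-class mix
entropy(rep("owner", 4))                              # 0: a pure rectangle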


Impurity and Recursive Partitioning

● Obtain overall impurity measure (weighted avg. of individual rectangles)

● At each successive stage, compare this measure across all possible splits in all variables

● Choose the split that reduces impurity the most

● Chosen split points become nodes on the tree
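Putting the pieces together, a sketch of the weighted-impurity comparison for one predictor, reusing the mowers data frame and the gini() helper from the earlier sketches:

# Weighted Gini impurity of splitting predictor x at value s
split_impurity <- function(x, y, s) {
  left  <- y[x <= s]
  right <- y[x >  s]
  (length(left) * gini(left) + length(right) * gini(right)) / length(y)
}

v <- sort(unique(mowers$Lot_Size))
candidates <- (head(v, -1) + tail(v, -1)) / 2
scores <- sapply(candidates, split_impurity,
                 x = mowers$Lot_Size, y = mowers$Ownership)
candidates[which.min(scores)]   # candidate split with the lowest weighted impurity

In a full implementation this search runs over every predictor, and the best (variable, value) pair becomes the next node.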


First Split – The Tree


Tree after three splits


Tree Structure

● Split points become nodes on tree (circles with split value in center)

● Rectangles represent “leaves” (terminal points, no further splits, classification value noted)

● Numbers on lines between nodes indicate # cases

● Read down the tree to derive a rule, e.g.: if lot size < 19 and income > 84.75, then class = “owner”
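In R, trees like this are commonly grown with the rpart package, an implementation of CART. A sketch on the riding-mower data frame from earlier (the control settings are loosened so the small dataset can be split at all):

library(rpart)

fit <- rpart(Ownership ~ Income + Lot_Size, data = mowers,
             method = "class",                        # classification tree
             control = rpart.control(minsplit = 2, cp = 0))

print(fit)                 # the rules in text form: split, n, and predicted class per node
plot(fit, margin = 0.1)    # draw the tree...
text(fit, use.n = TRUE)    # ...with split labels and case counts at the leaves

The exact split values rpart finds may differ slightly from the XLMiner output shown in these slides.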


Determining Leaf Node Label

● Each leaf node label is determined by “voting” of the records within it, and by the cutoff value

● Records within each leaf node are from the training data

● Default cutoff=0.5 means that the leaf node’s label is the majority class.

● Cutoff = 0.75: requires majority of 75% or more “1” records in the leaf to label it a “1” node
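With an rpart classification tree, the cutoff idea can be applied to the predicted class probabilities (the class proportions in each leaf). A sketch assuming a fitted tree tree_model for an outcome coded "0"/"1" and a validation data frame valid:

prob1 <- predict(tree_model, newdata = valid, type = "prob")[, "1"]   # P(class = "1") per record

pred_default <- ifelse(prob1 >= 0.5,  "1", "0")   # default cutoff: majority class in the leaf
pred_strict  <- ifelse(prob1 >= 0.75, "1", "0")   # cutoff 0.75: needs 75%+ "1" records in the leaf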


Tree after all splits


The Overfitting Problem


Stopping Tree Growth
● The natural end of the process is 100% purity in each leaf
● This overfits the data: the model ends up fitting noise in the data
● Overfitting leads to low predictive accuracy on new data
● Past a certain point, the error rate for the validation data starts to increase


Full Tree Error Rate


CHAID

CHAID, which is older than CART, uses a chi-square statistical test to limit tree growth

Splitting stops when purity improvement is not statistically significant


Pruning
● CART lets the tree grow to full extent, then prunes it back

● Idea is to find that point at which the validation error begins to rise

● Generate successively smaller trees by pruning leaves

● At each pruning stage, multiple trees are possible

● Use cost complexity to choose the best tree at that stage


Cost Complexity

CC(T) = Err(T) + α L(T)

where:
CC(T) = cost complexity of a tree
Err(T) = proportion of misclassified records
L(T) = number of leaves (terminal nodes) in the tree
α = penalty factor attached to tree size (set by user)

● Among trees of a given size, choose the one with the lowest CC
● Do this for each size of tree
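A toy illustration of the formula; the error rates, leaf counts, and α below are made-up numbers purely for illustration (rpart's complexity parameter cp plays a closely related role):

cc <- function(err, n_leaves, alpha) err + alpha * n_leaves   # CC(T) = Err(T) + alpha * L(T)

cc(err = 0.18, n_leaves = 7,  alpha = 0.01)   # 0.25
cc(err = 0.16, n_leaves = 12, alpha = 0.01)   # 0.28 -- lower raw error, but penalized for size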


Using Validation Error to Prune
The pruning process yields a set of trees of different sizes and associated error rates. Two trees are of interest:
● Minimum error tree: has the lowest error rate on the validation data
● Best pruned tree: the smallest tree within one standard error of the minimum error; this adds a bonus for simplicity/parsimony
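In rpart the same idea is available through the cp table, where the cross-validated error (xerror) stands in for the validation error described here. A sketch assuming a tree fit grown with cp = 0 (for example the riding-mower tree above):

printcp(fit)                                   # CP, tree size, and cross-validated error (xerror)

cp_tab <- fit$cptable
best   <- which.min(cp_tab[, "xerror"])                   # minimum-error tree
limit  <- cp_tab[best, "xerror"] + cp_tab[best, "xstd"]   # one standard error above the minimum
one_se <- which(cp_tab[, "xerror"] <= limit)[1]           # smallest tree within 1 SE ("best pruned")

pruned_min <- prune(fit, cp = cp_tab[best,   "CP"])
pruned_1se <- prune(fit, cp = cp_tab[one_se, "CP"])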


Error rates on pruned trees


Regression Trees


Regression Trees for Prediction

● Used with a continuous outcome variable
● Procedure is similar to a classification tree
● Many splits are attempted; choose the one that minimizes impurity

Differences from CT (classification trees)
● The prediction is computed as the average of the numerical target variable in the rectangle (in CT it is the majority vote)
● Impurity is measured by the sum of squared deviations from the leaf mean
● Performance is measured by RMSE (root mean squared error)
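A sketch of the regression-tree version in rpart (method = "anova") with RMSE computed on a validation partition; train, valid, and the outcome Price are placeholder names:

library(rpart)

reg_fit <- rpart(Price ~ ., data = train, method = "anova")   # leaf prediction = mean of Price in the rectangle

pred <- predict(reg_fit, newdata = valid)
sqrt(mean((valid$Price - pred)^2))   # RMSE on the validation data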


Advantages of trees
● Easy to use and understand
● Produce rules that are easy to interpret & implement
● Variable selection & reduction is automatic
● Do not require the assumptions of statistical models
● Work fine with missing data


Disadvantages

● May not perform well where there is structure in the data that is not well captured by horizontal or vertical splits

● Very simple, don’t always give best fits

Summary
● Classification and Regression Trees are an easily understandable and transparent method for predicting or classifying new records

● A tree is a graphical representation of a set of rules

● Trees must be pruned to avoid over-fitting of the training data

● As trees do not make any assumptions about the data structure, they usually require large samples


Toyota Corolla Prices
● Case analysis
● Higher or lower than median price?
● 1436 records, 38 attributes
● Textbook page 237
● Clean up and …

Lots of variables. Which to use?
● Look for unimportant variables
● Little variation in the data
● Zero correlation (not always safe)
● Look for groups of variables (high correlation)
● Logically irrelevant to price (judgment)
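A rough base-R sketch of these screening checks, assuming the Toyota Corolla data has been read into a data frame called corolla:

num <- corolla[sapply(corolla, is.numeric)]          # numeric columns only

sort(sapply(num, var, na.rm = TRUE))                 # tiny variance = little information
round(cor(num, use = "pairwise.complete.obs"), 2)    # scan for groups of highly correlated variables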


ggpairs example
Source: http://solomonmessing.files.wordpress.com/2014/01/ggpairs3.png

# Uncomment these lines and install if necessary:
# install.packages('GGally')
# install.packages('ggplot2')
# install.packages('scales')
# install.packages('memisc')

library(ggplot2)
library(GGally)
library(scales)

data(diamonds)                                                  # example data from ggplot2
diasamp = diamonds[sample(1:length(diamonds$price), 10000), ]   # random sample of 10,000 rows
ggpairs(diasamp, params = c(shape = I("."), outlier.shape = I(".")))   # scatterplot matrix (older GGally syntax from the original blog post)


Another ggpairs example:
https://tgmstat.wordpress.com/2013/11/13/plot-matrix-with-the-r-package-ggally/

One-variable regression lines
https://www.r-bloggers.com/multiple-regression-lines-in-ggpairs/

Notice 3 regions; each can have different information.

Excerpt from R in Action, Chapter 11 (Intermediate graphs), p. 272:

       mpg    cyl   disp     hp    drat     wt    qsec     vs      am   gear    carb
disp -0.85   0.90   1.00   0.79  -0.710   0.89  -0.434  -0.71  -0.591  -0.56   0.395
hp   -0.78   0.83   0.79   1.00  -0.449   0.66  -0.708  -0.72  -0.243  -0.13   0.750
drat  0.68  -0.70  -0.71  -0.45   1.000  -0.71   0.091   0.44   0.713   0.70  -0.091
wt   -0.87   0.78   0.89   0.66  -0.712   1.00  -0.175  -0.55  -0.692  -0.58   0.428
qsec  0.42  -0.59  -0.43  -0.71   0.091  -0.17   1.000   0.74  -0.230  -0.21  -0.656
vs    0.66  -0.81  -0.71  -0.72   0.440  -0.55   0.745   1.00   0.168   0.21  -0.570
am    0.60  -0.52  -0.59  -0.24   0.713  -0.69  -0.230   0.17   1.000   0.79   0.058
gear  0.48  -0.49  -0.56  -0.13   0.700  -0.58  -0.213   0.21   0.794   1.00   0.274
carb -0.55   0.53   0.39   0.75  -0.091   0.43  -0.656  -0.57   0.058   0.27   1.000
(The mpg and cyl rows of the full cor(mtcars) matrix are not shown in this excerpt.)

Which variables are most related? Which variables are relatively independent? Are there any patterns? It isn't that easy to tell from the correlation matrix without significant time and effort (and probably a set of colored pens to make notations).

You can display that same correlation matrix using the corrgram() function in the corrgram package (see figure 11.17). The code is

library(corrgram)
corrgram(mtcars, order=TRUE, lower.panel=panel.shade,
         upper.panel=panel.pie, text.panel=panel.txt,
         main="Corrgram of mtcars intercorrelations")


Figure 11.17 Corrgram of the correlations among the variables in the mtcars data frame. Rows and columns have been reordered using principal components analysis.



Corolla tree: age, km,


How to build in practice
● Start with every plausible variable
● Let the algorithm decide what belongs in the final model
● Do NOT pre-screen based on theory
● DO throw out obvious junk
● Consider pruning highly correlated variables (at least at first)

Evaluating results: Confusion matrix
● Fit the model using only training data
● Evaluate the model using only testing (or validation) data
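A sketch in R, assuming a fitted classification tree tree_model, a validation data frame valid, and the actual class stored in valid$Class:

pred <- predict(tree_model, newdata = valid, type = "class")

conf <- table(Predicted = pred, Actual = valid$Class)   # confusion matrix on the validation data
conf
sum(diag(conf)) / sum(conf)                             # overall accuracy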


Report what you DID
● Don't omit so many variables unless you are sure they don't matter.

Typical results
● Age (in months)
● Km traveled
● Air conditioning
● Weight

Air conditioning
● AC: Yes/No
● Automatic AC: Yes/No
● The model thinks these are 2 independent variables
● Use outside knowledge: convert this to 1 variable with 3 levels
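A sketch of the recode, assuming the Corolla data frame corolla has 0/1 dummy columns named Airco and Automatic_airco (column names in the actual file may differ, and the level label "manual" is our own wording):

# Combine the two yes/no dummies into a single 3-level factor
corolla$AircoType <- with(corolla,
  ifelse(Automatic_airco == 1, "automatic",
  ifelse(Airco == 1,           "manual",
                               "none")))
corolla$AircoType <- factor(corolla$AircoType, levels = c("none", "manual", "automatic"))
table(corolla$AircoType)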
