Ensemble Learning


Page 1: Ensemble Learning

Ensemble Learning

Textbook, Learning From Examples, Section 10-12 (pp. 56-66).

Page 5: Ensemble Learning

From the book Guesstimation

• How much domestic trash and recycling is collected each year in the US (in tons)?

– Individual answers

– Confidence-weighted ensemble

Page 6: Ensemble Learning

• Answer: 245 million tons

Page 7: Ensemble Learning

Ensemble learning

Training sets      Hypotheses      Ensemble hypothesis

  S1  →  h1
  S2  →  h2
  ...
  SN  →  hN

The hypotheses h1, ..., hN are combined into a single ensemble hypothesis H.

Page 8: Ensemble Learning

Advantages of ensemble learning

• Can be very effective at reducing generalization error! (E.g., by voting.)

• Ideal case: the hi have independent errors

Page 9: Ensemble Learning

Example

Given three hypotheses, h1, h2, h3, with hi(x) ∈ {−1, 1}.

Suppose each hi has 60% generalization accuracy, and assume errors are independent.

Now suppose H(x) is the majority vote of h1, h2, and h3. What is the probability that H is correct?

Page 10: Ensemble Learning

h1   h2   h3   H   probability
C    C    C    C   .216
C    C    I    C   .144
C    I    I    I   .096
C    I    C    C   .144
I    C    C    C   .144
I    I    C    I   .096
I    C    I    I   .096
I    I    I    I   .064

Total probability correct: .648

Page 11: Ensemble Learning

Another Example

Again, given three hypotheses, h1, h2, h3.

Suppose each hi has 40% generalization accuracy, and assume errors are independent.

Now suppose we classify x by the majority vote of h1, h2, and h3. What is the probability that the classification is correct?

Page 12: Ensemble Learning

h1   h2   h3   H   probability
C    C    C    C   .064
C    C    I    C   .096
C    I    I    I   .144
C    I    C    C   .096
I    C    C    C   .096
I    I    C    I   .144
I    C    I    I   .144
I    I    I    I   .216

Total probability correct: .352

Page 13: Ensemble Learning

General case

In general, if hypotheses h1, ..., hM all have generalization accuracy A, what is the probability that a majority vote will be correct?
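Assuming the errors are independent and M is odd, the majority vote is correct exactly when more than half of the hi are correct, so (a standard binomial calculation; the formula itself is not spelled out on the slide):

  P(majority correct) = Σ (from k = ⌈M/2⌉ to M) C(M, k) · A^k · (1 − A)^(M − k)

A quick Python check of this against the two worked examples above (my own sketch):

    from math import comb

    def majority_correct_prob(M, A):
        """Probability that a majority vote of M independent classifiers,
        each with accuracy A, is correct (M assumed odd)."""
        return sum(comb(M, k) * A**k * (1 - A)**(M - k)
                   for k in range(M // 2 + 1, M + 1))

    print(majority_correct_prob(3, 0.6))   # 0.648, matching Page 10
    print(majority_correct_prob(3, 0.4))   # 0.352, matching Page 12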

Page 14: Ensemble Learning

Possible problems with ensemble learning

• Errors are typically not independent

• Training time and classification time are increased by a factor of M.

• Hard to explain how ensemble hypothesis does classification.

• How to get enough data to create M separate data sets S1, ..., SM?

Page 15: Ensemble Learning

• Three popular methods:

– Voting: Train a classifier on M different training sets Si to obtain M different classifiers hi. For a new instance x, define H(x) as the confidence-weighted vote

  H(x) = sign(Σi αi hi(x))

where αi is a confidence measure for classifier hi.
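A minimal Python sketch of this confidence-weighted vote, assuming each classifier is a function returning +1 or −1 (the names are my own, not from the slides):

    def weighted_vote(classifiers, alphas, x):
        """H(x): sign of the confidence-weighted sum of the classifiers' votes."""
        total = sum(alpha * h(x) for h, alpha in zip(classifiers, alphas))
        return 1 if total >= 0 else -1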

Page 16: Ensemble Learning

– Bagging (Breiman, 1990s): To create Si, create “bootstrap replicates” of the original training set S.

– Boosting (Schapire & Freund, 1990s): To create Si, reweight the examples in the original training set S as a function of whether or not they were misclassified on the previous round.
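A minimal sketch of building one bootstrap replicate for bagging, assuming the training set S is a plain Python list of examples (the helper name is my own):

    import random

    def bootstrap_replicate(S):
        """Draw |S| examples from S uniformly at random, with replacement."""
        return [random.choice(S) for _ in range(len(S))]

    # Each bagged classifier hi is then trained on its own Si = bootstrap_replicate(S).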

Page 17: Ensemble Learning

Adaptive Boosting (Adaboost)

A method for combining different weak hypotheses (training error close to but less than 50%) to produce a strong hypothesis (training error close to 0%)

Page 18: Ensemble Learning

Sketch of algorithm

Given examples S and learning algorithm L, with | S | = N

• Initialize probability distribution over examples w1(i) = 1/N .

• Repeatedly run L on training sets St ⊆ S to produce h1, h2, ..., hK.

– At each step, derive St from S by choosing examples probabilistically according to probability distribution wt . Use St to learn ht.

• At each step, derive wt+1 by giving more probability to examples that were misclassified at step t.

• The final ensemble classifier H is a weighted sum of the ht’s, with each weight being a function of the corresponding ht’s error on its training set.

Page 19: Ensemble Learning

Adaboost algorithm

• Given S = {(x1, y1), ..., (xN, yN)} where xi ∈ X, yi ∈ {+1, −1}

• Initialize w1(i) = 1/N. (Uniform distribution over data)

Page 20: Ensemble Learning

• For t = 1, ..., K:

– Select new training set St from S with replacement, according to wt

– Train L on St to obtain hypothesis ht

– Compute the training error εt of ht on S:

  εt = Σ wt(i), summed over the examples xi that ht misclassifies

– Compute the coefficient

  αt = (1/2) ln((1 − εt) / εt)

Page 21: Ensemble Learning

– Compute new weights on the data. For i = 1 to N:

  wt+1(i) = wt(i) · exp(−αt yi ht(xi)) / Zt

where Zt is a normalization factor chosen so that wt+1 will be a probability distribution:

  Zt = Σi wt(i) · exp(−αt yi ht(xi))

The factor exp(−αt yi ht(xi)) shrinks the weight of correctly classified examples and increases the weight of misclassified ones.

Page 22: Ensemble Learning

• At the end of K iterations of this algorithm, we have

h1, h2, . . . , hK

We also have α1, α2, ..., αK, where

  αt = (1/2) ln((1 − εt) / εt)

• Ensemble classifier:

  H(x) = sign(Σt αt ht(x)), the sign of the α-weighted vote of h1, ..., hK

• Note that hypotheses with higher accuracy on their training sets are weighted more strongly.
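Putting Pages 18–22 together, here is a compact, self-contained Python sketch of the resampling variant of Adaboost described above; the decision-stump weak learner and all names are my own stand-ins, not from the slides:

    import math
    import random

    def train_stump(St):
        """Stand-in weak learner L: a 1-D threshold classifier ("decision stump").
        St is a list of (x, y) pairs with scalar x and y in {+1, -1}."""
        best_err, best_h = None, None
        xs = sorted({x for x, _ in St})
        for thr in xs + [xs[-1] + 1]:
            for sgn in (+1, -1):
                h = lambda x, thr=thr, sgn=sgn: sgn if x < thr else -sgn
                err = sum(1 for x, y in St if h(x) != y)
                if best_err is None or err < best_err:
                    best_err, best_h = err, h
        return best_h

    def adaboost(S, K, learner=train_stump):
        """Resampling Adaboost as sketched above: returns the ht's and alpha's."""
        N = len(S)
        w = [1.0 / N] * N                            # w1(i) = 1/N
        hs, alphas = [], []
        for t in range(K):
            St = random.choices(S, weights=w, k=N)   # sample St from S according to wt
            ht = learner(St)                         # train L on St
            # weighted training error of ht on the full set S
            eps = sum(w[i] for i, (x, y) in enumerate(S) if ht(x) != y)
            eps = min(max(eps, 1e-10), 1 - 1e-10)    # guard against log(0)
            alpha = 0.5 * math.log((1 - eps) / eps)
            # reweight: misclassified examples get more probability, then normalize
            w = [w[i] * math.exp(-alpha * y * ht(x)) for i, (x, y) in enumerate(S)]
            Z = sum(w)
            w = [wi / Z for wi in w]
            hs.append(ht)
            alphas.append(alpha)
        return hs, alphas

    def H(x, hs, alphas):
        """Ensemble classifier: sign of the alpha-weighted vote."""
        return 1 if sum(a * h(x) for h, a in zip(hs, alphas)) >= 0 else -1

    # Hypothetical 1-D usage:
    S = [(-2.0, -1), (-1.0, -1), (-0.5, 1), (0.5, 1), (1.0, 1), (2.0, -1)]
    hs, alphas = adaboost(S, K=5)
    print([H(x, hs, alphas) for x, _ in S])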

Page 23: Ensemble Learning

A Hypothetical Example

The training set is S = {x1, ..., x8}, where {x1, x2, x3, x4} are class +1 and {x5, x6, x7, x8} are class −1.

t = 1 :

w1 = {1/8, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8}

S1 = {x1, x2, x2, x5, x5, x6, x7, x8} (notice some repeats)

Train classifier on S1 to get h1

Run h1 on S. Suppose classifications are: {1, −1, −1, −1, −1, −1, −1, −1}

• Calculate error: ε1 = w1(2) + w1(3) + w1(4) = 3/8 = 0.375 (h1 misclassifies x2, x3, and x4)

Page 24: Ensemble Learning

Calculate α's: α1 = (1/2) ln((1 − 0.375) / 0.375) ≈ 0.255

Calculate new w's: multiply the weight of each correctly classified example by exp(−α1) ≈ 0.77 and each misclassified example by exp(α1) ≈ 1.29, then normalize; this gives (up to rounding) the w2 shown on the next slide.

Page 25: Ensemble Learning

t = 2

w2 = {0.102, 0.163, 0.163, 0.163, 0.102, 0.102, 0.102, 0.102}

S2 = {x1, x2, x2, x3, x4, x4, x7, x8}

Train classifier on S2 to get h2

Run h2 on S. Suppose classifications are: {1, 1, 1, 1, 1, 1, 1, 1}

Calculate error: ε2 = w2(5) + w2(6) + w2(7) + w2(8) ≈ 4 × 0.102 ≈ 0.408 (h2 misclassifies x5 through x8)

Page 26: Ensemble Learning

Calculate α's: α2 = (1/2) ln((1 − 0.408) / 0.408) ≈ 0.19

Calculate new w's: again scale down the weights of the correctly classified examples, scale up the misclassified ones, and normalize; this gives (up to rounding) the w3 shown on the next slide.

Page 27: Ensemble Learning

t = 3

w3 = {0.082, 0.139, 0.139, 0.139, 0.125, 0.125, 0.125, 0.125}

S3 = {x2, x3, x3, x3, x5, x6, x7, x8}

Train classifier on S3 to get h3

Run h3 on S. Suppose classifications are: {1, 1, −1, 1, −1, −1, 1, −1}

Calculate error: ε3 = w3(3) + w3(7) = 0.139 + 0.125 = 0.264 (h3 misclassifies x3 and x7)

Page 28: Ensemble Learning

• Calculate α's: α3 = (1/2) ln((1 − 0.264) / 0.264) ≈ 0.51

• Ensemble classifier: H(x) = sign(α1 h1(x) + α2 h2(x) + α3 h3(x)) ≈ sign(0.26 h1(x) + 0.19 h2(x) + 0.51 h3(x))

Page 29: Ensemble Learning

What is the accuracy of H on the training data?

Recall the training set: {x1, x2, x3, x4} are class +1 and {x5, x6, x7, x8} are class −1.

Example   Actual class   h1   h2   h3
x1             1           1    1    1
x2             1          −1    1    1
x3             1          −1    1   −1
x4             1           1    1    1
x5            −1          −1    1   −1
x6            −1          −1    1   −1
x7            −1           1    1    1
x8            −1          −1    1   −1
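As a check on that question, a small Python sketch (my own, not from the slides) that combines the table's h outputs with the α values from the worked example; the exact α's depend on rounding, but the resulting votes, and hence the 6-of-8 (75%) training accuracy, do not:

    import math

    y      = [ 1,  1,  1,  1, -1, -1, -1, -1]   # actual classes of x1..x8
    h1_out = [ 1, -1, -1,  1, -1, -1,  1, -1]   # h1 column of the table above
    h2_out = [ 1,  1,  1,  1,  1,  1,  1,  1]   # h2 column
    h3_out = [ 1,  1, -1,  1, -1, -1,  1, -1]   # h3 column
    alphas = [0.5 * math.log((1 - e) / e) for e in (0.375, 0.408, 0.264)]

    correct = 0
    for yi, a, b, c in zip(y, h1_out, h2_out, h3_out):
        vote = alphas[0] * a + alphas[1] * b + alphas[2] * c
        correct += (1 if vote >= 0 else -1) == yi
    print(correct, "of", len(y), "correct")     # 6 of 8 correct (75%)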

Page 30: Ensemble Learning

Sources of Classifier Error: Bias, Variance, and Noise

• Bias: the classifier cannot learn the correct hypothesis (no matter what training data is given), so an incorrect hypothesis h is learned. The bias is the average error of h over all possible training sets.

• Variance: the training data is not representative enough of all data, so the learned classifier h varies from one training set to another.

• Noise: the training data contains errors, so an incorrect hypothesis h is learned.

Page 31: Ensemble Learning

Adaboost seems to reduce both bias and variance.

Adaboost does not seem to overfit for increasing K.