Transcript of Adaboost Matas
7/31/2019 Adaboost Matas
1/136
AdaBoost
Jiří Matas and Jan Šochman
Centre for Machine Perception, Czech Technical University, Prague
http://cmp.felk.cvut.cz
Presentation Outline
AdaBoost algorithm
  Why is it of interest? How does it work? Why does it work?
AdaBoost variants
AdaBoost with a Totally Corrective Step (TCS)
Experiments with a Totally Corrective Step
Introduction
1990 Boost-by-majority algorithm (Freund)
1995 AdaBoost (Freund & Schapire)
1997 Generalized version of AdaBoost (Schapire & Singer)
2001 AdaBoost in Face Detection (Viola & Jones)
Interesting properties:
AB is a linear classifier with all its desirable properties.
AB output converges to the logarithm of the likelihood ratio.
AB has good generalization properties.
AB is a feature selector with a principled strategy (minimisation of an upper bound on the empirical error).
AB is close to sequential decision making (it produces a sequence of gradually more complex classifiers).
What is AdaBoost?
AdaBoost is an algorithm for constructing a strong classifier as a linear combination

  f(x) = Σ_{t=1}^{T} α_t h_t(x)

of simple weak classifiers h_t(x).
Terminology
  h_t(x) ... weak or basis classifier, hypothesis, feature
  H(x) = sign(f(x)) ... strong or final classifier/hypothesis
Comments
  The h_t(x)'s can be thought of as features.
  Often (typically) the set H = {h(x)} is infinite.
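The weighted vote above can be sketched in a few lines of Python. This is a minimal illustration with our own names and toy stumps, not code from the presentation:

```python
# Evaluating the strong classifier f(x) = sum_t alpha_t * h_t(x),
# with H(x) = sign(f(x)). All names here are illustrative.

def strong_classifier(alphas, weak_classifiers, x):
    """Return H(x) = sign(sum_t alpha_t * h_t(x)); ties broken as +1."""
    f = sum(a * h(x) for a, h in zip(alphas, weak_classifiers))
    return 1 if f >= 0 else -1

# Example: three threshold stumps on a scalar x, each voting in {-1, +1}.
h1 = lambda x: 1 if x > 0 else -1
h2 = lambda x: 1 if x > 2 else -1
h3 = lambda x: 1 if x > -1 else -1

print(strong_classifier([0.9, 0.3, 0.5], [h1, h2, h3], 1.0))   # -> 1
```

For x = 1.0 the vote is 0.9 − 0.3 + 0.5 = 1.1 > 0, so h1 and h3 outvote h2.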
(Discrete) AdaBoost Algorithm [Schapire & Singer, 1997]
Given: (x_1, y_1), ..., (x_m, y_m); x_i ∈ X, y_i ∈ {-1, +1}
Initialize weights D_1(i) = 1/m
For t = 1, ..., T:
  1. Call WeakLearn, which returns the weak classifier h_t : X → {-1, +1} with minimum error w.r.t. distribution D_t
  2. Choose α_t ∈ R
  3. Update
       D_{t+1}(i) = D_t(i) exp(-α_t y_i h_t(x_i)) / Z_t
     where Z_t is a normalization factor chosen so that D_{t+1} is a distribution
Output the strong classifier:
  H(x) = sign( Σ_{t=1}^{T} α_t h_t(x) )
Comments
  The computational complexity of selecting h_t is independent of t.
  All information about previously selected features is captured in D_t!
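The whole loop can be sketched end-to-end. The example below is our own construction (not the presentation's demo): the weak-classifier pool H consists of the three coordinate functions on {-1, +1}^3, the training set is the 3-bit majority problem, and α_t uses the optimal choice derived later in the presentation, α_t = ½ log((1 − ε_t)/ε_t):

```python
import math
from itertools import product

# Training set: all 8 points of {-1,+1}^3, labeled by the majority of the bits.
# Weak classifier pool H: h_j(x) = x[j], the j-th coordinate.
xs = list(product((-1, 1), repeat=3))
ys = [1 if sum(p) > 0 else -1 for p in xs]

def adaboost(xs, ys, T):
    m = len(xs)
    D = [1.0 / m] * m                               # D_1(i) = 1/m
    model = []                                      # list of (alpha_t, coordinate j)
    for _ in range(T):
        # WeakLearn: pick the coordinate with the smallest weighted error
        errs = [sum(D[i] for i in range(m) if xs[i][j] != ys[i])
                for j in range(3)]
        eps = min(errs)
        j = errs.index(eps)
        if eps >= 0.5:                              # prerequisite eps_t < 1/2
            break
        alpha = 0.5 * math.log((1 - eps) / eps)
        model.append((alpha, j))
        # Reweight: D_{t+1}(i) = D_t(i) exp(-alpha_t y_i h_t(x_i)) / Z_t
        D = [d * math.exp(-alpha * y * x[j]) for d, y, x in zip(D, ys, xs)]
        Z = sum(D)
        D = [d / Z for d in D]
    return model

def predict(model, x):
    return 1 if sum(a * x[j] for a, j in model) >= 0 else -1

model = adaboost(xs, ys, T=3)
train_err = sum(predict(model, x) != y for x, y in zip(xs, ys)) / len(xs)
print(train_err)   # -> 0.0
```

No single coordinate classifies better than 6/8 here, yet after three rounds the weighted vote reaches zero training error; note how each round only consults the current D_t, exactly as the comment above says.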
WeakLearn
Loop step: Call WeakLearn, given distribution D_t; it returns a weak classifier h_t : X → {-1, +1} from H = {h(x)}.
Select the weak classifier with the smallest weighted error:
  h_t = arg min_{h_j ∈ H} ε_j,  where ε_j = Σ_{i=1}^{m} D_t(i) [y_i ≠ h_j(x_i)]
Prerequisite: ε_t < 1/2 (otherwise stop)
WeakLearn examples:
  Decision tree builder, perceptron learning rule (H infinite)
  Selecting the best one from a given finite set H
Demonstration example
  Training set: one class drawn from N(0, 1), the other from a ring-shaped density ∝ e^{-(r-4)²/2}. Weak classifier = perceptron.
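One WeakLearn call over a finite pool can be sketched as follows. The slides' demonstration uses a perceptron; for brevity this sketch (our names throughout) uses 1-D threshold stumps h(x) = s · sign(x − θ) and returns the hypothesis with minimum weighted error ε = Σ_i D(i)[y_i ≠ h(x_i)]:

```python
# WeakLearn over a finite pool of threshold stumps (illustrative sketch).

def weak_learn(xs, ys, D, thresholds):
    best = None                         # (eps, theta, s)
    for theta in thresholds:
        for s in (+1, -1):
            # weighted error of h(x) = s if x > theta else -s
            eps = sum(d for x, y, d in zip(xs, ys, D)
                      if (s if x > theta else -s) != y)
            if best is None or eps < best[0]:
                best = (eps, theta, s)
    return best

xs = [-2.0, -1.0, 0.5, 1.5, 3.0]
ys = [-1, -1, -1, 1, 1]
D = [0.2] * 5                           # uniform weights
eps, theta, s = weak_learn(xs, ys, D, thresholds=[-1.5, 0.0, 1.0, 2.0])
print(eps, theta, s)   # the stump at theta=1.0, s=+1 splits this set perfectly
```

Because the error is weighted by D, the same pool can return a different stump once the distribution shifts toward previously misclassified examples.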
AdaBoost as a Minimiser of an Upper Bound on the Empirical Error
The main objective is to minimize ε_tr = (1/m) |{i : H(x_i) ≠ y_i}|.
It can be upper bounded by ε_tr(H) ≤ Π_{t=1}^{T} Z_t.
How to set α_t?
  Select α_t to greedily minimize Z_t(α) in each step.
  Z_t(α) is a convex differentiable function with one extremum.
  ⇒ For h_t(x) ∈ {-1, +1} the optimal α_t = (1/2) log((1 + r_t)/(1 - r_t)), where r_t = Σ_{i=1}^{m} D_t(i) h_t(x_i) y_i.
  Z_t = 2√(ε_t(1 - ε_t)) ≤ 1 for the optimal α_t ⇒ justification of the selection of h_t according to ε_t.
Comments
  The process of selecting α_t and h_t(x) can be interpreted as a single optimization step minimising the upper bound on the empirical error. Improvement of the bound is guaranteed, provided that ε_t < 1/2.
  The process can be interpreted as a component-wise local optimization (Gauss-Southwell iteration) in the (possibly infinite-dimensional!) space of α = (α_1, α_2, ...), starting from α⁰ = (0, 0, ...).
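The optimal α_t can be checked numerically. The sketch below (our own, with made-up weights) verifies that α = ½ log((1+r)/(1−r)) attains Z = 2√(ε(1−ε)), and that no point on a coarse grid beats it, consistent with Z_t(α) being convex with a single extremum:

```python
import math

def Z(alpha, D, margins):          # margins[i] = y_i * h(x_i), each in {-1,+1}
    return sum(d * math.exp(-alpha * u) for d, u in zip(D, margins))

D = [0.1, 0.2, 0.3, 0.4]
margins = [1, 1, -1, 1]            # example 3 is misclassified
eps = sum(d for d, u in zip(D, margins) if u == -1)   # eps = 0.3
r = sum(d * u for d, u in zip(D, margins))            # r = 1 - 2*eps = 0.4
alpha_opt = 0.5 * math.log((1 + r) / (1 - r))

z_opt = Z(alpha_opt, D, margins)
assert abs(z_opt - 2 * math.sqrt(eps * (1 - eps))) < 1e-12
# a coarse grid search over alpha cannot beat the closed-form optimum:
assert all(Z(a / 100.0, D, margins) >= z_opt for a in range(-500, 500))
print(alpha_opt, z_opt)
```

Since ε = 0.3 < 1/2 here, Z_t ≈ 0.917 < 1, so this step strictly shrinks the product bound on the empirical error.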
Reweighting
Effect on the training set
Reweighting formula:
  D_{t+1}(i) = D_t(i) exp(-α_t y_i h_t(x_i)) / Z_t
             = exp(-y_i Σ_{q=1}^{t} α_q h_q(x_i)) / (m Π_{q=1}^{t} Z_q)
  exp(-α_t y_i h_t(x_i)) is < 1 for y_i = h_t(x_i) and > 1 for y_i ≠ h_t(x_i)
⇒ Increase (decrease) the weight of wrongly (correctly) classified examples. The weight is the upper bound on the error of a given example!
[Figure: the exponential loss e^{-yf(x)} upper-bounding the 0/1 error (err) as a function of yf(x) ∈ [-2, 2].]
Effect on h_t
  α_t minimizes Z_t ⇒ Σ_{i: h_t(x_i)=y_i} D_{t+1}(i) = Σ_{i: h_t(x_i)≠y_i} D_{t+1}(i)
  ⇒ the error of h_t on D_{t+1} is 1/2
  ⇒ the next weak classifier is the most "independent" one
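The "effect on h_t" is easy to verify numerically. This small sketch (our own weights and margins) applies the reweighting formula with the optimal α_t and confirms that the weak classifier just selected has weighted error exactly 1/2 on D_{t+1}:

```python
import math

D = [0.05, 0.15, 0.25, 0.25, 0.30]   # current distribution D_t
margins = [1, -1, 1, 1, -1]          # y_i * h_t(x_i)
eps = sum(d for d, u in zip(D, margins) if u == -1)   # eps_t = 0.45
alpha = 0.5 * math.log((1 - eps) / eps)               # optimal alpha_t

# D_{t+1}(i) = D_t(i) exp(-alpha_t y_i h_t(x_i)) / Z_t
D_next = [d * math.exp(-alpha * u) for d, u in zip(D, margins)]
Z = sum(D_next)
D_next = [d / Z for d in D_next]

err_next = sum(d for d, u in zip(D_next, margins) if u == -1)
print(round(err_next, 10))           # -> 0.5
```

Intuitively, the update rescales the wrong and correct masses to √(ε(1−ε)) each, so h_t carries no information about D_{t+1} and the next round must find something new.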
Summary of the Algorithm
Initialization: D_1(i) = 1/m
For t = 1, ..., T:
  Find h_t = arg min_{h_j ∈ H} ε_j,  ε_j = Σ_{i=1}^{m} D_t(i) [y_i ≠ h_j(x_i)]
  If ε_t ≥ 1/2 then stop
  Set α_t = (1/2) log((1 + r_t)/(1 - r_t))
  Update D_{t+1}(i) = D_t(i) exp(-α_t y_i h_t(x_i)) / Z_t
Output the final classifier:
  H(x) = sign( Σ_{t=1}^{T} α_t h_t(x) )
[The demonstration slides step through t = 1, 2, ..., 7 and t = 40, showing the weight distribution D_t over the 40 training examples at each round; figures omitted.]
Does AdaBoost generalize?
Margins in SVM:
  max min_{(x,y) ∈ S} y(α · h(x)) / ||α||_2
Margins in AdaBoost:
  max min_{(x,y) ∈ S} y(α · h(x)) / ||α||_1
Maximizing margins in AdaBoost:
  P_S[y f(x) ≤ θ] ≤ 2^T Π_{t=1}^{T} √(ε_t^{1-θ} (1 - ε_t)^{1+θ}),  where f(x) = (α · h(x)) / ||α||_1
Upper bound based on margin:
  P_D[y f(x) ≤ 0] ≤ P_S[y f(x) ≤ θ] + O( (1/√m) (d log²(m/d)/θ² + log(1/δ))^{1/2} )
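The quantity appearing in the bound is the L1-normalized margin y·f(x). This small sketch (illustrative numbers, our own) computes the margin of each training point and the fraction P_S[y f(x) ≤ θ] below a chosen θ:

```python
# Normalized margins y*f(x) with f(x) = sum_t alpha_t h_t(x) / ||alpha||_1,
# so every margin lies in [-1, 1]. All values here are made up.

alphas = [0.7, 0.2, 0.6]
# H[t][i] = h_t(x_i) for 3 weak classifiers and 4 training points
H = [[1, 1, -1, 1], [1, -1, 1, -1], [1, 1, 1, -1]]
ys = [1, 1, -1, -1]

a1 = sum(abs(a) for a in alphas)                     # ||alpha||_1
margins = [y * sum(a * H[t][i] for t, a in enumerate(alphas)) / a1
           for i, y in enumerate(ys)]

theta = 0.2
frac = sum(m <= theta for m in margins) / len(margins)
print(frac)   # -> 0.5: half of the points have margin at most theta
```

A large fraction of high-margin points makes the first term of the generalization bound small even after the training error has reached zero, which is the usual explanation for AdaBoost's resistance to overfitting.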
AdaBoost variants
Freund & Schapire 1995:
  Discrete AdaBoost (h : X → {0, 1})
  Multiclass AdaBoost.M1 (h : X → {0, 1, ..., k})
  Multiclass AdaBoost.M2 (h : X → [0, 1]^k)
  Real-valued AdaBoost.R (Y = [0, 1], h : X → [0, 1])
Schapire & Singer 1997:
  Confidence-rated prediction (h : X → R, two-class)
  Multilabel AdaBoost.MR, AdaBoost.MH (different formulations of the minimized loss)
... and many other modifications since then (Totally Corrective AdaBoost, Cascaded AdaBoost)
AdaBoost with a Totally Corrective Step (TCA)
Given: (x_1, y_1), ..., (x_m, y_m); x_i ∈ X, y_i ∈ {-1, +1}
Initialize weights D_1(i) = 1/m
For t = 1, ..., T:
  1. Call WeakLearn, which returns the weak classifier h_t : X → {-1, +1} with minimum error w.r.t. distribution D_t
  2. Choose α_t ∈ R
  3. Update D_{t+1}
  4. Call WeakLearn on the set of h's with non-zero α's. Update α. Update D_{t+1}. Repeat until |ε_t - 1/2| < δ for all t.
Comments
  All weak classifiers have ε_t ≈ 1/2; therefore the classifier selected at t + 1 is independent of all classifiers selected so far.
  It can easily be shown that the totally corrective step reduces the upper bound on the empirical error without increasing classifier complexity.
  The TCA was first proposed by Kivinen and Warmuth, but their α_t is set as in standard AdaBoost.
  Generalization of the TCA is an open question.
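Step 4 can be sketched as follows. This is our reading of the totally corrective step, not the authors' code: cycle over the already-selected weak classifiers, re-applying the standard α update to the worst offender, until every one has weighted error within δ of 1/2 under the current distribution:

```python
import math

def corrective_step(margins, alphas, m, delta=1e-6, max_iter=1000):
    """margins[t][i] = y_i * h_t(x_i); alphas are updated in place."""
    for _ in range(max_iter):
        # D(i) proportional to exp(-y_i f(x_i)) for the current alphas
        D = [math.exp(-sum(a * mg[i] for a, mg in zip(alphas, margins)))
             for i in range(m)]
        s = sum(D)
        D = [d / s for d in D]
        # find the weak classifier whose error is farthest from 1/2
        worst, t_worst = 0.0, None
        for t, mg in enumerate(margins):
            eps = sum(d for d, u in zip(D, mg) if u == -1)
            if abs(eps - 0.5) > worst:
                worst, t_worst = abs(eps - 0.5), t
        if worst < delta:
            return alphas
        # re-apply the standard alpha update to the worst offender
        eps = sum(d for d, u in zip(D, margins[t_worst]) if u == -1)
        alphas[t_worst] += 0.5 * math.log((1 - eps) / eps)
    return alphas

# two weak classifiers on a 5-point set; the last point defeats both
margins = [[1, 1, -1, 1, -1], [1, -1, 1, 1, -1]]
alphas = corrective_step(margins, [0.5, 0.5], m=5)
D = [math.exp(-sum(a * mg[i] for a, mg in zip(alphas, margins)))
     for i in range(5)]
s = sum(D)
D = [d / s for d in D]
for mg in margins:
    assert abs(sum(d for d, u in zip(D, mg) if u == -1) - 0.5) < 1e-5
```

Each single-α update is an exact coordinate minimization of the exponential-loss bound, so the loop only ever decreases the bound; at the fixed point both classifiers sit at error 1/2, matching the comment above.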
Experiments with TCA on the IDA Database
Discrete AdaBoost, Real AdaBoost, and Discrete and Real TCA were evaluated.
Weak learner: stumps. Data from the IDA repository (Ratsch:2000):

                  Input      Training  Testing   Number of
                  dimension  patterns  patterns  realizations
  Banana          2          400       4900      100
  Breast cancer   9          200       77        100
  Diabetes        8          468       300       100
  German          20         700       300       100
  Heart           13         170       100       100
  Image segment   18         1300      1010      20
  Ringnorm        20         400       7000      100
  Flare solar     9          666       400       100
  Splice          60         1000      2175      20
  Thyroid         5          140       75        100
  Titanic         3          150       2051      100
  Twonorm         20         400       7000      100
  Waveform        21         400       4600      100

Note that the training sets are fairly small.
Results with TCA on the IDA Database
Training error (dashed line), test error (solid line).
Discrete AdaBoost (blue), Real AdaBoost (green), Discrete AdaBoost with TCA (red), Real AdaBoost with TCA (cyan).
The black horizontal line: the error of AdaBoost with RBF-network weak classifiers from (Ratsch-ML:2000).
[Figures: error vs. length of the strong classifier (log scale, 10^0 to 10^3) for the IMAGE, FLARE, GERMAN, RINGNORM, SPLICE, THYROID, TITANIC, BANANA, BREAST, DIABETIS, and HEART data sets; figures omitted.]
Conclusions
The AdaBoost algorithm was presented and analysed.
A modification, the Totally Corrective AdaBoost, was introduced.
Initial tests show that the TCA outperforms AdaBoost on some standard data sets.
96/136
-
7/31/2019 Adaboost Matas
97/136
-
7/31/2019 Adaboost Matas
98/136
-
7/31/2019 Adaboost Matas
99/136
0.3
0.35
-
7/31/2019 Adaboost Matas
100/136
0 5 10 15 20 25 30 35 400
0.05
0.1
0.15
0.2
0.25
-
7/31/2019 Adaboost Matas
101/136
0.3
0.35
-
7/31/2019 Adaboost Matas
102/136
0 5 10 15 20 25 30 35 400
0.05
0.1
0.15
0.2
0.25
-
7/31/2019 Adaboost Matas
103/136
0.3
0.35
-
7/31/2019 Adaboost Matas
104/136
0 5 10 15 20 25 30 35 400
0.05
0.1
0.15
0.2
0.25
-
7/31/2019 Adaboost Matas
105/136
0.3
0.35
-
7/31/2019 Adaboost Matas
106/136
0 5 10 15 20 25 30 35 400
0.05
0.1
0.15
0.2
0.25
-
7/31/2019 Adaboost Matas
107/136
0.3
0.35
-
7/31/2019 Adaboost Matas
108/136
0 5 10 15 20 25 30 35 400
0.05
0.1
0.15
0.2
0.25
-
7/31/2019 Adaboost Matas
109/136
0.3
0.35
-
7/31/2019 Adaboost Matas
110/136
0 5 10 15 20 25 30 35 400
0.05
0.1
0.15
0.2
0.25
-
7/31/2019 Adaboost Matas
111/136
0.3
0.35
-
7/31/2019 Adaboost Matas
112/136
0 5 10 15 20 25 30 35 400
0.05
0.1
0.15
0.2
0.25
http://nextpage/http://prevpage/ -
7/31/2019 Adaboost Matas
113/136
0.3
0.35
http://nextpage/http://prevpage/http://prevpage/ -
7/31/2019 Adaboost Matas
114/136
0 5 10 15 20 25 30 35 400
0.05
0.1
0.15
0.2
0.25
0.3
0.35
0.4
IMAGE
http://prevpage/ -
7/31/2019 Adaboost Matas
115/136
100
101
102
103
0
0.05
0.1
0.15
0.2
0.25
Length of the strong classifier
0.44
0.46
FLARE
-
7/31/2019 Adaboost Matas
116/136
100
101
102
103
0.32
0.34
0.36
0.38
0.4
0.42
Length of the strong classifier
0.28
0.3
0.32
GERMAN
http://prevpage/ -
7/31/2019 Adaboost Matas
117/136
100
101
102
103
0.16
0.18
0.2
0.22
0.24
0.26
Length of the strong classifier
0.3
0.35
0.4
RINGNORM
http://prevpage/http://nextpage/http://prevpage/ -
7/31/2019 Adaboost Matas
118/136
100
101
102
103
0
0.05
0.1
0.15
0.2
0.25
Length of the strong classifier
http://nextpage/http://prevpage/ -
7/31/2019 Adaboost Matas
119/136
0.2
0.25THYROID
http://nextpage/http://prevpage/ -
7/31/2019 Adaboost Matas
120/136
100
101
102
103
0
0.05
0.1
0.15
Length of the strong classifier
0.235
0.24TITANIC
http://nextpage/http://prevpage/http://nextpage/http://prevpage/ -
7/31/2019 Adaboost Matas
121/136
100
101
102
103
0.21
0.215
0.22
0.225
0.23
Length of the strong classifier
0.35
0.4
0.45BANANA
http://nextpage/http://prevpage/http://nextpage/http://prevpage/ -
7/31/2019 Adaboost Matas
122/136
100
101
102
103
0.05
0.1
0.15
0.2
0.25
0.3
Length of the strong classifier
http://nextpage/http://prevpage/ -
7/31/2019 Adaboost Matas
123/136
0 25
0.3
0.35DIABETIS
-
7/31/2019 Adaboost Matas
124/136
100
101
102
103
0
0.05
0.1
0.15
0.2
0.25
Length of the strong classifier
0 25
0.3
0.35HEART
-
7/31/2019 Adaboost Matas
125/136
100
101
102
103
0
0.05
0.1
0.15
0.2
0.25
Length of the strong classifier
0.3
0.35
0.4IMAGE
-
7/31/2019 Adaboost Matas
126/136
100
101
102
103
0
0.05
0.1
0.15
0.2
0.25
Length of the strong classifier
0 42
0.44
0.46FLARE
-
7/31/2019 Adaboost Matas
127/136
100
101
102
103
0.32
0.34
0.36
0.38
0.4
0.42
Length of the strong classifier
0.28
0.3
0.32GERMAN
-
7/31/2019 Adaboost Matas
128/136
100
101
102
103
0.16
0.18
0.2
0.22
0.24
0.26
Length of the strong classifier
0.3
0.35
0.4RINGNORM
-
7/31/2019 Adaboost Matas
129/136
100
101
102
103
0
0.05
0.1
0.15
0.2
0.25
Length of the strong classifier
0.2
0.25SPLICE
-
7/31/2019 Adaboost Matas
130/136
100
101
102
103
0
0.05
0.1
0.15
Length of the strong classifier
0.2
0.25THYROID
-
7/31/2019 Adaboost Matas
131/136
100
101
102
103
0
0.05
0.1
0.15
Length of the strong classifier
0.235
0.24TITANIC
-
7/31/2019 Adaboost Matas
132/136
100
101
102
103
0.21
0.215
0.22
0.225
0.23
Length of the strong classifier
0.35
0.4
0.45BANANA
-
7/31/2019 Adaboost Matas
133/136
100
101
102
103
0.05
0.1
0.15
0.2
0.25
0.3
Length of the strong classifier
0.29
0.3
0.31BREAST
-
7/31/2019 Adaboost Matas
134/136
100
101
102
103
0.24
0.25
0.26
0.27
0.28
Length of the strong classifier
0.25
0.3
0.35DIABETIS
-
7/31/2019 Adaboost Matas
135/136
100
101
102
103
0
0.05
0.1
0.15
0.2
Length of the strong classifier
0.25
0.3
0.35HEART
-
7/31/2019 Adaboost Matas
136/136
100
101
102
103
0
0.05
0.1
0.15
0.2
Length of the strong classifier