Optimization Theory

Primal Optimization Problem

$$\min_{x} \; f_0(x)$$
subject to:
$$f_i(x) \le 0, \quad i = 1, 2, \ldots, m$$
$$h_i(x) = 0, \quad i = 1, 2, \ldots, p$$

Primal optimal value: $p^* = f_0(x^*)$
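As an illustration, a primal problem of this form can be solved numerically. The sketch below uses scipy's SLSQP solver on a toy problem; the specific objective and constraints are illustrative choices, not taken from the slides.

```python
import numpy as np
from scipy.optimize import minimize

# Toy primal problem (illustrative):
#   minimize f0(x) = (x1 - 1)^2 + (x2 - 2)^2
#   subject to f1(x) = x1 + x2 - 1 <= 0  and  h1(x) = x1 - x2 = 0
f0 = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2

# Note: scipy's 'ineq' convention is fun(x) >= 0, so pass -f_i for f_i(x) <= 0.
cons = [{'type': 'ineq', 'fun': lambda x: -(x[0] + x[1] - 1)},
        {'type': 'eq',   'fun': lambda x: x[0] - x[1]}]

res = minimize(f0, x0=np.zeros(2), method='SLSQP', constraints=cons)
print(res.x, res.fun)   # x* = (0.5, 0.5), p* = f0(x*) = 2.5
```

Here the inequality constraint is active at the optimum, so the solver returns $x^* = (0.5, 0.5)$ with $p^* = 2.5$.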
Convex Optimization Problem

$$\min_{x} \; f_0(x)$$
subject to:
$$f_i(x) \le 0, \quad i = 1, 2, \ldots, m$$
$$Ax = b$$

$f_0(x), f_1(x), f_2(x), \ldots, f_m(x)$: convex functions
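For the equality-constrained special case with a convex quadratic objective, the optimum can be found by solving one linear system built from the optimality conditions. A small numpy sketch, with an illustrative choice of $A$ and $b$ (minimizing $\tfrac{1}{2}\|x\|^2$ subject to $Ax = b$):

```python
import numpy as np

# Illustrative data: minimize (1/2)||x||^2 subject to Ax = b.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
n, p = A.shape[1], A.shape[0]

# Optimality conditions: x + A^T beta = 0 and A x = b, stacked as one system.
K = np.block([[np.eye(n), A.T],
              [A, np.zeros((p, p))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(n), b]))
x_opt, beta = sol[:n], sol[n:]
print(x_opt)   # [0.5, 0.5]
```

With $A = [1\;1]$ and $b = 1$, the minimum-norm point on the constraint line is $x^* = (0.5, 0.5)$.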
Primal Lagrangian Function

$$\min_{x} \; f_0(x)$$
subject to:
$$f_i(x) \le 0, \quad i = 1, 2, \ldots, m$$
$$h_i(x) = 0, \quad i = 1, 2, \ldots, p$$

$$L(x, \alpha, \beta) = f_0(x) + \sum_{i=1}^{m} \alpha_i f_i(x) + \sum_{i=1}^{p} \beta_i h_i(x) = f_0(x) + \alpha^T f(x) + \beta^T h(x)$$
$$(\alpha_i \ge 0)$$
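To make the Lagrangian concrete, the sympy sketch below forms $L(x,\alpha)$ for a one-variable toy problem (my own example: minimize $x^2$ subject to $1 - x \le 0$) and solves the stationarity condition symbolically.

```python
import sympy as sp

# Toy problem (illustrative): minimize f0(x) = x^2 subject to f1(x) = 1 - x <= 0.
x, alpha = sp.symbols('x alpha', real=True)
L = x**2 + alpha * (1 - x)                  # L(x, alpha) = f0(x) + alpha * f1(x)

# Stationarity dL/dx = 0 gives x as a function of alpha.
x_star = sp.solve(sp.diff(L, x), x)[0]      # x = alpha/2

# If the constraint is active (1 - x = 0), solve for the multiplier.
a_star = sp.solve(sp.Eq(1 - x_star, 0), alpha)[0]
print(x_star, a_star)                       # alpha/2, 2
```

Substituting back gives $x^* = 1$ with multiplier $\alpha^* = 2 \ge 0$, consistent with the constraint qualification above.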
Kuhn-Tucker Theory

$$L(x, \alpha, \beta) = f_0(x) + \sum_{i=1}^{m} \alpha_i f_i(x) + \sum_{i=1}^{p} \beta_i h_i(x) = f_0(x) + \alpha^T f(x) + \beta^T h(x)$$

KKT conditions:
$$\frac{\partial L}{\partial x} = 0$$
$$\frac{\partial L}{\partial \beta} = 0$$
$$\alpha_i \ge 0, \quad f_i(x) \le 0, \quad \alpha_i f_i(x) = 0$$
$$f_i(x) < 0 \;\Rightarrow\; \alpha_i = 0$$

KKT Complementarity Condition
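These conditions can be checked mechanically at a candidate point. A minimal sketch on a toy problem of my own choosing (minimize $(x-2)^2$ subject to $x - 1 \le 0$), where the candidate KKT point is $x^* = 1$, $\alpha^* = 2$:

```python
# Toy problem (illustrative): minimize f0(x) = (x - 2)^2 subject to f1(x) = x - 1 <= 0.
# Candidate KKT point: x* = 1 (constraint active), alpha* = 2.
x_s, a_s = 1.0, 2.0

grad_L = 2 * (x_s - 2) + a_s * 1.0   # dL/dx = f0'(x) + alpha * f1'(x)
f1 = x_s - 1                         # primal feasibility: f1(x*) <= 0
comp = a_s * f1                      # complementarity: alpha* f1(x*) = 0

print(grad_L, f1, comp)              # 0.0 0.0 0.0 -> all KKT conditions hold
```

Stationarity, feasibility, dual feasibility ($\alpha^* \ge 0$), and complementarity all hold, so the point satisfies the KKT conditions.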
Dual Optimization Problem

Primal:
$$\min_{x} \; f_0(x)$$
subject to:
$$f_i(x) \le 0, \quad i = 1, 2, \ldots, m$$
$$h_i(x) = 0, \quad i = 1, 2, \ldots, p$$
Primal optimal value: $p^* = f_0(x^*)$

Dual:
$$\max_{\alpha, \beta} \; d(\alpha, \beta)$$
subject to:
$$\alpha_i \ge 0, \quad i = 1, 2, \ldots, m$$
Dual optimal value: $d^* = d(\alpha^*, \beta^*)$

For a convex optimization problem: $p^* = d^*$
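Strong duality for a convex problem can be illustrated numerically. For the toy problem minimize $x^2$ subject to $1 - x \le 0$ (an illustrative example, not from the slides), the dual function is $d(\alpha) = \min_x [x^2 + \alpha(1-x)] = \alpha - \alpha^2/4$, and maximizing it over $\alpha \ge 0$ recovers the primal optimum:

```python
import numpy as np

# Toy primal (illustrative): minimize x^2 subject to 1 - x <= 0; x* = 1, p* = 1.
# Dual function: d(alpha) = min_x [x^2 + alpha(1 - x)] = alpha - alpha^2 / 4.
alphas = np.linspace(0.0, 5.0, 501)
d = alphas - alphas**2 / 4

d_star = d.max()                     # maximized at alpha* = 2
p_star = 1.0                         # primal optimal value
print(d_star, p_star)                # 1.0 1.0 -> p* = d* (strong duality)
```

The dual maximum $d^* = 1$ at $\alpha^* = 2$ equals the primal value $p^* = 1$, as the convexity result above guarantees.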
Support Vector Machine (SVM)

Linear SVM Training

Training dataset: $\{(x_1, y_1), (x_2, y_2), \ldots, (x_k, y_k)\}$
Label: $y_i \in \{-1, +1\}, \quad i = 1, 2, \ldots, k$
Attribute: $x_i \in \mathbb{R}^N$

Optimal separating hyperplane: $f(x) = w^T x + b$
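A minimal training sketch using scikit-learn's linear-kernel `SVC` (a large `C` approximates the hard-margin problem); the four 2-D training points are an illustrative dataset of my own, not from the slides:

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative separable training set: class -1 at x1 = 0, class +1 at x1 = 2.
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 0.0], [2.0, 1.0]])
y = np.array([-1, -1, 1, 1])

# Large C approximates the hard-margin (separable-case) SVM.
clf = SVC(kernel='linear', C=1e6).fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]   # hyperplane f(x) = w^T x + b
print(w, b)                              # approx. w = [1, 0], b = -1
```

The learned hyperplane is $x_1 = 1$, midway between the two classes, with $w$ scaled so the closest points satisfy $y_i(w^T x_i + b) = 1$.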
Linear SVM Prediction

Testing dataset: $\{x_1, x_2, \ldots, x_l\}$

$$\text{Label}(x) = \mathrm{sign}(f(x)) = \begin{cases} +1, & f(x) \ge 0 \\ -1, & f(x) < 0 \end{cases}$$
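The prediction rule above is a one-liner once $(w, b)$ are known; a sketch with hypothetical hyperplane parameters (assumed to come from a trained linear SVM):

```python
import numpy as np

# Hypothetical hyperplane parameters, assumed obtained from training.
w, b = np.array([1.0, 0.0]), -1.0

def f(x):
    return w @ np.asarray(x) + b     # f(x) = w^T x + b

def label(x):
    return 1 if f(x) >= 0 else -1    # Label(x) = sign(f(x))

print(label([2.0, 0.5]), label([0.0, 1.0]))   # 1 -1
```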
Linear SVM: Separable case
• The optimal hyperplane is obtained by maximizing the margin
• Support vectors
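Both points can be seen directly in scikit-learn: after fitting a hard-margin-like `SVC`, `support_vectors_` holds the points on the margin boundaries, and the maximized geometric margin width is $2/\|w\|$. The dataset below is the same illustrative one as before, not from the slides:

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative separable training set: class -1 at x1 = 0, class +1 at x1 = 2.
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 0.0], [2.0, 1.0]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel='linear', C=1e6).fit(X, y)   # large C ~ hard margin

sv = clf.support_vectors_                      # points on the margin boundaries
margin = 2.0 / np.linalg.norm(clf.coef_[0])    # margin width = 2 / ||w||
print(margin)                                  # approx. 2.0
```

Every support vector satisfies $|f(x)| = 1$, i.e. it lies exactly on one of the two margin boundaries; here the maximized margin width is 2, the gap between the classes.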