CHAPTER 06
SUPPORT VECTOR MACHINES
CSC445: Neural Networks
Prof. Dr. Mostafa Gadal-Haqq M. Mostafa
Computer Science Department
Faculty of Computer & Information Sciences
AIN SHAMS UNIVERSITY
(some of the figures in this presentation are copyrighted to Pearson Education, Inc.)
ASU-CSC445: Neural Networks
Prof. Dr. Mostafa Gadal-Haqq
Outline
Introduction
Optimal Hyperplane for Linearly Separable Patterns
Quadratic Optimization for Finding the Optimal Hyperplane
Optimal Hyperplane for Nonseparable Patterns
Underlying Philosophy of SVM for Pattern Classification
SVM Viewed as a Kernel Machine
The XOR Problem
Computer Experiment
Introduction
The main idea of SVMs may be summed up as follows:
"Given a training sample, the SVM constructs a hyperplane as the decision surface in such a way that the margin of separation between positive and negative examples is maximized."
Linearly Separable Patterns
SVM is a binary learning machine.
Binary classification is the task of separating classes in feature space.
wTx + b = 0
wTx + b < 0
wTx + b > 0
The discriminant function is g(x) = wTx + b.
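As a minimal sketch of the decision rule (the weights and points below are hypothetical, not taken from the chapter), the sign of g(x) = wTx + b assigns each point to a class:

```python
import numpy as np

def g(x, w, b):
    """Linear discriminant g(x) = w^T x + b."""
    return w @ x + b

# Hypothetical weights and bias, for illustration only
w = np.array([1.0, -2.0])
b = 0.5

x_pos = np.array([3.0, 0.0])   # g = 3.5 > 0  -> class +1
x_neg = np.array([0.0, 2.0])   # g = -3.5 < 0 -> class -1
print(np.sign(g(x_pos, w, b)), np.sign(g(x_neg, w, b)))
```

Points with g(x) > 0 fall on one side of the hyperplane wTx + b = 0, points with g(x) < 0 on the other.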
Linearly Separable Patterns
Which of the linear separators is optimal?
Optimal Decision Boundary
The optimal decision boundary is the one that maximizes the margin of separation.
The Margin
Any point x can be decomposed as x = xP + r w/||w||, where xP is the normal projection of x onto the hyperplane and r is the algebraic distance of x from the hyperplane.
The Margin
Since g(x) = wTx + b and x = xP + r w/||w||:
g(x) = wT(xP + r w/||w||) + b = (wTxP + b) + r wTw/||w||
Since wTxP + b = 0 (xP lies on the hyperplane), then
g(x) = r ||w||, and hence r = g(x)/||w||.
The Margin
For a support vector x, g(x) = wTx + b = ±1 according to whether d = +1 or d = −1, so
r = g(x)/||w|| = +1/||w|| if d = +1, and −1/||w|| if d = −1.
(Figure: the hyperplane wTx + b = 0 together with the two margin hyperplanes wTx + b = +1 and wTx + b = −1.)
Then the margin is given as:
ρ = 2r = 2/||w||
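The margin formula is straightforward to evaluate; a small sketch, assuming a hypothetical weight vector w:

```python
import numpy as np

# Hypothetical weight vector of a trained maximum-margin classifier
w = np.array([3.0, 4.0])          # ||w|| = 5

r = 1.0 / np.linalg.norm(w)       # distance from a support vector to the hyperplane
rho = 2.0 / np.linalg.norm(w)     # the margin rho = 2r
print(r, rho)                     # 0.2 0.4
```

Note that a smaller ||w|| means a wider margin, which is why the optimization below minimizes ||w||.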
Optimal Decision Boundary
Let {x1, ..., xN} be our data set and let di ∈ {+1, −1} be the class label of xi.
The decision boundary should classify all points correctly.
That is, we have a constrained optimization problem:
Maximize ρ = 2/||w||, or equivalently minimize (1/2)||w||²,
subject to di(wTxi + b) ≥ 1 for all i.
The Optimization Problem
Introduce Lagrange multipliers αi ≥ 0, i = 1, ..., N.
That is, the Lagrange function:
J(w, b, α) = (1/2)||w||² − Σi=1..N αi [di(wTxi + b) − 1]
is to be minimized with respect to w and b, i.e.,
∂J(w, b, α)/∂w = 0 and ∂J(w, b, α)/∂b = 0.
Solving the Optimization Problem
Need to optimize a quadratic function subject to linear constraints.
The solution involves constructing a dual problem where a Lagrange multiplier αi is associated with every constraint in the primal problem:
Find α1, ..., αN such that
Q(α) = Σi=1..N αi − (1/2) Σi Σj αi αj di dj xiTxj
is maximized subject to
(1) Σi=1..N αi di = 0
(2) αi ≥ 0 for all i.
The Optimization Problem
The solution has the form:
w = Σi=1..N αi di xi and b = dk − wTxk for any xk such that αk ≠ 0.
Each non-zero αi indicates that the corresponding xi is a support vector.
Then the classifying function will have the form:
g(x) = Σi=1..N αi di xiTx + b
Notice that it relies on an inner product between the test point x and the support vectors xi.
Also keep in mind that solving the optimization problem involved computing the inner products xiTxj between all training points!
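The dual problem above can be solved numerically. The following sketch uses SciPy's SLSQP solver on a two-point toy data set (the data values are invented for illustration) and recovers w and b from the optimal αi:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data (hypothetical): one point per class
X = np.array([[1.0, 1.0], [-1.0, -1.0]])
d = np.array([1.0, -1.0])
N = len(d)

# H[i, j] = d_i d_j x_i . x_j, so the dual objective is Q(a) = sum(a) - 1/2 a^T H a
H = (d[:, None] * d[None, :]) * (X @ X.T)
neg_Q = lambda a: -(a.sum() - 0.5 * a @ H @ a)   # maximize Q -> minimize -Q

res = minimize(neg_Q, np.zeros(N), method="SLSQP",
               bounds=[(0, None)] * N,                       # alpha_i >= 0
               constraints=[{"type": "eq", "fun": lambda a: a @ d}])  # sum alpha_i d_i = 0
alpha = res.x

w = (alpha * d) @ X                 # w = sum_i alpha_i d_i x_i
k = int(np.argmax(alpha))           # any support vector (alpha_k > 0)
b = d[k] - w @ X[k]                 # b = d_k - w^T x_k

# Classifier built purely from inner products with the training points
g = lambda x: (alpha * d) @ (X @ x) + b
```

For this symmetric pair the solver should return alpha ≈ (1/4, 1/4), giving w ≈ (0.5, 0.5) and b ≈ 0, and both points are support vectors.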
The Optimization Problem
Support vectors are the samples with non-zero αi.
(Figure: ten training points from Class 1 and Class 2; the support vectors are those with α1 = 0.8, α6 = 1.4, and α8 = 0.6, while all remaining αi = 0.)
Optimal Hyperplane for Nonseparable Patterns
Figure 6.3 Soft-margin hyperplane. (a) Data point xi (belonging to class C1, represented by a small square) falls inside the region of separation, but on the correct side of the decision surface. (b) Data point xi (belonging to class C2, represented by a small circle) falls on the wrong side of the decision surface.
Optimal Hyperplane for Nonseparable Patterns
We allow "errors" ξi (slack variables) in the classification.
(Figure: points violating the margin by amounts ξi.)
Soft Margin Hyperplane
The old formulation:
Find w and b such that Φ(w) = (1/2) wTw is minimized,
subject to: di(wTxi + b) ≥ 1 for all (xi, di).
The new formulation, incorporating the slack variables ξi:
Find w and b such that Φ(w, ξ) = (1/2) wTw + C Σi ξi is minimized,
subject to: di(wTxi + b) ≥ 1 − ξi and ξi ≥ 0 for all i.
Parameter C can be viewed as a way to control overfitting.
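As an illustration of the role of C (a sketch using scikit-learn's SVC on invented one-dimensional data, not an example from the chapter), a small C tolerates margin violations while a very large C approximates the hard-margin formulation:

```python
import numpy as np
from sklearn.svm import SVC

# Overlapping toy data (hypothetical values): the classes mix around x = 1.5..3
X = np.array([[0.0], [1.0], [2.0], [3.0], [1.5], [4.0]])
y = np.array([-1, -1, -1, 1, 1, 1])

# Small C -> wide margin, more slack allowed; large C -> heavy penalty on slack
soft = SVC(kernel="linear", C=0.1).fit(X, y)
hard = SVC(kernel="linear", C=1e6).fit(X, y)

# Support-vector counts differ because C changes how many points touch/violate the margin
print(len(soft.support_), len(hard.support_))
```

Both models still classify clearly separated points correctly; they differ in how the boundary trades margin width against the training errors ξi.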
Soft Margin Hyperplane
Again, the xi with non-zero αi will be support vectors.
The solution to the dual problem is:
w = Σi αi di xi
and
b = dk(1 − ξk) − wTxk for any k such that αk > 0.
Extension to Non-linear Decision Boundary
Key idea: transform xi to a higher-dimensional space.
Input space: the space of the xi.
Feature space: the space of φ(xi).
(Figure: the mapping φ(·) carries each point of the input space into the feature space.)
Kernel Trick
The linear classifier relies on the inner product between vectors:
K(xi, xj) = xiTxj
If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the inner product becomes:
K(xi, xj) = φ(xi)Tφ(xj)
A kernel function is a function that corresponds to an inner product in some feature space.
K(xi, xj) needs to satisfy a technical condition (Mercer's condition) in order for φ(·) to exist.
Mercer's Theorem
The matrix K = [K(xi, xj)] over all i, j has to be non-negative definite (positive semidefinite), that is, it satisfies
aTKa ≥ 0 for every vector a.
Some kernel functions that satisfy Mercer's condition:
Polynomial: K(x, xi) = (xTxi + 1)^p
Gaussian (RBF): K(x, xi) = exp(−||x − xi||² / 2σ²)
Hyperbolic tangent: K(x, xi) = tanh(β0 xTxi + β1), for some values of β0 and β1 only
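Mercer's condition can be checked empirically on a sample: the Gram matrix of a valid kernel must have no (significantly) negative eigenvalues. A sketch, using the polynomial kernel on random data:

```python
import numpy as np

def poly_kernel(X, p=2):
    """Polynomial kernel K(x, x') = (1 + x . x')^p, which satisfies Mercer's condition."""
    return (1.0 + X @ X.T) ** p

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))    # 10 random points in 3 dimensions
K = poly_kernel(X)

# Mercer: a^T K a >= 0 for all a, i.e. the Gram matrix is positive semidefinite
eigvals = np.linalg.eigvalsh(K)  # K is symmetric, so eigvalsh applies
print(eigvals.min())             # smallest eigenvalue; should not be significantly negative
```

A kernel that fails this check on some sample cannot correspond to an inner product in any feature space.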
The SVM viewed as Kernel Machine
Figure 6.5 Architecture of a support vector machine, using a radial-basis function network.
The XOR Problem
For two-dimensional vectors x = [x1, x2], define the following kernel:
K(x, xi) = (1 + xTxi)²
We need to show that K(xi, xj) = φ(xi)Tφ(xj):
K(xi, xj) = (1 + xiTxj)²
= 1 + xi1²xj1² + 2xi1xj1xi2xj2 + xi2²xj2² + 2xi1xj1 + 2xi2xj2
= [1, xi1², √2 xi1xi2, xi2², √2 xi1, √2 xi2]T [1, xj1², √2 xj1xj2, xj2², √2 xj1, √2 xj2]
= φ(xi)Tφ(xj),
where φ(x) = [1, x1², √2 x1x2, x2², √2 x1, √2 x2]T.
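The kernel identity above can be verified numerically; a short sketch checking (1 + xiTxj)² = φ(xi)Tφ(xj) on the four XOR input points:

```python
import numpy as np

def phi(x):
    """Explicit feature map for K(x, z) = (1 + x^T z)^2 in two dimensions."""
    x1, x2 = x
    s = np.sqrt(2.0)
    return np.array([1.0, x1**2, s * x1 * x2, x2**2, s * x1, s * x2])

# The four XOR input points
pts = [np.array(p, dtype=float) for p in [(-1, -1), (-1, 1), (1, -1), (1, 1)]]

for xi in pts:
    for xj in pts:
        K = (1.0 + xi @ xj) ** 2
        assert np.isclose(K, phi(xi) @ phi(xj))  # kernel equals inner product of features
print("kernel matches feature map on all pairs")
```

The kernel trick lets the SVM use this 6-dimensional feature space without ever computing φ(x) explicitly.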
The XOR Problem
Which gives the optimal hyperplane as:
−x1x2 = 0
This yields the decision surface shown in Figure 6.6.
Figure 6.6 (a) Polynomial machine for solving the XOR problem. (b) Induced images in the feature space due to the four data points (−1, −1), (−1, 1), (1, −1), and (1, 1) of the XOR problem.
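As a sketch (using scikit-learn's SVC rather than the hand-derived solution above), a degree-2 polynomial kernel machine separates the XOR points exactly; with gamma = 1 and coef0 = 1, sklearn's polynomial kernel is exactly (1 + xTz)²:

```python
import numpy as np
from sklearn.svm import SVC

# The XOR problem: output differs when the two inputs differ
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
d = np.array([-1, 1, 1, -1])

# (gamma * x^T z + coef0)^degree = (1 + x^T z)^2; large C approximates a hard margin
clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0, C=1e6).fit(X, d)
print(clf.predict(X))  # should reproduce the XOR labels
```

No linear classifier in the input space can do this; the polynomial kernel succeeds because the points become linearly separable in the induced feature space.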
Conclusion
SVM is a useful alternative to neural networks.
Two key concepts of SVM: maximizing the margin, and the kernel trick.
Much active research is taking place in areas related to SVM.
Many SVM implementations are available on the web for you to try on your data set!
Computer Experiment
Figure 6.7 Experiment on SVM for the double-moon of Fig. 1.8 with distance d = −6.
Computer Experiment
Figure 6.8 Experiment on SVM for the double-moon of Fig. 1.8 with distance d = −6.5.
Next Time
Principal Component Analysis (PCA)