Synaptic Dynamics II: Supervised Learning
The Backpropagation Algorithm and Support Vector Machines
Jing LIU
2004.11.3
History of the BP Algorithm

Rumelhart [1986] popularized the BP algorithm in the Parallel Distributed Processing edited volume in the late 1980s.

The BP algorithm overcame the limitations of the perceptron, limitations that Minsky and Papert [1969] had carefully enumerated.

BP's popularity begot waves of criticism: the algorithm often failed to converge, and at best converged to local error minima. The criticism also challenged BP's historical priority, questioning whether BP learning was new and arguing that the algorithm offered no new kind of learning.
Multilayer Feedforward NNs

[Figure omitted from transcript]
Feedforward Sigmoidal Representation Theorems

Feedforward sigmoidal architectures can in principle represent any Borel-measurable function to any desired accuracy, provided the network contains enough "hidden" neurons between the input and output neuronal fields.

So the MLP can solve nonlinearly separable problems and approximate functions.
Feedforward Sigmoidal Representation Theorems

We can explain the theorem in the following two aspects:

To improve the NNs' classification ability, we must use multilayer networks, with at least one hidden layer.

On the other hand, when feedforward-sigmoidal neural networks finely approximate complicated functions, the networks may suffer an "explosion" of hidden neurons and interconnecting synapses.
How can an MLP learn?

We know only the random sample $(X_i, Y_i)$ and the vector error $Y_i - N(X_i)$. Ideally each weight would descend the gradient of the mean squared error,

$$m_{ij}(k+1) = m_{ij}(k) - c_k \frac{\partial\,\mathrm{MSE}}{\partial m_{ij}},$$

but the MSE is unknown, so the instantaneous squared error computed from the observed error $Y_i - N(X_i)$ replaces it:

$$m_{ij}(k+1) = m_{ij}(k) - c_k \frac{\partial\,\mathrm{SE}}{\partial m_{ij}}.$$
BP Algorithm

The principle of the BP algorithm:

Working signals are propagated forward to the output neurons.

Error signals are propagated backward to the input field.
BP Algorithm

The BP algorithm for the learning process of multilayer networks.

The error signal of the jth output is:

$$e_j(n) = d_j(n) - y_j(n)$$

The instantaneous summed squared error of the output is:

$$E(n) = \frac{1}{2}\sum_j e_j^2(n)$$

The objective function of the learning process is:

$$E_{\mathrm{av}} = \frac{1}{N}\sum_{n=1}^{N} E(n)$$
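These quantities are straightforward to compute. A minimal NumPy sketch, where `d` and `y` are illustrative stand-ins for the desired and actual outputs over N examples (rows) and the output neurons j (columns):

```python
import numpy as np

# Illustrative desired responses d_j(n) and network outputs y_j(n).
d = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([[0.8, 0.1], [0.3, 0.7], [0.9, 0.4]])

e = d - y                          # error signal e_j(n) = d_j(n) - y_j(n)
E_n = 0.5 * np.sum(e**2, axis=1)   # instantaneous summed squared error E(n)
E_av = np.mean(E_n)                # objective E_av = (1/N) * sum_n E(n)

print(E_n, E_av)
```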
BP Algorithm

In the case of learning sample by sample, the gradient of E(n) at the nth iteration follows from the chain rule:

$$\frac{\partial E(n)}{\partial w_{ji}(n)} = \frac{\partial E(n)}{\partial e_j(n)}\,\frac{\partial e_j(n)}{\partial y_j(n)}\,\frac{\partial y_j(n)}{\partial v_j(n)}\,\frac{\partial v_j(n)}{\partial w_{ji}(n)} = -\,e_j(n)\,\varphi'(v_j(n))\,y_i(n)$$

where $v_j(n)$ is the net input of neuron j, $y_j(n) = \varphi(v_j(n))$ its output, $d_j(n)$ the desired response, $e_j(n)$ the error signal, and $y_i(n)$ the output of neuron i in the previous layer.
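The chain-rule expression can be verified numerically. A minimal sketch with a finite-difference check, assuming a single logistic neuron with squared error; the setup and all names here are illustrative, not from the slides:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# One sigmoidal neuron: v = w . y_in, output y = sigmoid(v), error e = d - y.
y_in = np.array([0.5, -1.2, 0.3])   # inputs y_i(n) from the previous layer
w = np.array([0.1, 0.4, -0.2])      # weights w_ji(n)
d = 1.0                              # desired response d_j(n)

v = w @ y_in
y = sigmoid(v)
e = d - y

# Analytic gradient from the chain rule:
# dE/dw_ji = -e_j(n) * phi'(v_j(n)) * y_i(n), with phi'(v) = y * (1 - y).
grad_analytic = -e * y * (1.0 - y) * y_in

# Central finite differences on E(w) = 0.5 * e^2 for comparison.
eps = 1e-6
grad_numeric = np.zeros_like(w)
for i in range(len(w)):
    w_p = w.copy(); w_p[i] += eps
    w_m = w.copy(); w_m[i] -= eps
    E_p = 0.5 * (d - sigmoid(w_p @ y_in)) ** 2
    E_m = 0.5 * (d - sigmoid(w_m @ y_in)) ** 2
    grad_numeric[i] = (E_p - E_m) / (2 * eps)

print(np.allclose(grad_analytic, grad_numeric, atol=1e-8))  # True
```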
BP Algorithm

The modification quantity of the weight $w_{ji}$ is:

$$\Delta w_{ji}(n) = -\eta\,\frac{\partial E(n)}{\partial w_{ji}(n)} = \eta\,\delta_j(n)\,y_i(n)$$

where the local gradient is

$$\delta_j(n) = -\frac{\partial E(n)}{\partial v_j(n)}$$

(1) For j an output neuron:

$$\delta_j(n) = e_j(n)\,\varphi'(v_j(n)) = (d_j(n) - y_j(n))\,\varphi'(v_j(n))$$

(2) For j a hidden neuron:

$$\delta_j(n) = \varphi'(v_j(n))\sum_k \delta_k(n)\,w_{kj}(n)$$

In words: [weight correction $\Delta w_{ji}(n)$] = [learning rate $\eta$] × [local gradient $\delta_j(n)$] × [input signal $y_i(n)$].
BP Algorithm

[Figure omitted from transcript: signal flow through hidden neuron j to output neurons k]
The flow of the algorithm:

1. Initialize.
2. Calculate the output of each layer.
3. Calculate the error of the output layer.
4. Calculate $\delta_j$ for all neurons.
5. Modify the weights of the output layer.
6. Modify the weights of the hidden layer.
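A minimal Python/NumPy sketch of this loop, assuming a logistic activation, one hidden layer of four neurons, and the XOR problem (a nonlinearly separable task) as training data; the architecture, learning rate, and epoch count are all illustrative choices, not from the slides:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(0)

# XOR is not linearly separable, so at least one hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)

eta = 0.5                               # learning rate
W1 = rng.normal(0, 1, (2, 4))           # input -> hidden weights (initialize)
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))           # hidden -> output weights
b2 = np.zeros(1)

for epoch in range(10000):
    for x, d in zip(X, D):              # sample-by-sample learning
        # Forward pass: calculate the output of each layer.
        y1 = sigmoid(x @ W1 + b1)
        y2 = sigmoid(y1 @ W2 + b2)

        # Error of the output layer: e_j(n) = d_j(n) - y_j(n).
        e = d - y2

        # Local gradients delta_j(n); phi'(v) = y * (1 - y) for the sigmoid.
        delta2 = e * y2 * (1 - y2)               # output neurons
        delta1 = y1 * (1 - y1) * (W2 @ delta2)   # hidden neurons (backpropagated)

        # Weight updates: delta_w_ji = eta * delta_j * y_i.
        W2 += eta * np.outer(y1, delta2)
        b2 += eta * delta2
        W1 += eta * np.outer(x, delta1)
        b1 += eta * delta1

# Typically converges to approximately [[0], [1], [1], [0]].
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))
```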
Some Improvements to the BP Algorithm

The problems of the BP algorithm: (1) it converges slowly; (2) there are local minima in the objective function.

Some methods to improve the BP algorithm include conjugate gradient descent, using higher-order derivatives, and adding a momentum term.
Adding a Momentum Term to the BP Algorithm

Here we modify the weights of neurons with:

$$\Delta w_{ji}(n) = \alpha\,\Delta w_{ji}(n-1) + \eta\,\delta_j(n)\,y_i(n)$$

Unrolling this recursion over the past iterations gives:

$$\Delta w_{ji}(n) = \eta\sum_{t=0}^{n} \alpha^{\,n-t}\,\delta_j(t)\,y_i(t) = -\,\eta\sum_{t=0}^{n} \alpha^{\,n-t}\,\frac{\partial E(t)}{\partial w_{ji}(t)}$$
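A minimal sketch of this update, assuming `grad` holds the current gradient $\partial E(n)/\partial w_{ji}(n)$ and `dw_prev` the previous correction $\Delta w_{ji}(n-1)$; the names and constants are illustrative:

```python
import numpy as np

eta, alpha = 0.5, 0.9   # learning rate and momentum constant, 0 <= alpha < 1

def momentum_step(w, dw_prev, grad):
    # delta_w(n) = alpha * delta_w(n-1) - eta * dE(n)/dw,
    # since delta_j(n) * y_i(n) = -dE(n)/dw_ji(n).
    dw = alpha * dw_prev - eta * grad
    return w + dw, dw   # updated weights and delta_w(n) for the next step
```

Gradients that point in a consistent direction accumulate through the $\alpha^{\,n-t}$ terms and accelerate the descent, while oscillating gradients partially cancel, damping the oscillation.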
Support Vector Machines

The SVM was presented for binary classification problems based on the principle of Structural Risk Minimization.

Given training samples $(x_i, y_i),\; i = 1, 2, \ldots, n,\; x_i \in \mathbb{R}^d,\; y_i \in \{-1, +1\}$, the discriminant function is

$$g(x) = w \cdot x + b$$

and the separating hyperplane is

$$w \cdot x + b = 0$$
Support Vector Machines

To get the largest margin, we deduce the optimal classification function:

$$f(x) = \operatorname{sign}(w^* \cdot x + b^*) = \operatorname{sign}\Big(\sum_{i=1}^{n} \alpha_i^*\, y_i\,(x_i \cdot x) + b^*\Big)$$

Using an inner-product kernel $K(x_i, x_j)$ to replace the dot product, the dual problem becomes:

$$\max_{\alpha} Q(\alpha) = \max_{\alpha}\Big[\sum_{i=1}^{n} \alpha_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i\,\alpha_j\,y_i\,y_j\,K(x_i, x_j)\Big]$$

$$\text{s.t.}\quad \sum_{i=1}^{n} \alpha_i\,y_i = 0,\qquad \alpha_i \ge 0,\quad i = 1, 2, \ldots, n$$

and the classification function becomes:

$$f(x) = \operatorname{sign}\Big(\sum_{i=1}^{n} \alpha_i^*\, y_i\, K(x_i, x) + b^*\Big)$$
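A minimal sketch of the resulting classifier, assuming the multipliers $\alpha_i^*$ and bias $b^*$ have already been obtained by solving the quadratic program above with some QP solver; the RBF kernel and every name below are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=1.0):
    # One common inner-product kernel: K(x1, x2) = exp(-gamma * ||x1 - x2||^2).
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

def svm_decision(x, support_x, support_y, alpha, b, kernel=rbf_kernel):
    # f(x) = sign( sum_i alpha_i * y_i * K(x_i, x) + b )
    s = sum(a * y * kernel(sx, x) for a, y, sx in zip(alpha, support_y, support_x))
    return np.sign(s + b)
```

Only the samples with $\alpha_i^* > 0$ (the support vectors) contribute to the sum, which is what gives the machine its name.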
Support Vector Machines

[Figures omitted from transcript: panels (i), (j), (k), (l)]