CHAPTER 2: Neuron Model and Network Architecture (Ming-Feng Yeh)
Objectives
Introduce the simplified mathematical model of the neuron
Explain how these artificial neurons can be interconnected to form a variety of network architectures
Illustrate the basic operation of these neural networks
Notation
Scalars: small italic letters e.g., a, b, c
Vectors: small bold nonitalic letters e.g., a, b, c
Matrices: capital BOLD nonitalic letters e.g., A, B, C
Other notations are given in Appendix B
Single-Input Neuron
Scalar input: p
Scalar weight: w (synapse)
Bias: b
Net input: n
Transfer function: f (cell body)
Output: a
n = wp + b
a = f(n) = f(wp + b)
Example: w = 3, p = 2 and b = -1.5
n = (3)(2) - 1.5 = 4.5
a = f(4.5)
(The block-diagram parts correspond to the biological neuron: synapse, cell body, axon, dendrites.)
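The worked example above can be sketched in Python (the slides themselves use MATLAB); the identity function stands in for the as-yet-unspecified transfer function f.

```python
# Single-input neuron a = f(wp + b), using the slide's example values
# w = 3, p = 2, b = -1.5. The identity stands in for f.

def neuron(p, w, b, f=lambda n: n):
    """Return a = f(wp + b) for a single-input neuron."""
    n = w * p + b          # net input
    return f(n)            # neuron output

n = 3 * 2 + (-1.5)         # net input for the worked example: 4.5
a = neuron(2, 3, -1.5)     # with f = identity, a equals n
```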
Bias and Weight
The bias b is much like a weight w, except that it has a constant input of 1. It can be omitted if it is not needed.
Bias b and weight w are both adjustable scalar parameters of the neuron. They can be adjusted by some learning rule so that the neuron input/output relationship meets some special goal.
Transfer Functions
The transfer function f may be a linear or nonlinear function of the net input n.
Three of the most commonly used functions: the hard limit, linear, and log-sigmoid transfer functions.
Hard Limit Transfer Function
a = 0, if n < 0
a = 1, if n >= 0
MATLAB function: hardlim
a = hardlim(n), a = hardlim(wp + b)
Linear Transfer Function
a = n
MATLAB function: purelin
a = purelin(n), a = purelin(wp + b)
Log-Sigmoid Transfer Function
a = 1/[1 + exp(-n)]
MATLAB function: logsig
a = logsig(n), a = logsig(wp + b)
Other transfer functions: see Pages 2-6 & 2-17
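The three transfer functions named on these slides are MATLAB built-ins (hardlim, purelin, logsig); a minimal Python sketch of the same definitions:

```python
# hardlim, purelin, and logsig sketched as plain Python functions,
# matching the definitions on the preceding slides.
import math

def hardlim(n):
    # a = 0 if n < 0, a = 1 if n >= 0
    return 1.0 if n >= 0 else 0.0

def purelin(n):
    # a = n
    return n

def logsig(n):
    # a = 1 / (1 + exp(-n)), squashes n into (0, 1)
    return 1.0 / (1.0 + math.exp(-n))
```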
Multiple-Input Neuron
A neuron (node) with R inputs, p1, p2,…, pR
The weight matrix W, w11, w12,…,w1R
The neuron has a bias b
Net input: n = w11 p1 + w12 p2 + … + w1R pR + b = Wp + b
Neuron output: a = f(Wp + b)
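A NumPy sketch of the multiple-input neuron a = f(Wp + b); for a single neuron, W is the 1-by-R row of weights w11 … w1R. The weight and input values below are made-up illustrations.

```python
# Multiple-input neuron: n = Wp + b, a = f(n), with f = purelin (identity).
# W, p, and b are made-up illustration values.
import numpy as np

W = np.array([[1.0, -2.0, 0.5]])     # 1 x R weight matrix (R = 3)
p = np.array([2.0, 1.0, 4.0])        # input vector of length R
b = 1.0                              # scalar bias

n = W @ p + b                        # net input: w11*p1 + ... + w1R*pR + b
a = n                                # purelin: a = f(n) = n
```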
Single-Layer Network
• R: number of inputs
• S: number of neurons (nodes) in a layer (R is not necessarily equal to S)
• Input vector p is a vector of length R
• Bias vector b and output vector a are vectors of length S
• Weight matrix W is an S×R matrix:
W = [ w11  w12  …  w1R
      w21  w22  …  w2R
      …    …    …  …
      wS1  wS2  …  wSR ]
Multiple-Layer Network
Input Layer Hidden Layer Output Layer
Multiple-Layer Network
Layer superscript: the number of the layer
R inputs, S^k neurons (nodes) in the kth layer
Different layers can have different numbers of neurons
The outputs of layer k are the inputs of layer (k+1)
Weight matrix W^j between layer i and layer j is an S^j × S^i matrix
Input Layer, Hidden Layer, Output Layer:
a1 = f1(W1p + b1),  a2 = f2(W2a1 + b2),  a3 = f3(W3a2 + b3)
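The layer-by-layer computation can be sketched in NumPy with tiny made-up layer sizes (R = 2 inputs, S1 = 3, S2 = 2, S3 = 1 neurons) and the identity (purelin) as every layer's transfer function; the random numbers carry no meaning and only let the shapes be checked.

```python
# Three-layer forward pass: a1 = f1(W1 p + b1), a2 = f2(W2 a1 + b2),
# a3 = f3(W3 a2 + b3). Layer sizes and values are made-up illustrations.
import numpy as np

rng = np.random.default_rng(0)
R, S1, S2, S3 = 2, 3, 2, 1
W1, b1 = rng.standard_normal((S1, R)),  rng.standard_normal(S1)   # S1 x R
W2, b2 = rng.standard_normal((S2, S1)), rng.standard_normal(S2)   # S2 x S1
W3, b3 = rng.standard_normal((S3, S2)), rng.standard_normal(S3)   # S3 x S2

p = np.array([1.0, -1.0])
a1 = W1 @ p + b1        # f1 = purelin (identity)
a2 = W2 @ a1 + b2       # layer 2 takes a1, not p, as its input
a3 = W3 @ a2 + b3       # network output
```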
Network Architectures
Models of neural networks are specified by the three basic entities: models of the processing elements (neurons), models of inter-connections and structures (network topology), and the learning rules (the ways information is stored in the network).
The weights may be positive (excitatory) or negative (inhibitory).
Information is stored in the connection weights.
Network Structures
The layer that receives inputs is called the input layer.
The outputs of the network are generated from the output layer.
Any layer between the input and the output layers is called a hidden layer.
There may be from zero to several hidden layers in a neural network.
Network Structures
When no node output is an input to a node in the same layer or preceding layer, the network is a feedforward network.
When outputs are directed back as inputs to same- or preceding-layer nodes, the network is a feedback network.
Feedback networks that have closed loops are called recurrent networks.
Delay Block & Integrator
Delay block: a(t) = u(t − 1), with initial condition a(0)
Integrator: a(t) = ∫₀ᵗ u(τ) dτ + a(0)
Recurrent Network
Initial condition: a(0) = p
a(1) = satlins(Wa(0) + b)
a(2) = satlins(Wa(1) + b)
…
a(t + 1) = satlins(Wa(t) + b)
Recurrent layer: p, b, and a(t) are S×1 vectors, W is an S×S matrix, and the delay block D feeds a(t) back to the layer input.
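The recurrence above can be sketched in NumPy; satlins is MATLAB's symmetric saturating linear function, which clips the net input to [−1, 1]. W, b, and p below are made-up illustration values.

```python
# Recurrent layer: a(0) = p, then iterate a(t+1) = satlins(W a(t) + b).
import numpy as np

def satlins(n):
    # symmetric saturating linear: -1 if n < -1, n if -1 <= n <= 1, 1 if n > 1
    return np.clip(n, -1.0, 1.0)

W = np.array([[0.5, 0.0],
              [0.0, 0.5]])           # S x S recurrent weight matrix (S = 2)
b = np.array([0.0, 0.0])             # bias vector of length S
p = np.array([2.0, -0.5])            # initial condition a(0) = p

a = p
trajectory = [a]                     # a(0), a(1), a(2), a(3)
for _ in range(3):
    a = satlins(W @ a + b)           # one step of the recurrence
    trajectory.append(a)
```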
Learning Scheme
Two kinds of learning in neural networks: parameter learning, which concerns updating the connection weights in a neural network, and structure learning, which focuses on changes in the network structure, including the number of nodes and their connection types.
Each kind of learning can be further classified into three categories: supervised learning, reinforcement learning, and unsupervised learning.
How to Pick an Architecture
Problem specifications help define the network in the following ways:
1. Number of network inputs = number of problem inputs
2. Number of neurons in output layer = number of problem outputs
3. The choice of output layer transfer function is at least partly determined by the problem specification of the outputs.