Artificial Neural Network
Transcript of Artificial Neural Network.
-
8/2/2019
Artificial Neural Network
-
An idea
The idea dates back a long way; this field was established before the advent of computers. The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts, but the technology of the time could not realize it.
-
Conventional computing versus
artificial neural networks
Traditional computers: processing is sequential; they function logically with a set of rules and calculations; they must be taught by being given the exact sequence of steps in an algorithm; learning is top-down.
An ANN: is inherently a multiprocessor; can work with images, pictures, and concepts; neural networks can program themselves; learning is bottom-up.
-
Getting familiar with ANNs
-
Model of ANN
-
Rules for the operation of the neurons
1. Propagation delay is assumed to be constant for all neurons.
2. Neurons fire at discrete moments, not continuously.
3. Each synaptic output stage impinges onto only one synaptic input stage on a subsequent neuron.
4. Each neuron can have a number of synaptic input stages.
5. Synaptic input stages contribute to overcoming a threshold below which the neuron will not fire.
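The threshold rule in point 5 can be sketched as a McCulloch-Pitts style neuron; the weights and threshold below are illustrative values, not taken from the slides:

```python
def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) only if the weighted sum of the synaptic
    inputs reaches the threshold; otherwise do not fire (0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A 2-input neuron with unit weights and threshold 2 acts as AND:
print(mp_neuron([1, 1], [1, 1], 2))  # → 1 (fires)
print(mp_neuron([1, 0], [1, 1], 2))  # → 0 (threshold not overcome)
```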
-
Firing rules
X1:  0  0  0    0    1    1  1    1
X2:  0  0  1    1    0    0  1    1
X3:  0  1  0    1    0    1  0    1
OUT: 0  0  0/1  0/1  0/1  1  0/1  1
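In the table, 0/1 marks patterns for which the node has not been taught an output; the taught patterns appear to be 000→0, 001→0, 101→1, and 111→1. A common firing rule fills in the untaught patterns by Hamming distance to the nearest taught patterns; a sketch (not necessarily the exact rule the slides used):

```python
def hamming(a, b):
    """Number of positions in which two patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def firing_rule(pattern, taught_0, taught_1):
    """Output '0' or '1' according to the nearer taught set;
    '0/1' (undecided) when the distances tie."""
    d0 = min(hamming(pattern, t) for t in taught_0)
    d1 = min(hamming(pattern, t) for t in taught_1)
    if d0 < d1:
        return "0"
    if d1 < d0:
        return "1"
    return "0/1"

taught_0 = [(0, 0, 0), (0, 0, 1)]  # patterns taught NOT to fire
taught_1 = [(1, 0, 1), (1, 1, 1)]  # patterns taught to fire

print(firing_rule((0, 1, 0), taught_0, taught_1))  # → 0   (closer to the 0-set)
print(firing_rule((1, 0, 0), taught_0, taught_1))  # → 0/1 (tie: stays undecided)
```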
-
The perceptron
The perceptron is a mathematical model of a
biological neuron.
-
A perceptron calculates the weighted sum of the input values.
The perceptron outputs a non-zero value only when the weighted sum exceeds a certain threshold.
Output of P = { 1 if Ax + By > C
              { 0 if Ax + By <= C
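The two-input rule above can be written directly in code; the particular values of A, B, and C below are illustrative:

```python
def perceptron(x, y, A, B, C):
    """Output 1 when the weighted sum A*x + B*y exceeds
    the threshold C, otherwise 0."""
    return 1 if A * x + B * y > C else 0

# With A = B = 1 and C = 1.5, the perceptron computes logical AND:
for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, "->", perceptron(x, y, 1, 1, 1.5))
```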
-
Architecture of Neural
Networks
Competitive neural networks
Feed-forward networks
Feed-back networks
-
Simple competitive
networks:
-
Composed of
The Hamming net
The Maxnet
-
Each perceptron at the top layer of the Hamming net calculates a weighted sum of the input values. This weighted sum can be interpreted as the dot product of the input vector and the weight vector.
The Maxnet is a fully connected network, with each node connecting to every other node, including itself. The basic idea is that the nodes compete against each other by sending out inhibiting signals to each other.
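The mutual inhibition can be sketched as follows, assuming the common Maxnet update rule in which each node subtracts a small fraction eps of the other nodes' activations and is clipped at zero (the starting activations and eps are made-up values):

```python
def maxnet(activations, eps=0.1):
    """Iterate mutual inhibition until only one node stays active;
    return the index of the winner.  Assumes a unique maximum
    (exact ties would never be broken)."""
    a = list(activations)
    while sum(v > 0 for v in a) > 1:
        total = sum(a)
        # each node keeps its own signal but is inhibited by
        # eps times the sum of all the other activations
        a = [max(0.0, v - eps * (total - v)) for v in a]
    return a.index(max(a))

print(maxnet([0.5, 0.9, 0.7]))  # → 1: the strongest node survives
```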
-
In a simple competitive network, a Maxnet connects the top nodes of the Hamming net.
Whenever an input is presented, the Hamming net measures the similarity of each node's weight vector to the input vector via the dot product, while the Maxnet selects the node with the greatest dot product. In this way, the whole network selects the node whose weight vector is closest to the input vector, i.e. the winner.
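The whole selection step can be sketched in a few lines: the Hamming-net layer scores each node by a dot product, and the Maxnet picks the largest score. The stored weight vectors below are illustrative (bipolar ±1 coding is assumed):

```python
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def competitive_select(x, weight_vectors):
    """Hamming-net layer: score every node by the dot product of its
    weight vector with the input.  Maxnet: return the winner's index."""
    scores = [dot(x, w) for w in weight_vectors]
    return max(range(len(scores)), key=scores.__getitem__)

# Three nodes, each tuned to one stored pattern:
weights = [(1, 1, -1), (-1, 1, 1), (1, -1, 1)]
print(competitive_select((1, 1, -1), weights))  # → 0 (exact match with node 0)
```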
-
Feed-Forward networks
-
Feed-forward network characteristics
1. Perceptrons are arranged in layers, with the first layer taking in inputs and the last layer producing outputs. The middle layers have no connection with the external world, and hence are called hidden layers.
2. Each perceptron in one layer is connected to every perceptron in the next layer. Information is therefore constantly "fed forward" from one layer to the next, which is why these networks are called feed-forward networks.
3. There are no connections among perceptrons in the same layer.
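Points 1-3 can be sketched as a forward pass through fully connected layers; the 2-2-1 network shape and all weights below are made-up values, and a sigmoid activation is assumed:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """Every input feeds every perceptron in the next layer (point 2);
    there are no connections within a layer (point 3)."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def feed_forward(x, layers):
    for weights, biases in layers:  # input -> hidden -> output (point 1)
        x = layer(x, weights, biases)
    return x

net = [
    ([[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2]),  # hidden layer (2 units)
    ([[1.0, -1.0]], [0.0]),                    # output layer (1 unit)
]
print(feed_forward([1.0, 0.0], net))
```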
-
Feed-forward networks are commonly used for classification.
-
Back-propagation -- learning in
feed-forward networks
Pairs of input and output values are fed into the network for many cycles, so that the network 'learns' the relationship between the input and the output.
{ i = (1, 2), o = (0, 0)
  i = (1, 3), o = (0, 0)
  i = (2, 3), o = (1, 0)
  i = (3, 4), o = (1, 0)
  i = (5, 6), o = (0, 1)
  i = (6, 7), o = (0, 1) }
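The training loop on the pairs above can be sketched from scratch. The 2-4-2 network shape, learning rate, and cycle count are arbitrary choices, and sigmoid activations with a squared-error cost are assumed:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# input/output pairs from the slide
data = [((1, 2), (0, 0)), ((1, 3), (0, 0)),
        ((2, 3), (1, 0)), ((3, 4), (1, 0)),
        ((5, 6), (0, 1)), ((6, 7), (0, 1))]

random.seed(0)
H = 4  # hidden-layer size (arbitrary)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [[random.uniform(-1, 1) for _ in range(H)] for _ in range(2)]
b2 = [0.0] * 2

def forward(x):
    h = [sigmoid(sum(w * v for w, v in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    o = [sigmoid(sum(w * v for w, v in zip(row, h)) + b)
         for row, b in zip(W2, b2)]
    return h, o

def total_error():
    """Sum of squared errors over the whole training set."""
    return sum(sum((t - y) ** 2 for t, y in zip(target, forward(x)[1]))
               for x, target in data)

lr = 0.5
before = total_error()
for cycle in range(2000):  # 'many cycles'
    for x, target in data:
        h, o = forward(x)
        # output-layer deltas: (y - t) * y * (1 - y)
        d_out = [(o_k - t_k) * o_k * (1 - o_k)
                 for o_k, t_k in zip(o, target)]
        # hidden-layer deltas, back-propagated through W2
        d_hid = [h_j * (1 - h_j) * sum(d_out[k] * W2[k][j] for k in range(2))
                 for j, h_j in enumerate(h)]
        # gradient-descent weight updates
        for k in range(2):
            for j in range(H):
                W2[k][j] -= lr * d_out[k] * h[j]
            b2[k] -= lr * d_out[k]
        for j in range(H):
            for i in range(2):
                W1[j][i] -= lr * d_hid[j] * x[i]
            b1[j] -= lr * d_hid[j]

print("error:", before, "->", total_error())
```

After training, the error on the six pairs should have dropped well below its starting value.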
-
BACK-PROPAGATION FORMULAE
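The formulae themselves did not survive the transcript. For a sigmoid network trained on squared error, the standard back-propagation rules (presumably what this slide presented) are:

```latex
\delta_k = (y_k - t_k)\,y_k(1 - y_k) \quad \text{(output units)}
\delta_j = h_j(1 - h_j)\sum_k \delta_k w_{kj} \quad \text{(hidden units)}
\Delta w_{kj} = -\eta\,\delta_k h_j, \qquad \Delta w_{ji} = -\eta\,\delta_j x_i
```

where eta is the learning rate, t_k the target, and y_k, h_j, x_i the output, hidden, and input activations.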
-
Feedback networks
-
The Learning Process
Associative mapping
Auto-association
Hetero-association
Regularity detection
-
Supervised learning
Unsupervised learning
-
Transfer/Activation Function
The activation function accepts a value that is the weighted sum of the neuron's inputs and returns a value that represents the output of the neuron.
A continuous activation function should be used for training neural networks, because a continuous function gives better feedback about the degree of error in the network.
To make a neural network that performs some specific task, we must set the weights on the connections appropriately.
-
Sigmoid Function
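The sigmoid, sigma(z) = 1 / (1 + e^(-z)), is the classic continuous activation function; its derivative sigma(z)(1 - sigma(z)) is what makes it convenient for back-propagation. A minimal sketch:

```python
import math

def sigmoid(z):
    """Squashes any real input into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(z):
    """sigma'(z) = sigma(z) * (1 - sigma(z)) -- no extra exp needed."""
    s = sigmoid(z)
    return s * (1 - s)

print(sigmoid(0))             # → 0.5
print(sigmoid_derivative(0))  # → 0.25 (the slope is steepest at z = 0)
```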
-
Some specific details of
neural networks
Classification (pattern recognition programs)
Prediction (stock market prediction)
Clustering (data mining)
Association ("remembering")
-
Applications of neural networks
Sales forecasting
Industrial process control
Data validation
Target marketing
Recognition of speakers in communications
Diagnosis of hepatitis
Recovery of telecommunications from faulty software
Facial recognition
-
Conclusion
The computing world has a lot to gain from neural networks. Neural networks have huge potential; we will only get the best out of them when they are integrated with computing, AI, fuzzy logic, and related subjects.