ARTIFICIAL NEURAL NETWORKS
BY: Mehta Rutul R.
GUIDED BY: Vishwesh Sir
INTRODUCTION
What is Artificial Neural Network
An Artificial Neural Network (ANN) is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information.
It is composed of a large number of highly interconnected processing elements (neurons) working to solve specific problems.
It is an attempt to simulate within specialized hardware or sophisticated software, the multiple layers of simple processing elements called neurons.
An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process.
Research History
• McCulloch and Pitts (1943) are generally recognized as the designers of the first neural network.
• They combined many simple processing units, which together could lead to an overall increase in computational power.
• They suggested many ideas, e.g. that a neuron has a threshold level, and once that level is reached the neuron fires.
• The McCulloch and Pitts network had a fixed set of weights.
• Hebb (1949) developed the first learning rule: if two neurons are active at the same time, then the strength of the connection between them should be increased.
• Minsky & Papert (1969) showed that the perceptron could not learn functions which are not linearly separable. Later, the researchers Parker and LeCun discovered a learning algorithm for multi-layer networks, called back-propagation, that could solve problems that were not linearly separable.
The schematic model of a biological neuron
[Figure: a soma (cell body) with branching dendrites; an axon leaving the soma; synapses where axons from other neurons terminate on the dendrites.]
1. The soma, or cell body, is a large, round central body in which almost all the logical functions of the neuron are realized.
2. The axon (output) is a nerve fibre attached to the soma which serves as the final output channel of the neuron. An axon is usually highly branched.
3. The dendrites (inputs) represent a highly branching tree of fibres. These long, irregularly shaped nerve fibres (processes) are attached to the soma.
4. Synapses are specialized contacts on a neuron which are the termination points for the axons from other neurons.
Biological Neurons
Why neural network?

f(x1, ..., xn) - unknown multi-factor decision rule

Learning process using a representative learning set:

(w0, w1, ..., wn) - a set of weighting vectors is the result of the learning process

f̂(x1, ..., xn) = P(w0 + w1·x1 + ... + wn·xn) - a partially defined function, which is an approximation of the decision rule function
A Neuron

f(x1, ..., xn) = φ(w0 + w1·x1 + ... + wn·xn)

f is the function to be learned
x1, ..., xn are the inputs
φ is the activation function
z = w0 + w1·x1 + ... + wn·xn is the weighted sum
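The weighted-sum-plus-activation computation above can be sketched in Python. This is a minimal illustration; the sigmoid activation is an assumption, since the slides leave φ unspecified, and the weights are arbitrary values chosen for the example:

```python
import math

def neuron(x, w, activation=lambda z: 1 / (1 + math.exp(-z))):
    """Compute phi(w0 + w1*x1 + ... + wn*xn) for inputs x and weights w.

    w[0] is the bias term w0; w[1:] are the input weights.
    """
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))  # weighted sum z
    return activation(z)

# Example: two inputs, illustrative weights; z = 0.1 + 0.4*1.0 - 0.2*0.5 = 0.4
print(neuron([1.0, 0.5], [0.1, 0.4, -0.2]))  # ≈ 0.5987 (sigmoid of 0.4)
```

With a step function in place of the sigmoid, this reduces to the threshold neuron discussed later in the slides.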
A Neuron
• A neuron's functionality is determined by the nature of its activation function: its main properties, its plasticity and flexibility, and its ability to approximate the function to be learned.
When we need a network
• The functionality of a single neuron is limited. For example, a threshold neuron cannot learn non-linearly separable functions.
• To learn functions that cannot be learned by a single neuron, a neural network should be used.
The simplest network
[Figure: inputs x1 and x2 feed Neuron 1 and Neuron 2 in the first layer; the outputs of both feed Neuron 3.]
In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites.
The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches.
While in an artificial neuron…

Similarities: Artificial Neuron & Brain Neuron
We construct artificial neural networks by first trying to deduce the essential features of neurons and their interconnections.
We then typically program a computer to simulate these features.
However because our knowledge of neurons is incomplete and our computing power is limited, our models are necessarily gross idealizations of real networks of neurons.
The firing rule is an important concept in neural networks and accounts for their high flexibility. A firing rule determines how one calculates whether a neuron should fire for any input pattern. It relates to all the input patterns, not only the ones on which the node was trained.
A simple firing rule can be implemented by using Hamming distance technique. The rule goes as follows:
• Take a collection of training patterns for a node, some of which cause it to fire (the 1-taught set of patterns) and others which prevent it from doing so (the 0-taught set).
• Then the patterns not in the collection cause the node to fire if, on comparison, they have more input elements in common with the 'nearest' pattern in the 1-taught set than with the 'nearest' pattern in the 0-taught set. If there is a tie, then the pattern remains in the undefined state.
Firing Rule
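The Hamming-distance firing rule described above can be sketched as follows. This is a minimal illustration under the assumption that patterns are encoded as equal-length tuples of 0s and 1s:

```python
def hamming(a, b):
    """Number of positions where two equal-length 0/1 patterns differ."""
    return sum(ai != bi for ai, bi in zip(a, b))

def fires(pattern, one_taught, zero_taught):
    """Apply the Hamming firing rule: fire (1) if the pattern is nearer to
    the 1-taught set than to the 0-taught set, stay off (0) if nearer to
    the 0-taught set, and return None (undefined state) on a tie."""
    d1 = min(hamming(pattern, p) for p in one_taught)
    d0 = min(hamming(pattern, p) for p in zero_taught)
    if d1 < d0:
        return 1
    if d0 < d1:
        return 0
    return None  # tie: pattern remains in the undefined state

# A 3-input node taught to fire on (1,1,1) and not to fire on (0,0,0).
# The untaught pattern (0,1,1) is nearer to the 1-taught set, so it fires.
print(fires((0, 1, 1), [(1, 1, 1)], [(0, 0, 0)]))  # → 1
print(fires((1, 0, 0), [(1, 1, 1)], [(0, 0, 0)]))  # → 0
```

This shows how the rule generalizes beyond the taught patterns: every input pattern gets a response, not only the ones the node was trained on.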
An artificial neuron is a device with many inputs and one output.
If the input pattern does not belong in the taught list of input patterns, the firing rule is used to determine whether to fire or not.
The neuron has two modes of operation; the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not), for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output.
Simple Neuron
A more sophisticated neuron (figure) is the McCulloch and Pitts model (MCP).
The inputs are 'weighted': the effect that each input has on decision making depends on the weight of that particular input.
These weighted inputs are then added together and if they exceed a pre-set threshold value, the neuron fires. In any other case the neuron does not fire.
In mathematical terms, the neuron fires if and only if: X1W1 + X2W2 + X3W3 + ... > T (threshold value)
More Complicated Neuron
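The MCP firing condition above can be transcribed directly. The AND-gate example below is an illustration of my own choosing; the threshold value T is a free parameter:

```python
def mcp_fires(inputs, weights, threshold):
    """McCulloch-Pitts neuron: fire (1) iff X1*W1 + X2*W2 + ... > T."""
    return int(sum(x * w for x, w in zip(inputs, weights)) > threshold)

# With weights (1, 1) and T = 1.5, only (1, 1) exceeds the threshold,
# so the neuron computes the logical AND of its two inputs.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", mcp_fires((x1, x2), (1, 1), 1.5))
```

Choosing other weights and thresholds yields other linearly separable functions (e.g. weights (1, 1) with T = 0.5 gives OR), but no single choice yields XOR, which is the limitation discussed later.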
Weight:
The weight of an input is a number which, when multiplied by the input, gives the weighted input.
Architecture
There are two types of architecture of Neural Networks:
• Feed-forward Networks
• Feed-back Networks
Feed-forward ANNs (figure 1) allow signals to travel one way only: from input to output.
There is no feedback (loops); i.e., the output of any layer does not affect that same layer.
Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs.
Feed-forward Networks
Feedback networks (figure) can have signals traveling in both directions by introducing loops in the network.
Feedback networks are very powerful and can get extremely complicated; their state changes continuously until they reach an equilibrium point.
They remain at the equilibrium point until the input changes and a new equilibrium needs to be found.
Feed-back Networks
Network Layers
The commonest type of artificial neural network consists of three groups, or layers, of units:
• A layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units.
The activity of the input units represents the raw information that is fed into the network
The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.
The behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
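The input → hidden → output computation described above can be sketched as a forward pass. This is a minimal illustration: the sigmoid activation, the layer sizes, and the weight values are all assumptions, not taken from the slides:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights):
    """One layer: each unit's activity = sigmoid(bias + weighted input sum).

    Each row of `weights` is [bias, w1, ..., wn] for one unit.
    """
    return [sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], inputs)))
            for w in weights]

def forward(inputs, hidden_weights, output_weights):
    hidden = layer(inputs, hidden_weights)   # hidden activity from input units
    return layer(hidden, output_weights)     # output activity from hidden units

# 2 input units -> 2 hidden units -> 1 output unit (illustrative weights)
print(forward([1.0, 0.0],
              [[0.1, 0.5, -0.3], [-0.2, 0.8, 0.4]],
              [[0.0, 1.0, -1.0]]))
```

Note how the structure mirrors the text: the output units never see the raw inputs directly, only the activities of the hidden units.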
The most influential work on neural nets in the 60's went under the heading of 'perceptrons' a term coined by Frank Rosenblatt.
The perceptron (figure 4.4) turns out to be an MCP model (neuron with weighted inputs) with some additional, fixed, pre-processing.
Units labeled A1, A2, Aj, Ap are called association units; their task is to extract specific, localized features from the input images.
Threshold Neuron (Perceptrons)
Perceptrons
Perceptrons mimic the basic idea behind the mammalian visual system.
They were mainly used in pattern recognition even though their capabilities extended a lot more.
In 1969 Minsky and Papert wrote a book in which they described the limitations of single layer Perceptrons.
The book was very well written and showed mathematically that single layer perceptrons could not do some basic pattern recognition operations like determining the parity of a shape or determining whether a shape is connected or not.
What they did not realize, until the 80's, is that given the appropriate training, multilevel perceptrons can do these operations.
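The slides do not spell out how a perceptron is trained, so as a hedged sketch, here is the standard perceptron learning rule (nudge the weights toward each misclassified example) applied to the linearly separable AND function; the learning rate and epoch count are arbitrary choices:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Standard perceptron rule: w <- w + lr*(target - output)*x, bias likewise.

    `samples` is a list of (input_tuple, target) pairs with targets 0 or 1.
    """
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = int(b + sum(wi * xi for wi, xi in zip(w, x)) > 0)
            err = target - out
            b += lr * err
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w, b

# AND is linearly separable, so a single perceptron learns it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([int(b + sum(wi * xi for wi, xi in zip(w, x)) > 0) for x, _ in data])
# → [0, 0, 0, 1]
```

Running the same procedure on XOR never converges, which is exactly the Minsky-Papert limitation; a multi-layer network (trained with back-propagation) is needed for that case.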
ADVANTAGES OF ANN
• A neural network can perform tasks that a linear program cannot.
• When an element of the neural network fails, the network can continue operating because of its parallel nature.
• A neural network learns and does not need to be reprogrammed.
• It can be implemented in any application.
• It can be implemented without any problem.
DISADVANTAGES OF ANN
• The neural network needs training to operate.
• The architecture of a neural network is different from the architecture of microprocessors and therefore needs to be emulated.
• Large neural networks require high processing time.
Applications of Artificial Neural Networks

Artificial intellect with neural networks is applied in:
• Intelligent Control
• Technical Diagnostics
• Intelligent Data Analysis and Signal Processing
• Advanced Robotics
• Machine Vision
• Image & Pattern Recognition
• Intelligent Security Systems
• Intelligent Medicine Devices
• Intelligent Expert Systems
THANK YOU