NEURAL NETWORKS
Data Mining PhD Seminar – 31 October 2011
Gabriela Sava
What is a Neural Network?
A complex structure that has the ability to store experiential knowledge and use it in decision making. Neural Networks are biologically inspired.
The main goal of using Neural Networks is to "train" them to learn a classification task.
The main characteristics of Neural Networks are:
[figure: main characteristics of Neural Networks]
Mathematical definition
A neural network is a directed graph, with a vertex set V and a set of arcs, subject to the following restrictions:
1. V is partitioned into a set of input nodes, hidden nodes, and output nodes.
2. The vertices are partitioned into layers, with all input nodes in layer 1 and all output nodes in layer k. The hidden nodes are in the layers between 2 and k-1 – the hidden layers.
3. Any arc <i, j> must have node i in layer h-1 and node j in layer h.
4. Any arc <i, j> is labeled with a numeric value called weight.
5. Node i is labeled with a function.
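A minimal sketch of this definition as a data structure (all node names and weight values below are illustrative, not taken from the slides):

```python
# Layered, directed, weighted graph satisfying restrictions 1-5 above.
layers = {
    1: ["x1", "x2"],   # input nodes (layer 1)
    2: ["h1", "h2"],   # hidden nodes (layers 2 .. k-1)
    3: ["y1"],         # output nodes (layer k)
}
# Restrictions 3-4: each arc <i, j> goes from layer h-1 to layer h
# and is labeled with a numeric weight.
weights = {
    ("x1", "h1"): 0.5, ("x1", "h2"): -0.3,
    ("x2", "h1"): 0.8, ("x2", "h2"): 0.1,
    ("h1", "y1"): 1.2, ("h2", "y1"): -0.7,
}
# Restriction 5: each node is labeled with a function.
functions = {"h1": "sigmoid", "h2": "sigmoid", "y1": "sigmoid"}
```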
What is a Neural Network Model?
A neural network model is a computational model consisting of:
1. A neural network graph, which defines the data structure of the neural network
2. A learning algorithm, which indicates how learning takes place
3. Recall techniques, which determine how information is obtained from the network
Characteristics
Multilayer neural networks with at least one hidden layer are universal approximators – they can be used to approximate any target function.
They can handle redundant features, because the weights are automatically learned during the training step – the weights for redundant features are very small.
They are sensitive to the presence of noise in the training data – to handle the noise, we can use a validation set to determine the generalization error of the model, or decrease the weights by a decay factor at each iteration.
Terminology
Input nodes – nodes that accept input patterns
Bias – an extra input for a node, with value 1, which has a negative weight
Hidden nodes – nodes that accept data from the input nodes, perform computations on them, and then send the results to the output nodes
Output nodes – nodes that accept data from the hidden ones and give the output to a user or a user interface, or compare the output with the target patterns
Training set – used to train and teach the network to recognize patterns
Validation set – used to tune the parameters of a classifier, by choosing the number of hidden nodes or hidden layers in the network
Test set – used to test the performance of the neural network (only for a fully specified classifier)
Activation function – the function which is applied to the set of inputs coming in to a node.
There have been many proposals for activation functions over the years, but the most used ones currently are:
Threshold or step – the output value is 0 or 1, depending on the sum of the products of the input values and their associated weights.
(The binary output values may also be -1 and 1.)
Function's form:

$f(S) = \begin{cases} 1, & S \ge T \\ 0, & S < T \end{cases}$

where: S – the sum of the inputs, adjusted by the weights
T – the threshold

Networks that use the threshold activation function:
Hopfield Networks
Bidirectional Associative Memory Models (BAM)
Sigmoid – an S-shaped curve with output values between -1 and 1 (or 0 and 1), which is monotonically increasing.
Although there are several types of sigmoid functions, a common one is the logistic function:

$f(S) = \frac{1}{1 + e^{-cS}}$

where: c – a positive constant value that changes the slope of the function
The sigmoid function is really useful because it has some nice properties:
1. It is a smooth threshold, compared with the simple threshold
2. It has a simple derivative, $f'(S) = c\,f(S)\,(1 - f(S))$, which is critical in finding the proper weights to use

Networks that use the sigmoid activation function:
Backpropagation Neural Network Model (BPNN)
Gaussian – a bell-shaped curve with output values in the range [0, 1].
A typical function is:

$f(S) = \exp\left(-\frac{(S - \mu)^2}{2\sigma^2}\right)$

where: $\mu$ – the mean
$\sigma^2$ – the variance of the function

Networks that use the Gaussian activation function:
Kohonen Networks
Probabilistic Neural Networks (PNN)
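A minimal NumPy sketch of the three activation functions described above; the parameter names and default values are illustrative assumptions:

```python
import numpy as np

def step(s, threshold=0.0):
    # Threshold/step: 1 if the weighted input sum reaches T, else 0.
    return np.where(s >= threshold, 1.0, 0.0)

def logistic(s, c=1.0):
    # Logistic sigmoid; c > 0 controls the slope. Output lies in (0, 1).
    return 1.0 / (1.0 + np.exp(-c * s))

def logistic_derivative(s, c=1.0):
    # The "simple derivative" property: f'(s) = c * f(s) * (1 - f(s)).
    f = logistic(s, c)
    return c * f * (1.0 - f)

def gaussian(s, mean=0.0, var=1.0):
    # Bell-shaped activation with outputs in (0, 1].
    return np.exp(-((s - mean) ** 2) / (2.0 * var))
```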
Training algorithm – there are various training techniques used to train neural networks:
Hebbian Learning Algorithm – unsupervised learning, which can be described as a local phenomenon involving only 2 nodes and a connection
Instar Training – performs pattern recognition; the network is trained to respond to a specific input vector
Self-organization – the algorithm used to construct Kohonen maps
Design issues
Number of source nodes – assign an input node to each numerical or binary input variable (if the variable is categorical, we can use a coding system)
Number of hidden layers – depending on the network complexity, one or two hidden layers are enough; this can be decided manually at the beginning, or automatically by the training set
Number of hidden nodes – depends on the structure of the network, the activation function type, the training algorithm, the problem being solved, and the amount of noise
If too few hidden nodes are used, the target function may not be learned – underfitting.
If too many nodes are used, overfitting may occur.
Rules of thumb are often given, based on the training set size.
Number of output nodes – usually the number of output nodes is the same as the number of classes, but this is not always the case (e.g., two classes can share only one output node)
Interconnections – to select the right model complexity, we can start from a fully connected network and remove some nodes, evaluating the remaining structure
Weights – the initial ones are assumed to be small positive values, assigned randomly
Activation function – the one which best describes the learning algorithm that we want to implement will be used
Learning algorithm – the most common approach is an adaptive form of backpropagation
Training data – with too much training data the network may suffer from overfitting, while with too little it may not be able to classify accurately enough.
Training examples with missing values should be removed or replaced.
The data needs to cover the full range of values for all features that the network might encounter (including the output).
When to use Neural Networks?
Instances are represented by many attribute-value pairs – the target function is described as a vector
The target function output may be discrete-valued, real-valued, or a vector of several real or discrete attributes
The training set may contain errors – the learning methods are robust to noise
Long training times are acceptable
Fast evaluation of the learned target function is required – the recall process in neural networks is much faster than the learning one
The ability of humans to understand the learned target function is not important – learned neural networks are not easily communicated to humans
Architecture
Feed-forward networks – connections go only to layers later in the structure; the signal propagates in one direction only: the nodes from the next layer use the values produced by the previous layer as input values
Feedback networks – there exist connections back to earlier layers, which allow the signals to come back to previous nodes; such networks can learn new patterns and recall old ones simultaneously
Classification
Unsupervised Learning Models – the process does not require an example of the desired output (in most models the target output is the same as the input)
Objective – to categorize or discover features or patterns in the training data
Used in a wide variety of fields, under different names – the best known is "cluster analysis"
The most common variety is Hebbian learning – dimensionality reduction
For more about Hebb's Law: T. Kohonen, Self-Organizing Maps, 3rd edition, Springer, 2001
Feedback Nets
Networks which allow the output to be fed back to the input
Models from this category:
Binary Adaptive Resonance Theory (ART1)
Discrete and Continuous Hopfield (DH and CH)
Discrete Bidirectional Associative Memory (BAM)
Feed-forward Nets
Networks which do not allow any feedback from the output to the input – the connections are unidirectional
Models from this category:
Learning Matrix (LM)
Linear Associative Memory (LAM)
Fuzzy Associative Memory (FAM)
Counterpropagation Network (CPN)
Supervised learning models – require examples of the desired output to be specified, from which rules are generated
Objective – obtain the desired output by an iterative process of adjusting the weights, developing an input/output behavior that maximizes the probability of receiving a reward and minimizes that of receiving a penalty
Feedback Nets
The system can reduce the learning time by adjusting the weights until it learns the input patterns
Models from this category:
Brain-State-in-a-Box (BSB)
Fuzzy Cognitive Map (FCM)
Boltzmann Machine (BM)
Backpropagation Through Time (BPTT)
Real-Time Recurrent Learning (RTRL)
Feed-forward Nets
The most applied neural network models – if the user does not obtain the desired output, the process is iterated using the connected nodes
Models from this category:
Perceptron
Backpropagation (BP)
Adaptive Logic Network (ALN)
Learning Vector Quantization (LVQ)
Probabilistic Neural Networks (PNN)
Backpropagation Neural Network
Backpropagation training is an iterative gradient algorithm designed to minimize the mean-square error between the actual output and the desired one
The process is a step-by-step one, which means the learning time is usually long
Strength – gives good performance and easily handles complex pattern recognition
Weakness – the learning speed is slow, and it may become trapped at local minima
Network Architecture
Input layer – the input variables, which use the linear transformation function
Hidden layer – represents the interaction among the input nodes; uses the sigmoid transformation function
Output layer – represents the output variables
Algorithm
Learning Process
1. Set the parameters of the network
2. Set uniform random values for:
- the weight matrix W_xh between the input layer and the hidden layer
- the weight matrix W_hy between the hidden layer and the output layer
- the bias in the hidden layer
- the bias in the output layer
3. Obtain an input training vector X and the desired output vector T
4. Calculate the output vector Y as follows:
4.a Calculate the output vector H in the hidden layer:

$net_h = \sum_i W_{xh}[i,h]\,x_i + b_h$, $\quad H_h = f(net_h)$

where: $net_h$ – the net activation for each hidden node, given the inputs; f – the sigmoid activation function
4.b Calculate the output vector Y:

$net_y = \sum_h W_{hy}[h,y]\,H_h + b_y$, $\quad Y_y = f(net_y)$

where: $net_y$ – the net activation for each output node, given the hidden nodes' signals
5. Calculate the sensitivity value δ (the output error), where $\delta_j$ is the sensitivity (training error) for unit j:
5.a calculate the value in the output layer: $\delta_y = (T_y - Y_y)\, f'(net_y)$
5.b calculate the value in the hidden layer: $\delta_h = f'(net_h) \sum_y W_{hy}[h,y]\,\delta_y$
6. Adjust the weights:
6.a at the output layer: $W_{hy}[h,y] \leftarrow W_{hy}[h,y] + \eta\,\delta_y\,H_h$
where: η – the learning rate, which indicates the relative size of the change in weights
6.b at the hidden layer: $W_{xh}[i,h] \leftarrow W_{xh}[i,h] + \eta\,\delta_h\,x_i$
7. Update W and the biases using the Hebbian Learning algorithm:
7.a at the output layer
7.b at the hidden layer
8. Repeat steps 3 to 7 until the network converges
(A code sketch covering both the learning and the recall process follows the recall steps below.)
Recall Process
1. Set the network parameters
2. Read in the weight matrices W_xh and W_hy and the bias vectors
3. Read in the test vector X
4. Calculate the output vector Y as follows:
4.a Calculate the output vector H in the hidden layer
4.b Calculate the output vector Y:

$Y_y = f\Big(\sum_h W_{hy}[h,y]\,H_h + b_y\Big)$

where the net activation for each output node is given by the hidden nodes' signals, exactly as in the learning process
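A minimal NumPy sketch of the learning process (steps 1-8) and the recall process above, for a single hidden layer. The matrix names W_xh and W_hy and the learning rate η follow the slides; the layer sizes, the XOR training data (discussed later in this deck), the epoch count, and the initialization ranges are illustrative assumptions:

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(0)
n_in, n_hid, n_out, eta = 2, 4, 1, 0.5           # step 1: network parameters
W_xh = rng.uniform(-0.5, 0.5, (n_in, n_hid))     # step 2: random weights
W_hy = rng.uniform(-0.5, 0.5, (n_hid, n_out))
b_h = rng.uniform(-0.5, 0.5, n_hid)              # bias in the hidden layer
b_y = rng.uniform(-0.5, 0.5, n_out)              # bias in the output layer

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # training vectors
T = np.array([[0], [1], [1], [0]], float)              # desired outputs (XOR)

for epoch in range(20000):                       # step 8: repeat to converge
    for x, t in zip(X, T):                       # step 3: next training pair
        h = sigmoid(x @ W_xh + b_h)              # step 4.a: hidden vector H
        y = sigmoid(h @ W_hy + b_y)              # step 4.b: output vector Y
        d_y = (t - y) * y * (1 - y)              # step 5.a: output sensitivity
        d_h = h * (1 - h) * (W_hy @ d_y)         # step 5.b: hidden sensitivity
        W_hy += eta * np.outer(h, d_y)           # step 6.a: output weights
        W_xh += eta * np.outer(x, d_h)           # step 6.b: hidden weights
        b_y += eta * d_y                         # step 7: update the biases
        b_h += eta * d_h

def recall(x):
    # Recall process: a forward pass with the learned weights (steps 1-4).
    return sigmoid(sigmoid(x @ W_xh + b_h) @ W_hy + b_y)

print(np.round(recall(X), 2))                    # should approach [0, 1, 1, 0]
```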
Limitations
Local Minima – occur because the algorithm always changes the weights in such a way as to cause the error to fall; but the error might briefly have to rise as part of a more general fall. If this is the case, the algorithm "gets stuck" (because it can't go uphill) and the error will not decrease further.
Solution: reset the weights and start the training again with other random values
XOR problem
The simplest problem which can be solved using Backpropagation is XOR, which can be described as follows: given two inputs representing conditions which are either both true or both false, the result should be false; whereas given two inputs for which only one of the represented conditions is true, the output should be true.
For example, if I would like a new car ...
The situations described are synthesized in the following table:

Input A | Input B | Output F
0       | 0       | 0
0       | 1       | 1
1       | 0       | 1
1       | 1       | 0

The logical expression that describes the problem is:
(A or B) and not (A and B)
XOR representation
Graphically, the XOR problem can be represented using a neural network as follows:
[figure: XOR network with two input nodes A and B, two hidden nodes, and one output node]
Solving XOR
When A and B are both zero on input, their sum is still zero on reaching the hidden layer; thus neither node is activated, resulting in a zero output.
When A is 0 and B is 1, their sum is greater than ½ but less than 1½; thus the upper node is activated, resulting in a 1, and the lower node is not activated, resulting in a 0.
These values, times their respective weights of 1 and -1, result in a 1 at the output node.
When A is 1 and B is 1, their sum is greater than both ½ and 1½, resulting in both nodes being activated.
However, because the weighting on the output from the lower node is inverted, the sum of the values at the output node is zero:
(1 × 1) + (1 × (-1)) = 1 - 1 = 0
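The walkthrough above can be checked with a few lines of code. The hidden thresholds of ½ and 1½ and the output weights of 1 and -1 come from the text; the output threshold of ½ is an assumption about the lost figure:

```python
def step(s, threshold):
    # Threshold activation, as in the network above.
    return 1 if s > threshold else 0

def xor_net(a, b):
    upper = step(a + b, 0.5)     # fires when the sum exceeds 1/2
    lower = step(a + b, 1.5)     # fires only when both inputs are 1
    return step(1 * upper + (-1) * lower, 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))   # prints the truth table: 0, 1, 1, 0
```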
Examples
Pattern classification
Adaptive control
Noise filtering
Data compression
Expert systems
Probabilistic Neural Network (PNN)
Probabilistic neural networks are feed-forward networks built with three layers – they use nonlinear decision boundaries that are derived from Bayes decision strategies for classifying input vectors
They train quickly, since the training is done in one pass over each training vector, rather than several
They estimate the probability density function for each class, based on the training samples
Strengths:
High computation capability – saves time and effort when working with huge databases, and improves the accuracy of the computation results
Learning – a PNN is a dynamic system which can learn quickly from the data source; also, the decision boundaries can be updated in real time, using new data as it becomes available
Fault tolerance – damage to the connections will only slightly decrease the functionality; incomplete or noisy input information will not stop the network processes
Weaknesses:
Large memory requirements – because the information is stored in matrix form, and as the number of training samples increases, the matrix becomes very large
Slower recall process – due to the processing of the large matrices
Network Architecture
Input layer – the vectors which need to be classified
Pattern layer – has one neuron for each training vector sample
Summation layer – has one neuron for each population class
Output layer – a threshold discriminator, which decides the summation unit with the maximum output
Basic concepts
Assume a number of possible classifications. The classification rule assigns the input vector to the class with the largest density value at X.
The probability of classifying the input vector into each class is determined by a function which has a Gaussian distribution:

$f_i(X) = \frac{1}{(2\pi)^{m/2}\,\sigma^m\,N_i} \sum_{j=1}^{N_i} \exp\left(-\frac{\|X - X_{ij}\|^2}{2\sigma^2}\right)$
where:
X – the input vector
$N_i$ – the total number of training patterns for category i
j – the pattern number
m – the space dimension
σ – the smoothing parameter
$X_{ij}$ – the j-th training pattern for category i
In PNN we are interested only in the relative probability between the categories, so the formula used to code the learning program drops the constant factor:

$g_i(X) = \frac{1}{N_i} \sum_{j=1}^{N_i} \exp\left(-\frac{\|X - X_{ij}\|^2}{2\sigma^2}\right)$
Algorithm
Learning process
1. Use random numbers to initialize the original network weights, and set the smoothing parameter σ
2. Input the vector X of the training sample and the target vector T
3. Set the matrix W:
3.a the matrix W_xh, between the input layer and the hidden layer, stores the training inputs
where: x – the value of one of the input vectors in one of the training samples
3.b the matrix W_hy, between the hidden layer and the output layer, stores the class memberships
where: y – the value of one of the output vectors in one of the training samples
Recall process
1. Set the smoothing parameter σ – by an educated guess based on knowledge of the data, or using a heuristic technique (e.g., jackknifing)
2. Read the matrices W_xh and W_hy
3. Input the vector X of one of the testing examples
4. Compute the deductive output vector Y
4.a Compute the output vector H of the hidden layer: each pattern unit applies the Gaussian kernel to its stored training vector, $H_j = \exp\left(-\frac{\|X - W_{xh}[:,j]\|^2}{2\sigma^2}\right)$
4.b Compute the deductive output Y: each summation unit adds the pattern-layer activations of its class, $Y_c = \sum_j W_{hy}[j,c]\,H_j$
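A minimal NumPy sketch of PNN recall under the relative-probability formula above: one pattern unit per stored training vector and one summation unit per class. The data and the value of the smoothing parameter are illustrative assumptions:

```python
import numpy as np

def pnn_classify(x, train_X, train_labels, sigma=0.3):
    # Pattern layer: Gaussian kernel around every stored training vector.
    h = np.exp(-np.sum((train_X - x) ** 2, axis=1) / (2 * sigma ** 2))
    # Summation layer: average the pattern activations per class
    # (the constant factors are dropped, as in the simplified formula).
    classes = np.unique(train_labels)
    scores = [h[train_labels == c].mean() for c in classes]
    # Output layer: the threshold discriminator picks the largest summation.
    return classes[int(np.argmax(scores))]

train_X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
train_labels = np.array([0, 0, 1, 1])
print(pnn_classify(np.array([0.85, 0.75]), train_X, train_labels))  # -> 1
```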
Training
The training set must be thoroughly representative of the actual population for effective classification
Adding and removing training samples simply involves adding or removing "neurons" in the pattern layer
As the training set increases in size, the PNN asymptotically converges to the Bayes-optimal classifier
The training process of a PNN is essentially the act of determining the smoothing parameter σ
Examples
Applications in databases and signal processing
Modeling knowledge in:
computational biology and bioinformatics (gene regulatory networks, protein structure, gene expression analysis)
medicine (probabilistic relationships between diseases and symptoms – given the symptoms, the network can be used to compute the probabilities of the presence of various diseases)
document classification, information retrieval, image processing
decision support systems
engineering, gaming, law
Kohonen Networks
Self-Organising Maps (SOM)
For more details see: T. Kohonen, Self-Organizing Maps, 3rd edition, Springer, 2001, ch. 3 and 4
It is a self-organizing network – the correct output cannot be defined a priori, and therefore a numerical measure of the magnitude of the mapping error cannot be used
Main characteristic – it transforms the input space into a 1-D or 2-D discrete map (for visualization and dimension reduction) in a topologically-preserving way (neighboring neurons respond to "similar" input patterns)
Network Architecture
Self-organizing maps are an example of competitive learning – the process finds the topology directly from the data
The Kohonen model has a strong neurobiological background – the mapping is similar to that of the visual field on the cortex
Kohonen SOMs result from the synergy of three basic processes: Competition, Cooperation and Adaptation
Competition
Each neuron in a SOM is assigned a weight vector with the same dimensionality as the input space
Any given input pattern is compared to the weight vector of each neuron, and the closest neuron is declared the winner
The Euclidean norm is commonly used to measure the distance
Cooperation
The activation of the winning neuron is spread to the neurons in its immediate neighborhood
This allows topologically close neurons to become sensitive to similar patterns – this is how the winner's neighborhood is determined
Distance in the map is a function of the number of lateral connections to the winner (as in city-block distance)
The size of the neighborhood is initially large, but shrinks over time
An initially large neighborhood promotes a topology-preserving mapping
Smaller neighborhoods allow the neurons to specialize in the latter stages of the training
Adaptation
During training, the winner neuron and its topological neighbors are adapted to make their weight vectors more similar to the input pattern that caused the activation
Neurons that are closer to the winner will adapt more heavily than neurons that are further away
Mathematical implementation
The magnitude of the adaptation is controlled with a learning rate, which decays over time to ensure convergence of the SOM
Learning rate decay rule – typically exponential: $\eta(t) = \eta_0\, e^{-t/\tau_1}$
Neighborhood size decay rule – typically exponential: $\sigma(t) = \sigma_0\, e^{-t/\tau_2}$
Algorithm
1. Initialize the weights to some small random values
2.a Select the next input pattern x from the database; find the unit i* that best matches the input pattern:

$i^*(x) = \arg\min_i \|x - w_i\|$

and update the weights of the winner and of all its neighbors:

$w_i \leftarrow w_i + \eta(t)\, h_{i,i^*}(t)\,(x - w_i)$
2.b Decrease the learning rate
2.c Decrease the neighborhood size
3. Repeat step 2 until convergence
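A minimal NumPy sketch of the algorithm above for a 1-D map; the exponential decay rules follow the previous slide, and all constants and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_steps = 10, 2000
W = rng.random((n_units, 2)) * 0.1        # 1. small random weights
data = rng.random((500, 2))               # unlabeled input patterns
eta0, sigma0, tau = 0.5, 3.0, n_steps / 4.0

for t in range(n_steps):
    x = data[rng.integers(len(data))]     # 2.a select the next input pattern
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
    eta = eta0 * np.exp(-t / tau)         # 2.b decrease the learning rate
    sigma = sigma0 * np.exp(-t / tau)     # 2.c decrease the neighborhood size
    # Neighborhood function on the lattice: closer units adapt more heavily.
    lattice_dist = np.abs(np.arange(n_units) - winner)
    h = np.exp(-(lattice_dist ** 2) / (2 * sigma ** 2))
    W += eta * h[:, None] * (x - W)       # update the winner and its neighbors
```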
Examples
Visualization of higher-dimensional data or processes
Density estimation
Inverse kinematics
Discrete Hopfield Networks
Hopfield's basic idea was to add feedback connections to the network, and to show that with these connections the networks are capable of having memories – content-addressable memory systems with binary threshold units
Main characteristic – a Hopfield network can memorize and reconstruct a pattern from a corrupted original (auto-associative memory)
The network operates similarly to the feed-forward ones
Network Architecture
Single-layered recurrent networks
All the neurons receive feedback from every other neuron
The states of the neurons are binary: -1 and 1
The connections are symmetric – $w_{ij} = w_{ji}$
No self connections ($w_{ii} = 0$)
The information is stored in fixpoint attractors

Algorithm
1. Train the network using a standard pattern
2. Update the node states of the network according to the following thresholding rule (activation function):

$s_i \leftarrow \operatorname{sgn}\Big(\sum_j w_{ij}\, s_j\Big)$
The updating of the network can be made sequentially or randomly
3. Run the trained network with the corrupted pattern
4. The network returns the reconstructed ("decrypted") pattern
The network will always converge to a fixpoint attractor – the stored pattern – only if the connections are symmetric
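A minimal NumPy sketch of the algorithm above: Hebbian storage of bipolar patterns, then sequential thresholded updates until a fixpoint is reached. The stored patterns are illustrative:

```python
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])   # 1. train with these patterns
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns).astype(float)  # symmetric weights
np.fill_diagonal(W, 0)                                   # no self connections

def recall(state, sweeps=5):
    s = state.copy()
    for _ in range(sweeps):          # 2. sequential (node-by-node) updating
        for i in range(n):
            s[i] = 1 if W[i] @ s >= 0 else -1   # thresholding rule
    return s

corrupted = np.array([1, -1, 1, -1, 1, 1])  # 3. run with a corrupted pattern
print(recall(corrupted))                    # 4. returns the stored pattern
```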
Examples
Pattern reconstruction
Limitations
The training patterns can represent approximately 15% of the number of nodes in the network
If more patterns are used, then:
the stored patterns become unstable
spurious stable states appear – states which do not correspond to stored patterns
Sometimes the network misinterprets the corrupted pattern.
Bidirectional Associative Memory (BAM)
The BAM network is a generalization of the Hopfield Networks
Main characteristic – implements a heteroassociative memory, which means that, given a pattern, the network can return another pattern, which is potentially of a different size
Strength – can quickly recall the original uncorrupted pattern
Weakness – poor internal capacity to hold the information required to perform reasoning
Network Architecture
The network has only two layers – input and output
The training vector takes the values -1 and 1, and it is divided in 2 parts: the Front part – the input layer, and the Rear part – the output layer
BAM rule: the network can remember the relationships from the Front part to the Rear part

Algorithm
Learning Process
1. Set the network parameters
2. Calculate the weight matrix:

$W = \sum_p x_p\, y_p^{\top}$

where: $x_p$ – the input value for the front part
$y_p$ – the input value for the rear part
p – the learning data at the p-th element
Recall Process
1. Read the weight matrix W
2. Input a test vector X
3. Calculate the output vector Y:

$Y = \operatorname{sgn}(X W)$

where each component is thresholded to -1 or 1
4. Calculate the vector X at the input layer as follows:

$X = \operatorname{sgn}(Y W^{\top})$

5. Repeat steps 3 and 4 until the network converges to the learning rule – the output nodes are associated with the input ones
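A minimal NumPy sketch of the BAM learning and recall steps above, with bipolar front/rear pattern pairs; all data is illustrative:

```python
import numpy as np

fronts = np.array([[1, -1, 1, -1, 1, -1],     # front parts (input layer)
                   [1, 1, -1, -1, -1, 1]])
rears = np.array([[1, 1, -1],                 # rear parts (output layer)
                  [-1, 1, 1]])

W = sum(np.outer(x, y) for x, y in zip(fronts, rears))  # 2. weight matrix

def sgn(v):
    return np.where(v >= 0, 1, -1)

x = np.array([1, -1, 1, -1, 1, 1])   # test vector: a front part, one bit flipped
for _ in range(10):                  # 5. repeat steps 3-4 until convergence
    y = sgn(x @ W)                   # 3. output vector at the rear part
    x = sgn(W @ y)                   # 4. vector computed back at the input layer
print(x, y)                          # settles on the first stored pair
```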
Examples
Applications in databases and signal processing
The connection between names and phone numbers – stored as vectors
Character recognition
Competitive Learning
An unsupervised learning model where the outputs are in "competition" for the inputs
During training, the output unit that provides the highest activation for a given input pattern is declared the winner and is moved closer to the input pattern, whereas the rest of the neurons are left unchanged
The strategy is also called winner-take-all, since only the winning neuron is updated
Network architecture
Output units may have lateral inhibitory connections, so that a winner neuron can inhibit the others by an amount proportional to its activation level

Algorithm
1. Normalize all input patterns – to get values between (0, 1)
2. Randomly select a pattern
2.a Find the winner neuron – the maximum value given by the activation function
2.b Update the winner neuron: $w^{*} \leftarrow w^{*} + \eta\,(x - w^{*})$
2.c Normalize the winner neuron
3. Go to step 2 until no changes occur
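A minimal NumPy sketch of the winner-take-all algorithm above; the unit count, the learning rate, and the data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((200, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # 1. normalize input patterns

n_units, eta = 4, 0.1
W = rng.random((n_units, 3))
W /= np.linalg.norm(W, axis=1, keepdims=True)

for _ in range(1000):
    x = X[rng.integers(len(X))]                 # 2. randomly select a pattern
    winner = np.argmax(W @ x)                   # 2.a highest activation wins
    W[winner] += eta * (x - W[winner])          # 2.b move the winner closer
    W[winner] /= np.linalg.norm(W[winner])      # 2.c normalize the winner
```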
Examples
In an international competition, competitive learning exists when only one student's goal is achieved – the winner of the 1st place – and all the other students fail to reach the goal
Consider bidding in the stock market: the stocks are the input, and each broker competes by bidding with a value; the most suitable output is the highest value!
Disadvantages
Difficult to understand for non-technical users
Generating rules from the neural networks is not straightforward
Input attribute values must be numeric
Network overfitting may occur
The learning phase may fail to converge