ELK ASIA PACIFIC JOURNAL OF COMPUTER SCIENCE AND INFORMATION SYSTEMS
ISSN: 2454-3047; ISSN: 2394-0441 (Online), Volume 1, Issue 2 (2015)
www.elkjournals.com
PERFORMANCE EVALUATION OF HOPFIELD ASSOCIATIVE MEMORY FOR COMPRESSED IMAGES
Manu Pratap Singh
Department of Computer Science, Institute of Engineering & Technology, Dr. B. R. Ambedkar University, Khandari, Agra (U. P.)

Dr. S. S. Pandey
Department of Mathematics & Computer Science, Rani Durgawati University, Jabalpur (M. P.), India

Vikas Pandey
Department of Mathematics & Computer Science, Rani Durgawati University, Jabalpur (M. P.), India
Abstract
This paper analyzes the performance of a Hopfield neural network for the storage and recall of compressed images. Images of different sizes are considered. These images are first compressed using the wavelet transform. The compressed images are then preprocessed and the feature vectors of these images are obtained. The training set consists of the pattern information of all the preprocessed and compressed images; each input pattern is of size 900 × 1. Each pattern of the training set is encoded into the Hopfield neural network using the hebbian and pseudo inverse learning rules. Once all the patterns of the training set are encoded, we simulate the performance of the trained neural network for noisy versions of the already encoded patterns; these noisy test patterns are also compressed and preprocessed images. The associative memory performance of the Hopfield neural network is analyzed in terms of successful and correct recall of the patterns in the form of the original compressed images. The simulated results show that the performance of the Hopfield neural network for pattern recall, and then for the reconstruction of the images, decreases once the noise or distortion in the original images exceeds 30%. It is also found that the Hopfield neural network fails to recall the correct images if the presented prototype input pattern of the original image contains more than 50% noise.

Keywords: Hopfield Neural Networks, Associative memory, Compressed Images storage & recalling, pattern storage networks
1. Introduction
Pattern storage and recall, i.e. pattern association, is one of the prominent pattern recognition tasks that one would like to realize using an artificial neural network (ANN) as an associative memory. Pattern storage is generally accomplished by a feedback network consisting of processing units with non-linear bipolar output functions.
The Hopfield neural network is a simple
feedback neural network (NN) which is able
to store patterns locally in the form of
connection strengths between the processing
units. This network can also work for the
pattern completion on the presentation of
partial information or prototype input
pattern. The stable states of the network
represent the memorized or stored patterns.
Since the Hopfield neural network with associative memory [1-2] was introduced, various modifications [3-10] have been developed for storing and retrieving memory patterns as fixed-point attractors. The dynamics of these networks have been studied extensively because of their potential applications [11-14]. The dynamics determine the retrieval quality of the associative memory for the already stored patterns. The pattern information is encoded in an unsupervised manner, as a sum of correlation weight matrices, in the connection strengths between the processing units of the feedback neural network, using only the locally available information of the pre- and post-synaptic units; the result is considered the final or parent weight matrix.
Neural network applications address problems in pattern classification, prediction, financial analysis, and control and optimization [15]. In most current applications, neural networks are best used as aids to human decision makers rather than as substitutes for them. Neural networks have also been designed to model the process of memory recall in the human brain [16]. Association in the human brain refers to the
phenomenon of one thought causing us to
think of another. Correspondingly,
associative memory is the function where
the brain is able to store and recall
information, given partial knowledge of the
information content [17].
Associative Memory is a dynamical system
which has a number of stable states with a
domain of attraction around them. If the
system starts at any state in the domain, it
will converge to the locally stable state,
which is called an attractor [18]. One such model, describing the organization of neurons in such a way that they function as an associative memory, also called a content-addressable memory, was proposed by J. J. Hopfield and is named after him as the Hopfield model. It is a fully connected neural network model in which patterns are stored by distributing them among the neurons, and a previously presented pattern can be retrieved from an example which is similar to it, or a noisy version of it [17, 18]. The network
associates each element of a pattern with a
binary neuron. The neurons are updated
asynchronously and in parallel. They are
initialized with an input pattern and the
network activation converges to the closest
learnt pattern [19]. This dynamical behavior
of the neurons strongly depends on the
synaptic strength between neurons. The
specification of the synaptic strength is
conventionally referred to as learning [20].
Learning employs one of a number of learning algorithms such as the perceptron, hebbian, pseudo inverse and LMS rules [21]. While choosing a learning algorithm, there are a number of considerations. The following considerations are used in this paper:
a) The maximum capacity of the
network.
b) The network’s ability to add patterns
incrementally to the network.
c) The network’s ability to correctly
recall patterns stored in the network.
d) The network’s ability to associate a
noisy pattern to its original pattern.
e) The network’s ability to associate a new pattern to its nearest neighbor.
Points d and e would also vary with the
number of patterns currently stored in the
network.
The simplest rule that can be used to train
a network is the hebbian rule but it suffers
from a number of problems such as:
a) The maximum capacity is limited to
just 0.14N, where N is the number of
neurons in the network [22].
b) The recall efficiency of the network
deteriorates as the number of
patterns stored in the network
increases [23].
c) The network’s ability to correct
noisy patterns is also extremely
limited and deteriorates with packing
density of the network.
d) New patterns could hardly be
associated to the stored patterns.
The next rule to be considered to overcome
the disadvantages of the hebbian rule was
the pseudo inverse learning rule. The
standard pseudo inverse rule is known to be
better than the hebbian rule in terms of the
capacity (N), recall efficiency and pattern
correction [24]. In this paper we consider images of different sizes. These images are first compressed using the wavelet transform. The compressed images are then preprocessed and the feature vectors of these images are obtained.
Therefore, for each image a pattern vector of size 900 × 1 is constructed. The training set consists of the pattern information of all the preprocessed and compressed images; each input pattern is of size 900 × 1. Each pattern of the training set is encoded into the Hopfield neural network using the hebbian and pseudo inverse learning rules. Once all the patterns of the training set are encoded, we simulate the performance of the trained neural network for noisy versions of the already encoded patterns. These noisy test patterns are also compressed and preprocessed images. The associative memory performance of the Hopfield neural network is analyzed in terms of successful and correct recall of the patterns in the form of the original compressed images. The simulated results show that the performance of the Hopfield neural network for pattern recall, and then for the reconstruction of the images, decreases once the noise or distortion in the original images exceeds 30%. It is also found that the Hopfield neural network fails to recall the correct images if the presented prototype input pattern of the original image contains more than 50% noise. This paper is organized as follows:
Section II provides a brief description of the
Hopfield network as associative memory
and its storage and update dynamics. Section
III elaborates the Pseudo inverse Rule, the
associated problems and measures to
overcome them. Section IV considers the
pattern formation for compressed images.
Section V contains the experiments whose
results have been compiled and discussed in
Section VI. Conclusions then follow in
section VII.
2. Hopfield Network as Associative
Memory
The proposed Hopfield model consists of N (900 = 30 × 30) neurons and N × N (900 × 900) connection strengths. Each neuron can be in one of two states, i.e. ±1, and L (= 9) bipolar patterns have to be memorized in the Hopfield neural network acting as an associative memory.
Hence, to store the L (= 9) patterns in this pattern storage network, the weight matrix W is usually determined by the Hebbian rule as follows:

$W = \sum_{l=1}^{L} x_{l}\, x_{l}^{T}$   (1)

or, $w_{ij} = \frac{1}{N} \sum_{l=1}^{L} a_{i}^{l}\, a_{j}^{l}$   (2)
and, $w_{ij}^{l} = \frac{1}{N}\, a_{i}^{l}\, a_{j}^{l}, \quad i, j = 1, 2, \ldots, N$   (3)

or, $w_{ij} = \frac{1}{N} \sum_{l=1}^{L} a_{i}^{l}\, a_{j}^{l} \;\; (i \neq j)$ and $w_{ii} = 0$   (4)
where $a_{i}^{l} \in \{+1, -1\}$ with $i = 1, 2, 3, \ldots, N$ and $l = 1, 2, 3, \ldots, L$, and $j \neq i$; L is the number of patterns to be memorized and N is the number of processing units. The network is initialized as:

$s_{i}^{l}(0) = a_{i}^{l}$ for all $i = 1$ to $N$   (5)
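To make the encoding concrete, the following is a minimal NumPy sketch of equation (4), assuming the L bipolar pattern vectors are stacked as the rows of an array; it is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def hebbian_weights(patterns: np.ndarray) -> np.ndarray:
    """Hebbian encoding of equation (4): w_ij = (1/N) * sum_l a_i^l a_j^l, with w_ii = 0."""
    L, N = patterns.shape
    W = (patterns.T @ patterns) / N      # sum of outer products over the L patterns, scaled by 1/N
    np.fill_diagonal(W, 0.0)             # no self-connections
    return W

# Example with L = 9 random bipolar placeholder patterns of N = 900 units.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(9, 900))
W = hebbian_weights(patterns)
```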
The activation value and output of every unit in the Hopfield model can be represented as:

$y_{i} = \sum_{j=1}^{N} w_{ij}\, s_{j}(t); \quad i, j = 1, 2, 3, \ldots, N, \; j \neq i$   (6)

and $s_{i}(t+1) = \mathrm{sgn}(y_{i})$   (7)

where $\mathrm{sgn}(y_{i}) = +1$ for $y_{i} \geq 0$ and $-1$ for $y_{i} < 0$, respectively.
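A minimal sketch of one asynchronous sweep of the dynamics in equations (6) and (7), assuming a weight matrix W and a bipolar state vector s; the convention sgn(0) = +1 follows the text.

```python
import numpy as np

def async_sweep(W: np.ndarray, s: np.ndarray, rng=None) -> np.ndarray:
    """Visit every unit once in random order and apply s_i <- sgn(sum_j w_ij s_j)."""
    if rng is None:
        rng = np.random.default_rng()
    s = s.copy()
    for i in rng.permutation(len(s)):
        y = W[i] @ s                    # activation value y_i of equation (6)
        s[i] = 1 if y >= 0 else -1      # output s_i(t+1) = sgn(y_i) of equation (7)
    return s
```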
Associative memory involves the
retrieval of a memorized pattern in response
to the presentation of some prototype input
patterns as the arbitrary initial states of the
network. These initial states have a certain
degree of similarity with the memorized
patterns and will be attracted towards them
with the evolution of the neural network. Hence, in order to memorize 9 scanned images in a 900-unit bipolar Hopfield neural network, there should be one stable state corresponding to each stored pattern. Thus, at the end, the memorized patterns should be fixed-point attractors of the network and must satisfy the fixed-point condition:
$y_{i}^{l} = s_{i}^{l} \sum_{j=1,\, j \neq i}^{N} w_{ij}\, s_{j}^{l}$   (8)

or, $y_{i}^{l} = s_{i}^{l} \sum_{j=1}^{N} w_{ij}\, s_{j}^{l}$, where $y_{i}^{l} > 0$   (9)
Therefore, the following activation dynamics equation must be satisfied to accomplish the pattern storage:

$f\!\left(\sum_{j=1,\, j \neq i}^{N} w_{ij}\, s_{j}^{l}\right) = s_{i}^{l}; \quad i, j = 1, 2, \ldots, N, \; j \neq i$   (10)
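A minimal sketch of checking the fixed-point condition (10) for a stored pattern: with zero self-connections, every unit's sign must be reproduced by its own net input. This is an illustrative check, not code from the paper.

```python
import numpy as np

def is_fixed_point(W: np.ndarray, pattern: np.ndarray) -> bool:
    """True if sgn(W @ pattern) reproduces the bipolar pattern at every unit."""
    y = W @ pattern
    recalled = np.where(y >= 0, 1, -1)
    return bool(np.array_equal(recalled, pattern))
```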
Let the pattern set be $P = \{x^{1}, x^{2}, \ldots, x^{L}\}$, where

$x^{1} = (a_{1}^{1}, a_{2}^{1}, \ldots, a_{N}^{1})$,
$x^{2} = (a_{1}^{2}, a_{2}^{2}, \ldots, a_{N}^{2})$,
$\vdots$
$x^{L} = (a_{1}^{L}, a_{2}^{L}, \ldots, a_{N}^{L})$   (11)

with $i = 1, 2, \ldots, 900\ (= N)$ and $l = 1, 2, \ldots, 9\ (= L)$.
Now, the initial weights have been considered as $w_{ij} \approx 0$ (near to zero) for all $i$'s and $j$'s. From the synaptic dynamics in vector form, we have the following equation for encoding the pattern information:

$W_{new} = W_{old} + X^{T} X$   (12)

and $W_{old} = W_{new}$   (13)

Similarly, for the $L$th pattern we have:

$W^{L} = W^{L-1} + (X^{L})^{T} X^{L}$   (14)
Thus, after the learning for all the patterns, the final parent weight matrix can be represented as:
$$W^{L} = \begin{pmatrix} 0 & \frac{1}{N}\sum_{l=1}^{L} a_{1}^{l} a_{2}^{l} & \frac{1}{N}\sum_{l=1}^{L} a_{1}^{l} a_{3}^{l} & \cdots & \frac{1}{N}\sum_{l=1}^{L} a_{1}^{l} a_{N}^{l} \\ \frac{1}{N}\sum_{l=1}^{L} a_{2}^{l} a_{1}^{l} & 0 & \frac{1}{N}\sum_{l=1}^{L} a_{2}^{l} a_{3}^{l} & \cdots & \frac{1}{N}\sum_{l=1}^{L} a_{2}^{l} a_{N}^{l} \\ \vdots & \vdots & \ddots & & \vdots \\ \frac{1}{N}\sum_{l=1}^{L} a_{N}^{l} a_{1}^{l} & \frac{1}{N}\sum_{l=1}^{L} a_{N}^{l} a_{2}^{l} & \cdots & & 0 \end{pmatrix} \quad (15)$$
Now, to represent $W^{L}$ in a convenient form, let us assume the following notations:

$S_{12} = \sum_{l=1}^{L} a_{1}^{l} a_{2}^{l}, \quad S_{13} = \sum_{l=1}^{L} a_{1}^{l} a_{3}^{l}, \quad \ldots, \quad S_{1N} = \sum_{l=1}^{L} a_{1}^{l} a_{N}^{l},$
$S_{21} = \sum_{l=1}^{L} a_{2}^{l} a_{1}^{l}, \quad S_{23} = \sum_{l=1}^{L} a_{2}^{l} a_{3}^{l}, \quad \ldots, \quad S_{2N} = \sum_{l=1}^{L} a_{2}^{l} a_{N}^{l},$
$\vdots$
$S_{N1} = \sum_{l=1}^{L} a_{N}^{l} a_{1}^{l}, \quad S_{N2} = \sum_{l=1}^{L} a_{N}^{l} a_{2}^{l}, \quad S_{N3} = \sum_{l=1}^{L} a_{N}^{l} a_{3}^{l}, \; \ldots$   (16)

i.e. $S_{ij} = \sum_{l=1}^{L} a_{i}^{l} a_{j}^{l}$ for $i \neq j$.
So that, from equations (15) and (16), we get:
$$W^{L} = \frac{1}{N} \begin{pmatrix} 0 & S_{12} & S_{13} & \cdots & S_{1N} \\ S_{21} & 0 & S_{23} & \cdots & S_{2N} \\ \vdots & \vdots & \ddots & & \vdots \\ S_{N1} & S_{N2} & S_{N3} & \cdots & 0 \end{pmatrix} \quad (17)$$
This square matrix is considered the parent weight matrix because it represents the partial or sub-optimal solution for pattern recall corresponding to the presented prototype input pattern vector. Hopfield suggested that the maximum limit for storage is $0.15N$ in a network with $N$ neurons, if a small error in recall is allowed. Later, this was theoretically calculated as $p = 0.14N$ by using the replica method [25]. Wasserman [26] showed that the maximum number of memories $m$ that can be stored in a network of $n$ neurons and recalled exactly is less than $cn^{2}$, where $c$ is a positive constant greater than one. It has been identified that the storage capacity strongly depends on the learning scheme. Researchers have proposed different learning schemes, instead of the Hebbian rule, to increase the storage capacity of the Hopfield neural network [27, 28]. Gardner showed that the ultimate capacity will be $p = 2N$ as a function of the size of the basin of attraction [29].
Pattern recall involves setting the initial state
of the network equal to an input vector ξi.
The states of the individual units are then
updated repeatedly until the overall state of
the network is stable. Updating of units may
be synchronous or asynchronous [30]. In the
synchronous update all the units of the
network are updated simultaneously and the
state of the network is frozen until update is
made for all the units. While in the
asynchronous update, a unit is selected at
random and its state is updated using the
current state of the network. This update via
random choice of a unit is continued until no
further change in the state takes place for all
the units i.e. the network reaches a stable
state. Each stable state of the network
corresponds to a stored pattern that has a
minimum hamming distance from the input
pattern [31]. Each stable state of the network is associated with an energy E, and hence that state acts as a point attractor. During the update the network moves from an initial high-energy state to the nearest attractor. All stable states which are similar to any of the ξi of the
training set are called Fundamental
Memories. Apart from them there are other
stable states, including inverses of
fundamental memories. The number of such
fundamental memories and the nature of
additional stable states depend upon the
learning algorithm that is employed.
3. Pseudo inverse Rule
In the Hopfield network, we can use the pseudo inverse learning rule to encode the pattern information even if the pattern vectors are non-orthogonal. It provides a more efficient method for learning in feedback neural network models. The pseudo inverse weight matrix is given by

$W = \Xi\, \Xi^{-1}$   (18)

where $\Xi$ is the matrix whose rows are the $\xi^{n}$ and $\Xi^{-1}$ is its pseudo inverse, i.e. the matrix with the property that $\Xi^{-1} \Xi = I$ [32].
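A minimal sketch of the pseudo inverse rule, assuming the stored patterns are stacked as the rows of Ξ. np.linalg.pinv computes the Moore-Penrose pseudo inverse written as Ξ⁻¹ in equation (18); the factors are ordered here so that W comes out N × N (the projection form), which is our reading of the equation rather than a statement from the paper.

```python
import numpy as np

def pseudo_inverse_weights(Xi: np.ndarray) -> np.ndarray:
    """Projection-form pseudo inverse rule: W = pinv(Xi) @ Xi is N x N and satisfies
    W @ x = x for every stored pattern x (row of Xi), even for non-orthogonal patterns."""
    return np.linalg.pinv(Xi) @ Xi
```

Because W projects onto the span of the stored patterns, every stored pattern is exactly reproduced by one pass of the retrieval dynamics, which is the property the rule is valued for here.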
The pseudo inverse rule is neither local
nor incremental as compared to the hebbian
rule. This means that the update of a
particular connection does not depend on the
information available on either side of the
connection and also patterns cannot be
incrementally added to the network. These
problems can be solved by modifying the
rule in such a way that some characteristics
of hebbian learning are also incorporated
such that locality and incrementality are ensured. The hebbian rule is given as:
$W_{ij} = \frac{1}{N} \sum_{l=1}^{L} \xi_{li}\, \xi_{lj}$ for $i \neq j$   (19)
$W_{ij} = 0$ for $i = j$, $1 \leq i, j \leq N$
where,
N is the number of units/neurons in the
network
$\xi^{l}$ for $l = 1$ to $L$ are the vectors / images to be stored, where each component of $\xi^{l}$ is bipolar, i.e. each $\xi_{li} = \pm 1$ for $i = 1$ to $N$.
Now the pseudo inverse of the weight matrix can be calculated as

$W_{pinv} = W^{T} (W\, W^{T})^{-1}$   (20)

where $W^{T}$ is the transpose of the weight matrix $W$ and $(W\, W^{T})^{-1}$ is the inverse of the product of $W$ and its transpose.
This method will overcome the locality
and incrementality problems associated with
the pseudo inverse rule. In addition it has the
benefits of the pseudo inverse rule in terms
of the storage capacity and recall efficiency
over the hebbian rule.
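A hedged, direct transcription of equation (20). Because W W^T can be singular for correlated patterns, np.linalg.pinv is used below in place of a plain matrix inverse; that substitution is a safeguard added here, not something stated in the paper.

```python
import numpy as np

def wpinv_from_hebbian(W: np.ndarray) -> np.ndarray:
    """Equation (20): W_pinv = W^T (W W^T)^{-1}, with a pseudo inverse guarding against singularity."""
    return W.T @ np.linalg.pinv(W @ W.T)
```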
Pattern recall refers to the identification and
retrieval of the corresponding image when
an image is presented as input to the
network. As soon as an image is fed as input
to the network, the network starts updating
itself. In the current paper, we use
asynchronous update of the network units to
find their new states. This update via
random choice of a unit is continued until no
further change in the state takes place for all
the units. That is, the state at time (t+1) is
the same as the state at time t for all the
units.
$s_{i}(t+1) = s_{i}(t)$ for all $i$   (21)
Such a state is referred to as the stable state.
In a stable state the output of the network
will be a stable (trained) pattern that has a
minimum hamming distance from the input
pattern [31]. The network is said to have
converged and recalled the pattern if the
output matches the pattern presented
initially as input. For pattern association, the
patterns stored in an associative memory act
as attractors and the largest hamming
distance within which almost all states flow
to the pattern is defined as the radius of the
basin of attraction [32].
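A small helper for the Hamming-distance criterion used above; it simply counts the positions at which two bipolar vectors disagree.

```python
import numpy as np

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Number of components in which the two bipolar vectors differ."""
    return int(np.sum(a != b))
```

Recall is then judged successful when the converged state has Hamming distance zero from one of the stored patterns.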
Each state of the Hopfield Network is
associated with an energy value, whose
value either reduces or remains the same as
the state of the network changes [33]. The
energy function of the network is given by
$V = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} w_{ij}\, s_{i}\, s_{j}$   (22)
Hopfield has shown that for symmetric
weights with no self feedback connections
and bipolar output functions, the dynamics
of the network using asynchronous update
always lead towards energy minima at
equilibrium. The network states
corresponding to these energy minima are
termed as stable states [34] and the network
uses each of these stable states for storing
individual patterns.
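A minimal sketch of evaluating the energy of equation (22) for a given state; the threshold terms are omitted since the threshold is taken as zero in Section 5.

```python
import numpy as np

def energy(W: np.ndarray, s: np.ndarray) -> float:
    """V = -1/2 * sum_ij w_ij s_i s_j for a bipolar state vector s."""
    return float(-0.5 * s @ W @ s)
```

Printing this value after every asynchronous sweep is a simple way to confirm that the updates never increase the energy.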
4. Pattern formation using Image
Preprocessing techniques
Pattern formation is an essential step for realizing the associative feature in the Hopfield neural network model. Hence, to construct the pattern information needed to encode the patterns, preprocessing steps are required. Preprocessing, in the form of
image enhancement, of the images is
required to convert the images into suitable
patterns for storage in the Hopfield
Network. The term image enhancement
refers to making the image clearer for easy
further operations. The images considered
for the study are the images of the
impressions of different individuals. The
images are not of perfect quality to be
considered for storage in a network. Hence
enhancement methods are required to reveal
the fine details of the images which may
remain uncovered due to insufficient ink or
imperfect impressions. The enhancement
methods would increase the contrast
between image components and connect the
broken or incomplete image components.
The images were first scanned as greyscale images and then transformed with the wavelet transform to retain the fine details in the images. The images were then subjected to binarization.
Binarization refers to conversion of a
grayscale image to black and white image.
Typically binarization converts an image of
up to 256 gray levels to a black and white
image as shown in figure 1 (a) and (b).
Fig 1: (a) Original Greyscale Images
Fig 1: (b) Binary Images
After attaining binarization, the need was to
convert the binary image to bipolar image,
since Hopfield networks work best with
bipolar data. A bipolar image is one where
each pixel has value either +1 or -1. Hence,
all pixel values are verified and those with
value 0 are converted to -1, thus converting
the binary image to bipolar image. Finally
the image is converted to bipolar vectors.
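A minimal sketch of the binarization and bipolar-conversion steps described above, assuming the (already compressed) image is available as a 30 × 30 greyscale NumPy array; the fixed threshold of 128 is an illustrative assumption, not a value taken from the paper.

```python
import numpy as np

def image_to_bipolar_vector(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Greyscale -> binary -> bipolar -> flattened 900-element pattern vector."""
    binary = (gray >= threshold).astype(int)     # binarization: up to 256 grey levels -> {0, 1}
    bipolar = np.where(binary == 0, -1, 1)       # pixels with value 0 become -1, the rest +1
    return bipolar.reshape(-1)                   # a 30 x 30 image becomes a 900 x 1 pattern vector
```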
All the image vectors are stored into a
comprehensive matrix which has the
following structure.
$$P = \begin{bmatrix} a_{1}^{1} & a_{1}^{2} & \cdots & a_{1}^{9} \\ a_{2}^{1} & a_{2}^{2} & \cdots & a_{2}^{9} \\ \vdots & \vdots & & \vdots \\ a_{900}^{1} & a_{900}^{2} & \cdots & a_{900}^{9} \end{bmatrix} \quad (23)$$
Equation (23) represents the training set. This training set is used to encode the pattern information of all nine preprocessed images into the Hopfield neural network.
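A minimal sketch of assembling the training matrix P of equation (23): the nine 900-element bipolar image vectors become the columns of a 900 × 9 array. The column-wise layout matches our reading of equation (23).

```python
import numpy as np

def build_training_set(vectors) -> np.ndarray:
    """Stack the preprocessed bipolar image vectors column-wise into the training matrix P."""
    P = np.column_stack(vectors)    # shape (900, 9) for nine 900-element patterns
    return P
```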
5. Implementation detail and experiment
design
The patterns in the form of bipolar vectors
created in section 4 were then stored in the
Hopfield network via the following
algorithm separately for hebbian and pseudo
inverse rules in separate weight matrices.
Algorithm: Pattern Storage and
Recalling
The algorithm for pattern recall in a
Hopfield Neural Network storing L patterns
is as follows:
The algorithm would be implemented both
for Hebbian and Pseudo inverse rules and
results would be recorded. Assume a pattern
x, of size N, already stored in the network
and modified by altering the values of k
randomly chosen pixels. Also, assume
vectors ynew to store the new states of the
network. Consider a variable count
initialized to value 1.
1. Initialize weights to store patterns
(Use Hebbian and Pseudo inverse
Rule) as per the equations 14 and 20
respectively for Pattern Storage.
While the activations of the net are not converged, perform steps 2 to 7.
2. For each input vector x, repeat steps
3 to 7.
3. Set initial activations of the net equal
to the external input vector x, yi = xi
(i=1 to n).
4. Perform steps 5 to 7 for each unit yi.
5. Compute the net input: $Y_{in,j} = x_{j} + \sum_{i \neq j} Y_{i}\, W_{ij}$
6. Determine the activation (output signal):
$Y_{j} = \begin{cases} +1 & \text{if } Y_{in,j} \geq 0 \\ -1 & \text{if } Y_{in,j} < 0 \end{cases}$
Broadcast the value of $Y_{j}$ to all other units.
7. Test for convergence as per equation (21).
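A minimal end-to-end sketch of steps 1–7, assuming the weight matrix W has already been built with either rule (equation (14) or (20)). The external input x is kept in the net-input term, as in step 5, and convergence is declared when a full random-order sweep changes no unit; the sweep limit is an illustrative assumption.

```python
import numpy as np

def recall(W: np.ndarray, x: np.ndarray, max_sweeps: int = 100, seed: int = 0) -> np.ndarray:
    """Initialise the activations with the input vector x and update asynchronously until stable."""
    rng = np.random.default_rng(seed)
    y = x.copy()                                    # step 3: initial activations equal the input vector
    for _ in range(max_sweeps):
        prev = y.copy()
        for j in rng.permutation(len(y)):           # step 4: visit units in random order
            net = x[j] + W[j] @ y                   # step 5: net input  Y_in,j = x_j + sum_i Y_i W_ij
            y[j] = 1 if net >= 0 else -1            # step 6: bipolar activation with threshold 0
        if np.array_equal(y, prev):                 # step 7: no unit changed, so the net has converged
            return y
    return y
```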
The following parameters are used to encode
the sample patterns of training set.
Table 1: Parameters used for the Hebbian and pseudo inverse learning rules

Parameter | Value
Initial state of neurons | Randomly generated values, either -1 or +1
Threshold values of neurons | 0.00
The value of threshold θ is assumed to be
zero. Each unit is randomly chosen for
update.
The maximum number of patterns
successfully recalled by the above procedure
is a pointer to the maximum storage capacity
of the Hopfield Network, which is further
discussed in the results. Further, the recall efficiency for noisy patterns is also determined, i.e. up to what percentage of error in the patterns is acceptable to the network such that convergence to the original pattern still occurs.
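A minimal sketch of the noise experiment just described: flip k randomly chosen pixels of a stored pattern, run a recall routine (for example the one sketched above, passed in as recall_fn), and count how often the original pattern is recovered. The patterns are assumed to be stacked row-wise; the trial count and noise fraction are illustrative choices.

```python
import numpy as np

def add_noise(pattern: np.ndarray, k: int, rng) -> np.ndarray:
    """Return a copy of the bipolar pattern with k randomly chosen pixels flipped."""
    noisy = pattern.copy()
    idx = rng.choice(len(pattern), size=k, replace=False)
    noisy[idx] *= -1
    return noisy

def recall_efficiency(W, patterns, recall_fn, noise_fraction=0.3, trials=20, seed=0) -> float:
    """Fraction of trials in which a noisy stored pattern converges back to its original."""
    rng = np.random.default_rng(seed)
    k = int(noise_fraction * patterns.shape[1])
    hits = 0
    for _ in range(trials):
        p = patterns[rng.integers(len(patterns))]
        hits += int(np.array_equal(recall_fn(W, add_noise(p, k, rng)), p))
    return hits / trials
```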
6. Results and Discussion
The Hopfield network has the ability to
recognize unclear pictures correctly. This
means that the network can recall actual
pattern when the noisy or partial clues of
that pattern are presented to the network. It
is known and has been shown [18] that
Hopfield networks converge to the original
patterns if up to 40% distorted version of a
stored pattern is presented. The patterns are
stored in the network in the form of
attractors on the energy surface. The
network can then be presented with either a
portion of one of the images (partial cue) or
an image degraded with noise (noisy cue)
and through multiple iterations it will
attempt to reconstruct one of the stored
images.
Fig 1: (c) Recall images
All the patterns after preprocessing as
depicted in Section 4 were converted into
bipolar vectors ready for storage into the
Network. As per the above discussed
algorithm, the patterns were input one by
one into the Hopfield Network first by
hebbian rule and then by pseudo inverse
rule. The weight matrices for both are 900 × 900 symmetric matrices.
The storage capacity of a neural network
refers to the maximum number of patterns
that can be stored and successfully recalled
for a given number of nodes, N. The
Hopfield network is limited in storage
capacity to 0.14N when trained with hebbian
rule [35, 36, 37], but the capacity increases to N with the pseudo inverse rule. Experiments
were conducted to check the same and the
network was able to store and perfectly
recall 0.14N i.e. 126 patterns with hebbian
rule and N i.e. 900 patterns with pseudo
inverse rule. Thus the critical storage
capacity for the Hopfield Network comes
out to be 0.14N with hebbian and N with
pseudo inverse rule without any error in
pattern recall, for the current study.
Figures 2 (a) and 2 (b) represent the distorted and noisy forms of the already encoded images. These distorted images are preprocessed and presented as prototype input pattern vectors to the trained Hopfield neural network for recall. Similarly, figures 2 (c) and 2 (d) show the noisy forms of the images compressed with the wavelet, Fourier and DCT transformations; these are presented to the Hopfield network for recall of the corresponding correct images.
Fig 2 (a) Error images
Fig 2 (b) noisy images
Fig 2 (c) compressed (noise) images wavelets
Fig 2 (d) compressed (noise) images Fourier transform
Fig 2 (d) compressed (noise) images DCT
Figure 2 (e) shows the binarization of the noisy images of 2 (a) to 2 (d). These images are again represented in the form of 900 × 1 pattern vectors, which are presented to the Hopfield network as prototype input pattern vectors. The Hopfield neural network produces a recalled image corresponding to each presented input pattern vector. Figure 2 (f) shows the recalled images. It can be seen that 5 out of 9 images are the same as the memorized binary images, but 4 out of 9 images are not correctly recalled; these recalled images contain some amount of error.
Fig 2(e) Noisy Binary Images
Fig 2(f) Recall images
7. Conclusion
It was observed that the network performs sufficiently well for the images compressed with the wavelet transform, DCT and Fourier transformations. Further, it has been observed that the network’s efficiency starts deteriorating as the network gets saturated. The performance of the network deteriorated with 80 patterns for the hebb rule and 130 patterns for the pseudo inverse rule. This result can be attributed to the reduction of the Hamming distance between the stored patterns and the consequent reduction of the radius of the basin of attraction of each stable state. Hence only a few patterns could settle into the stable states of their original patterns. The following points are observed from the experimental results:
1. For all 9 images the recall is correct if any one of the original images is presented as the prototype input pattern for recall. The behavior with distorted patterns is similar for both rules, i.e. up to 40% distortion the same pattern is associated, but at 50% distortion some other stored pattern or the nearest neighbor is associated. New patterns are not recognized by the hebbian rule, but with the pseudo inverse rule they are associated with some stored pattern.
2. The performance of the Hopfield neural network is found to be better for the images compressed with the wavelet transform, DCT and Fourier transformations.
3. The patterns are correctly recalled even with 50% noise in the images compressed with the wavelet transform, DCT and Fourier transformations.
4. The Hopfield neural network exhibits the associative memory phenomenon correctly for a small number of patterns, but its performance starts to deteriorate as more images are stored.
5. It can quite plainly be verified that the performance of the Hopfield neural network for pattern storage and recall depends heavily on the method which is used for feature extraction from the given image.
6. It is also observed that compressing the images with the wavelet transform, DCT and Fourier transformations provides more effective features for the construction of the pattern information.
The performance of the Hopfield neural network can also be analyzed for a larger number of images with more sophisticated methods of feature extraction. The performance of the Hopfield neural network for pattern recall can be further improved with the use of evolutionary algorithms.
8. References
[1] Hopfield, J. J., “Neural Networks
and Physical Systems with Emergent
Collective Computational Abilities”,
Proceedings of the National
Academy Sciences, USA, 79,
pp.2554 – 2558, (1982).
[2] Hopfield, J. J., “Neurons with Graded Response Have Collective Computational Properties Like Those of Two-State Neurons”, Proceedings of the National Academy of Sciences, USA, 81, pp. 3088 – 3092, (1984).
[3] Amit, D. J., Gutfreund, H., and
Sompolinsky, H., “Storing Infinite
Number of Patterns in a Spin-glass
Model of Neural Networks”,
Physical Review Letters, vol. 55(14),
pp. 461-482, (1985).
[4] Amit, D. J., “Modeling Brain
Function: The World of Attractor
Neural Networks”, Cambridge
University Press New York, NY,
USA, (1989).
[5] Haykin, S., “Neural Networks: A
Comprehensive Foundation”, Upper
Saddle River: Prentice Hall, Chap
14, pp. 64, (1998).
[6] Zhou, Z., and Zhao, H.,
“Improvement of the Hopfield
Neural Network by MC-Adaptation
Rule”, Chin. Phys. Letters, vol.
23(6), pp. 1402-1405.
[7] Zhao, H., “Designing Asymmetric
Neural Networks with Associative
Memory”, Physical Review, vol.
70(6) 066137-4.
[8] Kawamura, M., and Okada, M.,
“Transient Dynamics for Sequence
Processing Neural Networks”, J.
Phys. A: Math. Gen., vol. 35 (2), pp.
253, (2002).
[9] Amit, D. J., “Mean-field Ising Model
and Low Rates in Neural Networks”,
Proceedings of the International
Conference on Statistical Physics, 5-
7 June Seoul Korea, pp. 1-10,
(1997).
[10] Imada, A., and Araki, K., “Genetic
Algorithm Enlarges the Capacity of
Associative Memory”, Proceedings
of the sixth International Conf. on
Genetic Algorithms, pp. 413 – 420,
(1995).
[11] Hopfield, J. J. and Tank, D. W.,
“Neural Computation of Decisions in
Optimization Problems”, Biological
Cybernetics, vol. 52 (3), pp. 141-
152, (1985).
[12] Tank, D. W. and Hopfield, J. J.,
“Simple Neural Optimization
Networks: An A/D Converter, Signal
Decision Circuit, and a Linear
Programming Circuit”, IEEE Trans.
Circuits and Syst., vol. 33(5), pp.
533-541, (1986).
[13] Jin, T. and Zhao, H., “Pattern
Recognition using Asymmetric
Attractor Neural Networks”, Phys.
Rev., vol. E 72(6), pp. 066111-7,
(2005).
[14] Kumar, S. and Singh, M. P., “Pattern
Recall Analysis of the Hopfield
Neural Network with a Genetic
Algorithm”, Computers and
Mathematics with Applications, vol.
60(4), pp. 1049-1057, (2010).
[15] Paliwal, M. and Kumar, U. A.,
“Review: Neural networks and
statistical techniques”, A review of
applications, Expert Systems with
Applications, vol. 36(1), pp. 2-17,
(2009).
[16] Tarkowski W., Lewenstein M.,
Nowak A., “Optimal Architectures
for Storage of Spatially Correlated
Data in Neural Network Memories”,
ACTA Physica Polonica B, Vol. 28,
No.7, pp 1695 – 1705, (1997).
[17] Takasaki K., “Critical Capacity of
Hopfield Networks”, MIT
Department of Physics, (2007).
[18] Ramachandran R., Gunusekharan N.,
“Optimal Implementation of two
Dimensional Bipolar Hopfield
Model Neural Network”, Proc. Natl.
Sci. Counc. ROC (A), Vol. 24(1), pp
73 – 78 (2000).
[19] McEliece, R. J., Posner, E. C.,
Rodemich, E. R. and Venkatesh, S.
S., “The capacity of the Hopfield
associative memory”, IEEE Trans
Information Theory IT-33 4, pp.
461-482, (1987).
[20] Ma J., “The Object Perceptron
Learning Algorithm on Generalized
Hopfield Networks for Associative
Memory”, Neural Computing and
Applications, Vol. 8, pp. 25 – 32
(1999).
[21] Atithan G., “A Comparative Study of
Two Learning rules for Associative
Memory”, PRAMANA – Journal of
Physics, Vol. 45, No. 6, pp 569 –
582 (1995).
[22] Pancha G., and Venkatesh S.,
“Feature and Memory-Selective
Error Correction in Neural
Associative Memory”, in M. H.
Hassoun eds. Associative Neural
Memories: Theory and
Implementation, Oxford University
Press, pp.-225-248, (1993).
[23] Abbott L. F., Arian Y., “Storage
Capacity of Generalized Networks”,
Rapid Communications, Physical
Review A, Vol. 36, No. 10, pp 5091
– 5094 (1987).
[24] Tarkowski W., Lewenstein M.,
Nowak A., “Optimal Architectures
for Storage of Spatially Correlated
Data in Neural Network Memories”,
ACTA Physica Polonica B, Vol. 28,
No.7, pp 1695 – 1705 (1997).
[25] Amit, D. J., Gutfreund, H., and
Sompolinsky, H., “Storing Infinite
Number of Patterns in a Spin-glass
Model of Neural Networks”,
Physical Review Letters, vol. 55(14),
pp. 461-482, (1985).
[26] Wasserman, P. D., “Neural
Computing: theory and practice”,
Van Nostrand Reinhold Co., New
York, NY, USA, (1989).
[27] Kohonen, T. and Ruohonen, M.
“Representation of Associated Data
by Matrix Operators”, IEEE Tran
Computers, vol. C-22(7), pp. 701-
702, (1973).
[28] Pancha, G. and Venkatesh, S. S.,
“Feature and Memory-Selective
Error Correction in Neural
Associative Memory”, Associative
Neural Memories: Theory and
Implementation, M. H. Hassoun, ed.,
Oxford University Press, pp. 225-
248, (1993).
[29] Gardner, E., “The Phase Space of
Interactions in Neural Networks
Models”, Journal of Physics, vol. 21
A, pp. 257-270, (1988).
[30] Yegnanarayana B., “Artificial Neural
Networks”, PHI, (2005).
[31] Vonk E, Veelenturf L. P. J, Jain L.
C., “Neural Networks:
Implementations and Applications”,
IEEE AES Magazine, (1996).
[32] Perez C. J., Vicente, “Hierarchical
Neural Network with High Storage
Capacity”, Physical Review A, Vol.
40(9), 5356 – 5360 (1989).
[33] Streib F. E, “Active Learning in
Recurrent Neural Networks
Facilitated by a Hebb-like Learning
Rule with Memory”, Neural
Information Processing – Letters and
Reviews, 9(2), 31 – 40 (2005).
[34] Hopfield J. J., “Neural Networks and
Physical Systems with emergent
Collective Computational Abilities”,
PNAS, Vol. 79, pp 2554 -2558
(1982).
[35] Ma J., “The Object Perceptron
Learning Algorithm on Generalized
Hopfield Networks for Associative
Memory”, Neural Computing and
Applications, Vol. 8, pp. 25 – 32
(1999)
[36] Jankowski S., Lozowski A., Zurada
J. M., “Complex Valued Multistate
Neural Associative Memory”, IEEE
Transactions on Neural Networks,
7(4), 1491 – 1496 (1996).
[37] Meyder A., Kiderlen C.,
“Fundamental Properties of Hopfield
Networks and Boltzmann Machines
for Associative Memories”, Machine
Learning (2008).