The Pulse Coupled Neural Network



CHAPTER I

INTRODUCTION

The retina is the third and innermost coat of the eye and is a light-sensitive layer of tissue. It is a layered structure, with several layers of neurons interconnected by synapses. The only neurons that are directly sensitive to light are the photoreceptor cells, which are mainly of two types: rods and cones. Rods function mainly in dim light and provide black-and-white vision, while cones support daytime vision and the perception of color [1]. The optic disc, a part of the retina sometimes called the blind spot because it lacks photoreceptors, is located at the optic papilla, a nasal zone where the optic-nerve fibers leave the eye. It appears as an oval white area of about 3 mm². Near the center of the retina is the fovea, a pit responsible for sharp central vision; it is actually less sensitive in dim light because it lacks rods. In adult humans, the retina covers approximately 72% of a sphere about 22 mm in diameter, and it contains about 7 million cones and 75 to 150 million rods [1]. The central retina is cone-dominated and the peripheral retina is rod-dominated.

Figure 1.1. A normal human retina.

Veins are darker and slightly wider than corresponding arteries. The optic disc is at left, and the macula lutea is near the centre.

1.2 Blood vessel segmentation

The retinal vasculature is composed of arteries and veins, which appear as elongated features with their tributaries visible within the retinal image. Vessel widths range from one pixel to twenty pixels, depending on both the physical width of the vessel and the image resolution. Vessel cross-sectional intensity profiles approximate a Gaussian shape, or a mixture of Gaussians when a central vessel reflex is present. The orientation and grey level of a vessel do not change abruptly; vessels are locally linear and change gradually in intensity along their lengths. Vessels can be expected to be connected and, in the retina, to form a binary treelike structure.

1.3 Optic disk segmentation

Localization and segmentation of the optic disc are very important in many computer-aided diagnosis systems, including glaucoma screening. Localization focuses on finding a disc pixel, very often the center.

1.4 Retinal photography

Retinal photography requires a complex optical system called a fundus camera: a specialized low-power microscope with an attached camera, capable of simultaneously illuminating and imaging the retina. It is designed to image the interior surface of the eye, including the retina, optic disc, macula, and posterior pole. The fundus camera normally operates in three modes. In color photography, the retina is examined in full color under white-light illumination. In red-free photography, the imaging light is filtered to remove red wavelengths, improving the contrast of the vessels and other structures. Fluorescent angiograms are acquired using a dye-tracing method: sodium fluorescein or indocyanine green is injected into the bloodstream, and the angiogram is obtained by photographing the fluorescence emitted after illuminating the retina with blue light at a wavelength of 490 nanometers.

1.5 DRIVE database

DRIVE (Digital Retinal Images for Vessel Extraction) is a publicly available database consisting of 40 color fundus photographs. The photographs were obtained from a diabetic retinopathy screening program in the Netherlands, whose screening population consisted of 453 subjects between 31 and 86 years of age. Each image has been JPEG compressed, which is common practice in screening programs. Of the 40 images in the database, 7 contain pathology, namely exudates, hemorrhages, and pigment epithelium changes. The images were acquired using a Canon CR5 non-mydriatic 3-CCD camera with a 45° field of view (FOV). Each image is captured using 8 bits per color plane at 768×584 pixels. The FOV of each image is circular with a diameter of approximately 540 pixels. The set of 40 images was divided into a test set and a training set, each containing 20 images. Three observers, the first and second author and a computer science student, manually segmented a number of images; all observers were trained by an experienced ophthalmologist. The first observer segmented 14 images of the training set, while the second observer segmented the other 6 images.

Retinal vessel segmentation is a very time-consuming task. Much research is ongoing on segmentation, and several algorithms have been proposed. The segmentation algorithms can be divided into supervised methods, unsupervised methods, vessel tracking, morphological processing, multi-scale approaches, vessel profile models, and deformable models. This project proposes a more effective way to segment the blood vessels and the optic disk based on a pulse coupled neural network (PCNN).

1.6 PROBLEM STATEMENT

The supervised methods need manual intervention, which introduces a chance of inaccuracy. Vessel-profile-based approaches separate fine vessels from major vessels, but may be confused when a microaneurysm overlaps with narrow blood vessels. MRF-based segmentation fails on severely damaged retinal images, and the modified line operator fails when segmenting areas darker than the retinal background.

1.7 OBJECTIVE

To segment the blood vessels and detect the optic nerve head without manual intervention in fundus photographs of the DRIVE database.


CHAPTER II

PROPOSED OPTICAL DISK AND BLOOD VESSEL SEGMENTATION SYSTEM

Segmentation is a partitioning of an image into constituent parts using attributes such as pixel intensity, spectral values, and textural properties. Segmentation produces an image representation in terms of boundaries and regions of various shapes and interrelationships. Image segmentation to a large extent depends on suitable threshold values that produce binary images with neither under-segmentation nor over-segmentation [8]. The results of the image segmentation process are further used for object recognition and image analysis.
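The sensitivity of binary segmentation to the chosen threshold can be illustrated with a minimal sketch (Python/NumPy; the toy intensities and threshold values are hypothetical, not taken from the report):

```python
import numpy as np

def binary_threshold(img, t):
    """Return a binary segmentation: 1 where intensity exceeds t, else 0."""
    return (img > t).astype(np.uint8)

# A toy 1-D "intensity profile": background around 50, object around 200
img = np.array([50, 52, 198, 201, 49, 203], dtype=float)

low  = binary_threshold(img, 30)   # threshold too low: every pixel is labeled object
good = binary_threshold(img, 120)  # separates the object from the background
```

A threshold below the background level merges everything into one region (over-segmentation of the foreground), while a well-placed threshold recovers the object pixels only.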

The proposed system focuses on achieving automatic segmentation of the blood vessels and the optic nerve disk in retinal images. The heart of the proposed system is the pulse coupled neural network (PCNN). PCNN is a two-dimensional, high-performance neural network used for image processing applications such as image segmentation, image denoising, feature extraction and pattern recognition [8].

2.1 Proposed Architecture for blood vessel and optic disk segmentation

Figure 2.1 Proposed Architecture for blood vessel and optic disk segmentation

2.2 METHODOLOGY

The segmentation of blood vessel and optic disk by means of PCNN is explained below.

2.2.1 PCNN Model and the Theory of Image Segmentation

2.2.1.1 PCNN Model


The Pulse Coupled Neural Network (PCNN) is a relatively new ANN model with great potential in the area of image processing. The PCNN is derived from a neural model of the mammalian visual system [ ]. The PCNN can be used in various image processing tasks: edge detection, segmentation, feature extraction, and image filtering [6].

The pulse coupled neural network is a two-dimensional neural network composed of pulse coupled neurons. The PCNN neuron model consists of three parts: the dendritic tree, the linking, and the pulse generator [4]. The dendritic tree includes two regions of the neuron element: the linking and the feeding. Neighborhood information is incorporated through the linking, and the input information is obtained through the feeding. The pulse generator compares the internal activity (the feeding modulated by the linking) with a dynamic threshold to decide whether the neuron element fires. The model used here is the standard simplified PCNN model [6], whose discrete equations are:

F_ij[n] = I_ij                                      (1)

L_ij[n] = Σ_kl W_ijkl · Y_kl[n-1]                   (2)

U_ij[n] = F_ij[n] · (1 + β · L_ij[n])               (3)

θ_ij[n] = exp(-α_θ) · θ_ij[n-1] + v_θ · Y_ij[n-1]   (4)

Y_ij[n] = 1 if U_ij[n] > θ_ij[n], 0 otherwise       (5)

where (i, j) gives the position of the neuron and n is the current iteration. I_ij is the input stimulus (in image processing, I_ij is usually the gray value of the pixel in row i and column j of the image); F_ij, L_ij, U_ij and θ_ij are the feeding input, linking input, internal activity, and dynamic threshold of the neuron, respectively. W is the linking weight matrix, α_θ is the decay time constant of the dynamic threshold, v_θ is its magnitude scaling term, β is the linking strength, and Y_ij is the binary output of the PCNN model.
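As a minimal illustration, equations (1)-(5) can be sketched in Python/NumPy (the report's own implementation is in MATLAB; the tiny test image and the α_θ and v_θ values below are assumed for the example):

```python
import numpy as np

def pcnn_step(I, Y, theta, W, beta=0.4, alpha_theta=0.3, v_theta=20.0):
    """One iteration of the simplified PCNN of Eqs. (1)-(5).
    I: input image, Y: previous binary output, theta: dynamic threshold."""
    # Eq. (2): linking input = weighted sum of previous outputs in a 3x3 neighborhood
    Yp = np.pad(Y, 1)
    L = np.zeros_like(I, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            L += W[di + 1, dj + 1] * Yp[1 + di : 1 + di + I.shape[0],
                                        1 + dj : 1 + dj + I.shape[1]]
    F = I.astype(float)                      # Eq. (1): feeding = input stimulus
    U = F * (1.0 + beta * L)                 # Eq. (3): internal activity
    # Eq. (4): threshold decays, then jumps by v_theta where the neuron fired last step
    theta = np.exp(-alpha_theta) * theta + v_theta * Y
    Y_new = (U > theta).astype(np.uint8)     # Eq. (5): fire where U exceeds theta
    return Y_new, theta

W = np.array([[0.07, 0.1, 0.07],
              [0.1,  0.0, 0.1 ],
              [0.07, 0.1, 0.07]])
I = np.array([[10, 10, 200],
              [10, 200, 200],
              [10, 10, 10]])
Y = np.zeros_like(I, dtype=np.uint8)
theta = np.full(I.shape, 255.0)
Y, theta = pcnn_step(I, Y, theta, W)  # thresholds decay below 200: bright pixels fire
Y, theta = pcnn_step(I, Y, theta, W)  # they fire again; linking boosts U at neighbors
```

With all thresholds starting at 255, the first step leaves only the bright (200-valued) pixels above threshold; the dim pixels stay silent until their thresholds decay much further.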

2.2.2 Pulse coupled neural network

The mammalian visual system is more complicated and efficient than computer-based image processing and recognition systems. In mammalian systems, decisions on the content of an image are made after performing many operations in the visual cortex. This processing capability cannot be achieved in computer-based systems using simple processing algorithms. However, it can be approached by computer algorithms that are capable of emulating the processes of the mammalian visual cortex [3].

Eckhorn's neuron model, developed around 1990, is based on the experimental observation of synchronous pulse bursts in the cat and monkey visual systems [4]. Eckhorn's neural networks, through stimulus-forced and stimulus-induced synchronization, are able to bridge temporal gaps and minor magnitude variations in input data, causing neurons with similar inputs to pulse together. This neuron model groups image pixels based on spatial proximity and brightness similarity [4].

The accuracy of image segmentation using PCNN depends on the network parameters, namely the feeding decay constant αf, the dynamic threshold decay constant αθ, the threshold magnitude vθ, the linking coefficient β, and the stop mechanism used for determining the optimal segmented image. However, no method has been suggested for the automatic determination of these parameters from image statistics [3].

2.2.3 Fast linking method

In order to process real images under various illumination conditions, we apply PCNN with "fast linking". After the first signal is input, all the outputs are calculated and the linking fields are then refreshed; finally, the internal state is calculated and the output is decided. During this calculation, if any neuron's output changes, the linking fields change correspondingly, and the calculation continues until all the outputs are unchanged. Such a cycle is called one iteration. During this process the input is kept unchanged while the linking fields change constantly: the input (feeding) wave transmits its data only after an iteration is finished, while the linking wave sends information to all the elements of the image within the iteration. This method is called fast linking, and it reduces the effect of time quantization. In the original model, firings are separated because of the time delays in the linking fields; with the fast linking model, all the neurons in one region can fire together. In the PCNN model, the linking coefficient β plays an important role: the larger β is, the further linking activity is transmitted, which can be seen clearly in the fast linking model.
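The fast-linking idea, repeating the linking and firing update with the feeding input and thresholds held fixed until the binary output stops changing, can be sketched as follows (a Python/NumPy illustration; the weights, β, and the toy inputs are assumed):

```python
import numpy as np

def fast_link(F, theta, W, beta=0.4, max_rounds=50):
    """Repeat the linking update with feeding F and threshold theta held fixed
    until the binary output Y no longer changes (one 'fast linking' iteration)."""
    Y = np.zeros_like(F, dtype=np.uint8)
    for _ in range(max_rounds):
        Yp = np.pad(Y, 1)
        L = np.zeros_like(F, dtype=float)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                L += W[di + 1, dj + 1] * Yp[1 + di : 1 + di + F.shape[0],
                                            1 + dj : 1 + dj + F.shape[1]]
        U = F * (1.0 + beta * L)               # internal activity with current linking
        Y_new = (U > theta).astype(np.uint8)
        if np.array_equal(Y_new, Y):           # the linking wave has settled
            break
        Y = Y_new
    return Y

W = np.array([[0.07, 0.1, 0.07],
              [0.1,  0.0, 0.1 ],
              [0.07, 0.1, 0.07]])
F = np.array([[100.0, 96.0, 96.0]])  # one seed pixel and two just-below-threshold pixels
theta = np.full(F.shape, 99.0)
Y = fast_link(F, theta, W)           # firing spreads along the row within one iteration
```

Only the 100-valued pixel exceeds the threshold on its own, but its pulse raises the linking input of its neighbor, which fires and in turn recruits the next pixel: the whole region fires together within a single fast-linking iteration.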

2.2.4 The flow chart of PCNN iteration

Figure 2.2.4.1 The Flow Diagram of PCNN iteration

2.2.5 The Blood vessel and Optic Disk Segmentation using PCNN


During image processing, a retinal image of size M×N with L gray levels can be regarded as a two-dimensional M×N matrix with values from 0 to L. During image segmentation, the matrix elements and the PCNN neurons are put into one-to-one correspondence, with the pixel value I_ij serving as the input S_ij of the neuron at the corresponding position. The initial state of the neurons is set to 0. In the first iteration the internal activity U_ij equals the neuron's input S_ij, and all the neuron thresholds begin to decay from their initial value; when a neuron's threshold has decayed to a value smaller than or equal to the corresponding U_ij, the neuron fires (a natural firing) and outputs a pulse Y = 1, at which point its threshold sharply increases and the pulse output stops.

Figure 2.5.1 PCNNs Neuron Model

The threshold then begins to decay again, and when it again becomes smaller than or equal to the corresponding U_ij, a pulse is generated again. After several such cycles are run, the neurons produce an output sequence Y[n] that contains information describing the area, boundary, texture and other characteristics of the retinal image [7].
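The charge-and-fire cycle of a single neuron under a constant stimulus can be sketched as follows (a Python illustration; the decay constant, threshold magnitude, and stimulus values are assumed):

```python
import numpy as np

def firing_times(S, n_iter=30, alpha=0.3, v_theta=20.0, theta0=255.0):
    """Track one neuron with constant input S: the threshold decays each step,
    the neuron fires when S exceeds theta, and firing recharges theta by v_theta."""
    theta, fires = theta0, []
    for n in range(1, n_iter + 1):
        y = 1 if S > theta else 0
        if y:
            fires.append(n)          # record the iteration at which the neuron fired
        theta = np.exp(-alpha) * theta + v_theta * y
    return fires

bright = firing_times(200.0)   # a bright pixel: fires early, then keeps firing
dim = firing_times(60.0)       # a dim pixel: its first firing comes several steps later
```

Bright pixels catch the decaying threshold after very few steps, while dim pixels fire later; the resulting firing-time (or firing-rate) pattern is what the segmentation exploits.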

The parameters used in blood vessel and optic disk segmentation are: αf, the decay constant for the feeding input; vf, the magnitude scaling term for the feeding input; αl, the decay constant for the linking input; vl, the magnitude scaling term for the linking input; β, the linking strength; αθ, the decay constant for the dynamic threshold; and vθ, the magnitude scaling term for the dynamic threshold.

2.3 The proposed algorithm

ALGORITHM

Step 1: Initialization:

Initialize β=0.4, αf=0.3, αl=0.3, αθ=0.3, vf=0.05, vl=1, and the 3×3 weight matrix

weight matrix = [0.07 0.1 0.07; 0.1 0 0.1; 0.07 0.1 0.07]

Step 2: Calculate the feeding input and linking input.

Step 3: Calculate the internal activation value.

Step 4: Calculate the output and dynamic threshold.

Step 5: m=m+1, n=n+1. If m≤M and n≤N go to step 2, else go to step 6.

Step 6: it=it+1, m=1, n=1. If it≤Nit go to step 2, else go to step 7.

Step 7: Save array S as the segmented image.
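Under the standard PCNN recurrences that these decay and scaling parameters belong to, steps 1-7 can be sketched as a Python/NumPy loop. The parameter values and weight matrix are taken from the listing above; vθ is not specified there, so a value is assumed, and the tiny test image is hypothetical:

```python
import numpy as np

def neighbor_sum(Y, W):
    """3x3 weighted sum of Y's neighbors (zero-padded borders)."""
    Yp = np.pad(Y.astype(float), 1)
    out = np.zeros_like(Y, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += W[di + 1, dj + 1] * Yp[1 + di : 1 + di + Y.shape[0],
                                          1 + dj : 1 + dj + Y.shape[1]]
    return out

def pcnn_segment(I, n_iter=10, beta=0.4, af=0.3, al=0.3, at=0.3,
                 vf=0.05, vl=1.0, vt=20.0):   # vt (v_theta) is an assumed value
    W = np.array([[0.07, 0.1, 0.07],          # Step 1: initialization
                  [0.1,  0.0, 0.1 ],
                  [0.07, 0.1, 0.07]])
    F = np.zeros(I.shape); L = np.zeros(I.shape)
    theta = np.full(I.shape, 255.0)
    Y = np.zeros(I.shape, dtype=np.uint8)
    fire_count = np.zeros(I.shape)            # how often each neuron fired
    for _ in range(n_iter):                   # Step 6: repeat for Nit iterations
        F = np.exp(-af) * F + vf * neighbor_sum(Y, W) + I   # Step 2: feeding input
        L = np.exp(-al) * L + vl * neighbor_sum(Y, W)       #          linking input
        U = F * (1.0 + beta * L)              # Step 3: internal activation
        Y = (U > theta).astype(np.uint8)      # Step 4: output ...
        theta = np.exp(-at) * theta + vt * Y  #          ... and dynamic threshold
        fire_count += Y
    return fire_count                         # Step 7: the saved segmentation array

I = np.array([[30.0, 30.0, 220.0],
              [30.0, 220.0, 220.0]])
S = pcnn_segment(I)   # bright pixels start firing earlier, so they fire more often
```

The firing-count map S plays the role of the saved array: thresholding it separates the frequently firing (bright, vessel/disc-like) regions from the background.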

CHAPTER III

IMPLEMENTATION DETAILS


3.1 Software Requirements:

Software Used : MATLAB R2010a

Operating System : Windows XP

3.2 Hardware Requirements

Processor : Pentium III Processor

RAM : 128 MB

Hard Disk Drive : 20 GB

Keyboard : Standard Keyboard

Floppy Drive : 1.44 MB

CD-ROM Drive : Creative

3.3 Technological Requirements

ABOUT DEVELOPING TOOLS

MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and fourth-generation programming language. A proprietary programming language developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, Fortran and Python. Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, giving access to symbolic computing capabilities. An additional package, Simulink, adds multi-domain simulation and model-based design for dynamic and embedded systems. In 2004, MATLAB had around one million users across industry and academia. MATLAB users come from various backgrounds in engineering, science, and economics, and MATLAB is widely used in academic and research institutions as well as industrial enterprises.

Syntax

The MATLAB application is built around the MATLAB scripting language. Common usage of the MATLAB application involves using the Command Window as an interactive mathematical shell or executing text files containing MATLAB code.

Variables

Variables are defined using the assignment operator, =. MATLAB is a weakly typed programming language because types are implicitly converted. It is an inferred-typed language because variables can be assigned without declaring their type, except when they are to be treated as symbolic objects, and their type can change. Values can come from constants, from computation involving values of other variables, or from the output of a function.


Vectors and matrices

A simple array is defined using the colon syntax: init:increment:terminator

Structures

MATLAB has structure data types. Since all variables in MATLAB are arrays, a more adequate name is "structure array", where each element of the array has the same field names. In addition, MATLAB supports dynamic field names (field look-ups by name, field manipulations, etc.). Unfortunately, MATLAB JIT does not support MATLAB structures, therefore just a simple bundling of various variables into a structure will come at a cost.

Functions

When creating a MATLAB function, the name of the file should match the name of the first function in the file. Valid function names begin with an alphabetic character, and can contain letters, numbers, or underscores.

Function handles

MATLAB supports elements of lambda calculus by introducing function handles, or function references, which are implemented either in .m files or as anonymous/nested functions.

Classes and Object-Oriented Programming

MATLAB's support for object-oriented programming includes classes, inheritance, virtual dispatch, packages, pass-by-value semantics, and pass-by-reference semantics. However, the syntax and calling conventions are significantly different from other languages. MATLAB has value classes and reference classes, depending on whether the class has handle as a super-class (for reference classes) or not (for value classes).Method call behavior is different between value and reference classes.

Object.Method();

Interfacing with other languages

MATLAB can call functions and subroutines written in the C programming language or Fortran. A wrapper function is created allowing MATLAB data types to be passed and returned. The dynamically loadable object files created by compiling such functions are termed "MEX-files" (for MATLAB executable). Since 2014, increasing two-way interfacing with Python has been added.

Libraries written in Perl, Java, ActiveX or .NET can be directly called from MATLAB, and many MATLAB libraries (for example XML or SQL support) are implemented as wrappers around Java or ActiveX libraries. Calling MATLAB from Java is more complicated, but can be done with a MATLAB toolbox which is sold separately by MathWorks, or using an undocumented mechanism called JMI (Java-to-MATLAB Interface), which should not be confused with the unrelated Java Metadata Interface that is also called JMI.

Image processing using matlab


imread()

The imread() command reads an image into a matrix.

imshow()

To show an image, we use the imshow() or imagesc() command. The imshow() command shows an image in standard 8-bit format, as it would appear in a web browser. The imagesc() command displays the image on scaled axes, with the minimum value shown as black and the maximum value as white.

colormap gray

With the colormap gray command, the picture turns into grayscale.

imhist()

imhist() displays a histogram of image data. imhist(I,n) displays a histogram with n bins for the intensity image I above a grayscale colorbar of length n. If the argument n is omitted, imhist() uses a default of n = 256 for a grayscale image, or n = 2 for a binary image.
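As a rough analogue, the binning behavior of imhist can be sketched in Python/NumPy (illustrative only; this is not MATLAB's implementation):

```python
import numpy as np

def imhist_counts(img, n=256):
    """Histogram counts of 8-bit image intensities over [0, 255] using n
    equal-width bins, a rough analogue of MATLAB's imhist for grayscale images."""
    counts, _ = np.histogram(img, bins=n, range=(0, 255))
    return counts

img = np.array([[0, 0, 128], [255, 255, 255]], dtype=np.uint8)
h = imhist_counts(img, n=4)  # bin edges: 0, 63.75, 127.5, 191.25, 255
```

With n = 4, the two zeros fall in the first bin, 128 in the third, and the three 255-valued pixels in the last bin.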

im2bw()

im2bw() converts a grayscale image to a binary image. We'll use the adjusted image.

3.4 DATASET

DRIVE (Digital Retinal Images for Vessel Extraction) is a publicly available database [2] consisting of 40 color fundus photographs. The photographs were obtained from a diabetic retinopathy screening program in the Netherlands, whose screening population consisted of 453 subjects between 31 and 86 years of age. Each image has been JPEG compressed, which is common practice in screening programs. Of the 40 images in the database, 7 contain pathology, namely exudates, hemorrhages, and pigment epithelium changes. The images were acquired using a Canon CR5 non-mydriatic 3-CCD camera with a 45° field of view (FOV). Each image is captured using 8 bits per color plane at 768×584 pixels. The FOV of each image is circular with a diameter of approximately 540 pixels. The set of 40 images was divided into a test set and a training set, each containing 20 images. Three observers, the first and second author and a computer science student, manually segmented a number of images; all observers were trained by an experienced ophthalmologist. The first observer segmented 14 images of the training set, while the second observer segmented the other 6 images.


CHAPTER IV

CONCLUSION

Extracting blood vessels from retinal images is a challenging problem, and the quality of the extracted vessels directly affects subsequent research and development. In this project, a simplified PCNN is proposed for retinal blood vessel segmentation. The accuracy of image segmentation using PCNN depends on the network parameters, namely the feeding decay constant αf, the dynamic threshold decay constant αθ, the threshold magnitude vθ, the linking coefficient β, and the stop mechanism used for determining the optimal segmented image. However, no method has been suggested for the automatic determination of these parameters from image statistics.

The supervised methods need manual intervention, which introduces a chance of inaccuracy. Vessel-profile-based approaches separate fine vessels from major vessels, but may be confused when a microaneurysm overlaps with narrow blood vessels. MRF-based segmentation fails on severely damaged retinal images, and the modified line operator fails when segmenting areas darker than the retinal background.

Various algorithms have been proposed in the recent literature. Unsupervised methods do not need any human training, whereas training an Artificial Neural Network (ANN) usually requires human help, so there is a chance of error in it. The presented methodology for segmentation of blood vessels uses the pulse coupled neural network (PCNN), which belongs to the class of unsupervised methods.

REFERENCES

[1] Elaheh Imani et al., "Improvement of retinal blood vessel detection using morphological component analysis", 8 (2014).


[2] M.M. Fraz, P. Remagnino, A. Hoppe, B. Uyyanonvara, A.R. Rudnicka, C.G. Owen, S.A. Barman, "Blood vessel segmentation methodologies in retinal images – A survey", 108 (2012), pp. 407–433.

[3] https://uta-ir.tdl.org/uta-ir/bitstream/handle/10106/5848/TELIDEVARA_uta_2502M_11181.pdf?sequence=1

[4] R. Eckhorn, H.J. Reitboeck, M. Arndt, P.W. Dicke, "A neural network for feature linking via synchronous activity: results from cat visual cortex and from simulations", in: Models of Brain Function, Cambridge University Press, Cambridge, UK, 1989, pp. 255–272.

[5] Th. Lindblad and J.M. Kinser, "Image Processing Using Pulse-Coupled Neural Networks", Perspectives in Neural Computing, Springer-Verlag. ISBN 3-540-76264-7.

[6] Mario I. Chacon M., Alejandro Zimmerman, and Pablo Rivas P. (2007), "Image processing application with a PCNN". Retrieved from Google Books.

[7] Hai-Rong Ma and Xin-Wen Cheng, "Automatic Image Segmentation with PCNN Algorithm Based on Grayscale Correlation", Vol. 7, No. 5 (2014), pp. 249–258.

[8] Chaitanya Telidevara (2011), "Silicon wafer defect segmentation using modified pulse coupled neural network". Retrieved from https://uta-ir.tdl.org/uta-ir/bitstream/handle/10106/5848/TELIDEVARA_uta_2502M_11181.pdf?sequence=1

APPENDIX A

function [Edge,Numberofaera]=pcnn(X)


X=imread('retina.jpg');   % read the image
figure(1); imshow(X);
X=double(X);

% Weight matrices
Weight=[0.07 0.1 0.07; 0.1 0 0.1; 0.07 0.1 0.07];
WeightLI2=[-0.03 -0.03 -0.03; -0.03 0 -0.03; -0.03 -0.03 -0.03];
d=1/(1+sum(sum(WeightLI2)));
WeightLI=[-0.03 -0.03 -0.03; -0.03 0.5 -0.03; -0.03 -0.03 -0.03];
d1=1/(sum(sum(WeightLI)));

% Initial parameters
Beta=0.4;     % linking strength
Yuzhi=245;    % threshold reset value
Decay=0.3;    % decay constant
[a,b]=size(X);
V_T=0.2;
Threshold=zeros(a,b);
S=zeros(a+2,b+2);
Y=zeros(a,b);
T=zeros(a,b);
Firate=zeros(a,b);   % firing-rate accumulator (uncommented: it is used below)
n=1;                 % iteration counter
Tempu1=zeros(a,b);
Tempu2=zeros(a+2,b+2);

% Image enhancement
Out=zeros(a,b);
Out=uint8(Out);
for i=1:a
    for j=1:b
        if (i==1 || j==1 || i==a || j==b)
            Out(i,j)=X(i,j);
        else
            H=[X(i-1,j-1) X(i-1,j) X(i-1,j+1);
               X(i,j-1)   X(i,j)   X(i,j+1);
               X(i+1,j-1) X(i+1,j) X(i+1,j+1)];
            temp=d1*sum(sum(H.*WeightLI));
            Out(i,j)=temp;
        end
    end
end
figure(2); imshow(Out);

% PCNN iterations
for count=1:30
    for i0=2:a+1
        for i1=2:b+1
            V=[S(i0-1,i1-1) S(i0-1,i1) S(i0-1,i1+1);
               S(i0,i1-1)   S(i0,i1)   S(i0,i1+1);
               S(i0+1,i1-1) S(i0+1,i1) S(i0+1,i1+1)];
            L=sum(sum(V.*Weight));                    % linking input
            V2=[Tempu2(i0-1,i1-1) Tempu2(i0-1,i1) Tempu2(i0-1,i1+1);
                Tempu2(i0,i1-1)   Tempu2(i0,i1)   Tempu2(i0,i1+1);
                Tempu2(i0+1,i1-1) Tempu2(i0+1,i1) Tempu2(i0+1,i1+1)];
            F=X(i0-1,i1-1)+sum(sum(V2.*WeightLI2));   % feeding component
            F=d*F;
            U=double(F)*(1+Beta*double(L));           % internal activity
            Tempu1(i0-1,i1-1)=U;
            if U>=Threshold(i0-1,i1-1) || Threshold(i0-1,i1-1)<60
                T(i0-1,i1-1)=1;
                Threshold(i0-1,i1-1)=Yuzhi;
                Y(i0-1,i1-1)=1;
            else
                T(i0-1,i1-1)=0;
                Y(i0-1,i1-1)=0;
            end
        end
    end
    Threshold=exp(-Decay)*Threshold+V_T*Y;   % update dynamic threshold
    if n==1
        S=zeros(a+2,b+2);
    else
        S=Bianhuan(T);
    end
    n=n+1;
    Firate=Firate+Y;
    figure(3); imshow(Y);
    Tempu2=Bianhuan(Tempu1);
end
Firate(Firate<11)=0;
Firate(Firate>=11)=11;
figure(4); imshow(Firate);

% Bianhuan: zero-pad a matrix by one pixel on each side
function Y=Bianhuan(X)
[m,n]=size(X);
Y=zeros(m+2,n+2);
for i=1:m+2
    for j=1:n+2
        if i==1 || j==1 || i==m+2 || j==n+2
            Y(i,j)=0;
        else
            Y(i,j)=X(i-1,j-1);
        end
    end
end

Appendix B – Screenshots


Figure B.1 Retinal image DRIVE database

Figure B.2 Contrast Adjusted Image at three levels


Figure B.3 Segmented retinal blood vessel

Figure B.4 Segmented Optic Disk


SEGMENTED RESULT IN EACH ITERATION AT THREE LEVELS OF CONTRAST

Figure B.5 Iteration 1

Figure B.6 Iteration 2

Figure B.7 Iteration 3


Figure B.8 Iteration 4

Figure B.9 Iteration 5

Figure B.10 Iteration 6


Figure B.11 Iteration 7

Figure B.12 Iteration 8

Figure B.13 Iteration 8


Figure B.14 Iteration 9

Figure B.15 Iteration 10

Figure B.16 Iteration 11


Figure B.17 Iteration 12

Figure B.18 Iteration 13
