
Using A Neuroprocessor Model for Describing Artificial Neural Nets

Project Report for EE563

by Rob Chapman

806461

Artificial Neural Net

Neuroprocessor Model

Hardware


1 Introduction
2 Technologies
   2.1 Artificial Neural Nets
   2.2 A Stack Processor
3 Neuroprocessor
   3.1 Sum, Sign and Latch
   3.2 Grand Matrix
4 NP Networks
   4.1 Single Layer Networks
   4.2 Multilayer Networks
   4.3 Recurrent Networks
   4.4 Multiple Networks
5 Paradigms
   5.1 Mapping
   5.2 Multilayer, Single Output
      5.2.1 Results
   5.3 Single Layer, Multiple Outputs
      5.3.1 Hebbian Learning
      5.3.2 Input Patterns
      5.3.3 Test Patterns
      5.3.4 Emergent Property
      5.3.5 Contour Plots
      5.3.6 Comments
6 Other Paradigms
   6.1 Multi-bit Inputs and Outputs
   6.2 Different Activation Functions
      6.2.1 NPN Learning
      6.2.2 Active Learning
   6.3 Tapped Delay Line and Integrators
7 Simulation
8 Discussion
9 References
A Code for Simulator
B Code for XOR NPN
C HEBB
D TESTS
E WEIGHTS
F RESULTS


Using A Neuroprocessor Model for Describing Artificial Neural Nets

Rob Chapman

1 Introduction

A neural network paradigm is introduced that is simple to put into hardware but is general enough to represent any artificial neural network (ANN) paradigm. It is efficient in its use of hardware resources, making it a good candidate for describing neural net designs which need to be implemented in hardware.

In this paper, the model is introduced and explored. Two ANNs are mapped onto the model with some interesting results. This is followed by comments on other paradigms. The paper is then wrapped up with a discussion and a look at future work.

2 Technologies

Conventional digital serial algorithms described in programming languages express much of the way in which we think: serially, connected, thought after thought. Our consciousness is modeled by such programs, but what of the hardware which runs it? Fast digital computers with megahertz or gigahertz clock rates try to run these programs quickly, but there are still many problems for which they will never be fast enough, or even close in skill, when compared to a very different hardware platform, a bio-neural net (BNN): the brain.

2.1 Artificial Neural Nets

FIGURE 1. A model of an artificial neuron. The inputs and a bias are summed and then fed into an activation function to produce an output.

While the digital computer makes spatial as well as temporal connections in time, a BNN makes spatial connections in parallel, allowing highly connected problems to be solved in much shorter times than on a digital computer. Problems like image recognition and visual navigation are easily performed by a BNN. There is a well-trodden path for digital computer evolution, but, for a certain set of problems, a BNN structure will always be superior.

ANNs simulating some qualities of a BNN can be run on a digital computer, but then the power they gain in pattern emulation is lost in the time spent running all the parallel connections serially. What is needed is to drastically simplify the digital computer, let it simulate a neuron instead of a network, clone many of them, and connect them with weights corresponding to a designed ANN.

2.2 A Stack Processor

A simple stack processor¹ which is lean on resources will serve as a basis for the hardware model. The stack processor will run a summation program and output the result. The weights for the connections will be literals in the program. It is estimated that at least ten of these processors could fit inside an average field programmable gate array (FPGA) such as a Xilinx 4000 series part.

FIGURE 2. An array of stack processors running a neuroprocessor program are connected to a global bus which carries the outputs. The weights are part of the program.

The stack processor uses a minimum number of registers and logic. One of its features is statistical addition, which takes only two gates per bit of precision to implement. Normal adders use many more gates and cost more to implement in hardware. Statistical addition is fast, it minimizes resources, and it will be used to implement the summation junction. While the number of cycles to complete an addition is determined by the number of carries, over a group of inputs this averages out. The addition is done with integers to minimize circuitry and save time.

1. "A Stack Processor: Synthesis", Rob Chapman, Project Report for EE602.

3 Neuroprocessor

The model presented here for a neuron has one input, plus all the outputs, feeding into a summing junction whose output feeds a hard-limiting activation function. The result is held in the output latch. Each neuroprocessor (NP) has only one input. If there are several inputs, then several NPs will be needed. Although one may be struck by this seemingly lavish use of an NP, this is only the model. When translated to hardware, certain optimizations will be performed, taking advantage of the simplicity of the model and reducing it to a minimum circuit.

A more compact version of the model, which will be used in the rest of the paper, shows the input X_i (through weight W_X_i) and all the outputs Y (through weights W_Y) feeding the summing junction U_i, whose sign is latched to give the output Y_i.

3.1 Sum, Sign and Latch

Inputs and outputs are binary in value. In hardware they are the logic values 0 and 1. For modeling, they are the sign bit of the sum. The logic value 1 represents a negative value and the logic value 0 represents a positive value. The weights are any positive or negative integer value. Summing all the inputs is simply done by adding up the weights and changing the sign of a weight if the input value is negative. The activation function takes the sign of the resultant summation and presents it to the latch. The latch passes the value on to the output on a positive clock edge.

Since the inputs take only two values, there is no multiplication and the entire NP can be run with addition. To make the addition even faster, the range of weights should be as small as possible. The inputs just determine the sign of the weight. If the input is one, then the weight is negated before adding. The equations for an NP are:

U_i = X_i W_{X_i}^T + \sum_{j=1}^{N} Y_j W_{Y_{i,j}}^T   (EQ 1)

Y_i(t+1) = \mathrm{signbit}(U_i)   (EQ 2)
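
As a concrete illustration of EQ 1 and EQ 2, the following Python sketch computes one NP update. This is an assumed illustration, not the Timbre simulator of Appendix A; the name np_update and its argument layout are hypothetical.

# One NP update per EQ 1 and EQ 2. Logic value 0 stands for a positive signal,
# logic value 1 for a negative signal, as described above.
def np_update(i, x, wx, y, wy):
    # x, y: lists of logic values (0 or 1), one per NP
    # wx: input weights; wy[i][j]: weight from output j into NP i
    sign = lambda bit: -1 if bit else 1                      # map logic value to +1/-1
    u = sign(x[i]) * wx[i]                                   # input contribution X_i * W_Xi
    u += sum(sign(y[j]) * wy[i][j] for j in range(len(y)))   # contributions from all outputs
    return 1 if u < 0 else 0                                 # signbit(U_i) becomes the latched output

Since the inputs only flip the sign of a weight, the whole update reduces to additions and one comparison, which is what keeps the hardware small.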


3.2 Grand Matrix

The weight matrix which contains all the input and output-input weights is called the grand matrix (GM). It is derived in vector notation:

U = X W_X^T + Y W_Y^T = \begin{bmatrix} X & Y \end{bmatrix} \begin{bmatrix} W_X^T \\ W_Y^T \end{bmatrix} = P \times W_{GM}   (EQ 3)

It is possible to model all ANNs with one sparse weight matrix (since the NPs are numbered from 1 to N, the weight matrix for the inputs can be treated as the zeroth column and the output-input weights as the 1st through Nth columns). For most, if not all, paradigms the matrix will be heavily populated with zeroes representing no connection or ones representing connections between nodes. The GM is used to generate a hardware description for implementing the design. In general, the discrete weights are: 0 for no connection, 1 to implement a wire, and -1 to invert.

4 NP Networks

A network of NPs, or an NP net (NPN), can be shown by feeding all the inputs and outputs into a vector of summing junctions. The X and Y inputs and the W_X weights for the X inputs are N×1 matrices, while the W_Y weight matrix for the Y inputs is an N×N matrix, where N is the number of NPs.
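
A vectorized form of the same update, matching the N×1 and N×N shapes just described, might look like the following sketch. It assumes NumPy and is illustrative only; it is not the simulator used in the paper.

import numpy as np

def npn_step(x_bits, y_bits, Wx, Wy):
    # x_bits, y_bits: length-N arrays of logic values (0 or 1)
    # Wx: length-N input weights; Wy: N x N output-input weights, Wy[i, j] from output j into NP i
    x = 1 - 2 * np.asarray(x_bits)        # logic 0 -> +1, logic 1 -> -1
    y = 1 - 2 * np.asarray(y_bits)
    u = Wx * x + Wy @ y                   # all N summing junctions at once
    return (u < 0).astype(int)            # latch the sign bits as the new outputs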

4.1 Single Layer Networks

Single layer feedforward ANNs are characterized by one level of summing and then an output. They can be used as linear classifiers.

In this topology, there is one NP for each input and one NP for each output. The unused inputs on the second layer of NPs can be used as biases or thresholds. A two layer NPN is used to implement a single layer of an ANN plus the input layer. Paradigms like ADALINE or the Perceptron are single layer networks. A character recognition example [Section 5.3] will be used to demonstrate a single layer network.



4.2 Multilayer Networks

Multilayer networks are built by zeroing the input weights of layer k (k from 2 to M) and zeroing the output-input weights from all other NPs except for the ones in the (k-1)th layer. Since the latch divides the output into discrete time intervals, for a multilayer network the inputs will take M intervals to propagate to the outputs. For each additional layer, there will be one more NPN layer, so a multilayer ANN of M layers will be composed of M+1 layers of an NPN network.

Certain ANNs require biases. If biases are required, unused inputs of other layers can be set to positive and the input weights adjusted. For the first layer, a new layer with just inputs as biases would be added. A simple example of a multilayer network is shown in [Section 5.2].
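
One way to picture the layering rule is to build W_Y out of zero blocks with a single non-zero block per layer boundary. The sketch below does this in NumPy; layered_Wy and its arguments are hypothetical names used only for illustration of the scheme described above.

import numpy as np

def layered_Wy(layer_sizes, layer_weights):
    # layer_sizes: NPs per NPN layer, input layer first
    # layer_weights[k]: (layer_sizes[k+1] x layer_sizes[k]) block connecting layer k to layer k+1
    n = sum(layer_sizes)
    Wy = np.zeros((n, n), dtype=int)
    start = np.cumsum([0] + list(layer_sizes))      # index of the first NP in each layer
    for k, w in enumerate(layer_weights):
        rows = slice(start[k + 1], start[k + 2])    # NPs of layer k+1 ...
        cols = slice(start[k], start[k + 1])        # ... read only the NPs of layer k
        Wy[rows, cols] = w
    return Wy                                       # everything outside these blocks stays zero

For the XOR NPN of Section 5.2, layer_sizes would be (2, 2, 1) and the two blocks would be [[1, -1], [-1, 1]] and [[1, 1]], read off the GM of EQ 4.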

4.3 Recurrent Networks

Recurrent ANNs use feedback as part of their design. Once there is feedback, the network has temporal capabilities, allowing it to detect patterns in time. These are implemented by allowing layers greater than the kth layer in an M-layer network to feed back to the kth layer. Network topologies such as Hopfield networks, or other networks involving feedback, can be implemented. No examples are included in this paper.

4.4 Multiple Networks

Multiple networks which are not connected to each other can also be created. This allows designs which involve more than one network. By keeping each network in a block of the GM surrounded by zeroes, it is effectively isolated from the other networks.

5 Paradigms

Now let's put theory into practice. Two ANNs will be implemented. The first illustrates a multilayer network and implements the exclusive-or function. The second example is a single layer ANN and illustrates a digit recognition network designed with Hebbian learning.

5.1 Mapping

Once an ANN has been designed, the weights for the GM are calculated and then the design is verified by simulation. This is illustrated with the examples. The GM and the stack processor design are then merged to create a VHDL description of the NPN.

5.2 Multilayer, Single Output

A single layer ANN can implement AND and OR functions, but it has been proven that it takes two layers to implement an XOR function. We will need two NPs on the first layer for the two inputs, two NPs on the second layer to combine the inputs, and one NP on the third layer to produce the output. Since this is a simple topology, the weights are calculated by inspection.

In the first NPN layer, for the input NPs, the W_Y values are all zero and the input weights are both one. The second NPN layer has no inputs, so the input weights are all zero, but each NP has inputs from both NPs of the first layer. The weights are one and minus one. This way, when the inputs are both the same, the NPs will sum to zero, and when the inputs are different, the outputs will sum to -2 and 2. The third NPN layer reads the outputs of the second layer NPs through weights of one. When the second layer has the same outputs, this sums to 2. When the second layer has different outputs, this sums to zero. So the third layer NP has either a 0 or a 2, which both have the same sign. If we take the input to this NP and set it to zero (mathematically positive) and the weight to -1, then this will act like a bias and shift the summation to -1 and 1. Now the output of this NP will be 1 when the inputs are different and 0 when they are the same.

FIGURE 3. These are the connections and weights for the XOR ANN: inputs x1 and x2 feed NPs 1 and 2 with Wx = 1; NP 3 reads NPs 1 and 2 with Wy = 1 and -1; NP 4 reads them with Wy = -1 and 1; NP 5 reads NPs 3 and 4 with Wy = 1 and 1 and takes a bias input with Wx = -1 to produce the output y.

GM = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 & 0 \\ -1 & 0 & 0 & 1 & 1 & 0 \end{bmatrix}   (EQ 4)

FIGURE 4. This is the GM for the design. Each row is the transposed weight matrix for an NP. The first column is the input weights and the rest of the columns are the output-input weights.

5.2.1 Results

This is the result from running the above ANN design on the simulator:

Test XOR function with two inputs and one output and 5 NPs: 2:2:1
x1 x2   y
 0  0   0  should be zero
 0  1   1  should be one
 1  0   1  should be one
 1  1   0  should be zero
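
The same truth table can be reproduced outside the Timbre simulator with a few lines of NumPy driven directly by the GM of EQ 4. This is a sketch under the encoding of Section 3.1 (logic 1 = negative), not the code that produced the listing above.

import numpy as np

GM = np.array([[ 1,  0,  0, 0, 0, 0],
               [ 1,  0,  0, 0, 0, 0],
               [ 0,  1, -1, 0, 0, 0],
               [ 0, -1,  1, 0, 0, 0],
               [-1,  0,  0, 1, 1, 0]])
Wx, Wy = GM[:, 0], GM[:, 1:]                 # column 0: input weights, columns 1-5: output-input weights

for x1 in (0, 1):
    for x2 in (0, 1):
        x = np.array([x1, x2, 0, 0, 0])      # NP 5's unused input is the bias (logic 0 = positive)
        y = np.zeros(5, dtype=int)
        for _ in range(3):                   # three clock intervals, one per NPN layer
            u = Wx * (1 - 2 * x) + Wy @ (1 - 2 * y)
            y = (u < 0).astype(int)
        print(x1, x2, "->", y[4])            # prints the XOR truth table: 0, 1, 1, 0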

5.3 Single Layer, Multiple Outputs

One use of ANNs is to store patterns which can be recalled later when a stimulus is applied to the inputs. With the proper set of weights, the ANN is able to recognize patterns even when parts of the input patterns are corrupted or missing. The example we will use here uses three digits, 0, 1 and 2, described in a 5x6 dot matrix. The weights are calculated using Hebbian learning. This network is a single layer network with 30 inputs and 30 outputs, so there will be 60 NPs connected in a two layer NPN.

5.3.1 Hebbian Learning

Hebbian learning uses the inputs and outputs to directly calculate the weights. The inputs are basically multiplied by the outputs and the resultant values are used as the weight values. Since we are implementing an autoassociative memory, our outputs are the same as our inputs. The weights are calculated by the formula:

W_Y = P \times P^+   (EQ 5)

where P is the matrix of all the input patterns and P^+ is the pseudoinverse of P. Since the inputs are not orthogonal, we have used the pseudoinverse instead of the transpose.

5.3.2 Input Patterns

The input patterns for the digits 0, 1 and 2 in ASCII are:

 ***    **   ***
*   *    *      *
*   *    *      *
*   *    *    **
*   *    *    *
 ***     *    ****

They are all six pixels tall and five pixels wide. Translating these patterns into a matrix of 1 and -1, where -1 is used to represent an OFF pixel and 1 is used to represent an ON pixel, we have:

P = [ -1  1  1  1 -1   1 -1 -1 -1  1   1 -1 -1 -1  1   1 -1 -1 -1  1   1 -1 -1 -1  1  -1  1  1  1 -1
      -1  1  1 -1 -1  -1 -1  1 -1 -1  -1 -1  1 -1 -1  -1 -1  1 -1 -1  -1 -1  1 -1 -1  -1 -1  1 -1 -1
       1  1  1 -1 -1  -1 -1 -1  1 -1  -1 -1 -1  1 -1  -1  1  1 -1 -1  -1  1 -1 -1 -1  -1  1  1  1  1 ]^T

The input matrix is presented transposed to put it into a horizontal form. To calculate the weights, MATLAB was used as it has extensive support for matrices and matrix functions. The output weight matrix was calculated and converted from floating point to integers with the following MATLAB program:

W = round(20*P*pinv(P)); % calculate hebb weights and convert to integer
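
An equivalent calculation in NumPy is sketched below. The digit bitmaps are taken from the reconstructed P matrix above and should be treated as illustrative; np.linalg.pinv plays the role of MATLAB's pinv.

import numpy as np

DIGITS = [                                   # 5x6 dot-matrix digits 0, 1 and 2 ('*' = ON)
    [" *** ", "*   *", "*   *", "*   *", "*   *", " *** "],
    [" **  ", "  *  ", "  *  ", "  *  ", "  *  ", "  *  "],
    ["***  ", "   * ", "   * ", " **  ", " *   ", " ****"],
]

def to_vector(rows):
    # flatten a 6-row pattern into 30 values of +1 (ON) / -1 (OFF)
    return np.array([1 if c == "*" else -1 for row in rows for c in row])

P = np.stack([to_vector(d) for d in DIGITS], axis=1)       # 30 x 3, one pattern per column
Wy = np.rint(20 * P @ np.linalg.pinv(P)).astype(int)       # 30 x 30 integer weight matrix, as in the MATLAB line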

5.3.3 Test Patterns

The network is now tested by presenting it with input patterns and recording the output patterns of the network. The results are shown below, starting with the original patterns. The output should be one of the three patterns:

Now we try a set of input patterns with half of the pattern missing:

Now we try again with 67% of the patterns missing:

input pattern

*** * ** ** ** * ***

** * * * * *

*** * * ** * ****

output pattern

*** * ** ** ** * ***

** * * * * *

*** * * ** * ****

input pattern

*** * ** *

** * *

*** * *

output pattern

*** * ** ** ** * ***

** * * * * *

*** * * ** * ****

input pattern

*** * *

** *

*** *

output pattern

*** * ** ** ** * ***

** * * * * *

** * * * * *


Now we try with a noisy input pattern:

From the above tests, we see that only one test failed, where a part of a 2 was taken and recognized as a 1 pattern. Otherwise the network performed as expected. In the case of normal input, normal output was received. In the case where some information was missing or was distorted by extra ON pixels, the digit was still recognized.
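
A recall of this kind reduces to a single matrix-vector product and a sign, since the first NPN layer only latches the probe onto the 30 input NPs. The sketch below assumes the Wy matrix from the previous sketch and a probe vector of +1/-1 pixels with missing pixels set to -1 (OFF); it is an illustration rather than the Timbre test harness of Appendix C.

import numpy as np

def recall(Wy, probe):
    # probe: length-30 vector of +1/-1 pixels (corrupted or partial pattern)
    u = Wy @ np.asarray(probe)          # summing junctions of the 30 output NPs
    return np.where(u < 0, -1, 1)       # sign of each sum is the recalled pixel

# e.g. recall(Wy, to_vector(partial_digit)) should return one of the three stored digits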

5.3.4 Emergent Property

The ANN design weights are real numbers between the limits of zero and one. For the GM, they are transformed into integers by multiplying and then rounding to the nearest integer. In the interest of reducing the precision, and thereby reducing the complexity of the design, the weights were scaled down and then the twelve tests were rerun. Changes in the precision, number of connections (number of non-zero weights) and number of mistakes were noted.

While converting the weights to integers and then reducing the precision, it was expected that the network would start to make more mistakes. This is not what happened. It turns out that as the precision is decreased, the number of mistakes varies. At the limits it performs as expected, but in between the limits the accuracy of the network can be improved. This is an emergent property of digitizing and optimizing the design.

Table 1 gives the results of the simulation as the precision of the weights was reduced. Precision beyond ±10 made no difference in any of the tests, so those results are excluded. The interesting parts are with the precision less than ±10. The first column is the range of the weights. The second column is the number of connections, or non-zero weights. The third column is the number of mistakes made in recognizing the test vectors. As the precision is reduced, zeroes are introduced into the weight matrix. This is good as it will simplify the final hardware design.

input pattern

** * * * ** * * * * *

** ** * * * * ****

** * * ** * *** *

output pattern

*** * ** ** ** * ***

** * * * * *

*** * * ** * ****

TABLE 1. Results from reducing precision from ±10 to zero.

Precision (±n)   NP-NP Connections   Recognition Mistakes
      10               900                    2
       9               900                    2
       8               900                    2
       7               900                    2
       7               900                    3
       6               900                    3
       6               900                    2
       6               900                    2
       5               900                    2
       4               900                    2
       3               900                    3
       3               720                    1
       2 (a)           720                    1
       2               600                    4
       2               492                    2
       2               392                    3
       2               302                    3
       1               302                    5
       1 (b)           242                    2
       1               142                   12
       1                61                   12
       1                25                   12
       0                 0                   12

a. Optimum for accuracy
b. Optimum for connections
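
The precision sweep itself is easy to reproduce in outline. The sketch below divides the full-precision weights by a scale and rounds, roughly what the TRANSFORM word in Appendix C does, and then re-tests only the three stored digits rather than the full set of twelve test vectors, so the mistake counts will not match Table 1 exactly.

import numpy as np

def sweep(Wy_full, patterns, scales):
    # Wy_full: full-precision 30x30 integer weights; patterns: 30x3 array of stored +1/-1 digits
    for scale in scales:
        W = np.rint(Wy_full / scale).astype(int)              # reduce the precision of every weight
        connections = int(np.count_nonzero(W))                # non-zero weights that survive
        recalled = np.where(W @ patterns < 0, -1, 1)          # recall each stored digit
        mistakes = int(np.sum(np.any(recalled != patterns, axis=0)))
        print(f"+/-{int(np.abs(W).max()):2d}  {connections:3d} connections  {mistakes} mistakes")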


The results from the table are plotted to look for relationships between the changes.

As we can see, there is no simple line to connect the points, but we can still use the information to choose a set of values for the GM. The optimal choices are the ones closest to the zero-zero vertex in both plots. This gives us the fewest mistakes for the minimum amount of precision and a minimum number of connections. The second plot offers two close choices (the yellow and green rows from the table) while the first plot only offers one clear choice (the green row from the table). Based on the design parameters of precision versus hardware real estate, either one would make a good choice.

[Two plots of the Table 1 data: recognition mistakes (0 to 14) versus NP-NP connections (0 to 1000), and recognition mistakes versus precision (±n, 0 to 10).]


5.3.5 Contour Plots

The weight matrix contains 900 elements and it is a little difficult to see it all at once. What we can do to try and obtain some meaning is use a contour plot in MATLAB to display the weights. The two different plots are for different bits of accuracy.

The light green areas are zeroes and the other colors are the non-zero weights, or the connections. The reddish squares are where there are clusters of 1's. The blue areas are where the -1's cluster together. All the green area is zeroes. Each line is adjacent 1's or -1's, while a point is a single 1 or -1.

While the two plots differ mostly in the number of zeroes shown, the general characteristics remain the same.

Plot a: This is the yellow row in the table, where the weights are in the range 2, 1, 0, -1, -2.

Plot b: This is the green row in the table, where the weights are in the range 1, 0, -1.

5.3.6 Comments

The one with the greater bit accuracy, Plot a, has a greater pattern storage ability, as indicated by the increase of colored areas. It is felt that this increase in pattern space allows a greater number of patterns to be stored, creating a sort of pattern noise which can be triggered with untrained input. When the bit accuracy is reduced, the pattern space is reduced, until no patterns can be stored and the bit accuracy is zero. At some point before this, an optimal set of patterns can be stored with reduced pattern noise, and the response of the associative memory is constrained to a select set of patterns. Indeed, with the weights being 1, 0 and -1, with zeroes properly placed, the response of the circuit is the same as it was for weights with much higher accuracy (±10). This reduced precision is greatly desired as it reduces the number of connections for each NP, and when weights are 1 or -1 they are a wire or an inverter, vastly reducing the circuitry needed to support the implementation.

An unexpected result is the increase in accuracy at one point as the precision of the weights was reduced. In the digit recognition example, when the weights were calculated as ranging from 10 to -10, ten of the twelve presented patterns matched and two produced spurious patterns. When the accuracy of the weight values was scaled to an integer value with a range of ±2, no spurious patterns were produced and eleven tests passed, with only one producing the wrong digit output.

The weight matrix which has the most zeroes in it and still performs as well as higher precision weights has 658 zeroes in it, or 242 connections. This means that, on average, there will be 8 connections per NP as opposed to 30 connections per NP for weights with greater precision. The weight precision requires two bits. Multiplying this by the number of required weights gives sixteen bits, or two bytes of memory, required for each NP. For all the weights in the GM, 484 bits are required, which is 60.5 bytes.

6 Other Paradigms

The two examples only have binary outputs, so they are easily implemented with the NP model. For ANNs which have multi-level outputs or continuous outputs, several NPs can be combined to produce the output, with each output having a different binary weighting, much in the same way we describe integers with binary numbers.

6.1 Multi-bit Inputs and Outputs

FIGURE 5. NPN input, network and output topology for functions that require multi-valued precision. In this case, 4 bits of precision are available allowing for the integer range 7 to -8

A simple symmetric matrix of NPs can be used to model functions with multiple bit precision inputs and outputs. For more complex functions, or more inflection points, internal layers of NPs, with more NPs, can be added. This becomes a parallel translator. There is a relationship between the bits of the input and the bits of the output that is captured in the weights.

Since the NP outputs are only binary in nature, they can only represent two values by themselves. To represent more than two values, outputs can be given different numerical weightings and their combined bit array can then represent a number of different precision values. For example, if the numbers between 0 and 100 were to be represented by the outputs of NPs, then 7 outputs would be needed when using a binary weighting system.
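
Reading such a group of outputs back into an integer is just binary weighting. The helper below is a sketch that assumes a two's-complement interpretation, matching Figure 5's 4-bit range of -8 to 7; nibble_value is a hypothetical name used only for illustration.

def nibble_value(bits):
    # bits: NP output logic values, most significant first, e.g. [0, 1, 1, 1]
    value = 0
    for b in bits:
        value = (value << 1) | b            # each NP output contributes one binary-weighted bit
    if bits[0]:                             # top bit set means negative in two's complement
        value -= 1 << len(bits)
    return value

# nibble_value([0, 1, 1, 1]) == 7 and nibble_value([1, 0, 0, 0]) == -8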

6.2 Different Activation Functions

Some ANNs have different activation functions where the output might be linear or follow some other complex transform. To implement these, more output NPs can be used. The activation function is coded into the weights.

6.2.1 NPN Learning

The NPN weights can be calculated by instant learning, such as Hebbian learning, or trained learning with a supervised feedback weight adjuster. J. Hertz et al. [4] describe a new family of learning algorithms that use nonlinear gradient descent in backpropagation learning. The feature of their work is that the derivative of the activation function is not required. This means that non-linear activation functions may be used in a network that is trained with a backpropagation algorithm. The ultimate goal for the weights for a continuous output is to have them represent samples of the function with linear interpolation between the samples.

input precision          activation function   output precision
single bit               hardlim               single bit
single or multiple bit   satlin                multiple bit
single or multiple bit   sigmoid               multiple bit
single or multiple bit   purelin               multiple bit


6.2.2 Active Learning

By connecting NPs in a feedforward network to a feedback network, a backpropagation network could be designed along with an ANN that would allow learning during use. Instead of training in a simulation, the learning would be done in the real environment. The weights for the feedback NPN would be fixed, but the weights for the feedforward NPN would be adjusted by the feedback NPN.

6.3 Tapped Delay Line and Integrators

Tapped delay lines are quite simple to implement as they are no more than a wire with a delay. An NP with just one input weight of one would serve as a delay device, since the output is clocked. Integrators can be built by unzeroing the correct weights to build a binary counter, which is really just a simple state machine.
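
For instance, a tapped delay line is just a chain in the GM. The sketch below (illustrative NumPy, hypothetical names) builds the W_Y for a chain where each NP reads only the previous one through a weight of 1, giving one clock interval of delay per stage.

import numpy as np

def delay_line_Wy(n_stages):
    Wy = np.zeros((n_stages, n_stages), dtype=int)
    for k in range(1, n_stages):
        Wy[k, k - 1] = 1        # stage k is a wire from stage k-1, delayed by one clock
    return Wy                   # stage 0 takes the external input through its input weight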

7 Simulation

A simulator based on the simple NP model, which just sums up its inputs and then supplies the sign of the sum as the output, was written in Timbre¹. Since we are simulating a parallel network in serial, all the outputs must be calculated one by one but only updated when all of them have been calculated. This preserves the time ordering of input/output propagation.

One thing to note about this simulation is that the activation function (signbit(Ui)) is not simulated. The actual values of the outputs are stored in memory instead of just the sign. This is done for easier debugging purposes. By looking at the outputs you get an idea of how your network is performing, in some cases. When the output is used as an input to an NP, it is converted to a binary value.

8 Discussion

Due to the simplicity of the model, the simulation takes sixteen milliseconds to simulate sixty NPs. This leaves a lot of time for simulating larger networks of NPs for more difficult topologies.

There has been some work [5] done on limiting the precision of weights in ADALINE networks that bears further investigation. With respect to learning networks built into the ANN design, A. Montalvo et al. [6] have produced a VLSI chip with 64 synapses and 8 neurons which can be hooked to a CPU to be taught. While VLSI provides an opportunity to maximize efficiency, it is less accessible to the layman than programmable logic devices.

1. http://www.compusmart.ab.ca/rc/Timbre

An alternative solution for the multi-bit output is to not use the signbit function on the output but to let the full set of bits from the summation be the output. Then, for complicated activation functions, a program could be run on the stack processor to do a saturation or sigmoid. This shouldn't impact the performance of the NPN by much, and it would reduce the hardware by reducing the number of NPs required to generate output.

Although the designs in this paper were simple, they illustrate the general design principles for creating a hardware description using the NP model.

The ultimate output of the simulation is the GM, which will be used to implement the ANN in reprogrammable hardware. The weights and a collection of simple stack processors running a statistical addition algorithm¹ will be used to describe the final hardware design in VHDL, which can then be compiled to a format for running on hardware.

The next step in this research is to generate and test the actual hardware implementation. This is the scope of work for next semester.

9 References

1. "Neural Network Design", Hagan, Demuth and Beale, 1996
2. "Neural Networks", Haykin, 1994
3. "Understanding Neural Networks and Fuzzy Logic", Stamatios Kartalopoulos, 1996
4. "Nonlinear Backpropagation: Doing Backpropagation Without Derivatives of the Activation Function", J. Hertz, A. Krogh, B. Lautrup and T. Lehmann, IEEE Transactions on Neural Networks, Vol 8 No 6, Nov 1997
5. "The Effects of Limited-Precision Weights on the Threshold Adaline", Maryhelen Stevenson, IEEE Transactions on Neural Networks, Vol 8 No 3, May 1997
6. "Toward a General-Purpose Analog VLSI Neural Network with On-Chip Learning", A. Montalvo, R. Gyurcsik, J. Paulos, IEEE Transactions on Neural Networks, Vol 8 No 2, Mar 1997

1. The processor model and its addition algorithm are described in "A Writable Computer", available at: http://www.compusmart.ab.ca/rc/Timbre/ContentPages/Timbre/AWritableComputer.html.


A Code for Simulator

The simulator is written in Timbre.

( Neuroprocessor simulator  Rob Chapman  Oct 22, 1997 )
( Each NP has one input and inputs from all the NP outputs )
(   input -->weight-->sum-->sign-->latch )
(   outputs-->weights--^ )
( Each logical input is 0 or 1 mathematically representing 1 or -1 )
( The weights are signed values )
( Network connections are made by unzeroing the weights of the outputs )
( Single, multilayer and recurrent topologies are supported )

( ==== Declare the number of nps ==== )
DECIMAL 60 CONSTANT #np ( number of neuroprocessors )
1 1 + #np + CELLS CONSTANT npsize ( calculated size of np RAM: min 3 cells )

( ==== and create a data foam for inputs, outputs and all the weights ==== )
CREATE neuroram #np CELLS npsize #np * + ALLOT
( | output*#np || [ input | input weight | output weights ]*#np | )
(   ^ np points here; beginning of a two dimensional array )
0 VARIABLE np ( points to a neuroprocessor for sequential evaluation )

( ==== Interface and access ==== )
: OUT ( n -- a ) 1 - CELLS neuroram + ; ( Note: NP is from 1 to N )
: NP ( n -- ) 1 - npsize * #np CELLS + neuroram + np ! ;
: IN ( -- a ) np @ ;
: IW ( -- a ) CELL np @ + ;
: OW ( n -- a ) 1 + CELLS np @ + ;

( ==== Initialize and run ==== )
: INIT-NET ( -- ) neuroram #np CELLS npsize #np * + ERASE ;

: RUN-NP ( i -- y ) ( run neuroprocessor i and leave output on stack )
  NP IW @ IN @ IF NEGATE ENDIF
  1 #np FOR >R R OW @ ?DUP IF R OUT @ 0< IF - ELSE + ENDIF ENDIF R> 1 + NEXT DROP ;

: RUN-NET ( -- ) ( run all NPs in the network )
  1 #np FOR DUP >R RUN-NP R> 1 + NEXT DROP ( run all neuroprocessors )
  #np DUP FOR DUP >R OUT ! R> 1 - NEXT DROP ; ( update all outputs )

: RUNS ( n -- ) FOR RUN-NET NEXT ;

Once the simulation parameters are entered, the simulation can be run with RUN-NET for each layer that needs to be run, or with RUNS with the number of desired runs as an argument.


B Code for XOR NPN

This is the simulation script for simulating the XOR function:

( i1-->NP1+--->NP3-+ )
( \ /` \, )
( X bias-->NP5--->o )
( / \, /` )
( i2-->NP2+--->NP4-+ )

( zero all the weights and then wire up the net by unzeroing weights )
INIT-NET ( all weights are zero )
1 NP 1 IW ! ( wire )
2 NP 1 IW ! ( wire )
3 NP 1 1 OW ! -1 2 OW ! ( wire and inverter )
4 NP -1 1 OW ! 1 2 OW ! ( inverter and wire )
5 NP 1 3 OW ! 1 4 OW ! -1 IW ! ( two wires and a bias )

: TEST ( i1 \ i2 -- )
  2 NP IN ! 1 NP IN ! ( set inputs )
  3 RUNS ( run all three layers of the network )
  CR 1 NP IN @ 2 .R 2 NP IN @ 3 .R 5 OUT @ 0< 1 AND 4 .R ; ( show results )

CR ." Test XOR function with two inputs and one output and 5 NPs: 2:2:1"
CR ." x1 x2 y"

0 0 TEST ." should be zero"
0 1 TEST ." should be one"
1 0 TEST ." should be one"
1 1 TEST ." should be zero"


C HEBB

LD NP.F
: \ 0 SCAN ; IMMEDIATE

( ==== Weights matrix ==== )
CREATE weights 30 30 * CELLS ALLOT

: WMIN ( -- n ) weights DUP @ 30 30 * FOR >R @+ SWAP R> MIN NEXT NIP ;

: AWMIN ( -- n ) weights DUP @ ABS 30 30 * FOR >R @+ SWAP ABS R> MIN NEXT NIP ;

: WMAX ( -- n ) weights DUP @ 30 30 * FOR >R @+ SWAP R> MAX NEXT NIP ;

: AWMAX ( -- n ) weights DUP @ ABS 30 30 * FOR >R @+ SWAP ABS R> MAX NEXT NIP ;

( ==== Weight transformer ==== )
0 VARIABLE owmin  0 VARIABLE owmax  0 VARIABLE fails  0 VARIABLE zeroes  500 VARIABLE scale

: STATS ( w -- w ) DUP owmin @ MIN owmin ! DUP owmax @ MAX owmax ! DUP 0= IF 1 zeroes +! ENDIF ;

: .STATS scale @ . owmin @ . owmax @ . zeroes @ . fails @ . ;

: TRANSFORM ( w -- w' ) DUP ABS ( 192 - ) scale @ 2/ + scale @ 1 MAX / SWAP 0< IF NEGATE ENDIF STATS ;

( ==== Weights transfer ==== )
: INIT-WEIGHTS ( -- ) INIT-NET 1 30 FOR DUP NP 1 IW ! 1 + NEXT DROP ;

: TRANSFER ( -- ) INIT-WEIGHTS 31 weights 30 FOR OVER NP 1 SWAP 30 FOR @+ >R TRANSFORM OVER OW ! 1 + R> NEXT >R DROP 1 + R> NEXT 2DROP ;

\ ==== Data set: W = round(10000*P*pinv(P)); 10 pass 2 fail ==== )
: NEXT-LINE ( -- ) INPUT-LINE DROP ;
: COLUMNS! ( a \ n -- ) FOR BL WORD HERE NUMBER SWAP !+ NEXT DROP ;
: PUT-WEIGHTS ( first \ how many -- ) >R CELLS weights + R> NEXT-LINE 30 FOR NEXT-LINE 2DUP COLUMNS! >R 30 CELLS + R> NEXT 2DROP ;

: Columns ;
: through ( n -- ) 1 - BL WORD HERE NUMBER OVER - PUT-WEIGHTS ;

LD WEIGHTS
TRANSFER

( ==== Testing ==== )
: KQ KEY 27 = IF CLOSE QUIT ENDIF ;

: .WT 31 30 FOR DUP NP ?CR 1 30 FOR DUP OW @ 2 .R 1 + ?STOP NEXT DROP 1 + NEXT DROP KQ ;

: .IN ( view 6x5 inputs ) neuroram #np CELLS + 6 FOR CR 5 FOR DUP @ IF BL ELSE ` * ENDIF EMIT npsize + NEXT NEXT DROP ;

: .OUT ( view 6x5 outputs ) neuroram 30 CELLS + 6 FOR CR 5 FOR DUP @ 0< IF BL ELSE ` * ENDIF EMIT CELL + NEXT NEXT DROP ;

: INPUT-PATTERN ( -- ) 1 6 FOR INPUT-LINE DROP 5 FOR INPUT C@+ OVER IF +IN ELSE DROP ENDIF ` * = IF 0 ELSE 1 ENDIF OVER NP IN ! 1 + NEXT NEXT DROP ;

CREATE pattern 0 , 30 12 * CELLS ALLOT

: TRANSFER-PATTERN ( -- ) pattern @+ SWAP 30 CELLS * + 1 30 FOR DUP >R NP IN @ SWAP !+ R> 1 + NEXT 2DROP ;

0 VARIABLE grade

: COMPARE-OUTPUT ( -- ) 0 grade ! pattern @+ SWAP 3 MOD 30 CELLS * + 31 30 FOR >R @+ SWAP R OUT @ 0< 1 AND XOR grade +! R> 1 + NEXT 2DROP grade @ IF 1 fails +! ENDIF ;

: TEST INPUT-PATTERN TRANSFER-PATTERN 2 FOR RUN-NET NEXT COMPARE-OUTPUT \ CR ." Input:" .IN CR ." Output:" .OUT \ grade @ IF ." Fail" ELSE ." Pass" ENDIF KQ 1 pattern +! ;

: TEST-PATTERN ( -- ) pattern @+ SWAP 30 CELLS * + 1 30 FOR DUP >R NP @+ SWAP IN ! R> 1 + NEXT 2DROP ;

: RETEST 0 pattern ! 0 fails ! 12 FOR TEST-PATTERN 2 RUNS COMPARE-OUTPUT 1 pattern +! NEXT .STATS ;

: SC ( n -- ) 0 owmin ! 0 owmax ! scale ! 0 zeroes ! TRANSFER RETEST ;

LD OUTPUT.F

: SCS ( n -- ) START " RESULTS" >FILE ?FILE 1 SWAP FOR DUP SC CR 1 + NEXT DROP >CONSOLE CR END ;
LD TESTS
: TEST INPUT-PATTERN CR ." Input:" .IN 2 FOR RUN-NET NEXT CR ." Output:" .OUT COMPARE-OUTPUT grade @ IF ." Fail" ELSE ." Pass" ENDIF KQ 1 pattern +! ;
.STATS


D TESTS

TEST  *** * ** ** ** * ***

TEST ** * * * * *

TEST*** * * ** * ****

TEST *** * ** * TEST ** * *

TEST*** * * TEST *** * * TEST ** *

TEST*** *


TEST ** * * * ** * * * * *

TEST ** ** * * * * ****

TEST ** * * ** * *** *


E WEIGHTS

Columns 1 through 6

1186 -321 -321 -288 321 -288 -321 897 897 -192 -897 -192 -321 897 897 -192 -897 -192 -288 -192 -192 827 192 827 321 -897 -897 192 897 192 -288 -192 -192 827 192 827 321 -897 -897 192 897 192 -577 -385 -385 -346 385 -346 1186 -321 -321 -288 321 -288 -288 -192 -192 827 192 827 -288 -192 -192 827 192 827 321 -897 -897 192 897 192 -577 -385 -385 -346 385 -346 1186 -321 -321 -288 321 -288 -288 -192 -192 827 192 827 -288 -192 -192 827 192 827 1186 -321 -321 -288 321 -288 288 192 192 -827 -192 -827 321 -897 -897 192 897 192 -288 -192 -192 827 192 827 -288 -192 -192 827 192 827 1186 -321 -321 -288 321 -288 -577 -385 -385 -346 385 -346 321 -897 -897 192 897 192 -288 -192 -192 827 192 827 321 -897 -897 192 897 192 577 385 385 346 -385 346 -321 897 897 -192 -897 -192 577 385 385 346 -385 346 1186 -321 -321 -288 321 -288

Columns 7 through 12

321 -577 1186 -288 -288 321 -897 -385 -321 -192 -192 -897 -897 -385 -321 -192 -192 -897 192 -346 -288 827 827 192 897 385 321 192 192 897 192 -346 -288 827 827 192 897 385 321 192 192 897 385 1308 -577 -346 -346 385 321 -577 1186 -288 -288 321 192 -346 -288 827 827 192 192 -346 -288 827 827 192 897 385 321 192 192 897 385 1308 -577 -346 -346 385 321 -577 1186 -288 -288 321 192 -346 -288 827 827 192 192 -346 -288 827 827 192 321 -577 1186 -288 -288 321 -192 346 288 -827 -827 -192 897 385 321 192 192 897 192 -346 -288 827 827 192 192 -346 -288 827 827 192 321 -577 1186 -288 -288 321 385 1308 -577 -346 -346 385 897 385 321 192 192 897 192 -346 -288 827 827 192 897 385 321 192 192 897 -385 -1308 577 346 346 -385 -897 -385 -321 -192 -192 -897 -385 -1308 577 346 346 -385 321 -577 1186 -288 -288 321

Columns 13 through 18

-577 1186 -288 -288 1186 288 -385 -321 -192 -192 -321 192


-385 -321 -192 -192 -321 192 -346 -288 827 827 -288 -827 385 321 192 192 321 -192 -346 -288 827 827 -288 -827 385 321 192 192 321 -192 1308 -577 -346 -346 -577 346 -577 1186 -288 -288 1186 288 -346 -288 827 827 -288 -827 -346 -288 827 827 -288 -827 385 321 192 192 321 -192 1308 -577 -346 -346 -577 346 -577 1186 -288 -288 1186 288 -346 -288 827 827 -288 -827 -346 -288 827 827 -288 -827 -577 1186 -288 -288 1186 288 346 288 -827 -827 288 827 385 321 192 192 321 -192 -346 -288 827 827 -288 -827 -346 -288 827 827 -288 -827 -577 1186 -288 -288 1186 288 1308 -577 -346 -346 -577 346 385 321 192 192 321 -192 -346 -288 827 827 -288 -827 385 321 192 192 321 -192 -1308 577 346 346 577 -346 -385 -321 -192 -192 -321 192 -1308 577 346 346 577 -346 -577 1186 -288 -288 1186 288

Columns 19 through 24

321 -288 -288 1186 -577 321 -897 -192 -192 -321 -385 -897 -897 -192 -192 -321 -385 -897 192 827 827 -288 -346 192 897 192 192 321 385 897 192 827 827 -288 -346 192 897 192 192 321 385 897 385 -346 -346 -577 1308 385 321 -288 -288 1186 -577 321 192 827 827 -288 -346 192 192 827 827 -288 -346 192 897 192 192 321 385 897 385 -346 -346 -577 1308 385 321 -288 -288 1186 -577 321 192 827 827 -288 -346 192 192 827 827 -288 -346 192 321 -288 -288 1186 -577 321 -192 -827 -827 288 346 -192 897 192 192 321 385 897 192 827 827 -288 -346 192 192 827 827 -288 -346 192 321 -288 -288 1186 -577 321 385 -346 -346 -577 1308 385 897 192 192 321 385 897 192 827 827 -288 -346 192 897 192 192 321 385 897 -385 346 346 577 -1308 -385 -897 -192 -192 -321 -385 -897 -385 346 346 577 -1308 -385 321 -288 -288 1186 -577 321

Columns 25 through 30

-288 321 577 -321 577 1186 -192 -897 385 897 385 -321 -192 -897 385 897 385 -321 827 192 346 -192 346 -288 192 897 -385 -897 -385 321 827 192 346 -192 346 -288 192 897 -385 -897 -385 321 -346 385 -1308 -385 -1308 -577 -288 321 577 -321 577 1186


827 192 346 -192 346 -288 827 192 346 -192 346 -288 192 897 -385 -897 -385 321 -346 385 -1308 -385 -1308 -577 -288 321 577 -321 577 1186 827 192 346 -192 346 -288 827 192 346 -192 346 -288 -288 321 577 -321 577 1186 -827 -192 -346 192 -346 288 192 897 -385 -897 -385 321 827 192 346 -192 346 -288 827 192 346 -192 346 -288 -288 321 577 -321 577 1186 -346 385 -1308 -385 -1308 -577 192 897 -385 -897 -385 321 827 192 346 -192 346 -288 192 897 -385 -897 -385 321 346 -385 1308 385 1308 577 -192 -897 385 897 385 -321 346 -385 1308 385 1308 577 -288 321 577 -321 577 1186


F RESULTS1 -1308 1308 0 22 -654 654 0 23 -436 436 0 24 -327 327 0 25 -262 262 0 26 -218 218 0 27 -187 187 0 28 -164 164 0 29 -145 145 0 210 -131 131 0 211 -119 119 0 212 -109 109 0 213 -101 101 0 214 -93 93 0 215 -87 87 0 216 -82 82 0 217 -77 77 0 218 -73 73 0 219 -69 69 0 220 -65 65 0 221 -62 62 0 222 -59 59 0 223 -57 57 0 224 -55 55 0 225 -52 52 0 226 -50 50 0 227 -48 48 0 228 -47 47 0 229 -45 45 0 230 -44 44 0 231 -42 42 0 232 -41 41 0 233 -40 40 0 234 -38 38 0 235 -37 37 0 236 -36 36 0 237 -35 35 0 238 -34 34 0 240 -33 33 0 241 -32 32 0 242 -31 31 0 243 -30 30 0 245 -29 29 0 246 -28 28 0 248 -27 27 0 250 -26 26 0 252 -25 25 0 254 -24 24 0 256 -23 23 0 259 -22 22 0 261 -21 21 0 264 -20 20 0 268 -19 19 0 271 -18 18 0 275 -17 17 0 280 -16 16 0 285 -15 15 0 291 -14 14 0 297 -13 13 0 2105 -12 12 0 2114 -11 11 0 2125 -10 10 0 2138 -9 9 0 2154 -8 8 0 2175 -7 7 0 2193 -7 7 0 3202 -6 6 0 3215 -6 6 0 2238 -5 5 0 2291 -4 4 0 2


374 -3 3 0 3385 -3 3 180 1524 -2 2 180 1577 -2 2 300 4643 -2 2 408 2693 -2 2 508 3771 -2 2 598 3873 -1 1 598 51155 -1 1 658 21655 -1 1 758 121795 -1 1 839 122373 -1 1 875 122617 0 0 900 12