SpiNNaker Spiking Neural Network Architecture
Max Brown, Nick Barlow

Transcript of "SpiNNaker Spiking Neural Network Architecture"

Page 1:

SPINNAKER SPIKING NEURAL NETWORK ARCHITECTURE

MAX BROWN
NICK BARLOW

Page 2:

OVERVIEW
● What is SpiNNaker
● Architecture
● Spiking Neural Networks
● Router
● Commands
● Task Scheduling
● Related Works / Projects
● Conclusion

Page 3:

WHAT IS IT?
• Massively parallel architecture
• Inspired by biology
• Simulates spiking neural networks in real time
• Focused on low power consumption
• Physically compact
• Designed to scale from a single chip up to the full machine
• Developed by the University of Manchester, University of Southampton, University of Cambridge, University of Sheffield, ARM Ltd, Silistix Ltd, and Thales
• Work on SpiNNaker began in 2005

Page 4:

ARCHITECTURE
• 57,600 SpiNNaker System-in-Package (SiP) chips, each with:
  – 18 low-power ARM968 cores
  – 128 MB off-die SDRAM
  – 64 KB Data Tightly-Coupled Memory (DTCM) per core
  – 32 KB Instruction Tightly-Coupled Memory (ITCM) per core
  – Star topology within the chip
• 1,036,800 ARM9 cores and roughly 7 TB of SDRAM in total (see the arithmetic sketch below)
• Chips connected in a 2D triangular torus topology
• Custom self-timed Network-on-Chip (NoC)
  – 8 Gb/s communication bandwidth
• Custom router
  – 5-9 byte packets
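The headline totals follow directly from the per-chip figures; here is a quick back-of-the-envelope check in Python using only the numbers quoted on this slide (illustrative arithmetic, not part of the original deck):

```python
# Aggregate figures for the full SpiNNaker machine, derived from the
# per-chip numbers quoted above.
chips = 57_600
cores_per_chip = 18
sdram_per_chip_mb = 128

total_cores = chips * cores_per_chip          # 1,036,800 ARM9 cores
total_sdram_mb = chips * sdram_per_chip_mb    # 7,372,800 MB
total_sdram_tb = total_sdram_mb / 1_000_000   # ~7.4 TB (decimal), i.e. the "7 TB" on the slide

print(f"{total_cores:,} cores, {total_sdram_tb:.1f} TB SDRAM")
```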

Page 5:
Page 6:

ARCHITECTURE (CONT'D)
• Inspired by biology
  – The human brain operates at about 25 W and is composed of low-speed neurons
  – Biological "interconnect" has high latency, measured in milliseconds
  – Tolerant to component "failure"
  – Unidirectional, lossless transmission of information
• Each independent core in the SpiNNaker system represents a neuron
  – Heavily simplified neuron model
  – Cores in a SiP share memory and networking resources
  – Inter-node communication happens explicitly through packets

Page 7:
Page 8:

SPIKING NEURAL NETWORK EMULATION
• Spiking Neural Network
  – Models the intercommunication and processes of the human brain
  – Currently capable of simulating about 1% of the human brain
  – Uses three major principles
    • Representation of Neurons: a combination of non-linear encoding and weighted linear decoding
    • Transformations: functions of variables determined by weighted linear decoding
    • Neural Dynamics: using the neural representations as control-theoretic state variables
  – Nonlinear encoding converts an analog input x(t) into a neural spike train using the following equation (a short encoding sketch follows below)
    • δ_i(x(t)) = G_i[ a_i (e_i · x(t)) + J^bias ]
    • δ_i represents the resulting spike train of neuron i
    • G_i represents the nonlinear response function of neuron i
    • a_i represents the gain factor
    • e_i represents the encoding vector
    • J^bias represents the bias current
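A minimal sketch of this encoding step in Python. It assumes a standard leaky integrate-and-fire rate function for G_i (the slides do not specify the neuron model) and uses made-up gain, encoder, and bias values:

```python
import numpy as np

def lif_rate(J, tau_ref=0.002, tau_rc=0.02):
    """Steady-state firing rate (Hz) of a leaky integrate-and-fire neuron
    driven by current J; one common choice for the nonlinearity G_i."""
    if J <= 1.0:   # below the (normalised) threshold current the neuron is silent
        return 0.0
    return 1.0 / (tau_ref - tau_rc * np.log(1.0 - 1.0 / J))

def encode(x, gain, encoder, j_bias):
    """NEF-style nonlinear encoding: rate_i = G_i[ a_i * (e_i . x) + J_bias ]."""
    return lif_rate(gain * np.dot(encoder, x) + j_bias)

# Example: one neuron tuned to positive inputs (gain, encoder and bias are invented).
for x in (-1.0, 0.0, 1.0):
    rate = encode(np.array([x]), gain=2.0, encoder=np.array([1.0]), j_bias=1.5)
    print(f"x = {x:+.1f}  ->  {rate:6.1f} spikes/s")
```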

Page 9:

SPIKING NEURAL NETWORK EMULATION (CONT'D)

• Depicted to the right is a model of a network responding to a stimulus

• Neuron 1 responds to positive values of the input and spikes quickly

• When the input falls to 0, neuron 1 spikes more slowly

• When the input becomes -1, neuron 2 begins firing quickly

• This works along an analog scale: the two neurons fire at different rates to represent the input value (a toy illustration follows below)
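A toy illustration of that behaviour using the same style of LIF rate encoding, with two neurons whose encoders point in opposite directions. All tuning parameters are invented for the example:

```python
import numpy as np

def lif_rate(J, tau_ref=0.002, tau_rc=0.02):
    """Steady-state LIF firing rate for input current J (silent below threshold)."""
    return 0.0 if J <= 1.0 else 1.0 / (tau_ref - tau_rc * np.log(1.0 - 1.0 / J))

# Two neurons with opposite encoders: neuron 1 prefers positive inputs,
# neuron 2 prefers negative inputs. Gains and biases are made up.
neurons = [
    {"name": "neuron 1", "encoder": +1.0, "gain": 2.0, "bias": 1.2},
    {"name": "neuron 2", "encoder": -1.0, "gain": 2.0, "bias": 1.2},
]

for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    rates = {n["name"]: lif_rate(n["gain"] * n["encoder"] * x + n["bias"])
             for n in neurons}
    line = "  ".join(f"{name}: {r:6.1f} Hz" for name, r in rates.items())
    print(f"x = {x:+.1f}  {line}")
```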

Page 10:

SPIKING NEURAL NETWORK EMULATION (CONT'D)
• The output can be determined by decoding the output spike pattern
  – x̂(t) = Σ_{i,n} d_i h(t − t_{i,n})
  – d_i represents a set of decoding vectors
  – h(t − t_{i,n}) represents the convolution of the spike train with the post-synaptic current
• The encoding and decoding methods are combined to implement functions (a decoder-fitting sketch follows after this slide)
• Pictured below is the output for two functions
  – The graph on the left depicts the function y = x
  – The graph on the right depicts y = x^2
• The full procedure for finding these decoders is beyond the scope of this presentation
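A minimal sketch of how a set of linear decoders d_i can be found in practice: sample each neuron's firing rate across the input range, then solve a least-squares problem against the target function. This is a generic NEF-style illustration with invented tuning curves, not the actual SpiNNaker implementation:

```python
import numpy as np

def lif_rate(J, tau_ref=0.002, tau_rc=0.02):
    """Steady-state LIF firing rate for input current J."""
    J = np.asarray(J, dtype=float)
    out = np.zeros_like(J)
    ok = J > 1.0
    out[ok] = 1.0 / (tau_ref - tau_rc * np.log(1.0 - 1.0 / J[ok]))
    return out

rng = np.random.default_rng(0)
n_neurons = 50
encoders = rng.choice([-1.0, 1.0], size=n_neurons)   # preferred directions
gains = rng.uniform(0.5, 3.0, size=n_neurons)        # made-up gains
biases = rng.uniform(0.5, 2.0, size=n_neurons)       # made-up bias currents

x = np.linspace(-1, 1, 200)                           # sample the input range
A = lif_rate(gains * encoders * x[:, None] + biases)  # activity matrix (200 x 50)

for name, target in (("y = x", x), ("y = x^2", x ** 2)):
    d, *_ = np.linalg.lstsq(A, target, rcond=None)    # least-squares decoders d_i
    estimate = A @ d                                   # decoded output across the range
    err = np.sqrt(np.mean((estimate - target) ** 2))
    print(f"{name}: RMS decoding error ~ {err:.3f}")
```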

Page 11:

ARCHITECTURE (NETWORK/ROUTER)
• Custom on-chip router
• Built to send pulses instead of streams
  – Saves time when sending small, adjustable-size packets
  – Saves power by not sending empty packets to pad out a stream
• GALS (Globally Asynchronous, Locally Synchronous)
  – Keeps the internals synchronous and smaller
  – Keeps the clocks of different modules independent
  – Allows for clocked memory units in the router, such as queues
• Three types of communication (a toy multicast-lookup sketch follows after this list)
  – Multicast: sent to signal that a core has finished a task
    • routed by the internal routing information and the routing tables in the router
  – Point-to-Point: communication sent between controller cores to pass data
    • routed from one specific core to another specific core
  – Nearest Neighbor: used to send information to every core on the SiP or to write data to memory
    • routed to every core on the chip at the same time
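A toy sketch of the idea behind table-driven multicast routing: each table entry holds a key and a mask, and a packet's routing key is matched against them to select the output links. The entry format, table contents, and link names here are invented for illustration and do not reproduce the real SpiNNaker router:

```python
from dataclasses import dataclass

@dataclass
class RouteEntry:
    key: int     # value the masked packet key must equal
    mask: int    # which bits of the packet key are significant
    links: list  # output links/cores to copy the packet to (invented names)

# Invented example table: two multicast routes.
table = [
    RouteEntry(key=0x0100, mask=0xFF00, links=["north", "core_3"]),
    RouteEntry(key=0x0200, mask=0xFF00, links=["south_west"]),
]

def route_multicast(packet_key: int) -> list:
    """Return the output links for a multicast packet with the given routing key.
    The first matching entry wins; unmatched packets fall through to a default."""
    for entry in table:
        if packet_key & entry.mask == entry.key:
            return entry.links
    return ["default_link"]

print(route_multicast(0x0142))  # -> ['north', 'core_3']
print(route_multicast(0x0999))  # -> ['default_link']
```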

Page 12:

COMMANDS
• Nodes execute a subset of the ARM ISA for real execution
• Intercommunication and execution revolve around three major event-driven functions
  – Get Input: runs when a packet is received
  – Direct Memory Access complete
  – Update: executed every 1 ms
• Neurons update synaptic information in real time
  – Local algorithm for the calculation
• The SiP's SDRAM stores shared information
  – Individual nodes must contend for access

Page 13:

TASK SCHEDULING
• Tasks are scheduled using one core on each chip, called the scamp (the monitor core)
• The rest of the cores are called the sark (application cores)
• Programs are implemented using an event-driven model (a minimal sketch follows below)
  – When something important happens, code is executed in response
  – The application does not control the execution flow
  – The code run when an event occurs is called a callback
  – Callbacks are either queueable or non-queueable
    • Non-queueable callbacks are executed immediately
    • Queueable callbacks are placed in a scheduler queue
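A minimal sketch of this event-driven, callback-based model in Python. The event names, priorities, and scheduler are invented to mirror the description above; the real SpiNNaker runtime provides its own registration API rather than this one:

```python
import heapq

# Registered callbacks: event name -> (priority, function).
# Priority < 0 means non-queueable (run immediately); >= 0 means queueable.
callbacks = {}
queue = []      # (priority, sequence, callback) min-heap
sequence = 0    # tie-breaker so equal priorities run in arrival order

def register(event, priority, fn):
    callbacks[event] = (priority, fn)

def dispatch(event, *args):
    """Deliver an event: run non-queueable callbacks at once, queue the rest."""
    global sequence
    priority, fn = callbacks[event]
    if priority < 0:
        fn(*args)                                             # non-queueable: immediate
    else:
        heapq.heappush(queue, (priority, sequence, lambda: fn(*args)))
        sequence += 1

def run_scheduler():
    """Drain the scheduler queue in priority order (lowest value first)."""
    while queue:
        _, _, fn = heapq.heappop(queue)
        fn()

# Example handlers mirroring the three functionalities on the Commands slide.
register("packet_received", -1, lambda key: print("spike packet:", hex(key)))
register("dma_done",         1, lambda: print("DMA transfer complete"))
register("timer_tick",       0, lambda t: print(f"1 ms update, t = {t} ms"))

dispatch("packet_received", 0x0142)   # runs immediately
dispatch("dma_done")
dispatch("timer_tick", 1)
run_scheduler()                       # timer update runs before the DMA callback
```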

Page 14:

RELATED WORKS

Page 15:

CONCLUSIONS
• SpiNNaker is not a general-purpose CPU
• SpiNNaker requires a thorough understanding of spiking neural networks to be used properly
• Power consumption matters more than raw performance in ultra-large-scale design
• Implementing special hardware is difficult, so pre-existing components should be used when possible
  – Low-power ARM cores were used in the design
  – A custom router was designed because no existing solution provided the required functionality
• SpiNNaker has a very slow refresh rate and clock speed, but can still perform in real time
  – The brain functions at these speeds and still performs large tasks, so SpiNNaker was designed to emulate this
• The brain is very fault tolerant
  – As a result, SpiNNaker is also very fault tolerant
  – SpiNNaker is so large that entire SiPs may fail, and the system needs to be able to handle that
• SpiNNaker had functionality added so that it could be programmed and debugged
  – The brain has no such functionality

Page 16:

WORKS CITED
"SpiNNaker Project." Retrieved December 2015. Available: http://apt.cs.manchester.ac.uk/projects/SpiNNaker/project/

Navaridas, J.; Lujan, M.; Plana, L.A.; Miguel-Alonso, J.; Furber, S.B., "Analytical Assessment of the Suitability of Multicast Communications for the SpiNNaker Neuromimetic System," in High Performance Computing and Communication & 2012 IEEE 9th International Conference on Embedded Software and Systems (HPCC-ICESS), 2012 IEEE 14th International Conference on, pp. 1-8, 25-27 June 2012. doi: 10.1109/HPCC.2012.11. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6332152&isnumber=6331993

Patterson, C.; Preston, T.; Galluppi, F.; Furber, S., "Managing a Massively-Parallel Resource-Constrained Computing Architecture," in Digital System Design (DSD), 2012 15th Euromicro Conference on, pp. 723-726, 5-8 Sept. 2012. doi: 10.1109/DSD.2012.84. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6386963&isnumber=6386869

http://webspace.ship.edu/cgboer/neuron.gif

Galluppi, F.; Davies, S.; Furber, S.; Stewart, T.; Eliasmith, C., "Real time on-chip implementation of dynamical systems with spiking neurons," in Neural Networks (IJCNN), The 2012 International Joint Conference on, pp. 1-8, 10-15 June 2012. doi: 10.1109/IJCNN.2012.6252706. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6252706&isnumber=6252360

Jian Wu; Furber, S.; Garside, J., "A Programmable Adaptive Router for a GALS Parallel System," in Asynchronous Circuits and Systems, 2009. ASYNC '09. 15th IEEE Symposium on, pp. 23-31, 17-20 May 2009. doi: 10.1109/ASYNC.2009.17. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5010333&isnumber=5010321

Neuromorphs.net, "Universal Neuromorphic Devices And Sensors For Real-Time Mobile Robotics." N.p., 2015. Web. 12 Dec. 2015.