Transcript of "Shared Memory Bus for Multiprocessor Systems"
Mat Laibowitz and Albert Chiou, Group 6
Source: csg.csail.mit.edu/6.884/projects/group6-presentation.pdf

Page 1

Shared Memory Bus for Multiprocessor Systems

Mat Laibowitz and Albert Chiou

Group 6

Page 2

Shared Memory Architecture

• We want multiple processors to share memory.
• Question: how do we connect them together?

[Figure: several CPUs and a memory, with the interconnect between them drawn as "???"]

Page 3

Shared Memory Architecture

[Figure: two organizations. Left: multiple CPUs share a single, large memory. Right: each CPU is paired with its own smaller memory]

Issues:
• Scalability
• Access time
• Cost
• Application: WLAN vs. single-chip multiprocessor

Page 4

Cache Coherency Problem

• Each cache needs to correctly handle memory accesses across multiple processors:

• A value written by one processor is eventually visible to the other processors.

• When multiple writes happen to the same location from multiple processors, all processors see the writes in the same order.
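As an illustration only (not part of the project), the second invariant can be phrased as a check over per-processor observation logs:

```python
# Illustrative sketch (not from the project): checking the "same order"
# invariant on per-processor observation logs.
# obs[p] is the sequence of (addr, value) writes processor p observed.

def same_write_order(obs, addr):
    """True if every processor saw writes to `addr` in the same order."""
    orders = [[v for (a, v) in log if a == addr] for log in obs.values()]
    return all(o == orders[0] for o in orders)

obs = {
    "P0": [(0x22C, 0x01), (0x22C, 0x02)],
    "P1": [(0x22C, 0x01), (0x22C, 0x02)],  # same order: invariant holds
}
assert same_write_order(obs, 0x22C)
```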

Page 5

Snooping vs. Directory

[Figure: CPUs A, B, and C attached to a shared memory, with per-cache line states shown. CPU A holds the line in state M while B and C are in state I; the "CPU A = M !!!" callout highlights the modified copy that the other caches must not read stale]

Page 6

MSI State Machine

[Figure: MSI state diagram over states I, S, and M. Edge labels (event/action) include: CPURd/RingRd, CPUWr/RingInv, CPUWr/--, CPURd/--, RingInv/--, and DataMsg]
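A minimal Python sketch of the transitions the edge labels suggest, under the standard MSI reading; the ring-side details are simplified and assumed:

```python
# Sketch of the MSI transitions suggested by the diagram's edge labels
# (event/action pairs); ring behavior is simplified, action names are
# taken from the slide.
MSI = {
    ("I", "CPURd"): ("S", "RingRd"),     # read miss: fetch line over the ring
    ("I", "CPUWr"): ("M", "RingInv"),    # write miss: invalidate other copies
    ("S", "CPUWr"): ("M", "RingInv"),    # upgrade: invalidate other sharers
    ("S", "CPURd"): ("S", None),         # read hit, no ring traffic
    ("M", "CPURd"): ("M", None),         # read hit
    ("M", "CPUWr"): ("M", None),         # write hit
    ("S", "RingInv"): ("I", None),       # remote write: drop shared copy
    ("M", "RingInv"): ("I", "DataMsg"),  # remote write: supply data, invalidate
}

def step(state, event):
    """Return (next_state, ring_action) for a cache-line state and event."""
    return MSI.get((state, event), (state, None))

assert step("I", "CPURd") == ("S", "RingRd")
```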

Page 7

MSI Transition Chart

Cache State | Pending | Incoming Ring Txn | Incoming Processor Txn | Actions
M | 2 | WriteBack | - | Pending->1
M | 1 | WriteBack | - | Pending->0 & Pass Token
M | 0 | WriteBack | - | PASS
M | 0 | Write (Miss) | - | PASS
M | 0 | Write (Hit) | - | Add DATA; Cache->I & PASS
M | 0 | Read (Miss) | - | PASS
M | 0 | Read (Hit) | - | Add DATA; Cache->S & PASS
M | 0 | - | Write (Hit) | -
M | 0 | - | Read (Hit) | -
S | 1 | WriteBack | - | Pending->0, Pass Token
S | 1 | Write | - | Modify Cache; Cache->M & Pass Token
S | 0 | WriteBack | - | PASS
S | 0 | Write (Miss) | - | PASS
S | 0 | Write (Hit) | - | Add DATA; Cache->I & PASS
S | 0 | Read (Miss) | - | PASS
S | 0 | Read (Hit) | - | Add DATA; PASS
S | 0 | - | Write | Pending->1; SEND Write
S | 0 | - | Read (Hit) | -
I & Miss | 1 | Write (M) | - | DATA/M->Cache, Modify Cache; SEND WriteBack(DATA), SEND WriteBack(data), Pending->2
I & Miss | 1 | Write (I/S) | - | DATA/M->Cache, Modify Cache; SEND WriteBack(DATA)
I & Miss | 1 | Read | - | DATA/S->Cache; SEND WriteBack(DATA)
I & Miss | 0 | WriteBack | - | PASS
I & Miss | 0 | Write | - | PASS
I & Miss | 0 | Read | - | PASS
I & Miss | 0 | - | Write | Pending->1; SEND Write
I & Miss | 0 | - | Read | Pending->1; SEND Read
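As a concrete reading of a few rows (state S, pending = 0), a minimal Python sketch; the message fields and function name are assumptions, while the action names come from the chart:

```python
from dataclasses import dataclass

# Illustrative rendering of the S-state, pending=0 rows of the chart.
@dataclass
class RingMsg:
    op: str      # "Read", "Write", or "WriteBack"
    hit: bool    # does the message's address hit in this cache?

def on_ring_msg_S_idle(msg: RingMsg):
    """Snoop an incoming ring message in state S with no pending request."""
    if msg.op == "Read" and msg.hit:
        return "S", ["Add DATA", "PASS"]   # supply data, stay shared
    if msg.op == "Write" and msg.hit:
        return "I", ["Add DATA", "PASS"]   # remote writer: invalidate our copy
    return "S", ["PASS"]                   # miss or WriteBack: forward unchanged

assert on_ring_msg_S_idle(RingMsg("Write", True)) == ("I", ["Add DATA", "PASS"])
```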

Page 8

Ring Topology

[Figure: ring topology. CPU 1 through CPU n each attach to a cache controller (Cache Controller 1 … n) with its cache (Cache 1 … n); the cache controllers and a memory controller (fronting Memory) are linked point-to-point in a ring]

Page 9

Ring Implementation

• A ring topology was chosen for speed and its electrical characteristics:
  – Only point-to-point connections
  – Behaves like a bus
  – Scalable

• Uses a token to ensure sequential consistency (a minimal sketch follows this list).
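A minimal Python sketch of the assumed token discipline, in which only the token holder may inject a request, so all requests enter the ring in one global order:

```python
from collections import deque

# Minimal sketch (assumed semantics): a token circulates the ring; only the
# token holder may inject a request, which serializes memory operations.
class Node:
    def __init__(self, name):
        self.name = name
        self.pending = deque()          # requests waiting for the token

    def on_token(self, ring):
        if self.pending:                # inject at most one request per visit
            ring.append((self.name, self.pending.popleft()))
        # token then passes to the next node on the ring

ring_log = []
nodes = [Node(f"cpu{i}") for i in range(3)]
nodes[1].pending.append("St 0x230")
for n in nodes:                         # one revolution of the token
    n.on_token(ring_log)
print(ring_log)                         # [('cpu1', 'St 0x230')]
```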

Page 10

Test Rig

[Figure: mkMSICacheController block diagram. The controller wraps mkMSICache, with requestFIFO and responseFIFO on the CPU side, ringInFIFO and ringOutFIFO on the ring side, and internal waitReg, pending, and token state; the controller's behavior is expressed as rules]

Page 11

Test Rig

[Figure: mkMultiCacheTH test harness. mkMultiCache instantiates the cache controllers (Controller × n), chained through ringIn/ringOut FIFOs and each serving a Client; mkDataMemoryController (with toDMem and fromDMem FIFOs) connects the ring to mkDataMem through the dataReqQ and dataRespQ FIFOs; the modules communicate via rules]

Page 12

Test Rig (cont.)

• An additional module was implemented that takes a single stream of memory requests and deals them out to the individual CPU data request ports.

• This module can either send one request at a time, waiting for a response before moving on to the next CPU, or deal requests out as fast as the memory ports are ready.

• This demux allows individual-processor verification prior to multiprocessor verification.

• It can then be fed set test routines that exercise all the transitions, or be hooked up to the random request generator (a sketch of the dealer follows).
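A minimal Python sketch of such a dealer; the Port interface and all names are hypothetical:

```python
# Hypothetical sketch of the dealer module described above: one request
# stream dealt round-robin to per-CPU ports, lock-step or free-running.
class Port:
    """Stub for a CPU data request port (interface is an assumption)."""
    def __init__(self, cpu_id):
        self.cpu_id = cpu_id
    def put(self, req):
        print(f"cpu{self.cpu_id} <- {req}")
    def get_response(self):
        return f"resp(cpu{self.cpu_id})"

def deal(requests, ports, lockstep=True):
    for i, req in enumerate(requests):
        port = ports[i % len(ports)]     # deal to the next CPU in turn
        port.put(req)
        if lockstep:                     # verify one processor at a time
            port.get_response()

deal(["Ld 0x2B8", "St 0x230"], [Port(0), Port(1)])
```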

Page 13

Trace Example

=> Cache 2: toknMsg op->Tk8
=> Cache 5: toknMsg op->Tk2
=> Cache 3: ringMsg op->WrBk addr->0000022c data->aaaaaaaa valid->1 cache->1
=> Cache 3: getState I
=> Cache 1: newCpuReq St { addr=00000230, data=ba4f0452 }
=> Cache 1: getState I
=> Cycle = 56
=> Cache 2: toknMsg op->Tk7
=> Cache 6: ringMsg op->Rd addr->00000250 data->aaaaaaaa valid->1 cache->6
=> DataMem: ringMsg op->WrBk addr->00000374 data->aaaaaaaa valid->1 cache->5
=> Cache 6: getState I
=> Cache 8: ringReturn op->Wr addr->000003a8 data->aaaaaaaa valid->1 cache->7
=> Cache 8: getState I
=> Cache 8: writeLine state->M addr->000003a8 data->4ac6efe7
=> Cache 3: ringMsg op->WrBk addr->00000360 data->aaaaaaaa valid->1 cache->4
=> Cache 3: getState I
=> Cycle = 57
=> Cache 6: toknMsg op->Tk2
=> Cache 3: toknMsg op->Tk8
=> Cache 4: ringMsg op->WrBk addr->0000022c data->aaaaaaaa valid->1 cache->1
=> Cache 4: getState I
=> Cycle = 58
=> dMemReq: St { addr=00000374, data=aaaaaaaa }
=> Cache 3: toknMsg op->Tk7
=> Cache 7: ringReturn op->Rd addr->00000250 data->aaaaaaaa valid->1 cache->6
=> Cache 7: writeLine state->S addr->00000250 data->aaaaaaaa
=> Cache 7: getState I
=> Cache 1: ringMsg op->WrBk addr->00000374 data->aaaaaaaa valid->1 cache->5
=> Cache 1: getState I
=> Cache 4: ringMsg op->WrBk addr->00000360 data->aaaaaaaa valid->1 cache->4
=> Cache 4: getState I
=> Cache 9: ringMsg op->WrBk addr->000003a8 data->aaaaaaaa valid->1 cache->7
=> Cache 9: getState I
=> Cycle = 59
=> Cache 5: ringMsg op->WrBk addr->0000022c data->aaaaaaaa valid->1 cache->1
=> Cache 5: getState I
=> Cache 7: toknMsg op->Tk2
=> Cache 3: execCpuReq Ld { addr=000002b8, tag=00 }
=> Cache 3: getState I
=> Cache 4: toknMsg op->Tk8
=> Cycle = 60
=> DataMem: ringMsg op->WrBk addr->000003a8 data->aaaaaaaa valid->1 cache->7
=> Cache 2: ringMsg op->WrBk addr->00000374 data->aaaaaaaa valid->1 cache->5
=> Cache 2: getState I
=> Cache 8: ringMsg op->WrBk addr->00000250 data->aaaaaaaa valid->1 cache->6
=> Cache 8: getState I
=> Cache 5: ringReturn op->WrBk addr->00000360 data->aaaaaaaa valid->1 cache->4
=> Cache 5: getState S
=> Cycle = 61
=> Cache 5: toknMsg op->Tk8

Page 14

Design Exploration

• Scale up number of cache controllers

• Add additional tokens to the ring, allowing basic pipelining of memory requests

• Tokens service disjoint memory addresses (e.g., odd or even); see the sketch after this list

• Compare average memory access time versus number of tokens and number of active CPUs
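A hypothetical sketch of the odd/even idea, interleaving word addresses across tokens so each token services a disjoint address set:

```python
# Hypothetical sketch: assign each request to one of N tokens by address,
# so tokens service disjoint address sets (here, interleaved word indices;
# with n_tokens=2 this is exactly the odd/even split from the slide).
def token_for(addr: int, n_tokens: int, word_bytes: int = 4) -> int:
    return (addr // word_bytes) % n_tokens

assert token_for(0x22C, 2) == 1   # odd word index  -> token 1
assert token_for(0x230, 2) == 0   # even word index -> token 0
```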

Page 15

Test Results

Number of Controllers vs. Avg. Access Time (2 Tokens)

[Chart: average access time in clock cycles (y-axis, 0–30) against number of controllers (x-axis: 3, 6, 9)]

Page 16

Test Results

Number of Tokens vs. Avg. Access Time (9 Controllers)

[Chart: average access time in clock cycles (y-axis, 0–30) against number of tokens (x-axis: 2, 4, 8)]

Page 17

Placed and Routed

Page 18

Stats (9 caches, 8 tokens)

• Clock period: 3.71 ns (~270 MHz)
• Area: 1,296,726 µm² including memory
• Average memory access time: ~39 ns
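A quick arithmetic cross-check of these figures (unit conversion only, no new data):

```latex
f = \frac{1}{T} = \frac{1}{3.71\,\text{ns}} \approx 270\,\text{MHz},
\qquad
\frac{39\,\text{ns}}{3.71\,\text{ns/cycle}} \approx 10.5\,\text{cycles per average memory access}
```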