Multithreaded Architectures
Lecture 3 of 4
Supercomputing ’93 Tutorial
Friday, November 19, 1993
Portland, Oregon
Rishiyur S. Nikhil
Digital Equipment
Corporation
Cambridge Research Laboratory
Copyright (c) 1992, 1993 Digital Equipment Corporation
Supercomputing '93 Multithreaded Architectures Tutorial, Lecture 3
Overview of Lecture 3
Multithreading: a Dataflow Story
  Dataflow graphs as a machine language
  MIT Tagged Token Dataflow Architecture
  Manchester Dataflow
  ETL Sigma−1
  Explicit Token Store machines
    MIT/Motorola Monsoon
    ETL EM−4
  P−RISC: "RISC−ified" dataflow
    MIT/Motorola *T
Dataflow Graphs as a Machine Language
Code: a dataflow graph (an interconnection of instructions)
Conceptually, "tokens" carrying values flow along the edges of the graph.
Memories: two major components
  Heap, for data structures (e.g., I−structure memory)
  "Stack" (contexts, frames, ...)
Values on tokens may be memory addresses
Each instruction:
  Waits for tokens on all inputs (and possibly a memory synchronization condition)
  Consumes input tokens
  Computes output values based on input values
  Possibly reads/writes memory
  Produces tokens on outputs
No further restriction on instruction ordering
A common misconception:
  "Dataflow graphs and machines cannot express side effects, and they only implement functional languages"
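The firing rule above can be sketched as a tiny interpreter. This is a hypothetical Python encoding for illustration, not any machine's actual format: each instruction waits until tokens are present on all of its input ports, consumes them, computes, and emits result tokens, with no further ordering imposed.

```python
from collections import defaultdict

# Graph for (a + b) * (a - b).
# Each instruction: (operation, list of destination input ports).
GRAPH = {
    "add": (lambda x, y: x + y, [("mul", 0)]),
    "sub": (lambda x, y: x - y, [("mul", 1)]),
    "mul": (lambda x, y: x * y, [("out", 0)]),
}

def run(initial_tokens):
    """initial_tokens: list of ((instruction, port), value) pairs."""
    waiting = defaultdict(dict)       # instruction -> {port: value}
    queue = list(initial_tokens)      # tokens in flight, in no set order
    result = None
    while queue:
        (instr, port), value = queue.pop()
        if instr == "out":
            result = value
            continue
        waiting[instr][port] = value
        if len(waiting[instr]) == 2:  # tokens on all (binary) inputs: fire
            op, dests = GRAPH[instr]
            inputs = waiting.pop(instr)          # consume input tokens
            out = op(inputs[0], inputs[1])
            for dest in dests:                   # produce output tokens
                queue.append((dest, out))
    return result

# a = 7, b = 3: feed both values to "add" and to "sub".
print(run([(("add", 0), 7), (("add", 1), 3),
           (("sub", 0), 7), (("sub", 1), 3)]))   # -> 40
```

Because instructions fire purely on token availability, the pop order of the queue does not affect the result, which illustrates the "no further restriction on instruction ordering" point above.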
Encoding Dataflow Graphs and Tokens
Conceptual encoding of graph: Program memory (one entry per instruction):

  Addr   Opcode   Destination(s)
  109    op1      120L
  113    op2      120R
  120    +        141, 159L
  141    op3      ...
  159    op4      ...

Encoding of token: a "packet" containing:

  [ 120R | 6.847 ]
    120R   destination instruction address, Left/Right port
    6.847  value

Re−entrancy ("dynamic" dataflow):
  Each invocation of a function or loop iteration gets its own, unique "Context"
  Tokens destined for the same instruction in different invocations are distinguished by a context identifier

  [ 120R | Ctxt | 6.847 ]
    120R   destination instruction address, Left/Right port
    Ctxt   context identifier
    6.847  value
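The dynamic token format above can be sketched as a record. The field names here are illustrative, not the actual bit-level encoding of any machine:

```python
from typing import NamedTuple

class Token(NamedTuple):
    addr: int      # destination instruction address (e.g. 120)
    port: str      # "L" or "R" input port of a binary instruction
    ctxt: int      # context identifier (function invocation / iteration)
    value: float

# The same static instruction (address 120) active in two invocations:
t1 = Token(addr=120, port="R", ctxt=1, value=6.847)
t2 = Token(addr=120, port="R", ctxt=2, value=2.718)

# Tokens belong to the same activation only if both the instruction
# address and the context identifier agree, so separate invocations of
# re-entrant code never interfere:
def same_activation(a: Token, b: Token) -> bool:
    return a.addr == b.addr and a.ctxt == b.ctxt

print(same_activation(t1, t2))  # -> False
```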
MIT Tagged Token Dataflow Architecture
Designed by Arvind et al., MIT, early 1980s
Simulated; never built
Global view:
  Processor Nodes (including local program and "stack" memory)
  Resource Manager Nodes
  "I−Structure" Memory Nodes (global heap data memory)
  all connected by an Interconnection Network
Resource Manager Nodes responsible for:
  Function allocation (allocation of context identifiers)
  Heap allocation, etc.
Stack memory and heap memory: globally addressed
MIT Tagged Token Dataflow Architecture: Processor

Pipeline stages (tokens circulate around the loop):
  Token Queue
  Wait−Match Unit (with associative Waiting Token memory)
  Instruction Fetch (from Program memory)
  Execute Op
  Form Tokens
  Output (to network; tokens also arrive from the network)

Wait−Match Unit:
  Tokens for unary ops go straight through
  Tokens for binary ops: try to match the incoming token against a waiting token with the same instruction address and context id
    Success: both tokens forwarded
    Fail: incoming token goes into the Waiting Token memory; a bubble (no−op) is forwarded
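The matching rule can be sketched in Python; as a simplifying assumption, the associative Waiting Token memory is modelled as a dictionary keyed on the match key (instruction address, context id):

```python
# Token layout (illustrative): (instruction address, port, context id, value)

def wait_match(waiting, token, is_unary):
    """Return the tokens forwarded down the pipeline (possibly none)."""
    addr, port, ctxt, value = token
    if is_unary:                  # tokens for unary ops go straight through
        return [token]
    key = (addr, ctxt)
    if key in waiting:            # success: a partner token was waiting
        partner = waiting.pop(key)
        return [partner, token]   # both tokens forwarded together
    waiting[key] = token          # fail: park the token ...
    return []                     # ... and forward a bubble (no-op)

waiting = {}
print(wait_match(waiting, (120, "L", 0, 6.001), is_unary=False))  # -> []
print(wait_match(waiting, (120, "R", 0, 6.847), is_unary=False))
# -> [(120, 'L', 0, 6.001), (120, 'R', 0, 6.847)]
```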
MIT Tagged Token Dataflow Architecture: Processor Operation

Example: incoming token (120R, c, 6.847); waiting token (120L, c, 6.001) already in the Waiting Token memory; Program memory entry at 120: "+", destinations 141, 159L.

  Wait−Match Unit: match succeeds; forwards (120, c, 6.001, 6.847)
  Instruction Fetch: reads "120: +  141, 159L"; forwards (141, 159L, c, +, 6.001, 6.847)
  Execute Op: computes 6.001 + 6.847; forwards (141, 159L, c, 12.848)
  Form Tokens: produces tokens (141, c, 12.848) and (159L, c, 12.848)
  Output
Output Unit routes tokens, based on the addresses on the token:
  back to the local Token Queue
  to another Processor
  to heap memory
Tokens arriving from the network are placed in the Token Queue
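The processor operation traced above can be reproduced with a small software model of the match / fetch / execute / form-tokens path. The encoding is hypothetical (the real pipeline is hardware), and only the binary-op case is modelled:

```python
# Program memory: address -> (opcode, destinations).
# A destination port of None stands for a unary consumer (plain "141"
# in the trace, versus "159L").
PROGRAM = {120: ("+", [(141, None), (159, "L")])}
OPS = {"+": lambda a, b: a + b}

def step(waiting, token_queue, output):
    """Carry one token through the pipeline stages."""
    addr, port, ctxt, value = token_queue.pop(0)
    key = (addr, ctxt)
    if key not in waiting:                 # Wait-Match: no partner yet
        waiting[key] = (port, value)
        return
    lport, lvalue = waiting.pop(key)       # Wait-Match: success
    op, dests = PROGRAM[addr]              # Instruction Fetch
    left, right = (lvalue, value) if lport == "L" else (value, lvalue)
    result = OPS[op](left, right)          # Execute Op
    for daddr, dport in dests:             # Form Tokens: one per destination
        output.append((daddr, dport, ctxt, result))

waiting, queue, output = {}, [(120, "L", 7, 6.001), (120, "R", 7, 6.847)], []
step(waiting, queue, output)   # first token parks in Waiting Token memory
step(waiting, queue, output)   # second token matches; "+" fires
print(output)  # two result tokens, for 141 and 159L, carrying 6.001 + 6.847
```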