Computer Architecture System Interface Units


Computer Architecture

System Interface Units

[Title-slide photo: Iolanthe II approaches Coromandel Harbour]

System Interface Unit (also BusIU)

• Positioned between the cache and the system bus (external to the die)

System Interface Unit (also BusIU)

• Cache ↔ main memory bus
• Responsible for:

• Matching the cache line length to the memory bus width (see the sketch after this list)
  • e.g. PowerPC 601: 32-byte cache line, 64-bit (8-byte) bus
  • Memory transactions are “bursts” of 4 double words
• Giving priority to reads
  • Read requests stall the CPU pipeline
  • Writes are assumed “complete” when they exit the pipeline
• Detecting reads of data still in the write buffers
• Cache coherence - later!
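
As a rough illustration of the line-to-bus matching above, the 32-byte line and 64-bit bus figures are the PowerPC 601 numbers from this slide; the little program itself is only a hypothetical sketch, not anything taken from the 601:

/* Hypothetical sketch: how many bus beats a cache-line fill needs.
 * Slide figures: PowerPC 601, 32-byte line, 64-bit (8-byte) data bus. */
#include <stdio.h>

int main(void) {
    const unsigned line_bytes = 32;   /* cache line length          */
    const unsigned bus_bytes  = 8;    /* 64-bit data bus = 8 bytes  */
    unsigned beats = line_bytes / bus_bytes;

    /* 32 / 8 = 4: a "burst" of 4 double-word transfers per line fill */
    printf("burst of %u double-word transfers per line fill\n", beats);
    return 0;
}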

System Interface Unit

• Read queue
  • 2-4 entries
  • A read will fetch a whole cache line (PowerPC: 8 words)
  • Need to fetch the requested word first (see the sketch below)
    • It may be any of words 0-7 in the cache line
    • Bus transactions may be “out of order” - requested word delivered first
• Bus clock << processor clock
  • 30-60 MHz vs 200-600 MHz
  • 2 words / bus cycle, i.e. 2 words every 3-20 CPU cycles!
  • Many CPU cycles if the requested word is the last one read!
  • (2003 data - add a factor of 3 or so!)
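
A minimal sketch of “requested word first” ordering, assuming the figures quoted above (8-word line, 2 words per bus beat); the code is illustrative only, not the actual PowerPC burst logic. The burst wraps around the line, starting at the beat that holds the word the CPU asked for, so the critical word arrives on the first beat instead of possibly the last:

/* Hypothetical sketch of "requested word first" (wrapping burst) order. */
#include <stdio.h>

int main(void) {
    const unsigned words_per_line = 8;   /* PowerPC: 8-word cache line       */
    const unsigned words_per_beat = 2;   /* 64-bit bus, 32-bit words         */
    const unsigned beats = words_per_line / words_per_beat;
    unsigned requested = 5;              /* CPU asked for word 5 of the line */

    /* Start at the beat holding the requested word, then wrap around. */
    unsigned first_beat = requested / words_per_beat;
    for (unsigned i = 0; i < beats; i++) {
        unsigned beat = (first_beat + i) % beats;
        printf("beat %u: words %u-%u\n",
               i, beat * words_per_beat, beat * words_per_beat + 1);
    }
    return 0;
}

With requested = 5, the beats deliver words 4-5, 6-7, 0-1, 2-3: the CPU can restart as soon as the first beat arrives.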

System Interface Unit

• Write buffer (see the sketch after this list)
  • 2-3 entries
  • Lower priority than the read buffer
• Additional buffers (R10000)
  • Incoming buffer
    • The External Agent (EA) supplying data writes into it
    • The processor doesn’t control the transfer
    • The EA signals completion and the data is forwarded to the cache
  • Uncached buffer
    • Pages may be marked “not cached” for faster I/O transactions
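
The earlier point about detecting reads of data still in the write buffers can be pictured as an associative check over the pending writes. This is only an illustrative sketch; the entry count matches the 2-3 entries quoted above, but the structure and names are assumptions, not the R10000’s real hardware:

/* Hypothetical sketch: a read checks the write buffer before going to
 * memory, so it never misses data that is still waiting to be written. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WB_ENTRIES 3                     /* slides: 2-3 entries */

struct wb_entry { bool valid; uint32_t addr; uint32_t data; };
static struct wb_entry wb[WB_ENTRIES];

/* Forward the buffered data if a pending write matches the read address. */
static bool read_hits_write_buffer(uint32_t addr, uint32_t *data) {
    for (int i = 0; i < WB_ENTRIES; i++)
        if (wb[i].valid && wb[i].addr == addr) { *data = wb[i].data; return true; }
    return false;
}

int main(void) {
    wb[0] = (struct wb_entry){ true, 0x1000, 0xCAFE };   /* write still queued */
    uint32_t v;
    if (read_hits_write_buffer(0x1000, &v))
        printf("read of 0x1000 forwarded from the write buffer: 0x%X\n", (unsigned)v);
    else
        printf("read of 0x1000 goes to memory\n");
    return 0;
}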

System Interface Unit

• Overlap ⇒ greater bus utilisation
• Separate Address and Data buses
• Multiplexing the two types of information on the same bus:
  • Saves pins
  • Costs time - unable to assert both at once
  • Expensive for writes!
  • Required since …. whenever
• Separate Address and Data bus tenure
  • Address and data phases split
    • Issue address1, wait for data1
    • Issue address2 while waiting for data1

PowerPC organisation

• PowerPC 601, ~1993
• [Block diagram - the boundary of the Si die is marked]
• New - look in the “Example Processors” section of the Web notes


• 3-way SuperScalar
  • Integer
  • Branch
  • Floating Point


• MMU
  • Unified TLB (Data and I-misses)
  • Instruction TLB


• Cache
  • Unified (Data and Instructions)


• SIU
  • Read Q (2 entries)
  • Write Q (3 entries)

Bus Transactions

• Primary task of the SIU: implement the bus protocol
• Sequence of signals and responses (summarised in the sketch after this list):
  1. Requesting access
     • Several devices can be bus masters
  2. Granting access
  3. Indicating the type of transaction
     • Read, write, read-modify-write, cache invalidate, …
  4. Transaction tag
     • Allows multiple transactions ‘in flight’
     • Slow devices don’t block the system
  5. Address
  6. Data
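
The six steps above can be summarised as the fields a single transaction carries; the structure and names below are illustrative only (they are not actual PowerPC bus signal names):

/* Hypothetical sketch: what one split-transaction bus request carries,
 * following the six-step sequence on this slide. */
#include <stdint.h>
#include <stdio.h>

enum bus_op { BUS_READ, BUS_WRITE, BUS_READ_MODIFY_WRITE, BUS_CACHE_INVALIDATE };

struct bus_transaction {
    unsigned    master_id;  /* 1-2: which master requested / was granted the bus */
    enum bus_op op;         /* 3: type of transaction                            */
    uint8_t     tag;        /* 4: lets several transactions be "in flight"       */
    uint32_t    address;    /* 5: address phase                                  */
    uint64_t    data;       /* 6: data phase (one double word per beat)          */
};

int main(void) {
    struct bus_transaction t = { .master_id = 0, .op = BUS_READ,
                                 .tag = 3, .address = 0x2000, .data = 0 };
    printf("master %u, tag %u: read of 0x%X\n",
           t.master_id, (unsigned)t.tag, (unsigned)t.address);
    return 0;
}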

Bus Protocols

• Depend on the type of bus
• Serial
  • A single (or small number of) data wires
• Parallel
  • Smallest number of wires: multiplexed address and data
  • Separate address and data
    • Most complex - requires separate grants for the address and data buses
    • Highest throughput
    • Greatest tolerance for slow devices
    • See the following diagram


Separate Address and Data buses

• Overlap ⇒ greater bus utilisation
• Separate Address and Data buses
  • Separate Address and Data bus tenure
  • Address and data phases split (see the sketch below)
    • Issue address1, wait for data1
    • Issue address2 while waiting
  • Increases utilisation of both buses
• Device latency can be long
  • The SIU doesn’t idle while a device responds to an access request
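
To see why split tenure helps, here is a toy occupancy calculation; the cycle counts are made-up assumptions, not measured figures. With a single combined tenure the bus is held for the whole access, whereas with split tenure the next address is issued while the previous data is still outstanding:

/* Hypothetical sketch: bus cycles for n back-to-back reads, with and
 * without split address/data tenure. All cycle counts are assumptions. */
#include <stdio.h>

int main(void) {
    const int n = 4;                      /* back-to-back reads            */
    const int addr = 1;                   /* cycles to issue an address    */
    const int wait = 10;                  /* device latency before data    */
    const int data = 4;                   /* data beats per read           */

    /* Combined tenure: the bus is held from address until the last beat. */
    int combined = n * (addr + wait + data);

    /* Split tenure: after the first access, later addresses are issued
       while earlier data is outstanding, so the data beats dominate.     */
    int split = addr + wait + n * data;

    printf("combined tenure: %d cycles\n", combined);   /* 60 */
    printf("split tenure:    %d cycles\n", split);      /* 27 */
    return 0;
}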

System Interface Unit

• Next generation (e.g. PowerPC 620)
  • Multiple transactions active at any time
  • Each transaction identified by a transaction ID (3 bits on the bus) - see the sketch below
  • Allows multiple processors to interleave transactions on the bus
  • Further tolerance for long, variable latencies
    • Memory and devices may have different latencies
      • Multiple levels of memory
      • Devices: discs, networks, graphics, etc.
    • A very long latency operation doesn’t block a (just) long latency one
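
A sketch of how a transaction ID lets a fast reply complete ahead of a slow one; the 3-bit tag width comes from the slide, but the matching logic is an assumption, not the PowerPC 620’s actual implementation:

/* Hypothetical sketch: outstanding transactions matched by a 3-bit tag,
 * so a slow device's reply does not block a faster one. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_TAGS 8                        /* 3 bits on the bus -> 8 tags */

struct pending { bool active; uint32_t addr; };
static struct pending outstanding[MAX_TAGS];

static void issue(uint8_t tag, uint32_t addr) {
    outstanding[tag] = (struct pending){ true, addr };
    printf("issued tag %u for address 0x%X\n", (unsigned)tag, (unsigned)addr);
}

static void complete(uint8_t tag) {       /* replies may arrive in any order */
    if (outstanding[tag].active) {
        printf("tag %u complete (address 0x%X)\n",
               (unsigned)tag, (unsigned)outstanding[tag].addr);
        outstanding[tag].active = false;
    }
}

int main(void) {
    issue(0, 0x1000);   /* very slow device, e.g. a disc            */
    issue(1, 0x2000);   /* ordinary memory read                     */
    complete(1);        /* fast reply returns first - not blocked   */
    complete(0);        /* slow reply matched later by its tag      */
    return 0;
}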

Parallelism

• Modern systems derive performance from the ability to perform many tasks in parallel
• Datapath
  • Pipelined - one instruction per stage
• Instruction Issue Unit
  • Instruction Q
• Superscalar
  • 3+ functional units execute multiple instructions
• SIU
  • I/O transaction Qs
• System bus
  • Several transactions active at any time
• Peripherals
  • Several devices servicing requests at the same time