COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses


Page 1: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

COMP 206 Computer Architecture and Implementation
Unit 8b: Cache Misses

Siddhartha Chatterjee

Fall 2000

Page 2: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Cache Performance

Average memory access time = Hit time + Miss rate × Miss penalty

Misses / Instruction = (MM refs / Instruction) × Miss rate

CPU time = IC × (CPI_Pipeline + (Misses / Instruction) × Miss penalty) × Cycle time

Bus traffic ratio = Bus traffic with cache / Bus traffic without cache
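To make the formulas concrete, here is a minimal C sketch that plugs in invented parameter values (every number below is an assumption for illustration, not a measurement):

#include <stdio.h>

int main(void) {
    /* Assumed, illustrative parameters */
    double hit_time      = 1.0;   /* cycles */
    double miss_rate     = 0.05;  /* misses per memory reference */
    double miss_penalty  = 40.0;  /* cycles */

    double amat = hit_time + miss_rate * miss_penalty;

    double ic            = 1e9;   /* instruction count */
    double cpi_pipeline  = 1.2;   /* base CPI ignoring memory stalls */
    double refs_per_inst = 1.3;   /* MM refs per instruction */
    double cycle_time    = 2e-9;  /* seconds */

    double misses_per_inst = refs_per_inst * miss_rate;  /* Misses / Instruction */
    double cpu_time = ic * (cpi_pipeline + misses_per_inst * miss_penalty)
                         * cycle_time;

    printf("AMAT = %.2f cycles, CPU time = %.3f s\n", amat, cpu_time);
    return 0;
}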

Page 3: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Block Size Tradeoff

[Figure: three plots against block size. Miss rate first falls (exploits spatial locality) and then rises once there are too few blocks (compromises temporal locality); miss penalty grows with block size; average access time therefore has a minimum at an intermediate block size.]

In general, a larger block size takes advantage of spatial locality, BUT:
- A larger block size means a larger miss penalty: it takes longer to fill up the block.
- If the block size is too big relative to the cache size, the miss rate goes up: too few cache blocks.

Average access time = Hit time + Miss rate × Miss penalty
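A toy C model of this tradeoff, using an invented miss-rate curve and a miss penalty that grows linearly with block size; only the shape of the result matters:

#include <stdio.h>

int main(void) {
    /* Assumed model: miss rate falls with block size until blocks become
       scarce, then rises; penalty grows linearly with block size. */
    int sizes[]        = {16, 32, 64, 128, 256};
    double miss_rate[] = {0.070, 0.050, 0.042, 0.048, 0.080}; /* invented */
    double hit_time    = 1.0;  /* cycles */

    for (int i = 0; i < 5; i++) {
        double miss_penalty = 20.0 + sizes[i] / 4.0;  /* cycles to fill block */
        double amat = hit_time + miss_rate[i] * miss_penalty;
        printf("block %3dB: AMAT = %.2f cycles\n", sizes[i], amat);
    }
    return 0;
}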

Page 4: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Sources of Cache Misses

- Compulsory (cold start, process migration, first reference): the first access to a block. A "cold" fact of life: not a whole lot you can do about it.
- Conflict/collision/interference: multiple memory locations map to the same cache location. Solution 1: increase cache size. Solution 2: increase associativity.
- Capacity: the cache cannot contain all blocks accessed by the program. Solution 1: increase cache size. Solution 2: restructure the program.
- Coherence/invalidation: another process (e.g., I/O) updates memory.

Page 5: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

The 3C Model of Cache Misses (based on comparison with another cache)

- Compulsory: the first access to a block cannot be in the cache, so the block must be brought into the cache. Also called cold-start misses or first-reference misses. (Misses even in an infinite cache.)
- Capacity: if the cache cannot contain all the blocks needed during execution of a program (its working set), capacity misses occur because blocks are discarded and later retrieved. (Misses in a fully associative cache of size X.)
- Conflict: if the block-placement strategy is set-associative or direct-mapped, conflict misses (in addition to compulsory and capacity misses) occur because a block can be discarded and later retrieved if too many blocks map to its set. Also called collision or interference misses. (Misses in an A-way set-associative cache of size X but not in a fully associative cache of size X.)
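These definitions can be made mechanical: replay a block-address trace against (a) the set of blocks ever touched, (b) a fully associative LRU cache of the same capacity, and (c) the real cache. A minimal C sketch, with a deliberately tiny direct-mapped cache and an invented trace:

#include <stdio.h>
#include <string.h>

#define NBLOCKS 4     /* cache capacity in blocks (assumed, tiny for demo) */
#define MAXSEEN 1024

/* Fully associative LRU cache of NBLOCKS entries; returns 1 on hit. */
static long fa[NBLOCKS]; static int fa_n;
static int fa_access(long b) {
    for (int i = 0; i < fa_n; i++)
        if (fa[i] == b) {                 /* hit: move to front (MRU) */
            memmove(&fa[1], &fa[0], i * sizeof fa[0]);
            fa[0] = b; return 1;
        }
    if (fa_n < NBLOCKS) fa_n++;           /* miss: LRU (last slot) falls off */
    memmove(&fa[1], &fa[0], (fa_n - 1) * sizeof fa[0]);
    fa[0] = b; return 0;
}

/* Direct-mapped cache: block b maps to set b % NBLOCKS; returns 1 on hit. */
static long dm[NBLOCKS]; static int dm_valid[NBLOCKS];
static int dm_access(long b) {
    int s = (int)(b % NBLOCKS);
    if (dm_valid[s] && dm[s] == b) return 1;
    dm_valid[s] = 1; dm[s] = b; return 0;
}

/* Has this block ever been referenced before? */
static long seen[MAXSEEN]; static int nseen;
static int seen_before(long b) {
    for (int i = 0; i < nseen; i++) if (seen[i] == b) return 1;
    seen[nseen++] = b; return 0;
}

int main(void) {
    long trace[] = {0, 4, 0, 4, 1, 2, 3, 5, 0};   /* invented block trace */
    int comp = 0, cap = 0, conf = 0;
    for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++) {
        long b = trace[i];
        int was_seen = seen_before(b);
        int fa_hit = fa_access(b);
        if (!dm_access(b)) {              /* a miss in the real (DM) cache */
            if (!was_seen)    comp++;     /* first-ever reference */
            else if (!fa_hit) cap++;      /* full associativity misses too */
            else              conf++;     /* only the placement is to blame */
        }
    }
    printf("compulsory=%d capacity=%d conflict=%d\n", comp, cap, conf);
    return 0;
}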

Page 6: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

                    Direct Mapped   N-way Set Associative   Fully Associative
Cache size          Big             Medium                  Small
Compulsory miss     Same            Same                    Same
Conflict miss       High            Medium                  Zero
Capacity miss       Low(er)         Medium                  High
Invalidation miss   Same            Same                    Same

If you are going to run "billions" of instructions, compulsory misses are insignificant.

Page 7: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

3Cs Absolute Miss Rate

[Figure: miss rate per type (0 to 0.14) vs. cache size (1 KB to 128 KB), decomposed into compulsory, capacity, and conflict components for 1-way, 2-way, 4-way, and 8-way associativity. The conflict component shrinks as associativity grows.]

Page 8: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

3Cs Relative Miss Rate

[Figure: the same data normalized: miss rate per type as a percentage of the total (0% to 100%) vs. cache size (1 KB to 128 KB), for 1-way through 8-way associativity, split into compulsory, capacity, and conflict components.]

Page 9: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

How to Improve Cache Performance

Latency:
- Reduce miss rate (Section 5.3 of HP2)
- Reduce miss penalty (Section 5.4 of HP2)
- Reduce hit time (Section 5.5 of HP2)

Bandwidth:
- Increase hit bandwidth
- Increase miss bandwidth

Page 10: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

1. Reduce Misses via Larger Block Size

[Figure: miss rate (0% to 25%) vs. block size (16 to 256 bytes) for cache sizes 1K, 4K, 16K, 64K, and 256K. Larger blocks reduce the miss rate for the larger caches, but for the smallest caches the miss rate turns back up at the largest block sizes.]

Page 11: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

2. Reduce Misses via Higher Associativity

2:1 Cache Rule: the miss rate of a direct-mapped cache of size N is about the same as that of a 2-way set-associative cache of size N/2. This is not merely empirical: there is theoretical justification in Sleator and Tarjan, "Amortized efficiency of list update and paging rules", CACM, 28(2):202-208, 1985.

Beware: execution time is the only final measure! Will the clock cycle time increase? Hill [1988] suggested hit time grows roughly 10% for an external cache and 2% for an internal cache for 2-way vs. 1-way.

[Figure: the 3Cs absolute miss rate chart: miss rate per type (0 to 0.14) vs. cache size (1 KB to 128 KB) for 1-way through 8-way associativity, with the capacity and compulsory components marked.]

Page 12: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Example: Average Memory Access Time vs. Miss Rate

Example: assume the clock cycle time is 1.10 for 2-way, 1.12 for 4-way, and 1.14 for 8-way, relative to the clock cycle time of a direct-mapped cache. (In the original slide, red entries marked where A.M.A.T. is not improved by more associativity.)

Cache size (KB)   1-way   2-way   4-way   8-way
1                 2.33    2.15    2.07    2.01
2                 1.98    1.86    1.76    1.68
4                 1.72    1.67    1.61    1.53
8                 1.46    1.48    1.47    1.43
16                1.29    1.32    1.32    1.32
32                1.20    1.24    1.25    1.27
64                1.14    1.20    1.21    1.23
128               1.10    1.17    1.18    1.20

Page 13: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

3. Reduce Conflict Misses via Victim Cache

How can we combine the fast hit time of a direct-mapped cache and still avoid conflict misses? Add a small, highly associative buffer to hold data recently discarded from the cache. Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of the conflict misses for a 4 KB direct-mapped data cache.

[Diagram: the CPU's address is checked against both the direct-mapped cache (tag/data) and the small victim cache (tag/data); misses that hit in the victim buffer avoid going to memory.]
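A minimal sketch of the lookup path, assuming a direct-mapped main cache backed by a 4-entry fully associative buffer with swap-on-victim-hit; the sizes, names, and FIFO replacement are illustrative assumptions:

#include <stdio.h>
#include <stdbool.h>

#define SETS 128                /* direct-mapped main cache (assumed size) */
#define VICTIMS 4               /* Jouppi's 4-entry victim buffer */

static struct { bool valid; unsigned long tag; } cache[SETS];
static struct { bool valid; unsigned long block; } victim[VICTIMS];
static int victim_next;         /* simple FIFO replacement in the buffer */

/* Returns true on a hit in either structure. */
bool access_block(unsigned long block) {
    unsigned idx = block % SETS;
    unsigned long tag = block / SETS;

    if (cache[idx].valid && cache[idx].tag == tag)
        return true;                            /* fast direct-mapped hit */

    for (int i = 0; i < VICTIMS; i++)
        if (victim[i].valid && victim[i].block == block) {
            /* victim hit: swap the conflicting lines (slow hit) */
            unsigned long displaced = cache[idx].tag * SETS + idx;
            bool had_line = cache[idx].valid;
            cache[idx].valid = true;  cache[idx].tag = tag;
            victim[i].valid = had_line;
            victim[i].block = displaced;
            return true;                        /* no memory access needed */
        }

    /* full miss: the displaced line is saved in the victim buffer */
    if (cache[idx].valid) {
        victim[victim_next].valid = true;
        victim[victim_next].block = cache[idx].tag * SETS + idx;
        victim_next = (victim_next + 1) % VICTIMS;
    }
    cache[idx].valid = true;
    cache[idx].tag = tag;
    return false;
}

int main(void) {
    /* two blocks that collide in the direct-mapped cache */
    unsigned long trace[] = {7, 7 + SETS, 7, 7 + SETS, 7};
    for (int i = 0; i < 5; i++)
        printf("block %lu: %s\n", trace[i],
               access_block(trace[i]) ? "hit" : "miss");
    return 0;
}

After the first two cold misses, the conflicting pair keeps hitting: each access that misses the main cache finds its block in the victim buffer and swaps it back in.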

Page 14: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

4. Reduce Conflict Misses via Pseudo-Associativity

How can we combine the fast hit time of a direct-mapped cache with the lower conflict misses of a 2-way set-associative cache? Divide the cache: on a miss, check the other half of the cache to see if the block is there; if so, we have a pseudo-hit (a slow hit).

Drawback: the CPU pipeline is hard to design if a hit can take 1 or 2 cycles. The technique is better for caches not tied directly to the processor.

[Timing: Hit time < Pseudo-hit time < Miss penalty.]
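A sketch of the lookup, assuming the common trick of flipping the top index bit to locate the alternate ("pseudo") set and swapping on a pseudo-hit; the placement policy here is a simplification, and all names are invented:

#include <stdio.h>
#include <stdbool.h>

#define SETS 128                            /* power of two assumed */

static struct { bool valid; unsigned long block; } cache[SETS];

/* Returns 0 for a miss, 1 for a fast hit, 2 for a slow pseudo-hit. */
int access_block(unsigned long block) {
    unsigned idx = block % SETS;
    unsigned alt = idx ^ (SETS >> 1);       /* flip the top index bit */

    if (cache[idx].valid && cache[idx].block == block)
        return 1;                           /* fast hit: 1 cycle */

    if (cache[alt].valid && cache[alt].block == block) {
        /* pseudo-hit: swap so the next access is a fast hit */
        cache[alt] = cache[idx];
        cache[idx].valid = true;
        cache[idx].block = block;
        return 2;                           /* slow hit: extra cycle */
    }

    /* miss: the displaced primary occupant moves to the alternate set */
    if (cache[idx].valid) cache[alt] = cache[idx];
    cache[idx].valid = true;
    cache[idx].block = block;
    return 0;
}

int main(void) {
    unsigned long a = 7, b = 7 + SETS;      /* conflict in a plain DM cache */
    unsigned long trace[] = {a, b, a, b};
    static const char *what[] = {"miss", "fast hit", "pseudo-hit"};
    for (int i = 0; i < 4; i++)
        printf("block %lu: %s\n", trace[i], what[access_block(trace[i])]);
    return 0;
}

A plain direct-mapped cache would miss on every access of this trace; here the two cold misses are followed by pseudo-hits.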

Page 15: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

5. Reduce Misses by Hardware Prefetching of Instructions & Data

Instruction prefetching: the Alpha 21064 fetches 2 blocks on a miss; the extra block is placed in a stream buffer, and the stream buffer is checked on a miss.

This works with data blocks too. Jouppi [1990]: 1 data stream buffer caught 25% of the misses from a 4KB cache; 4 streams caught 43%. Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of the misses from two 64KB 4-way set-associative caches.

Prefetching relies on extra memory bandwidth that can be used without penalty.

Page 16: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

6. Reducing Misses by Software Prefetching of Data

Data prefetch comes in two forms:
- Load data into a register (HP PA-RISC loads): binding.
- Cache prefetch: load into the cache only (MIPS IV, PowerPC, SPARC v9): non-binding.

Special prefetching instructions cannot cause faults; this is a form of speculative execution.

Issuing prefetch instructions takes time. Is the cost of issuing prefetches < the savings in reduced misses? (See the sketch below.)
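As a concrete illustration (compiler-specific, not from the slides), GCC and Clang expose non-binding cache prefetch through the __builtin_prefetch intrinsic; the prefetch distance below is an assumed tuning parameter:

#include <stdio.h>

#define N 100000
#define AHEAD 16                /* prefetch distance: tune per machine */

int main(void) {
    static double a[N];         /* zero-initialized demo data */
    double sum = 0.0;

    for (int i = 0; i < N; i++) {
        /* Non-binding prefetch: a hint only, cannot fault.
           Args: address, rw (0 = read), temporal locality (0..3). */
        if (i + AHEAD < N)
            __builtin_prefetch(&a[i + AHEAD], 0, 1);
        sum += a[i];
    }
    printf("%f\n", sum);
    return 0;
}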

Page 17: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

7. Reduce Misses by Compiler Optimizations

Instructions:
- Reorder procedures in memory so as to reduce misses; use profiling to look at conflicts. McFarling [1989] reduced cache misses by 75% on an 8KB direct-mapped cache with 4-byte blocks.

Data:
- Merging arrays: improve spatial locality with a single array of compound elements instead of 2 separate arrays.
- Loop interchange: change the nesting of loops to access data in the order it is stored in memory.
- Loop fusion: combine two independent loops that have the same looping structure and overlapping variables.
- Blocking: improve temporal locality by accessing "blocks" of data repeatedly instead of going down whole columns or rows.

Page 18: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Merging Arrays Example

Merging reduces conflicts between val and key; the addressing expressions are different.

/* Before */
int val[SIZE];
int key[SIZE];

/* After */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];

Page 19: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Loop Interchange Example

Sequential accesses instead of striding through memory every 100 words.

/* Before */
for (k = 0; k < 100; k++)
    for (j = 0; j < 100; j++)
        for (i = 0; i < 5000; i++)
            x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k++)
    for (i = 0; i < 5000; i++)
        for (j = 0; j < 100; j++)
            x[i][j] = 2 * x[i][j];

Page 20: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Loop Fusion Example

Before fusion: two misses per access to a and c; after: one miss per access.

/* Before */
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
        d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }

Page 21: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Blocking Example

Two inner loops:
- Read all N×N elements of z[]
- Read N elements of 1 row of y[] repeatedly
- Write N elements of 1 row of x[]

Capacity misses are a function of N and the cache size: if the cache can hold all 3 N×N matrices, there are no capacity misses; otherwise ...

Idea: compute on a B×B submatrix that fits in the cache.

/* Before */
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++) {
        r = 0;
        for (k = 0; k < N; k++)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    }

Page 22: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Blocking Example

Capacity misses go from 2N³ + N² down to 2N³/B + N², where B is called the blocking factor (chosen so a B×B submatrix fits in the cache). What happens to conflict misses?

/* After (x[][] assumed initialized to zero) */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i++)
            for (j = jj; j < min(jj+B,N); j++) {
                r = 0;
                for (k = kk; k < min(kk+B,N); k++)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            }

Page 23: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Reducing Conflict Misses by Blocking

Conflict misses in non-fully-associative caches vs. blocking factor: Lam et al. [1991] found that a blocking factor of 24 had a fifth the misses of 48, despite the fact that both fit in the cache.

[Figure: miss rate (0 to 0.1) vs. blocking factor (0 to 150), comparing a fully associative cache against a direct-mapped cache.]

Page 24: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Summary of Compiler Optimizations to Reduce Cache Misses

[Figure: performance improvement factor (1 to 3) from merged arrays, loop interchange, loop fusion, and blocking, for the benchmarks compress, cholesky (nasa7), spice, mxm (nasa7), btrix (nasa7), tomcatv, gmty (nasa7), and vpenta (nasa7).]

Page 25: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

1. Reduce Miss Penalty: Read Priority over Write on Miss

Write-through caches with write buffers risk RAW conflicts between buffered writes and main-memory reads on cache misses. Simply waiting for the write buffer to empty might increase the read miss penalty by 50% (old MIPS 1000). Better: check the write buffer contents before the read; if there are no conflicts, let the memory access continue.

What about write back? Consider a read miss replacing a dirty block. Normal: write the dirty block to memory, and then do the read. Instead: copy the dirty block to a write buffer, then do the read, and then do the write. The CPU stalls less, since it restarts as soon as the read completes.
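A minimal sketch of the read-miss check against a small write buffer. The forwarding policy shown (newest matching entry wins) is one possible design, not the slides' method, and all names are invented:

#include <stdio.h>
#include <stdbool.h>

#define WBUF 4

struct pending { bool valid; unsigned long addr; unsigned long data; };
static struct pending wbuf[WBUF];

/* On a read miss, scan the write buffer before going to memory: if a
   buffered write matches, satisfy the RAW dependence from the buffer
   instead of stalling until the buffer drains. */
bool read_check_wbuf(unsigned long addr, unsigned long *data) {
    for (int i = WBUF - 1; i >= 0; i--)   /* newest entry checked first */
        if (wbuf[i].valid && wbuf[i].addr == addr) {
            *data = wbuf[i].data;
            return true;
        }
    return false;                          /* no conflict: read memory */
}

int main(void) {
    wbuf[0] = (struct pending){true, 0x100, 42};   /* older write */
    wbuf[1] = (struct pending){true, 0x100, 43};   /* newer write, same addr */

    unsigned long v;
    if (read_check_wbuf(0x100, &v))
        printf("read 0x100 forwarded from write buffer: %lu\n", v);
    else
        printf("no conflict: read proceeds to memory\n");
    return 0;
}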

Page 26: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

2. Subblock Placement to Reduce Miss Penalty

Don't load the full block on a miss; keep valid bits per subblock to indicate which parts are present. (Originally invented to reduce tag storage.)

[Diagram: cache lines with tags 100, 200, 300 and per-subblock valid bits such as 1 1 1 0, 1 1 0 0, and 0 0 0 1, showing that only some subblocks of a line may be present.]
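A sketch of the idea, assuming 4 subblocks per line; the structure and names are invented for illustration:

#include <stdio.h>
#include <stdbool.h>

#define SUBBLOCKS 4

struct line {
    unsigned long tag;
    bool valid[SUBBLOCKS];      /* one valid bit per subblock */
};

/* On a miss, fetch only the needed subblock instead of the whole block. */
void access_subblock(struct line *ln, unsigned long tag, int sb) {
    if (ln->tag != tag) {                    /* tag miss: take over the line */
        ln->tag = tag;
        for (int i = 0; i < SUBBLOCKS; i++)
            ln->valid[i] = false;
    }
    if (!ln->valid[sb]) {
        /* miss penalty = one subblock transfer, not a full block */
        printf("fetch subblock %d of tag %lu\n", sb, tag);
        ln->valid[sb] = true;
    }
}

int main(void) {
    struct line ln = { .tag = (unsigned long)-1 };
    access_subblock(&ln, 100, 0);   /* fetches subblock 0 only */
    access_subblock(&ln, 100, 0);   /* hit: nothing fetched */
    access_subblock(&ln, 100, 3);   /* fetches subblock 3 only */
    return 0;
}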

Page 27: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

3. Early Restart and Critical Word First

Don't wait for the full block to be loaded before restarting the CPU.
- Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution.
- Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch or requested word first.

Generally useful only for large blocks. Spatial locality is a problem: the CPU tends to want the next sequential word, so it is not clear how much early restart benefits.

Page 28: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

4. Non-blocking Caches to Reduce Stalls on Misses

A non-blocking cache (or lockup-free cache) allows the data cache to continue to supply cache hits during a miss. "Hit under miss" reduces the effective miss penalty by being helpful during a miss instead of ignoring the requests of the CPU. "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses, but it significantly increases the complexity of the cache controller, since there can be multiple outstanding memory accesses.

Page 29: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Value of Hit Under Miss for SPEC

FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26
Integer programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19
(8 KB data cache, direct mapped, 32B blocks, 16-cycle miss penalty)

[Figure: average memory access time (0 to 2) under "hit under i misses" for i = 0->1, 1->2, and 2->64 outstanding misses plus the base case, for the SPEC integer benchmarks eqntott, espresso, xlisp, compress, mdljsp2 and the floating-point benchmarks ear, fpppp, tomcatv, swm256, doduc, su2cor, wave5, mdljdp2, hydro2d, alvinn, nasa7, spice2g6, ora.]

Page 30: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

5. Miss Penalty Reduction: Second-Level Cache

L2 equations:

AMAT = Hit Time_L1 + Miss Rate_L1 × Miss Penalty_L1
Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2
AMAT = Hit Time_L1 + Miss Rate_L1 × (Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2)

Definitions:
- Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (Miss Rate_L2 above).
- Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 × Miss Rate_L2).
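A small C sketch of these equations, with invented rates and times (all numbers are assumptions for illustration):

#include <stdio.h>

int main(void) {
    /* Assumed, illustrative parameters (cycles / rates) */
    double hit_l1 = 1.0,  miss_rate_l1 = 0.05;
    double hit_l2 = 10.0, miss_rate_l2 = 0.40;   /* local L2 miss rate */
    double miss_penalty_l2 = 100.0;

    double miss_penalty_l1 = hit_l2 + miss_rate_l2 * miss_penalty_l2;
    double amat = hit_l1 + miss_rate_l1 * miss_penalty_l1;
    double global_l2 = miss_rate_l1 * miss_rate_l2;

    printf("Miss penalty L1    = %.1f cycles\n", miss_penalty_l1);
    printf("AMAT               = %.2f cycles\n", amat);
    printf("Global L2 miss rate = %.3f\n", global_l2);
    return 0;
}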

Page 31: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Reducing Misses: Which Apply to L2 Cache?

Reducing miss rate:
1. Reduce misses via larger block size
2. Reduce conflict misses via higher associativity
3. Reduce conflict misses via victim cache
4. Reduce conflict misses via pseudo-associativity
5. Reduce misses by HW prefetching of instructions and data
6. Reduce misses by SW prefetching of data
7. Reduce capacity/conflict misses by compiler optimizations

Page 32: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

L2 Cache Block Size & A.M.A.T.

Relative CPU time vs. L2 block size (32KB L1, 8-byte path to memory):

Block size (bytes)   16     32     64     128    256    512
Relative CPU time    1.36   1.28   1.27   1.34   1.54   1.95

Page 33: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Reducing Miss Penalty Summary

Five techniques:
- Read priority over write on miss
- Subblock placement
- Early restart and critical word first on miss
- Non-blocking caches (hit under miss)
- Second-level caches

These can be applied recursively to multilevel caches. The danger is that the time to DRAM will grow with multiple levels in between.

Page 34: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses


Review: Improving Cache Performance

1. Reduce the miss rate,

2. Reduce the miss penalty, or

3. Reduce the time to hit in the cache.

Page 35: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

1. Fast Hit Times via Small, Simple Caches

This is why the Alpha 21164 has an 8KB instruction cache and an 8KB data cache plus a 96KB second-level cache: the small L1 caches are direct mapped and on chip, keeping hit time low. What is the impact of dynamic scheduling? The Alpha 21264, by contrast, has 64KB 2-way set-associative L1 data and instruction caches.

Page 36: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

2. Fast Hits by Avoiding Address Translation

Send the virtual address to the cache? This is called a virtually addressed cache, or just a virtual cache, as opposed to a physical cache.
- Every time the process is switched, the cache logically must be flushed; otherwise we get false hits. The cost is the time to flush plus "compulsory" misses from an empty cache.
- Aliases (sometimes called synonyms) must be dealt with: two different virtual addresses that map to the same physical address.
- I/O must interact with the cache, so it needs virtual addresses.

Solutions to aliases:
- HW guarantee: each cache frame holds a unique physical address.
- SW guarantee: the lower n bits of all aliases must have the same address; as long as n covers the index field and the cache is direct mapped, aliases must be unique. This is called page coloring.

Solution to the cache flush:
- Add a process identifier tag that identifies the process as well as the address within the process: there can be no hit if the process is wrong.

Page 37: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Virtually Addressed Caches

[Diagram: three organizations.
1. Conventional organization: CPU -> TLB -> cache -> memory; the virtual address (VA) is translated to a physical address (PA) before the cache is accessed.
2. Virtually addressed cache: CPU -> cache -> TLB -> memory; translation happens only on a miss; suffers from the synonym problem.
3. Overlapped organization: the CPU sends the VA to the cache and the TLB in parallel, with PA tags in the cache and an L2 cache behind it; overlapping cache access with VA translation requires the cache index to remain invariant across translation.]

Page 38: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

2. Avoiding Translation: Process ID Impact

[Figure (HP2 Fig 5.26): miss rate (up to 20%) vs. cache size (2 KB to 1024 KB). Black: uniprocess. Light gray: multiprocess, flushing the cache on a switch. Dark gray: multiprocess, using a process ID tag.]

Page 39: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

2. Avoiding Translation: Index with Physical Portion of Address

If the index is built from the physical part of the address, the cache can start the tag access in parallel with translation and then compare against the physical tag. This limits the cache to the page size (times the associativity): what if we want bigger caches and want to use the same trick? Higher associativity, or page coloring.

[Address layout: the page offset bits coincide with the index and block offset fields, so they need no translation; only the page address / address tag is translated.]
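A small C sketch of the constraint, with invented cache parameters; the assert encodes "index + block offset must fit in the page offset":

#include <stdio.h>
#include <assert.h>

#define PAGE_SIZE   4096u
#define BLOCK_SIZE  32u
#define CACHE_SIZE  (8u * 1024u)
#define ASSOC       2u

int main(void) {
    unsigned sets = CACHE_SIZE / (BLOCK_SIZE * ASSOC);

    /* The untranslated page-offset bits must cover index + block offset,
       i.e. (cache size / associativity) <= page size. */
    assert(sets * BLOCK_SIZE <= PAGE_SIZE);

    unsigned long va = 0x12345678;
    unsigned block_off = va % BLOCK_SIZE;
    unsigned index     = (va / BLOCK_SIZE) % sets;

    /* Both fields fall inside the page offset, so the cache can be
       indexed before the TLB finishes translating the page number. */
    printf("sets=%u index=%u block_off=%u\n", sets, index, block_off);
    return 0;
}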

Page 40: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Cache Optimization Summary

Technique                          MR   MP   HT   Complexity
Larger block size                  +    -         0
Higher associativity               +         -    1
Victim caches                      +              2
Pseudo-associative caches          +              2
HW prefetching of instr/data       +              2
Compiler-controlled prefetching    +              3
Compiler reduce misses             +              0
Priority to read misses                 +         1
Subblock placement                      +    +    1
Early restart & critical word 1st       +         2
Non-blocking caches                     +         3
Second-level caches                     +         2
Small & simple caches              -         +    0
Avoiding address translation                 +    2

Page 41: COMP 206 Computer Architecture and Implementation Unit 8b: Cache Misses

Impact of Caches

- 1960-1985: Speed = f(no. of operations)
- 1997: pipelined execution and fast clock rates, out-of-order completion, superscalar instruction issue
- 1999: Speed = f(non-cached memory accesses)

What does this mean for compilers, operating systems, algorithms, and data structures?

[Figure: processor-memory performance gap, 1980-2000, log scale from 1 to 1000; CPU performance climbs steeply while DRAM performance improves slowly.]