Memory caching


Transcript of Memory caching

Page 1: Memory caching


Cache Memory

Page 2: Memory caching


Outline

• General concepts
• 3 ways to organize cache memory
• Issues with writes
• Writing cache-friendly code
• Cache mountain
• Suggested Reading: 6.4, 6.5, 6.6

Page 3: Memory caching


6.4 Cache Memories

Page 4: Memory caching


Cache Memory

• History
  – At the very beginning, 3 levels: registers, main memory, disk storage
  – 10 years later, 4 levels: registers, SRAM cache, main DRAM memory, disk storage
  – Modern processors, 4~5 levels: registers, SRAM L1, L2 (, L3) caches, main DRAM memory, disk storage
• Cache memories
  – are small, fast SRAM-based memories
  – are managed automatically by hardware
  – can be on-chip, on-die, or off-chip

Page 5: Memory caching


Cache Memory (Figure 6.24, P488)

(Figure: the CPU chip contains the register file, the ALU, and the L1 cache; a cache bus connects to the L2 cache, and the bus interface connects through the system bus and I/O bridge to the memory bus and main memory.)

Page 6: Memory caching


Cache Memory

• L1 cache is on-chip
• L2 cache was off-chip several years ago
• L3 cache can be on-chip or off-chip
• The CPU looks for data first in L1, then in L2, then in main memory
  – Caches hold frequently accessed blocks of main memory

Page 7: Memory caching


Inserting an L1 cache between the CPU and main memory

(Figure: main memory blocks ... 10 (a b c d) ... 21 (p q r s) ... 30 (w x y z) ...; the L1 cache holds line 0 and line 1.)

• The big slow main memory has room for many 4-word blocks.
• The small fast L1 cache has room for two 4-word blocks.
• The tiny, very fast CPU register file has room for four 4-byte words.
• The transfer unit between the cache and main memory is a 4-word block (16 bytes).
• The transfer unit between the CPU register file and the cache is a 4-byte word.

Page 8: Memory caching


6.4.1 Generic Cache Memory Organization (Figure 6.25, P488)

(Figure: S sets, set 0 through set S-1, each with E lines; every line has 1 valid bit, t tag bits, and a block of B bytes numbered 0, 1, ..., B-1.)

• A cache is an array of S = 2^s sets.
• Each set contains E lines (one or more).
• Each line holds a block of B = 2^b bytes of data, plus 1 valid bit and t tag bits per line (see the C sketch below).
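This organization maps naturally onto a pair of C structures. A minimal sketch; the names line_t and cache_t are made up for illustration, since the slides give no implementation:

typedef struct {
    int valid;               /* 1 valid bit per line */
    unsigned long tag;       /* t tag bits per line */
    unsigned char *block;    /* B = 2^b data bytes per block */
} line_t;

typedef struct {
    int S, E, B;             /* sets, lines per set, bytes per block */
    line_t **sets;           /* sets[set_index][line_index]: an array of sets */
} cache_t;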

Page 9: Memory caching


Addressing caches (Figure 6.25, P488)

Address A (bits m-1 ... 0) is partitioned into three fields:

  <tag: t bits> <set index: s bits> <block offset: b bits>

The word at address A is in the cache if the tag bits in one of the valid lines in set <set index> match <tag>.

The word contents begin at offset <block offset> bytes from the beginning of the block.
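As a concrete illustration of this field extraction, a short C sketch (the helper split_address and its masks are mine, not from the slides):

#include <stdio.h>
#include <stdint.h>

typedef struct { uint64_t tag, set, offset; } addr_parts;

/* Split an address into (tag, set index, block offset) for a cache
   with 2^s sets and 2^b bytes per block. */
addr_parts split_address(uint64_t a, int s, int b)
{
    addr_parts p;
    p.offset = a & ((1ULL << b) - 1);          /* low b bits */
    p.set    = (a >> b) & ((1ULL << s) - 1);   /* next s bits */
    p.tag    = a >> (s + b);                   /* remaining t bits */
    return p;
}

int main(void)
{
    /* Address 13 (binary 1101) with s=2, b=1, as in the simulation below */
    addr_parts p = split_address(13, 2, 1);
    printf("tag=%llu set=%llu offset=%llu\n",  /* prints tag=1 set=2 offset=1 */
           (unsigned long long)p.tag, (unsigned long long)p.set,
           (unsigned long long)p.offset);
    return 0;
}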

Page 10: Memory caching


Cache Memory

Fundamental parameters

  Parameter     Description
  S = 2^s       Number of sets
  E             Number of lines per set
  B = 2^b       Block size (bytes)
  m = log2(M)   Number of physical (main memory) address bits

Page 11: Memory caching


Cache Memory

Derived quantities

  Parameter        Description
  M = 2^m          Maximum number of unique memory addresses
  s = log2(S)      Number of set index bits
  b = log2(B)      Number of block offset bits
  t = m - (s + b)  Number of tag bits
  C = B × E × S    Cache size (bytes), not including overhead such as the valid and tag bits
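Plugging in concrete numbers, a minimal C sketch (the parameter values are those of the direct-mapped simulation a few slides below; the code itself is illustrative):

#include <stdio.h>

int main(void)
{
    int S = 4, E = 1, B = 2, m = 4;   /* sets, lines/set, bytes/block, address bits */
    int s = 2, b = 1;                 /* s = log2(S), b = log2(B) */
    int t = m - (s + b);              /* tag bits: 4 - (2 + 1) = 1 */
    int C = B * E * S;                /* cache size: 2 * 1 * 4 = 8 bytes */

    printf("s=%d b=%d t=%d C=%d\n", s, b, t, C);
    return 0;
}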

Page 12: Memory caching

6.4.2 Direct-mapped cache (Figure 6.27, P490)

• Simplest kind of cache
• Characterized by exactly one line per set (E = 1)

(Figure: sets 0 through S-1, each with E = 1 line holding a valid bit, a tag, and a cache block.)

Page 13: Memory caching


Accessing direct-mapped caches (Figure 6.28, P491)

• Set selection
  – Use the set index bits to determine the set of interest

(Figure: the s set index bits of address A, e.g. 00001, select one of the S sets; the t tag bits and b block offset bits are used in the following steps.)

Page 14: Memory caching


Accessing direct-mapped caches

• Line matching and word extraction
  – Find a valid line in the selected set with a matching tag (line matching)
  – Then extract the word (word selection)

Page 15: Memory caching


Accessing direct-mapped caches (Figure 6.29, P491)

(Figure: the set index bits select set i; the block bytes are numbered 0-7, and the block offset 100 selects the word w0 w1 w2 w3 starting at byte 4.)

(1) The valid bit must be set.
(2) The tag bits in the cache line must match the tag bits in the address.
(3) If (1) and (2), then cache hit, and the block offset selects the starting byte.

Page 16: Memory caching

Line Replacement on Misses in Direct-Mapped Caches

• If the cache misses
  – Retrieve the requested block from the next level in the memory hierarchy
  – Store the new block in one of the cache lines of the set indicated by the set index bits

Page 17: Memory caching

Line Replacement on Misses in Direct-Mapped Caches

• If the set is full of valid cache lines
  – One of the existing lines must be evicted
• For a direct-mapped cache
  – Each set contains only one line
  – The current line is replaced by the newly fetched line

Page 18: Memory caching


Direct-mapped cache simulation P492

• M = 16 byte addresses
• B = 2 bytes/block, S = 4 sets, E = 1 line/set

Page 19: Memory caching


Direct-mapped cache simulation P493

M=16 byte addresses, B=2 bytes/block, S=4 sets, E=1 line/set (t=1, s=2, b=1)

Address trace (reads): 0 [0000], 1 [0001], 13 [1101], 8 [1000], 0 [0000]

(1) Read 0 [0000]: miss; set 0 <- v=1, tag=0, data m[0] m[1]
(2) Read 1 [0001]: hit (same block as address 0)
(3) Read 13 [1101]: miss; set 2 <- v=1, tag=1, data m[12] m[13]
(4) Read 8 [1000]: miss; set 0 <- v=1, tag=1, data m[8] m[9] (evicts the block for address 0)
(5) Read 0 [0000]: miss; set 0 <- v=1, tag=0, data m[0] m[1]
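The trace above can be replayed in a few lines of C. A minimal sketch, assuming the slide's parameters (S=4, E=1, B=2 bytes/block, 4-bit addresses); the variable names are mine:

#include <stdio.h>

int main(void)
{
    int valid[4] = {0}, tag[4] = {0};   /* 4 sets, 1 line each */
    int trace[] = {0, 1, 13, 8, 0};     /* the read trace from the slide */

    for (int i = 0; i < 5; i++) {
        int a = trace[i];
        int set = (a >> 1) & 0x3;       /* s = 2 index bits */
        int t = a >> 3;                 /* t = 1 tag bit */
        if (valid[set] && tag[set] == t) {
            printf("%2d hit\n", a);
        } else {
            printf("%2d miss\n", a);    /* fetch the block, evict the old line */
            valid[set] = 1;
            tag[set] = t;
        }
    }
    return 0;                           /* prints: miss, hit, miss, miss, miss */
}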

Page 20: Memory caching


Direct-mapped cache simulation Figure 6.30 P493

  Address    Tag bits  Index bits  Offset bits  Block number
  (decimal)  (t=1)     (s=2)       (b=1)        (decimal)
  0          0         00          0            0
  1          0         00          1            0
  2          0         01          0            1
  3          0         01          1            1
  4          0         10          0            2
  5          0         10          1            2
  6          0         11          0            3
  7          0         11          1            3
  8          1         00          0            4
  9          1         00          1            4
  10         1         01          0            5
  11         1         01          1            5
  12         1         10          0            6
  13         1         10          1            6
  14         1         11          0            7
  15         1         11          1            7

Page 21: Memory caching


Why use middle bits as index?

• High-order bit indexing
  – Adjacent memory lines would map to the same cache entry
  – Poor use of spatial locality
• Middle-order bit indexing
  – Consecutive memory lines map to different cache lines
  – Can hold a C-byte region of the address space in the cache at one time

(Figure 6.31, P497: a 4-line cache; with high-order indexing, the 16 addresses 0000-1111 map to sets by their top two bits, so consecutive addresses share a set; with middle-order indexing, they map by their middle two bits, so consecutive addresses spread across sets 00, 01, 10, 11.)

Page 22: Memory caching


6.4.3 Set associative caches

• Characterized by more than one line per set

(Figure 6.32, P498: each of the S sets contains E = 2 lines; each line has a valid bit, a tag, and a cache block.)

Page 23: Memory caching


Accessing set associative caches

• Set selection
  – Identical to direct-mapped cache: the set index bits select the set

(Figure 6.33, P498: the s set index bits of the address, e.g. 00001, select one set; each set holds two lines.)

Page 24: Memory caching


Accessing set associative caches

• Line matching and word selection
  – Must compare the tag in each valid line in the selected set (see the C sketch below)

(Figure 6.34, P499: selected set i holds two lines, e.g. with tags 1001 and 0110; the block offset, e.g. 100, selects the starting byte within the word w0 w1 w2 w3.)

(1) The valid bit must be set.
(2) The tag bits in one of the cache lines must match the tag bits in the address.
(3) If (1) and (2), then cache hit, and the block offset selects the starting byte.
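In code, line matching in an E-way set generalizes the direct-mapped check to a short search over the set. A sketch reusing the illustrative line_t/cache_t structures from earlier (again, not from the slides):

/* Return the matching line in the selected set, or NULL on a miss. */
line_t *lookup(cache_t *c, unsigned long tag, unsigned long set_index)
{
    for (int i = 0; i < c->E; i++) {
        line_t *line = &c->sets[set_index][i];
        if (line->valid && line->tag == tag)   /* (1) valid and (2) tag match */
            return line;                       /* (3) cache hit */
    }
    return NULL;
}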

Page 25: Memory caching


6.4.4 Fully associative caches

• Characterized by a single set that contains all E = C/B lines

• No set index bits in the address
  – The address is partitioned into only two fields: <tag: t bits> <block offset: b bits>

(Figures 6.35 and 6.36, P500: set 0 is the one and only set; each of its lines has a valid bit, a tag, and a cache block.)

Page 26: Memory caching


Accessing fully associative caches

• Word selection
  – Must compare the tag in each valid line

(Figure 6.37, P500: the single set holds several lines with different tags, e.g. 1001 and 0110; the address supplies only a tag and a block offset, and the offset selects the starting byte within the word w0 w1 w2 w3.)

(1) The valid bit must be set.
(2) The tag bits in one of the cache lines must match the tag bits in the address.
(3) If (1) and (2), then cache hit, and the block offset selects the starting byte.

Page 27: Memory caching


6.4.5 Issues with Writes

• Write hits
  – 1) Write-through
    • Cache updates its copy
    • Immediately writes the corresponding cache block to memory
  – 2) Write-back
    • Defers the memory update as long as possible
    • Writes the updated block to memory only when it is evicted from the cache
    • Maintains a dirty bit for each cache line

Page 28: Memory caching


Issues with Writes

• Write misses
  – 1) Write-allocate
    • Loads the corresponding memory block into the cache
    • Then updates the cache block
  – 2) No-write-allocate
    • Bypasses the cache
    • Writes the word directly to memory

• Typical combinations
  – Write-through + no-write-allocate
  – Write-back + write-allocate (sketched below)
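A minimal sketch of the write-back, write-allocate combination for a single line. BLOCK_SIZE and the helpers fetch_block()/write_block_to_mem() are hypothetical; the slides name the policies but give no code:

#define BLOCK_SIZE 32

typedef struct {
    int valid, dirty;
    unsigned long tag;
    unsigned char block[BLOCK_SIZE];
} wb_line;

void fetch_block(unsigned long tag, unsigned char *block);         /* next level -> cache */
void write_block_to_mem(unsigned long tag, unsigned char *block);  /* cache -> next level */

void cache_write(wb_line *line, unsigned long tag, int offset, unsigned char byte)
{
    if (!line->valid || line->tag != tag) {             /* write miss */
        if (line->valid && line->dirty)
            write_block_to_mem(line->tag, line->block); /* evict: flush dirty line */
        fetch_block(tag, line->block);                  /* write-allocate: load block */
        line->valid = 1;
        line->tag = tag;
    }
    line->block[offset] = byte;   /* update only the cached copy */
    line->dirty = 1;              /* defer the memory update until eviction */
}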

Page 29: Memory caching


6.4.6 Multi-level caches

(Figure 6.38, P504: an example multi-level hierarchy)

• Processor: registers, TLB, on-chip L1 I-cache and L1 D-cache
  – size: 8-64 KB, speed: 3 ns, line size: 32 B
• L2 cache: 1-4 MB SRAM, 6 ns, $100/MB, 32 B lines
• Main memory: 128 MB DRAM, 60 ns, $1.50/MB, 8 KB lines
• Disk: 30 GB, 8 ms, $0.05/MB
• Moving away from the processor: larger, slower, cheaper; larger line size, higher associativity, more likely to write back
• Options: separate data and instruction caches, or a unified cache

Page 30: Memory caching


6.4.7 Cache performance metrics P505

• Miss rate
  – Fraction of memory references not found in cache (misses / references)
  – Typical numbers: 3-10% for L1
• Hit rate
  – Fraction of memory references found in cache (1 - miss rate)

Page 31: Memory caching


Cache performance metrics

• Hit time
  – Time to deliver a line in the cache to the processor (includes the time to determine whether the line is in the cache)
  – Typical numbers: 1-2 clock cycles for L1, 5-10 clock cycles for L2
• Miss penalty
  – Additional time required because of a miss
  – Typically 25-100 cycles for main memory
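These metrics combine into an average memory access time. The formula below is the standard way to do so, though it is not stated on the slide:

  average access time = hit time + miss rate × miss penalty

For example, with a 2-cycle hit time, a 5% miss rate, and a 100-cycle miss penalty: 2 + 0.05 × 100 = 7 cycles per access on average.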

Page 32: Memory caching


Cache performance metrics P505

• 1> Cache size
  – Hit rate vs. hit time
• 2> Block size
  – Spatial locality vs. temporal locality
• 3> Associativity
  – Thrashing, cost, speed, miss penalty
• 4> Write strategy
  – Simplicity, read misses, fewer transfers

Page 33: Memory caching


6.5 Writing Cache-Friendly Code

Page 34: Memory caching


Writing Cache-Friendly Code

• Principles
  – Programs with better locality will tend to have lower miss rates
  – Programs with lower miss rates will tend to run faster than programs with higher miss rates

Page 35: Memory caching


Writing Cache-Friendly Code

• Basic approach
  – Make the common case go fast
    • Programs often spend most of their time in a few core functions
    • These functions often spend most of their time in a few loops
  – Minimize the number of cache misses in each inner loop
    • All other things being equal

Page 36: Memory caching


Writing Cache-Friendly Code P507

int sumvec(int v[N])
{
    int i, sum = 0;    /* temporal locality: i and sum are usually put in registers */

    for (i = 0; i < N; i++)
        sum += v[i];
    return sum;
}

Access order, [h]it or [m]iss:

  v[i]          i=0   i=1   i=2   i=3   i=4   i=5   i=6   i=7
  Access order  1[m]  2[h]  3[h]  4[h]  5[m]  6[h]  7[h]  8[h]

Page 37: Memory caching


Writing cache-friendly code

• Temporal locality
  – Repeated references to local variables are good because the compiler can cache them in the register file

Page 38: Memory caching


Writing cache-friendly code

• Spatial locality
  – Stride-1 reference patterns are good because caches at all levels of the memory hierarchy store data as contiguous blocks
  – Spatial locality is especially important in programs that operate on multidimensional arrays

Page 39: Memory caching


Writing cache-friendly code P508

• Example (M=4, N=8, 10 cycles/iter)

int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];    /* row-major traversal: stride-1 */
    return sum;
}

Page 40: Memory caching


Writing cache-friendly code

  a[i][j]  j=0    j=1    j=2    j=3    j=4    j=5    j=6    j=7
  i=0      1[m]   2[h]   3[h]   4[h]   5[m]   6[h]   7[h]   8[h]
  i=1      9[m]   10[h]  11[h]  12[h]  13[m]  14[h]  15[h]  16[h]
  i=2      17[m]  18[h]  19[h]  20[h]  21[m]  22[h]  23[h]  24[h]
  i=3      25[m]  26[h]  27[h]  28[h]  29[m]  30[h]  31[h]  32[h]

Page 41: Memory caching


Writing cache-friendly code P508

• Example (M=4, N=8, 20 cycles/iter)

int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];    /* column-major traversal: stride-N */
    return sum;
}

Page 42: Memory caching


Writing cache-friendly code

  a[i][j]  j=0    j=1    j=2    j=3    j=4    j=5    j=6    j=7
  i=0      1[m]   5[m]   9[m]   13[m]  17[m]  21[m]  25[m]  29[m]
  i=1      2[m]   6[m]   10[m]  14[m]  18[m]  22[m]  26[m]  30[m]
  i=2      3[m]   7[m]   11[m]  15[m]  19[m]  23[m]  27[m]  31[m]
  i=3      4[m]   8[m]   12[m]  16[m]  20[m]  24[m]  28[m]  32[m]

Page 43: Memory caching


6.6 Putting it Together: The Impact of Caches on Program Performance

6.6.1 The Memory Mountain

Page 44: Memory caching


The Memory Mountain P512

• Read throughput (read bandwidth)
  – The rate at which a program reads data from the memory system
• Memory mountain
  – A two-dimensional function of read bandwidth versus temporal and spatial locality
  – Characterizes the capabilities of the memory system of each computer

Page 45: Memory caching


Memory mountain main routine Figure 6.41 P513

/* mountain.c - Generate the memory mountain. */
#define MINBYTES (1 << 10)    /* Working set size ranges from 1 KB */
#define MAXBYTES (1 << 23)    /* ... up to 8 MB */
#define MAXSTRIDE 16          /* Strides range from 1 to 16 */
#define MAXELEMS MAXBYTES/sizeof(int)

int data[MAXELEMS];           /* The array we'll be traversing */

Page 46: Memory caching


Memory mountain main routine

int main()
{
    int size;      /* Working set size (in bytes) */
    int stride;    /* Stride (in array elements) */
    double Mhz;    /* Clock frequency */

    init_data(data, MAXELEMS);   /* Initialize each element in data to 1 */
    Mhz = mhz(0);                /* Estimate the clock frequency */

Page 47: Memory caching


Memory mountain main routine

    for (size = MAXBYTES; size >= MINBYTES; size >>= 1) {
        for (stride = 1; stride <= MAXSTRIDE; stride++)
            printf("%.1f\t", run(size, stride, Mhz));
        printf("\n");
    }
    exit(0);
}

Page 48: Memory caching


Memory mountain test function (Figure 6.40, P512)

/* The test function */
void test(int elems, int stride)
{
    int i, result = 0;
    volatile int sink;

    for (i = 0; i < elems; i += stride)
        result += data[i];
    sink = result;    /* So compiler doesn't optimize away the loop */
}

Page 49: Memory caching


Memory mountain test function

/* Run test(elems, stride) and return read throughput (MB/s) */
double run(int size, int stride, double Mhz)
{
    double cycles;
    int elems = size / sizeof(int);

    test(elems, stride);                        /* Warm up the cache */
    cycles = fcyc2(test, elems, stride, 0);     /* Call test(elems, stride) */
    return (size / stride) / (cycles / Mhz);    /* Convert cycles to MB/s */
}

Page 50: Memory caching


The Memory Mountain

• Data
  – Size: MAXBYTES (8 M) bytes, i.e. MAXELEMS (2 M) words
  – Partially accessed
    • Working set: from 8 MB down to 1 KB
    • Stride: from 1 to 16

Page 51: Memory caching


The Memory Mountain (Figure 6.42, P514)

(Figure: read throughput (MB/s, 0-1200) plotted against stride (s1-s15 words) and working set size (8 MB down to 1 KB). Machine: Pentium III Xeon, 550 MHz, 16 KB on-chip L1 d-cache, 16 KB on-chip L1 i-cache, 512 KB off-chip unified L2 cache. Ridges of temporal locality mark the L1, L2, and main memory regions; slopes of spatial locality fall away as the stride grows.)

Page 52: Memory caching


Ridges of temporal locality

• Slice through the memory mountain with stride=1
  – Illuminates the read throughputs of the different caches and of main memory

(Ridges: the crest lines of a mountain)

Page 53: Memory caching


Ridges of temporal locality (Figure 6.43, P515)

(Figure: read throughput (MB/s, 0-1200) vs. working set size from 8 MB down to 1 KB at stride=1; the curve steps down through an L1 cache region, an L2 cache region, and a main memory region.)

Page 54: Memory caching


A slope of spatial locality

• Slice through the memory mountain with size=256 KB
  – Shows the effect of the cache block size

Page 55: Memory caching


A slope of spatial locality (Figure 6.44, P516)

(Figure: read throughput (MB/s, 0-800) vs. stride (s1 to s16 words) for a 256 KB working set; throughput drops as the stride increases until there is one access per cache line, after which it levels off.)