Cache Performance Metrics

Page 1:

Cache Performance Metrics

Miss Rate
Fraction of memory references not found in cache (misses / accesses) = 1 – hit rate
Typical numbers (in percentages): 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.

Hit Time
Time to deliver a line in the cache to the processor
Includes time to determine whether the line is in the cache
Typical numbers: 1-2 clock cycles for L1, 5-20 clock cycles for L2

Miss Penalty
Additional time required because of a miss
Typically 50-200 cycles for main memory (Trend: increasing!)

Page 2:

Average memory access time

The average memory access time, or AMAT, can then be computed.

AMAT = Hit time + (Miss rate x Miss penalty)

This is just averaging the amount of time for cache hits and the amount of time for cache misses.
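As a quick illustration (my own sketch, not from the original slides), here is a minimal C program that evaluates this formula, using the 1-cycle hit time and 100-cycle miss penalty assumed on Page 3:

#include <stdio.h>

/* Minimal sketch: AMAT = hit time + miss rate * miss penalty (in cycles). */
static double amat(double hit_time, double miss_rate, double miss_penalty)
{
    return hit_time + miss_rate * miss_penalty;
}

int main(void)
{
    /* Numbers used in the later example: 1-cycle hit, 100-cycle miss penalty. */
    printf("97%% hits: %.1f cycles\n", amat(1.0, 0.03, 100.0)); /* 4.0 */
    printf("99%% hits: %.1f cycles\n", amat(1.0, 0.01, 100.0)); /* 2.0 */
    return 0;
}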

How can we improve the average memory access time of a system? Obviously, a lower AMAT is better. Miss penalties are usually much greater than hit times, so the best way to lower AMAT is to reduce the miss penalty or the miss rate. However, AMAT should only be used as a general guideline.

Remember that execution time is still the best performance metric.

Page 3:

Let's think about those numbers

Huge difference between a hit and a miss

Could be 100x, if just L1 and main memory

Would you believe 99% hits is twice as good as 97%? Consider:

cache hit time of 1 cycle, miss penalty of 100 cycles

Average memory access time (AMAT):
97% hits: 1 cycle + 0.03 * 100 cycles = 4 cycles
99% hits: 1 cycle + 0.01 * 100 cycles = 2 cycles

This is why “miss rate” is used instead of “hit rate”

Page 4:

Types of Cache Misses

Cold (compulsory) miss

Occurs on first access to a block

Conflict miss
Most hardware caches limit block placement to a small subset (sometimes a singleton) of the available cache slots, e.g., block i must be placed in slot (i mod 4)

Conflict misses occur when the cache is large enough, but multiple data objects all map to the same slot

e.g., referencing blocks 0, 8, 0, 8, ... would miss every time

Capacity miss
Occurs when the set of active cache blocks (working set) is larger than the cache

Page 5:

Four important questions

1. When we copy a block of data from main memory to the cache, where exactly should we put it?

2. How can we tell if a word is already in the cache (hit), or if it has to be fetched from main memory first (miss)?

3. Eventually, the small cache memory might fill up. To load a new block from main RAM, we’d have to replace one of the existing blocks in the cache... which one?

4. How can write operations be handled by the memory system?

Page 6:

General Cache Organization (S, E, B)

S = 2^s sets
E = 2^e lines/blocks per set
B = 2^b bytes per cache block (the data)
Each line in a set holds a valid bit, a tag, and data bytes 0, 1, 2, ..., B-1.

Nominal cache size: S x E x B data bytes

Page 7:

Cache Read

S = 2^s sets, E = 2^e lines/blocks per set, B = 2^b bytes per cache block (the data).

Address of word: t tag bits | s set index bits | b block offset bits
The data begins at this offset within the block.

• Locate set
• Check if any line in the set has a matching tag
• Yes + line valid: hit
• Locate data starting at offset
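To make the (t, s, b) split concrete, here is a small C sketch (my own illustration, not part of the slides; the struct and function names are assumptions) that pulls the tag, set index, and block offset out of an address for given values of s and b:

#include <stdio.h>

/* Sketch: split an address into tag / set index / block offset,
   given s set-index bits and b block-offset bits. */
struct addr_fields {
    unsigned long tag;
    unsigned long set;
    unsigned long offset;
};

static struct addr_fields split_address(unsigned long addr, unsigned s, unsigned b)
{
    struct addr_fields f;
    f.offset = addr & ((1UL << b) - 1);          /* lowest b bits */
    f.set    = (addr >> b) & ((1UL << s) - 1);   /* next s bits */
    f.tag    = addr >> (s + b);                  /* remaining upper bits */
    return f;
}

int main(void)
{
    /* Example: 8-byte blocks (b = 3) and 4 sets (s = 2). */
    struct addr_fields f = split_address(0x1234, 2, 3);
    printf("tag=%lx set=%lx offset=%lx\n", f.tag, f.set, f.offset);
    return 0;
}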

Page 8:

Where should we put data in the cache?

A direct-mapped cache is the simplest approach: each main memory address maps to exactly one cache line/block. For example, consider a 16-byte main memory and a 4-byte cache (four 1-byte blocks).

Memory locations 0, 4, 8 and 12 all map to cache block 0.

Addresses 1, 5, 9 and 13 map to cache block 1, etc.

How can we compute this mapping?

[Figure: 16-byte main memory (addresses 0-15) alongside a 4-block cache (indices 0-3).]

Page 9:

It's all divisions...

One way to figure out which cache block a particular memory address should go to is to use the mod (remainder) operator. If the cache contains 2^k blocks, then the data at memory address i would go to cache block index

i mod 2^k

For instance, with the four-block cache here, address 14 would map to cache block 2.

14 mod 4 = 2


Page 10:

...or least-significant bits

An equivalent way to find the placement of a memory address in the cache is to look at the least significant k bits of the address. With our four-byte cache we would inspect the two least significant bits of our memory addresses.

Again, you can see that address 14 (1110 in binary) maps to cache block 2 (10 in binary).

Taking the least k bits of a binary value is the same as computing that value mod 2^k.
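A tiny C check of that equivalence (my own sketch, not from the slides): masking off the low k bits gives the same result as taking the value mod 2^k.

#include <stdio.h>

int main(void)
{
    unsigned k = 2;                      /* four-block cache: 2^2 blocks */
    for (unsigned addr = 0; addr < 16; addr++) {
        unsigned by_mod  = addr % (1u << k);        /* addr mod 2^k */
        unsigned by_mask = addr & ((1u << k) - 1);  /* low k bits */
        printf("%2u: mod=%u mask=%u\n", addr, by_mod, by_mask);
    }
    return 0;
}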

[Figure: the 16 binary memory addresses 0000-1111 alongside the four 2-bit cache indices 00-11.]

Page 11:

The second question was how to determine whether or not the data we’re interested in is already stored in the cache (hit or miss).

If we want to read memory address i, we can use the mod trick to determine which cache block would contain i.

But other addresses might also map to the same cache block. How can we distinguish between them?

For instance, cache block 2 could contain data from addresses 2, 6, 10 or 14.

How can we find data in the cache?


Page 12:

Adding tags

We need to add tags to the cache, which supply the rest of the address bits to let us distinguish between different memory locations that map to the same cache block.

[Figure: 4-entry cache with Index, Tag and Data columns next to the binary addresses 0000-1111; the tags shown for indices 00-11 are 00, ??, 01, 01.]

Page 13:

Adding tags

We need to add tags to the cache, which supply the rest of the address bits to let us distinguish between different memory locations that map to the same cache block.

[Figure: the same cache with every tag filled in; the tags for indices 00-11 are 00, 11, 01, 01.]

Page 14:

Figuring out what's in the cache

Now we can tell exactly which addresses of main memory are stored in the cache, by concatenating the cache block tags with the block indices.

Index  Tag   Main memory address in cache block (tag + index)
 00    00    00 + 00 = 0000
 01    11    11 + 01 = 1101
 10    01    01 + 10 = 0110
 11    01    01 + 11 = 0111

Page 15:

One more detail: the valid bit

When started, the cache is empty and does not contain valid data. We should account for this by adding a valid bit for each cache block.

When the system is initialized, all the valid bits are set to 0. When data is loaded into a particular cache block, the corresponding valid bit is set to 1.

So the cache contains more than just copies of the data in memory; it also has bits to help us find data within the cache and verify its validity.

[Figure: the cache extended with a Valid Bit column (values 1, 0, 0, 1 for indices 00-11). The entry at index 00 holds main memory address 00 + 00 = 0000; entries whose valid bit is 0 are marked Invalid, with unknown (??) contents.]

Page 16:

One more detail: the valid bit

When started, the cache is empty and does not contain valid data. We should account for this by adding a valid bit for each cache block.

When the system is initialized, all the valid bits are set to 0. When data is loaded into a particular cache block, the corresponding valid bit is set to 1.

So the cache contains more than just copies of the data in memory; it also has bits to help us find data within the cache and verify its validity.

[Figure: the same cache with valid bits 1, 0, 0, 1. The two valid entries hold main memory addresses 00 + 00 = 0000 and 01 + 11 = 0111; the entries with valid bit 0 are Invalid.]

Page 17:

What happens on a cache hit

When the CPU tries to read from memory, the address will be sent to a cache controller. The lowest k bits of the address will index a block in the cache. If the block is valid and the tag matches the upper (m - k) bits of the m-bit address, then that data will be sent to the CPU.

Here is a diagram of a 32-bit memory address and a 2^10-byte cache.

[Figure: the 32-bit address is split into a 22-bit Tag and a 10-bit Index. The index selects one of the 1024 cache entries (0-1023); the stored tag is compared (=) with the address tag and combined with the valid bit to produce Hit, and the selected data is sent to the CPU.]

Page 18:


What happens on a cache miss

The delays that we've been assuming for memories (e.g., 2ns) are really assuming cache hits. If our CPU implementations accessed main memory directly, their cycle times would have to be much larger. Instead we assume that most memory accesses will be cache hits, which allows us to use a shorter cycle time.

However, a much slower main memory access is needed on a cache miss. The simplest thing to do is to stall the pipeline until the data from main memory can be fetched (and also copied into the cache).

Page 19:

Loading a block into the cache

After data is read from main memory, putting a copy of that data into the cache is straightforward. The lowest k bits of the address specify a cache block. The upper (m - k) address bits are stored in the block's tag field. The data from main memory is stored in the block's data field. The valid bit is set to 1.

[Figure: on a fill, the address's index selects a cache entry; the address tag is written into the Tag field, the data from memory into the Data field, and the Valid bit is set to 1.]

Page 20:

What if the cache fills up?

Our third question was what to do if we run out of space in our cache, or if we need to reuse a block for a different memory address. We answered this question implicitly on the last page!

A miss causes a new block to be loaded into the cache, automatically overwriting any previously stored data.

This is a least recently used replacement policy, which assumes that older data is less likely to be requested than newer data.

We’ll see a few other policies next.

Page 21:

Spatial locality

One-byte cache blocks don't take advantage of spatial locality, which predicts that an access to one address will be followed by an access to a nearby address.

What can we do?


Page 22:

Spatial locality

What we can do is make the cache block size larger than one byte.

Here we use two-byte blocks, so we can load the cache with two bytes at a time.

If we read from address 12, the data in addresses 12 and 13 would both be copied to cache block 2.



Page 23:

Block addresses

Now how can we figure out where data should be placed in the cache? It's time for block addresses! If the cache block size is 2^n bytes, we can conceptually split the main memory into 2^n-byte chunks too.

To determine the block address of a byte address i, you can do the integer division

i / 2^n

Our example has two-byte cache blocks, so we can think of a 16-byte main memory as an "8-block" main memory instead.

For instance, memory addresses 12 and 13 both correspond to block address 6, since 12 / 2 = 6 and 13 / 2 = 6.


[Figure: byte addresses 0-15 grouped two at a time into block addresses 0-7.]

Page 24:

Cache mapping

Once you know the block address, you can map it to the cache as before: find the remainder when the block address is divided by the number of cache blocks.

In our example, memory block 6 belongs in cache block 2, since 6 mod 4 = 2.

This corresponds to placing data from memory byte addresses 12 and 13 into cache block 2.



Page 25:

Data placement within a block

When we access one byte of data in memory, we'll copy its entire block into the cache, to hopefully take advantage of spatial locality.

In our example, if a program reads from byte address 12 we’ll load all of memory block 6 (both addresses 12 and 13) into cache block 2.

Note byte address 13 corresponds to the same memory block address! So a read from address 13 will also cause memory block 6 (addresses 12 and 13) to be loaded into cache block 2.

To make things simpler, byte i of a memory block is always stored in byte i of the corresponding cache block.


[Figure: byte addresses 12 and 13 stored in byte 0 and byte 1 of cache block 2.]

Page 26:

Locating data in the cache

Let's say we have a cache with 2^k blocks, each containing 2^n bytes. We can determine where a byte of data belongs in this cache by looking at its address in main memory. k bits of the address will select one of the 2^k cache blocks. The lowest n bits are now a block offset that decides which of the 2^n bytes in the cache block will store the data.

Our example used a 2^2-block cache with 2^1 bytes per block. Thus, memory address 13 (1101) would be stored in byte 1 of cache block 2.

m-bit address: (m - k - n) tag bits | k index bits | n-bit block offset

4-bit address 13 = 1101: tag = 1, index = 10, block offset = 1

Page 27:

A picture

[Figure: address 13 (1101) presented to the cache. The 2-bit index (10) selects entry 2; the stored tag is compared (=) with the address tag (1) and combined with the valid bit to produce Hit. The 1-bit block offset (1) drives a mux that selects byte 1 of the 2-byte data block and sends those 8 bits out as Data.]

Page 28:

An exercise

Cache contents (1-bit tag, 2-bit index, 1-bit block offset; data shown as byte 0, byte 1):

Index  Valid  Tag  Data
 00      1     0   0xCA 0xFE
 01      1     1   0xDE 0xAD
 10      1     0   0xBE 0xEF
 11      0     1   0xFE 0xED

For the addresses below, what byte (value) is read from the cache (or is there a miss)?

1010    1110    0001    1101

Page 29:

An exercise (answers)

Same cache contents as the previous page:

Index  Valid  Tag  Data
 00      1     0   0xCA 0xFE
 01      1     1   0xDE 0xAD
 10      1     0   0xBE 0xEF
 11      0     1   0xFE 0xED

For the addresses below, what byte (value) is read from the cache (or is there a miss)?

1010 (0xDE)    1110 (miss, invalid)    0001 (0xFE)    1101 (miss, bad tag)

Page 30:


Using arithmetic

An equivalent way to find the right location within the cache is to use arithmetic again.

We can find the index in two steps, as outlined earlier. Do integer division of the address by 2^n to find the block address. Then mod the block address with 2^k to find the index.

The block offset is just the memory address mod 2^n.

For example, we can find address 13 in a 4-block, 2-byte-per-block cache. The block address is 13 / 2 = 6, so the index is then 6 mod 4 = 2. The block offset would be 13 mod 2 = 1.
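Here is a small C sketch of exactly that calculation (my own illustration, not part of the slides), written with shifts and masks since the block count and block size are powers of two:

#include <stdio.h>

int main(void)
{
    unsigned addr = 13;      /* example address from this page */
    unsigned n = 1, k = 2;   /* 2^1 = 2 bytes per block, 2^2 = 4 blocks */

    unsigned block_addr = addr >> n;                    /* 13 / 2  = 6 */
    unsigned index      = block_addr & ((1u << k) - 1); /* 6 mod 4 = 2 */
    unsigned offset     = addr & ((1u << n) - 1);       /* 13 mod 2 = 1 */
    unsigned tag        = addr >> (n + k);              /* remaining bits = 1 */

    printf("block address=%u index=%u offset=%u tag=%u\n",
           block_addr, index, offset, tag);
    return 0;
}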

m-bit address: (m - k - n) tag bits | k index bits | n-bit block offset

Page 31:

A diagram of a larger example cache

Here is a cache with 1,024 blocks of 4 bytes each, and 32-bit memory addresses.

[Figure: the 32-bit address is split into a tag, an index and a block offset (field widths left as ?? to fill in). The index selects one of the 1024 entries (0-1023); the tag comparison and valid bit produce Hit, and a mux driven by the block offset selects one of the four data bytes.]

Page 32:

A diagram of a larger example cache

Here is a cache with 1,024 blocks of 4 bytes each, and 32-bit memory addresses.

[Figure: the 32-bit address is split into a 20-bit tag, a 10-bit index and a 2-bit block offset. The index selects one of the 1024 entries; the tag comparison and valid bit produce Hit, and the 2-bit block offset drives a 4-to-1 mux over the four data bytes.]

Page 33:

Example: Direct Mapped Cache (E = 1)

Direct mapped: one line/block per set. Assume: cache block size 8 bytes.

Address of int: t tag bits | set index 0...01 | block offset 100

The set index bits are used to find the set; each of the S = 2^s sets holds one line with a valid bit, a tag, and data bytes 0-7.

Page 34:

Example: Direct Mapped Cache (E = 1)

Direct mapped: one line/block per set. Assume: cache block size 8 bytes.

Address of int: t tag bits | set index 0...01 | block offset 100

Within the selected set: valid? + tag match (assume yes = hit); the block offset then locates the data within the line.

Page 35:

Example: Direct Mapped Cache (E = 1)

Direct mapped: one line/block per set. Assume: cache block size 8 bytes.

Address of int: t tag bits | set index 0...01 | block offset 100

Valid? + tag match (assume yes = hit): the int (4 bytes) is here, starting at the block offset.

No match: the old line is evicted and replaced.

Page 36:

A larger example cache mapping

Where would the byte from 32-bit memory address 6146 be stored in this direct-mapped (one block per set) 1024 (2^10)-set cache with 4 (2^2)-byte blocks?

What are the Block offset? (which byte within the block?) Set index? (which set?) Tag?

Page 37:

A larger example cache mapping

Where would the byte from 32-bit memory address 6146 be stored in this direct-mapped (one block per set) 1024 (2^10)-set cache with 4 (2^2)-byte blocks?

What are the Block offset? (which byte within the block?) Set index? (which set?) Tag?

We can determine this with the binary force. 6146 in binary is 0000 0000 0000 0000 0001 1000 0000 0010. The lowest 2 bits, 10, mean this is byte 2 within its block. The next 10 bits, 1000000000, give the set index (512). The tag is 1.

Equivalently, you could use arithmetic instead. The block offset is 6146 mod 4, which equals 2. The block address is 6146/4 = 1536, so the index is 1536 mod 1024, or 512.

Page 38:

Example

double sum_array_rows(double a[16][16])
{
    int i, j;
    double sum = 0;

    for (i = 0; i < 16; i++)
        for (j = 0; j < 16; j++)
            sum += a[i][j];
    return sum;
}

double sum_array_cols(double a[16][16])
{
    int i, j;
    double sum = 0;

    for (j = 0; j < 16; j++)
        for (i = 0; i < 16; i++)
            sum += a[i][j];
    return sum;
}

32 B = 4 doubles. Assume: cold (empty) cache, a[0][0] maps to the first block of the cache.

Ignore the variables sum, i, j

Page 39:

Disadvantage of direct mapping

The direct-mapped cache is easy: indices and offsets can be computed with bit operators or simple arithmetic, because each memory address belongs in exactly one block.

But, what happens if a program uses addresses 2, 6, 2, 6, 2, ...?


Page 40:

Disadvantage of direct mapping

The direct-mapped cache is easy: indices and offsets can be computed with bit operators or simple arithmetic, because each memory address belongs in exactly one block.

However, this isn't really flexible. If a program uses addresses 2, 6, 2, 6, 2, ..., then each access will result in a cache miss and a load into cache block 2.

This cache has four blocks, but direct mapping might not let us use all of them.

This can result in more misses than we might like.


Page 41:

A fully associative cache

A fully associative cache permits data to be stored in any cache block, instead of forcing each memory address into one particular block. When data is fetched from memory, it can be placed in any unused block of the cache. This way we'll never have a conflict between two or more memory addresses which map to a single cache block.

In the previous example, we might put memory address 2 in cache block 2, and address 6 in block 3. Then subsequent repeated accesses to 2 and 6 would all be hits instead of misses.

If all the blocks are already in use, it's usually best to replace the least recently used one, assuming that if it hasn't been used in a while, it won't be needed again anytime soon.

Locality?
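As a rough illustration of LRU replacement (my own sketch, not from the slides; the struct and function names are assumptions), a fully associative cache can record when each block was last used and evict the block with the oldest timestamp:

#include <limits.h>

#define NUM_BLOCKS 4

struct block {
    int valid;
    unsigned tag;
    unsigned long last_used;   /* updated on every access that uses this block */
};

/* Return the index of the block to evict: an invalid block if one exists,
   otherwise the least recently used (smallest last_used) block. */
static int lru_victim(const struct block cache[NUM_BLOCKS])
{
    int victim = 0;
    unsigned long oldest = ULONG_MAX;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (!cache[i].valid)
            return i;
        if (cache[i].last_used < oldest) {
            oldest = cache[i].last_used;
            victim = i;
        }
    }
    return victim;
}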

Page 42:

The price of full associativity

However, a fully associative cache is expensive to implement.

Because there is no index field in the address anymore, the entire address must be used as the tag, increasing the total cache size.

Data could be anywhere in the cache, so we must check the tag of every cache block. That’s a lot of comparators!

[Figure: a fully associative cache in which the entire 32-bit address is stored as each entry's tag; every entry's tag is compared (=) in parallel against the incoming address to produce Hit.]

Page 43:

Set Associativity

S = 2^s sets
E = 2^e lines/blocks per set
B = 2^b bytes per cache block (the data)
Each line in a set holds a valid bit, a tag, and data bytes 0, 1, 2, ..., B-1.

Nominal cache size: S x E x B data bytes

Page 44:

E-way Set Associative Cache (Here: E = 2)

E = 2: two lines per set. Assume: cache block size 8 bytes.

Address of short int: t tag bits | set index 0...01 | block offset 100

The set index bits are used to find the set; each set holds two lines, each with a valid bit, a tag, and data bytes 0-7.

Page 45:

E-way Set Associative Cache (Here: E = 2)

E = 2: two lines per set. Assume: cache block size 8 bytes.

Address of short int: t tag bits | set index 0...01 | block offset 100

Compare both lines in the set: valid? + tag match: yes = hit. The block offset then locates the data within the matching line.

Page 46:

E-way Set Associative Cache (Here: E = 2)

E = 2: two lines per set. Assume: cache block size 8 bytes.

Address of short int: t tag bits | set index 0...01 | block offset 100

Match both lines: valid? + tag match: yes = hit. The short int (2 bytes) is here, starting at the block offset.

No match:
• One line in the set is selected for eviction and replacement
• Replacement policies: random, least recently used (LRU), ...

Page 47:

Example

double sum_array_rows(double a[16][16])
{
    int i, j;
    double sum = 0;

    for (i = 0; i < 16; i++)
        for (j = 0; j < 16; j++)
            sum += a[i][j];
    return sum;
}

double sum_array_cols(double a[16][16])
{
    int i, j;
    double sum = 0;

    for (j = 0; j < 16; j++)
        for (i = 0; i < 16; i++)
            sum += a[i][j];
    return sum;
}

32 B = 4 doubles. Assume: cold (empty) cache, a[0][0] maps to the first block of the cache.

Ignore the variables sum, i, j

Page 48:

What about writes?

Multiple copies of data exist: L1, L2, Main Memory, Disk

What to do on a write-hit?
Write-through (write immediately to memory)
Write-back (defer write to memory until replacement of line)
Needs a dirty bit (is the line different from memory or not?)

What to do on a write-miss?
Write-allocate (load into cache, update line in cache)
Good if more writes to the location follow
No-write-allocate (write immediately to memory)

Typical combinations:
Write-through + No-write-allocate
Write-back + Write-allocate
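To make the write-back + write-allocate combination concrete, here is a hypothetical C sketch of one cache line with a dirty bit (my own illustration; the names and the one-set simplification are assumptions, not from the slides). A write miss first loads the block (write-allocate); writes then only dirty the cached copy, and memory is updated when the line is replaced (write-back).

#include <string.h>

#define BLOCK_SIZE 8

struct line {
    int valid;
    int dirty;                     /* cached copy differs from memory */
    unsigned tag;
    unsigned char data[BLOCK_SIZE];
};

/* One-set sketch, so the tag doubles as the block address into memory[]. */
static void write_byte(struct line *l, unsigned tag, unsigned offset,
                       unsigned char value,
                       unsigned char memory[][BLOCK_SIZE])
{
    if (!l->valid || l->tag != tag) {                  /* write miss */
        if (l->valid && l->dirty)                      /* write back old block */
            memcpy(memory[l->tag], l->data, BLOCK_SIZE);
        memcpy(l->data, memory[tag], BLOCK_SIZE);      /* write-allocate */
        l->valid = 1;
        l->tag = tag;
        l->dirty = 0;
    }
    l->data[offset] = value;       /* update only the cache; defer memory */
    l->dirty = 1;
}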

Page 49:

Important Cache Topics

Replacement algorithms
Which block is picked for replacement?
For a direct-mapped cache: only one choice
For a set associative cache: multiple choices
– Candidate algorithms: LRU, MRU, random
– What information must be stored to implement these?

Cache consistency: copies of data are not identical
A write-back cache has the only valid copy of a block in the system
Problem: cache and memory have different versions
What if an I/O device does a DMA transfer to/from memory?
Problem: caches have different versions
What if a write modifies an instruction?
What if the system has multiple CPUs, each with its own caches?

Cache size (nominal)
Includes only data: not tag, dirty, valid, or replacement bits
Power of 2: Number of sets? Block/line size? Associativity?

Page 50:

Software Caches are More Flexible

Examples
File system buffer caches, web browser caches, etc.

Some design differences
Almost always fully associative
so, no placement restrictions
index structures like hash tables are common
Often use complex replacement policies
misses are very expensive when disk or network is involved
worth thousands of cycles to avoid them
Not necessarily constrained to single "block" transfers
may fetch or write-back in larger units, opportunistically

Page 51:

Direct-Mapped Cache Simulation

M = 4-bit addresses (16 bytes total), B = 2 bytes/block, S = 4 sets, E = 1 entry/set
Address bits: t = 1 tag bit, s = 2 set-index bits, b = 1 block-offset bit

Address trace (single byte reads), addresses shown in binary:
0 [0000], 1 [0001], 13 [1101], 8 [1000], 0 [0000]

(1) 0  [0000]  miss: set 0 now holds v=1, tag=0, data M[0-1]
(2) 1  [0001]  hit (same block, M[0-1], as the previous access)
(3) 13 [1101]  miss: set 2 now holds v=1, tag=1, data M[12-13]
(4) 8  [1000]  miss: set 0 is replaced with v=1, tag=1, data M[8-9]
(5) 0  [0000]  miss: set 0 is replaced again with v=1, tag=0, data M[0-1]
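A short C sketch (mine, not part of the slides) that replays this trace against a 4-set direct-mapped cache and prints hit or miss for each access:

#include <stdio.h>

#define SETS 4

struct line {
    int valid;
    unsigned tag;
};

int main(void)
{
    struct line cache[SETS] = {{0}};
    unsigned trace[] = {0, 1, 13, 8, 0};
    int n = sizeof trace / sizeof trace[0];

    for (int i = 0; i < n; i++) {
        unsigned addr = trace[i];
        unsigned set = (addr >> 1) & 0x3;   /* b = 1 offset bit, s = 2 index bits */
        unsigned tag = addr >> 3;           /* t = 1 tag bit */

        if (cache[set].valid && cache[set].tag == tag) {
            printf("%2u: hit  (set %u)\n", addr, set);
        } else {
            printf("%2u: miss (set %u now holds tag %u)\n", addr, set, tag);
            cache[set].valid = 1;           /* load the block on a miss */
            cache[set].tag = tag;
        }
    }
    return 0;
}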

Page 52:

Why Use Middle Bits as Index?

High-order bit indexing
Adjacent memory lines would map to the same cache entry
Poor use of spatial locality

Middle-order bit indexing
Consecutive memory lines map to consecutive cache lines
Can hold a C-byte region of address space in the cache at one time

[Figure: a 4-line cache next to the 16 addresses 0000-1111, showing which cache line each address maps to under high-order bit indexing versus middle-order bit indexing.]

Page 53:

Cache Organization: Another View

View the cache as a 2D array: #sets rows by associativity columns
Cache size = associativity * #sets * block size
Index bits pick a row; all tags in the row are compared in parallel

[Figure: the address (tag | index | offset) selects one row of the #sets-by-associativity array; the row's tags are compared in parallel (=?), a 4:1 mux picks the matching way, and the data goes to the CPU.]

Page 54:

Cache Organizations

For caches of the same size (capacity):
direct mapped (set associativity = 1)
set-associative (associativity > 1, #sets > 1)
fully associative (#sets = 1)

Page 55:

Understanding caches: a quiz

Essential relationships:
Cache size = block size * #sets * set associativity
Block size = 2 ^ (# offset bits)
Number of sets = 2 ^ (# index bits)
# tag bits + # index bits + # offset bits = address size
(assuming the machine is byte-addressable)

See if you can answer these questions:

1: Suppose we have a 64 KB, 4-way set-associative cache with 32 byte blocks. How many index bits are used from the address?

2: A 16 KB direct-mapped cache is accessed with a 32 bit address. If the block size is 8 bytes, how many bits wide is the tag?

3: Suppose we have an 8 KB fully-associative cache with 16 byte blocks. How big is the tag for each entry, assuming a 32-bit address?

Page 56:

Question 1

Suppose we have a 64 KB, 4-way set-associative cache with 32 byte blocks. How many index bits are used from the address?

Since blocks are 32-bytes each, 5 offset bits are needed.

Determine total blocks in cache. 64KB/32bytes = 2^16/2^5 = 2^11 blocks.

Cache is 4-way set-associative so there are four columns in cache, so each column has 2^11/2^2 = 2^9 blocks.

There are therefore 2^9 sets, so we need 9 index bits.

Address fields: tag = 18 bits, index = 9 bits, offset = 5 bits.

Page 57:

Question 2

A 16 KB direct-mapped cache is accessed with a 32 bit address. If the block size is 8 bytes, how many bits wide is the tag?

Since blocks are 8 bytes each, 3 offset bits required. (2^3 = 8)

Total blocks in the cache = 16KB/8bytes = 2^14/2^3 = 2^11.

There is only one column in this cache (set associativity = 1) so the number of sets in the cache = 2^11.

Number of index bits required is therefore 11.

The size of the tag is therefore 32 - 11 - 3 = 18 bits/tag.

Address fields: tag = 18 bits, index = 11 bits, offset = 3 bits.

Page 58:

Question 3

Suppose we have an 8 KB fully-associative cache with 16 byte blocks. How big is the tag for each entry, assuming a 32-bit address?

Note first that there is only one set in a fully-associative cache. This means no index bits are used to select the set.

The 16-byte blocks require 4 offset bits (2^4 = 16).

The tag is therefore 32 - 4 = 28 bits in length.

Address fields: tag = 28 bits, offset = 4 bits (no index bits).
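The three answers can be checked with a small C sketch (my own, not from the slides) that derives the field widths from the essential relationships given before the quiz:

#include <stdio.h>

/* log2 of a power of two, computed by shifting. */
static unsigned log2u(unsigned long x)
{
    unsigned bits = 0;
    while (x > 1) {
        x >>= 1;
        bits++;
    }
    return bits;
}

static void field_widths(unsigned long cache_bytes, unsigned long block_bytes,
                         unsigned long assoc, unsigned addr_bits)
{
    unsigned long sets = cache_bytes / (block_bytes * assoc);
    unsigned offset = log2u(block_bytes);
    unsigned index  = log2u(sets);
    unsigned tag    = addr_bits - index - offset;
    printf("offset=%u index=%u tag=%u\n", offset, index, tag);
}

int main(void)
{
    field_widths(64 * 1024, 32, 4, 32);    /* Question 1: 5 / 9 / 18 */
    field_widths(16 * 1024, 8, 1, 32);     /* Question 2: 3 / 11 / 18 */
    field_widths(8 * 1024, 16, 512, 32);   /* Question 3: 4 / 0 / 28 (fully associative) */
    return 0;
}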

Page 59:

Intel Core Duo

Each core includes L1 i- and d-caches

Intel calls this L2 a "smart cache"

In the Pentium D 900 (released 2005), each core has its own 2 MB L2 cache

Shared L2 cache facilitates data sharing

Power can be turned off to unused L2 portions

Core Duo released January 2006

Page 60:

Intel Core i7

High end of Intel “core” brand, 731M transistors, 1366 pins.

Each core has a 32KB i-cache, a 32KB d-cache, and a 256KB L2.

Quad-core Core i7 announced late 2008, six-core addition to launch March 2010

8MB L3 cache is shared by all cores.

First Intel CPU to have an integrated memory controller: 3-channel DDR3, over 25 GB/s memory throughput

Page 61:

Intel Core i7 Cache Hierarchy

[Figure: processor package containing Core 0 through Core 3. Each core has its own registers, L1 d-cache, L1 i-cache, and unified L2 cache; an L3 unified cache (shared by all cores) sits below them, backed by main memory.]

Page 62:

Writing Cache Friendly Code

Repeated references to variables are good (temporal locality)
Stride-1 reference patterns are good (spatial locality)

Examples: cold cache, 4-byte words, 4-word blocks, large arrays

int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

Miss rate = 1/4 = 25%

int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}

Miss rate = 100%