
1. Spatial locality

Caches load multiple bytes per block to take advantage of spatial locality.

If the cache block size is 2^n bytes, conceptually split memory into 2^n-byte chunks:

    block address = address / 2^n = (address >> n)

[Figure: the sixteen byte addresses 0000 through 1111 grouped into two-byte blocks, with block addresses 0 through 7.]

An m-bit address is split into three fields:

    Tag: (m - k - n) bits | Index: k bits | Block offset: n bits
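As a concrete sketch (mine, not from the slides), here is how the three fields fall out of shifts and masks in C, assuming a 16-bit address with k = 3 index bits and n = 2 offset bits:

    #include <stdio.h>
    #include <stdint.h>

    #define OFFSET_BITS 2   /* n: 4-byte blocks (assumed) */
    #define INDEX_BITS  3   /* k: 8 cache blocks (assumed) */

    int main(void) {
        uint16_t addr = 0xD6;   /* the address 1101 0110 used later */
        unsigned offset = addr & ((1u << OFFSET_BITS) - 1);
        unsigned index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        unsigned tag    = addr >> (OFFSET_BITS + INDEX_BITS);
        printf("tag=%u index=%u offset=%u\n", tag, index, offset);
        return 0;
    }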


2. How big is the cache?

Byte-addressable machine, 16-bit addresses, cache details:
— direct-mapped
— block size = one byte
— index = 5 least significant bits

Two questions:
— How many blocks does the cache hold?
— How many bits of storage are required to build the cache (data plus all overhead, including tags, etc.)?
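The slide leaves these as questions; for reference, the arithmetic (assuming one valid bit per block) works out as: 5 index bits give 2^5 = 32 blocks. With one-byte blocks there is no offset field, so the tag is 16 - 5 = 11 bits, and each block needs 8 data bits + 11 tag bits + 1 valid bit = 20 bits, for a total of 32 × 20 = 640 bits.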


3. Disadvantage of direct mapping

The direct-mapped cache is easy: indices and offsets can be computed with bit operators or simple arithmetic, because each memory address belongs in exactly one block.

However, this isn't really flexible. If a program uses addresses 2, 6, 2, 6, 2, ..., then each access will result in a cache miss and a load into cache block 2.

This cache has four blocks, but direct mapping might not let us use all of them. This can result in more misses than we might like.

[Figure: a four-block cache (indices 00, 01, 10, 11) next to the sixteen memory addresses 0000 through 1111; addresses 0010 and 0110 both map to index 10.]
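A toy simulation of this conflict (my own sketch, not from the slides): a four-block direct-mapped cache with one-byte blocks, fed the trace 2, 6, 2, 6, ...:

    #include <stdio.h>

    /* Four blocks, one-byte blocks: index = address % 4, rest is tag. */
    int main(void) {
        int tags[4];
        int valid[4] = {0};
        int trace[] = {2, 6, 2, 6, 2, 6};
        int n = sizeof trace / sizeof trace[0];

        for (int i = 0; i < n; i++) {
            int addr = trace[i];
            int index = addr % 4;
            int tag = addr / 4;
            if (valid[index] && tags[index] == tag)
                printf("address %d: hit\n", addr);
            else {
                printf("address %d: miss (loads block %d)\n", addr, index);
                valid[index] = 1;
                tags[index] = tag;
            }
        }
        return 0;
    }

Every access misses: 2 and 6 keep evicting each other from block 2 while blocks 0, 1, and 3 sit unused.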


4. A fully associative cache

A fully associative cache permits data to be stored in any cache block, instead of forcing each memory address into one particular block.
— When data is fetched from memory, it can be placed in any unused block of the cache.
— This way we'll never have a conflict between two or more memory addresses which map to a single cache block.

In the previous example, we might put memory address 2 in cache block 2, and address 6 in block 3. Then subsequent repeated accesses to 2 and 6 would all be hits instead of misses.

If all the blocks are already in use, it's usually best to replace the least recently used one, on the assumption that if a block hasn't been used in a while, it won't be needed again anytime soon.
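A minimal sketch of a fully associative lookup with LRU replacement (again mine, with a logical clock standing in for real LRU hardware); on the same 2, 6, 2, 6, ... trace, only the first two accesses miss:

    #include <stdio.h>

    #define NUM_BLOCKS 4

    struct block { int valid, tag, last_used; };

    static struct block cache[NUM_BLOCKS];
    static int now;   /* logical clock for LRU bookkeeping */

    /* One-byte blocks, so the whole address serves as the tag.
     * Returns 1 on hit, 0 on miss. */
    int access(int addr) {
        int victim = 0;
        now++;
        for (int i = 0; i < NUM_BLOCKS; i++) {
            if (cache[i].valid && cache[i].tag == addr) {
                cache[i].last_used = now;   /* hit: refresh recency */
                return 1;
            }
            if (cache[i].last_used < cache[victim].last_used)
                victim = i;                 /* least recently used so far */
        }
        /* Miss: invalid blocks have last_used == 0, so they fill first;
         * otherwise the LRU block is replaced. */
        cache[victim].valid = 1;
        cache[victim].tag = addr;
        cache[victim].last_used = now;
        return 0;
    }

    int main(void) {
        int trace[] = {2, 6, 2, 6, 2, 6};
        for (int i = 0; i < 6; i++)
            printf("address %d: %s\n", trace[i],
                   access(trace[i]) ? "hit" : "miss");
        return 0;
    }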


5. The price of full associativity

However, a fully associative cache is expensive to implement.
— Because there is no index field in the address anymore, the entire address must be used as the tag, increasing the total cache size.
— Data could be anywhere in the cache, so we must check the tag of every cache block. That's a lot of comparators!

[Figure: a fully associative cache with a 32-bit address used entirely as the tag; a comparator on every block checks the stored tags against the incoming tag in parallel, and any match raises the Hit signal.]


6. Set associativity

An intermediate possibility is a set-associative cache.
— The cache is divided into groups of blocks, called sets.
— Each memory address maps to exactly one set in the cache, but data may be placed in any block within that set.

If each set has 2^x blocks, the cache is a 2^x-way associative cache. Here are several possible organizations of an eight-block cache:

[Figure: three organizations of an eight-block cache:
— 1-way associativity: 8 sets, 1 block each
— 2-way associativity: 4 sets, 2 blocks each
— 4-way associativity: 2 sets, 4 blocks each]
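To make the mapping concrete, a small sketch (my own, assuming one-byte blocks) showing which set a given address falls in under each organization:

    #include <stdio.h>

    int main(void) {
        int num_blocks = 8;
        int addr = 0xD6;
        /* ways = blocks per set; sets = num_blocks / ways */
        for (int ways = 1; ways <= 4; ways *= 2) {
            int sets = num_blocks / ways;
            printf("%d-way: address 0x%02X maps to set %d of %d\n",
                   ways, addr, addr % sets, sets);
        }
        return 0;
    }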


7. Writing to a cache

Writing to a cache raises several additional issues.

First, let's assume that the address we want to write to is already loaded in the cache. We'll assume a simple direct-mapped cache.

If we write a new value to that address, we can store the new data in the cache and avoid an expensive main memory access, but the cache and main memory now disagree. This inconsistency is a HUGE problem in multiprocessors.

[Figure: executing Mem[1101 0110] = 21763 on a hit. Before: the entry at index 110 (V = 1, tag 11010) and Mem[1101 0110] both hold 42803. After: the cache holds 21763, but main memory still holds 42803, so the two disagree.]


8. Write-through caches

A write-through cache solves the inconsistency problem by forcing all writes to update both the cache and the main memory

This is simple to implement and keeps the cache and memory consistent

Why is this not so good?

[Figure: after the write-through store Mem[1101 0110] = 21763, the entry at index 110 (V = 1, tag 11010) and main memory both hold 21763.]
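A sketch of the write-through store path, using assumed toy structures (mine, not the slides': a direct-mapped cache with one-byte blocks):

    #include <stdint.h>

    #define NUM_BLOCKS 8

    struct line { int valid; uint32_t tag; uint8_t data; };

    static struct line cache[NUM_BLOCKS];
    static uint8_t memory[256];

    /* Write-through: the cache (if the line is present) and main
     * memory are both updated, so they never disagree. */
    void store(uint32_t addr, uint8_t value) {
        uint32_t index = addr % NUM_BLOCKS;
        uint32_t tag = addr / NUM_BLOCKS;
        if (cache[index].valid && cache[index].tag == tag)
            cache[index].data = value;   /* keep the cached copy fresh */
        memory[addr] = value;            /* always pay the memory write */
    }

The downside is visible in the last line: every single store costs a main memory access, no matter how often the same location is rewritten.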


9. Write-back caches

In a write-back cache, the memory is not updated until the cache block needs to be replaced (e.g., when loading data into a full cache set).

For example, we might write some data to the cache at first, leaving it inconsistent with the main memory as shown before.
— The cache block is marked "dirty" to indicate this inconsistency.

Subsequent reads to the same memory address will be serviced by the cache, which contains the correct, updated data.

[Figure: after the write Mem[1101 0110] = 21763, the entry at index 110 holds V = 1, dirty = 1, tag 11010, data 21763, while main memory still holds 42803 at address 1101 0110 (and 1225 at address 1000 1110).]


10. Finishing the write back

We don't need to store the new value back to main memory unless the cache block gets replaced.

E.g., on a read from Mem[1000 1110], which maps to the same cache block, the modified cache contents will first be written to main memory.

Only then can the cache block be replaced with the data from address 1000 1110 (142 in decimal).

[Figure: two steps. First the dirty block (V = 1, dirty = 1, tag 11010, data 21763) is written back, making Mem[1101 0110] = 21763. Then the block is refilled for address 1000 1110: V = 1, dirty = 0, tag 10001, data 1225.]
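The corresponding write-back logic, under the same assumed toy structures as the write-through sketch (again mine): stores only mark the line dirty, and the write to memory is deferred until the line is evicted:

    #include <stdint.h>

    #define NUM_BLOCKS 8

    struct line { int valid, dirty; uint32_t tag; uint8_t data; };

    static struct line cache[NUM_BLOCKS];
    static uint8_t memory[256];

    /* Evict whatever occupies this line, writing it back only if dirty,
     * then fill the line from memory for the new address. */
    static void fill(uint32_t index, uint32_t addr) {
        struct line *l = &cache[index];
        if (l->valid && l->dirty) {
            uint32_t old_addr = l->tag * NUM_BLOCKS + index;
            memory[old_addr] = l->data;        /* deferred write-back */
        }
        l->valid = 1;
        l->dirty = 0;
        l->tag = addr / NUM_BLOCKS;
        l->data = memory[addr];
    }

    uint8_t load(uint32_t addr) {
        uint32_t index = addr % NUM_BLOCKS;
        if (!(cache[index].valid && cache[index].tag == addr / NUM_BLOCKS))
            fill(index, addr);                 /* miss: may trigger write-back */
        return cache[index].data;
    }

    void store(uint32_t addr, uint8_t value) {
        uint32_t index = addr % NUM_BLOCKS;
        if (!(cache[index].valid && cache[index].tag == addr / NUM_BLOCKS))
            fill(index, addr);                 /* write-allocate on a miss */
        cache[index].data = value;
        cache[index].dirty = 1;                /* memory is now stale */
    }

Repeated stores to the same address now touch only the cache; memory is written at most once per eviction.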


11. Write misses

A second scenario is a write to an address that is not already contained in the cache; this is called a write miss.

Let’s say we want to store 21763 into Mem[1101 0110] but we find that address is not currently in the cache

When we update Mem[1101 0110], should we also load it into the cache?

[Figure: the entry at index 110 currently holds V = 1, tag 00010, data 123456, while Mem[1101 0110] = 6378 in main memory; the incoming tag 11010 does not match, so the store misses.]


12. Write-around caches (a.k.a. write-no-allocate)

With a write-around policy, the write operation goes directly to main memory without affecting the cache.

This is good when data is written but not immediately used again, in which case there's no point in loading it into the cache yet:

    for (int i = 0; i < SIZE; i++)
        a[i] = i;

[Figure: after the write-around store Mem[1101 0110] = 21763, main memory holds 21763, but the entry at index 110 is untouched and still holds tag 00010, data 123456.]


13. Allocate on write

An allocate on write strategy would instead load the newly written data into the cache

If that data is needed again soon, it will be available in the cache

[Figure: executing Mem[214] = 21763, where 214 = 1101 0110. With allocate on write, the block is loaded into the cache: index 110 now holds V = 1, tag 11010, data 21763, and main memory holds 21763 as well.]
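The two write-miss policies differ in a single branch; a sketch (same assumed toy structures as before, write-through for simplicity):

    #include <stdint.h>

    #define NUM_BLOCKS 8

    struct line { int valid; uint32_t tag; uint8_t data; };

    static struct line cache[NUM_BLOCKS];
    static uint8_t memory[256];

    /* Write-through store; `allocate` selects the write-miss policy. */
    void store(uint32_t addr, uint8_t value, int allocate) {
        uint32_t index = addr % NUM_BLOCKS;
        uint32_t tag = addr / NUM_BLOCKS;
        int hit = cache[index].valid && cache[index].tag == tag;

        if (hit || allocate) {
            /* Allocate-on-write: bring the block in and update it. */
            cache[index].valid = 1;
            cache[index].tag = tag;
            cache[index].data = value;
        }
        /* Write-around: on a miss with allocate == 0, only memory
         * is touched and the cache is left as it was. */
        memory[addr] = value;
    }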


14. Which is it?

Given the following trace of accesses, can you determine whether the cache is write-allocate or write-no-allocate?
— Assume A and B are distinct, and can be in the cache simultaneously.

Load A: Miss
Store B: Miss
Store A: Hit
Load A: Hit
Load B: Miss
Load B: Hit
Load A: Hit

Answer: write-no-allocate. On a write-allocate cache, the Load B (the fifth access) would be a hit, because the earlier Store B would have loaded B into the cache.


15. Memory System Performance

To examine the performance of a memory system, we need to focus on a few important factors.
— How long does it take to send data from the cache to the CPU?
— How long does it take to copy data from memory into the cache?
— How often do we have to access main memory?

There are names for all of these variables.
— The hit time is how long it takes data to be sent from the cache to the processor. This is usually fast, on the order of 1-3 clock cycles.
— The miss penalty is the time to copy data from main memory to the cache. This often requires dozens of clock cycles (at least).
— The miss rate is the percentage of memory accesses that miss in the cache.

[Figure: the CPU next to a little static RAM (the cache), backed by lots of dynamic RAM (main memory).]


16. Average memory access time

The average memory access time, or AMAT, can then be computed as:

    AMAT = Hit time + (Miss rate × Miss penalty)

This is just a weighted average of the time for cache hits and the time for cache misses.

How can we improve the average memory access time of a system?
— Obviously, a lower AMAT is better.
— Miss penalties are usually much greater than hit times, so the best way to lower AMAT is to reduce the miss penalty or the miss rate.

However, AMAT should only be used as a general guideline. Remember that execution time is still the best performance metric.
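The formula translates directly into code; a trivial helper (my own illustration, with made-up example numbers):

    #include <stdio.h>

    /* AMAT in cycles: hit time, miss rate (fraction 0..1), miss penalty. */
    static double amat(double hit_time, double miss_rate, double miss_penalty) {
        return hit_time + miss_rate * miss_penalty;
    }

    int main(void) {
        printf("%.2f\n", amat(1.0, 0.05, 20.0));   /* prints 2.00 */
        return 0;
    }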


17. Performance example

Assume that 33% of the instructions in a program are data accesses. The cache hit ratio is 97% and the hit time is one cycle, but the miss penalty is 20 cycles.
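Plugging these numbers into the AMAT formula: AMAT = 1 + (0.03 × 20) = 1.6 cycles. The 33% figure matters when converting AMAT into stall cycles per instruction, since each instruction performs one instruction fetch plus, on average, 0.33 data accesses.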

To make AMAT smaller, we can decrease the miss rate, e.g. by making the cache larger or adding more associativity. But a larger or more complex cache means a longer hit time!

Alternate approach: decrease the miss penalty.
— BIG idea: a big, slow cache is still faster than RAM!

Modern processors have at least two cache levels. Too many levels introduces other problems: keeping data consistent and communicating across levels.


18. Opteron Vital Statistics

L1 caches (instruction & data):
— 64 kB
— 64-byte blocks
— 2-way set associative
— 2 cycle access time

L2 cache:
— 1 MB
— 64-byte blocks
— 4-way set associative
— 16 cycle access time (total, not just miss penalty)

Memory:
— 200+ cycle access time

[Figure: CPU connected to the L1 cache, then the L2 cache, then main memory.]
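With these latencies, a two-level AMAT estimate is straightforward; the miss rates below are assumed placeholders, since the slide doesn't give any:

    #include <stdio.h>

    int main(void) {
        double l1_time = 2, l2_time = 16, mem_time = 200;
        double l1_miss = 0.05, l2_miss = 0.10;   /* assumed, not from the slide */

        /* Misses at each level fall through to the next, slower one. */
        double amat = l1_time + l1_miss * (l2_time + l2_miss * mem_time);
        printf("AMAT = %.2f cycles\n", amat);    /* 2 + 0.05*(16 + 20) = 3.80 */
        return 0;
    }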


19. Associativity tradeoffs and miss rates

Higher associativity means more complex hardware, but a highly associative cache will also exhibit a lower miss rate.
— Each set has more blocks, so there's less chance of a conflict between two addresses which both belong in the same set.
— Overall, this will reduce AMAT and memory stall cycles.

A figure from the textbook shows the miss rates decreasing as the associativity increases:

[Figure: miss rate (0% to 12%) versus associativity (one-way, two-way, four-way, eight-way); the miss rate falls as associativity increases.]


20. Cache size and miss rates

The cache size also has a significant impact on performance.
— The larger a cache is, the less chance there will be of a conflict.
— Again this means the miss rate decreases, so the AMAT and number of memory stall cycles also decrease.

The figure shows miss rate as a function of both the cache size and its associativity:

[Figure: miss rate (0% to 15%) versus associativity (one-way through eight-way) for cache sizes of 1 KB, 2 KB, 4 KB, and 8 KB; the miss rate drops with both larger caches and higher associativity.]


21. Block size and miss rates

Finally, miss rates relative to the block size and overall cache size.
— Smaller blocks do not take maximum advantage of spatial locality.
— But if blocks are too large, there will be fewer blocks available, and more potential misses due to conflicts.

[Figure: miss rate (0% to 40%) versus block size in bytes, for cache sizes of 1 KB, 8 KB, 16 KB, 64 KB, and 256 KB.]