Improving Proxy Cache Performance: Analysis of Three Replacement Policies
Dilley, J.; Arlitt, M.
IEEE Internet Computing, Volume 3, Issue 6, Nov.-Dec. 1999, pp. 44-50

Transcript

Page 1:

Improving Proxy Cache Performance: Analysis of Three Replacement Policies
Dilley, J.; Arlitt, M.
IEEE Internet Computing, Volume 3, Issue 6, Nov.-Dec. 1999, pp. 44-50

Page 2:

Outline

• Introduction
• Related work
• New replacement policies
• Simulation
• Conclusion

Page 3:

Introduction

The goals of a web cache:
• Reducing the bandwidth demand on the external network.
• Reducing the average time it takes for a web page to load.

Page 4:

The location of a web cache

• Client side
• Proxy side
• Web server side

Page 5:

Cache replacement policy

The reason why we need replacement: a cache server has a fixed amount of storage. When this storage space fills, the cache must choose a set of objects to evict.

Page 6:

Cache replacement policy (cont.)

The goal of a cache replacement policy: to make the best use of available resources, including disk and memory space and network bandwidth.

Page 7:

Web proxy workload characterization

The workload characteristics of interest: the patterns in the number of objects referenced and the relationships among accesses.

Page 8:

Web proxy workload characterization (cont.)

The workload chosen for the simulation: traces of actual live execution.

The drawback of such traces: they do not capture changing behavior or the behavior of a different user set.

Page 9:

Measures of efficiency

• Hit rate: the most popular measure.
• Byte hit rate: of interest when bandwidth is limited or expensive.
• CPU and I/O system utilization.
• Object retrieval latency (or page load time).
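
For reference, the two rates are conventionally defined as follows (standard definitions; the slide does not spell them out):

```latex
\text{hit rate} = \frac{\text{requests served from the cache}}{\text{total requests}}
\qquad
\text{byte hit rate} = \frac{\text{bytes served from the cache}}{\text{total bytes requested}}
```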

Page 10:

An optimal cache replacement policy

An optimal policy would know each document's future popularity and choose the most advantageous way to use the cache's finite space.

Page 11:

Heuristics to approximate optimal cache replacement

• Caching a few documents with high download latency might actually reduce average latency.
• To maximize the object hit rate, it is better to keep many small popular objects.

Page 12:

Related work

Replacement algorithms

Least Recently Used (LRU)
• It works well when there is high temporal locality in the workload.
• It can be implemented with a doubly linked list.
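
A minimal sketch of LRU bookkeeping in Python, using collections.OrderedDict in place of the doubly linked list mentioned above (illustrative only; the proxy studied in the paper is Squid, and this is not its code):

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: the least recently used entry sits at the front of the dict."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()  # key -> object size, ordered by recency

    def access(self, key, size):
        """Record an access; return True on a hit, False on a miss."""
        if key in self.entries:
            self.entries.move_to_end(key)  # refresh recency on a hit
            return True
        # Miss: evict least recently used objects until the new object fits.
        while self.used + size > self.capacity and self.entries:
            _, evicted_size = self.entries.popitem(last=False)
            self.used -= evicted_size
        self.entries[key] = size
        self.used += size
        return False
```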

Page 13:

Replacement algorithms (cont.)

Least Frequently Used (LFU)
• It evicts the objects with the lowest reference count.
• Its data structure is a heap.
• Its drawback is the problem of cache pollution.
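
A rough sketch of the LFU eviction decision, assuming each cached object carries a reference count (a hypothetical helper, not the authors' code). Cache pollution arises because an object that once accumulated a large count keeps winning this comparison even if it is never requested again. A production implementation would keep the counts in a heap, as the slide notes, so the minimum can be found without the linear scan used here:

```python
def evict_lfu(entries):
    """entries: dict mapping object key -> {'refs': int, 'size': int}.
    Removes and returns the key with the lowest reference count."""
    victim = min(entries, key=lambda k: entries[k]['refs'])
    del entries[victim]
    return victim
```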

Page 14:

Replacement algorithms (cont.)

LFU-Aging (LFUA)
• It considers both an object's access frequency and its age in the cache.
• It avoids the problem of cache pollution.

Page 15:

Replacement algorithms (cont.)

Greedy Dual-Size (GDS)
• It takes object size and a cost function for retrieving objects into account.
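
For reference, the GreedyDual-Size priority as usually formulated (following Cao and Irani; the slide gives only the informal description) assigns each object p the value

```latex
H(p) = L + \frac{c(p)}{s(p)}
```

where c(p) is the cost of retrieving p, s(p) is its size, and L is an inflation value set to the priority of the most recently evicted object; the object with the smallest H(p) is evicted.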

Page 16:

New replacement policies

LFU with Dynamic Aging (LFUDA)
It adds a cache age factor to the reference count when a new object is added or an existing object is referenced.
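
In key form, the description above amounts to (with F_i the reference count of object i and L the running cache age):

```latex
K_i = F_i + L
```

On eviction, L is advanced to the key value of the evicted object, so long-resident objects gradually lose their advantage over newly cached ones.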

Page 17:

New replacement policies (cont.)

GDS-Frequency (GDSF)
It considers reference frequency in addition to object size. It assigns a key to each object:

Key = (reference count of object / object size) + cache age
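
A sketch of how the GDSF key and a heap-based cache might fit together in Python (illustrative only; the class and method names are made up here, and the real proxy's heap maintenance is only approximated):

```python
import heapq

class GDSFCache:
    """Toy GDSF cache: priority key = reference_count / size + cache_age (L)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.cache_age = 0.0   # L: raised to the evicted key on every eviction
        self.meta = {}         # key -> (refs, size, priority)
        self.heap = []         # (priority, key); may contain stale entries

    def _key(self, refs, size):
        return refs / size + self.cache_age

    def access(self, key, size):
        """Record an access; return True on a hit, False on a miss."""
        if key in self.meta:
            refs, size, _ = self.meta[key]
            pri = self._key(refs + 1, size)
            self.meta[key] = (refs + 1, size, pri)
            heapq.heappush(self.heap, (pri, key))  # old heap entry becomes stale
            return True
        # Miss: evict lowest-priority objects until the new object fits.
        while self.used + size > self.capacity and self.meta:
            pri, victim = heapq.heappop(self.heap)
            if victim in self.meta and self.meta[victim][2] == pri:
                self.cache_age = pri               # dynamic aging step
                self.used -= self.meta[victim][1]
                del self.meta[victim]
            # else: stale heap entry, skip it
        pri = self._key(1, size)
        self.meta[key] = (1, size, pri)
        heapq.heappush(self.heap, (pri, key))
        self.used += size
        return False
```

A run over a request trace would call access(url, size) per request and tally the True returns to compute the hit rate.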

Page 18:

Simulation

Two types of cache replacement algorithm

Heap-based, such as GDSF, LFUDA, LRU_H
• Its data structure is a heap.
• Memory usage is allowed to grow to a high watermark before any cache metadata is examined.
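
One way to read the high-watermark point above: the replacement code is not run on every request; eviction is deferred until usage crosses an upper threshold and then proceeds in a batch down to a lower one. A hedged sketch (the threshold values and the evict_one helper are illustrative, not the proxy's actual parameters):

```python
HIGH_WATERMARK = 0.95  # fraction of capacity that triggers eviction (illustrative)
LOW_WATERMARK = 0.90   # fraction eviction drives usage back down to (illustrative)

def maybe_evict(cache):
    """Run the replacement policy only once usage exceeds the high watermark,
    then evict in a batch until usage falls below the low watermark."""
    if cache.used <= HIGH_WATERMARK * cache.capacity:
        return  # common path: no cache metadata is examined
    while cache.used > LOW_WATERMARK * cache.capacity:
        cache.evict_one()  # e.g. pop the minimum-key object from the heap
```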

Page 19:

Two types of cache replacement algorithm (cont.)

List-based, such as LRU_L
• Its data structure is a linked list.
• Object metadata is examined more often, to implement the LRU reference age.

Page 20:

The results of the simulation

Hit rate

Page 21:

Byte hit rate

Page 22:

CPU utilization

Page 23:

Findings

LRU_L has higher and more variable response times than LRU_H because of the extra time spent computing the reference age and choosing the object to evict.

This extra time seems to translate into a higher hit rate for LRU_L.

GDSF shows consistent improvement over LRU.

Page 24:

Conclusion

More sophisticated policies (heap-based, for example) can be implemented without degrading the cache's performance.

The bottleneck is I/O (network and disk), not CPU utilization.

To achieve the greatest latency reduction, optimize the cache hit rate.

Page 25:

When bandwidth is dear, the focus must be on improving the byte hit rate.

The impact of consistency validation of cached objects becomes more significant.