Transcript of Yonsei University lecture slides: Chapter 4, Internal Memory
Yonsei University
Chapter 4
Internal Memory

Yonsei University 4-2
Contents
• Computer Memory System Overview
• Semiconductor Main Memory
• Cache Memory
• Pentium II and PowerPC
Yonsei University 4-3
Key Characteristics of Computer Memory Systems (Computer memory system overview)

Location: Processor; Internal (main); External (secondary)
Capacity: Word size; Number of words
Unit of Transfer: Word; Block
Access Method: Sequential; Direct; Random; Associative
Performance: Access time; Cycle time; Transfer rate
Physical Type: Semiconductor; Magnetic; Optical; Magneto-optical
Physical Characteristics: Volatile/nonvolatile; Erasable/nonerasable
Organization
Yonsei University 4-4
Characteristics of Memory Systems
• Location
– Processor
– Internal (main)
– External (secondary)

Yonsei University 4-5
Capacity
• Internal memory capacity
– Typically expressed in bytes or words
• External memory capacity
– Typically expressed in bytes

Yonsei University 4-6
Unit of Transfer
• Internal memory
– The unit of transfer equals the number of data lines into and out of the memory module
– Often equal to the word length, but may not be
– Word
• The natural unit of organization of memory
• Word size is typically equal to the number of bits used to represent a number and to the instruction length
– Addressable unit
• Usually the word
• Some systems address at the byte level
– Unit of transfer
• The number of bits read out of or written into memory at a time
Yonsei University 4-7
Methods of Accessing
• Sequential access
– Start at the beginning and read through in order
– Access time depends on the location of the data and the previous location
– e.g. tape
• Direct access
– Individual blocks have unique addresses
– Access is by jumping to the vicinity plus a sequential search
– Access time depends on location and previous location
– e.g. disk

Yonsei University 4-8
Methods of Accessing
• Random access
– Individual addresses identify locations exactly
– Access time is independent of location or previous access
– e.g. RAM
• Associative access
– Data is located by comparison with the contents of a portion of the store
– Access time is independent of location or previous access
– e.g. cache

Yonsei University 4-9
Performance
• Access time
– Time it takes to perform a read or write operation (for random-access memory)
– Time it takes to position the read-write mechanism at the desired location (for non-random-access memory)
• Memory cycle time
– Applies to random-access memory
– Consists of access time plus any additional time required before the next access
Yonsei University 4-10
Performance
• Transfer rate
– The rate at which data can be transferred into or out of a memory unit
– For random-access memory, equal to 1/(cycle time)
– For non-random-access memory:

  T_N = T_A + N/R

  where
  T_N = average time to read or write N bits
  T_A = average access time
  N = number of bits
  R = transfer rate, in bits per second (bps)
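A quick numeric sketch of the formula above; the 10 ms / 1 Mbps figures are illustrative assumptions, not from the slides.

```python
# Numeric sketch of T_N = T_A + N/R (illustrative numbers).

def read_time(t_a, n_bits, rate_bps):
    """T_N = T_A + N / R, in seconds."""
    return t_a + n_bits / rate_bps

# 10 ms average access time, 4096 bits at 1 Mbps:
t = read_time(0.010, 4096, 1_000_000)
assert abs(t - 0.014096) < 1e-12          # 10 ms + 4.096 ms
```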
Yonsei University 4-11
Physical Types
• Semiconductor: RAM
• Magnetic: disk and tape
• Optical: CD and DVD
• Others: bubble memory, holograms

Yonsei University 4-12
Physical Characteristics
• Decay
• Volatility
• Erasability
• Power consumption

Yonsei University 4-13
Organization
• The physical arrangement of bits into words
• Not always obvious
• e.g. interleaved memory

Yonsei University 4-14
The Bottom Line
• How much? Capacity
• How fast? Time is money
• How expensive?
Yonsei University 4-15
The Memory Hierarchy
• Relationships
– Faster access time, greater cost per bit
– Greater capacity, smaller cost per bit
– Greater capacity, slower access time
• As one goes down the hierarchy, the following occur
– Decreasing cost per bit
– Increasing capacity
– Increasing access time
– Decreasing frequency of access of the memory by the processor

Yonsei University 4-16
The Memory Hierarchy (figure)

Yonsei University 4-17
The Memory Hierarchy
• Registers: in the CPU
• Internal or main memory: may include one or more levels of cache; "RAM"
• External memory: backing store

Yonsei University 4-18
Performance of a Two-Level Memory (figure)

Yonsei University 4-19
Hierarchy List
• Registers
• L1 cache
• L2 cache
• Main memory
• Disk cache
• Disk
• Optical
• Tape

Yonsei University 4-20
The Memory Hierarchy
• Locality of reference
– During the course of execution of a program, memory references tend to cluster
Yonsei University 4-21
The Memory Hierarchy
• Additional levels can be effectively added to the hierarchy in software
• A portion of main memory can be used as a buffer to hold data that is to be read out to disk
• Such a technique is sometimes referred to as a disk cache

Yonsei University 4-22
So You Want Fast?
• It is possible to build a computer that uses only static RAM (see later)
• It would be very fast
• It would need no cache (how can you cache cache?)
• It would cost a very large amount
Yonsei University 4-23
Semiconductor Memory Types (Semiconductor main memory)

Memory Type                          Category            Erasure                    Write Mechanism  Volatility
Random-access memory (RAM)           Read-write memory   Electrically, byte level   Electrically     Volatile
Read-only memory (ROM)               Read-only memory    Not possible               Masks            Nonvolatile
Programmable ROM (PROM)              Read-only memory    Not possible               Electrically     Nonvolatile
Erasable PROM (EPROM)                Read-mostly memory  UV light, chip level       Electrically     Nonvolatile
Electrically Erasable PROM (EEPROM)  Read-mostly memory  Electrically, byte level   Electrically     Nonvolatile
Flash memory                         Read-mostly memory  Electrically, block level  Electrically     Nonvolatile
Yonsei University 4-24
RAM
• Misnamed, as all semiconductor memory is random access
• Read/write
• Volatile
• Temporary storage
• Static or dynamic
– Dynamic RAM requires periodic charge refreshing to maintain data storage

Yonsei University 4-25
DRAM
• Bits stored as charge in capacitors
• Charges leak
• Needs refreshing even when powered
• Simpler construction
• Smaller cells, so more bits per chip
• Less expensive
• Needs refresh circuits
• Slower
• Used for main memory

Yonsei University 4-26
SRAM
• Bits stored as on/off switches
• No charges to leak
• No refreshing needed when powered
• More complex construction
• Larger cells, so fewer bits per chip
• More expensive
• Does not need refresh circuits
• Faster
• Used for cache

Yonsei University 4-27
Read-Only Memory (ROM)
• Permanent storage
• Applications
– Microprogramming
– Library subroutines
– System programs (BIOS)
– Function tables
• Problems
– The data-insertion step includes a large fixed cost
– No room for error
• Written during manufacture
– Very expensive for small runs
Yonsei University 4-28
Programmable ROM (PROM)
• Nonvolatile, and written only once
– Writing is performed electrically and may be performed by the supplier or the customer after original chip fabrication
– Needs special equipment to program

Yonsei University 4-29
Read-"Mostly" Memory (RMM)
• Erasable Programmable ROM (EPROM)
– Storage cells must be erased by UV light before a write operation
– Read and written electrically
– More expensive than PROM
– Advantage of multiple-update capability

Yonsei University 4-30
Electrically Erasable PROM (EEPROM)
• Takes much longer to write than to read
• Advantages
– Nonvolatility, and being updatable in place using ordinary bus control, address, and data lines

Yonsei University 4-31
Flash Memory
• Erased electrically
• The entire flash memory can be erased in one or a few seconds (faster than EPROM)
• Possible to erase individual blocks
• Not possible to erase at the byte level
• Uses only one transistor per bit
• Achieves high density

Yonsei University 4-32
Memory Cell Operation (figure)
Yonsei University 4-33
Chip Logic
• A 16-Mbit chip can be organized as 1M of 16-bit words
• A one-bit-per-chip system uses 16 chips of 1 Mbit each, with bit 1 of each word in chip 1, and so on
• A 16-Mbit chip can instead be organized as a 2048 x 2048 x 4-bit array
– Reduces the number of address pins
• Row address and column address are multiplexed
• 11 pins address each dimension (2^11 = 2048)
• Adding one more pin doubles the range of values in each dimension, so x4 capacity
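The multiplexed addressing described above can be sketched as follows; the helper name is hypothetical, and only the arithmetic comes from the slide.

```python
# Sketch of multiplexed row/column addressing for a 2048 x 2048 x 4-bit
# DRAM array; 11 address pins are shared between row and column strobes.

ROWS = COLS = 2048          # 2^11 each
PINS = 11                   # address pins, time-multiplexed

def split_address(addr):
    """Split a 22-bit cell address into (row, column)."""
    return addr >> PINS, addr & (COLS - 1)

assert ROWS * COLS * 4 == 16 * 2 ** 20    # 16 Mbit total
assert split_address(0x3FFFFF) == (2047, 2047)
```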
Yonsei University 4-34
Refreshing
• Refresh circuitry is included on the chip
• Disable the chip
• Count through the rows
• Read and write back
• Takes time
• Slows down apparent performance

Yonsei University 4-35
Typical 16-Mbit DRAM (4M x 4) (figure)

Yonsei University 4-36
Typical Memory Package Pins and Signals (figure)

Yonsei University 4-37
Chip Packaging
• Pins support the following signal lines
– Address of the word being accessed
• For 1M words, a total of 20 pins is needed (2^20 = 1M): A0-A19
– Data to be read out, consisting of 8 lines (D0-D7)
– Power supply to the chip (Vcc)
– Ground pin (Vss)
– Chip enable (CE) pin
– Program voltage (Vpp), supplied during programming (write operations)
Yonsei University 4-38
Module Organization
• How a memory module consisting of 256K 8-bit words could be organized
– For 256K words, an 18-bit address is needed, supplied to the module from some external source

Yonsei University 4-39
256-KByte Memory Organization (figure)

Yonsei University 4-40
Module Organization
• A possible organization of a memory consisting of 1M words of 8 bits per word
– Needs four columns of chips, each column containing 256K words

Yonsei University 4-41
1-MByte Memory Organization (figure)
Yonsei University 4-42
Error Correction
• Hard failure
– A permanent physical defect
– The affected memory cell or cells cannot reliably store data
– Stuck at 0 or 1, or switching erratically between 0 and 1
– Caused by harsh environmental abuse, manufacturing defects, and wear
• Soft error
– Random, non-destructive
– No permanent damage to memory
– Caused by power supply problems or alpha particles
• Detected and corrected using a Hamming error-correcting code

Yonsei University 4-43
Error-Correcting Code (figure)

Yonsei University 4-44
Error Correction
• If an M-bit word of data is stored and the code is K bits, then the actual size of the stored word is M + K bits
• A new set of K code bits is generated from the M data bits and compared with the fetched code bits
• The comparison yields one of three results
– No errors detected: the fetched data bits are sent out
– An error is detected and it is possible to correct it: the data bits plus error-correction bits are fed into a corrector, which produces a corrected set of M bits to be sent out
– An error is detected but it is impossible to correct it: this condition is reported

Yonsei University 4-45
Hamming Error-Correcting Code (figure)
Yonsei University 4-46
Hamming Code
• Parity bits
• By checking the parity bits, discrepancies are found in circle A and circle C, but not in circle B
• Only one of the seven compartments is in A and C but not in B
• The syndrome word is the result of a bit-by-bit comparison of the stored and recomputed check bits
• Each bit of the syndrome is 0 or 1 according to whether there is or is not a match in that position between the two inputs
• The syndrome word is K bits wide and has a range between 0 and 2^K - 1, so K must satisfy

  2^K - 1 >= M + K
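A small sketch of the inequality above: find the minimum K for a given M (the function name is hypothetical).

```python
# Minimum number of Hamming check bits K for M data bits, from the
# SEC inequality 2^K - 1 >= M + K.

def min_check_bits(m):
    k = 1
    while (1 << k) - 1 < m + k:   # grow K until the inequality holds
        k += 1
    return k

# e.g. 8 data bits -> 4 check bits, 64 -> 7, 256 -> 9
assert [min_check_bits(m) for m in (8, 64, 256)] == [4, 7, 9]
```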
Yonsei University 4-47
Increase in Word Length

             Single-Error Correction      Single-Error Correction/
                                          Double-Error Detection
Data Bits    Check Bits   % Increase      Check Bits   % Increase
8            4            50              5            62.5
16           5            31.25           6            37.5
32           6            18.75           7            21.875
64           7            10.94           8            12.5
128          8            6.25            9            7.03
256          9            3.52            10           3.91
Yonsei University 4-48
Error Correction
• The 4-bit syndrome is generated and interpreted as follows
– If the syndrome contains all 0s, no error has been detected
– If the syndrome contains one and only one bit set to 1, an error has occurred in one of the 4 check bits; no correction is needed
– If the syndrome contains more than one bit set to 1, the numerical value of the syndrome indicates the position of the data bit in error; this data bit is inverted for correction

Yonsei University 4-49
Layout of Data Bits and Check Bits
• To achieve these characteristics, the data and check bits are arranged into a 12-bit word as depicted
• Bit positions whose position numbers are powers of 2 are designated as check bits

Yonsei University 4-50
Layout of Data Bits and Check Bits (figure)
Yonsei University 4-51
Error Correction
• Each check bit operates on every data bit whose position number contains a 1 in the corresponding column position

  C1 = M1 ⊕ M2 ⊕ M4 ⊕ M5 ⊕ M7
  C2 = M1 ⊕ M3 ⊕ M4 ⊕ M6 ⊕ M7
  C4 = M2 ⊕ M3 ⊕ M4 ⊕ M8
  C8 = M5 ⊕ M6 ⊕ M7 ⊕ M8
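The four equations can be exercised with a short sketch (function names are hypothetical): data bits M1..M8 and check bits C1, C2, C4, C8 stored in a 12-bit word at positions 1..12, with check bits at the power-of-2 positions, as in the layout slides.

```python
# Sketch of the 8-bit Hamming SEC code. The coverage rule (position p
# is covered by check bit c when p & c != 0) is the column rule the
# slide describes.

def encode(data_bits):
    """data_bits = [M1..M8]; returns {position: bit} for the 12-bit word."""
    data_pos = [p for p in range(1, 13) if p & (p - 1)]  # non-powers of 2
    word = dict(zip(data_pos, data_bits))
    for c in (1, 2, 4, 8):
        word[c] = sum(word[p] for p in data_pos if p & c) % 2
    return word

def syndrome(word):
    """0 if no error; otherwise the position of the flipped bit."""
    s = 0
    for c in (1, 2, 4, 8):
        if sum(word[p] for p in word if p & c) % 2:
            s |= c
    return s

w = encode([1, 0, 1, 1, 0, 0, 1, 0])
assert syndrome(w) == 0          # stored word checks out
w[5] ^= 1                        # single-bit error at position 5 (M2)
assert syndrome(w) == 5          # the syndrome names the bad position
w[5] ^= 1                        # invert it back: corrected
```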
Yonsei University 4-52
Check Bit Generation
• The data and check bits are positioned properly in the 12-bit word
• By laying out the position number of each data bit in columns, the 1s in each row indicate the data bits covered by the check bit for that row

Yonsei University 4-53
Check Bit Generation (figure)

Yonsei University 4-54
Hamming SEC-DED Code (figure)

Yonsei University 4-55
Error Correction
• IBM 30xx implementations use an 8-bit SEC-DED code for each 64 bits of data in main memory
• Main memory is therefore 12% larger than is apparent to the user
• VAX computers use a 7-bit SEC-DED code for each 32 bits of memory, for a 22% overhead
• A number of contemporary DRAMs use 9 check bits for each 128 bits of data, for a 7% overhead

Yonsei University 4-56
Error Correction
• Single-error-correcting (SEC)
• Single-error-correcting, double-error-detecting (SEC-DED)
Yonsei University 4-57
Cache and Main Memory (Cache memory)
• A small amount of fast memory
• Sits between normal main memory and the CPU
• May be located on the CPU chip or module

Yonsei University 4-58
Principle
• The CPU requests the contents of a memory location
• The cache is checked for this data
• If present, it is fetched from the cache (fast)
• If not present, the required block is read from main memory into the cache
• It is then delivered from cache to CPU; this pays off because of the phenomenon of locality of reference
• The cache includes tags to identify which block of main memory is in each cache slot
Yonsei University 4-59
Principle
• For mapping purposes, memory is considered to consist of a number of fixed-length blocks of K words each
– Number of blocks: M = 2^n / K
• The cache consists of C lines of K words each
• The number of lines is considerably less than the number of main memory blocks
• Each line includes a tag that identifies which particular block is currently being stored

Yonsei University 4-60
Cache/Main-Memory Structure (figure)

Yonsei University 4-61
Principle
• On a cache hit
– The data and address buffers are disabled and communication is only between processor and cache, with no system bus traffic
• On a cache miss
– The desired address is loaded onto the system bus and the data is returned through the data buffer to both the cache and the main memory

Yonsei University 4-62
Cache Read Operation (figure)
Yonsei University 4-63
Elements of Cache Design
• Cache size
• Mapping function
– Direct
– Associative
– Set associative
• Replacement algorithm
– Least recently used (LRU)
– First in first out (FIFO)
– Least frequently used (LFU)
– Random
• Write policy
– Write through
– Write back
– Write once
• Line size
• Number of caches
– Single or two level
– Unified or split
Yonsei University 4-64
Cache Size
• Small enough that the overall average cost per bit is close to that of main memory alone
• Large enough that the overall average access time is close to that of the cache alone
• The larger the cache, the larger the number of gates involved in addressing it
• Large caches tend to be slightly slower than small ones
• Cache size is also limited by available chip and board area
• It is impossible to arrive at an "optimum" size, because cache performance is very sensitive to the workload

Yonsei University 4-65
Factors for Cache Size
• Cost
– More cache is expensive
• Speed
– More cache is faster (up to a point)
– Checking the cache for data takes time

Yonsei University 4-66
Typical Cache Organization (figure)
Yonsei University 4-67
Mapping Function
• Needed to determine which main memory block currently occupies a cache line
• Example used for the three techniques
– Cache of 64 KBytes
– Cache blocks of 4 bytes
• i.e. the cache has 16K (2^14) lines of 4 bytes
– 16 MBytes of main memory
– 24-bit address (2^24 = 16M)

Yonsei University 4-68
Direct Mapping
• Maps each block of main memory into only one possible cache line

  i = j modulo m

  where
  i = cache line number
  j = main memory block number
  m = number of lines in the cache
Yonsei University 4-69
Direct Mapping Cache Organization (figure)

Yonsei University 4-70
Direct Mapping Cache Line Table
• The least significant w bits identify a unique word within a block
• The most significant s bits specify one memory block
• The MSBs are split into a cache line field of r bits and a tag of s - r bits (the most significant portion)
• The line field identifies one of the m = 2^r lines of the cache

Cache line    Main memory blocks assigned
0             0, m, 2m, …, 2^s - m
1             1, m+1, 2m+1, …, 2^s - m + 1
…             …
m - 1         m - 1, 2m - 1, 3m - 1, …, 2^s - 1

Yonsei University 4-71
Direct Mapping Example (figure)
Yonsei University 4-72
Direct Mapping Address Structure

Tag: s - r = 8 bits | Line or slot: r = 14 bits | Word: w = 2 bits

• 24-bit address
• 2-bit word identifier (4-byte block)
• 22-bit block identifier
– 8-bit tag (= 22 - 14)
– 14-bit slot or line
• No two blocks that map to the same line have the same tag field
• Check the contents of the cache by finding the line and checking the tag
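The 8/14/2 split above can be sketched as follows (helper names are hypothetical):

```python
# Sketch of the direct-mapped address split: 24-bit address ->
# 8-bit tag, 14-bit line, 2-bit word (64-KByte cache, 4-byte lines).

WORD_BITS, LINE_BITS = 2, 14
LINES = 1 << LINE_BITS                    # 16K cache lines

def split(addr):
    """Return the (tag, line, word) fields of a 24-bit address."""
    word = addr & ((1 << WORD_BITS) - 1)
    line = (addr >> WORD_BITS) & (LINES - 1)
    tag = addr >> (WORD_BITS + LINE_BITS)
    return tag, line, word

def line_for_block(j):
    """Direct mapping: block j goes to line j modulo m."""
    return j % LINES

assert split(0xFFFFFC) == (0xFF, 0x3FFF, 0)
assert line_for_block(LINES + 5) == 5     # blocks m apart share a line
```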
Yonsei University 4-73
Direct Mapping
• Advantage
– Simple and inexpensive to implement
• Disadvantage
– There is a fixed cache location for any given block
– If a program repeatedly references words from two blocks that map to the same line, the hit ratio will be low

Yonsei University 4-74
Associative Mapping
• A main memory block can load into any line of the cache
• The memory address is interpreted as a tag and a word field
• The tag uniquely identifies a block of memory
• Every line's tag is examined for a match
• Cache searching gets expensive
Yonsei University 4-75
Fully Associative Cache Organization (figure)

Yonsei University 4-76
Associative Mapping Example (figure)

Yonsei University 4-77
Associative Mapping Address Structure

Tag: 22 bits | Word: 2 bits

• A 22-bit tag is stored with each 32-bit block of data
• Compare the tag field with the tag entries in the cache to check for a hit
• The least significant 2 bits of the address identify a byte within the 4-byte (32-bit) data block
• e.g.
  Address   Tag      Data       Cache line
  FFFFFC    3FFFFF   24682468   3FFF
  (the tag is the upper 22 bits of the address: FFFFFC >> 2 = 3FFFFF)
Yonsei University 4-78
Associative Mapping
• Replacement algorithms are designed to maximize the hit ratio
• Advantage
– Flexibility in replacing a block when a new block is read into the cache
• Disadvantage
– Complex circuitry is required to examine the tags of all cache lines in parallel

Yonsei University 4-79
Set Associative Mapping
• A compromise that reduces the disadvantages of the direct and associative approaches

  m = v × k
  i = j modulo v

  where
  i = cache set number
  j = main memory block number
  m = number of lines in the cache
  v = number of sets
  k = number of lines in each set
• This is referred to as k-way set associative mapping
– With v = m, k = 1, the set associative technique reduces to direct mapping
– With v = 1, k = m, it reduces to associative mapping

Yonsei University 4-80
Set Associative Mapping
• The cache is divided into a number of sets
• Each set contains a number of lines
• A given block maps to any line in a given set
– e.g. block B can be in any line of set i
• e.g. 2 lines per set
– 2-way associative mapping
– A given block can be in one of 2 lines, in only one set

Yonsei University 4-81
k-Way Set Associative Cache Organization (figure)
Yonsei University 4-82
Set Associative Mapping Address Structure

Tag: 9 bits | Set: 13 bits | Word: 2 bits

• Use the set field to determine which cache set to look in
• Compare the tag field to see if we have a hit
• e.g.
  Address      Tag   Data       Set number
  1FF 7FFC     1FF   12345678   1FFF
  001 7FFC     001   11223344   1FFF
Yonsei University 4-83
Set Associative Mapping Example
• 13-bit set number
• Block number in main memory is modulo 2^13
• 000000, 008000, …, FF8000 map to the same set
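The set-field arithmetic above can be checked with a short sketch (the helper name is hypothetical):

```python
# Sketch of the 9/13/2 set-associative split: addresses 0x008000 apart
# fall in the same set.

WORD_BITS, SET_BITS = 2, 13
SETS = 1 << SET_BITS                      # 8K sets

def set_for(addr):
    """Extract the 13-bit set number from a 24-bit address."""
    return (addr >> WORD_BITS) & (SETS - 1)

assert set_for(0x000000) == set_for(0x008000) == set_for(0xFF8000) == 0
assert set_for(0x7FFC) == 0x1FFF          # the slide's example set
```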
Yonsei University 4-84
Two-Way Set Associative Mapping Example (figure)
Yonsei University 4-85
Replacement Algorithms
• When a new block is brought into the cache, one of the existing blocks must be replaced
• Direct mapping
– There is only one possible line for any particular block, so that line must be replaced; no choice is possible
• Associative and set associative techniques
– Need a replacement algorithm
• Least recently used (LRU)
– Most effective
– Replace the block in the set that has been in the cache longest with no reference
– For two-way set associative: when a line is referenced, its USE bit is set to 1 and the USE bit of the other line is set to 0
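The USE-bit scheme above can be sketched for one two-way set; the class and method names are hypothetical, and only the policy comes from the slide.

```python
# Sketch of two-way LRU with a USE bit per line: on replacement, evict
# the line whose USE bit is 0.

class TwoWaySet:
    def __init__(self):
        self.tags = [None, None]   # the two lines of the set
        self.use = [0, 0]          # USE bit per line

    def access(self, tag):
        if tag in self.tags:                  # hit
            i = self.tags.index(tag)
        elif None in self.tags:               # fill an empty way
            i = self.tags.index(None)
            self.tags[i] = tag
        else:                                 # miss: evict the USE == 0 line
            i = self.use.index(0)
            self.tags[i] = tag
        self.use = [0, 0]                     # referenced line gets 1,
        self.use[i] = 1                       # the other line gets 0
        return i

s = TwoWaySet()
s.access('A'); s.access('B'); s.access('A')
s.access('C')                   # 'B' is least recently used: evicted
assert 'B' not in s.tags and {'A', 'C'} <= set(s.tags)
```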
Yonsei University 4-86
Replacement Algorithms
• First in first out (FIFO)
– Replace the block that has been in the cache longest
• Least frequently used (LFU)
– Replace the block that has experienced the fewest references
• Random
– Not based on usage

Yonsei University 4-87
Write Policy
• A cache block must not be overwritten unless main memory is up to date
• If a block in the cache has been altered, main memory must be updated before the block is overwritten
• When multiple processors are attached to the same bus and each processor has its own local cache
– A word altered in one cache may invalidate a word in another cache
• Multiple CPUs may have individual caches
• I/O may address main memory directly

Yonsei University 4-88
Write Through
• All writes go to main memory as well as to the cache
• Multiple CPUs can monitor main memory traffic to keep their local caches up to date
• Disadvantage
– Generates substantial memory traffic and may create a bottleneck
Yonsei University 4-89
Write Back
• Minimizes memory writes
• Updates are initially made in the cache only
• The update bit for the cache slot is set when an update occurs
• If a block is to be replaced, it is written to main memory only if its update bit is set
• Other caches can get out of sync
• I/O must access main memory through the cache
– Complex circuitry and a potential bottleneck
• N.B. typically about 15% of memory references are writes
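The write-back policy above can be sketched with a minimal hypothetical model of one cache line: writes set the update (dirty) bit and touch only the cache, and memory is written only when a dirty line is replaced.

```python
# Sketch of write-back: memory is updated only on replacement of a
# dirty line.

class Line:
    def __init__(self):
        self.tag, self.data, self.dirty = None, None, False

def write(line, tag, value, memory):
    if line.tag != tag and line.dirty:    # replacing a dirty line:
        memory[line.tag] = line.data      # write the old data back first
    line.tag, line.data, line.dirty = tag, value, True

memory = {0x10: 'old'}
line = Line()
write(line, 0x10, 'new', memory)
assert memory[0x10] == 'old'              # write deferred (cache only)
write(line, 0x20, 'x', memory)            # replacement triggers write-back
assert memory[0x10] == 'new'
```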
Yonsei University 4-90
Cache Coherency
• If data in one cache is altered, the corresponding word in main memory, and the same word in other caches, becomes invalid
• Even if a write-through policy is used, other caches may contain invalid data
• A system that prevents this problem is said to maintain cache coherency; approaches include
– Bus watching with write through
– Hardware transparency
– Noncachable memory

Yonsei University 4-91
Bus Watching with Write Through
• Each cache controller monitors the address lines to detect write operations to memory by other bus masters
• If another master writes to a location in shared memory that also resides in the cache memory
– The cache controller invalidates that cache entry
• This strategy depends on the use of a write-through policy by all cache controllers

Yonsei University 4-92
Hardware Transparency
• Additional hardware is used to ensure that all updates to main memory via a cache are reflected in all caches
• If one processor modifies a word in its cache
– The update is written to main memory
– Any matching words in other caches are similarly updated

Yonsei University 4-93
Noncachable Memory
• Only a portion of main memory is shared by more than one processor, and this portion is designated as noncachable
• All accesses to shared memory are cache misses
– Because shared memory is never copied into the cache
• The noncachable memory can be identified using chip-select logic on the high-address bits
Yonsei University 4-94
Line Size
• When a block of data is retrieved and placed in the cache, the desired word and some number of adjacent words are retrieved
• Two specific effects come into play
– Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched
– As a block becomes larger, each additional word is farther from the requested word, and therefore less likely to be needed in the near future

Yonsei University 4-95
Line Size
• The relationship between block size and hit ratio
– Complex
– Depends on the locality characteristics of a particular program
– No definitive optimum value has been found

Yonsei University 4-96
Number of Caches
• On-chip cache
– It is possible to have the cache on the same chip as the processor
– Reduces the processor's external bus activity, speeds up execution times, and increases overall system performance
• Most contemporary designs include both on-chip and external caches
– Two-level cache: the internal cache is designated level 1 (L1) and the external cache level 2 (L2)
Yonsei University 4-97
Number of Caches
• Reasons for including an L2 cache
– If there is no L2 cache and the processor makes an access request for a memory location not in the L1 cache, the processor must access DRAM or ROM memory across the bus
• Poor performance because of slow bus speed and slow memory access time
– If an L2 SRAM cache is used, the missing information can be quickly retrieved
– The effect of using an L2 cache depends on the hit rates in both the L1 and L2 caches
• Another design issue is whether to split the cache into two: one dedicated to instructions and one dedicated to data

Yonsei University 4-98
Number of Caches
• Potential advantages of a unified cache
– For a given cache size, a unified cache has a higher hit rate than split caches because it balances the load between instruction and data fetches automatically
– Only one cache needs to be designed and implemented

Yonsei University 4-99
Number of Caches
• The trend is toward split caches
– Superscalar machines (Pentium II, PowerPC) emphasize parallel instruction execution and prefetching of predicted future instructions
– The key advantage of the split cache design is that it eliminates contention for the cache between the instruction processor and the execution unit
– Important in any design that relies on the pipelining of instructions
YonseiYonsei UniversityUniversity4-100
Pentium II Block DiagramPentium II Block Diagram Pentium Pentium II II and PowerPC and PowerPC
YonseiYonsei UniversityUniversity4-101
Structure Of Pentium II Data Cache Structure Of Pentium II Data Cache • LRU replacement algorithm• Writ-back policy
Pentium Pentium II II and PowerPC and PowerPC
YonseiYonsei UniversityUniversity4-102
Data Cache ConsistencyData Cache Consistency
• To provide cache consistency, data cache supports a protocol MESI(modified/exclusive/shared/invalid)
• Data cache includes two status bit per tag, so that each line can be in one of four states– Modified– Exclusive– Shared– Invalid
MESI Cache Line States

                                M             E             S                I
                                Modified      Exclusive     Shared           Invalid
This cache line valid?          Yes           Yes           Yes              No
The memory copy is...           Out of date   Valid         Valid            -
Copies exist in other caches?   No            No            Maybe            Maybe
A write to this line...         Does not go   Does not go   Goes to bus and  Goes directly
                                to bus        to bus        updates cache    to bus
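The table's write behavior can be sketched as a simplified state machine. The code below is a hedged illustration of MESI transitions for a single line, ignoring bus arbitration and in-flight write-backs; a write to an Invalid line is assumed to go directly to the bus without allocating, as in the table.

```python
from enum import Enum

class State(Enum):
    MODIFIED = "M"
    EXCLUSIVE = "E"
    SHARED = "S"
    INVALID = "I"

def on_local_write(state):
    """Local processor writes the line. M and E copies can be updated
    without bus traffic; an S copy must broadcast an invalidate first;
    a write to an I line goes directly to the bus (no allocation here)."""
    if state in (State.MODIFIED, State.EXCLUSIVE):
        return State.MODIFIED
    if state == State.SHARED:
        return State.MODIFIED   # after invalidating other cached copies
    return State.INVALID        # write sent straight to the bus

def on_snoop_read(state):
    """Another cache reads this line: M and E drop to Shared
    (an M copy is written back to memory first)."""
    if state in (State.MODIFIED, State.EXCLUSIVE):
        return State.SHARED
    return state

def on_snoop_write(state):
    """Another cache writes this line: any local copy becomes Invalid."""
    return State.INVALID

line = State.EXCLUSIVE
line = on_local_write(line)   # E -> M: no bus transaction needed
line = on_snoop_read(line)    # M -> S: modified data written back first
print(line)                   # State.SHARED
```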
Cache Control
• The internal cache is controlled by two bits
– CD (cache disable) and NW (not write-through)
• Two Pentium II instructions are used to control the cache
– INVD: invalidates (flushes) the internal cache memory and signals the external cache to invalidate itself
– WBINVD: writes back and invalidates the internal cache, then writes back and invalidates the external cache
Pentium II Cache Operation Modes

Control Bits          Operating Mode
CD    NW      Cache Fills    Write Throughs    Invalidates
 0     0      Enabled        Enabled           Enabled
 1     0      Disabled       Enabled           Enabled
 1     1      Disabled       Disabled          Disabled
PowerPC Internal Caches

Model            Size        Bytes/Line   Organization
PowerPC 601 (1)  32 KByte    32           8-way set associative
PowerPC 603 (2)   8 KByte    32           2-way set associative
PowerPC 604 (2)  16 KByte    32           4-way set associative
PowerPC 620 (2)  32 KByte    64           8-way set associative

(1) Single cache for instructions and data
(2) Two caches, one for instructions and one for data
PowerPC G3 Block Diagram
PowerPC Cache Organization
• The L1 caches are eight-way set associative and use a version of the MESI cache coherency protocol
• The L2 cache is a two-way set-associative cache with 256 KBytes, 512 KBytes, or 1 MByte of memory
Enhanced DRAM (EDRAM)
• Integrates a small SRAM cache onto a generic DRAM chip
• Refresh operations are conducted in parallel with cache read operations, minimizing the time the chip is unavailable due to refresh
• The read path from the row cache to the output port is independent of the write path from the I/O module to the sense amplifiers
• This enables a subsequent read access to the cache to be satisfied in parallel with completion of a write operation
Advanced DRAM organization
EDRAM
Cache DRAM (CDRAM)
• Includes a larger SRAM cache than EDRAM
• The SRAM on the CDRAM can be used in two ways
– As a true cache, consisting of a number of 64-bit lines
– As a buffer to support the serial access of a block of data
Synchronous DRAM (SDRAM)
• Exchanges data with the processor synchronized to an external clock signal, running at the full speed of the processor/memory bus without imposing wait states
• With synchronous access, the DRAM moves data in and out under control of the system clock
• Employs a burst mode
– To eliminate the address setup time and the row and column line precharge time after the first access
– In burst mode, a series of data bits is clocked out rapidly after the first bit has been accessed
– Useful when all the bits to be accessed are in sequence and in the same row of the array as the initial access
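The benefit of burst mode can be seen with a back-of-the-envelope cycle count. The latency and setup figures below are assumptions for illustration, not datasheet values for any particular SDRAM part.

```python
# Cycle counts for an n-word read with and without burst mode.
SETUP = 2        # assumed address setup + precharge overhead per access
CAS_LATENCY = 3  # assumed cycles from column address to first data word

def single_reads(words):
    # Without burst mode, every word pays setup and latency again.
    return words * (SETUP + CAS_LATENCY + 1)

def burst_read(words):
    # In burst mode, setup and latency are paid once; subsequent bits
    # in the same row are then clocked out at one word per cycle.
    return SETUP + CAS_LATENCY + words

for n in (1, 4, 8, 16):
    print(f"{n:2d} words: single={single_reads(n):3d} cycles, "
          f"burst={burst_read(n):3d} cycles")
```

For a single word the two are identical; the advantage grows with block length, which matches the observation below that SDRAM performs best on large serial transfers.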
SDRAM
• Dual-bank architecture that improves opportunities for on-chip parallelism
• SDRAM performs best when transferring large blocks of data serially
SDRAM
Rambus DRAM (RDRAM)
• A more revolutionary approach to the memory-bandwidth problem
• The chip exchanges data with the processor over 28 wires no more than 12 cm long
• The bus can address up to 320 RDRAM chips and is rated at 500 Mbps
• A special RDRAM bus delivers address and control information using an asynchronous block-oriented protocol
• An RDRAM gets a memory request over the high-speed bus
RamLink
• A memory interface with point-to-point connections arranged in a ring
• Data is exchanged in the form of packets
• Request packets initiate memory transactions
• Provides a scalable architecture that supports a small or large number of DRAMs and does not dictate the internal DRAM structure
RamLink
Characteristics of Two-Level Memories
Appendix 4A

                              Main Memory Cache   Virtual Memory (Paging)   Disk Cache
Typical access time ratios    5/1                 1000/1                    1000/1
Memory management system      Implemented by      Combination of hardware   System software
                              special hardware    and system software
Typical block size            4 to 128 bytes      64 to 4096 bytes          64 to 4096 bytes
Access of processor to        Direct access       Indirect access           Indirect access
second level
Relative Dynamic Frequency

Study      [HUCK83]     [KNUT71]    [PATT82]            [TANE78]
Language   Pascal       FORTRAN     Pascal      C       SAL
Workload   Scientific   Student     System      System  System
Assign     74           67          45          38      42
Loop       4            3           5           3       4
Call       1            3           15          12      12
IF         20           11          29          43      36
GoTo       2            9           -           3       -
Other      -            7           6           1       6
Locality
• Each call is represented by the line moving down and to the right
• Each return is represented by the line moving up and to the right
• A window with a depth equal to 5 is defined
The Call/Return Behavior of Programs
Locality
• Spatial locality
– Refers to the tendency of execution to involve a number of memory locations that are clustered
• Temporal locality
– Refers to the tendency of a processor to access memory locations that have been used recently
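A small sketch makes spatial locality concrete: sequential byte addresses keep landing in the cache line that was just touched, while large strides and random addresses do not. The 64-byte line size and the address streams are illustrative assumptions.

```python
import random

LINE_SIZE = 64  # bytes per cache line (assumed)

def line_hits(addresses):
    """Fraction of accesses landing in the same line as the previous access."""
    hits, last_line = 0, None
    for addr in addresses:
        line = addr // LINE_SIZE
        if line == last_line:
            hits += 1
        last_line = line
    return hits / len(addresses)

sequential = list(range(4096))                     # strong spatial locality
strided = list(range(0, 4096 * 64, 64))            # one access per line
random_addrs = [random.randrange(4096 * 64) for _ in range(4096)]

print(f"sequential: {line_hits(sequential):.2f}")  # ~63 of every 64 in-line
print(f"strided:    {line_hits(strided):.2f}")     # never reuses a line
print(f"random:     {line_hits(random_addrs):.2f}")
```

This is why caches transfer whole lines: a single fill serves the many nearby accesses that spatial locality predicts will follow.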
Locality of Reference for Web Pages
Operation of Two-Level Memory

Ts = H × T1 + (1 − H) × (T1 + T2)
   = T1 + (1 − H) × T2

where
Ts = average (system) access time
T1 = access time of M1 (e.g., cache, disk cache)
T2 = access time of M2 (e.g., main memory, disk)
H  = hit ratio (fraction of time a reference is found in M1)
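A quick numeric check of the formula, with assumed (illustrative) access times:

```python
def average_access_time(hit_ratio, t1, t2):
    """Ts = T1 + (1 - H) * T2: every access pays T1, and only
    the (1 - H) fraction that misses additionally pays T2."""
    return t1 + (1 - hit_ratio) * t2

T1, T2 = 0.01, 0.1   # microseconds, assumed cache vs. main memory times
for h in (0.5, 0.9, 0.95, 0.99):
    print(f"H={h:.2f}: Ts={average_access_time(h, T1, T2):.4f} us")
```

Even a modest improvement in H near 1 moves Ts sharply toward T1, which is the payoff of locality.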
Performance

Cs = (C1 × S1 + C2 × S2) / (S1 + S2)

where
Cs = average cost per bit for the combined two-level memory
C1 = average cost per bit of upper-level memory M1
C2 = average cost per bit of lower-level memory M2
S1 = size of M1
S2 = size of M2

Would like Cs ≈ C2
(Since C1 >> C2, this requires S1 << S2)
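The cost formula can be checked numerically. The cost-per-bit figures below are arbitrary illustrative values; the point is only that Cs approaches C2 as S1 becomes small relative to S2.

```python
def combined_cost(c1, s1, c2, s2):
    """Cs = (C1*S1 + C2*S2) / (S1 + S2): total cost over total size."""
    return (c1 * s1 + c2 * s2) / (s1 + s2)

C1, C2 = 100.0, 1.0   # assumed cost per bit: fast M1 vs. cheap M2
S2 = 1_000_000        # M2 size in bits (assumed)
for s1 in (1_000, 10_000, 100_000):
    cs = combined_cost(C1, s1, C2, S2)
    print(f"S1={s1:>7}: Cs={cs:.3f}")   # nears C2 as S1 shrinks
```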
Memory Cost vs. Memory Size
Performance
• Consider the quantity T1/Ts, which is referred to as the access efficiency:

T1 / Ts = 1 / (1 + (1 − H) × (T2/T1))

• If there is strong locality, it is possible to achieve high values of hit ratio even with a relatively small upper-level memory size
• Small cache sizes will yield a hit ratio above 0.75 regardless of the size of main memory
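Plugging assumed numbers into the access-efficiency formula shows how sensitive efficiency is to the hit ratio when the speed ratio T2/T1 is large:

```python
def access_efficiency(hit_ratio, r):
    """T1/Ts = 1 / (1 + (1 - H) * r), where r = T2/T1 is the
    access-time ratio between the two memory levels."""
    return 1.0 / (1.0 + (1.0 - hit_ratio) * r)

r = 100  # assumed: second level 100x slower than first
for h in (0.9, 0.99, 0.999):
    print(f"H={h}: efficiency={access_efficiency(h, r):.3f}")
```

With r = 100, even H = 0.99 leaves the system at only half of M1's speed, so high hit ratios matter more as the speed gap widens.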
Access Efficiency vs. Hit Ratio (T2/T1)
Hit Ratio vs. Memory Size