Chapter 8 Memory Management, Dr. Yingwu Zhu. Outline: Background, Basic Concepts, Memory Allocation.
Chapter 8: Memory Management
Dr. Yingwu Zhu

Outline
Background
Basic Concepts
Memory Allocation
Background
A program must be brought into memory and placed within a process for it to be run.
User programs go through several steps before being run.
An assumption for this discussion:
◦Physical memory is large enough to hold a process of any size (VM is discussed in Ch. 9)
Multi-step processing of a user program
Outline
Background
Basic Concepts
◦ Logical address vs. physical address
◦ MMU
Memory Allocation
Logical vs. Physical Address Space
Logical address (virtual address)
◦Generated by the CPU, always starting from 0
Physical address
◦Address seen/required by the memory unit
Logical address space is bound to a physical address space
◦Central to proper memory management
Binding logical address space to physical address space
Binding instructions & data into memory can happen at 3 different stages:
◦ Compile time: If the memory location is known a priori, absolute code can be generated; must recompile code if the starting location changes
◦ Load time: Must generate relocatable code if the memory location is not known at compile time
◦ Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. Needs hardware support for address maps (e.g., relocation and limit registers)
◦ Logical address = physical address for compile-time and load-time binding; logical address != physical address for execution-time binding
Memory Management Unit (MMU)
Hardware device
◦Maps logical addresses to physical addresses
Simplest MMU scheme: relocation register
The user program deals with logical addresses; it never sees the real physical addresses
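As a sketch (not from the slides), the relocation-register scheme can be modeled in a few lines of Python; the base value 14000 is a made-up example:

```python
# Relocation-register MMU: every CPU-generated logical address is
# mapped by simply adding the base physical address of the process.
RELOCATION_REGISTER = 14000  # hypothetical base where the process was loaded

def translate(logical_address: int) -> int:
    """Map a logical address to a physical address via the relocation register."""
    return RELOCATION_REGISTER + logical_address

print(translate(0))    # start of the process -> 14000
print(translate(346))  # -> 14346
```

The program itself only ever manipulates the logical addresses 0, 346, ...; the hardware adds the base transparently on every access.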
Outline
Background
Basic Concepts
Memory Allocation
Memory Allocation, How?
In the context of
◦Multiprocessing, competing for resources
◦Memory is a scarce resource shared by processes
Questions:
◦#1: How to allocate memory to processes?
◦#2: What considerations need to be taken in memory allocation?
◦#3: How to manage free space?
Memory Allocation
Contiguous allocation
Non-contiguous allocation: Paging
Contiguous Allocation
Fact: memory is usually split into 2 partitions
◦ Low end for the OS (e.g., interrupt vector)
◦ High end for user processes, where allocation happens
Contiguous Allocation
Definition: each process is placed in a single contiguous section of memory
◦Single-partition allocation
◦Multiple-partition allocation
Contiguous Allocation
Single-partition allocation, needs hardware support
◦ Relocation register: the base physical address
◦ Limit register: the range of logical addresses
Contiguous Allocation
Base register + limit register define a logical address space in memory!
Contiguous Allocation
Single-partition allocation (high-end partition), needs hardware support
◦ Relocation register: the base physical address
◦ Limit register: the range of logical addresses
◦ Protects user processes from each other, and from changing OS code & data
Protection by Base + Limit Registers
[Figure: the logical address from the CPU is compared against the limit register; if it is smaller, it is added to the relocation register to form the physical address sent to memory; otherwise the hardware traps with an addressing error.]
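The base + limit protection scheme can be sketched as follows (register values are hypothetical):

```python
LIMIT_REGISTER = 4096        # hypothetical size of the process's logical space
RELOCATION_REGISTER = 14000  # hypothetical base physical address

def translate(logical_address: int) -> int:
    # Hardware check: the logical address must be below the limit register.
    if not (0 <= logical_address < LIMIT_REGISTER):
        raise MemoryError("trap: addressing error")
    # Otherwise add the relocation register to form the physical address.
    return RELOCATION_REGISTER + logical_address

print(translate(100))  # -> 14100
# translate(5000) would trap: 5000 exceeds the limit register
```

Because every access is checked against the limit first, a process can neither read nor write outside its own partition.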
Contiguous Allocation
Multiple-partition allocation
◦Divide memory into multiple fixed-sized partitions
◦Hole: block of free/available memory; holes of various sizes are scattered throughout memory
◦When a process arrives, it is allocated memory from a hole large enough to hold it; this may create a new, smaller hole
◦OS manages allocated blocks & holes
Multiple-Partition Allocation
[Figure: four memory snapshots, with the OS at the low end. Initially processes 5, 8, and 2 are allocated; removing p8 leaves a hole; adding p9 reuses part of that hole; adding p10 carves another allocation out of the remaining free space.]
Multiple-Partition Allocation
Dynamic storage allocation problem
◦How to satisfy a request of size n from a list of holes?
Three strategies:
◦First fit: allocate the first hole that is large enough
◦Best fit: allocate the smallest hole that is big enough
◦Worst fit: allocate the largest hole
◦First & best fit outperform worst fit (in storage utilization)
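The three fit strategies can be sketched as follows (the hole sizes are made up for illustration):

```python
def first_fit(holes, n):
    """Index of the first hole >= n, or None."""
    for i, h in enumerate(holes):
        if h >= n:
            return i
    return None

def best_fit(holes, n):
    """Index of the smallest hole >= n, or None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, n):
    """Index of the largest hole, if it fits, else None."""
    h, i = max((h, i) for i, h in enumerate(holes))
    return i if h >= n else None

holes = [100, 500, 200, 300, 600]   # hypothetical free-hole sizes
print(first_fit(holes, 212))  # 1 (the 500-byte hole, first big enough)
print(best_fit(holes, 212))   # 3 (the 300-byte hole, tightest fit)
print(worst_fit(holes, 212))  # 4 (the 600-byte hole, largest)
```

Best fit leaves the smallest leftover hole (88 bytes here), while worst fit leaves the largest leftover (388 bytes), which is the intuition behind the utilization claim above.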
Fragmentation
Storage allocation produces fragmentation!
External fragmentation
◦Total available memory space exists to satisfy a request, but it is not contiguous
Internal fragmentation
◦Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition, but it is not being used
◦Why? Memory is allocated in block-sized units rather than byte by byte
Thinking
What fragmentation is produced by multiple-partition allocation?
What fragmentation is produced by single-partition allocation?
Any solution to eliminate fragmentation?
Memory Allocation
Contiguous allocation
Non-contiguous allocation: Paging
Paging
Memory allocated to a process is not necessarily contiguous
◦ Divide physical memory into fixed-sized blocks, called frames (size is a power of 2, e.g., 512B – 16MB)
◦ Divide logical address space into blocks of the same size, called pages
◦ Memory is allocated to a process in one or more frames
◦ Page table: maps pages to frames; a per-process structure
◦ Keep track of all free frames
Paging Example
Address Translation Scheme
Logical address → Physical address
The address generated by the CPU is divided into:
◦Page number (p): used as an index into a page table, which contains the base address of each page in physical memory
◦Page offset (d): combined with the base address to define the physical memory address that is sent to the memory unit
Logical Address
| page number p (m-n bits) | page offset d (n bits) |
Logical address space size: 2^m; page size: 2^n
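The page-number/offset split can be sketched with shifts and masks; the values of m and n below are arbitrary choices for illustration, not from the slides:

```python
m, n = 16, 10  # hypothetical: 16-bit logical addresses, 1KB (2**10) pages

def split(logical_address: int):
    """Split a logical address into (page number, offset)."""
    p = logical_address >> n               # high m-n bits: page number
    d = logical_address & ((1 << n) - 1)   # low n bits: page offset
    return p, d

addr = 5 * (1 << n) + 12   # an address 12 bytes into page 5
print(split(addr))  # (5, 12)
```

Because the page size is a power of 2, the split needs no division: the hardware just routes the high bits to the page table and passes the low bits through.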
Address Translation
Exercise
Consider a process of size 72,776 bytes and a page size of 2,048 bytes.
How many entries are in the page table?
What is the internal fragmentation size?
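One way to check the arithmetic for this exercise (a worked sketch, not an official answer key):

```python
import math

process_size = 72776   # bytes
page_size = 2048       # bytes

pages = math.ceil(process_size / page_size)       # page-table entries needed
internal_frag = pages * page_size - process_size  # unused tail of the last page

print(pages)          # 36 entries
print(internal_frag)  # 952 bytes of internal fragmentation
```

The last page is only partially used (72,776 mod 2,048 = 1,096 bytes), so the remaining 952 bytes of that frame are wasted.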
Discussion
How to implement page tables?
Where to maintain page tables?
Implementation of Page Tables (1)
Option 1: hardware support, using a set of dedicated registers
Case study: 16-bit address, 8KB page size; how many registers are needed for the page table?
Using dedicated registers
◦Pros
◦Cons
Implementation of Page Tables (2)
Option 2: kept in main memory
◦Page-table base register (PTBR) points to the page table
◦Page-table length register (PTLR) indicates the size of the page table
◦Problem? Every data/instruction access now requires 2 memory accesses: one for the page-table entry and one for the data/instruction
Option 2: Using memory to keep page tables
How to handle the 2-memory-accesses problem?
Caching + hardware support
◦Use a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB)
◦Cache page-table entries (LRU, etc.)
◦Expensive but faster
◦Small: 64 – 1,024 entries
Associative Memory
Associative memory: parallel search over (page #, frame #) pairs
Address translation for page number A′:
◦If A′ is in an associative register, get the frame # out
◦Otherwise get the frame # from the page table in memory
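A toy model of the TLB-then-page-table lookup (the mappings below are invented for illustration; in hardware the TLB search is parallel, not a loop or dict probe):

```python
# Small TLB modelled as a dict from page # to frame #;
# the full in-memory page table is the fallback on a miss.
tlb = {5: 21, 7: 3}
page_table = {p: p + 100 for p in range(64)}  # made-up page->frame mapping

def lookup(page: int) -> int:
    if page in tlb:               # TLB hit: no extra memory access
        return tlb[page]
    frame = page_table[page]      # TLB miss: one extra memory access
    tlb[page] = frame             # cache the entry (eviction policy omitted)
    return frame

print(lookup(5))   # 21 (hit)
print(lookup(9))   # 109 (miss, then cached for next time)
```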
Paging with TLB
Effective Memory-Access Time
Associative lookup = b time units
Assume one memory access takes x time units
Hit ratio α: the percentage of times that a page number is found in the associative registers; the ratio is related to the number of associative registers
Effective Access Time (EAT):
EAT = α(x + b) + (1 − α)(2x + b)
Example: memory takes 100ns, TLB 20ns, hit ratio 80%, EAT?
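Plugging the example's numbers into the EAT formula:

```python
x, b = 100, 20   # memory access 100ns, TLB lookup 20ns (from the example)
alpha = 0.80     # 80% hit ratio

# A hit costs one TLB lookup + one memory access (120ns);
# a miss costs one TLB lookup + two memory accesses (220ns).
eat = alpha * (x + b) + (1 - alpha) * (2 * x + b)
print(eat)  # 0.8*120 + 0.2*220 = 140 ns
```

So the TLB brings the average access down from 220ns (always two accesses) to 140ns, a 40% slowdown over raw memory rather than 120%.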
Memory Protection
Memory protection is implemented by associating a protection bit with each frame
Valid-invalid bit attached to each entry in the page table:
◦“valid” indicates that the associated page is in the process’s logical address space, and is thus a legal page
◦“invalid” indicates that the page is not in the process’s logical address space
Memory Protection
Page Table Structure
Hierarchical paging
Hashed page tables
Inverted page tables
Why Hierarchical Paging?
Most modern computer systems support a large logical address space (2^32 – 2^64)
Large page tables
◦ Example: a 32-bit logical address space with 4KB pages gives 2^20 page-table entries; if each entry takes 4 bytes, the page table costs 4MB
◦ Contiguous memory allocation for large page tables may be a problem!
◦ Physical memory may not hold a single large page table!
Hierarchical Paging
Break up the logical address space into multiple page tables
◦The page table itself is also paged!
A simple technique is a two-level page table
Two-Level Paging Example
A logical address (on a 32-bit machine with 4KB page size) is divided into:
◦ A page number consisting of 20 bits (what’s the page table size in bytes?)
◦ A page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
◦ A 10-bit outer page number p1
◦ A 10-bit inner page number p2
Thus, a logical address is as follows:
| page number: p1 (10 bits) | p2 (10 bits) | page offset: d (12 bits) |
where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table
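The 10/10/12 split can be sketched with shifts and masks:

```python
def split_two_level(va: int):
    """Split a 32-bit virtual address into (p1, p2, d) as 10 | 10 | 12 bits."""
    d  = va & 0xFFF           # low 12 bits: page offset
    p2 = (va >> 12) & 0x3FF   # next 10 bits: index into an inner page table
    p1 = (va >> 22) & 0x3FF   # top 10 bits: index into the outer page table
    return p1, p2, d

va = (3 << 22) | (17 << 12) | 42   # outer entry 3, inner entry 17, offset 42
print(split_two_level(va))  # (3, 17, 42)
```

Translation then walks two tables: the outer table entry p1 locates one 4KB page of the page table, and p2 selects the frame number within it.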
Address Translation
2-level 32-bit paging architecture
Address Translation Example
Page Table Structure
Hierarchical paging
Hashed page tables
Inverted page tables
Hashed Page Tables
A common approach for handling address spaces larger than 32 bits
◦The virtual page number is hashed into a page table; this page table contains a chain of elements hashing to the same location
◦Virtual page numbers are compared along this chain, searching for a match; if a match is found, the corresponding physical frame is extracted
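A minimal sketch of a hashed page table with chaining (the bucket count and mappings are invented for illustration):

```python
# Each bucket holds a chain of (virtual page number, frame number)
# pairs whose page numbers hash to the same slot.
NUM_BUCKETS = 8
table = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn: int, frame: int):
    table[vpn % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn: int):
    # Walk the chain, comparing virtual page numbers until a match is found.
    for v, f in table[vpn % NUM_BUCKETS]:
        if v == vpn:
            return f
    return None  # would be a page fault in a real system

insert(0x12345, 7)
insert(0x12345 + NUM_BUCKETS, 9)  # collides into the same bucket
print(lookup(0x12345))  # 7, found after walking the chain
```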
Hashed Page Tables
Page Table Structure
Hierarchical paging
Hashed page tables
Inverted page tables
Inverted Page Tables
Why is it needed? How?
◦One entry for each memory frame
◦Each entry consists of the virtual address of the page stored in that memory frame, with info about the process that owns the page: <pid, page #>
◦One page table system-wide
Pros & Cons
Inverted Page Tables
Pros: reduces memory consumption for page tables
Cons: linear search performance!
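A toy inverted page table, illustrating the one-entry-per-frame layout and the linear-search cost (all entries are invented):

```python
# One entry per physical frame, each holding the (pid, page #)
# of the page currently stored in that frame.
inverted = [
    (1, 0),   # frame 0 holds page 0 of process 1
    (2, 5),   # frame 1 holds page 5 of process 2
    (1, 3),   # frame 2 holds page 3 of process 1
]

def translate(pid: int, page: int):
    # Linear search over all frames: this is the performance drawback.
    for frame, entry in enumerate(inverted):
        if entry == (pid, page):
            return frame
    return None  # not resident

print(translate(1, 3))  # 2
```

Real systems (e.g., PowerPC) pair this layout with a hash table so the search does not scan every frame.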
Exercise
Consider a system with 32GB of virtual memory and a page size of 2KB. It uses 2-level paging. The size of the page directory (outer page table) is 4KB. The physical memory is 512MB.
◦Show how the virtual memory address is split into page directory index, page table index, and offset
◦How many (2nd-level) page tables are there in this system (per process)?
◦How many entries are there in each (2nd-level) page table?
◦What is the size of the frame number (in bits) needed for implementing this?
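A worked sketch of this exercise, assuming 4-byte page-directory entries (the slide does not state the entry size, so that figure is an assumption):

```python
import math

VIRTUAL = 32 * 2**30   # 32GB virtual memory -> 35-bit virtual addresses
PAGE    = 2 * 2**10    # 2KB pages
DIR     = 4 * 2**10    # 4KB page directory
PHYS    = 512 * 2**20  # 512MB physical memory
ENTRY   = 4            # ASSUMPTION: 4-byte directory entries

offset_bits = int(math.log2(PAGE))              # 11-bit offset
dir_entries = DIR // ENTRY                      # 1024 directory entries
dir_bits    = int(math.log2(dir_entries))       # 10-bit directory index
va_bits     = int(math.log2(VIRTUAL))           # 35-bit virtual address
table_bits  = va_bits - dir_bits - offset_bits  # 14-bit page-table index
frame_bits  = int(math.log2(PHYS // PAGE))      # 2**18 frames -> 18 bits

print(dir_bits, table_bits, offset_bits)        # 10 14 11
print(dir_entries, 2**table_bits, frame_bits)   # 1024 tables, 16384 entries, 18
```

So under that entry-size assumption the address splits 10 | 14 | 11, there are 1,024 second-level tables with 2^14 entries each, and frame numbers need 18 bits.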
Summary
Basic concepts
MMU: logical addr. → physical addr.
Memory Allocation
◦Contiguous
◦Non-contiguous: paging
Implementation of page tables
Hierarchical paging
Hashed page tables
Inverted page tables
TLB & effective memory-access time
Notes: Shared Memory
ipcs -m (list System V shared memory segments)
ipcrm (remove an IPC resource)