
Chapter 8. Memory Management Strategy

    Contents

    Background


    Swapping

    Contiguous Memory Allocation

    Paging

    Structure of the Page Table

Segmentation

    Example: Intel Pentium

    8. Memory Management Strategy 2

    Objectives

To provide a detailed description of various ways of organizing memory hardware

    To discuss various memory-management techniques, including

    paging and segmentation

To provide a detailed description of the Intel Pentium, which supports both pure segmentation and segmentation with paging

    8. Memory Management Strategy 3

    8.1 Background

Program must be brought (from disk) into memory and placed within a process for it to be executed

(Figure: a program on disk is loaded into memory and becomes a process with its PCB.)

Main memory and registers are the only storage that the CPU can access directly

    registers are accessible within one CPU clock

main memory access can take many CPU cycles, which can cause a processor stall

Protection of memory is required to ensure correct operation; this protection is provided by hardware

    8. Memory Management Strategy 4


    Base and Limit Registers

    Make sure that each process has a separate memory space

legal address range: base ≤ address < base + limit

    8. Memory Management Strategy 5

    Hardware address protection with base/limit reg.

base and limit registers can be loaded only by the operating system, which uses a special privileged instruction

The operating system can access both OS memory and users' memory
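A minimal sketch of this check, with the register values made up for illustration: the hardware compares every CPU-generated address against base and base + limit and would trap to the OS on a violation.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical base/limit registers, loaded only by the OS
 * via a privileged instruction. */
static uint32_t base_reg  = 300040;
static uint32_t limit_reg = 120900;

/* Returns 1 if the address is legal for the running process,
 * 0 if the hardware would trap to the OS (addressing error). */
static int address_is_legal(uint32_t addr)
{
    return addr >= base_reg && addr < base_reg + limit_reg;
}

int main(void)
{
    printf("%d\n", address_is_legal(300040 + 500));    /* 1: inside the range */
    printf("%d\n", address_is_legal(300040 + 120900)); /* 0: one past the end */
    return 0;
}
```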

    8. Memory Management Strategy 6

    Address Binding

    Address representation

(Figure: source program (symbolic addresses) → compiler → relocatable addresses → linkage editor or loader → absolute addresses)

    bind addresses of instructions and data to memory addresses

    (logical address) (physical address)

    8. Memory Management Strategy 7

    Multistep processing of a user program

    8. Memory Management Strategy 8


    Address Binding Schemes

    Compile time:

If memory location is known a priori, absolute code can be generated; must recompile code if the starting location changes.

    Load time:

Must generate relocatable code if memory location is not known at compile time. Binding is delayed until load time.

Execution time:

Binding delayed until run time if the process can be moved during its execution from one memory segment to another.

Need hardware support for address maps (e.g., MMU)

Most general-purpose operating systems use this method

    8. Memory Management Strategy 9

    Logical vs. Physical Address Space

    logical address vs. physical address

    logical address(virtual address): address seen by a program

    physical address: address seen by the memory unit

Memory-management unit (MMU): the hardware device that maps logical (virtual) addresses to physical addresses at run time

a program deals with logical addresses and never sees the real physical addresses

    Address binding scheme and logical/physical address

compile-time and load-time binding: logical address = physical address

execution-time binding: logical address ≠ physical address

(Figure: in recent processors, the CPU sends a logical address to the MMU, which sends the corresponding physical address to memory.)

    8. Memory Management Strategy 10


    Dynamic relocation using a relocation register

    relocation register = base register

    Intel 80x86 family: 4 relocation registers (CS, DS, ES, SS)

    8. Memory Management Strategy 11

    Dynamic Loading

Dynamic loading: a routine is not loaded into memory until it is called

    Advantages

    Better memory-space utilization; unused routine is never loaded.

    Useful when large amounts of code are needed to handle infrequentlyoccurring cases. (e.g. error routine)

Dynamic loading does not require special support from the operating system

it is the responsibility of the users to design their programs to take advantage of it
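On POSIX systems a program can approximate dynamic loading with dlopen/dlsym; the shared-library name and the routine report_error below are hypothetical, and the sketch only illustrates loading an infrequently used error routine on demand (link with -ldl).

```c
#include <dlfcn.h>
#include <stdio.h>

/* The library name and function name are made up for illustration. */
int main(void)
{
    void *handle = dlopen("./liberror.so", RTLD_LAZY);  /* loaded only when needed */
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    void (*report_error)(const char *) =
        (void (*)(const char *))dlsym(handle, "report_error");
    if (report_error)
        report_error("disk full");      /* infrequently used error routine */

    dlclose(handle);
    return 0;
}
```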

    8. Memory Management Strategy 12


    Modified Swapping

Modified versions of swapping are found on many systems (UNIX, Linux, and Windows)

    Swapping is normally disabled

Swapping starts if many processes are running and are using a threshold amount of memory

    Swapping is again halted when the load is reduced.

(Figure: a process is swapped between memory and backing store via I/O.)

    8. Memory Management Strategy 17

    8.3 Contiguous Memory Allocation

Main memory is usually divided into two partitions:

Resident operating system: usually held in low memory, together with the interrupt vector.

User processes: held in high memory.

In contiguous memory allocation, each process is contained in a single contiguous section of memory.

(Figure: OS and interrupt vector in low memory, user processes in high memory.)

    Relocation register scheme

protect user processes from each other, and from changing operating-system code and data.

Use a relocation register and a limit register

Relocation register: contains the value of the smallest physical address

    Limit register: contains range of logical addresses

The MMU maps the logical address dynamically into a physical address

    8. Memory Management Strategy 18

    Hardware support for relocation and limit registers

0 ≤ logical address < limit: the access is allowed (protection); otherwise trap to the OS (addressing error)
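A small sketch of what the relocation/limit hardware does, with made-up register values: the logical address is checked against the limit register, then the relocation register is added to form the physical address.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint32_t relocation_reg = 100000;  /* smallest physical address of the process */
static uint32_t limit_reg      = 65536;   /* range of legal logical addresses         */

/* Mimics the MMU: check the logical address against the limit register,
 * then add the relocation (base) register to form the physical address. */
static uint32_t translate(uint32_t logical)
{
    if (logical >= limit_reg) {
        fprintf(stderr, "trap: addressing error (logical %u)\n", logical);
        exit(1);
    }
    return logical + relocation_reg;
}

int main(void)
{
    printf("logical 1234 -> physical %u\n", translate(1234));  /* 101234 */
    return 0;
}
```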

    8. Memory Management Strategy 19

    Multiple Partition Allocation

Hole: a block of available memory; holes of various sizes are scattered throughout memory.

    When a process arrives, it is allocated memory from a hole large

    enough to accommodate it.

Operating system maintains information about: (a) allocated partitions, (b) free partitions (holes)

(Figure: successive memory snapshots; the OS occupies low memory while processes 5, 8, 9, 10, and 2 are allocated and released, leaving holes.)

    8. Memory Management Strategy 20


    Dynamic Storage-Allocation Problem

How to satisfy a request of size n from a list of free holes.

First-fit:

Allocate the first hole that is big enough.

Best-fit:

Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size.

Produces the smallest leftover hole.

Worst-fit:

Allocate the largest hole; must also search the entire list.

Produces the largest leftover hole.

(Example figure: for a request of 70K against holes of 100K, 80K, 50K, and 150K, first-fit chooses the 100K hole, best-fit the 80K hole, and worst-fit the 150K hole.)

First-fit and best-fit are better than worst-fit in terms of time and storage utilization
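A sketch of the first-fit and best-fit searches over the hole list from the example above (sizes assumed to be in KB):

```c
#include <stddef.h>
#include <stdio.h>

/* Free-hole sizes in KB, in list order (matches the example above). */
static const size_t holes[] = { 100, 80, 50, 150 };
static const size_t nholes  = sizeof holes / sizeof holes[0];

/* First-fit: take the first hole that is big enough. */
static int first_fit(size_t request)
{
    for (size_t i = 0; i < nholes; i++)
        if (holes[i] >= request)
            return (int)i;
    return -1;
}

/* Best-fit: scan the whole list and take the smallest hole that fits. */
static int best_fit(size_t request)
{
    int best = -1;
    for (size_t i = 0; i < nholes; i++)
        if (holes[i] >= request && (best < 0 || holes[i] < holes[best]))
            best = (int)i;
    return best;
}

int main(void)
{
    printf("request 70K: first-fit -> %zuK, best-fit -> %zuK\n",
           holes[first_fit(70)], holes[best_fit(70)]);   /* 100K and 80K */
    return 0;
}
```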

    8. Memory Management Strategy 21

    Fragmentation

    External fragmentation

total memory space exists to satisfy a request, but it is not contiguous.

50-percent rule: given N allocated blocks, another 0.5N blocks will be lost due to external fragmentation in first-fit

i.e., 1/3 of memory may be unusable (0.5N of the 1.5N total blocks)

    Internal fragmentation

Break memory into fixed-sized blocks and allocate memory in units of blocks; this reduces the overhead of keeping track of holes.

Allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.

(Figure: external fragmentation is unusable space between allocations; internal fragmentation is the unused space between the requested size and the allocated block.)

    8. Memory Management Strategy 22

    Compaction

    Solutions to the external-fragmentation problem

1. compaction

    2. permit noncontiguous physical memory space:

    paging, segmentation

    Compaction

Shuffle memory contents to place all free memory together in one large block.

Compaction is possible only if relocation is dynamic, and is done at execution time.

    pending I/O problem

    Latch job in memory while it is involved in I/O.

    Do I/O only into OS buffers.

    8. Memory Management Strategy 23

    8.4 Paging

Paging permits a process to be allocated noncontiguous physical memory.

    Page and Page frame

Divide physical memory into fixed-sized blocks called page frames (size is 2^n, usually between 512 B and 8 KB).

Divide logical memory into blocks of the same size, called pages.

(Figure: pages 0-3 of a logical address space mapped to scattered page frames in the physical address space.)

    8. Memory Management Strategy 24


    Address Translation Scheme

A logical address is divided into a page number and a page offset, and the page number is translated into a page frame number

logical address: page number (p) | offset (d)

↓ mapping (via the page table)

physical address: page frame number (f) | offset (d)

    Page Table

contains the base address (page frame number) of each page in physical memory (page size = 2^n)

the page number is used as an index into the page table
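A toy translation routine, assuming a 4 KB page size and a small made-up page table, that splits a logical address into (p, d), indexes the page table with p, and rebuilds the physical address from (f, d):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12u                    /* page size = 2^12 = 4 KB */
#define PAGE_SIZE (1u << PAGE_BITS)

/* A tiny page table: page_table[p] holds the frame number f for page p. */
static const uint32_t page_table[] = { 5, 6, 1, 2 };

/* Split the logical address into (p, d), look p up, and rebuild (f, d). */
static uint32_t translate(uint32_t logical)
{
    uint32_t p = logical >> PAGE_BITS;       /* page number        */
    uint32_t d = logical & (PAGE_SIZE - 1);  /* offset within page */
    uint32_t f = page_table[p];              /* page frame number  */
    return (f << PAGE_BITS) | d;
}

int main(void)
{
    /* page 1, offset 0x123 -> frame 6, offset 0x123 */
    printf("0x%x -> 0x%x\n", 0x1123u, translate(0x1123u));
    return 0;
}
```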

    8. Memory Management Strategy 25

    Paging hardware: page table

    8. Memory Management Strategy 26

    Paging Example

    8. Memory Management Strategy 27

    Page Allocation

    Page allocation


To run a program of size n pages, need to find n free frames and load the program into the allocated frames.

Set up a page table to translate logical to physical addresses.

    (See next slides)

    Internal fragmentation

no external fragmentation

memory allocation unit: page frame

internal fragmentation averages half a page per process

smaller page size → smaller internal fragmentation, but a larger page table

larger page size → larger internal fragmentation, but more efficient disk I/O and a smaller page table

    Some CPUs and kernels support multiple page sizes

    8. Memory Management Strategy 28


    Free frames

    before allocation after allocation

    8. Memory Management Strategy 29

    Frame table and Page table

    Frame table

    indicate whether the frame is free or allocated, and

if it is allocated, to which page of which process

The operating system maintains a copy of the page table for each process; this increases the context-switch time

(Figure: the PCB holds the process state, PC, and register values that are restored into the CPU on a context switch.)

    8. Memory Management Strategy 30

    Structure of Page Table

    Hardware implementation of the page table

1. page table is kept in a set of dedicated registers

satisfactory only if the page table is reasonably small (256 entries or less)

    2. page table is kept in main memory

page-table base register (PTBR): points to the page table

page-table length register (PTLR): indicates the size of the page table

(Figure: the page table is held either in dedicated registers or in memory located via the PTBR and PTLR; these registers are loaded/modified only by privileged instructions.)

    8. Memory Management Strategy 31

    Page Table in memory and TLB

Every data/instruction access requires two memory accesses:

one for the page-table entry and one for the data/instruction

Translation Look-aside Buffer (TLB)

this two-access delay would otherwise be intolerable

A special fast-lookup hardware cache for page-table entries, also called associative memory

The two-memory-access problem can be solved by the use of TLBs

    Some TLBs store address-space identifier(ASIDs) in each TLB entry

If the TLB does not support separate ASIDs, then every time a new page table is selected (e.g., on a context switch), the TLB must be flushed.

(Figure: the TLB is consulted before the page table in memory.)

    8. Memory Management Strategy 32


    Translation Lookaside Buffer(TLB)

TLB

each entry holds a (page number, page frame number) pair

some TLBs use a set-associative mapped cache

Address translation for (p, d)

If page number p is in the TLB, get the frame number from it → TLB hit

Otherwise, get the frame number from the page table in memory → TLB miss (and copy the frame number from the page table into the TLB)

If the TLB is full of entries, the OS selects one entry for replacement

    replacement policies: LRU(least recently used), random

    8. Memory Management Strategy 33

    Paging hardware with TLB

    8. Memory Management Strategy 34

    Effective Memory Access Time

    (Example)

    TLB Lookup time = 20 ns

    memory cycle time = 100 ns

    Hit ratio = 80 %

    (percentage of times that a page number is found in the TLB)

    Effective memory Access Time (EAT)

80%: access time = 20 + 100 = 120 ns

20%: access time = 20 + 100 + 100 = 220 ns

EAT = 120 × 0.8 + 220 × 0.2 = 140 ns

    hit ratio is related to the size of TLB

16 entries: hit ratio = 80%, EAT = 140 ns

a larger TLB (more entries) gives a higher hit ratio and a lower EAT
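The same EAT computation written out, using the example's figures (20 ns TLB lookup, 100 ns memory access, 80% hit ratio):

```c
#include <stdio.h>

/* Effective access time with a TLB:
 *   hit:  TLB lookup + one memory access
 *   miss: TLB lookup + page-table access + memory access */
int main(void)
{
    double tlb = 20.0, mem = 100.0, hit = 0.80;
    double eat = hit * (tlb + mem) + (1.0 - hit) * (tlb + mem + mem);
    printf("EAT = %.0f ns\n", eat);   /* 0.8*120 + 0.2*220 = 140 ns */
    return 0;
}
```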

    8. Memory Management Strategy 35

    Memory Protection

    Protection bits

associated with each frame, they provide a finer level of memory protection

define a page to be read-write, read-only, or execute-only

Valid-invalid bit: a valid bit (V) is attached to each entry in the page table

    V=1 (valid): legal page (the associated page is in the process

    logical address space)

    V=0 (invalid): illegal page

(Figure: an example page table in which each entry holds a frame number, protection bits P such as rw, ro, or eo, and a valid bit V.)
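One possible (illustrative, not a real MMU format) page-table entry layout with a frame number, valid bit, and protection bits, plus the check the hardware would perform:

```c
#include <stdint.h>
#include <stdio.h>

/* A sketch of a page-table entry: frame number plus a valid bit
 * and read/write/execute protection bits. */
struct pte {
    uint32_t frame   : 20;
    uint32_t valid   : 1;   /* V = 1: legal page, V = 0: illegal page */
    uint32_t read    : 1;
    uint32_t write   : 1;
    uint32_t execute : 1;
};

/* The hardware traps if the page is invalid or the access is not allowed. */
static int access_ok(struct pte e, int want_write)
{
    if (!e.valid)               return 0;   /* trap: invalid page          */
    if (want_write && !e.write) return 0;   /* trap: protection violation  */
    return 1;
}

int main(void)
{
    struct pte ro_page = { .frame = 3, .valid = 1, .read = 1, .write = 0, .execute = 0 };
    printf("read  -> %d\n", access_ok(ro_page, 0));  /* 1 */
    printf("write -> %d\n", access_ok(ro_page, 1));  /* 0 */
    return 0;
}
```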

    8. Memory Management Strategy 36


    Valid or Invalid Bit in a Page Table

    8. Memory Management Strategy 37

    Shared Pages

    An advantage of paging is the possibility of sharing common code

    Shared code

If code is reentrant code (non-self-modifying, read-only), it can be shared.

    Two or more processes can execute the same code at the same time

    Only one copy of the code is kept in physical memory.

e.g., text editors, compilers, window systems

    Each process has its own copy of registers and data storage

Shared code must appear in the same location in the logical address space of all processes

    Private code and data

Each process keeps a separate copy of the code and data

The pages for the private code and data can appear anywhere

    in the logical address space

    8. Memory Management Strategy 38

    Shared Pages Example

    8. Memory Management Strategy 39

    8.5 Structure of the Page Table

    Hierarchical Paging

    Hashed Page Tables

    Inverted Page Tables

    8. Memory Management Strategy 40


    Hierarchical Page table (Multilevel Paging)

    Problem of large page table

For a large logical address space (2^32 to 2^64), the page table becomes excessively large

a contiguous page table may be larger than the page size

Solution: divide the page table into smaller pieces → multilevel paging

logical address = (p1, p2, d); the PTBR locates the page directory, p1 indexes the page directory, p2 indexes the selected page table to yield frame f, and the physical address is (f, d)

    8. Memory Management Strategy 41


    Two-Level Page-Table Scheme

    8. Memory Management Strategy 42

    Two-Level Paging Example

A logical address (on a 32-bit machine with a 4K page size), e.g., the Intel 80386:

p1 (10 bits) | p2 (10 bits) | d (12 bits)

page number | offset

Since the page table is paged, the page number is further divided into: a 10-bit index into the outer page table (p1), and a 10-bit displacement within the page of the page table (p2).

(page size = 2^12 = 4 KB, page table size = 2^10 × 4 B/entry = 4 KB)
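A small sketch of splitting a 32-bit logical address into the 10/10/12-bit fields of the example above (field positions assumed from the sizes given):

```c
#include <stdint.h>
#include <stdio.h>

/* 32-bit logical address with 4 KB pages:
 * 10-bit outer index p1 | 10-bit inner index p2 | 12-bit offset d. */
static void split(uint32_t addr, uint32_t *p1, uint32_t *p2, uint32_t *d)
{
    *p1 = (addr >> 22) & 0x3FF;
    *p2 = (addr >> 12) & 0x3FF;
    *d  = addr & 0xFFF;
}

int main(void)
{
    uint32_t p1, p2, d;
    split(0x00403123u, &p1, &p2, &d);
    printf("p1=%u p2=%u d=0x%x\n", p1, p2, d);  /* p1=1 p2=3 d=0x123 */
    return 0;
}
```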

    VAX

s (2 bits) | p (21 bits) | d (9 bits)

page size = 512 B; page table size of each section: 2^21 × 4 = 8 MB

    8. Memory Management Strategy 43

Paging the user-process page tables reduces main-memory use

    Multilevel Paging

Three-level paging example (64-bit address)

2nd outer page p1 (32 bits) | outer page p2 (10 bits) | inner page p3 (10 bits) | offset d (12 bits)

the 2nd outer page table may still be too large

a single translation can require four memory accesses

For 64-bit architectures, hierarchical page tables are generally considered inappropriate.

    8. Memory Management Strategy 44



    Hashed Page Tables

    Common in address spaces > 32 bits.

    Hashed Page tables

    The virtual page number is hashed into a page table.

This page table contains a chain of elements hashing to the same location.

Each element contains the virtual page number, the value of the mapped page frame, and a pointer to the next element in the chain.

Virtual page numbers are compared in this chain, searching for a match. If a match is found, the corresponding physical frame is extracted.
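A toy hashed page table along these lines; the structure names and bucket count are made up, and the hash is simply the virtual page number modulo the table size:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Each bucket holds a chain of (virtual page number, frame number) elements. */
struct elem {
    uint32_t vpn;
    uint32_t frame;
    struct elem *next;
};

#define NBUCKETS 8u
static struct elem *table[NBUCKETS];

static void insert(uint32_t vpn, uint32_t frame)
{
    struct elem *e = malloc(sizeof *e);
    e->vpn = vpn;
    e->frame = frame;
    e->next = table[vpn % NBUCKETS];   /* hash on the virtual page number */
    table[vpn % NBUCKETS] = e;
}

/* Walk the chain for the bucket, comparing virtual page numbers. */
static int lookup(uint32_t vpn, uint32_t *frame)
{
    for (struct elem *e = table[vpn % NBUCKETS]; e; e = e->next)
        if (e->vpn == vpn) { *frame = e->frame; return 1; }
    return 0;
}

int main(void)
{
    uint32_t f;
    insert(42, 7);
    insert(50, 3);          /* 50 % 8 == 42 % 8: same chain */
    if (lookup(42, &f))
        printf("vpn 42 -> frame %u\n", f);   /* frame 7 */
    return 0;
}
```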

    8. Memory Management Strategy 45

    a chain of elements (virtual, physical)

    8. Memory Management Strategy 46

    Inverted Page Table

    Problem of page table

    one entry for each page, or one slot for each virtual address

    each page table may be large (millions of entries).

solution: inverted page table

Inverted Page Table

only one page table in the system

    only one entry for each page of physical memory.

    decreases memory needed to store a page table,

    Increases time needed to search the table when a page reference

    occurs. (sequential search)

Use a hash table to limit the search to one, or at most a few, page-table entries; this requires at least two memory reads (hash table, then page table)
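A minimal illustration of an inverted page table: one entry per frame, searched sequentially for a (pid, virtual page) match; the table contents are made up.

```c
#include <stdint.h>
#include <stdio.h>

/* Inverted page table: one entry per physical frame, recording which
 * (process id, virtual page) currently occupies that frame. */
struct ipt_entry { int pid; uint32_t vpn; };

static struct ipt_entry ipt[] = {
    { 1, 10 }, { 2, 10 }, { 1, 11 }, { 2, 99 },
};

/* Sequential search: the frame number is the index of the matching entry. */
static int find_frame(int pid, uint32_t vpn)
{
    for (int f = 0; f < (int)(sizeof ipt / sizeof ipt[0]); f++)
        if (ipt[f].pid == pid && ipt[f].vpn == vpn)
            return f;
    return -1;   /* no frame holds this page */
}

int main(void)
{
    printf("pid 2, page 10 -> frame %d\n", find_frame(2, 10));  /* frame 1 */
    return 0;
}
```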

    8. Memory Management Strategy 47

    Inverted Page Table Architecture

    IBM RT

    8. Memory Management Strategy 48


    8.6 Segmentation

    An important aspect of memory-management scheme

physical memory: a linear array of bytes (physical addresses)

paging: also a linear array of bytes (logical addresses)

What is a preferable, real user view of memory?

    A program is a collection of segments.

    main program, procedure, function, local variables,

    global variables, common block, stack, symbol table, arrays

Segmentation is a memory-management scheme that supports this user view of memory

    8. Memory Management Strategy 49

    Logical View of Segmentation

(Figure: numbered segments in user space, such as the subroutine, main program, and symbol table, are mapped to separate regions of physical memory.)

    8. Memory Management Strategy 50

    Segmentation Architecture

Logical address consists of a two-tuple:

<segment-number, offset>

    Segment table

maps two-dimensional logical addresses into one-dimensional physical addresses; each table entry has:

    base contains the starting physical address of segment

    limit specifies the length of the segment.

    Segment-table base register (STBR)

points to the segment table's location in memory.

    Segment-table length register (STLR)

indicates the number of segments used by a program

a segment number s is legal if s < STLR.
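A sketch of segment-table translation for <segment-number, offset>; the segment base/limit values are made-up examples, and STLR is modeled as the table length:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Segment-table entry: base and limit of one segment. */
struct seg { uint32_t base, limit; };

static struct seg seg_table[] = {
    { 1400, 1000 },   /* segment 0 */
    { 6300,  400 },   /* segment 1 */
    { 4300, 1100 },   /* segment 2 */
};
#define STLR (sizeof seg_table / sizeof seg_table[0])

/* Translate <segment-number, offset> into a physical address. */
static uint32_t translate(uint32_t s, uint32_t offset)
{
    if (s >= STLR || offset >= seg_table[s].limit) {
        fprintf(stderr, "trap: addressing error (s=%u, offset=%u)\n", s, offset);
        exit(1);
    }
    return seg_table[s].base + offset;
}

int main(void)
{
    printf("<2, 53> -> %u\n", translate(2, 53));   /* 4300 + 53 = 4353 */
    return 0;
}
```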

    8. Memory Management Strategy 51

    Segmentation Hardware

    STBR STLR

    8. Memory Management Strategy 52


    Protection and Sharing

    Relocation

dynamic, and done by the segment table

    Protection.

    Each entry in segment table contains

    protection bit: read/write/execute privileges

    Instruction segment can be defined as read-only or execute-only.

By placing an array in its own segment, the segmentation hardware will automatically check that array indexes are legal and do not stray outside the array boundaries

Since segments vary in length, memory allocation is a dynamic storage-allocation problem.

    8. Memory Management Strategy 53

    Example of Segmentation

    8. Memory Management Strategy 54

    8.7 Example: Intel Pentium

    Support both segmentation and segmentation with paging

in protected addressing mode

    Logical to physical address translation

    8. Memory Management Strategy 55

    Segmentation and Paging

    Support both segmentation and segmentation with paging

in protected addressing mode

segmentation:

two segment tables: LDT (local descriptor table) and GDT (global descriptor table)

segment selector: s = segment number (13 bits), g = GDT/LDT indicator (1 bit), p = privilege level (2 bits)

translates a 46-bit logical address (selector + 32-bit offset) into a 32-bit linear address

paging:

a two-level paging scheme

linear address = dir (10 bits) | page (10 bits) | offset (12 bits)

translates a 32-bit linear address into a 32-bit physical address
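A small sketch decoding a selector and a linear address according to the field widths above; the selector bit layout (2-bit privilege, 1-bit GDT/LDT, 13-bit index) follows the x86 convention, and the example values are made up:

```c
#include <stdint.h>
#include <stdio.h>

/* Decode a segment selector (16 bits): 13-bit index (s), 1-bit table
 * indicator (g: GDT/LDT), 2-bit privilege level (p), and a 32-bit linear
 * address: 10-bit dir | 10-bit page | 12-bit offset. */
int main(void)
{
    uint16_t selector = 0x0017;                 /* example value      */
    uint16_t p = selector & 0x3;                /* privilege level    */
    uint16_t g = (selector >> 2) & 0x1;         /* 0 = GDT, 1 = LDT   */
    uint16_t s = selector >> 3;                 /* descriptor index   */
    printf("selector: s=%u g=%u p=%u\n", s, g, p);

    uint32_t linear = 0x00403123u;
    uint32_t dir    = linear >> 22;             /* page directory index */
    uint32_t page   = (linear >> 12) & 0x3FF;   /* page table index     */
    uint32_t offset = linear & 0xFFF;           /* offset in page       */
    printf("linear: dir=%u page=%u offset=0x%x\n", dir, page, offset);
    return 0;
}
```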

    8. Memory Management Strategy 56


(Figure: the GDTR and LDTR locate the descriptor tables; CR3 points to the page directory.)

    8. Memory Management Strategy 57

    Three-level Paging in Linux

Linux has adopted a three-level paging model so that it can run on a variety of hardware platforms.

    8. Memory Management Strategy 58