CIS250 OPERATING SYSTEMS

Memory Management

• Since we share memory, we need to manage it

• The memory manager sees only addresses

• The program counter value indicates which memory address the CPU should fetch next

• The fetched instruction is decoded and executed; results may be stored back to memory

Chapter 9 - Memory Management

• Background
– Memory is an array of bytes, each with its own address
– The CPU gets the instruction from memory based on the program counter value
– The memory unit sees the data as a stream of addresses

• The program is compiled and converted into an executable

• Program is loaded into memory and associated with a process; the process accesses the data and instructions from memory

• Process is swapped in and out

Binding

• Setting the actual memory addresses a program will use (the job of the relocator)

• The sequence of addressing by a program:
– Address Binding
– Dynamic Loading
– Dynamic Linking
– Overlays

Address Binding

• Binding the instructions and data to memory addresses

• The source program's memory addresses are symbolic (e.g., variable names)
– The compiler binds these symbolic addresses to relocatable addresses (e.g., "14 bytes from the beginning of the module")
– The loader binds the relocatable addresses to absolute addresses (e.g., 74014 if the module is loaded at address 74000)

• The binding of instructions and data to memory addresses can be done at:
– compile time - if you know at compile time where the program will reside in memory, absolute code is generated (e.g., .COM files in MS-DOS)
– load time - the address is not known at compile time, so the compiler generates relocatable code (offsets within the program); final binding is done by the loader
– execution time - used when a process can be moved during execution; binding is delayed until run time and requires special hardware support
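A minimal sketch of the load-time step, using the slide's example of a relocatable address "14 bytes from the beginning of the module" and an assumed load base of 74000:

#include <stdio.h>

int main(void) {
    int relocatable = 14;      /* compiler output: offset from the start of the module */
    int load_base   = 74000;   /* chosen by the loader at load time (assumed value)    */
    int absolute    = load_base + relocatable;
    printf("absolute address = %d\n", absolute);   /* prints 74014 */
    return 0;
}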

Dynamic Loading

• A routine is not loaded until it is called, which gives good memory utilization
– example: the main program is loaded into memory, but a function is not loaded until it is called

• The relocatable linking loader loads the routine and updates the program's address table

• Advantage: unused routines never take up memory (e.g., error routines); this is done programmatically, with no extra support from the O/S
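As a concrete illustration of the idea (the slide describes the general mechanism, not this particular API), here is a sketch using the POSIX dlopen/dlsym calls, assuming a Linux system where the math library is libm.so.6; the library is not brought into the process until the program asks for it:

#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    /* The math library is not mapped into the process until this call. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Look up the routine by name, then call it through the pointer. */
    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}

Build with: cc demo.c -ldl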

Dynamic Linking

• The opposite of static linking. With static linking, the language libraries are combined into the binary program image (which takes up more space)

• With dynamic linking, linking is postponed until run time. A stub tells the linker (via an address) how to locate the library routine

• The O/S must allow multiple processes to access the same memory addresses, so one copy of the library can be shared

Overlays

• Segments of routines

• Keep only the data and instructions that are currently needed in memory; when another segment is needed, overlay A is loaded over overlay B in memory

• Useful when the total program size is greater than the memory allocated to the program

• No extra support from the O/S; the programmer must ensure that overlays don't interfere with each other

• Used on systems with low resources

Address Space

• Logical - address generated by the CPU

• Physical - the address loaded into the memory address register; the set of all physical addresses corresponding to the logical addresses. The user program never sees these addresses.

• For compile-time and load-time address binding, the logical and physical addresses are the same

• With execution-time address binding, the logical (virtual) and physical addresses differ. This is central to proper memory management.

• MMU (memory management unit) - maps virtual addresses to physical addresses. The base register is the relocation register; its value is added to every address the user process generates (e.g., logical 0 + relocation 14000 = physical 14000)
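A minimal sketch of that mapping, using the slide's relocation value of 14000 plus an assumed limit value for the protection check discussed later under contiguous allocation:

#include <stdio.h>
#include <stdlib.h>

#define RELOCATION 14000u   /* relocation (base) register, loaded at swap-in */
#define LIMIT       4000u   /* limit register: size of the process (assumed) */

/* Every logical address generated by the CPU passes through this mapping. */
unsigned mmu_translate(unsigned logical) {
    if (logical >= LIMIT) {               /* out of range: trap to the O/S   */
        fprintf(stderr, "trap: addressing error\n");
        exit(EXIT_FAILURE);
    }
    return RELOCATION + logical;          /* physical = relocation + logical */
}

int main(void) {
    printf("logical 0   -> physical %u\n", mmu_translate(0));    /* 14000 */
    printf("logical 346 -> physical %u\n", mmu_translate(346));  /* 14346 */
    return 0;
}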

Swapping

• Copying a process from memory to disk to free up memory space for other processes

Swapping

• A process is swapped out of memory and placed in the backing store (a fast disk; the O/S keeps a ready queue of the swapped-out processes)
– If binding was done at assembly or load time, the process must be swapped back into the same location
– If binding is done at execution time, the process can be placed anywhere

• Context switch time - the time it takes to swap one process out and swap another in

• The transfer time depends on the amount of memory swapped; it should be small relative to the CPU time. The swap space is kept separate from the file system.
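A worked example of that transfer time (the process size and disk rate below are assumptions for illustration, not figures from the slides):

#include <stdio.h>

int main(void) {
    double process_mb = 100.0;   /* size of the memory image being swapped */
    double disk_mb_s  = 50.0;    /* transfer rate of the backing store     */

    double one_way = process_mb / disk_mb_s;   /* swap out: 2 seconds      */
    double total   = 2.0 * one_way;            /* swap out + swap in       */
    printf("swap out: %.1f s, swap out + swap in: %.1f s\n", one_way, total);
    return 0;
}

Several seconds per swap is enormous next to a typical CPU quantum, which is why the amount of memory actually transferred should be kept as small as possible.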

• When you swap a process out, make sure it is completely idle; otherwise a pending operation (e.g., I/O) may try to access memory that now belongs to the new process

• In Windows 3.1, the user controls swapping; in Windows NT, MMU features let the scheduler decide which process to swap

• The relocation (base) register holds the lowest physical address for a process; adding it to each user-process address gives the actual physical address

• Main memory must hold both the O/S and the user programs

• Memory is partitioned between the O/S and the user processes; the O/S is usually loaded in low memory (near the interrupt vector)

Contiguous Allocation

• Single Partition - protect the O/S from user processes and protect user processes from each other, using a relocation register and a limit register. The MMU maps addresses dynamically, and the registers are loaded as part of the context switch.

• Multiple Partition - divide memory into fixed-size partitions, each holding one process. The degree of multiprogramming equals the number of partitions. Used on the IBM 360.

• Fragmentation
– External
– Internal

Fixed Partitioning

• Batch processing

• The O/S keeps track of which memory is available; each available block is called a hole. When a process needs memory, the O/S searches the list of holes for one that is large enough.

• FCFS and RR examples.

Fixed Partitioning

• Algorithms

– First fit - allocate the first hole that's big enough; fastest
– Best fit - allocate the smallest hole that's big enough; good for storage utilization, but you must search the entire list unless it is sorted by size
– Worst fit - allocate the largest hole; also searches the entire list unless sorted; produces the largest leftover hole, which can then be allocated to other processes
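A sketch of the first-fit search over the O/S's list of holes (the struct and names are hypothetical; best fit and worst fit differ only in which candidate hole is chosen):

#include <stddef.h>

struct hole { size_t start; size_t size; };

/* Return the index of the first hole big enough for `request`, or -1. */
int first_fit(const struct hole *holes, int nholes, size_t request) {
    for (int i = 0; i < nholes; i++)
        if (holes[i].size >= request)
            return i;     /* stop at the first hole that fits: fastest */
    return -1;            /* no hole is large enough                   */
}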

Contiguous Allocation

• Single Partition

• Multiple Partition

• Fragmentation
– External
– Internal

Paging

• Method

• Page Table Structure

• Multilevel Paging

• Inverted Page Table

• Shared Pages

Segmentation

• Method

• Hardware

• Implementation of Segment Tables

• Protection and Sharing

• Fragmentation

Chapter 10 - Virtual Memory

Virtual Memory

• Background

• Definition

Demand Paging

• Definition

• Process

• Hardware support
– Page Table
– Secondary memory

Demand Paging Performance

• Page fault

• Effective access time computation
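The effective access time weighs the normal memory access time against the much larger page-fault service time by the page-fault rate p; the numbers below are typical illustrative values, not figures from the slides:

#include <stdio.h>

int main(void) {
    double mem_access = 200.0;       /* memory access time in ns             */
    double fault_time = 8000000.0;   /* page-fault service time in ns (8 ms) */
    double p          = 0.001;       /* page-fault rate                      */

    /* EAT = (1 - p) * memory access time + p * page-fault service time */
    double eat = (1.0 - p) * mem_access + p * fault_time;
    printf("effective access time = %.1f ns\n", eat);   /* about 8199.8 ns */
    return 0;
}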

Page Replacement

• Process

• Algorithms
– FIFO
– Optimal
– LRU
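A sketch of the simplest of these, FIFO replacement, run over a short reference string (the frame count and reference string are made up for illustration):

#include <stdio.h>
#include <stdbool.h>

#define FRAMES 3

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};
    int n = (int)(sizeof refs / sizeof refs[0]);
    int frames[FRAMES];
    int oldest = 0, loaded = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < loaded; j++)
            if (frames[j] == refs[i]) { hit = true; break; }
        if (!hit) {                            /* page fault              */
            faults++;
            if (loaded < FRAMES)
                frames[loaded++] = refs[i];    /* free frame available    */
            else {
                frames[oldest] = refs[i];      /* evict the oldest page   */
                oldest = (oldest + 1) % FRAMES;
            }
        }
    }
    printf("page faults: %d\n", faults);       /* 7 faults for this string */
    return 0;
}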

LRU Approximation

• Additional-Reference-Bits

• Second-Chance

• Enhanced-Second-Chance

• Counting Algorithms

• Page Buffering

• Allocation of Frames

• Thrashing

• Other Considerations

• Demand Segmentation