Page 1:

Unit 3 : Memory Management & Virtual Memory

Course : Operating System Design

Page 2:

Syllabus

Swapping, Demand paging, a hybrid system with swapping and demand paging, memory management requirements, Memory partitioning, Paging, Segmentation, Security Issues, Hardware and control structures, Operating system software, Linux memory management, Windows 8 memory management, Android Memory Management

Page 3:

Page 4:

The storage system / memory system deals with how data is stored in a computer. It can be classified into primary & secondary memory.

Primary memory is also referred to as Main Memory.

Programs executed by the processor must be stored in main memory. Examples of main memory are RAM (Random Access Memory), cache memory, registers, etc.

Cache memory is used to store frequently used data between the processor & main memory.

Registers are another kind of memory which is part of the processor. Before data is processed by the processor, it is stored in the processor registers. Registers can be the accumulator, data register, counter register, base registers, etc.

Primary memory is volatile in nature, which means data is lost when the power supply is OFF. Primary memory is relatively small in size & is very costly.

Page 5:

The speed of primary memory is higher than that of secondary memory.

Secondary memory is used to store data permanently. It is also referred to as the Backing Store.

We are able to retrieve the contents of this type of device even after a power failure.

This is the reason why this type of memory is referred to as Non-Volatile Memory.

It is slow in speed of operation & comparatively less costly than primary memory.

A file system is used to organize the data on secondary devices.

Examples of secondary storage devices are Hard Disk (HDD), CD-ROM (Compact Disc Read Only Memory), DVD-ROM (Digital Video Disc Read Only Memory), Pen Drive (USB Drive), Floppy Drive, etc.

Page 6:

The next type of memory in the hierarchy is cache memory, which is less expensive & has higher capacity than registers.

It has a high speed of operation.

The next type of memory in the hierarchy is main memory (RAM), which is less costly than cache memory but more costly than secondary storage devices.

That's why the size of RAM is less than the size of secondary storage devices. Ex. a typical computer system has 2-4 GB of RAM, whereas the hard disk size varies between 320 GB & 1 TB.

Secondary storage devices are less costly & have higher capacity.

Tape backup is a kind of memory system that was used much earlier. It used magnetic tape to store the data, commonly in the format of cartridges & cassettes. Ex. audio cassettes, video cassettes.

Page 7:

Types of Memory

1. Primary Memory :- It is the main memory in the system. It is volatile in nature & can store data for temporary purposes only. It is small in size & more costly. The contents of memory can be retrieved only while the power is on; when the power goes off, the contents are lost. The processor can communicate directly with this kind of memory. Ex. RAM, cache, registers.

2. Secondary Memory :- These are the storage devices in the system. They are non-volatile & can store data permanently. We can fetch the data even after a power failure. We have to use a file system to use this kind of memory, like NTFS, FAT, ext, HFS, UFS, XFS, ReiserFS, etc. Examples are hard disk, CD-ROM, DVD, flash drives, memory sticks, etc.

3. Tertiary Memory :- It provides the third level of storage. It involves a robotic mechanism to mount & unmount removable mass storage media. It is used for accessing rarely used information. It is slow in speed & is used for extraordinary data storage. It is accessed automatically. Ex. tape libraries, optical jukeboxes.

Page 8:

Memory Units

How the size of a memory system is measured is described by Memory Units.

A memory system consists of a number of semiconductor memory cells.

One cell is capable of storing 1 bit of data. Data in a memory system is in binary format, i.e. in terms of 0's & 1's.

That's why the size of any memory system is a power of 2.

Ex. suppose the number of memory cells is 8; then the memory system can store 2^8, that is 256, distinct values.

It means the decimal numbers 0 to 255 can be stored & nothing beyond that.

Conversely, to represent 65,536 distinct values (0 to 65,535) we need 16 bits, that is 2 bytes, of memory.
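As a small illustration, here is a C sketch (the 256 & 65,536 counts are simply the examples above) that computes how many bits are needed to represent a given number of distinct values :-

#include <stdio.h>

/* Count how many bits are needed to represent `values` distinct numbers
   (e.g. 256 values -> 8 bits, 65536 values -> 16 bits = 2 bytes). */
static unsigned bits_needed(unsigned long long values)
{
    unsigned bits = 0;
    unsigned long long capacity = 1;          /* 2^bits */
    while (capacity < values) {
        capacity <<= 1;
        bits++;
    }
    return bits;
}

int main(void)
{
    printf("256 values need %u bits\n", bits_needed(256));      /* 8  */
    printf("65536 values need %u bits\n", bits_needed(65536));  /* 16 */
    return 0;
}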

Following table shows different memory units used.

Page 9:

1 bit = 0 or 1 (stores one binary digit)
1 nibble = 4 bits
1 byte = 8 bits
1024 bytes = 1 KB
1024 KB = 1 MB
1024 MB = 1 GB
1024 GB = 1 TB
1024 TB = 1 Petabyte
1024 Petabytes = 1 Exabyte
1024 Exabytes = 1 Zettabyte
1024 Zettabytes = 1 Yottabyte
1024 Yottabytes = 1 Brontobyte
1024 Brontobytes = 1 Geopbyte

Page 10:

Operating System Functions for Memory Mgt

The O.S. performs different activities to efficiently manage memory. Generally these activities are performed by the O.S. kernel :-

The kernel allocates & de-allocates memory to user programs.

It allocates memory for the O.S. during system startup.

Partitioning of memory is done by the O.S. kernel.

It keeps track of the memory locations in main memory, i.e. how many memory locations are used & how many are empty.

It implements the concept of virtual memory.

It protects memory locations from access by other processes.

It provides memory sharing for different programs.

It also manages swapping between main memory & disk.

Page 11:

Address Binding ………..

Generally a program is stored on secondary storage devices. When we want to execute the program, it must be transferred to primary memory.

During process execution, instructions & data are accessed from main memory.

When the process completes its execution, the memory space is released & declared as available.

A process can be stored in any part of memory, but generally the first few locations are reserved for the operating system & the remaining locations can be used for user processes.

A user program goes through several steps (some of which may be optional) before being executed.

During development of the source program, addresses are assigned which are called Symbolic Addresses.

Page 12:

……….. Address Binding

After compiling the program, the compiler binds the symbolic addresses to relocatable addresses.

The linker & loader in turn bind these relocatable addresses to absolute addresses, which are addresses in physical memory.

Binding of addresses can be done at different times: compile time, load time, or execution time.

The address binding scheme is shown in Figure 3.

Page 13:

Figure 3 : Address Binding. At compile time the compiler or assembler translates the source program into an object module; at load time the linkage editor combines it with other object modules & the system library into a load module; the loader then produces the in-memory binary image, with dynamically loaded system libraries attached by dynamic linking at execution time (run time).

Page 14:

……….. Address Binding

1. Compile Time Address Binding :- If we know in advance where the program will be stored in primary memory, we generate absolute addresses at compile time. For example, if we know in advance that the program will be stored starting at memory location 1000, then the generated compiler code will start from memory location 1000. In this case, if the starting location changes, we have to regenerate the code with the new starting memory location.

2. Load Time Address Binding :- If during compilation we don't know where the program will be stored, we generate relocatable code. In this case the absolute addresses are generated at the time of loading the program. If the starting address changes, we only need to change the relocation register value; there is no need to generate the code again.

3. Execution Time Address Binding :- If we are using the concept of virtual memory, then there are chances that the process will move from one region to another. In such situations absolute addresses are generated at run time.

Page 15:

Logical & Physical Address Space

1. Logical Address :- The address generated by the CPU or processor is called the Logical Address. It is also known as the Virtual Address. The range of addresses generated by the processor is known as the Logical Address Space or Virtual Address Space.

2. Physical Address :- Primary memory consists of different memory cells, & every cell is addressable. The actual address in primary memory, i.e. the address seen by the memory mgt unit, is known as the Physical Address. The range of physical addresses is known as the Physical Address Space.

Generally, compile time & load time address binding generate identical logical & physical addresses.

In execution time address binding, the logical & physical addresses differ.

If the physical & logical addresses are not the same, then the Memory Management Unit (MMU) converts the logical address into the physical address with the help of the relocation register.

Address conversion using the memory management unit is shown in Figure 4.

Page 16:

Figure 4 : Address Conversion Using MMU. The CPU generates logical address 500; the relocation register (value 4000) is added to it to form physical address 4500 in physical memory.

Page 17:

In the previous example the logical address generated by the CPU is 500 & the value of the relocation register is 4000.

At the time of accessing the actual physical address, the value of the relocation register is added to the logical address.

In this case, the logical address (500) is added to the relocation register (4000) to get the physical address (4500).

Physical Address = Logical Address + Relocation Register
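A minimal C sketch of this conversion is given below; the relocation value 4000 comes from the example above, while the limit value is an assumed figure added only to show the protection check :-

#include <stdio.h>
#include <stdlib.h>

#define RELOCATION_REGISTER 4000u   /* base of the process in physical memory */
#define LIMIT_REGISTER      5000u   /* size of the logical address space (assumed) */

/* Convert a CPU-generated logical address into a physical address,
   the way an MMU with a relocation (base) register would. */
static unsigned to_physical(unsigned logical)
{
    if (logical >= LIMIT_REGISTER) {            /* protection check */
        fprintf(stderr, "trap: address %u out of range\n", logical);
        exit(EXIT_FAILURE);
    }
    return logical + RELOCATION_REGISTER;       /* Physical = Logical + Relocation */
}

int main(void)
{
    printf("logical 500 -> physical %u\n", to_physical(500));   /* prints 4500 */
    return 0;
}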

Page 18:

Dynamic Loading

For a program to execute, it should be loaded into primary memory. In primary memory, all parts of the process (the program, data, stack, & all subroutines) should be present.

So memory can hold only processes whose size is smaller than main memory.

So the size of memory decides the size of the program. In short, we cannot load a program whose size is 500 MB if the size of memory is 300 MB.

We cannot load processes whose size exceeds the size of primary memory.

We can use the concept of dynamic loading to solve this problem.

Dynamic loading allows partial loading of processes. It means we can load only the part of the process which is required. We do not load a part of the process until it is required.

Simply put, dynamic loading means loading parts of the process as & when required.

Page 19:

For example, a program for arithmetic operations consists of a main program, a subroutine for addition, a subroutine for subtraction, a subroutine for division, & a subroutine for multiplication. Initially, we load only the main program & not all the subroutines.

Only if the main program calls the addition routine will that routine be loaded into primary memory; otherwise it stays on secondary storage.

Subroutines are loaded On Demand.

With the help of dynamic loading we can run a program whose size exceeds the size of memory. For example, if the size of the program is 500 MB & the size of memory is 300 MB, then we load some part of the program, say 150 MB, & the remaining part is stored on secondary storage. Only when we need another part of the program do we load it.

The advantage of dynamic loading is that unused routines never get loaded into primary memory.

No special h/w is required; instead it requires special code at the time of designing the s/w.

The O.S. provides different libraries to implement dynamic loading.

Page 20:

Dynamic Linking & Shared Libraries

Linking can be of two types, Static Linking & Dynamic Linking.

Static linking is supported by some O.S.

The concept of dynamic linking is very similar to dynamic loading. In this scheme modules are loaded on demand.

Only when there is a reference to some module is it loaded; otherwise it stays on the secondary storage device.
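As an illustration, on a POSIX system a shared module can be loaded on demand with dlopen/dlsym; the module name libmathops.so & its add function below are hypothetical, & this is only a sketch of the idea :-

/* Sketch of on-demand loading of a shared library on a POSIX system;
   "libmathops.so" & its "add" function are hypothetical.
   Compile with:  cc demo.c -ldl  */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* Nothing is loaded until this call; the module stays on disk
       (secondary storage) until it is actually referenced. */
    void *handle = dlopen("./libmathops.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    int (*add)(int, int) = (int (*)(int, int))dlsym(handle, "add");
    if (!add) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("2 + 3 = %d\n", add(2, 3));
    dlclose(handle);
    return 0;
}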

Page 21:

Page 22:

Memory Management is used to satisfy the following requirements :-

Relocation

Protection

Sharing

Logical Organization

Physical Organization

Relocation

In this era, we use multiprogramming O.S., so many processes are stored in memory at a time.

We cannot say at which location a process will be stored.

Generally processes are transferred from one location to another.

When we create a process & place it in memory at some location, we cannot say it will stay at the same location throughout its execution.

O.S. use the concept of virtual memory, wherein we need to swap in & swap out processes.

The memory management module takes care of process relocation.

Page 23:

Protection

In an O.S., protection can be at different levels.

Every process should be protected from other processes, & the O.S. should be protected from unauthorized access.

A process should not access the memory locations of other processes.

Generally, the O.S. allocates some memory locations to a process; the process should access only those allocated locations.

Checking is performed by hardware at run time, & invalid references generate an access violation interrupt, trap, or exception to the O.S.

Memory protection has to be enforced by the processor (hardware) rather than by the O.S. alone.

The memory management unit should provide the protection facility.

Page 24:

Sharing

Processes should be allowed to share memory.

Multiple processes running the same program may share the program code.

Memory sharing results in saving memory.

Sharing between processes should be provided by the memory management system.

Logical Organization

Processes in memory often occupy linear sequences of addresses.

Programs are organized into modules.

Memory management is responsible for this logical organization.

Modules can be written & compiled independently, with all references from one module to another resolved by the system at run time.

Protection can be given to different modules.

Modules can be shared among processes.

Page 25:

Physical Organization

Computer memory is divided into at least 2 parts : main memory & secondary memory.

Main memory provides fast access at relatively high cost & it is volatile.

Secondary memory is slower & cheaper than main memory & is non-volatile.

Many times we use main memory & secondary memory in coordination.

It is the job of the memory management unit to decide how to use the secondary memory, which part of secondary memory should be used, how to track it, etc.

Page 26:

Page 27:

Memory mgt means how the available memory will be used efficiently by the O.S. to execute user programs.

The O.S. kernel takes care of memory mgt.

Following are some of the memory mgt techniques :-

1. Memory Partitioning

2. Paging

3. Segmentation

4. Virtual Memory

These can be classified into contiguous & non-contiguous memory allocation :-

1. Contiguous memory allocation

A. Fixed Partitioning (Equal Size / Unequal Size)

B. Dynamic Partitioning

2. Non-Contiguous memory allocation

A. Paging

B. Segmentation

C. Virtual Memory

Page 28:

Page 29:

Main memory holds the O.S. & various user programs (processes).

Memory is divided into partitions, one for the O.S. & others for user processes.

The O.S. part cannot be used by user processes, & vice versa.

In contiguous memory allocation, a process is stored only in contiguous memory locations.

Memory partitioning can be fixed-size or dynamic partitioning.

1. Fixed Partitioning

This is a simple memory mgt technique.

In all memory management systems, we dedicate some part of memory to the O.S.; the rest of memory is available to user processes.

The simplest way is to partition memory into fixed-size partitions whose sizes cannot be changed.

There are two ways of fixed partitioning :- equal-size partitions & unequal-size (variable-size) partitions.

Page 30:

In equal-size partitioning, we divide memory into fixed, equal-sized partitions.

One partition holds at most one process.

Once all the partitions are full, the next process has to wait till some partition becomes empty.

This method is no longer used in modern computer systems, but it is very easy to implement.

In unequal-size partitioning, the size of every partition is different. The size of each partition is decided by the operating system.

Use of unequal-size partitions provides a degree of flexibility to fixed partitioning.

Page 31:

Figure 6 : Fixed Partitioning.

Figure 6.1 (Equal Size Partition) : the operating system occupies 100 KB & the remaining memory is divided into eight equal partitions of 100 KB each.

Figure 6.2 (Unequal Size Partition) : the operating system occupies 200 KB & the remaining memory is divided into partitions of 10 KB, 50 KB, 80 KB, 100 KB, 150 KB, 240 KB, 500 KB, & 700 KB.

Page 32:

Allocation of Memory with Fixed Size Partitions

With equal-size partitions, the placement of processes in memory is simple.

We can load the process into any available partition, as the size of all the partitions is equal.

It does not matter where we load the process.

If no partition is available to load the process, then that process needs to wait till some partition gets empty.

With unequal-size partitions, we use two methods.

One method is to assign each process to the smallest partition within which it will fit.

In this first method we maintain a separate queue for every partition.

The advantage of this approach is that processes are always assigned so as to minimize wasted memory within a partition.

So, ultimately it helps to minimize internal fragmentation.

In the other approach, we maintain a single queue for all processes. When we need to load a process, the smallest available partition that will hold the process is selected.

Page 33:

Disadvantages of Equal Size Partitions

A program may be too large to fit into a partition. In this case, the programmer must design the program with the use of overlays, so that only a portion of the program needs to be in main memory at any one time. When a module that is not present is needed, the user's program must load that module into the program's partition, overlaying whatever programs or data are there.

Another disadvantage is Internal Fragmentation: memory utilization is very poor, since even a small program occupies an entire partition. For example, there may be a program whose length is less than 20 Kbytes, yet it occupies a 100-Kbyte partition. This wasted space internal to a partition is called Internal Fragmentation.

Page 34:

2. Dynamic Partitioning

In this scheme, the partitions are of variable size.

Memory is allocated as per requirement.

As each process is allocated exactly the memory it requires, internal fragmentation is avoided.

Page 35:

Figure 7 : Dynamic Partitioning. A sequence of memory snapshots: initially only the O.S. resides in memory (no process); processes A, B, & C are loaded one after another; process B leaves & process D is loaded in its place; process A leaves & process E is loaded.

Page 36:

Initially, memory is empty; only the O.S. resides in main memory.

Now if we want to load processes A, B, & C into memory, they are loaded as shown in Figure 7.

After loading the first three processes, some memory remains at the end; this is called a Hole. It is too small to hold the next process, D.

After some time process B completes, so the O.S. removes it from primary memory.

Then process D is loaded, but it requires less space, so it creates a hole.

After some time A also completes its execution; the O.S. removes it from primary memory & loads process E.

Process E also creates a small hole.

During loading & unloading of processes, many holes are created in primary memory.

Page 37:

In dynamic partitioning, many memory holes are created; this problem is called External Fragmentation.

With external fragmentation, memory is available but it is not contiguous, because of which we cannot use it.

External fragmentation can be removed with the help of compaction, but compaction requires relocation of processes, & relocation of processes decreases the performance of the system.

Placement Algorithms in Dynamic Partitioning

When a number of holes are available, how do we allocate an available hole to the incoming process?

Allocating a hole to a process is done with the help of different placement algorithms. The algorithms are as follows :-

1. First Fit. 2. Best Fit. 3. Worst Fit. 4. Next Fit.

Page 38:

1. First Fit

In this algorithm we allocate the first hole which is big enough. We search for a hole from the beginning which is big enough to hold the process & stop searching as soon as we find one.

Ex. Given memory partitions of 101K, 501K, 201K, 301K, & 601K (in order), how are processes of size 213K, 418K, 113K, & 427K (in order) placed?

213K is put in the 501K partition.

418K is put in the 601K partition.

113K is put in the 288K partition (new partition 288K = 501K - 213K).

427K must wait, as no partition of the required size is available.

Page 39:

2. Best Fit

In this algorithm we allocate the smallest hole which is still able to hold the process completely. We search all the holes present in memory for the best hole. A good strategy is to sort the holes by size & then search for the best hole.

Ex. Given memory partitions of 101K, 501K, 201K, 301K, & 601K (in order), how are processes of size 213K, 418K, 113K, & 427K (in order) placed?

213K is put in the 301K partition.

418K is put in the 501K partition.

113K is put in the 201K partition.

427K is put in the 601K partition.

Page 40:

3. Worst Fit

In this algorithm we allocate the largest hole. We sort the holes & then search for the largest hole.

Ex. Given memory partitions of 101K, 501K, 201K, 301K, & 601K (in order), how are processes of size 213K, 418K, 113K, & 427K (in order) placed?

213K is put in the 601K partition.

418K is put in the 501K partition.

113K is put in the 388K partition (601K - 213K).

427K must wait, as no partition of the required size is available.

Page 41:

4. Next Fit

In this algorithm we keep track of where the last suitable hole was found. For the next process we start searching from the last placed hole & look for a hole which is big enough to hold it. A small sketch of all four placement algorithms is given below.
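The following C sketch only chooses a hole for a request; updating the chosen hole's size after allocation (as in the worked examples above) is omitted for brevity, & the hole sizes are the ones from those examples :-

#include <stdio.h>

#define NHOLES 5

/* Hole sizes (in K) from the examples above; a return of -1 means "no hole fits". */
static int holes[NHOLES] = {101, 501, 201, 301, 601};

static int first_fit(int request)
{
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= request)
            return i;                       /* first hole big enough */
    return -1;
}

static int best_fit(int request)
{
    int best = -1;
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
            best = i;                       /* smallest hole that still fits */
    return best;
}

static int worst_fit(int request)
{
    int worst = -1;
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
            worst = i;                      /* largest hole that fits */
    return worst;
}

static int next_fit(int request, int *last)  /* resume scanning at *last */
{
    for (int n = 0; n < NHOLES; n++) {
        int i = (*last + n) % NHOLES;
        if (holes[i] >= request) {
            *last = i;
            return i;
        }
    }
    return -1;
}

int main(void)
{
    int last = 0;
    printf("213K: first=%d best=%d worst=%d next=%d\n",
           first_fit(213), best_fit(213), worst_fit(213), next_fit(213, &last));
    return 0;
}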

Page 42:

Fragmentation

In the case of dynamic partitioning, fragmentation is the major problem.

We cannot use static partitioning, as we cannot predict the size of processes in advance, so static partitioning is not the solution.

We have to use dynamic partitioning, but it faces the problem of fragmentation.

Memory gets divided into many small blocks (holes). Those holes cannot be used to store a process, as they are not contiguous.

With fragmentation, memory gets wasted; roughly 10 to 20 % of memory may be wasted.

Fragmentation is of two types :-

1. Internal Fragmentation.

2. External Fragmentation.

Page 43:

1. Internal Fragmentation

It is the difference between the size of the process & the size of the partition.

Internal refers to unusable storage inside the memory partition, which is not used.

For example, if the size of the process is 400K & the size of the partition is 512K, then 112K of memory is simply wasted. This is an example of Internal Fragmentation.

Page 44:

Figure 3 :- Internal Fragmentation. Four partitions of 200K, 300K, 150K, & 1000K hold processes P1 (150K, 50K unused), P2 (280K, 20K unused), P3 (145K, 5K unused), & P4 (750K, 250K unused).

In the given example, in the first partition 50K gets wasted, as the size of the partition is 200K & the process requirement is just 150K (200 - 150). In the second partition 20K (300 - 280), in the third partition 5K (150 - 145), & in the fourth partition 250K (1000 - 750) get wasted.

Page 45:

2. External Fragmentation

In this type of fragmentation, memory is divided into a large number of small pieces (holes).

In this case free memory is available, but it is spread all over the primary memory.

Even though memory is available, we cannot allocate it, as it is not contiguous.

External refers to the unusable region which is outside the allocated regions.

Page 46:

Figure 4 :- External Fragmentation. Allocated partitions alternate with five free blocks of 300K each.

For example, in dynamic memory allocation, a block of 1500K might be requested. In the figure, the largest free block available is only 300K. Even though there are 5 free blocks of 300K, we cannot allocate them to the process, as the space is not contiguous. As shown in Figure 4, 5 blocks of 300K are available but cannot be used, as they are spread all over the primary memory. Memory is allocated to a process only if it is contiguous.

Page 47:

Figure 5 : Fragmentation. Used & free blocks scattered across memory illustrate external fragmentation, while the unused space inside an allocated partition illustrates internal fragmentation.

Page 48:

Page 49:

Paging is one of the memory management techniques.

In the previous memory management techniques, we had to allocate memory contiguously. If contiguous memory is not available, then it is not possible to allocate memory to that process.

To avoid the drawbacks of contiguous memory allocation (i.e. internal & external fragmentation) we use the concept of Paging.

Paging supports non-contiguous memory allocation without external fragmentation (only a small amount of internal fragmentation may remain in the last page of a process).

Paging involves many concepts like page, page table, frame, address conversion in paging, translation look-aside buffer, page protection, page sharing, etc.

Page 50:

Page

It is a fixed-size block of memory.

It is used in memory management & virtual memory.

A page is a collection of memory locations.

A process is generally divided into a number of pages.

Each page stores certain memory addresses.

The process is divided into pages, & that collection of pages is called the Logical Memory of the process.

Each page has a page number assigned by the operating system.

In the following example the process is divided into 3 pages.

The page size in this example is 5; it means every page is capable of storing 5 locations.

The first 5 locations can be considered as the first page, the next 5 locations as the second page, & the last 5 locations as the third page.

The page size is decided by the operating system.

Page sizes vary according to the O.S.; historically they ranged from 512 bytes upward, & 4 KB is common today.

Page 51:

Figure 12 : Logical Memory. The 15 instructions of the example program (START; MOV A,B; SUB B,A; JNZ Z; DIV A,C; MUL A,B; DIV B,C; MUL B,5; Z: MOV A,5; JNE X; MOV A,B; SUB B,A; JNZ Z; DIV A,C; X: END) occupy locations 0 to 14 & are grouped into pages 0, 1, & 2 of five locations each.

Page 52:

It is desirable to have a small page size to use memory efficiently.

Page sizes used by some systems are given in the following table :-

Computer                    Page Size
Atlas                       512 48-bit words
Honeywell – Multics         1024 36-bit words
IBM 370/XA & 370/ESA        4 Kbytes
VAX Family                  512 bytes
IBM AS/400                  512 bytes
DEC Alpha                   8 Kbytes
MIPS                        4 Kbytes to 16 Mbytes
UltraSPARC                  8 Kbytes to 4 Mbytes
Pentium                     4 Kbytes or 4 Mbytes
Intel Pentium               4 Kbytes to 256 Mbytes
Intel Core i7               4 Kbytes to 1 Gbytes

Page 53:

Page Frame

Main memory is divided into fixed-size frames.

One frame is capable of storing one page.

Generally the page size & frame size are the same.

A frame is nothing but a grouping of memory locations.

Each group of memory locations is identified by a Frame Number.

The O.S. also maintains information about free frames in a free frame list.

In the following example physical memory is divided into different frames.

The example program of Figure 12 needs 3 frames, one per page.

Each frame in the given example is capable of storing a page whose size is 5.

Each frame is given a number to identify it.

Page 54:

Page Table

The Page Table is a data structure used in paging memory management.

It is used for mapping between virtual & physical addresses.

It is useful for converting a virtual address into a physical address.

It stores the page number & the frame number of each page.

The page table stores the page information of a process; each process has one page table.

The page table stores which page is stored in which frame.

The page table has an entry for every page along with its corresponding frame number. From the page table we can calculate the starting address of the page.

Page tables can be of different types like inverted page tables, multilevel page tables, virtualized page tables, nested page tables, etc.

A typical page table is as follows :-

Page Number | Frame Number

Page 55:

Paging Model

One page contains a number of instructions.

One page frame is capable of storing one page. Generally, the size of a frame & the size of a page are the same.

If the size of a page is 512 Kb then the size of a frame will be 512 Kb only.

For example, suppose the size of a process is 1150 Kb & the page size is 512 Kb; then the process will be divided into 3 logical pages: the 1st page of 512 Kb, the 2nd page of 512 Kb, & the 3rd page of 126 Kb (512 + 512 + 126 = 1150 Kb).

As shown in Figure 7, the process is divided into 4 logical pages (page-0, page-1, page-2, & page-3).

Information about the pages of the process is stored in the page table along with the frame numbers.

When the processor wants to execute an instruction from the process it requests the page number & the offset.

From the page number we can calculate the starting address of the page, & from the offset we can locate the instruction within the page.

Page 56:

Figure 7 : Paging Model. Logical memory consists of pages 0 to 3; the page table maps page 0 to frame 2, page 1 to frame 5, page 2 to frame 8, & page 3 to frame 10 of physical memory (frames 0 to 10); frames 0, 1, 3, 4, 6, 7, & 9 remain on the free frame list.

Page 57:

Address Conversion in Paging

“ Address conversion means that the processor generates a logical address, which must be converted into a physical address to access the appropriate memory locations. ”

The CPU issues a logical address which is divided into 2 parts, the page number & the offset.

With the help of the page number we can find, from the page table, the frame & hence the starting address of the page.

The offset acts as the location within the page, as one page contains a number of locations.

The starting address of the page, obtained via the page table, is added to the offset to get the effective address (i.e. the physical address).

p = Page Number, d = Offset, f = Frame Number (which gives the starting address of the page).

Address conversion using paging is shown in Figure 2.

Page 58:

Figure 2 : Address Conversion in Paging. The CPU generates a logical address (p, d); p indexes the page table to obtain the frame number f, & f combined with the offset d forms the physical address used to access physical memory.

Page 59:

We take an example of address conversion.

The size of a page is 5 bytes, meaning it contains 5 locations. The size of a page frame is also 5 bytes, meaning one frame stores one page.

One location stores one instruction.

Suppose we want to execute the following piece of program :-

START
MOV A, B
SUB B, A
JNZ Z
DIV A, C
MUL A, B
DIV B, C
MUL B, 5
Z : MOV A, 5
JNE X
MOV A, B
SUB B, A
JNZ Z
DIV A, C
X : END

Page 60:

The program is divided into 3 pages. One page contains 5 instructions, as the size of a page is 5 bytes (assume 1 instruction is stored in 1 byte). The 15 instructions are therefore stored in 3 pages.

Page 0 is stored in frame no. 1, page 1 in frame no. 2, & page 2 in frame no. 5.

Now when the processor executes the program, it executes the instructions one by one.

Suppose the processor wants to execute the instruction START, which is stored in frame 1 at physical location 5. The processor requests page 0, offset 0, as logically this instruction is stored at the 0th location of page 0.

The actual address is generated by this formula :-

Actual Address = (Frame Number * Page Size) + Offset

Here that gives (1 * 5) + 0 = 5.

In the paging method, we have no external fragmentation. Any free frame can be allocated to a process that needs it.
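A small C sketch of this translation, using the page-to-frame mapping of the example above (page 0 to frame 1, page 1 to frame 2, page 2 to frame 5) :-

#include <stdio.h>

#define PAGE_SIZE 5                       /* 5 locations per page, as above */

/* Page table for the 3-page example program: page i is stored in page_table[i]. */
static const unsigned page_table[] = {1, 2, 5};

/* Translate (page number, offset) into a physical address:
   Actual Address = (Frame Number * Page Size) + Offset */
static unsigned translate(unsigned page, unsigned offset)
{
    unsigned frame = page_table[page];
    return frame * PAGE_SIZE + offset;
}

int main(void)
{
    /* START is at page 0, offset 0 -> frame 1 -> physical address 5. */
    printf("page 0, offset 0 -> physical %u\n", translate(0, 0));
    /* Z: MOV A,5 is at page 1, offset 3 -> frame 2 -> physical address 13. */
    printf("page 1, offset 3 -> physical %u\n", translate(1, 3));
    return 0;
}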

Page 61:

However, we may have some internal fragmentation. Notice that frames are allocated as units.

If the memory requirements of a process do not happen to coincide with page boundaries, the last frame allocated may not be completely full.

For example, suppose the page size is 256 bytes & the process size is 1098 bytes.

The first 256 bytes will be stored on the 1st page, the next 256 bytes on the 2nd page, the next 256 bytes on the 3rd page, the next 256 bytes on the 4th page, & the remaining 74 bytes on the 5th page.

On the 5th page 182 bytes are wasted, as only 74 bytes are used.
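The page count & the wasted space can be computed directly; a small C sketch using the numbers from this example :-

#include <stdio.h>

/* For a process of `size` bytes & a page size of `page` bytes, compute how
   many pages are needed & how many bytes of the last page are wasted
   (internal fragmentation). */
int main(void)
{
    unsigned size = 1098, page = 256;               /* example from the text */
    unsigned pages = (size + page - 1) / page;      /* ceiling division */
    unsigned wasted = pages * page - size;
    printf("%u pages needed, %u bytes wasted in the last page\n", pages, wasted);
    /* prints: 5 pages needed, 182 bytes wasted in the last page */
    return 0;
}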

A small page size minimizes internal fragmentation, but the number of page-table entries increases.

Page sizes have been getting larger & larger.

This is because of the increase in the size of processes, data, main memory, etc.

Typically, page size today is 4 KB or 8 KB.

Some operating systems support multiple page sizes; the Solaris operating system uses 8 KB & 4 MB page sizes.

Usually, each page-table entry is 4 bytes long, but that size can vary as well.

Whenever we need to execute a process, we need to calculate its size & the number of pages needed to store it. If the process needs n pages then we must have n free frames.

Page 62:

Translation Look-Aside Buffer (TLB)

It is also called Associative Mapping / the Page Table Cache.

It is used to store the most frequently accessed page-table entries; it acts like a cache.

It is nothing but a special, small & fast lookup hardware cache.

In the process of paging we need to refer to the page table many times. The page table contains many entries, & searching for the desired page in the page table takes time.

We can save this search time by adding the translation look-aside buffer.

Lookup in the TLB is faster than in the page table, as the TLB contains far fewer entries than the page table.

The number of page entries in the TLB depends on the operating system, but generally it is in the range of 64 to 1,024.

Whenever the CPU generates a logical address, we need to convert it to a physical address.

For generating the physical address we need the page number & the frame number. If the required page is present in the TLB then the physical address is calculated immediately, without referring to the actual page table.

Page 63:

We refer to the page table only if the page entry is not present in the TLB. It means we first check the TLB for the presence of the page, & only if it is not present do we go to the page table.

If the TLB is already full of entries, the operating system must select one for replacement. Replacement policies range from least recently used (LRU) to random.

If the required page is found in the TLB then it is called a TLB Hit.

If the required page is not found in the TLB then it is called a TLB Miss.

The percentage of times that a particular page number is found in the TLB is called the Hit Ratio.

An 80-percent hit ratio means that we find the desired page number in the TLB 80 percent of the time.
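For example, assuming (purely for illustration) that a TLB lookup takes 20 ns & a memory access takes 100 ns, the effective access time works out as :-

Effective Access Time = 0.80 * (20 + 100) + 0.20 * (20 + 100 + 100)
                      = 96 + 44 = 140 ns

So, under these assumed timings, an 80-percent hit ratio slows the average memory access from 100 ns to about 140 ns, since a TLB miss costs one extra memory access to read the page table.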

The operation of the translation look-aside buffer is shown in Figure 4.

Page 64:

Figure 4 : TLB. The CPU generates a logical address (p, d); if page number p is found in the TLB (TLB hit), the frame number f is obtained directly; otherwise (TLB miss) the page table is consulted; f together with offset d forms the physical address used to access physical memory.

Page 65:

Protection

Protection in paging is very important.

Page-level protection means only the intended process will access its pages; one process cannot access the pages of another process.

We can achieve this by adding a protection bit to each frame. This bit is kept in the page table.

The bit value signifies whether the given page is read-write or read-only.

Every reference goes through the page table to get the correct frame number, & at the same time the protection bit can be checked.

If an attempt is made to write to a read-only page, it causes a hardware trap to the operating system.

We can build hardware to provide read-only, read-write, or execute-only protection, or, by providing separate protection bits for each kind of access, we can allow any combination of these accesses.

We can also add a valid-invalid bit to each page-table entry.

When this bit is set to "valid" & the page is in the range of the logical address space of the process, we consider the page a valid page.

Page 66:

If the page is not in the logical address space of the process then the bit is set to invalid. Illegal addresses are trapped by use of the valid-invalid bit.

The operating system sets this bit for each page to allow or disallow access to the page.

Suppose, for example, in a system with a 14-bit address space (0 to 16383), we have a program that should use only addresses 0 to 10468. Given a page size of 2 KB, we get the situation shown in Figure 5. Addresses in pages 0, 1, 2, 3, 4, & 5 are mapped normally through the page table.

If we try to generate an address in page 6 or 7, we find that the valid-invalid bit is set to invalid, & the computer generates a trap (error).

Page 67:

Figure 5 : Page Protection. Pages 0 to 5 of the program (addresses 0 up to 10,468; the last valid page ends at address 12,287) have their valid-invalid bit set to v in the page table & are mapped to frames in physical memory; references to pages 6 & 7 find the bit set to invalid & are trapped.

Page 68:

Page 69:

Segmentation is another method of memory management in the O.S.

Most of today's O.S. use the concept of segmentation.

Generally we consider a program as a set of methods, procedures, or functions. The user's view of the program is shown in Figure 8.

Figure 8 : User's View of Program. The program is seen as a collection of parts: the main function, subroutine 1, subroutine 2, the symbol table, & the stack.

Page 70:

In segmentation, we divide the program into different segments.

Some segments contain instructions, while others contain data & other information.

It is not necessary to store the segments one after another; no ordering is required among segments.

The segments are of variable length. The length is defined by the purpose of the segment in the program.

An element within a segment is identified by its offset from the beginning of the segment.

The logical address space is nothing but the collection of segments.

At the time of requesting any instruction from a segment we have to request it in terms of the segment name & the offset within the segment.

The program segments are shown in Figure 9.

Page 71:

Figure 9 : Program Segments. The main function, subroutine 1, subroutine 2, the symbol table, & the stack are placed in separate segments (segment 1 to segment 5).

Page 72:

Segment Table

Segment No.   Base   Limit
1             4500   1000
2             7400   200
3             3000   1500
4             6000   500
5             1000   1100

Figure 10 : Example of Segmentation. The segment table & the corresponding placement in physical memory: segment 5 occupies 1000 to 2100, segment 3 occupies 3000 to 4500, segment 1 occupies 4500 to 5500, segment 4 occupies 6000 to 6500, & segment 2 starts at 7400.

Page 73:

If we compile a C program it gets divided into the following segments :-

Global Variables

Code Segment

Heap Segment

Stack Segment

Standard C Library

In assembly language programming also we divide the program into different segments like the code segment, data segment, stack segment, extra segment, etc.

Segmentation Technique

In the segmentation strategy we divide the program into different segments.

Generally, dividing the program into segments is done by the compiler.

Each segment has a name.

For simplicity, we number the segments instead of referring to them by name.

Thus, a logical address consists of two parts, the segment number & the offset.

Page 74:

For every process we create a segment table, like the page table which we create in the paging method.

The segment table stores the starting address (base) & the limit of each segment.

The starting address (base) means the location from which the segment starts.

The limit indicates the range of addresses valid from the base address; beyond it the processor is not allowed to access addresses.

The logical address consists of two parts, a segment number s & an offset o.

Segment number s is used as an index into the segment table.

Offset o of the logical address must be between 0 & the segment limit.

If it is not, we trap to the operating system.

When the offset is legal, it is added to the segment base to produce the address in physical memory of the desired byte.

Page 75:

Figure 11 : Segmentation. The CPU generates a logical address (s, o); s indexes the segment table to obtain the limit & base of the segment; if o < limit, the base is added to o to form the physical address, otherwise an error (trap to the O.S.) is raised.

Page 76:

The use of segmentation is shown in Figure 11.

The segment table is thus an array of base-limit register pairs.

From the base we get the starting physical address of the segment, & by using the offset we get the address within the segment.

So the effective address can be calculated as follows :-

Effective Address = Segment Base (Starting Address of the Segment) + Offset
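A minimal C sketch of this limit check & translation, using the base & limit values from the segment table of Figure 10 :-

#include <stdio.h>
#include <stdlib.h>

/* Segment table from Figure 10: base & limit of each segment (1..5). */
struct seg { unsigned base, limit; };
static const struct seg seg_table[] = {
    {0, 0},          /* unused slot so that segment numbers start at 1 */
    {4500, 1000},    /* segment 1 */
    {7400, 200},     /* segment 2 */
    {3000, 1500},    /* segment 3 */
    {6000, 500},     /* segment 4 */
    {1000, 1100},    /* segment 5 */
};

/* Translate (segment number, offset) into a physical address, trapping
   when the offset is outside the segment limit. */
static unsigned translate(unsigned s, unsigned offset)
{
    if (offset >= seg_table[s].limit) {
        fprintf(stderr, "trap: offset %u beyond limit of segment %u\n", offset, s);
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + offset;   /* Effective Address = Base + Offset */
}

int main(void)
{
    printf("segment 3, offset 100 -> physical %u\n", translate(3, 100));  /* 3100 */
    return 0;
}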

Page 77:

Page 78:

Page 79:

During the execution of a program it should be brought into primary memory.

Primary memory is limited in size; typically it is in the range of 512 MB to 4 GB.

Swapping is the mechanism in which we temporarily swap processes out to secondary storage devices.

Processes can be brought back into main memory whenever required.

The act of moving a process out to secondary storage is called Swap-out, whereas bringing the process back into main memory is called Swap-in, & this entire mechanism is called Swapping.

For example, suppose we are executing processes P1, P2, & P3. While the operating system is executing process P1 there is no need for process P3, so we can swap the third process out to the backing store (a secondary storage device).

When we want to execute process P3, we swap the process back into primary memory & continue its execution.

Page 80:

In the case of the Round Robin scheduling algorithm we can use the concept of swapping. When the time slice for a process finishes, we swap the process out to the secondary storage device.

When the scheduler chooses to execute the process for its next time slice, we can swap the necessary process back into primary memory.

The memory manager takes care of swap-in & swap-out of processes between primary memory & secondary memory.

In some O.S., like UNIX, a separate swap space is reserved for swapping; other O.S. manage the swap space internally.

At the time of swapping out a process we must consider different factors, such as the time after which the process will be needed for execution.

In priority scheduling, we often swap out a low-priority process to execute higher-priority processes. After execution of the higher-priority processes we swap the low-priority process back in for execution.

The swapping method requires a backing store.

Page 81:

It is nothing but a secondary storage device.

It must be a fast disk.

The size of the backing store should be such that it is able to store all the processes.

When the scheduler decides to execute some process, it calls the dispatcher.

The dispatcher checks whether the process is in memory or not.

If the process is in memory then it is executed by the processor.

If the process is not in main memory then it is brought into memory by a swap-in & then executed by the processor.

Page 82:

Figure 12 : Process of Swapping. Main memory holds the operating system & the user space; process P1 is swapped out (1) to the backing store & process P2 is swapped in (2) to main memory.


“A Map” is an array where each entry consists of an address of an allocatable resource & the number of resource units available there; the kernel interprets the address & units according to the type of map.

Initially, a map consists of one entry that indicates the address & total number of resources.

For instance, the kernel treats each unit of the swap map as a group of disk blocks, & it treats the address as a block offset from the beginning of the swap area.

Figure 1 illustrates an initial swap map that consists of 10,000 blocks starting at address 1.

Address Units

1 10000

Figure 1 : Initial Swap Map


algorithm malloc    /* algorithm for allocating map space */
input  : (1) map address            /* indicates which map to use */
         (2) requested number of units
output : address, if successful
         0, otherwise
{
    for (every map entry)
    {
        if (current map entry can fit requested units)
        {
            if (requested units == number of units in entry)
                delete entry from map;
            else
                adjust start address of entry;
            return (original address of entry);
        }
    }
    return (0);
}

Algorithm for Allocating Space from Maps


As the kernel allocates & frees resources, it updates the map so that it continues to contain accurate information about free resources.

The malloc algorithm above allocates space from maps.

The kernel searches the map for the first entry that contains enough space to accommodate the request.

If the request consumes all the resources of the map entry, the kernel removes the entry from the array & compresses the map (that is, the map has one fewer entry).

Otherwise, it adjusts the address & unit fields of the entry according to the amount of resources allocated.
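To make the pseudocode concrete, here is a minimal C sketch of the same first-fit map allocation, assuming the map is an array of (address, units) entries terminated by an entry with 0 units; the names map_entry, NMAP & malloc_map are illustrative, not actual kernel identifiers.

#include <stdio.h>

#define NMAP 50              /* assumed maximum number of map entries */

struct map_entry {
    long addr;               /* first free unit in this run           */
    long units;              /* number of free units in this run      */
};

/* Allocate `units` contiguous units from the map (first fit).
 * Returns the starting address, or 0 if no entry is large enough.    */
long malloc_map(struct map_entry *map, long units)
{
    for (struct map_entry *bp = map; bp->units != 0; bp++) {
        if (bp->units >= units) {
            long addr = bp->addr;
            if (bp->units == units) {
                /* entry fully consumed : compress the map */
                do { *bp = *(bp + 1); bp++; } while (bp->units != 0);
            } else {
                /* adjust the start address & size of the entry */
                bp->addr  += units;
                bp->units -= units;
            }
            return addr;
        }
    }
    return 0;
}

int main(void)
{
    struct map_entry swapmap[NMAP] = { { 1, 10000 } };      /* Figure 1    */
    printf("%ld\n", malloc_map(swapmap, 100));              /* prints 1    */
    printf("%ld\n", malloc_map(swapmap, 50));               /* prints 101  */
    printf("%ld\n", malloc_map(swapmap, 100));              /* prints 151  */
    printf("%ld %ld\n", swapmap[0].addr, swapmap[0].units); /* 251 9750    */
    return 0;
}

Running the sketch reproduces the sequence of allocations shown in Figure 3 below.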


(a)  Address : 1      Units : 10000
(b)  Address : 101    Units : 9900
(c)  Address : 151    Units : 9850
(d)  Address : 251    Units : 9750

Figure 3 : Allocating Swap Space

Figure 3 shows the sequence of swap map configurations after allocating 100 units, then 50 units, & then 100 units again. The kernel adjusts the swap map to show that the first 250 units have been allocated & that it now contains 9750 free units starting at address 251.


When freeing resources, the kernel finds their proper position in the map by address. Three cases are possible (a sketch of this freeing logic follows the example below) :-

1. The freed resources completely fill a hole in the map : they are contiguous to the entries on both sides, so the kernel combines them & the two existing entries into a single map entry.

2. The freed resources partially fill a hole in the map : they are contiguous to only one neighbouring entry, so the kernel adjusts the address & units fields of that entry.

3. The freed resources partially fill a hole but are not contiguous to any resources in the map. The kernel creates a new entry for the map & inserts it in the proper position.

Returning to the previous example, if the kernel frees 50 units of the swap resource starting at address 101, the swap map contains a new entry for the freed resources, since the returned resources are not contiguous to existing entries in the map.

If the kernel then frees 100 units of the swap resource starting at address 1, it adjusts the first entry of the swap map since the freed resources are contiguous to those in the first entry.
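As a hedged companion to the allocation sketch above, the following C fragment illustrates the freeing logic for the three cases just listed; free_map & the zero-terminated entry array are the same illustrative assumptions as before, not actual kernel code.

struct map_entry { long addr; long units; };   /* as in the allocation sketch */

/* Return `units` units starting at `addr` to the map, coalescing with
 * neighbouring free runs where possible (the three cases above).        */
void free_map(struct map_entry *map, long addr, long units)
{
    struct map_entry *bp = map;

    /* find the first entry that starts beyond the freed run */
    while (bp->units != 0 && bp->addr <= addr)
        bp++;
    struct map_entry *prev = (bp == map) ? 0 : bp - 1;

    if (prev && prev->addr + prev->units == addr) {
        prev->units += units;                  /* contiguous on the left   */
        if (bp->units != 0 && addr + units == bp->addr) {
            prev->units += bp->units;          /* case 1 : fills the hole  */
            do { *bp = *(bp + 1); bp++; } while (bp->units != 0);
        }
        return;
    }

    if (bp->units != 0 && addr + units == bp->addr) {
        bp->addr   = addr;                     /* case 2 : contiguous on   */
        bp->units += units;                    /* the right side only      */
        return;
    }

    /* case 3 : not contiguous to anything - insert a new entry here */
    struct map_entry saved = *bp, ins = { addr, units };
    *bp = ins;
    do {
        bp++;
        struct map_entry tmp = *bp;            /* shift the rest of the    */
        *bp = saved;                           /* map up by one slot       */
        saved = tmp;
    } while (saved.units != 0);
    *(bp + 1) = saved;                         /* keep zero-unit terminator */
}

Freeing 50 units at address 101 exercises case 3, & then freeing 100 units at address 1 exercises case 2, reproducing Figure 4.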

Figure 4 shows the sequence of swap map configurations corresponding to these events.


(a)  Address : 251    Units : 9750

(b)  Address : 101    Units : 50
     Address : 251    Units : 9750

(c)  Address : 1      Units : 150
     Address : 251    Units : 9750

Figure 4 : Freeing Swap Space

Suppose the kernel now requests 200 units of swap space. Because the first entry in the swap map only contains 150 units, the kernel satisfies the request from the second entry (Figure 5).

(a)  Address : 1      Units : 150
     Address : 251    Units : 9750

(b)  Address : 1      Units : 150
     Address : 451    Units : 9550

Figure 5 : Allocating Swap Space from the Second Entry in the Map

The kernel chooses the swap device in a round robin scheme, provided it contains enough contiguous space.

Administrators can create & remove swap devices dynamically.

If a swap device is being removed, the kernel does not swap data to it; as data is swapped from it, it empties out until it is free & can be removed.

Swapping Processes Out

The kernel swaps a process out if it needs space in memory, which may result from any of the following :-

1. The fork system call must allocate space for a child process.

2. The brk system call increases the size of a process.

3. A process becomes larger by the natural growth of its stack.

4. The kernel wants to free space in memory for processes it had previously swapped out & should now swap in.


When the kernel decides that a process is eligible for swapping from main memory, it decrements the reference count of each region in the process & swaps the region out if its reference count drops to 0.

The kernel allocates space on a swap device & locks the process in memory (for cases 1 - 3), preventing the swapper from swapping it out while the current swap operation is in progress.

The kernel saves the swap address of the region in the region table entry.


Demand Paging (DP) is a part of virtual memory management.

DP means loading pages into primary memory on demand.

A program can be loaded by two methods.

One method is to load the entire program, so that the whole program is in memory during its entire execution. In this case some unused modules are unnecessarily loaded into memory, resulting in wasted memory.

The other method is to load the main part of the program first & then load the remaining parts as required. In this case unused modules are never loaded into memory & only the required modules are loaded, saving memory space.

Using demand paging, pages are loaded into memory only when they are required. Pages that are never accessed are never loaded into physical memory.

The concept of a DP system is similar to a paging system with swapping, which is shown in Figure 10.

Page 94: Course : Operating System Design · Syllabus Swapping, Demand paging, a hybrid System with swapping and demand paging, memory management requirements, Memory partitioning, Paging,

Program A is swapped out of main memory to the backing store while Program B is swapped in.

Figure 10 : Demand Paging

Page 95: Course : Operating System Design · Syllabus Swapping, Demand paging, a hybrid System with swapping and demand paging, memory management requirements, Memory partitioning, Paging,

Details :-

Machines whose memory architecture is based on pages & whose CPU has restartable instructions can support a kernel that implements a demand paging algorithm, swapping pages of memory between main memory & a swap device. (Restartable instructions : if a machine executes part of an instruction & incurs a page fault, the CPU must restart the instruction after handling the fault, because intermediate computations done before the page fault may have been lost.)

For instance, machines that contain 1 or 2 megabytes of physical memory can execute processes whose sizes are 4 or 5 megabytes.

The kernel still imposes a limit on the virtual size of a process, dependent on the amount of virtual memory the machine can address.

DP is transparent to user programs except for the virtual size permissible to a process.

Page 96: Course : Operating System Design · Syllabus Swapping, Demand paging, a hybrid System with swapping and demand paging, memory management requirements, Memory partitioning, Paging,

When a process accesses a page that is not part of its working set, it incurs a Validity Page Fault. The kernel suspends execution of the process until it reads the page into memory & makes it accessible to the process.

When the page is loaded in memory, the process restarts the instruction it was executing when it incurred the fault.

Data Structures for Demand Paging

The kernel has 4 data structures to support low-level memory management functions & demand paging :- Page table entries, Disk block descriptors, the Page frame data table (called pfdata for short), & the Swap-use table.

The kernel allocates space for the pfdata table once for the lifetime of the system but allocates memory pages for the other structures dynamically.

Each page table entry contains the following bit fields to support demand paging :- Valid, Reference, Modify, Copy on write, Age.
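As a purely illustrative example, a page table entry carrying these bit fields might be declared in C as follows; the field widths & the name pte_t are assumptions, not the actual UNIX layout.

#include <stdint.h>

/* Illustrative page table entry carrying the demand-paging bit fields. */
typedef struct {
    uint32_t frame  : 20;   /* physical page frame number                    */
    uint32_t valid  : 1;    /* page is legal & currently in main memory      */
    uint32_t ref    : 1;    /* set when the page is referenced               */
    uint32_t modify : 1;    /* set when the page has been written to         */
    uint32_t cow    : 1;    /* copy on write : duplicate the page on a write */
    uint32_t age    : 8;    /* software-maintained age for page replacement  */
} pte_t;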


Virtual memory is a memory management scheme, like paging & segmentation.

Virtual memory makes use of paging & segmentation for its implementation.

In this scheme, only a portion of a program is loaded in primary memory & the rest of the program is stored on a secondary storage device.

Virtual memory allows execution of partially loaded processes.

As a result, the sum of the virtual address spaces of the processes can exceed the capacity of the available physical memory.

We can consider virtual memory to be nothing but the combination of primary & secondary storage.

The only necessary condition is that primary memory must be able to hold a minimum amount of the address space of each active process.

With the help of virtual memory we are able to utilize primary memory to the fullest.

Which part of a process should be brought into primary memory, & where to place it, is decided by the O.S.

Virtual memory system provides automatic migration of portions of address spaces between secondary & primary memory.

Details of VM mgt are generally transparent to programmers.


Degree of multiprogramming can be increased with the help of virtual memory.

A drawback of virtual memory is that the speed of execution decreases.

Speed decreases because pages are migrated between primary & secondary memory & vice versa.

Instruction execution can be completed only if all code, data & stack locations that the instruction references reside in physical memory.

We have to make sure that all the proper parts of the program are in main memory at the right time.

VM is implemented using DP. It can also be implemented using demand segmentation.


Need of Page Replacement

The VM concept is implemented with the help of DP.

In a VM system, when we want to execute processes, we calculate the requirement of all the processes & then consider the number of available page frames in the system.

With the help of this data we decide how many frames should be allocated to each process.

While the processes are executing, a page fault occurs in one of them.

The O.S. determines where this page resides on the secondary storage device.

But the O.S. finds that no frame is free in primary memory.

When no frame is available the O.S. has several options. One option is to terminate a process which is currently not executing, so that its frames become available & the desired page can be loaded into memory. But because of this, system utilization & throughput decrease.

Another option is to swap out a process which is currently not executing, but this reduces the degree of multiprogramming.

So the best option is to use a PAGE REPLACEMENT POLICY.


Many page replacement algorithms are in use. The following are some of the page replacement algorithms :-

1. FIFO Page Replacement.

2. Optimal Page Replacement.

3. LRU Page Replacement.

4. Counting Based Page Replacement.

1. FIFO Page Replacement

FIFO = First In First Out.

FIFO acts like a simple queue.

FIFO means the page which arrived first is chosen for replacement. The need for replacement arises when we do not have a free frame for the requested page.

To implement this algorithm we need to record, for every page, when it arrived in primary memory.

One option is to maintain a timer for every page. When the page arrives in primary memory we start its timer & go on incrementing it after every second.

At the time of replacing a page we check the times of all the pages & choose the page which has the largest time, i.e., the oldest page.


Another option is simply to use a FIFO queue for all pages. When a page arrives in primary memory we add it at the end of the FIFO queue. At the time of replacing a page, we replace the page which is at the head of the queue.

For the implementation we need to assume certain things. We need to consider the number of page frames allocated to the process.

We also have to consider the reference string, i.e., the sequence in which pages will be accessed by the processor.

Example 1 :-

Consider that the number of frames allocated to the process is 3, meaning the process will be executed within 3 frames only. At most 3 of its pages will be in primary memory; if the processor wants any more pages, one of the frames must be replaced with the incoming page.

Reference String 2 3 2 1 5 2 4 5 3 2 5 2

This string indicates that the process will execute instructions from page no. 2 first, then page no. 3, then page no. 2 again, & so on. The pages will be accessed by the processor in the sequence 2 3 2 1 5 2 4 5 3 2 5 2.


Initially, all three frames are empty. Filling the empty frames with the first three distinct pages (2, 3 & 1) causes three page faults.

When the processor starts the program, whose first instruction is stored in page number 2, it loads that page into an available page frame. The first page fault occurs.

Next the processor wants to execute an instruction from page number 3, which is not in primary memory, so it loads that page into the next frame. The second page fault occurs.

Next the processor wants to execute an instruction from page number 2, which is already present in memory, so it executes the instruction from that page & no page fault occurs.

Reference String :  2   3   2   1   5   2   4   5   3   2   5   2
Frame 1          :  2   2   2   2   5   5   5   5   3   3   3   3
Frame 2          :      3   3   3   3   2   2   2   2   2   5   5
Frame 3          :              1   1   1   4   4   4   4   4   2
Page Fault       :  F   F       F   F   F   F       F       F   F

F = Page Fault Occurs (9 page faults in total)


Next the processor wants to execute an instruction from page number 1, but this page is not present in memory, so it loads the page & executes the instruction; a page fault occurs.

Next the processor wants to execute an instruction from page number 5, but this page is not present, so it must be brought into primary memory. All the frames are full, so we must replace one of them. As per the FIFO policy, we replace page 2, as it was brought into memory first.

In the same manner, the remaining pages are loaded & the process completes its execution.

The FIFO replacement algorithm is very easy to understand & implement.

However, most of the time the performance of this algorithm is not good.

The FIFO algorithm faces one more problem, which is Belady's Anomaly. Generally, if we increase the number of page frames, the number of page faults should decrease. But for some reference strings, as we increase the number of page frames the number of page faults also increases. This unexpected behavior of the algorithm is known as Belady's Anomaly.

For example, for given reference string

1 2 3 4 1 2 5 1 2 3 4 5


For the above string, with 2 page frames the number of page faults is 12, with 3 frames it is 9, with 4 frames it is 10, with 5 frames it is 5, & so on.

So even though we increased the number of page frames from 3 to 4, the number of page faults increased instead of decreasing. This situation is called "Belady's Anomaly".
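To make this concrete, the following hedged C sketch counts FIFO page faults for a given reference string & frame count (fifo_faults is an illustrative name, not from any library); running it on the string above with 3 & then 4 frames reproduces the 9 & 10 faults, i.e., Belady's Anomaly.

#include <stdio.h>
#include <string.h>

/* Count page faults for FIFO replacement with `nframes` frames. */
int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16], next = 0, faults = 0;
    memset(frames, -1, sizeof frames);         /* -1 = empty frame     */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < nframes; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = refs[i];            /* victim = oldest slot */
            next = (next + 1) % nframes;       /* maintain FIFO order  */
            faults++;
        }
    }
    return faults;
}

int main(void)
{
    int s[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    int n = sizeof s / sizeof s[0];
    printf("3 frames : %d faults\n", fifo_faults(s, n, 3));   /* 9  */
    printf("4 frames : %d faults\n", fifo_faults(s, n, 4));   /* 10 */
    return 0;
}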

2. Optimal Page Replacement Algorithm

To avoid the drawbacks of FIFO we use another type of algorithm.

Now we use the Optimal Page Replacement Algorithm (OPRA).

This replacement algorithm has the lowest page fault rate among all the algorithms.

It never suffers from problems like Belady's Anomaly.

The strategy for OPRA is : "Replace the page that will not be used for the longest period of time."

We use the same reference string as for the previous algorithm.


Example 2 :- Reference string for OPRA = 2 3 2 1 5 2 4 5 3 2 5 2.

The number of page frames is 3. Now we will see how the replacement of pages is done with the help of OPRA.

In this algorithm, the first three page faults occur by default while filling the empty frames.

When we need to execute an instruction from page number 5, which is not present in memory, we must bring that page in, so a page replacement is necessary. In this case, we replace the page which will not be referenced for the longest time in the reference string. At this stage, page 2 will be referenced again after just 1 reference, page 3 only after 4 more references, & page 1 will never be referenced again, so we replace page number 1.

We repeat the same process for remaining pages also.

Reference String :  2   3   2   1   5   2   4   5   3   2   5   2
Frame 1          :  2   2   2   2   2   2   4   4   4   2   2   2
Frame 2          :      3   3   3   3   3   3   3   3   3   3   3
Frame 3          :              1   5   5   5   5   5   5   5   5
Page Fault       :  F   F       F   F       F           F

F = Page Fault Occurs (6 page faults in total)


The number of page faults is 6, which is much less than with the FIFO algorithm (9 faults). If we ignore the first three default page faults, the optimal algorithm causes half as many faults as FIFO (3 versus 6).

In fact, no replacement algorithm can execute this string with fewer than 6 page faults using 3 page frames.

This algorithm is very difficult to implement, because it requires future knowledge of the reference string, which is practically impossible; we cannot predict which page will be referenced next by the processor.

This algorithm is therefore used mainly for study & comparison purposes.
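For such comparisons it can still be simulated after the fact. The following hedged C sketch (opt_faults is an illustrative name) picks as victim the resident page whose next use lies farthest in the future; on the string 2 3 2 1 5 2 4 5 3 2 5 2 with 3 frames it reports the 6 faults shown above.

#include <stdio.h>

/* Count page faults for optimal (farthest next use) replacement. */
int opt_faults(const int *refs, int n, int nframes)
{
    int frames[16], used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (hit)
            continue;

        faults++;
        if (used < nframes) {                  /* a free frame exists    */
            frames[used++] = refs[i];
            continue;
        }
        int victim = 0, farthest = -1;
        for (int f = 0; f < nframes; f++) {    /* find farthest next use */
            int next = n;                      /* n = "never used again" */
            for (int j = i + 1; j < n; j++)
                if (refs[j] == frames[f]) { next = j; break; }
            if (next > farthest) { farthest = next; victim = f; }
        }
        frames[victim] = refs[i];
    }
    return faults;
}

int main(void)
{
    int s[] = { 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2 };
    printf("%d faults\n", opt_faults(s, 12, 3));   /* 6 faults */
    return 0;
}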

3. LRU Page Replacement

LRU = Least Recently Used Algorithm.

The performance of this algorithm is better than FIFO but worse than the optimal algorithm.

Since the performance of FIFO is poor & the optimal algorithm cannot be implemented, we use another algorithm, the LRU algorithm.

In FIFO we use past history (the time of arrival), whereas in optimal we use future references.


In this algorithm, we use the recent past as an approximation of the near future, & replace the page that has not been used for the longest period of time.

In LRU we "Replace the page which has not been used for the longest period of time."

LRU replacement associates with each page the time of that page's last use. When a page must be replaced, LRU chooses the page that has not been used for the longest period of time. We can think of this as the optimal page replacement algorithm looking backward in time instead of forward.

In short, LRU is the backward-looking counterpart of OPRA.

Now we apply the LRU algorithm to the previous reference string.

Example 3 :

The string is as follows & the number of frames is again 3.

2 3 2 1 5 2 4 5 3 2 5 2


Reference String :  2   3   2   1   5   2   4   5   3   2   5   2
Frame 1          :  2   2   2   2   2   2   2   2   3   3   3   3
Frame 2          :      3   3   3   5   5   5   5   5   5   5   5
Frame 3          :              1   1   1   4   4   4   2   2   2
Page Fault       :  F   F       F   F       F       F   F

F = Page Fault Occurs (7 page faults in total)

The number of page faults with LRU page replacement is 7.

The first 3 empty frames are filled, causing 3 page faults. When the processor wants to execute an instruction from page number 5, which is not present in memory, the page must be brought in. As the frames are full we have to replace one of the pages. With the LRU algorithm we choose the page which has not been used for the longest period. By observing the string, page 2 was used 2 references ago, page 3 was used 3 references ago & page 1 was used just 1 reference ago. So we choose page number 3 for replacement.

The next reference is for page number 2, which is already present, so there is no need to replace any page.

Next, for page number 4 we choose page number 1 to replace, as it is the least recently used page. We repeat the process till the end.


Generally, LRU is used as the page replacement policy & is considered a good algorithm.

Now the problem arises of how to implement the LRU algorithm; it even requires support from the hardware. The problem is how to determine an order for the frames based on the time of last use.

We can use two methods : Counters & Stack.

Counters :- We associate a counter with each page in the page table. When a page enters a page frame or the page is referenced, its counter is reset. After every memory reference the counter of each resident page is incremented by 1. At the time of replacing a page we select the page which has the largest counter value, i.e., the page that has gone the longest without being referenced.

Stack :- We keep a stack of page numbers. Whenever a page is referenced, it is removed from the stack & put on top. This means the most recently used page is always at the top & the least recently used page is always at the bottom. We can implement this stack with the help of a doubly linked list, with one pointer to the head & another to the tail.

The page to be replaced is the one at the tail end of the doubly linked list.
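A hedged C sketch of the stack idea is given below; for brevity it uses an array ordered from most recently used (front) to least recently used (back) instead of a doubly linked list, & the names lru_stack & lru_reference are illustrative.

#include <stdio.h>

#define MAXFRAMES 16

typedef struct {
    int pages[MAXFRAMES];   /* pages[0] = most recently used page        */
    int count;              /* pages currently resident                  */
    int nframes;            /* number of frames allocated to the process */
} lru_stack;

/* Reference `page`; returns 1 if this reference caused a page fault. */
int lru_reference(lru_stack *s, int page)
{
    int pos = -1;
    for (int i = 0; i < s->count; i++)
        if (s->pages[i] == page) { pos = i; break; }

    int fault = (pos < 0);
    if (fault)     /* use a free frame, or overwrite the LRU page, which
                      sits at the bottom of the stack (pages[count-1])   */
        pos = (s->count < s->nframes) ? s->count++ : s->count - 1;

    for (int i = pos; i > 0; i--)       /* move the page to the top (MRU) */
        s->pages[i] = s->pages[i - 1];
    s->pages[0] = page;
    return fault;
}

int main(void)
{
    int refs[] = { 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2 };
    lru_stack s = { .count = 0, .nframes = 3 };
    int faults = 0;
    for (int i = 0; i < 12; i++)
        faults += lru_reference(&s, refs[i]);
    printf("%d faults\n", faults);      /* 7 page faults, as in Example 3 */
    return 0;
}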


Like OPRA, this algorithm also does not suffer from Belady's Anomaly.

Optimal & LRU come under the class of stack algorithms, so they never suffer from the problem of Belady's Anomaly.

4. Counting Based Page Replacement

In this type of algorithm we introduce a counter for page references. Whenever there is a reference to a page, its counter is incremented by one. The counter indicates the number of times that page has been referenced.

Two types of algorithms are used in the counting based approach :-

1. Least Frequently Used (LFU) algorithm :- In this algorithm, the page with the lowest count is replaced. We use this approach because an actively used page should have a large reference count. But a problem may arise when a page is used heavily at the beginning & never used afterwards; such a page retains a large count & remains in memory even though it is no longer referenced.


2. Most Frequently Used (MFU) algorithm :- This algorithm is the opposite of LFU. In this algorithm we replace the page which has the highest count.
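A minimal hedged sketch of LFU victim selection, assuming each resident page carries a reference count (the names counted_page & lfu_victim are illustrative) :-

struct counted_page { int page; unsigned long count; };

/* Pick the LFU victim : the resident page with the smallest reference
 * count. For MFU, simply flip the comparison to '>'.                  */
int lfu_victim(const struct counted_page *resident, int n)
{
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (resident[i].count < resident[victim].count)
            victim = i;
    return victim;
}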

Selection of Algorithms

The selection of an algorithm depends on many factors, such as the type of operating system & the type of application being used.

The efficiency of an algorithm is measured by the number of page faults it causes.

Generally we choose the algorithm which has the lowest page fault rate.

At the same time, we consider the feasibility of that algorithm as well.

As we have seen, the optimal algorithm is very efficient but very difficult to implement.

Frame Allocation

Now we concentrate on the number of page frames allocated to each process.


How does the O.S. decide how many frames should be allocated to the processes in memory?

Suppose the number of frames available is 100 & there are 2 processes in memory requiring 20 & 30 frames respectively.

Do we allocate all of the 20 & 30 frames, or fewer?

Consider a simple case in which 256 MB of memory is available with a 1 MB page size, giving 256 frames. Suppose the O.S. uses 156 frames, so 100 frames are left for user processes. Initially, all 100 frames are empty. Under pure demand paging, the first 100 page faults are satisfied from these free frames. After the first 100 page faults we need to use page replacement if we want to load more pages.

There are different strategies for frame allocation.

Minimum Number of Frames

We need to keep some constraints in mind at the time of frame allocation.

We cannot allocate more frames than are available. If we have 100 frames, we cannot allocate more than 100 frames.

Second, we must allocate at least a minimum number of frames to each process. This affects the performance of the system : generally, if we allocate fewer frames, the number of page faults increases, & vice versa.


We must allocate enough frames to execute a single instruction. For example, if one instruction refers to 6 pages at a time, then we must allocate a minimum of 6 frames; with fewer than 6 frames that instruction could never complete.

The minimum number of frames is decided by the computer architecture.

Reference String 1 2 3 2 1 5 2 1 6 2 5 6 3 1 3 6 1 2 4 3

1. Execution with FIFO :- Total Page Fault =


Reference String 1 2 3 2 1 5 2 1 6 2 5 6 3 1 3 6 1 2 4 3

2. Execution with OPRA :- Total Page Fault =

3. Execution with LRU :- Total Page Fault =


Whenever we reference a memory address, it is translated to a physical address during the memory access.

As the VM concept is used, a process gets swapped in & swapped out many times. During these swap-ins & swap-outs it may occupy different memory regions.

During swap in & swap out, the process gets divided into many pieces.

Execution of Program

Only a few pieces of the program are brought into main memory for execution.

Initially, only the main part of the program is in the resident set in memory.

When an address is needed which is not available in main memory, an interrupt is generated.

The O.S. places the interrupted process in a blocked state & takes control.

For process execution to continue, the O.S. must bring the required piece of the process into main memory.

To bring in the required piece, the O.S. issues a disk I/O read request. During the disk I/O, the operating system runs another process.

Once the required piece is brought into memory, the stopped process can run again.


Advantages of Breaking up a Process

By breaking up a process we can load only some of its pieces at a time.

Because of this, we can load processes that are larger than the available main memory.

Paging

Paging is used for implementing VM.

In paging, memory is divided into pages.

A page table maintains the information about the pages of a process.

The page table contains the page number & the corresponding frame number.

One more bit is added in the page table to indicate whether the page is in main memory or not.

A modified bit is needed to indicate whether the page has been altered since it was last loaded into main memory.

If no change has been made, the page does not have to be written to the disk when it needs to be swapped out.


Page Size

An important hardware design decision is the size of page to be used. Several factors have to be considered :-

A small page size results in less internal fragmentation.

A small page size means more pages are required per process.

If we increase the number of pages, the size of the page table also increases.

If the page table becomes large, part of the page table of an active process may have to be kept in virtual memory rather than in main memory.

Secondary memory is designed to efficiently transfer large blocks of data so a large page size is better.

Segmentation – VM Implications

In segmentation, program is divided into different segments.

Programmers see the memory as consisting of multiple segments. If virtual memory is used, the programmer need not worry about the memory limitations imposed by main memory.

In segmentation, memory is referenced using a (segment number, offset) pair; the segment number & offset together form the memory address.


Segmentation simplifies handling of growing data structures.

Segmentation allows programs to be altered & recompiled independently, without requiring the entire set of programs to be relinked & reloaded due to use of multiple segments.

Data sharing is simple with segmentation.

Every process has its own segment table, with one entry per segment number.

Each entry contains the starting address of the corresponding segment & length of the segment.

Additional bit is added to know whether the segment is in main memory or not.

One more bit is added to check if segment has been modified since it was loaded in main memory.

Combined Paging & Segmentation

We can use the concepts of paging & segmentation together in virtual memory.

Paging has the ability to eliminate external fragmentation & provides efficient use of main memory.

Segmentation has the ability to handle growing data structures & provides modularity & support for sharing & protection.
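As a hedged illustration of how the two schemes combine, the following C fragment splits a virtual address into a segment number, page number & offset & performs the two-level lookup; the bit widths & structure names are assumptions for illustration, not those of any particular processor, & validity checks are omitted.

#include <stdint.h>

#define PAGE_BITS 12                       /* assumed 4-KB pages          */
#define SEG_SHIFT 22                       /* assumed 10-bit page index   */

struct seg_entry    { uint32_t *page_table; uint32_t length; };
struct segmented_as { struct seg_entry *seg_table; };

/* Translate a (segment, page, offset) virtual address to a physical one. */
uint32_t translate(const struct segmented_as *as, uint32_t vaddr)
{
    uint32_t seg    = vaddr >> SEG_SHIFT;
    uint32_t page   = (vaddr >> PAGE_BITS) & 0x3FF;
    uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);

    uint32_t frame  = as->seg_table[seg].page_table[page]; /* PTE = frame no. */
    return (frame << PAGE_BITS) | offset;
}

The segment table entry locates the per-segment page table, & the page table entry supplies the frame that is finally combined with the offset.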


The design of memory management of an O.S. depends on three fundamental areas :-

1. Whether or not to use virtual memory techniques.

2. The use of paging or segmentation or both.

3. The algorithms employed for various aspects of memory management.

The choices made in the first two areas depend on the hardware platform available. Earlier operating systems did not support virtual memory because their hardware did not support paging or segmentation, whereas current systems do.

Second, pure segmentation systems are becoming increasingly rare.

Third, the algorithms are the domain of O.S. software.

The key issue is Performance : the rate at which page faults occur should be minimized.


Fetch Policy

The fetch policy is concerned with determining when a page should be brought into memory. The two alternatives are demand paging & pre-paging.

Demand Paging :- A page is brought into main memory only when a reference is made to a location on that page.

Pre-paging :- Pages other than the one demanded by a page fault are also brought into memory in advance.

Which pages to bring in depends upon the characteristics of the secondary memory device.

It is more efficient to bring in pages that reside contiguously on the disk.

Pre-paging & swapping are different concepts.

Replacement Policy

Load Control


Linux Memory Management

Linux memory management uses different concepts like paging, segmentation & virtual memory.

Memory is divided into pages. The page size varies from architecture to architecture.

A page is typically 4 KB in size.

64-bit addresses can be handled by three-level page tables; on 32-bit systems only two levels are used.

Paging is supported by a TLB as well.

It consists of the following tables :-

Page Directory : Each active process has a single page directory that is the size of one page.

Page Middle Directory : Each entry in this table points to one page of the page table.

Page Table : Each entry points to one virtual page of the process.
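As a hedged sketch of such a three-level walk, the following C fragment splits a virtual address into page directory, page middle directory & page table indices plus an offset; the bit widths & names are illustrative assumptions, not the actual Linux definitions.

#include <stdint.h>

/* Assumed split of a virtual address : | PGD index | PMD index | PT index | offset | */
#define OFFSET_BITS 12                     /* 4-KB pages */
#define PT_BITS      9
#define PMD_BITS     9
#define PGD_BITS     9

void split_vaddr(uint64_t vaddr, unsigned *pgd, unsigned *pmd,
                 unsigned *pt, unsigned *off)
{
    *off = vaddr & ((1u << OFFSET_BITS) - 1);
    *pt  = (vaddr >> OFFSET_BITS) & ((1u << PT_BITS) - 1);
    *pmd = (vaddr >> (OFFSET_BITS + PT_BITS)) & ((1u << PMD_BITS) - 1);
    *pgd = (vaddr >> (OFFSET_BITS + PT_BITS + PMD_BITS)) & ((1u << PGD_BITS) - 1);
}

/* The page directory entry selects a page middle directory, whose entry
 * selects a page table, whose entry finally yields the page frame.      */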


Windows Memory Management

The Windows virtual memory manager controls how memory is allocated & how paging is performed.

The memory manager is designed to operate over a variety of platforms & to use page sizes ranging from 4 Kbytes to 64 Kbytes.

Windows Virtual Address Map

On 32-bit platforms, each Windows user process sees a separate 32-bit address space, allowing 4 GB of virtual memory per process.

Half of this is reserved for the O.S.,

so each user process has 2 GB of available user address space.

There is an option that allows user space to be increased to 3 Gbytes, leaving 1 Gbyte for system space.

The default virtual address space seen by a normal 32-bit user process is shown in Figure 11.


0x00000000
64-Kbyte region for NULL-pointer assignments (inaccessible)
2-Gbyte user address space (unreserved, usable)
64-Kbyte region for bad-pointer assignments (inaccessible)
2-Gbyte region for the O.S. (inaccessible)
0xFFFFFFFF

Figure 11 : Windows default 32-bit address space

Windows Paging

Whenever a process is created, it can use the 2 GB user address space.

The process address space can be up to 2 GB.

This space is divided into fixed-size pages, which can be brought into main memory.

The O.S. manages these pages in three types of regions :-

Available :- Addresses not currently used by this process.

Reserved :- Addresses that the virtual memory manager has set aside for a process so they cannot be allocated to another use (e.g., preserving space for a stack to grow).

Committed :- Addresses that the virtual memory manager has initialized for use, so the process can access those virtual memory pages.
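These region states correspond to the Win32 VirtualAlloc call. A minimal hedged C sketch of reserving a region & later committing part of it (error handling omitted) :-

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Reserve 1 MB of address space without backing it with storage. */
    void *region = VirtualAlloc(NULL, 1 << 20, MEM_RESERVE, PAGE_NOACCESS);

    /* Later, commit the first 64 KB so the process can actually use it. */
    void *usable = VirtualAlloc(region, 64 * 1024, MEM_COMMIT, PAGE_READWRITE);

    printf("reserved at %p, committed at %p\n", region, usable);

    VirtualFree(region, 0, MEM_RELEASE);     /* release the whole region */
    return 0;
}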


Android Memory Management

Android is an O.S. by Google & the Open Handset Alliance.

It was initially developed by Android Inc. but later purchased by Google.

A group of 84 companies formed an alliance called the Open Handset Alliance.

The Open Handset Alliance contains famous companies like Google, HTC, LG, Dell, Sony, Motorola, Samsung, Nvidia, etc.

Apple, Microsoft, Blackberry, Nokia, HP, etc. are not part of the Open Handset Alliance.

It has been available since 2008 & it is open source. It was originally based on the Linux 2.6 kernel.

It is a mobile O.S. targeting mobile devices like cell phones, tablets, gaming consoles, etc.

Android applications run on a virtual machine called the Dalvik Virtual Machine.

All the basic operations like process management, memory management, I/O management, etc. are handled by a native, stripped-down Linux kernel.

Process & memory management is a little different in Android.

It uses its run time & the Dalvik virtual machine to manage memory.

Each Android application runs as a separate process on the Dalvik virtual machine.

Page 133: Course : Operating System Design · Syllabus Swapping, Demand paging, a hybrid System with swapping and demand paging, memory management requirements, Memory partitioning, Paging,

References

Maurice J. Bach, "The Design of the UNIX Operating System", PHI, ISBN 978-81-203-0516-8

Dhananjay M. Dhamdhere, "Operating Systems: A Concept-Based Approach", 3rd Edition, McGraw-Hill Education, ISBN-13: 978-1-25-900558-9, ISBN-10: 1-25-900558-5
