Cpu Scheduling 6th Sem 2010 REPORT


Page 1: Cpu Scheduling 6th Sem 2010 REPORT

1

IMPLEMENTATION

OF

CPU SCHEDULING ALGORITHMS

Submitted by

Abhishek Bajpai (0716410002)

Devanshu Gupta (0716410031)

Harshit Srivastava (0716410034)

Kushagra Chawla (0716410057)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

PRANVEER SINGH INSTITUTE OF TECHNOLOGY

BHAUTI, KANPUR- 208020

January 2009

Page 2: Cpu Scheduling 6th Sem 2010 REPORT


IMPLEMENTATION

OF

CPU SCHEDULING ALGORITHMS

Submitted by

Abhishek Bajpai (0716410002)

Devanshu Gupta (0716410031)

Harshit Srivastava (0716410034)

Kushagra Chawla (0716410057)

Submitted to the Department of Computer Science & Engineering

in partial fulfillment of the requirements

for the degree of

BACHELOR OF TECHNOLOGY

In

COMPUTER SCIENCE & ENGINEERING

PRANVEER SINGH INSTITUTE OF TECHNOLOGY

U.P. TECHNICAL UNIVERSITY

January 2009

Page 3: Cpu Scheduling 6th Sem 2010 REPORT


TABLE OF CONTENTS

                                          Page
DECLARATION                                  4
CERTIFICATE                                  5
ACKNOWLEDGEMENT                              6
ABSTRACT                                     7
LIST OF FIGURES                              8
LIST OF SYMBOLS                              9
LIST OF ABBREVIATIONS                       10
CHAPTER 1  INTRODUCTION                     11
CHAPTER 2  CPU SCHEDULING                   14
CHAPTER 3  SCHEDULING ALGORITHMS            26
REFERENCES                                  67

Page 4: Cpu Scheduling 6th Sem 2010 REPORT


DECLARATION

We hereby declare that this submission is our own work and that, to the best of our knowledge and belief, it contains no material previously published or written by another person, nor material which to a substantial extent has been accepted for the award of any other degree or diploma of the university or other institute of higher learning, except where due acknowledgement has been made in the text.

Name:

Abhishek Bajpai (0716410002)

Devanshu Gupta (0716410031)

Harshit Srivastava (0716410034)

Kushagra Chawla (0716410057)

Page 5: Cpu Scheduling 6th Sem 2010 REPORT


CERTIFICATE

This is to certify that the Project Report entitled “IMPLEMENTATION OF CPU SCHEDULING ALGORITHMS”, submitted by Abhishek Bajpai (0716410002), Devanshu Gupta (0716410031), Harshit Srivastava (0716410034) and Kushagra Chawla (0716410057) in partial fulfillment of the requirements for the award of the degree of B. Tech. in the Department of Computer Science & Engineering of U.P. Technical University, is a record of the candidates’ own work carried out by them under my supervision. The matter embodied in this report is original and has not been submitted for the award of any other degree.

Mr. Anshuman Tyagi                        Ms. Priyanka Paul (Supervisor)
Associate Professor & HOD                 Lecturer
Dept of CSE, PSIT                         Dept of CSE, PSIT

Page 6: Cpu Scheduling 6th Sem 2010 REPORT


ACKNOWLEDGEMENT

It gives us a great sense of pleasure to present the report of the B. Tech. project undertaken during our B. Tech. pre-final year. We owe a special debt of gratitude to Ms. Priyanka Paul, Lecturer, Department of Computer Science & Engineering, PSIT, Kanpur, for her constant support and guidance throughout the course of our work. Her sincerity, thoroughness and perseverance have been a constant source of inspiration for us. It is only through her cognizant efforts that our endeavors have seen the light of day.

We also take the opportunity to acknowledge the contribution of Mr. Anshuman Tyagi, Associate Professor and HOD, Department of Computer Science & Engineering, PSIT, Kanpur, for his full support and assistance during the development of the project.

We would also like to acknowledge the contribution of all the faculty members of the department for their kind assistance and cooperation during the development of our project. Last but not least, we acknowledge our family and friends for their contribution to the completion of the project.

Page 7: Cpu Scheduling 6th Sem 2010 REPORT


ABSTRACT

The project entitled “Implementation of CPU Scheduling Algorithms” is basically a program which simulates the following scheduling algorithms:

• FCFS (First Come First Served)
• SPN (Shortest Process Next)
• SRT (Shortest Remaining Time)
• Round-Robin
• Priority Scheduling

CPU scheduling is a key concept in computer multitasking, multiprocessing and real-time operating system designs. It refers to the way processes are assigned to run on the available CPUs, since there are typically many more processes running than there are available CPUs.

CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. By switching the CPU among processes, the operating system can make the computer more productive. A multiprogramming operating system allows more than one process to be loaded into executable memory at a time and allows the loaded processes to share the CPU using time-multiplexing.

A scheduling algorithm is the method by which threads, processes or data flows are given access to system resources (e.g., processor time, communications bandwidth). The need for a scheduling algorithm arises from the requirement of most modern systems to perform multitasking (executing more than one process at a time) and multiplexing (transmitting multiple flows simultaneously).

Page 8: Cpu Scheduling 6th Sem 2010 REPORT


LIST OF FIGURES

F.1 Alternating Sequence of CPU and I/O Bursts
F.2 Histogram of CPU-burst Durations
F.3 Scheduling and Process State Transitions
F.4 Queuing Diagram for Scheduling
F.5 Levels of Scheduling

Page 9: Cpu Scheduling 6th Sem 2010 REPORT


LIST OF SYMBOLS

Ø Phi

* Closure

|s| Length of s

€ Epsilon

U Union

Σ Sigma

Page 10: Cpu Scheduling 6th Sem 2010 REPORT


LIST OF ABBREVIATIONS

FCFS First Come First Served
SJF Shortest Job First
SPN Shortest Process Next
SRT Shortest Remaining Time
RR Round-Robin
CPU Central Processing Unit
I/O Input/Output
PCB Process Control Block
FIFO First In, First Out

Page 11: Cpu Scheduling 6th Sem 2010 REPORT


1. INTRODUCTION

1.1 SCHEDULING

An operating system is a program that manages the hardware and software resources of a computer. It is the first thing loaded into memory when we turn on the computer. Without an operating system, each programmer would have to create a way for a program to display text and graphics on the monitor, a way to send data to a printer, a way to read a disk file, and a way to deal with other programs.

In the beginning, programmers needed a way to handle complex input/output operations. As computer programs evolved and grew more complex, new requirements emerged. As machines became more powerful, the time a program needed to run decreased; however, the time needed to hand the machine off between programs became significant, and this led to programs like DOS. The acronym DOS stands for Disk Operating System, which confirms that operating systems were originally made to handle complex input/output operations, such as communicating with a variety of disk drives.

Earlier computers were not as powerful as they are today. On early computer systems you could run only one program at a time; for instance, you could not write a paper and browse the internet at the same time. Today's operating systems, however, are quite capable of handling not just two but multiple applications at the same time. In fact, a computer that cannot do this is considered useless by most computer users.

For a computer to handle multiple applications simultaneously, there must be an effective way of using the CPU. Several processes may be running at the same time, so there has to be some kind of order that allows each process to get its share of CPU time.

An operating system must allocate computer resources among the potentially competing requirements of multiple processes. In the case of the processor, the resource to be allocated is execution time on the processor, and the means of allocation is scheduling. The scheduling function must be designed to satisfy a number of objectives, including fairness, lack of starvation of any particular process, efficient use of processor time, and low overhead. In addition, the scheduling

Page 12: Cpu Scheduling 6th Sem 2010 REPORT


function may need to take into account different levels of priority or real-time deadlines for the start or completion of certain processes.

Over the years, scheduling has been the focus of intensive research, and many different algorithms have been implemented. Today, the emphasis in scheduling research is on exploiting multiprocessor systems, particularly for multithreaded applications, and on real-time scheduling.

In a multiprogramming system, multiple processes exist concurrently in main memory. Each process alternates between using a processor and waiting for some event to occur, such as the completion of an I/O operation. The processor or processors are kept busy by executing one process while the others wait; hence, the key to multiprogramming is scheduling.

1.2 GOAL & OBJECTIVE OF THE PROJECT

The implementation of the CPU scheduling algorithms is an automated tool that provides efficient and error-free computation of the waiting times, turnaround times, finishing times and normalized turnaround times of the First Come First Served (FCFS), Shortest Process Next (SPN), Shortest Remaining Time (SRT), Round-Robin (RR), and Priority Scheduling algorithms. The system provides a clean and convenient way to test given data and to analyze the CPU scheduling algorithms mentioned above.

For the purpose of analysis and testing, the user first specifies each process along with its information, such as arrival time and service time; then FCFS, SPN, SRT, RR, or Priority scheduling can be computed, producing output in a format appropriate for readability.

1.3 SCOPE OF THE PROJECT

In order to make the computer more productive under multiprogramming, the operating system needs to switch the CPU among processes. It must provide the basic algorithm that determines which process is allowed to get the CPU at the current time, and whether that process is allowed to finish its execution relative to the other processes in the system. Therefore, CPU scheduling algorithms such as First Come First Served (FCFS), Shortest Process Next (SPN), Shortest Remaining Time (SRT), Round-Robin (RR), and Priority Scheduling are among the possible solutions for a multiprogramming operating system.

Page 13: Cpu Scheduling 6th Sem 2010 REPORT


A multiprogramming operating system allows more than one process to be loaded into the executable memory at a time and for the loaded process to share the CPU using time multiplexing. Part of the reason for using multiprogramming is that the operating system itself is implemented as one or more processes, so there must be a way for the operating system and application processes to share the CPU. Another main reason is the need for processes to perform I/O operations in the normal course of computation. Since I/O operations ordinarily require orders of magnitude more time to complete than do CPU instructions, multiprogramming systems allocate the CPU to another process whenever a process invokes an I/O operation.

Page 14: Cpu Scheduling 6th Sem 2010 REPORT


2. CPU SCHEDULING

CPU scheduling is the basis of multi-programmed operating systems. By switching the CPU among processes, the operating system can make the computer more productive. A multiprogramming operating system allows more than one process to be loaded into the executable memory at a time and for the loaded process to share the CPU using time-multiplexing.

Part of the reason for using multiprogramming is that the operating system itself is implemented as one or more processes, so there must be a way for the operating system and application processes to share the CPU. Another main reason is the need for processes to perform I/O operations in the normal course of computation. Since I/O operations ordinarily require orders of magnitude more time to complete than do CPU instructions, multiprogramming systems allocate the CPU to another process whenever a process invokes an I/O operation.

Scheduling refers to the way processes are assigned to run on the available CPUs, since there are typically many more processes running than there are available CPUs.

2.1 BASIC CONCEPTS

In a single-processor system, only one process can run at a time; any others must wait until the CPU is free and can be rescheduled. The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The idea is relatively simple. A process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU then just sits idle. All this waiting time is wasted; no useful work is accomplished. With multiprogramming, we try to use this time productively. Several processes are kept in memory at one time. When one process has to wait, the operating system takes the CPU away from that process and gives the CPU to another process. This pattern continues. Every time one process has to wait, another process can take over use of the CPU.

Scheduling of this kind is a fundamental operating-system function. Almost all computer resources are scheduled before use. The CPU is, of course, one of the primary computer resources. Thus, its scheduling is central to operating-system design.

Page 15: Cpu Scheduling 6th Sem 2010 REPORT


2.1.1 CPU-I/O BURST CYCLE

The success of CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states. Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution.

F.1. Alternating Sequence of CPU and I/O Bursts

The durations of CPU bursts have been measured extensively. Although they vary greatly from process to process and from computer to computer, they tend to have a frequency curve like the following:

Page 16: Cpu Scheduling 6th Sem 2010 REPORT


F.2. Histogram of CPU-burst durations

The curve is generally characterized as exponential or hyperexponential, with a large number of short CPU bursts and a small number of long CPU bursts. An I/O-bound program typically has many short CPU bursts, while a CPU-bound program might have a few long CPU bursts. This distribution can be important in the selection of an appropriate CPU-scheduling algorithm.

2.1.2 CPU SCHEDULER

Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection process is carried out by the short-term scheduler or CPU scheduler. The scheduler selects a process from the processes in memory that are ready to execute and allocates the CPU to that process.

The ready queue is not necessarily a first-in, first-out (FIFO) queue. It can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. Conceptually, however, all the processes in the ready queue are lined up waiting for a chance to run on the CPU. The records in the queues are generally process control blocks (PCBs) of the processes.
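As an illustration of these alternatives, the sketch below (illustrative Python, not part of the project's code) contrasts a FIFO ready queue with a priority-ordered one; the tuples are stand-ins for PCBs:

```python
import heapq
from collections import deque

# Each (priority, pid) entry stands in for a PCB; a real PCB would also
# hold registers, process state, accounting information, and so on.
fifo_ready = deque()   # FIFO queue: processes run in arrival order
prio_ready = []        # binary-heap priority queue: lowest number runs first

for priority, pid in [(3, "P1"), (1, "P2"), (2, "P3")]:
    fifo_ready.append((priority, pid))
    heapq.heappush(prio_ready, (priority, pid))

print(fifo_ready.popleft()[1])       # P1 -- first process to arrive
print(heapq.heappop(prio_ready)[1])  # P2 -- highest-priority process
```

The same set of processes yields a different "next process" depending purely on how the ready queue is organized, which is exactly the freedom the paragraph above describes.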

Page 17: Cpu Scheduling 6th Sem 2010 REPORT


2.1.3 PREEMPTIVE SCHEDULING

CPU-scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (e.g., as the result of an I/O request or an invocation of wait for the termination of one of the child processes)
2. When a process switches from the running state to the ready state (e.g., when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (e.g., at completion of I/O)
4. When a process terminates

For situations 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, for situations 2 and 3. When scheduling takes place only under circumstances 1 and 4, we say that the scheduling scheme is nonpreemptive or cooperative; otherwise, it is preemptive. Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.

Cooperative scheduling is the only method that can be used on certain hardware platforms, because it does not require the special hardware (such as a timer) needed for preemptive scheduling. Unfortunately, preemptive scheduling incurs a cost associated with access to shared data.

2.1.4 DISPATCHER

Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program

Page 18: Cpu Scheduling 6th Sem 2010 REPORT


The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.

2.2 TYPES OF PROCESSOR SCHEDULING

The aim of processor scheduling is to assign processes to be executed by the processor or processors over time, in a way that meets system objectives such as response time, throughput, and processor efficiency. In many systems, this scheduling activity is broken down into three separate functions: long-term, medium-term, and short-term scheduling. The names suggest the relative time scales on which these functions are performed.

T.1. Types of Scheduling

Long-term scheduling is performed when a new process is created. This is a decision whether to add a new process to the set of processes that are currently active. Medium-term scheduling is a part of the swapping function. This is a decision whether to add a process to those that are at least partially in main memory and therefore available for execution. Short-term scheduling is the actual decision of which ready process to execute next.

The following figure relates the scheduling functions to the process state transitions:

Page 19: Cpu Scheduling 6th Sem 2010 REPORT


F.3 Scheduling and Process State Transitions

Scheduling affects the performance of the system because it determines which processes will wait and which will progress. This figure shows the queues involved in the state transitions of a process. Fundamentally, scheduling is a matter of managing queues to minimize queuing delay and to optimize performance in a queuing environment.

F.4. Queuing Diagram for Scheduling

Figure F.5 below reorganizes the state transition diagram to suggest the nesting of scheduling functions:

Page 20: Cpu Scheduling 6th Sem 2010 REPORT


F.5. Levels of Scheduling

2.2.1 LONG-TERM SCHEDULING

The long-term scheduler determines which programs are admitted to the system for processing. Thus, it controls the degree of multiprogramming. Once admitted, a job or user program becomes a process and is added to the queue for the short-term scheduler. In some systems, a newly created process begins in a swapped-out condition, in which case it is added to a queue for the medium-term scheduler.

In a batch system, or for the batch portion of a general-purpose operating system, newly submitted jobs are routed to disk and held in a batch queue. The long-term scheduler creates processes from the queue when it can. There are two decisions involved here. First, the scheduler must decide when the

Page 21: Cpu Scheduling 6th Sem 2010 REPORT


operating system can take on one or more additional processes. Second, the scheduler must decide which job or jobs to accept and turn into processes.

The decision as to when to create a new process is generally driven by the desired degree of multiprogramming. The more processes that are created, the smaller is the percentage of time that each process can be executed (i.e., more processes are competing for the same amount of processor time). Thus, the long-term scheduler may limit the degree of multiprogramming to provide satisfactory service to the current set of processes. Each time a job terminates, the scheduler may decide to add one or more new jobs.

The decision as to which job to admit next can be made on a simple first-come-first-served basis, or it can be a tool to manage system performance. The criteria used may include priority, expected execution time, and I/O requirements.

2.2.2 MEDIUM-TERM SCHEDULING

The medium-term scheduler temporarily removes processes from main memory and places them in secondary memory, or vice versa. This is commonly referred to as "swapping out" or "swapping in". The medium-term scheduler may decide to swap out a process that has not been active for some time, that has a low priority, that is page-faulting frequently, or that is taking up a large amount of memory, in order to free up main memory for other processes. The process is swapped back in later, when more memory is available or when it has been unblocked and is no longer waiting for a resource. Medium-term scheduling is thus part of the swapping function.

Typically, the swapping-in decision is based on the need to manage the degree of multiprogramming. On a system that does not use virtual memory, memory management is also an issue. Thus, the swapping-in decision will consider the memory requirements of the swapped-out processes.

2.2.3 SHORT-TERM SCHEDULING

In terms of frequency of execution, the long-term scheduler executes relatively infrequently and makes the coarse-grained decision of whether or not to take on a new process, and which one to take.

Page 22: Cpu Scheduling 6th Sem 2010 REPORT


The medium-term scheduler is executed somewhat more frequently to make a swapping decision. The short-term scheduler, also known as the dispatcher, executes most frequently and makes the fine-grained decision of which process to execute next.

The short-term scheduler is invoked whenever an event occurs that may lead to the blocking of the current process or that may provide an opportunity to preempt a currently running process in favor of another. Examples of such events include:

• Clock interrupts
• I/O interrupts
• Operating system calls
• Signals (e.g., semaphores)

Page 23: Cpu Scheduling 6th Sem 2010 REPORT


2.3 SCHEDULING CRITERIA

Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms.

Many criteria have been suggested for comparing CPU scheduling algorithms. Which characteristics are used for comparison can make a substantial difference in which algorithm is judged to be best. The criteria include the following:

• CPU utilization: We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).

• Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, it may be ten processes per second.

• Turnaround time: From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.

• Waiting time: The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.

• Response time: In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.

Page 24: Cpu Scheduling 6th Sem 2010 REPORT


It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time.
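These quantities are straightforward to compute once a schedule is known. The sketch below uses a hypothetical three-process schedule (illustrative data, not results from the project) to derive turnaround time, waiting time, and the normalized turnaround time mentioned in Section 1.2 from each process's arrival, service, and finish times:

```python
# Hypothetical completed schedule: name -> (arrival, service time, finish time)
schedule = {
    "P1": (0, 3, 3),
    "P2": (2, 6, 9),
    "P3": (4, 4, 13),
}

for name, (arrival, service, finish) in schedule.items():
    turnaround = finish - arrival       # total time spent in the system
    waiting = turnaround - service      # time spent waiting in queues
    normalized = turnaround / service   # turnaround relative to service time
    print(f"{name}: turnaround={turnaround}, waiting={waiting}, "
          f"normalized={normalized:.2f}")
```

A normalized turnaround of 1.0 means the process never waited; larger values indicate proportionally poorer service, which makes the measure useful for comparing short and long processes fairly.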

The commonly used criteria can also be categorized along two dimensions. First, we can make a distinction between user-oriented and system-oriented criteria. User-oriented criteria relate to the behavior of the system as perceived by the individual user or process. An example is response time, i.e., the elapsed time between the submission of a request and the beginning of the response. This quantity is visible to the user and is naturally of interest to the user.

Other criteria are system-oriented, i.e., the focus is on effective and efficient utilization of the processor. An example is throughput, which is the rate at which processes are completed. This is certainly a worthwhile measure of system performance and one that we would like to maximize. However, it focuses on system performance rather than the service provided to the user. Thus, throughput is of concern to a system administrator but not to the user population.

Another dimension along which criteria can be classified is between those that are performance related and those that are not. Performance-related criteria are quantitative and generally can be readily measured; examples include response time and throughput. Criteria that are not performance related are either qualitative in nature or do not lend themselves readily to measurement and analysis. An example of such a criterion is predictability.

The following table summarizes key scheduling criteria. These criteria are interdependent, and it is impossible to optimize all of them simultaneously; e.g., providing good response time may require a scheduling algorithm that switches between processes frequently, which increases the overhead of the system and reduces throughput. Thus, the design of a scheduling policy involves compromising among competing requirements.

Page 25: Cpu Scheduling 6th Sem 2010 REPORT


T.2. Scheduling Criteria

Page 26: Cpu Scheduling 6th Sem 2010 REPORT


3. SCHEDULING ALGORITHMS

CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. There are many different CPU scheduling algorithms. In this section, we describe several of them.

3.1 FIRST-COME, FIRST-SERVED SCHEDULING

By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue. The code for FCFS scheduling is simple to write and understand.

The average waiting time under the FCFS policy, however, is often quite long. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Process   Burst Time
P1        24
P2        3
P3        3

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart:

| P1 | P2 | P3 |
0    24   27   30

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27) / 3 = 17 milliseconds. If

Page 27: Cpu Scheduling 6th Sem 2010 REPORT


the processes arrive in the order P2, P3, P1, however, the results will be as shown in the following Gantt chart:

| P2 | P3 | P1 |
0    3    6    30

The average waiting time is now (6 + 0 + 3) / 3 = 3 milliseconds. This reduction is substantial. Thus, the average waiting time under the FCFS policy is generally not minimal and may vary substantially if the processes' CPU burst times vary greatly.
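The two averages above can be checked with a short simulation. This sketch (illustrative Python, not the project's actual implementation) computes FCFS waiting times for processes that all arrive at time 0:

```python
def fcfs_waits(bursts):
    """Waiting time of each process under FCFS, all arriving at t = 0.
    `bursts` is an ordered list of (name, burst time) pairs."""
    waits, clock = {}, 0
    for name, burst in bursts:
        waits[name] = clock   # waits for every earlier burst to finish
        clock += burst
    return waits

w = fcfs_waits([("P1", 24), ("P2", 3), ("P3", 3)])
print(sum(w.values()) / len(w))   # 17.0

w = fcfs_waits([("P2", 3), ("P3", 3), ("P1", 24)])
print(sum(w.values()) / len(w))   # 3.0
```

Reordering the same three bursts changes the average waiting time from 17 ms to 3 ms, which is exactly the sensitivity to arrival order described above.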

In addition, consider the performance of FCFS scheduling in a dynamic situation. Assume we have one CPU-bound process and many I/O-bound processes. As the processes flow around the system, the following scenario may result. The CPU-bound process will get and hold the CPU. During this time, all the other processes will finish their I/O and move into the ready queue, waiting for the CPU. While the processes wait in the ready queue, the I/O devices are idle. Eventually, the CPU-bound process finishes its CPU burst and moves to an I/O device. All the I/O-bound processes, which have short CPU bursts, execute quickly and move back to the I/O queues. At this point, the CPU sits idle. The CPU-bound process will then move back to the ready queue and be allocated the CPU. Again, all the I/O-bound processes end up waiting in the ready queue until the CPU-bound process is done. There is a convoy effect, as all the other processes wait for the one big process to get off the CPU. This effect results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.

The FCFS scheduling algorithm is nonpreemptive. Once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O. The FCFS algorithm is thus particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals. It would be disastrous to allow one process to keep the CPU for an extended period.

Page 28: Cpu Scheduling 6th Sem 2010 REPORT


3.2 SHORTEST-JOB-FIRST SCHEDULING

A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.

As an example of SJF scheduling, consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process Burst Time

P1 6

P2 8

P3 7

P4 3

Using SJF scheduling, we would schedule these processes according to the following Gantt chart:

The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for

process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3 + 16 + 9 + 0) / 4

= 7 milliseconds. By comparison, if we were using the FCFS scheduling scheme, the average waiting

time would be 10.25 milliseconds.

The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time

for a given set of processes. Moving a short process before a long one decreases the waiting time of the

short process more than it increases the waiting time of the long process. Consequently, the average

waiting time decreases.
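When all processes arrive together, nonpreemptive SJF reduces to FCFS applied to the bursts in ascending order. A minimal sketch (our own illustration, not the applet's code) reproduces the 7-millisecond average from the example above:

```java
import java.util.Arrays;

public class SjfDemo {
    // Nonpreemptive SJF for processes all arriving at time 0: sort the
    // bursts ascending, then accumulate waits exactly as FCFS would.
    static double averageWaitingTime(int[] bursts) {
        int[] sorted = bursts.clone();
        Arrays.sort(sorted);      // shortest job first
        int elapsed = 0, totalWait = 0;
        for (int burst : sorted) {
            totalWait += elapsed;
            elapsed += burst;
        }
        return (double) totalWait / sorted.length;
    }

    public static void main(String[] args) {
        // The report's example: P1=6, P2=8, P3=7, P4=3.
        System.out.println(averageWaitingTime(new int[]{6, 8, 7, 3})); // 7.0
    }
}
```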

The SJF algorithm can be either preemptive or nonpreemptive. The choice arises when a new process

arrives at the ready queue while a previous process is still executing. The next CPU burst of the

newly arrived process may be shorter than what is left of the currently executing process. A

preemptive SJF algorithm will preempt the currently executing process, whereas a nonpreemptive

SJF algorithm will allow the currently running process to finish its CPU burst. Nonpreemptive

SJF scheduling is also called Shortest Process Next (SPN) scheduling, and preemptive SJF

scheduling is called Shortest Remaining Time (SRT) scheduling.

As an example, consider the following four processes, with the length of the CPU burst given in

milliseconds:

Process Arrival Time Burst Time

P1 0 8

P2 1 4

P3 2 9

P4 3 5

If the processes arrive at the ready queue at the times shown and need the indicated burst times, then

the resulting preemptive SJF schedule is as depicted in the following Gantt chart:

Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1.

The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4

milliseconds), so process P1 is preempted, and process P2 is scheduled. The average waiting time for

this example is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3)) / 4 = 26 / 4 = 6.5 milliseconds. Nonpreemptive

SJF scheduling would result in an average waiting time of 7.75 milliseconds.
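The preemptive schedule above can be verified by simulating one time unit at a time. The sketch below is illustrative only (it is not the applet's Scheduler class): at each tick it runs the arrived process with the least remaining work.

```java
public class SrtDemo {
    // Preemptive SJF (shortest remaining time), simulated tick by tick.
    // Waiting time for each process = finish - arrival - burst.
    static double averageWaitingTime(int[] arrival, int[] burst) {
        int n = arrival.length;
        int[] remaining = burst.clone();
        int[] finish = new int[n];
        int done = 0, time = 0;
        while (done < n) {
            int pick = -1;        // arrived, unfinished process with least work
            for (int i = 0; i < n; i++)
                if (arrival[i] <= time && remaining[i] > 0
                        && (pick < 0 || remaining[i] < remaining[pick]))
                    pick = i;
            if (pick < 0) { time++; continue; } // CPU idle: nothing has arrived
            remaining[pick]--;
            time++;
            if (remaining[pick] == 0) { finish[pick] = time; done++; }
        }
        int totalWait = 0;
        for (int i = 0; i < n; i++) totalWait += finish[i] - arrival[i] - burst[i];
        return (double) totalWait / n;
    }

    public static void main(String[] args) {
        // The report's example: arrivals 0,1,2,3 with bursts 8,4,9,5.
        System.out.println(averageWaitingTime(
                new int[]{0, 1, 2, 3}, new int[]{8, 4, 9, 5})); // 6.5
    }
}
```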

3.3 PRIORITY SCHEDULING

The SJF algorithm is a special case of the general priority scheduling algorithm. A priority is

associated with each process, and the CPU is allocated to the process with the highest priority. Equal-

priority processes are scheduled in FCFS order. An SJF algorithm is simply a priority algorithm

where the priority (p) is the inverse of the (predicted) next CPU burst. The larger the CPU burst, the

lower the priority, and vice versa.

Note that we discuss scheduling in terms of high priority and low priority. Some systems use low

numbers to represent low priority; others use low numbers for high priority. This difference can lead

to confusion, so we assume that low numbers represent high priority.

As an example, consider the following set of processes, assumed to have arrived at time 0, in the

order P1, P2… P5 with the length of the CPU burst given in milliseconds:

Process Burst Time Priority

P1 10 3

P2 1 1

P3 2 4

P4 1 5

P5 5 2

Using priority scheduling, we would schedule these processes according to the following Gantt chart:

The average waiting time is 8.2 milliseconds.

Priorities can be defined either internally or externally. Internally defined priorities use some

measurable quantity or quantities to compute the priority of a process. For example, time limits, memory

requirements, the number of open files, and the ratio of average I/O burst to average CPU burst have

been used in computing priorities. External priorities are set by criteria outside the operating system,

such as the importance of the process, the type and amount of funds being paid for computer use, the

department sponsoring the work, and other, often political, factors.

Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at the ready

queue, its priority is compared with the priority of the currently running process. A preemptive

priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is

higher than the priority of the currently running process. A nonpreemptive priority scheduling

algorithm will simply put the new process at the head of the ready queue.

A major problem with priority scheduling algorithms is indefinite blocking or starvation. A process

that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling

algorithm can leave some low priority processes waiting indefinitely. In a heavily loaded computer

system, a steady stream of higher-priority processes can prevent a low-priority process from ever

getting the CPU. Generally, one of two things will happen. Either the process will eventually be run,

or the computer system will eventually crash and lose all unfinished low-priority processes.

A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a

technique of gradually increasing the priority of processes that wait in the system for a long time.
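The 8.2-millisecond average from the example above can be reproduced with a small sketch (our own illustration, not project code), assuming all processes arrive at time 0 and, as stated in the text, that a low number means high priority:

```java
import java.util.Arrays;
import java.util.Comparator;

public class PriorityDemo {
    // Nonpreemptive priority scheduling, all arrivals at time 0.
    // Each entry is {burst, priority}; low number = high priority.
    static double averageWaitingTime(int[][] jobsIn) {
        int[][] jobs = jobsIn.clone();
        Arrays.sort(jobs, Comparator.comparingInt((int[] j) -> j[1]));
        int elapsed = 0, totalWait = 0;
        for (int[] job : jobs) {
            totalWait += elapsed; // waits behind all higher-priority jobs
            elapsed += job[0];
        }
        return (double) totalWait / jobs.length;
    }

    public static void main(String[] args) {
        // The report's example: {burst, priority} for P1..P5.
        System.out.println(averageWaitingTime(new int[][]{
                {10, 3}, {1, 1}, {2, 4}, {1, 5}, {5, 2}})); // 8.2
    }
}
```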

3.4 ROUND-ROBIN SCHEDULING

The round-robin (RR) scheduling algorithm is designed especially for timesharing systems. It is

similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of

time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100

milliseconds. The ready queue is treated as a circular queue. The CPU scheduler goes around the

ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.

To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes

are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready

queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.

One of two things will then happen. The process may have a CPU burst of less than 1 time quantum.

In this case, the process itself will release the CPU voluntarily. The scheduler will then proceed to the

next process in the ready queue. Otherwise, if the CPU burst of the currently running process is

longer than 1 time quantum, the timer will go off and will cause an interrupt to the operating system.

A context switch will be executed, and the process will be put at the tail of the ready queue. The CPU

scheduler will then select the next process in the ready queue.

The average waiting time under the RR policy is often long. Consider the following set of processes

that arrive at time 0, with the length of the CPU burst given in milliseconds:

Process Burst Time

P1 24

P2 3

P3 3

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it

requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to

the next process in the queue, process P2. Since process P2 does not need 4 milliseconds, it quits

before its time quantum expires. The CPU is then given to the next process, process P3. Once each

process has received 1 time quantum, the CPU is returned to process P1 for an additional time

quantum. The resulting RR schedule is:

The average waiting time is 17/3 = 5.66 milliseconds.

In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in a

row. If a process's CPU burst exceeds 1 time quantum, that process is preempted and is put back in

the ready queue. The RR scheduling algorithm is thus preemptive.
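The round-robin timeline above follows directly from the FIFO discipline, which the sketch below (illustrative only; names are ours) simulates with a queue of process indices, assuming all processes arrive at time 0:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RoundRobinDemo {
    // Round-robin with a fixed quantum, all arrivals at time 0.
    // With arrival 0, waiting time = finish - burst.
    static double averageWaitingTime(int[] bursts, int quantum) {
        int n = bursts.length;
        int[] remaining = bursts.clone();
        int[] finish = new int[n];
        Deque<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < n; i++) ready.add(i);
        int time = 0;
        while (!ready.isEmpty()) {
            int p = ready.poll();
            int run = Math.min(quantum, remaining[p]);
            time += run;
            remaining[p] -= run;
            if (remaining[p] > 0) ready.add(p); // preempted: back to the tail
            else finish[p] = time;
        }
        int totalWait = 0;
        for (int i = 0; i < n; i++) totalWait += finish[i] - bursts[i];
        return (double) totalWait / n;
    }

    public static void main(String[] args) {
        // The report's example: bursts 24, 3, 3 with a quantum of 4.
        System.out.println(averageWaitingTime(new int[]{24, 3, 3}, 4)); // 17/3 ≈ 5.67
    }
}
```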

The performance of the RR algorithm depends heavily on the size of the time quantum. At one

extreme, if the time quantum is extremely large, the RR policy is the same as the FCFS policy. If the

time quantum is extremely small (say, 1 millisecond), the RR approach is called processor sharing

and creates the appearance that each of n processes has its own processor running at 1/n the speed of

the real processor.

In software, we also need to consider the effect of context switching on the performance of RR

scheduling. Let us assume that we have only one process of 10 time units. If the quantum is 12 time

units, the process finishes in less than 1 time quantum, with no overhead. If the quantum is 6 time

units, however, the process requires 2 quanta, resulting in a context switch. If the time quantum is 1

time unit, then nine context switches will occur, slowing the execution of the process accordingly.

F.6. Way in which a smaller time quantum increases Context Switches
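The three cases in the paragraph above follow a simple formula: for a single process, the number of timer-driven context switches is ceil(burst / quantum) - 1. A tiny sketch (our illustration, not project code) checks all three:

```java
public class QuantumDemo {
    // Context switches forced by the timer for one process:
    // ceil(burst / quantum) - 1, computed with integer arithmetic.
    static int contextSwitches(int burst, int quantum) {
        return (burst + quantum - 1) / quantum - 1;
    }

    public static void main(String[] args) {
        System.out.println(contextSwitches(10, 12)); // 0: fits in one quantum
        System.out.println(contextSwitches(10, 6));  // 1: needs two quanta
        System.out.println(contextSwitches(10, 1));  // 9: one switch per unit
    }
}
```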

3.5 CHARACTERISTICS OF SCHEDULING ALGORITHMS

The following table presents some summary information about the various scheduling algorithms that

are examined previously. The selection function determines which process, among ready processes,

is selected next for execution. The function may be based on priority, resource requirements, or the

execution characteristics of the process. Here, three quantities are significant:

w = time spent in system so far, waiting

e = time spent in execution so far

s = total service time required by the process, including e

T.3. Characteristics of Various Scheduling Algorithms

The decision mode specifies the instants in time at which the selection function is exercised. There

are two general categories:

Nonpreemptive: In this case, once a process is in the Running state, it continues to execute

until it terminates or it blocks itself to wait for I/O or to request some operating system

service.

Preemptive: The currently running process may be interrupted and moved to the Ready state

by the operating system. The decision to preempt may be performed when a new process

arrives; when an interrupt occurs that places a blocked process in the Ready state; or

periodically, based on a clock interrupt.

Preemptive policies incur greater overhead than nonpreemptive ones but may provide better service

to the total population of processes, because they prevent any one process from monopolizing the

processor for very long. In addition, the cost of preemption may be kept relatively low by using

efficient process switching mechanisms and by providing a large main memory to keep a high

percentage of programs in main memory.

3.6 COMPARISON OF SCHEDULING ALGORITHMS

As we compare the various scheduling algorithms, we will use the set of processes given in the

following table as a running example. We can think of these as batch jobs, with the service time

being the total execution time required. Alternatively, we can consider these to be ongoing processes

that require alternate use of the processor and I/O in a repetitive fashion. In this latter case, the

service times represent the processor time required in one cycle. In either case, in terms of a queuing

model, this quantity corresponds to the service time.

Process Arrival Time Service Time

A 0 3

B 2 6

C 4 4

D 6 5

E 8 2

For the given example, Figure F.7 shows the execution pattern for each algorithm for one cycle, and

Table T.4 summarizes some key results. First, the finish time of each process is determined. From

this, we can determine the turnaround time. In terms of the queuing model, turnaround time is the

residence time Tr, or total time that the item spends in the system waiting and providing service. A

more useful figure is the normalized turnaround time, which is the ratio of turnaround time to service

time. This value indicates the relative delay experienced by a process. Typically, the longer the

process execution time, the greater the absolute amount of delay that can be tolerated. The minimum

possible value for this ratio is 1.0, and increasing values correspond to a decreasing level of service.
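These quantities are easy to compute for the FCFS case of the running example. The sketch below is our own illustration (the finish times are computed here, not copied from the figures): under FCFS each process starts when its predecessor finishes or when it arrives, whichever is later.

```java
public class TurnaroundDemo {
    // FCFS finish times: each process starts at max(previous finish, arrival).
    static int[] fcfsFinishTimes(int[] arrival, int[] service) {
        int[] finish = new int[arrival.length];
        int time = 0;
        for (int i = 0; i < arrival.length; i++) {
            time = Math.max(time, arrival[i]) + service[i];
            finish[i] = time;
        }
        return finish;
    }

    public static void main(String[] args) {
        int[] arrival = {0, 2, 4, 6, 8};
        int[] service = {3, 6, 4, 5, 2};
        int[] finish = fcfsFinishTimes(arrival, service);
        for (int i = 0; i < finish.length; i++) {
            int tr = finish[i] - arrival[i];     // turnaround (residence) time Tr
            System.out.printf("%c finish=%d Tr=%d Tr/Ts=%.2f%n",
                    'A' + i, finish[i], tr, (double) tr / service[i]);
        }
    }
}
```

Note how the short process E, arriving last, ends up with the worst normalized turnaround (6.00) even though its service time is the smallest; this is the FCFS bias against short late arrivals discussed in 3.6.1.2.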

F.7. Comparison of Various Scheduling Algorithms

T.4. Comparison of Various Scheduling Algorithms

3.6.1 PERFORMANCE COMPARISON

Clearly, the performance of various scheduling policies is a critical factor in the choice of a

scheduling policy. However, it is impossible to make definitive comparisons because relative

performance will depend on a variety of factors, including the probability distribution of service

times of the various processes, the efficiency of the scheduling and context switching mechanisms,

and the nature of the I/O demand and the performance of the I/O subsystem.

3.6.1.1 QUEUING ANALYSIS

Here, we use basic queuing formulas with the common assumptions of Poisson arrivals and

exponential service times, and make the observation that any scheduling discipline that chooses

the next item to be served independently of service time obeys the following relationship:

Tr / Ts = 1 / (1 - p)

where

Tr = turnaround time or residence time

Ts = average service time

p = processor utilization
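The formula shows why a heavily utilized processor degrades service so sharply: normalized residence time grows without bound as utilization approaches 1. A quick sketch (our own illustration) tabulates a few values:

```java
public class QueuingDemo {
    // Normalized residence time Tr/Ts = 1 / (1 - p) for a service-time-
    // independent discipline with Poisson arrivals and exponential service.
    static double normalizedResidence(double utilization) {
        return 1.0 / (1.0 - utilization);
    }

    public static void main(String[] args) {
        // Tr/Ts roughly doubles from p=0.5 to p=0.8, then explodes near 1.
        for (double p : new double[]{0.5, 0.8, 0.9, 0.95})
            System.out.println("p=" + p + "  Tr/Ts=" + normalizedResidence(p));
    }
}
```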

3.6.1.2 SIMULATION MODELING

The following figures show the simulations of normalized turnaround time and average waiting time.

F.8. Simulation Result for Normalized Turnaround Time

F.9. Simulation Result for Waiting Time

Looking at the normalized turnaround time, we can see that the performance of FCFS is very

unfavorable, with one-third of the processes having a normalized turnaround time greater than 10

times the service time; furthermore, these are the shortest processes. On the other hand, the absolute

waiting time is uniform, as is to be expected because scheduling is independent of service time. The

figures show round robin using a quantum of one time unit. Except for the shortest processes, which

execute in less than one quantum, round robin yields a normalized turnaround time of about 5 for all

processes, and treats all fairly. Shortest process next performs better than round robin, except for the

shortest processes. Shortest remaining time, the preemptive version of SPN, performs better than

SPN except for the longest 7% of all processes. We have seen that, among nonpreemptive policies,

FCFS favors long processes and SPN favors short ones.

CODING

System Support

Applet Views

inputFrame.java - Input View

animeFrame.java - Anime View

statsFrame.java - Stats View

Class hierarchy

AlgAnime.java - Driver

Scheduler.java - Scheduling algorithm

process.java - process

GUI.java - GUI listener

packet.java - GUI packet

Scheduling Algorithms

FCFS.java

RR1.java

RR4.java

SPN.java

SRT.java

/*************************************Algorithm Animation

PURPOSE: This is the main applet driver. It coordinates the animation

view, statistics view, and the input view. ***************************************/

import java.awt.*;

import java.applet.*;

import java.util.Vector;

import java.util.Observer;

import java.util.Observable;

public class AlgAnime extends Applet implements Observer {

animeFrame anime;

statsFrame stats;

inputFrame input;

Scheduler lesson;

Vector Q;

/*--------------------------------------------------------Constructor

PURPOSE: To set up all views, scheduling algorithm, and input queue-------------------*/

public void init() {

anime = new animeFrame();

anime.show();

anime.resize(300,300);

stats = new statsFrame();

stats.show();

stats.resize(300,300);

input = new inputFrame(this);

input.show();

input.resize(500,500);

lesson=null;

Q = new Vector(1,1);

}

/*--------------------------------------------------------Update

PURPOSE: To respond to user input via GUIs; overridden version of system event handler

PARAMETERS: references the argument specified by the user and its associated GUI handler-------*/

public void update(Observable obj, Object arg) {

if (arg instanceof String) {

if (((String)arg).equals("pause") && lesson!=null)

lesson.thread.suspend();

else if (((String)arg).equals("resume") && lesson!=null)

lesson.thread.resume();

else if (((String)arg).equals("clear")) {

input.resetGUI();

Q.setSize(0);

if (lesson != null) { // guard: "clear" may be clicked before any run starts

lesson.resetQ();

lesson.thread.stop();

lesson = null;

}

} // reset all vectors

else if (((String)arg).equals("quit")) {

input.hide(); input.dispose();

anime.hide(); anime.dispose();

stats.hide(); stats.dispose();

} // quit applet

} // string event

else if (arg instanceof Packet) {

getParms((Packet)arg);

anime.cleargrid();

int click = anime.setgrid(Q);

if (((Packet)arg).getAlg().equals("FCFS")) {

anime.setTitle("First Come First Serve Scheduling");

lesson = new FCFS (Q, stats, anime,input,click);

} // do FCFS

else if (((Packet)arg).getAlg().equals("RR1")) {

anime.setTitle("Round Robin, q = 1");

lesson = new RR1 (Q, stats, anime,input,click);

} // do RR1

else if (((Packet)arg).getAlg().equals("RR4")) {

anime.setTitle("Round Robin, q = 4");

lesson = new RR4 (Q, stats, anime, input,click);

} // do RR4

else if (((Packet)arg).getAlg().equals("SPN")) {

anime.setTitle("Shortest Process Next");

lesson = new SPN(Q, stats, anime, input,click);

} // do SPN

else if (((Packet)arg).getAlg().equals("SRT")) {

anime.setTitle("Shortest Remaining Time");

lesson = new SRT(Q, stats, anime, input,click);

} // do SRT

}

} // handle event

/*-------------------------------------------------------Get Parms utility

PURPOSE: To sort out input given by user and set up input queue

PARAMETERS: information packet GUI given by user--------------------------------------*/

private void getParms(Packet in) {

String name, t1, t2, t3, mark ="\n";

Integer I = new Integer(0);

int pos, a, s;

boolean empty=false;

t1 = in.getProc();

t2 = in.getArriv();

t3 = in.getServ();

do {

pos = t1.indexOf(mark); // extract process no. from user input

if (pos<0) {

name = t1;

empty = true;

}

else

name = t1.substring(0,pos);

t1 = t1.substring(pos+1);

pos = t2.indexOf(mark); // extract arrival time from user input

if (pos<0) {

a = I.parseInt(t2);

empty = true;

}

else

a = I.parseInt(t2.substring(0,pos));

t2 = t2.substring(pos+1);

pos = t3.indexOf(mark); // extract service time from user input

if (pos<0) {

s = I.parseInt(t3);

empty = true;

}

else

s = I.parseInt(t3.substring(0,pos));

t3 = t3.substring(pos+1);

process temp = new process(name, a, s);

Q.addElement(temp);

} while (!empty);

} // set up Queue

}

/**************************************Input Frame

PURPOSE: This is the input view that prompts user for input in order to

run simulation. ************************************************************/

import java.awt.*;

import java.util.Observable;

public class inputFrame extends Frame {

TextArea proc,arriv,serv;

Panel knobs;

Choice alg;

CheckboxGroup functions;

Checkbox[] fun;

Packet info;

GUI vigil;

String PrevBut;

/*--------------------------------------------------------Constructor

PURPOSE: Generates the input view frame and asks animation view to be ready

PARAMETERS: references the animation view-----------------------------------------------*/

public inputFrame(AlgAnime parent) {

super("Input View");

vigil = new GUI();

vigil.addObserver((java.util.Observer)parent);

String sampleP = "1\n2\n3\n4\n5";

String sampleA = "0\n2\n4\n6\n8";

String sampleS = "3\n6\n4\n5\n2";

proc = new TextArea(150,10);

proc.setEditable(true);

proc.appendText(sampleP);

arriv = new TextArea(150,10);

arriv.setEditable(true);

arriv.appendText(sampleA);

serv = new TextArea(150,10);

serv.setEditable(true);

serv.appendText(sampleS);

alg = new Choice();

alg.addItem("FCFS");

alg.addItem("RR1");

alg.addItem("RR4");

alg.addItem("SPN");

alg.addItem("SRT");

functions = new CheckboxGroup();

fun = new Checkbox[5];

fun[0] = new Checkbox("clear",functions,false);

fun[1] = new Checkbox("run",functions,false);

fun[2] = new Checkbox("pause",functions,false);

fun[3] = new Checkbox("resume",functions,false);

fun[4] = new Checkbox("quit",functions,false);

PrevBut = ""; // init

knobs = new Panel();

knobs.setLayout(new FlowLayout(FlowLayout.CENTER));

knobs.add(alg);

knobs.add(fun[0]);

knobs.add(fun[1]);

knobs.add(fun[2]);

knobs.add(fun[3]);

knobs.add(fun[4]);

Panel labels = new Panel();

labels.setLayout(new BorderLayout());

labels.add("Center", new Label("Arrival time:"));

labels.add("West", new Label("Process name:"));

labels.add("East", new Label("Service time:"));

this.setLayout(new BorderLayout());

this.add("North",labels);

this.add("Center", arriv);

this.add("West",proc);

this.add("East",serv);

this.add("South", knobs);

} // set display

/*--------------------------------------------------------Handle Event

PURPOSE: To handle all events by dialog box------------------------------------------------*/

public boolean handleEvent (Event evtObj) {

if (evtObj.id == Event.WINDOW_DESTROY) {

this.dispose();

return true;

} // destroy button

else if (evtObj.id==Event.ACTION_EVENT)

if (evtObj.target==fun[0]) {

proc.setText("");

arriv.setText("");

serv.setText("");

fun[1].enable();

String cmd = functions.getCurrent().getLabel();

vigil.input(cmd);

return true;

} // handle clear button

else if (evtObj.target==fun[1]) {

fun[1].disable();

info = new Packet(proc.getText(), arriv.getText(), serv.getText(), alg.getSelectedItem());

vigil.input(info);

proc.setEditable(false);

arriv.setEditable(false);

serv.setEditable(false);

return true;

} // handle run button

else if (evtObj.target==fun[2] || evtObj.target==fun[3]) {

String cmd = functions.getCurrent().getLabel();

if (PrevBut.equals(cmd))

return false;

else {

PrevBut = cmd; // stagger for next event

vigil.input(cmd);

return true;

} // balance pause to resume request

} // handle pause/resume buttons

else if (evtObj.target==fun[4]) {

vigil.input("quit");

return true;

} // handle quit option

return false;

} // handle event

/*-----------------------------------------------------Reset input GUI

PURPOSE: To enable more input----------------------------------------------------------------*/

public void resetGUI() {

proc.setEditable(true);

serv.setEditable(true);

arriv.setEditable(true);

fun[1].enable();

}

}

/*************************************Animation Frame

PURPOSE: This is the animation view that displays title of scheduling

algorithm, graphically animates algorithm, and traces clock events.******************/

import java.awt.*;

import java.util.Vector;

public class animeFrame extends Frame {

Canvas board;

TextField statusLine,algTitle;

Graphics gr;

int inc1; // origin in time axis

/*--------------------------------------------------------Constructor

PURPOSE: To generate the animation view frame-------------------------------------------*/

public animeFrame () {

super("Animation View");

board = new Canvas();

statusLine = new TextField(30);

statusLine.setEditable(false);

algTitle = new TextField(30);

algTitle.setEditable(false);

this.setLayout(new BorderLayout());

add("North", algTitle);

add("South", statusLine);

add("Center", board);

} // constructor

/*-------------------------------------------------------- Handle Event

PURPOSE: To handle all events by dialog box------------------------------------------------*/

public boolean handleEvent(Event evtObj) {

if (evtObj.id == Event.WINDOW_DESTROY) {

dispose();

return true;

} // handle destroy button

return super.handleEvent(evtObj);

} // handle window options

/*-------------------------------------------------------- update status line

PURPOSE: To display current interesting event-----------------------------------------------*/

public void upstatus(String txt) {

statusLine.setText(txt);

} // update status string

/*-------------------------------------------------------- set Title

PURPOSE: To set title of algorithm being animated------------------------------------------*/

public void setTitle(String txt) {

algTitle.setText(txt);

} // update title

/*--------------------------------------------------------Draw bar

PURPOSE: To draw a unit block representing time allotted

PARAMETERS: references process, and gets value of current clock time-----------------*/

public void drawbar(process P, int t) {

int x,y;

x=P.getNWcorner().x+P.getUnitLength()*(t-inc1);

y=P.getNWcorner().y;

gr.fillRect(x,y,P.getUnitLength(), P.getBarWidth());

} // draw Unit bar for P

/*-------------------------------------------------------- clear grid

PURPOSE: To reset animation grid--------------------------------------------*/

public void cleargrid() {

gr = board.getGraphics();

this.update(gr); // use system empty paint to "clear" screen

} // clears grid

/*-------------------------------------------------------set grid

PURPOSE: To calculate and draw coordinate axes with their labels

PARAMETERS: references the input queue to extract needed data-------------------------*/

public int setgrid(Vector L) {

process temp = (process)L.firstElement();

int hbuf = temp.getName().length();

inc1 = temp.getArrival();

int inc2 = temp.getService();

for (int j=1; j < L.size(); j++) {

temp = (process)L.elementAt(j);

if (hbuf < temp.getName().length())

hbuf = temp.getName().length(); //max. length of process name

if (inc1 > temp.getArrival())

inc1 = temp.getArrival(); //min. arrival time

inc2 += temp.getService(); //sum of all service times

} // traverse input queue

hbuf = hbuf*5+10; // margin

inc2 += inc1; // offset

Dimension d = board.size();

int hinc = (int)((d.width-10)/(inc2-inc1+2));

int vinc = (int)((d.height-10)/(L.size()+2));

temp.setUnitLength(hinc);

temp.setBarWidth(vinc);

int c=0;

for (int j=inc1; j<=inc2; j++,c++) {

if ((j-inc1)%5==0)

gr.drawString(String.valueOf(j),hbuf+c*hinc,15);

gr.drawLine(hbuf+c*hinc,15,hbuf+c*hinc,20);

} // set horizontal axis

gr.drawLine(hbuf,20,hbuf+(c-1)*hinc,20);

for (int j=0; j<L.size(); j++) {

gr.drawString(((process)L.elementAt(j)).getName(),5,40+j*vinc);

((process)L.elementAt(j)).upNWcorner(hbuf,10+20+j*vinc);

} // set vertical axis

return inc1;

} // draws grid scale

}

/*********************************** Statistics Frame

PURPOSE: This is the statistics view that displays empirical data from

simulated CPU scheduling algorithm. This view is meant as a notebook for

students; therefore, this view is editable. ***************************************/

import java.awt.*;

import java.util.*;

public class statsFrame extends Frame {

TextArea pad;

Vector out;

String P,A,S,F,Tq,Tqs;

/*--------------------------------------------------------Constructor

PURPOSE: To generate the statistics view frame---------------------------------------------*/

public statsFrame() {

super("Statistics View");

this.setLayout(new BorderLayout());

pad = new TextArea(30,30);

pad.setEditable(true);

this.add("Center", pad);

} // set display

/*--------------------------------------------------------Report

PURPOSE: To report the data of a scheduling algorithm

PARAMETERS: references the algorithm's title and finish queue--------------------------*/

public void report(Vector R,String title) {

pad.appendText("\n"+title+"\n\n");

out = R;

display();

} // report statistics to notepad

/*--------------------------------------------------------Handle Event

PURPOSE: To handle all events by dialog box------------------------------------------------*/

public boolean handleEvent (Event evtObj) {

if (evtObj.id == Event.WINDOW_DESTROY) {

this.dispose();

return true;

} // handle destroy event

return super.handleEvent(evtObj); // pass all other events up the chain

}

/*----------------------------------------------------Display

PURPOSE: To append data to the notebook view---------------------------------------------*/

private void display() {

process temp;

P = "Process";

A = "Arrival Time";

S = "Service Time";

F = "Finish Time";

Tq = "Turnaround Time";

Tqs = "Tq/Ts";

buffer(P,A,S,F,Tq,Tqs);

for (int j=0; j<out.size(); j++) {

temp = (process)out.elementAt(j);

P += temp.getName();

A += temp.getArrival();

S += temp.getService();

F += temp.getFinish();

Tq += temp.getTq();

Tqs += temp.getTqs();

buffer(P,A,S,F,Tq,Tqs);

} // get info from each

pad.appendText(P+"\n"+A+"\n"+S+"\n"+F+"\n"+Tq+"\n"+Tqs+"\n");

} // display stats

/*--------------------------------------------------------Buffer

PURPOSE: To buffer white space in order to create columns

PARAMETERS: references data string of process's name, arrival time,

service time, turnaround time, and turnaround ratio----------------------------------------------*/

private void buffer(String p,String a, String s, String f, String tq, String tqs) {

int max = Math.max(P.length(),Math.max(A.length(),Math.max(S.length(),Math.max(F.length(),

Math.max(Tq.length(),Tqs.length())))));

max += 5;

P = space (P,max);

A = space (A,max);

S = space (S,max);

F = space (F,max);

Tq = space (Tq,max);

Tqs = space (Tqs,max);

} // format with buffer spaces, left justified

/*-------------------------------------------------------- Space

PURPOSE: To ensure all columns are of equal length---------------------------------------*/

private String space(String x, int m) {

while (x.length() < m)

x += " ";

return x;

} // pad with spaces

} // stats Frame class

/***********************************CPU Scheduler

PURPOSE: This is the abstract base class for CPU scheduling. It uses the constructs of animation

view to display trace simulation, statistics view to display empirical data about the simulation, and

input view to generate input for the algorithm.*******************/

import java.util.Vector;

abstract public class Scheduler extends Object {

Vector readyQ, finishQ, Q;

int clock;

process P, T;

boolean idle;

public Thread thread;

statsFrame st;

animeFrame an;

inputFrame in;

/*--------------------------------------------------------Constructor

PURPOSE: To ask help from views, set up data, and begin simulation

PARAMETERS: references the input queue, stats and anime views; gets

value of starting clock time.-----------------------------------------------------------------------*/

public Scheduler(Vector q, statsFrame s, animeFrame a, inputFrame i, int c) {

Q = q;

st = s;

an = a;

in = i;

clock = c-1; // stagger for run loop

idle = true;

readyQ = new Vector(1,1);

finishQ = new Vector(1,1);

} // constructor

/*--------------------------------------------------------Process Ready

PURPOSE: To determine if a process is ready

PARAMETERS: gets the value of current clock time; returns ready process if any--------*/

public process processready(int tick) {

for (int j=0; j<Q.size(); j++)

if (((process)(Q.elementAt(j))).getArrival() <= tick)

return (process)Q.elementAt(j);

return null;

} // clock


/*-------------------------------------------------------- Reset Queues
PURPOSE: To reset all data structures for the scheduling algorithms --------------------------*/
public void resetQ() {
    readyQ.setSize(0);
    finishQ.setSize(0);
    Q.setSize(0);
    in.resetGUI();
} // reset all queues

} // Scheduler class
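To make the role of the abstract base class concrete, here is a hedged, self-contained sketch of the kind of loop a subclass might run. It is not the report's code: the class `FcfsSketch`, its method names, and the array-based job representation are all illustrative. It shows non-preemptive first-come-first-served dispatch, the simplest policy a `Scheduler` subclass could implement over its ready queue:

```java
// Illustrative sketch: non-preemptive FCFS over jobs sorted by arrival time.
// Each job i has arrival[i] and service[i]; finish[i] is when it completes.
public class FcfsSketch {

    static int[] finishTimes(int[] arrival, int[] service) {
        int clock = 0;
        int[] finish = new int[arrival.length];
        for (int i = 0; i < arrival.length; i++) {
            clock = Math.max(clock, arrival[i]); // CPU may sit idle until arrival
            clock += service[i];                 // run the job to completion
            finish[i] = clock;
        }
        return finish;
    }

    public static void main(String[] args) {
        int[] f = finishTimes(new int[]{0, 2, 4}, new int[]{3, 3, 1});
        System.out.println(java.util.Arrays.toString(f)); // [3, 6, 7]
    }
}
```

A real subclass would instead pull ready processes from `Q` via `processready()`, tick `clock` once per time unit, and move completed processes to `finishQ`, but the clock arithmetic is the same.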

/****************************************** process
PURPOSE: This is an application-specific, simplified process class (not to be confused with
java.lang.Process). It stores all the data and functionality needed to run the CPU scheduling
algorithm animation. ***********************************************/

import java.awt.Point;

public class process extends Object {

    String name;
    int ArrivalTime, ServiceTime, FinishTime, TimeLeft, Tq;
    double Tqs;
    static int barwidth, unitLength;
    Point NWcornerpt;


/*-------------------------------------------------------- Constructor
PURPOSE: To associate data with a specific process
PARAMETERS: references the name of the process; gets the values of the arrival time and the
service time --------------------------------------------------------*/
public process(String n, int a, int s) {
    name = n;
    ArrivalTime = a;
    ServiceTime = TimeLeft = s;
    NWcornerpt = new Point(0, 0); // dummy init
} // constructor

// interface functions for anime
public Point  getNWcorner()   { return NWcornerpt; }
public int    getBarWidth()   { return barwidth; }
public int    getUnitLength() { return unitLength; }
public int    getArrival()    { return ArrivalTime; }
public int    getTminus()     { return TimeLeft; }
public String getName()       { return name; }
public int    getService()    { return ServiceTime; }
public int    getFinish()     { return FinishTime; }
public double getTq()         { return Tq; }


public double getTqs() { return Tqs; }

// update functions
public void setUnitLength(int x) { unitLength = x; }
public void setBarWidth(int x)   { barwidth = x; }
public void upNWcorner(int x, int y) { NWcornerpt.translate(x, y); }
public void servicing() { TimeLeft--; }

/*-------------------------------------------------------- Report
PURPOSE: To calculate empirical statistics about the scheduled process
PARAMETERS: gets the value of the current clock time, which equals the finish time -----------*/
public void report(int t) {
    FinishTime = t;
    Tq = FinishTime - ArrivalTime;   // turnaround time
    Tqs = (double) Tq / ServiceTime; // normalized turnaround; the cast avoids integer truncation
} // calculate data

} // process class
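The two statistics that `report()` computes are turnaround time (finish minus arrival) and normalized turnaround time (turnaround divided by service time, always at least 1.0 for a completed job). The following is a minimal standalone sketch of that arithmetic; the class and method names are illustrative, not the report's. It also demonstrates why the division must be done in floating point: `Tq / ServiceTime` on two `int`s truncates toward zero.

```java
// Sketch of the statistics in process.report():
//   turnaround Tq = finish - arrival
//   normalized turnaround Tqs = Tq / service (as a double)
public class TurnaroundDemo {

    static double normalizedTurnaround(int arrival, int service, int finish) {
        int tq = finish - arrival;       // turnaround time
        return (double) tq / service;    // cast first, or 10/4 would yield 2, not 2.5
    }

    public static void main(String[] args) {
        // arrives at 0, needs 4 units of service, finishes at 10 -> waited 6 units
        System.out.println(normalizedTurnaround(0, 4, 10)); // 2.5
    }
}
```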


REFERENCES

1. Rajib Mall, Fundamentals of Software Engineering, Prentice-Hall of India Pvt. Ltd., 2007.
2. Software Engineering, http://www.scribd.com/doc/14340261/Software-Engineering-Rajib-Mall/
3. Abraham Silberschatz, Peter Baer Galvin, Greg Gagne, Operating System Concepts, John Wiley & Sons, 2008.
4. William Stallings, Operating Systems: Internals and Design Principles, Prentice Hall, 2007.