
Chapter 4: Processes


Creating an Executable Program

Process

Process – a program in execution.

Related terms: Job, Step, Load Module, Task.

Process execution must progress in sequential fashion.

A process is more than program code; it includes three segments:

Program: code/text.

Data: program variables.

Stack: for procedure calls and parameter passing.

Note:

A program is a passive entity whereas a process is an active entity with a program counter specifying what to do next and a set of associated resources.

All multiprogramming OSs are built around the concept of processes.

Process States

A process can be in one of several possible states:

new: The process is being created but has not yet been admitted to the pool of executable processes by the operating system.

running: Instructions are being executed.

waiting: The process is waiting for some event to occur.

ready: The process is waiting to be assigned to a processor.

terminated: The process has finished execution.

Process Transitions

As a process executes, it changes its state.

Process state transition diagram

The state transition diagram indicates the types of events that lead to each state; the possible transitions are as follows:

Null → New: A new process is created to execute a program. This event occurs for any of the following reasons:

An interactive logon to the system by a user

Created by OS to provide a service on behalf of a user program

Spawned by an existing process

The OS is prepared to take on a new batch job

New → Ready: The OS moves a new process to the ready state when it is prepared to take on an additional process (most systems set some limit on the number of existing processes).

Ready → Running: The OS chooses one of the processes in the ready state and assigns the CPU to it.

Running → Terminated: The process is terminated by the OS if it has completed or is aborted.

Running → Ready: The most common reasons for this transition are:

The running process has used up its time slice.

The running process is preempted because a higher-priority process is in the ready state.

Running → Waiting (Blocked): A process is put into this state when it requests something for which it must wait:

A service that the OS is not ready to perform.

An access to a resource not yet available.

Initiates I/O and must wait for the result.

Waiting for a process to provide input.

Waiting → Ready: A process in the waiting state is moved to the ready state when the event for which it has been waiting occurs.

Ready → Terminated: Not shown on the diagram. In some systems, a parent may terminate a child process at any time; also, when a parent terminates, all of its child processes are terminated.

Blocked → Terminated: Not shown. This transition occurs for the same reasons given above.

Another state, Suspend, can also be included in the model. The operating system may move a process from the blocked state to the suspend state by temporarily swapping it out of memory.
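The five-state model and its legal transitions can be captured in a small lookup table; below is a minimal C sketch (the enum and table names are illustrative, and the Suspend state is omitted):

#include <stdbool.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* allowed[from][to] encodes the arrows of the transition diagram,
   including the Ready/Blocked -> Terminated cases noted above. */
static const bool allowed[5][5] = {
    /* from NEW        */ { false, true,  false, false, false },
    /* from READY      */ { false, false, true,  false, true  },
    /* from RUNNING    */ { false, true,  false, true,  true  },
    /* from WAITING    */ { false, true,  false, false, true  },
    /* from TERMINATED */ { false, false, false, false, false },
};

bool can_transition(enum proc_state from, enum proc_state to)
{
    return allowed[from][to];
}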

Linux Process States

1. TASK_RUNNING

2. TASK_INTERRUPTIBLE

3. TASK_UNINTERRUPTIBLE

4. TASK_ZOMBIE

5. TASK_STOPPED

6. TASK_EXCLUSIVE
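These states can also be observed from user space: the third field of /proc/<pid>/stat on Linux is a single state letter (R running, S/D sleeping, Z zombie, T stopped). A minimal sketch, where print_state is just an illustrative helper name:

#include <stdio.h>
#include <string.h>

/* Print the state letter of a process by reading /proc/<pid>/stat. */
int print_state(int pid)
{
    char path[64], line[512];
    snprintf(path, sizeof path, "/proc/%d/stat", pid);

    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    if (!fgets(line, sizeof line, f)) {
        fclose(f);
        return -1;
    }
    fclose(f);

    /* Per proc(5), the line looks like "pid (comm) state ...";
       the state letter follows the last ')'. */
    char *p = strrchr(line, ')');
    if (p && p[1] == ' ')
        printf("pid %d is in state %c\n", pid, p[2]);
    return 0;
}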

Process Control Block (PCB)

Each process in the operating system is represented by a process control block (PCB), also called a task control block.

Information associated with each process includes:

Process state – new, ready, running, waiting, etc.

Process identification information:

Unique process identifier (PID) – indexes (directly or indirectly) into the process table.

User identifier (UID) – the user who is responsible for the job.

Identifier of the process that created this process (PPID).

Program counter – indicates the next instruction to be executed for this process.

CPU registers – index registers, general-purpose registers, etc., saved so that the process can be resumed correctly after an interrupt occurs.

CPU scheduling information – such as process priority, pointers to scheduling queues, etc.

Memory-management information – base and limit registers, page tables, etc.

Accounting information – amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.

I/O status information – list of I/O devices allocated to this process, list of open files, etc.
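As a data structure, the PCB is simply a record holding the fields above; a minimal C sketch follows (field names and sizes are illustrative, not taken from any particular kernel, whose real structure, e.g. Linux's task_struct, is far larger):

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process identifier                 */
    int             ppid;            /* parent's identifier                */
    int             uid;             /* responsible user                   */
    enum proc_state state;           /* new, ready, running, ...           */
    unsigned long   pc;              /* saved program counter              */
    unsigned long   regs[16];        /* saved CPU registers                */
    int             priority;        /* CPU-scheduling information         */
    void           *page_table;      /* memory-management information      */
    long            cpu_time_used;   /* accounting information             */
    int             open_files[16];  /* I/O status: open file descriptors  */
    struct pcb     *next;            /* link used by the scheduling queues */
};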

Process Scheduling Queues

Job queue – when a process enters the system, it is put in the job queue.

Ready queue – holds the set of all processes residing in main memory that are ready and waiting to execute.

Device queues – there may be many processes in the system requesting I/O. Since only one I/O request can be serviced at a time for a particular device, a process needing I/O may have to wait. The list of processes waiting for an I/O device is kept in a device queue for that particular device.
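Each of these queues is typically just a linked list of PCBs; a minimal sketch (struct and function names are illustrative), using a simplified PCB with the link field shown earlier:

#include <stddef.h>

struct pcb { int pid; struct pcb *next; /* ...other PCB fields... */ };

/* A FIFO queue of PCBs, usable as the ready queue or a device queue. */
struct queue { struct pcb *head, *tail; };

void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;      /* append behind the current tail */
    else
        q->head = p;            /* queue was empty */
    q->tail = p;
}

struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;     /* queue became empty */
    }
    return p;
}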

Schedulers

A process may migrate between the various queues. The OS must select, for scheduling purposes, processes from these queues.

Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue. It is invoked very infrequently (seconds, minutes) and may therefore be slow.

It controls the degree of multiprogramming.

Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU to it.

The short-term scheduler is invoked very frequently (milliseconds) and must therefore be fast.

Medium-term scheduler – selects which partially executed process, previously swapped out, should be brought back into the ready queue.

Process Context

Task Switching

Refers to operating systems or operating environments that enable you to switch from one program to another without losing your spot in the first program. Many utilities are available that add task switching to DOS systems.

Note that task switching is not the same as multitasking. In multitasking, the CPU switches back and forth quickly between programs, giving the appearance that all programs are running simultaneously. In task switching, the CPU does not switch back and forth, but executes only one program at a time. Task switching does allow you to switch smoothly from one program to another.

Task switching is sometimes called context switching.

Process Switch

A process switch may occur whenever the OS has gained control of the CPU, i.e., when one of the following occurs:

Supervisor Call

Explicit request by the program (e.g., a file open). The process will probably be blocked.

Trap

An error resulting from the last instruction; it may cause the process to be moved to the Exit (terminated) state.

Interrupt

The cause is external to the execution of the current instruction. Control is transferred to the interrupt handler.

Context Switching

When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process; this is called a context switch.

The time it takes is dependent on hardware support.

Context-switch time is overhead; the system does no useful work while switching.

Steps in Context Switching

Save the context of the processor, including the program counter and other registers.

Update the PCB of the running process with its new state and other associated information.

Move the PCB to the appropriate queue (ready, blocked, etc.).

Select another process for execution.

Update PCB of the selected process.

Restore CPU context from that of the selected process.
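The save and restore steps can be mimicked in user space with the POSIX <ucontext.h> routines, which capture and switch register contexts much as a dispatcher does. This is only an analogy (a real process switch also performs the PCB and queue updates listed above), and the API, though old, is still available on most UNIX-like systems:

#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;

static void task(void)
{
    printf("task: running on its own stack with its own registers\n");
    /* returning resumes main_ctx because of uc_link below */
}

int main(void)
{
    char *stack = malloc(64 * 1024);

    getcontext(&task_ctx);                 /* capture a starting context      */
    task_ctx.uc_stack.ss_sp   = stack;     /* give the task a private stack   */
    task_ctx.uc_stack.ss_size = 64 * 1024;
    task_ctx.uc_link          = &main_ctx; /* where to resume when task ends  */
    makecontext(&task_ctx, task, 0);

    printf("main: saving my context, loading the task's\n");
    swapcontext(&main_ctx, &task_ctx);     /* save PC/registers, restore task */
    printf("main: resumed after the switch back\n");

    free(stack);
    return 0;
}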

Operations on Processes

OS should be able to create and delete processes dynamically.

Process Creation

When the OS or a user process decides to create a new process, it can proceed as follows:

Assign a new process identifier and add its entry to the primary process table.

Allocate space for the process (program + data) and the user stack. The amount of space required can be set to default values depending on the process type. If a user process spawns a new process, the parent process can pass these values to the OS.

Create process control block.

Set the appropriate linkage, i.e., link the new PCB into the ready queue.

Create other necessary data structures (e.g. to store accounting information).
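A hedged sketch tying these steps together; create_process, next_pid, and ready_queue are invented names, and the PCB and queue are the simplified ones sketched earlier:

#include <stdlib.h>

struct pcb { int pid, ppid, state; struct pcb *next; };
struct queue { struct pcb *head, *tail; };

static int next_pid = 1;            /* source of new process identifiers */
static struct queue ready_queue;

struct pcb *create_process(int ppid)
{
    struct pcb *p = calloc(1, sizeof *p);   /* allocate space / build the PCB  */
    if (!p)
        return NULL;
    p->pid   = next_pid++;                  /* assign a new process identifier */
    p->ppid  = ppid;
    p->state = 1;                           /* READY */

    /* link the new PCB into the ready queue */
    if (ready_queue.tail)
        ready_queue.tail->next = p;
    else
        ready_queue.head = p;
    ready_queue.tail = p;
    return p;
}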

A parent process creates child processes, which in turn may create other processes, forming a tree of processes.

Resource-sharing possibilities:

Parent and children share all resources.

Children share a subset of the parent's resources.

Parent and child share no resources.

Execution possibilities

Parent and children execute concurrently.

Parent waits until children terminate.

Address space possibilities:

Child is a duplicate of the parent.

Child has a program loaded into it.

UNIX examples

In Unix, every process has a unique process identifier (an integer).

The fork system call creates a new process. The child process is a copy of the address space of the parent process. Both parent and child continue execution at the instruction after the fork.

The exec system call is typically used after a fork to replace the process's memory space with a new program.

The wait system call moves the parent process off the ready queue until the termination of the child.

Process Termination

A process terminates when it executes its last statement and asks the operating system to delete it using the exit system call. At that time, the process may return output data to its parent (collected via wait).

Process resources are deallocated by the operating system.

A parent may terminate the execution of its children via an appropriate system call (e.g., abort). A parent may terminate one of its children for the following reasons:

Child has exceeded its allocated resources.

Task assigned to child is no longer required.

Parent is exiting.

Some operating systems do not allow a child to continue if its parent terminates; in such systems all children are also terminated (cascading termination).

A Linux Example

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

void ChildProcess();

int main()
{
    int pid, cid, r;

    pid = getpid();
    r = fork();                  /* create a new process         */
    if (r == 0)                  /* r == 0 -> this is the child  */
    {
        cid = getpid();          /* get the child's process ID   */
        printf("I am the child with cid = %d of pid = %d\n", cid, pid);
        ChildProcess();
        exit(0);
    }
    else
    {
        printf("Parent waiting for the child...\n");
        wait(NULL);
        printf("Child finished, parent quitting too!\n");
    }
    return 0;
}

void ChildProcess()
{
    int i;

    for (i = 0; i < 5; i++)
    {
        printf("%d ..\n", i);
        sleep(1);
    }
}
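The example above uses fork and wait but not exec. The sketch below (assuming a POSIX system; ls is used only as a convenient program to load) shows how exec replaces the child's memory image with a new program:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                    /* create a child process          */

    if (pid < 0) {                         /* fork failed                     */
        perror("fork");
        return 1;
    }
    if (pid == 0) {                        /* child: load a new program image */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                  /* reached only if exec failed     */
        exit(1);
    }
    wait(NULL);                            /* parent: wait for the child      */
    printf("Child finished, parent continuing.\n");
    return 0;
}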

Cooperating Processes

The concurrent processes executing in the OS may be either independent or cooperating.

An independent process cannot affect or be affected by the execution of another process; it does not share data with any other process. A cooperating process can affect or be affected by the execution of another process; it shares data with other processes.

Advantages of process cooperation are:

Information sharing

Computation speed-up

Modularity

Convenience

Producer-Consumer Problem: An Example of Cooperating Processes

This is a common paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.

unbounded-buffer: places no practical limit on the size of the buffer.

bounded-buffer: assumes that there is a fixed buffer size.

Shared data

#define BUFFER_SIZE 10

typedef struct {

. . .

} item;

item buffer[BUFFER_SIZE];

int in = 0;

int out = 0;

Producer Process

item nextProduced;

while (1) {

/* produce an item in nextProduced */

while (((in + 1) % BUFFER_SIZE) == out)

; /* do nothing */

buffer[in] = nextProduced;

in = (in + 1) % BUFFER_SIZE;

}

Consumer process

item nextConsumed;

while (1) {

while (in == out)

; /* do nothing */

nextConsumed = buffer[out];

out = (out + 1) % BUFFER_SIZE;

/* consume the item in nextConsumed */

}
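A runnable sketch of the same bounded buffer with two cooperating processes: the buffer and the in/out indices are placed in memory shared via mmap, so the parent (producer) and the forked child (consumer) see the same data. The busy-waiting from the notes is kept for simplicity; real code would use proper synchronization such as semaphores.

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define BUFFER_SIZE 10

struct shared {
    volatile int buffer[BUFFER_SIZE];
    volatile int in, out;
};

int main(void)
{
    /* anonymous shared mapping: visible to both parent and child after fork */
    struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (s == MAP_FAILED)
        return 1;
    s->in = s->out = 0;

    if (fork() == 0) {                        /* child: consumer */
        for (int i = 0; i < 5; i++) {
            while (s->in == s->out)
                ;                             /* buffer empty: do nothing */
            int item = s->buffer[s->out];
            s->out = (s->out + 1) % BUFFER_SIZE;
            printf("consumed %d\n", item);
        }
        return 0;
    }

    for (int i = 0; i < 5; i++) {             /* parent: producer */
        while ((s->in + 1) % BUFFER_SIZE == s->out)
            ;                                 /* buffer full: do nothing */
        s->buffer[s->in] = i;                 /* produce the item i */
        s->in = (s->in + 1) % BUFFER_SIZE;
    }
    wait(NULL);                               /* wait for the consumer */
    return 0;
}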

Interprocess Communication (IPC)

IPC is a mechanism for processes to communicate and to synchronize their actions. In a message system, processes communicate with each other without resorting to shared variables.

IPC facility provides two operations:

send(message) – the message size may be fixed or variable

receive(message)

If P and Q wish to communicate, they need to:

Establish a communication link between them

Exchange messages via send/receive

Implementation of communication link

Physical (e.g., shared memory, hardware bus)

Logical (e.g., logical properties)
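On UNIX systems a pipe is one of the simplest communication links; a minimal sketch in which the parent performs the send and the child the receive:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[64];

    pipe(fd);                                  /* establish the link        */

    if (fork() == 0) {                         /* child: receive(message)   */
        close(fd[1]);                          /* child only reads          */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        return 0;
    }

    close(fd[0]);                              /* parent only writes        */
    const char *msg = "hello from parent";     /* send(message)             */
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}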

Threads

A thread, also called a lightweight process (LWP), is the basic unit of CPU utilization.

It has its own program counter, a register set, and stack space.

It shares with its peer threads the code section, data section, and OS resources such as open files and signals; collectively these form a task.

The idea of a thread is that a process has five fundamental parts: code ("text"), data, stack, file I/O, and signal tables. "Heavy-weight processes" (HWPs) have a significant amount of overhead when switching: all the tables have to be flushed from the processor for each task switch. Also, the only way to achieve shared information between HWPs is through pipes and "shared memory". If a HWP spawns a child HWP using fork(), the only part that is shared is the text.

Threads reduce overhead by sharing fundamental parts. By sharing these parts, switching happens much more frequently and efficiently. Also, sharing information is not so "difficult" anymore: everything can be shared.
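A minimal POSIX threads (pthreads) sketch of this sharing: both threads update the same global counter (the shared data section) while their loop variables live on private stacks. Compile with the -pthread flag.

#include <pthread.h>
#include <stdio.h>

int counter = 0;                                   /* shared data section  */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* protects the counter */

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++) {               /* i lives on this thread's stack */
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("counter = %d (both threads updated the same variable)\n", counter);
    return 0;
}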

User-Level and Kernel-Level Threads

There are two types of threads: user-level and kernel-level.

User-level threads avoid the kernel and manage the thread tables themselves.

These threads are implemented in user-level libraries rather than via system calls.

Often this is called "cooperative multitasking" where the task defines a set of routines that get "switched to" by manipulating the stack pointer.

Typically each thread "gives-up" the CPU by calling an explicit switch, sending a signal or doing an operation that involves the switcher. Also, a timer signal can force switches.

User threads typically can switch faster than kernel threads.

Thread States

Threads can be in one of several states: ready, blocked, running, or terminated.

Like processes, threads share the CPU, and only one thread at a time is in the running state.

What kinds of things should be threaded? If you are a programmer and would like to take advantage of multithreading, the natural question is what parts of the program should/ should not be threaded. Here are a few rules of thumb (if you say "yes" to these, have fun!):

Are there groups of lengthy operations that don't necessarily depend on other processing (like painting a window, printing a document, responding to a mouse-click, calculating a spreadsheet column, signal handling, etc.)?

Will there be few locks on data (the amount of shared data is identifiable and "small")?

Are you prepared to worry about locking (mutually excluding data regions from other threads), deadlocks (a condition where two threads have each locked data that the other is trying to get), and race conditions (a nasty, intractable problem where data is not locked properly and gets corrupted through threaded reads and writes)?

Could the task be broken into various "responsibilities"? E.g. could one thread handle the signals, another handle GUI stuff, etc.?
