Operating systems

Page 1: Operating systems

Operating System Engineering

Red Sea University – Engineering Faculty

Course Title: Operating Systems

By: Lecturer Elmustafa Sayed Ali Ahmed, Electrical and Electronics Engineering Dept.

Page 2: Operating systems

Operating System Engineering

Red Sea University – Engineering Faculty

2

Course Plan

Introduction: introduction to operating systems and types of systems.

Computer-System Structures: computer system operation, storage, and input/output device structure.

Operating-System Structures: system components, programs, and structure.

Processes: the concept of processes, scheduling, and cooperating processes.

Threads: threading issues and multithreading models.

CPU Scheduling: basic scheduling concepts, algorithms, and multiple-processor scheduling.

Deadlocks: deadlock characterization, prevention, and detection.

Memory Management: background, segmentation, and paging.

Virtual Memory: demand paging and page replacement.

Page 3: Operating systems


(BASIC ISSUES/ Introduction);

What is an Operating System?

Mainframe Systems.

Desktop Systems.

Multiprocessor Systems.

Distributed Systems.

Clustered System.

Real-Time Systems.

Handheld Systems.

Computing Environments.

What is an Operating System?

- An operating system (OS) is the interface between the user and the hardware. It acts as an intermediary between a user of a computer and the computer hardware.

- The goals of an operating system are:

Execute user programs and make solving user problems easier.

Make the computer system convenient to use.

Use the computer hardware in an efficient manner.

Computer system components

- The computer system components are divided into:

Hardware: provides basic computing resources (CPU, memory, I/O devices).

Operating system: controls and coordinates the use of the hardware among the various application programs for the various users.

Page 4: Operating systems


Applications programs: define the ways in which the system resources are used to solve the computing problems of the users (compilers, database systems, video games, business programs).

Users: (people, machines, other computers).

(Computer system components)

Operating System Definitions

- An operating system can be described from three viewpoints:

Resource allocator: manages and allocates resources.

Control program: controls the execution of user programs and

operations of I/O devices.

Kernel: the one program running at all times (all else being

application programs).

Page 5: Operating systems


Mainframe Systems

- Mainframe systems reduce setup time by batching similar jobs and automatically transferring control from one job to another. They were the first rudimentary operating systems.

Key features are:

Initial control in the monitor.

Control transfers to the job.

When the job completes, control transfers back to the monitor.

(Simple Batch System)

For multi-programmed batch systems, several jobs are kept in main memory at the same time, and the CPU is multiplexed among them.

Features of an operating system needed for multiprogramming are:

I/O routine supplied by the system.

Page 6: Operating systems


Memory management: the system must allocate memory to several jobs.

CPU scheduling: the system must choose among several jobs ready to run.

Allocation of devices.

(Multi- programmed Batch Systems)

Desktop Systems

A desktop system is a personal computer: a computer system dedicated to a single user.

The input and output devices used are I/O devices such as keyboards, mice, display screens, and small printers.

This system offers user convenience and responsiveness.

It can adopt technology developed for larger operating systems, and does not need advanced CPU utilization or protection features.

Page 7: Operating systems


It can run under different operating systems (Windows versions, Mac OS, UNIX, and Linux).

Parallel Systems

Parallel systems are based on multiprocessor systems with more than one CPU in close communication.

They are known as tightly coupled systems because the processors share memory and a clock; communication usually takes place through the shared memory.

Parallel systems are divided into two types:

Symmetric multiprocessing (SMP)

Each processor runs an identical copy of the operating system.

Many processes can run at once without performance deterioration.

Most modern operating systems support SMP.

Asymmetric multiprocessing (AMP)

Each processor is assigned a specific task; a master processor schedules and allocates work to slave processors.

The advantages of parallel systems are:

Increased throughput.

Economical.

Increased reliability.

Page 8: Operating systems


(Symmetric Multiprocessing Architecture)

Distributed Systems

Distributed systems distribute the computation among several physical processors and require networking infrastructure.

They are known as loosely coupled systems because each processor has its own local memory; processors communicate with one another through various communication lines, such as high-speed buses or telephone lines.

The applications of distributed systems are:

Local area networks (LAN) or wide area networks (WAN).

They may be either client-server or peer-to-peer systems.

The advantages of distributed systems are:

Resource sharing.

Computation speed-up (load sharing).

Reliability.

Communications.

(General Structure of Client-Server)

Page 9: Operating systems


Clustered Systems

Clustering allows two or more systems to share storage, providing high reliability. There are two types:

Asymmetric clustering: one server runs the application while the other servers stand by.

Symmetric clustering: all N hosts run the application.

Real-Time Systems

A real-time system is often used as a control device in a dedicated application such as controlling scientific experiments, medical imaging systems, industrial control systems, and some display systems.

Real-Time systems may be either hard or soft real-time:

Hard real-time:

Secondary storage limited or absent; data stored in short-term memory or read-only memory (ROM).

Conflicts with time-sharing systems; not supported by general-purpose operating systems.

Soft real-time:

Limited utility in industrial control of robotics

Useful in applications (multimedia, virtual reality) requiring

advanced operating-system features.

Page 10: Operating systems


Handheld Systems

Handheld systems are small systems compared with the systems above.

Examples: Personal Digital Assistants (PDAs) and cellular telephones.

Features are: limited memory, slow processors, and small display screens.

(BASIC ISSUES/ Computer-System Structures);

Computer System Operation.

I/O Structure.

Storage Structure.

Storage Hierarchy.

Hardware Protection.

General System Architecture.

Computer System Operation

In a computer system, the I/O devices and the CPU can execute concurrently. Each device controller is in charge of a particular device type and has a local buffer.

The CPU moves data from/to main memory to/from the local buffers, and the device controller informs the CPU that it has finished its operation by causing an interrupt.

Page 11: Operating systems


(Computer-System Architecture)

An interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines.

The interrupt architecture must save the address of the interrupted instruction, and incoming interrupts are disabled while another interrupt is being processed, to prevent a lost interrupt.

An interrupt may be a hardware or a software interrupt. Hardware may trigger an interrupt at any time by sending a signal to the CPU by way of the system bus. Software triggers an interrupt by executing a special operation called a system call.

(Interrupt system)

Page 12: Operating systems


Interrupt Handling

An operating system is interrupt driven. The operating system preserves the state of the CPU by storing the registers and the program counter.

Separate segments of operating-system code determine what action should be taken for each type of interrupt. The system determines which type of interrupt has occurred by:

Polling: the CPU periodically checks each device to see if it needs service. Polling is like picking up your phone every few seconds to see if you have a call.

Vectored interrupt system: the interrupting device identifies itself with a vector number, so the CPU can index the interrupt vector and jump directly to the correct handler without asking each device in turn.
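The two identification schemes above can be contrasted in a short sketch. This is illustrative only, not real driver code; the device count, the `pending` array, and the handler names are all invented for the example.

```c
/* Illustrative sketch: two ways the OS can identify which device
 * raised an interrupt. All names are invented for the example. */
#include <assert.h>

#define NDEV 4

static int pending[NDEV];   /* 1 if device i is requesting service  */
static int serviced[NDEV];  /* records which handler actually ran   */

static void handle(int dev) { serviced[dev] = 1; }

/* Polling: the CPU asks every device in turn. Cost grows with NDEV. */
int dispatch_by_polling(void) {
    for (int i = 0; i < NDEV; i++)
        if (pending[i]) { handle(i); pending[i] = 0; return i; }
    return -1;              /* no requester found: spurious interrupt */
}

/* Vectored: the interrupting device supplies its vector number, and
 * the CPU indexes a table of handler addresses directly. */
static void (*vector_table[NDEV])(int) = { handle, handle, handle, handle };

int dispatch_by_vector(int vector) {
    vector_table[vector](vector);  /* jump straight to the right routine */
    return vector;
}
```

Note how the polling path loops over every device while the vectored path does a single table lookup; that constant-time dispatch is the point of an interrupt vector.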

Input/output (I/O) Structure

The CPU and the device controllers all use a common bus for communication. The CPU starts an I/O operation, and in the simplest scheme continuously polls (checks) that device until the I/O operation finishes.

A device controller contains registers for communication with its device:

Input register, output register: for data.

Control register: to tell it what to do.

Status register: to see what it's done.

The operation of input and output is categorized as synchronous and asynchronous I/O.

In synchronous I/O, after I/O starts, control returns to the user program only upon I/O completion. A wait instruction idles the CPU until the

Page 13: Operating systems


next interrupt; alternatively a wait loop spins, contending for memory access. At most one I/O request is outstanding at a time, so there is no simultaneous I/O processing.

In asynchronous I/O, after I/O starts, control returns to the user program without waiting for I/O completion. The key features of this type are: a system call lets the user request that the operating system allow it to wait for I/O, and a device-status table contains an entry for each I/O device indicating its type, address, and state. The operating system indexes into the I/O device table to determine device status and modifies the table entry to record the interrupt.

For high-speed I/O devices able to transmit information at close to memory speeds, a Direct Memory Access (DMA) structure is used.

In DMA, the device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention. Only one interrupt is generated per block, rather than one interrupt per byte.

(Two I/O Methods)

Page 14: Operating systems


(Device-Status Table)

Storage Structure

Storage refers to the memory used in computer systems to store all data for computation; it is arranged in a hierarchy depending on speed, capacity, cost, and volatility.

Main memory is the only large storage medium that the CPU can access directly. Secondary storage is an extension of main memory that provides large nonvolatile storage capacity.

Magnetic disks are rigid metal or glass platters covered with magnetic recording material. The disk surface is logically divided into tracks, which are subdivided into sectors. The disk controller determines the logical interaction between the device and the computer.

Page 15: Operating systems


(Moving-Head Disk Mechanism)

Caching is the use of high-speed memory to hold recently accessed data, and it requires a cache management policy.

Caching introduces another level in the storage hierarchy. This requires data that is simultaneously stored in more than one level to be kept consistent.

(Storage-Device Hierarchy)

Page 16: Operating systems


Hardware Protection

Hardware protection deals with dual-mode operation, I/O protection, memory protection, and CPU protection.

Dual-Mode Operation: sharing system resources requires the operating system to ensure that an incorrect program cannot cause other programs to execute incorrectly. Hardware support is provided to differentiate between at least two modes of operation.

User mode: execution done on behalf of a user.

Monitor mode (also kernel mode or system mode): execution

done on behalf of operating system.

A mode bit is added to the computer hardware to indicate the current mode: monitor (0) or user (1). When an interrupt or fault occurs, the hardware switches to monitor mode.

(Privileged instructions can be issued only in monitor mode.)

I/O Protection: all I/O instructions are privileged instructions. The system must ensure that a user program can never gain control of the computer in monitor mode (e.g., a user program that, as part of its execution, stores a new address in the interrupt vector).

Page 17: Operating systems


(Use of a System Call to Perform I/O)

Memory Protection: the system must provide memory protection at least for the interrupt vector and the interrupt service routines. To provide memory protection, two registers determine the range of legal addresses a program may access:

Base register: holds the smallest legal physical memory address.

Limit register: contains the size of the range.
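The check the hardware performs on every user-mode memory reference can be stated in a few lines. This is a minimal sketch with addresses modeled as plain integers; the function name and the sample register values are invented for the illustration.

```c
/* Minimal sketch of the base/limit check performed in hardware on
 * every memory reference in user mode. */
#include <assert.h>
#include <stdint.h>

/* Legal iff base <= addr < base + limit; otherwise the hardware
 * traps to the operating system (addressing error). */
int address_is_legal(uint32_t addr, uint32_t base, uint32_t limit) {
    return addr >= base && addr < base + limit;
}
```

For example, with base 300040 and limit 120900, address 300140 is legal, while 300039 (below base) and 420940 (at base + limit) both trap.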

Page 18: Operating systems


(Use of a Base and Limit Register)

CPU Protection: a timer interrupts the computer after a specified period to ensure that the operating system maintains control.

The timer is decremented every clock tick; when the timer reaches the value 0, an interrupt occurs.

The timer is commonly used to implement time sharing, and also to compute the current time.
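The timer mechanism above can be simulated in a few lines. This is a sketch, not real hardware behavior; the function names and the tick model are invented for the example.

```c
/* Sketch of the CPU-protection timer: the kernel loads a count,
 * hardware decrements it on each clock tick, and an interrupt fires
 * when it reaches zero so the operating system regains control. */
#include <assert.h>

static int timer;               /* current countdown value            */
static int interrupts_raised;   /* how many timer interrupts occurred */

void load_timer(int ticks) { timer = ticks; }

void clock_tick(void) {
    if (--timer == 0)
        interrupts_raised++;    /* control returns to the OS here */
}
```

Loading the timer with 3 and delivering three ticks raises exactly one interrupt, which is how the OS bounds how long a user program can run uninterrupted.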

General System Architecture

General system architectures of computing include Local Area Networks (LAN) and Wide Area Networks (WAN).

Local Area Network (LAN): two groups of processors may communicate with each other by using a gateway.

Wide Area Network (WAN): a network of processors connected via a global network using communication processors.

Page 19: Operating systems


(Local Area Network Structure)

(Wide Area Network Structure)

Page 20: Operating systems


(BASIC ISSUES/ Operating-System Structures);

System Components:

Process Management.

Main Memory Management.

File Management.

I/O System Management.

Secondary-Storage Management.

Networking.

Protection System.

Command-Interpreter System.

Operating System Services.

System Calls.

System Programs.

System Structure.

Virtual Machines.

System Design and Implementation.

System Generation.

System Components (Process Management)

A process is a program in execution; it needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task.

The operating system is responsible for the following activities in

connection with process management:

Process creation and deletion.

Process suspension and resumption.

Provision of mechanisms for:

Process synchronization.

Process communication.

Page 21: Operating systems


System Components (Main-Memory Management)

Memory is a large array of words or bytes, each with its own

address. It is a repository of quickly accessible data shared by the

CPU and I/O devices.

Main memory is a volatile storage device. It loses its contents in

the case of system failure.

The operating system is responsible for the following activities in

connections with memory management:

Keep track of which parts of memory are currently being

used and by whom.

Decide which processes to load when memory space

becomes available.

Allocate and de-allocate memory space as needed.

System Components (File Management)

A file is a collection of related information defined by its creator.

Commonly, files represent programs (both source and object

forms) and data.

The operating system is responsible for the following activities in

connections with file management:

File creation and deletion.

Directory creation and deletion.

Support of primitives for manipulating files and directories.

Mapping files onto secondary storage.

File backup on stable (nonvolatile) storage media.

Page 22: Operating systems


System Components (I/O System Management)

The I/O system consists of:

A buffer-caching system.

A general device-driver interface.

Drivers for specific hardware devices.

System Components (Secondary-Storage Management)

Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently, the computer system must provide secondary storage to back up main memory.

Most modern computer systems use disks as the principal online storage medium, for both programs and data.

The operating system is responsible for the following activities in

connection with disk management:

Free space management.

Storage allocation.

Disk scheduling.

System Components (Networking-Distributed Systems)

A distributed system is a collection of processors that do not share memory or a clock; each processor has its own local memory.

The processors in the system are connected through a communication network, and communication takes place using a protocol.

Page 23: Operating systems


A distributed system provides user access to various system

resources. Access to a shared resource allows:

Computation speed-up.

Increased data availability.

Enhanced reliability.

System Components (Protection System)

Protection refers to a mechanism for controlling access by

programs, processes, or users to both system and user resources.

The protection mechanism must:

Distinguish between authorized and unauthorized usage.

Specify the controls to be imposed.

Provide a means of enforcement.

System Components (Command-Interpreter System)

Many commands are given to the operating system by control

statements which deal with:

Process creation and management.

I/O handling.

Secondary-storage management.

Main-memory management.

File-system access.

Protection.

Networking.

The program that reads and interprets control statements, used to get and execute the next command statement, is called variously:

Page 24: Operating systems


Command-line interpreter.

Shell (in UNIX).

Operating System Services

Operating system services consist of program execution, I/O operations, file-system manipulation, communications, error detection, and additional operating-system functions.

Program execution: system capability to load a program into

memory and to run it.

I/O operations: since user programs cannot execute I/O operations

directly, the operating system must provide some means to perform

I/O.

File-system manipulation: program capability to read, write,

create, and delete files.

Communications: exchange of information between processes

executing either on the same computer or on different systems tied

together by a network. Implemented via shared memory or

message passing.

Error detection: ensure correct computing by detecting errors in

the CPU and memory hardware, in I/O devices, or in user

programs.

Page 25: Operating systems


Additional functions exist not for helping the user, but rather for

ensuring efficient system operations.

Resource allocation: allocating resources to multiple

users or multiple jobs running at the same time.

Accounting: keep track of and record which users use

how much and what kinds of computer resources for

account billing or for accumulating usage statistics.

Protection: ensuring that all access to system resources

is controlled.

System Calls

System calls provide the interface between a running program and the operating system. They are generally available as assembly-language instructions. Languages defined to replace assembly language for systems programming allow system calls to be made directly (e.g., C, C++).

Three general methods are used to pass parameters between a running program and the operating system:

Pass parameters in registers.

Store the parameters in a table in memory, and pass the table address as a parameter in a register.

Push (store) the parameters onto the stack from the program, and have the operating system pop them off the stack.
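The second method, passing parameters as a table, can be sketched as follows. This is a user-space illustration only: `struct params`, its fields, and `sys_write_block` are invented stand-ins, not a real kernel interface.

```c
/* Sketch of the "table in memory" method: the caller fills a
 * parameter block and passes only its address (as if in a register)
 * to the kernel-side routine. All names are invented. */
#include <assert.h>
#include <stddef.h>

struct params {        /* the table built by the user program */
    int  fd;
    const char *buf;
    size_t len;
};

/* Stand-in for the kernel side: it reads all parameters through the
 * single pointer it received, rejecting obviously bad requests. */
size_t sys_write_block(const struct params *p) {
    return (p->fd >= 0 && p->buf != NULL) ? p->len : 0;
}
```

Only one value, the table's address, crosses the user/kernel boundary, which is why this method works even when a call has more parameters than the machine has argument registers.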

Page 26: Operating systems


(Passing of Parameters as A Table)

System calls are divided into five categories:

Process control.

File management.

Device management.

Information maintenance.

Communications.

(MS-DOS Execution)

Page 27: Operating systems


(Communication Models)

(UNIX Running Multiple Programs)

System Programs and structures

System programs provide a convenient environment for program development and execution. They can be divided into:

File manipulation.

Status information.

File modification.

Programming language support.

Program loading and execution.

Communications.

Application programs.

Most users' view of the operating system is defined by the system programs, not the actual system calls.

MS-DOS System Structure

MS-DOS was written to provide the most functionality in the least space; it is not divided into modules.

Page 28: Operating systems


Although MS-DOS has some structure, its interfaces and levels of

functionality are not well separated.

(MS-DOS Layer Structure)

UNIX System Structure

UNIX – limited by hardware functionality, the original UNIX

operating system had limited structuring. The UNIX OS consists of

two separable parts:

Systems programs

The kernel:

Consists of everything below the system-call interface and above

the physical hardware.

Provides the file system, CPU scheduling, memory management,

and other operating-system functions; a large number of functions

for one level.

Page 29: Operating systems


(UNIX System Structure)

Layered Approach

The operating system is divided into a number of layers (levels),

each built on top of lower layers. The bottom layer (layer 0), is the

hardware; the highest (layer N) is the user interface.

With modularity, layers are selected such that each uses functions

(operations) and services of only lower-level layers.

(OS/2 Layer Structure)

Page 30: Operating systems


Microkernel System Structure

The microkernel approach moves as much as possible from the kernel into "user" space. Communication between user modules takes place using message passing.

Benefits:

Easier to extend a microkernel.

Easier to port the operating system to new architectures.

More reliable (less code is running in kernel mode).

More secure.

(Windows NT Client-Server Structure)

Virtual Machines

A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the operating system kernel as though they were all hardware, and it provides an interface identical to the underlying bare hardware.

The operating system creates the illusion of multiple processes,

each executing on its own processor with its own (virtual)

memory.

The resources of the physical computer are shared to create the

virtual machines:

Page 31: Operating systems


CPU scheduling can create the appearance that users have

their own processor.

Spooling and a file system can provide virtual card readers

and virtual line printers.

A normal user time-sharing terminal serves as the virtual machine operator's console.

(System Models)

The advantages and disadvantages of Virtual Machines are:

The virtual-machine concept provides complete protection of

system resources since each virtual machine is isolated from all

other virtual machines. This isolation, however, permits no direct

sharing of resources.

A virtual-machine system is a perfect vehicle for operating systems

research and development. System development is done on the

virtual machine, instead of on a physical machine and so does not

disrupt normal system operation.

Page 32: Operating systems


The virtual machine concept is difficult to implement due to the

effort required to provide an exact duplicate to the underlying

machine.

Java Virtual Machine

Compiled Java programs are platform-neutral byte codes executed by a Java Virtual Machine (JVM).

JVM consists of:

Class loader.

Class verifier.

Runtime interpreter.

System Design Goals and System Implementation

User goals: operating system should be convenient to use, easy to

learn, reliable, safe, and fast.

System goals: operating system should be easy to design,

implement, and maintain, as well as flexible, reliable, error-free,

and efficient.

Mechanisms determine how to do something; policies decide what will be done. The separation of policy from mechanism is a very important principle: it allows maximum flexibility if policy decisions are to be changed later.

Traditionally written in assembly language, operating systems can

now be written in higher-level languages. Code written in a high-

level language:

Page 33: Operating systems


Can be written faster.

Is more compact.

Is easier to understand and debug.

An operating system is far easier to port (move to some other

hardware) if it is written in a high-level language.

System Generation

Operating systems are designed to run on any of a class of

machines; the system must be configured for each specific

computer site.

A system generation program obtains information concerning the specific configuration of the hardware system. Two related definitions:

Booting: starting a computer by loading the kernel.

Bootstrap program: code stored in ROM that is able to locate the kernel, load it into memory, and start its execution.

Page 34: Operating systems


(BASIC ISSUES/ Processes);

Process Concept.

Process Scheduling.

Operations on Processes.

Cooperating Processes.

Inter-process Communication.

Communication in Client-Server Systems.

Process Concept

An operating system executes a variety of programs:

Batch system (jobs).

Time-shared systems (user programs or tasks).

A process is a program in execution; process execution must

progress in sequential fashion and includes:

Program counter.

Stack.

Data section.

The states of a process are:

New: the process is being created.

Running: instructions are being executed.

Waiting: the process is waiting for some event to occur.

Ready: the process is waiting to be assigned to a processor.

Terminated: the process has finished execution.

Page 35: Operating systems


(Process state diagram)

A Process Control Block (PCB) holds the information associated with each process:

Process state.

Program counter.

CPU registers.

CPU scheduling information.

Memory-management information.

Accounting information.

I/O status information.

Process Control Block (PCB)

Page 36: Operating systems


(CPU Switch from Process to Process)

Process Scheduling Queues

The queues for process scheduling are:

Job queue: set of all processes in the system.

Ready queue: set of all processes residing in main memory, ready and waiting to execute.

Device queues: set of processes waiting for an I/O device.

Processes migrate between the various queues.

Page 37: Operating systems


(Ready Queue and Various I/O Device Queues)

(Representation of Process Scheduling)

Page 38: Operating systems


Schedulers

Scheduling is divided into:

Long-term scheduler (or job scheduler): selects which processes

should be brought into the ready queue. It is invoked very

infrequently (seconds, minutes) and (may be slow). The long-term

scheduler controls the degree of multiprogramming.

Short-term scheduler (or CPU scheduler): selects which process

should be executed next and allocates CPU. It is invoked very

frequently (milliseconds) and (must be fast).

The Processes can be described as either:

I/O-bound process: spends more time doing I/O than computations,

many short CPU bursts.

CPU-bound process: spends more time doing computations; few

very long CPU bursts.

When the CPU switches to another process, the system must save the state of the old process and load the saved state for the new process.

Page 39: Operating systems


Context-switch time is pure overhead: the system does no useful work while switching.

Operations on Processes

Processes operation concepts depending on:

Process Creation

Parent process creates children processes, which, in turn create

other processes, forming a tree of processes.

Resource sharing takes one of three forms:

Parent and children share all resources.

Children share a subset of the parent's resources.

Parent and child share no resources.

Execution:

Parent and children execute concurrently.

Parent waits until children terminate.

Address space:

Child duplicate of parent.

Child has a program loaded into it.

UNIX examples:

The fork system call creates a new process.

The exec system call is used after a fork to replace the process's memory space with a new program.
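The fork/wait pattern can be shown in a minimal POSIX sketch. The helper name `spawn_and_wait` is invented; a real shell would additionally call one of the exec family (e.g. `execlp`) in the child branch to load a new program.

```c
/* Minimal fork example on a POSIX system: the parent creates a
 * child, waits for it, and reads its exit status. */
#include <assert.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with `code`; return the code the parent
 * collects via waitpid, or -1 if the child died abnormally. */
int spawn_and_wait(int code) {
    pid_t pid = fork();
    if (pid == 0)                 /* child: would call execlp() here */
        _exit(code);
    int status;                   /* parent: wait collects the child */
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Because the child is a duplicate of the parent, both continue from the same point after fork; the return value of fork (0 in the child, the child's pid in the parent) is what lets the two branches diverge.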

Page 40: Operating systems


Process Termination

A process terminates when it executes its last statement and asks the operating system to delete it (exit):

Output data is returned from the child to the parent (via wait).

The process's resources are de-allocated by the operating system.

A parent may terminate the execution of its children processes (abort) when:

The child has exceeded its allocated resources.

The task assigned to the child is no longer required.

The parent is exiting:

The operating system does not allow a child to continue if its parent terminates.

This leads to cascading termination.

Cooperating Processes

An independent process cannot affect or be affected by the execution of another process.

A cooperating process can affect or be affected by the execution of another process. The advantages of process cooperation are:

Information sharing.

Computation speed-up.

Modularity.

Convenience.


Homework:

Write about the producer-consumer problem with unbounded and bounded buffers.

Inter-process Communication (IPC)

Inter-process communication (IPC) is a mechanism for processes to communicate and to synchronize their actions. It uses a message system that allows processes to communicate with each other without resorting to shared variables.

IPC facility provides two operations:

Send (message), message size fixed or variable.

Receive (message).

As an example, if processes P and Q wish to communicate, they need to:

Establish a communication link between them.

Exchange messages via send/receive.

Implementation Questions

How are links established?

Can a link be associated with more than two processes?

How many links can there be between every pair of

communicating processes?

What is the capacity of a link?

Is the size of a message that the link can accommodate fixed or

variable?

Is a link unidirectional or bi-directional?

Direct Communication

In direct communication, Processes must name each other

explicitly:


Send (P, message): send a message to process P.

Receive (Q, message): receive a message from process Q.

Properties of communication link:

Links are established automatically.

A link is associated with exactly one pair of communicating

processes.

Between each pair there exists exactly one link.

The link may be unidirectional, but is usually bi-directional.

Indirect Communication

The operations performed in indirect communication are:

Create a new mailbox.

Send and receive messages through mailbox.

Destroy a mailbox.

Primitives are defined as:

Send (A, message): send a message to mailbox A.

Receive (A, message): receive a message from mailbox A.

In mailbox sharing, suppose P1, P2, and P3 share mailbox A: P1 sends, while P2 and P3 receive. Who gets the message? Possible answers:

Allow a link to be associated with at most two processes.

Allow only one process at a time to execute a receive operation.

Allow the system to select arbitrarily the receiver. Sender is

notified who the receiver was.


To achieve synchronization, the send and receive primitives may each be either blocking or non-blocking:

Blocking is considered synchronous.

Non-blocking is considered asynchronous.

Queue of messages attached to the link; implemented in one of

three ways:

Zero capacity: 0 messages; the sender must wait for the receiver (rendezvous).

Bounded capacity: finite length of n messages; the sender must wait if the link is full.

Unbounded capacity: infinite length; the sender never waits.
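Blocking send and receive over a bounded link can be sketched with Python's standard thread-safe queue (an illustration, not an OS primitive; the capacity of 1 and the message names are arbitrary):

```python
import queue
import threading

# queue.Queue(maxsize=n) behaves like a bounded-capacity message queue:
# put() blocks while the link is full, get() blocks while it is empty.
link = queue.Queue(maxsize=1)   # bounded capacity: 1 message
received = []

def receiver():
    for _ in range(3):
        received.append(link.get())   # blocking receive

t = threading.Thread(target=receiver)
t.start()
for msg in ["m1", "m2", "m3"]:
    link.put(msg)                     # blocking send: waits if link is full
t.join()
```

With `maxsize=1` the sender cannot run far ahead of the receiver; a zero-capacity rendezvous would force the sender to wait for each receive.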

Communication in Client-Server Systems

Client-server communication deals with:

Sockets: A socket is defined as an endpoint for communication.

A socket is identified by the concatenation of an IP address and a port: the socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8.

(Connection between two sockets)
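A minimal sketch of sockets as communication endpoints, using Python's standard socket API: an echo server bound to a host:port pair on the loopback address, and a client that connects to it (binding to port 0 lets the OS pick any free port):

```python
import socket
import threading

# Server endpoint: bind to 127.0.0.1 on an OS-chosen port, then listen.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()   # the (IP, port) pair identifying it

def echo_once():
    conn, _ = server.accept()       # connection between two sockets
    conn.sendall(conn.recv(1024))   # echo the message back
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

# Client endpoint: connect to the server's (IP, port) pair.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b"hello")
reply = client.recv(1024)
t.join()
client.close()
server.close()
```

Each side of the conversation is one socket; the connection is the pair of them, mirroring the host:port notation above.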


Remote Procedure Calls: Remote procedure call (RPC) abstracts

procedure calls between processes on networked systems. Using

Stubs: client-side proxy for the actual procedure on the server. The

client-side stub locates the server and marshals the parameters. The

server-side stub receives this message, unpacks the marshaled

parameters, and performs the procedure on the server.

Remote Method Invocation (Java): Remote Method Invocation

(RMI) is a Java mechanism similar to RPCs. RMI allows a Java

program on one machine to invoke a method on a remote object.


(BASIC ISSUES/ Threads);

Overview.

Multithreading Models.

Overview

A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads of the same process its code section, data section, and other resources. If a process has multiple threads of control, it can perform more than one task at a time; a process may be single-threaded or multithreaded.


The difference between a process and a thread: a process is the unit of resource ownership (sometimes called a heavyweight process) and has an address space, program code, global variables, and OS resources (files, I/O devices, etc.). A thread is the unit of scheduling (sometimes called a lightweight process): a single sequential execution stream within a process.

There are two types of threads:

User threads: thread management is done by a user-level threads library, such as POSIX Pthreads, Mach C-threads, and Solaris threads.

Kernel Threads: Supported by the Kernel such as Windows

95/98/NT/2000, Solaris, Tru64 UNIX, BeOS and Linux.
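The idea that threads share their process's data while each has its own flow of control can be sketched with Python's threading module (an illustrative example; the counter and loop counts are arbitrary):

```python
import threading

# Both threads see the same global variable (shared address space),
# but each runs its own sequential execution stream with its own stack.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # serialize access to the shared variable
            counter += 1

t1 = threading.Thread(target=worker, args=(1000,))
t2 = threading.Thread(target=worker, args=(1000,))
t1.start(); t2.start()
t1.join(); t2.join()        # wait for both threads to finish
```

Without the lock the two update streams could interleave badly; the shared-data hazard is exactly why thread synchronization matters.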


Multithreading Models

Multithreading models are divided into:

Many-to-One: many user-level threads are mapped to a single kernel thread. Used on systems that do not support kernel threads.

One-to-One: each user-level thread maps to a kernel thread, as in Windows 95/98/NT/2000.

Many-to-Many: allows many user-level threads to be mapped to many kernel threads and allows the operating system to create a sufficient number of kernel threads. Examples include Solaris 2 and Windows NT/2000 with the ThreadFiber package.


(BASIC ISSUES/ CPU Scheduling);

Basic Concepts.

Scheduling Criteria.

Scheduling Algorithms.

Multiple-Processor Scheduling.

Real-Time Scheduling.

CPU Scheduling

Maximum CPU utilization is obtained with multiprogramming. Scheduling deals with the CPU and I/O burst cycle: process execution consists of a cycle of CPU execution and I/O wait.

(Alternating Sequence of CPU and I/O Bursts)


The CPU scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. Scheduling decisions may take place when a process:

1. Switches from running to waiting state.

2. Switches from running to ready state.

3. Switches from waiting to ready state.

4. Terminates.

Scheduling under 1 and 4 is nonpreemptive; all other scheduling is preemptive. The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:

switching context

switching to user mode

jumping to the proper location in the user program to restart that

program

Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.

Scheduling Criteria

Scheduling Criteria divided into:

CPU utilization: keep the CPU as busy as possible.

Throughput: number of processes that complete their execution per time unit.

Turnaround time: amount of time to execute a particular process.

Waiting time: amount of time a process has been waiting in the

ready queue

Response time: amount of time it takes from when a request was

submitted until the first response is produced, not output (for time-

sharing environment)

The optimization criteria are:


Max CPU utilization.

Max throughput.

Min turnaround time.

Min waiting time.

Min response time.

Scheduling Algorithms

Scheduling algorithms used to obtain high utilization are:

First-Come, First-Served (FCFS) Scheduling.

Shortest-Job-First (SJF) Scheduling.

Round Robin (RR).

First-Come, First-Served (FCFS) Scheduling:

Process Burst Time

P1 24

P2 3

P3 3

Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

Waiting time for P1 = 0; P2 = 24; P3 = 27.

Average waiting time: (0 + 24 + 27)/3 = 17.


Suppose that the processes arrive in the order P2, P3, P1. The

Gantt chart for the schedule is:

Waiting time for P1 = 6; P2 = 0; P3 = 3.

Average waiting time: (6 + 0 + 3)/3 = 3.

Much better than previous case.
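The two FCFS orderings above can be checked with a short sketch (all processes arrive at time 0, as in the example):

```python
# FCFS waiting times: each process waits for the sum of the bursts
# ahead of it in the arrival order.
def fcfs_waits(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time spent waiting in the ready queue
        elapsed += burst
    return waits

w1 = fcfs_waits([24, 3, 3])    # order P1, P2, P3 -> [0, 24, 27]
avg1 = sum(w1) / len(w1)       # 17.0
w2 = fcfs_waits([3, 3, 24])    # order P2, P3, P1 -> [0, 3, 6]
avg2 = sum(w2) / len(w2)       # 3.0
```

The averages 17 and 3 match the two cases worked out above; the long P1 burst dragging every later process down is the convoy effect.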

Convoy effect: short processes wait behind a long process.

Shortest-Job-First (SJF) Scheduling:

- This algorithm associates with each process the length of its next CPU burst and uses these lengths to schedule the process with the shortest time. Two schemes:

Non-preemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.

Preemptive: if a new process arrives with CPU burst length less

than remaining time of current executing process, preempt. This

scheme is known as the Shortest-Remaining-Time-First (SRTF).

- SJF is optimal: gives minimum average waiting time for a given

set of processes.

Process Arrival Time Burst Time

P1 0.0 7

P2 2.0 4

P3 4.0 1

P4 5.0 4


SJF (non-preemptive)

- Average waiting time = (0 + 6 + 3 + 7)/4 = 4.

SJF (preemptive)

- Average waiting time = (9 + 1 + 0 + 2)/4 = 3.
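The preemptive (SRTF) result can be reproduced by a unit-time simulation over the arrival/burst table given above (a sketch; at every time unit the arrived process with the least remaining time runs):

```python
# Shortest-Remaining-Time-First simulation, one time unit at a time.
def srtf_avg_wait(procs):               # procs: list of (arrival, burst)
    remaining = [b for _, b in procs]
    finish = [0] * len(procs)
    t = 0
    while any(r > 0 for r in remaining):
        # processes that have arrived and are not yet done
        ready = [i for i, (a, _) in enumerate(procs)
                 if a <= t and remaining[i] > 0]
        j = min(ready, key=lambda i: remaining[i])  # shortest remaining time
        remaining[j] -= 1
        t += 1
        if remaining[j] == 0:
            finish[j] = t
    waits = [finish[i] - procs[i][0] - procs[i][1]
             for i in range(len(procs))]            # turnaround minus burst
    return sum(waits) / len(procs)

avg = srtf_avg_wait([(0.0, 7), (2.0, 4), (4.0, 1), (5.0, 4)])  # 3.0
```

The per-process waits come out as 9, 1, 0, and 2, matching the average of 3 computed above.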

- To determine the length of the next CPU burst, we can only estimate it. One approach uses the lengths of previous CPU bursts with exponential averaging:

tn = actual length of the nth CPU burst.

τn+1 = predicted value for the next CPU burst.

0 ≤ α ≤ 1.

τn+1 = α·tn + (1 - α)·τn.
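The exponential-averaging formula can be evaluated directly (a sketch; α = 1/2, the initial guess τ0 = 10, and the observed bursts are assumed values):

```python
# Exponential averaging of CPU bursts: tau_{n+1} = alpha*t_n + (1-alpha)*tau_n.
def predict(bursts, alpha=0.5, tau0=10.0):
    tau = tau0
    history = []
    for t in bursts:
        # blend the newest observed burst with the running estimate
        tau = alpha * t + (1 - alpha) * tau
        history.append(tau)
    return history

preds = predict([6, 4, 6])   # [8.0, 6.0, 6.0]
```

With α = 1/2, each prediction is the midpoint of the last observed burst and the previous prediction, so recent history dominates but old history never vanishes entirely.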

- Priority Scheduling: a priority number (integer) is associated with each process; the CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority). It may be:

Preemptive.

Non-preemptive.

- SJF is a priority scheduling where priority is the predicted next

CPU burst time.

Problem ≡ Starvation: low priority processes may never execute.


Solution ≡ Aging: as time progresses increase the priority of the

process.

Round Robin (RR):

Process Burst Time

P1 53

P2 17

P3 68

P4 24

- The Gantt chart is:

- Typically, higher average turnaround than SJF, but better response.

- Each process gets a small unit of CPU time (time quantum),

usually 10-100 milliseconds. After this time has elapsed, the

process is preempted and added to the end of the ready queue.

- If there are n processes in the ready queue and the time quantum is

q, then each process gets 1/n of the CPU time in chunks of at most

q time units at once. No process waits more than (n-1)q time units.
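A round-robin simulation over the burst table above (quantum q = 20, all arrivals at time 0, queued in order P1..P4) yields the average waiting time:

```python
from collections import deque

# Round-robin: run the head of the ready queue for at most q time units;
# a preempted process goes to the back of the queue.
def rr_avg_wait(bursts, q=20):
    remaining = list(bursts)
    finish = [0] * len(bursts)
    ready = deque(range(len(bursts)))   # all processes arrive at time 0
    t = 0
    while ready:
        i = ready.popleft()
        run = min(q, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)             # preempted: back of the ready queue
        else:
            finish[i] = t
    waits = [finish[i] - bursts[i] for i in range(len(bursts))]
    return sum(waits) / len(bursts)

avg = rr_avg_wait([53, 17, 68, 24])    # 73.0
```

The individual waits are 81, 20, 94, and 97, illustrating RR's typically higher average turnaround than SJF in exchange for better response.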

- Performance:

q large: RR degenerates to FIFO (FCFS).

q small: q must remain large with respect to the context-switch time, otherwise overhead is too high.


- Multilevel Queue:

Ready queue is partitioned into separate queues:

Foreground (interactive).

Background (batch).

Each queue has its own scheduling algorithm, foreground: RR and

background: FCFS.

Scheduling must be done between the queues.

Fixed priority scheduling; (i.e., serve all from foreground then

from background). Possibility of starvation.

Time slice; each queue gets a certain amount of CPU time which it

can schedule amongst its processes; i.e., 80% to foreground in RR.

Multiple-Processor Scheduling

- CPU scheduling is more complex when multiple CPUs are available. With homogeneous processors within a multiprocessor, asymmetric multiprocessing can be used: only one processor accesses the system data structures, alleviating the need for data sharing.

Real-Time Scheduling

- Hard real-time systems: required to complete a critical task

within a guaranteed amount of time.

- Soft real-time computing: requires that critical processes receive

priority over less fortunate ones.


(BASIC ISSUES/ Deadlocks);

System Model.

Deadlock Characterization.

Methods for Handling Deadlocks.

Deadlock Prevention.

Deadlock Avoidance.

Deadlock Detection.

Recovery from Deadlock.

Introduction

- A set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set, constitutes the deadlock problem.

- For example, assume a system has 2 tape drives; P0 and P1 each hold one tape drive and each needs the other. Semaphores A and B are initialized to 1.

P0 P1

wait (A); wait(B)

wait (B); wait(A)

- Bridge crossing example: traffic flows in only one direction. Each section of the bridge can be viewed as a resource. If a deadlock occurs, it can be resolved if one car backs up (preempt resources and roll back). Several cars may have to be backed up if a deadlock occurs, so starvation is possible.


System Model

- In the system model there are resource types R1, R2, …, Rm (CPU cycles, memory space, and I/O devices). Each resource type Ri has Wi instances.

- Each process utilizes a resource as follows:

Request.

Use.

Release.

Deadlock Characterization

- Deadlock can arise if four conditions hold simultaneously:

Mutual exclusion: only one process at a time can use a resource.

Hold and wait: a process holding at least one resource is waiting

to acquire additional resources held by other processes.

No preemption: a resource can be released only voluntarily by the

process holding it, after that process has completed its task.

Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
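Circular wait can be tested mechanically by looking for a cycle in a wait-for graph, where an edge Pi → Pj means Pi is waiting for a resource held by Pj (a depth-first-search sketch; the example graphs are hypothetical):

```python
# Detect a cycle in a wait-for graph given as {process: [processes it waits on]}.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    color = {node: WHITE for node in graph}

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:   # back edge: circular wait
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

deadlocked = has_cycle({"P0": ["P1"], "P1": ["P2"], "P2": ["P0"]})  # True
safe = has_cycle({"P0": ["P1"], "P1": ["P2"], "P2": []})            # False
```

This is the core of the detection approach discussed later: a cycle in the wait-for graph signals a possible deadlock among the processes on the cycle.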

Methods for Handling Deadlocks

To handle deadlocks, the system must either ensure that it will never enter a deadlock state, or allow the system to enter a deadlock state and then recover. Most operating systems, including UNIX, ignore the problem and pretend that deadlocks never occur.

Deadlock Prevention

- Restrain the ways requests can be made:

Mutual Exclusion: not required for sharable resources; must hold

for non-sharable resources.

Hold and Wait: must guarantee that whenever a process requests a

resource, it does not hold any other resources.

Require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when it has none.

Low resource utilization; starvation possible.

No Preemption:

If a process that is holding some resources requests another

resource that cannot be immediately allocated to it, then all

resources currently being held are released.

Preempted resources are added to the list of resources for which

the process is waiting.

Process will be restarted only when it can regain its old resources,

as well as the new ones that it is requesting.

Circular Wait: impose a total ordering of all resource types, and

require that each process requests resources in an increasing order

of enumeration.


Deadlock Avoidance

- Requires that the system has some additional a priori information

available.

Simplest and most useful model requires that each process declare

the maximum number of resources of each type that it may need.

The deadlock-avoidance algorithm dynamically examines the

resource-allocation state to ensure that there can never be a

circular-wait condition.

Resource-allocation state is defined by the number of available and

allocated resources, and the maximum demands of the processes.

Deadlock Detection

- Allow system to enter deadlock state.

- Detection algorithm: when, and how often, to invoke it depends on:

How often is a deadlock likely to occur?

How many processes will need to be rolled back?

If detection algorithm is invoked arbitrarily, there may be

many cycles in the resource graph and so we would not be

able to tell which of the many deadlocked processes “caused”

the deadlock.


Recovery from Deadlock

- Recovery schemes deal with process termination:

Abort all deadlocked processes.

Abort one process at a time until the deadlock cycle is eliminated.

In which order should we choose to abort?

Priority of the process.

How long process has computed, and how much longer to

completion.

Resources the process has used.

Resources process needs to complete.

How many processes will need to be terminated?

Is process interactive or batch?


(BASIC ISSUES/ Memory Management);

Background.

Swapping.

Contiguous Allocation.

Paging.

Segmentation.

Segmentation with Paging.

Background

- A program must be brought into memory and placed within a process for it to run. The input queue is the collection of processes on disk that are waiting to be brought into memory to run; a user program goes through several steps before being run.

- Address binding of instructions and data to memory addresses can

happen at three different stages;

Compile time: If memory location known a priori, absolute code

can be generated; must recompile code if starting location changes.

Load time: Must generate re-locatable code if memory location is

not known at compile time.

Execution time: Binding delayed until run time if the process can

be moved during its execution from one memory segment to

another. Need hardware support for address maps (e.g., base and

limit registers).


- Addresses in memory are either logical or physical. The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.

Logical address: generated by the CPU; also referred to as virtual

address.

Physical address: address seen by the memory unit.

- Logical and physical addresses are the same in compile time and

load-time address-binding schemes; logical (virtual) and physical

addresses differ in execution-time address-binding scheme. The

user program deals with logical addresses; it never sees the real

physical addresses.

- The Memory-Management Unit (MMU) is the hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.

(Dynamic relocation using relocation register)
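Dynamic relocation with relocation and limit registers can be sketched as follows (the register values 14000 and 1000 are illustrative, not from the text):

```python
# Relocation-register scheme: a logical address below the limit register
# is offset by the relocation (base) register; anything else traps to the OS.
RELOCATION = 14000   # smallest physical address of the partition
LIMIT = 1000         # range of valid logical addresses

def translate(logical):
    if not 0 <= logical < LIMIT:
        raise MemoryError("addressing error: trap to OS")
    return RELOCATION + logical   # physical address sent to memory

phys = translate(346)   # 14346
```

The check against the limit register is what protects user processes from each other and from the operating system's code and data.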


Swapping

- A process can be swapped temporarily out of memory to a backing

store, and then brought back into memory for continued execution.

- Backing store is a fast disk large enough to accommodate copies of

all memory images for all users; must provide direct access to

these memory images.

- Roll out, roll in: a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.

- Major part of swap time is transfer time; total transfer time is

directly proportional to the amount of memory swapped. Modified

versions of swapping are found on many systems, i.e., UNIX,

Linux, and Windows.

(Schematic View of Swapping)


Contiguous Allocation

- Main memory is usually divided into two partitions:

Resident operating system, usually held in low memory with

interrupt vector.

User processes then held in high memory.

Single partition allocation:

Relocation-register scheme used to protect user processes from

each other, and from changing operating-system code and data.

Relocation register contains value of smallest physical address;

limit register contains range of logical addresses – each logical

address must be less than the limit register.

(Hardware Support for Relocation and Limit Registers)


Multiple partition allocation:

Hole: a block of available memory; holes of various sizes are scattered throughout memory.

When a process arrives, it is allocated memory from a hole large

enough to accommodate it.

Operating system maintains information about: a) allocated

partitions b) free partitions (hole).

(Process allocation)

- The dynamic storage-allocation problem is how to satisfy a request of size n from a list of free holes.

First-fit: Allocate the first hole that is big enough.

Best-fit: Allocate the smallest hole that is big enough; must search

entire list, unless ordered by size. Produces the smallest leftover

hole.

Worst-fit: Allocate the largest hole; must also search entire list.

Produces the largest leftover hole.

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
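The three placement strategies can be compared on one request sequence (hole sizes and request sizes below are assumed for illustration):

```python
# Satisfy requests against a list of free holes under a given strategy.
# Each allocation shrinks the chosen hole; an unsatisfiable request is skipped.
def allocate(holes, requests, strategy):
    holes = list(holes)                  # work on a copy
    placed = 0
    for req in requests:
        candidates = [i for i, h in enumerate(holes) if h >= req]
        if not candidates:
            continue                     # no hole big enough
        if strategy == "first":
            i = candidates[0]            # first hole that is big enough
        elif strategy == "best":
            i = min(candidates, key=lambda j: holes[j])  # smallest fit
        else:                            # "worst"
            i = max(candidates, key=lambda j: holes[j])  # largest hole
        holes[i] -= req                  # leftover becomes a smaller hole
        placed += 1
    return placed

holes = [100, 500, 200, 300, 600]
reqs = [212, 417, 112, 426]
first = allocate(holes, reqs, "first")   # places 3 of 4
best = allocate(holes, reqs, "best")     # places all 4
```

On this particular sequence best-fit satisfies every request while first-fit cannot place the last one; which strategy wins in general depends on the workload.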


- Fragmentation comes in two forms:

External Fragmentation – total memory space exists to satisfy a

request, but it is not contiguous.

Internal Fragmentation – allocated memory may be slightly larger

than requested memory; this size difference is memory internal to a

partition, but not being used.

Reduce external fragmentation by compaction.

Shuffle memory contents to place all free memory together in one

large block.

Compaction is possible only if relocation is dynamic, and is done

at execution time.

I/O problem:

Latch job in memory while it is involved in I/O.

Do I/O only into OS buffers.

Paging

- The logical address space of a process can be noncontiguous; the process is allocated physical memory wherever it is available. Physical memory is divided into fixed-sized blocks called frames (size is a power of 2, between 512 bytes and 8192 bytes).

- Divide logical memory into blocks of the same size, called pages, and keep track of all free frames. To run a program of size n pages, find n free frames and load the program. Set up a page table to translate logical to physical addresses. Internal fragmentation may occur.


- Address generated by CPU is divided into:

Page number (p) – used as an index into a page table which

contains base address of each page in physical memory.

Page offset (d) – combined with base address to define the physical

memory address that is sent to the memory unit.

(Address Translation Architecture)

(Paging Example)
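Translation with p and d as defined above can be sketched directly (the 4-byte page size and the page-to-frame mapping are assumed, echoing a small 4-page example):

```python
# Paging translation: split the CPU address into page number p and
# offset d, look up the frame for p, and rebuild the physical address.
PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # assumed page -> frame mapping

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)   # p = addr // size, d = addr % size
    return page_table[p] * PAGE_SIZE + d   # frame base + offset

phys = translate(13)   # page 3, offset 1 -> frame 2 -> physical 9
```

Logical address 13 splits into p = 3 and d = 1; page 3 lives in frame 2, so the physical address is 2·4 + 1 = 9.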


(Free frames, before and after allocation)

- Implementation of the page table: the page table is kept in main memory and is supported by two registers:

Page-table base register (PTBR), which points to the page table.

Page-table length register (PTLR), which indicates the size of the page table.

- In this scheme every data/instruction access requires two memory

accesses. One for the page table and one for the data/instruction.

The two memory access problem can be solved by the use of a

special fast-lookup hardware cache called associative memory or

translation look-aside buffers (TLBs).


(Paging Hardware with TLB)

Segmentation

- Memory segmentation is one method of implementing memory protection; paging is another, and they can be combined. Segmentation views memory as a collection of variable-sized segments rather than a linear array of bytes, and each segment can have its own protection and grow independently.

- The size of a memory segment is generally not fixed and may be as small as a single byte. Segments usually represent natural divisions of a program, such as individual routines or data tables, so segmentation is generally more visible to the programmer.

- A program is a collection of segments. A segment is a logical unit

such as:

Main program.

Method.

Local variables.

Global variables.

Common block.

Stack.

Symbol table.

Arrays.


(BASIC ISSUES/ Virtual Memory);

Background.

Demand Paging.

Page Replacement.

Thrashing.

Background

- Virtual memory is a separation of user logical memory from physical memory. This separation means that:

Only part of the program needs to be in memory for execution.

Logical address space can therefore be much larger than physical

address space.

Allows address spaces to be shared by several processes.

Allows for more efficient process creation.

- Virtual memory can be implemented via:

Demand paging.

Demand segmentation.


Demand Paging

- Bring a page into memory only when it is needed, i.e., when it is referenced. If the reference is invalid, abort; if the page is valid but not in memory, bring it in. The benefits are:

Less I/O needed.

Less memory needed.

Faster response.

More users.

(Demand paging)

- With each page-table entry a valid–invalid bit is associated (1 means in memory, 0 means not in memory); initially the valid–invalid bit is set to 0 on all entries. During address translation, if the valid–invalid bit in a page-table entry is 0, a page fault occurs.

- On a page fault, the OS looks at another table to decide: an invalid reference means abort; otherwise the page is just not in memory, and the OS will:


Get an empty frame.

Swap the page into the frame.

Reset the tables, setting the validation bit to 1.

Restart the instruction.

- If there is no free frame, a page-replacement algorithm takes over: find some page in memory that is not really in use and swap it out. The same page may be brought into memory several times.

(Steps in Handling a Page Fault)

- Page Replacement completes separation between logical memory

and physical memory – large virtual memory can be provided on a

smaller physical memory.


- Before using page replacement, the OS finds the location of the desired page on disk and then looks for a free frame:

If there is a free frame, use it.

If there is no free frame, use a page replacement algorithm to select

a victim frame.

- After selecting a frame, read the desired page into the (newly) free frame, update the page and frame tables, and restart the process.

- The page-replacement algorithms are:

First-In-First-Out (FIFO) Algorithm: replace the page that has been in memory the longest. Note that more frames do not always mean fewer page faults (Belady's anomaly).

Example: assume Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Assume: 3 frames (3 pages can be in memory at a time per

process):

Result: 9 page faults occur.

Assume: 4 frames:

Result: 10 page faults occur (Belady's anomaly: more frames, more faults).


(FIFO Page Replacement)
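A short FIFO simulation over the reference string above counts the faults for 3 and 4 frames and exhibits Belady's anomaly:

```python
from collections import deque

# FIFO replacement: on a fault with full frames, evict the page that
# entered memory first.
def fifo_faults(refs, nframes):
    frames, order, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(order.popleft())  # evict oldest page
            frames.add(p)
            order.append(p)                      # newest arrival
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
f3 = fifo_faults(refs, 3)   # 9 faults
f4 = fifo_faults(refs, 4)   # 10 faults
```

Giving the process an extra frame produces more faults on this string, which is exactly Belady's anomaly.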

Optimal Algorithm: replace the page that will not be used for the longest period of time.

Example: assume Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Assume: 4 frames example.

Result: 6 faults occur.

(Optimal Page Replacement)


Least Recently Used (LRU) Algorithm: replace the page that has not been used for the longest period of time.

Example: assume Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Assume: 4 frames example.

Result: 8 faults occur.

(LRU Page Replacement)
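An LRU simulation over the same reference string (4 frames), using an ordered dict as the recency stack:

```python
from collections import OrderedDict

# LRU replacement: a hit moves the page to the most-recent end; on a
# fault with full frames, the least-recently-used page is evicted.
def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for p in refs:
        if p in frames:
            frames.move_to_end(p)            # page just used: most recent
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)   # evict least recently used
            frames[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
faults = lru_faults(refs, 4)   # 8 faults
```

LRU lands between FIFO (10 faults) and optimal (6 faults) on this string, which is its usual position in practice.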

Second-Chance (clock) Page-Replacement Algorithm.


Counting Algorithms.

- Keep a counter of the number of references that have been made to

each page.

- LFU Algorithm: replaces page with smallest count.

Thrashing

- If a process does not have “enough” pages, the page fault rate is

very high. This leads to:

Low CPU utilization.

Operating system thinks that it needs to increase the degree of

multiprogramming.

Another process is added to the system.

- Thrashing ≡ a process is busy swapping pages in and out.