Operating System Support (Ch 6) - Utah State...



Operating System Support

Introduction

Distributed systems act as resource managers for the underlying hardware, allowing users access to memory, storage, CPUs, peripheral devices, and the network.
Much of this is accomplished by operating systems and network operating systems.

Virtual Machines
Because multiple processes can run at the same time on a hardware device, an operating system provides a virtual machine, giving each user the impression of control of the system.
– This includes protection of each user process from interference by another process.
To accomplish this, operating systems typically use two levels of access to system resources.
– The only way to switch from one mode to the other is through system calls that are not accessible to users.

Kernel Mode
All instructions are available.
All memory is accessible.
All registers are accessible.

User Mode
Memory access is restricted.
– Users are not allowed to access memory locations outside an assigned range of addresses.
Device registers cannot be accessed.

Network Operating Systems
Autonomous nodes
– Manage their own resources
– One system image per node
– Machines are connected via a network
Processes are only scheduled locally
– User action is required for distribution
E.g., Windows XP, UNIX, Linux

Distributed Operating System
Single (global) system image
Processes are scheduled across nodes
– E.g., for load balancing or optimization of communication
No true distributed O/S is in use
– Legacy problems
– User autonomy/integrity compromised
Solution: middleware + O/S

System Layers
[Figure: system layers at two nodes. Each node runs applications and services on top of middleware; middleware sits on the OS (kernel, libraries and servers providing processes, threads, communication, ...), which in turn runs on the computer and network hardware (the platform).]

Requirements
Main components: kernels and server processes
Requirements:
– Encapsulation
  Transparency for clients
– Protection
  From illegitimate accesses
– Concurrent processing

Requirements (cont)
Remote invocations require:
– A communication subsystem
– Scheduling of requests

Core O/S Components
[Figure: the core OS components are the process manager, thread manager, memory manager, communication manager, and supervisor.]

Core O/S Functions
Process management
– Creating, managing, and destroying processes
– Every process has an address space and one or more threads
Thread management
– Creating, synchronizing, and scheduling threads
Communications management
– All communication between threads in the same computer; may include remote processes
Memory management
– Control of physical and virtual memory

The O/S Supervisor
The supervisor
– Dispatches interrupts, system call traps, and exceptions
– Controls the memory management unit and hardware caches
– Manipulates the processor and floating-point registers

Protection
All resources must be protected from interference.
This includes protection from access by malicious code, but it also includes protection from faults and from other processes that may be running on the same computer.
Protection might also prevent the bypassing of required activities such as login authentication and authorization.

Kernels and Protection
Most processors have a hardware mode register that permits privileged instructions to be strictly controlled.
– Supervisor mode for privileged instructions
– User mode for unprivileged instructions
Separate address spaces are allocated.
– Only the privileged kernel can access the privileged spaces.
– Usually a system call trap is required to access privileged instructions from user space.
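As a rough illustration (not from the slides; a Linux/glibc environment is assumed), the snippet below shows a user-mode program crossing into kernel mode through the system call trap, once via the usual library wrapper and once via the raw syscall() interface:

```c
/* Sketch: user mode -> kernel mode via a system call trap (Linux/glibc assumed). */
#include <stdio.h>
#include <unistd.h>      /* getpid(), syscall() */
#include <sys/syscall.h> /* SYS_getpid */

int main(void)
{
    /* The library wrapper hides the trap instruction from the programmer. */
    pid_t pid = getpid();

    /* The same kernel service invoked through the generic trap interface. */
    long raw = syscall(SYS_getpid);

    printf("getpid() = %ld, syscall(SYS_getpid) = %ld\n", (long)pid, raw);
    return 0;
}
```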

Protection Overhead
Protection comes at a price:
– Processor cycles to switch between address spaces
– Supervision and protection of system call traps
– Establishment, authentication, and authorization of privileged users and processes
Since all overhead consumes expensive resources, it is always a key concern of IT managers.

Kernel Types
Microkernel
– Contains only code that must execute in kernel mode
– Functions:
  Setting device registers
  Switching the CPU between processes
  Manipulating the MMU
  Capturing hardware interrupts
– Other OS functions run in user mode
  File systems, network communication

Kernel Types (cont)
Monolithic kernel
– Performs all O/S functions
– Usually very large

Kernel vs. Microkernel
Microkernel advantages
– Testability
– Extensibility
– Modularity
– Binary emulation
Monolithic kernel advantages
– The way most OSs operate
– Performance
Hybrid solutions may be ideal

Processes
A typical process includes an execution environment and one or more threads.
An execution environment includes:
– An address space
– Thread synchronization and communication resources such as semaphores
– Communication interfaces such as sockets
– Higher-level resources such as open files and windows
Many earlier versions of processes allowed only a single thread, so the term multi-threaded process is often used for clarity.

Address Spaces
An address space is a management unit for the virtual memory of a process.
It consists of non-overlapping regions accessible by the threads of the owning process.
Each region has an extent (lowest virtual address and size), read/write/execute permissions for the process's threads, and a growth direction (up or down).
It is page-oriented, and gaps are left between regions to allow for growth.

Address Space
[Figure: layout of a process address space from address 0 to 2^N, with the text region at the bottom, the heap growing upward above it, auxiliary regions, and the stack growing downward from the top.]

Linux Address Spaces
Address spaces are a generalization of the (L)Unix model, which had three regions:
– A fixed, unmodifiable text region containing program code
– A heap, extensible toward higher virtual addresses
– A stack, extensible toward lower virtual addresses
An indefinite number of additional regions have since been added.
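As a rough sketch (not from the slides; a typical Linux process is assumed), the program below prints the addresses of a code symbol, a heap allocation, and a stack variable, which on such a system fall in the text, heap, and stack regions described above:

```c
/* Sketch: observing the text, heap, and stack regions of a process (Linux assumed). */
#include <stdio.h>
#include <stdlib.h>

void code_symbol(void) { /* lives in the read-only text region */ }

int main(void)
{
    int  on_stack = 0;                         /* stack region, grows toward lower addresses  */
    int *on_heap  = malloc(sizeof *on_heap);   /* heap region, grows toward higher addresses  */

    printf("text : %p\n", (void *)code_symbol);
    printf("heap : %p\n", (void *)on_heap);
    printf("stack: %p\n", (void *)&on_stack);

    free(on_heap);
    return 0;
}
```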

Stack
Generally there is a separate stack for each thread.
Whenever a thread or process is interrupted, status information is stored on the stack that permits the process or thread to continue from the point at which it was interrupted.
Usually, memory allocated to the stack is recovered when the process or thread retrieves the information and resumes, since interrupts and resumptions occur in a last-in, first-out (LIFO) manner.

File Regions
A file stored offline can be loaded into active memory.
A mapped file is accessed as an array of bytes in memory.
– Such a mapping can reduce access overhead dramatically, as it is orders of magnitude faster to access memory than disk files.
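A minimal sketch of a mapped-file region, assuming a POSIX system and a hypothetical input file named data.bin; once mapped, the file's contents are directly addressable as bytes in the process's address space:

```c
/* Sketch: mapping a file into the address space (POSIX mmap assumed). */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);           /* hypothetical input file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    if (st.st_size == 0) { fprintf(stderr, "empty file\n"); return 1; }

    /* The file now appears as a read-only region of bytes in memory. */
    const char *bytes = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (bytes == MAP_FAILED) { perror("mmap"); return 1; }

    printf("first byte: 0x%02x\n", (unsigned char)bytes[0]);

    munmap((void *)bytes, st.st_size);
    close(fd);
    return 0;
}
```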

Shared Memory Regions
Sometimes it is desirable to share memory between processes, or between a process and the kernel.
Reasons for sharing memory include:
– Libraries that might be large and would waste memory if each process loaded a copy
– Kernel code and data accessed when handling system calls and exceptions
– Data sharing and communication between processes working on shared tasks
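As an illustrative sketch (the POSIX shared-memory API is assumed, and /demo_region is a hypothetical name), one process can create a shared region that another process maps into its own address space; on some systems the program must be linked with -lrt:

```c
/* Sketch: creating and mapping a shared memory region (POSIX shm assumed).
 * A second process would shm_open() the same hypothetical name,
 * "/demo_region", and mmap() it to share the same bytes. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    int fd = shm_open("/demo_region", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }

    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Writes here are visible to any other process that maps the same region. */
    strcpy(region, "hello from the shared region");
    printf("%s\n", region);

    munmap(region, 4096);
    close(fd);
    shm_unlink("/demo_region");    /* remove the name once done */
    return 0;
}
```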

Shared Memory – Problems
Bus contention
– Solution: per-processor cache
  Introduces cache consistency problems
Software cache consistency
– Solution approaches:
  Write-through cache
  Don't cache updateable shared segments
  Flush cache in critical sections
  Prevent concurrent access

Process Creation
An operating system creates processes as needed.
In a distributed environment, there are two independent aspects of process creation:
– Choice of a target host
– Creation of an execution environment and an initial thread within it

Process Management
Process creation
– Parent process spawns a child process
– Choice of host
  Performed by a distributed system service
– Creation and initialization of an address space
Process control
– Create, suspend, resume, kill
Process migration
– An expensive operation!

Process Management – Creation
Creating a new execution environment
– Standard, statically pre-defined
– From an existing environment
  E.g., by fork() (see the sketch after this list)
  Usually a physically shared text region
  Some data regions shared or copied (copy-on-write)
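A minimal sketch of deriving a new execution environment from an existing one with fork() (POSIX assumed): the child starts with a logically copied address space, and copy-on-write means a page is only physically duplicated when one side writes to it, so the child's modification below is not seen by the parent:

```c
/* Sketch: fork() derives a child execution environment from the parent's;
 * copy-on-write keeps the parent's data unchanged when the child writes. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int value = 1;
    pid_t pid = fork();

    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {
        value = 42;   /* triggers a private copy of this page in the child */
        printf("child : value = %d\n", value);
        return 0;
    }

    waitpid(pid, NULL, 0);
    printf("parent: value = %d\n", value);   /* still 1 */
    return 0;
}
```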

Choice of Target Host
A transfer policy decides whether to situate the process locally or remotely.
A location policy determines which node should host the new process.
Location policies may be static or adaptive.
– Static policies ignore the current state of the system and are designed based on the expected long-term characteristics of the system.
– Adaptive policies are based on unpredictable runtime factors such as the load on a node.

Dynamic Location Policies
Load sharing policies use a load manager to allocate processes to hosts.
– Sender-initiated: the node creating the process specifies the host.
– Receiver-initiated: a node whose load is below a certain threshold advertises for more work.
– Migratory policies can shift processes between hosts at any time.

Process Management – Migration
An expensive operation
– Not in widespread use for load balancing
– May be useful for maintenance
  Long-running tasks
Choice of host (location policy)

Creating an Execution Environment
Once a host is selected, a new process requires an address space with initialized contents and default information such as open files.
– The new address space can be defined statically, or copied from an existing execution environment.
– If copied, content may be shared, and nothing is written to the new environment until a write instruction occurs in either process.
– Then the shared content is divided. This technique is called copy-on-write (next slide).

Copy-on-write
[Figure: copy-on-write between process A's address space and process B's address space, with region RB copied from RA. (a) Before a write, A's page table and B's page table both reference the same shared frame. (b) After a write, the kernel copies the affected frame so each process has its own.]

Threads
A single process can have more than one activity going on at the same time.
– For example, it may be performing an activity while also needing to be aware of background events.
– There may also be background activities such as loading information into a buffer from a socket or file.
Servers may service many requests from different users at the same time.

Client and Server Threads
[Figure: a client with two threads, where thread 1 generates results and thread 2 makes requests to the server, and a server where requests are received and queued for a pool of N threads that perform the input-output.]

Multithreaded Server Architectures
Worker pool
– A predetermined, fixed number of threads is available for use as needed and returned to the pool after use.
Thread-per-request (see the sketch after this list)
– A new thread is allocated for each new request, and discarded after use.
Thread-per-connection
– A new thread is allocated for each new connection.
– Several requests can use that thread sequentially.
– The thread is discarded when the connection is closed.
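As a simplified sketch of the thread-per-request style (pthreads assumed, compile with -pthread; the "requests" here are simulated integers rather than network messages), each incoming request gets its own detached worker thread, which is discarded once the request has been handled:

```c
/* Sketch: thread-per-request dispatching with pthreads (requests simulated). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>

static void *handle_request(void *arg)
{
    int id = *(int *)arg;
    free(arg);
    printf("worker: handling request %d\n", id);
    sleep(1);                        /* stand-in for the real work */
    printf("worker: request %d done\n", id);
    return NULL;                     /* thread is discarded after use */
}

int main(void)
{
    for (int id = 0; id < 5; id++) { /* stand-in for a receive loop */
        int *arg = malloc(sizeof *arg);
        *arg = id;

        pthread_t tid;
        if (pthread_create(&tid, NULL, handle_request, arg) != 0) {
            perror("pthread_create");
            free(arg);
            continue;
        }
        pthread_detach(tid);         /* no join: one throwaway thread per request */
    }

    sleep(2);                        /* crude wait so the workers can finish */
    return 0;
}
```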

Multithreaded Server Architectures
Thread-per-object
– A new thread is allocated for each remote object.
– All requests for that object wait to use that thread.
– The thread is discarded when the connection to the object is destroyed.

Server Threads
[Figure: three server threading alternatives. (a) Thread-per-request: a pool of workers takes remote-object requests from an I/O queue. (b) Thread-per-connection: one thread per connection serves the remote objects. (c) Thread-per-object: one thread per remote object handles its requests.]

Client Threads
Clients block while awaiting a response
– E.g., a blocking receive()
Schedule other threads while waiting
Multiple server connections
– E.g., web browsers
  Access multiple pages concurrently
  Multiple requests for the same page

What is a Thread?
It is a program's path of execution.
– Most programs run as a single thread, which can cause problems if a program needs multiple events or actions to occur at the same time.

What is a Thread?
Multi-threading means that multiple lines of a single program can be executed at the same time.
– However, it differs from multi-processing because all of the threads share the same address space for both code and data, which incurs less overhead.
So, by starting a thread, an efficient path of execution is created while still sharing the original data area of the parent.

Threads vs. Processes
Both methods work.
Threads are more efficient:
– Creation (no new execution environment)
– Context switching
– Initial page faults
– Caching
– Resource sharing
Problem: concurrency control
– Threads share an environment with one another; therefore we have the problems of shared memory.
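A minimal sketch (pthreads assumed, compile with -pthread) of why thread creation is cheap and why concurrency control becomes the problem: both threads read and write the same global counter in the shared address space, so the increments must be protected by a mutex:

```c
/* Sketch: threads share one address space, so updates need concurrency control. */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;                        /* shared by every thread */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);              /* without this, updates are lost */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;                             /* no new execution environment */
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    printf("counter = %ld\n", counter);         /* 200000 with the mutex in place */
    return 0;
}
```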

Thread States
Running state: a thread is said to be in the running state when it is being executed.
Ready state (runnable, but not running): a thread in this state is ready for execution but is not currently executing.
– Once the thread gets access to the CPU, it moves to the running state.

Thread States
Dead state:
– A thread reaches the dead state when its run method has finished execution.
Waiting state (yielding):
– In this state the thread is waiting for some action to happen.
– Once that action happens, the thread goes into the ready state.
– Threads in the waiting state could be sleeping, suspended, blocked, or waiting on a monitor.

Uses of Threads
Threads are used for all sorts of applications, from general interactive drawing applications to games.
– For instance, a single-threaded program is not capable of drawing pictures while reading keystrokes.
  The program either has to give full attention to listening for keystrokes or to drawing pictures.
  With threads, one thread can listen to the keyboard while another draws the pictures.

Uses of Threads
Another good use of threads is on a system with multiple CPUs or cores.
– In this case each thread can run on a separate CPU, resulting in true parallelism instead of time sharing.

User Space Threads
The kernel is unaware of the threads.
Threads are managed by the run-time system.
Thread information is kept in shared memory.
System calls have to be non-blocking.
– Extra overhead (an additional system call)
Cannot transfer control on a page fault.
Thread scheduling is part of the application.
– Inter-process thread scheduling is impossible.

Kernel Threads
Managed by the kernel.
Data structures are in kernel space.
Create/destroy are system calls.
System calls and page faults are not problematic.
Synchronization is more expensive.

Kernel vs. User Space Threads
Kernel-space threads are better for I/O-intensive applications.
User-space threads are better for fine-grained, compute-intensive parallel applications.
Hybrid solutions are possible.
– E.g., the application provides scheduling hints.

Example: FastThreads
FastThreads is a hierarchic, event-based thread scheduling system.
– It manages a kernel on a computer with one or more processors and a set of application processes.
– Each process has a user-level scheduler, while the kernel allocates virtual processors to processes.
Part (a) of the next slide shows the kernel allocating virtual processors to processes on a three-processor machine.

Scheduler Activations
[Figure: (A) assignment of virtual processors to processes A and B by the kernel; (B) events exchanged between the user-level scheduler and the kernel: P idle, P needed, P added, SA blocked, SA unblocked, SA preempted. Key: P = processor; SA = scheduler activation.]

Kernel Notifications
We see from the previous slide:
– The process notifies the kernel when a virtual processor is idle and no longer needed, or when an extra virtual processor is needed.
– The kernel notifies a process when a scheduler activation notifies that process's scheduler of an event. There are four types of scheduler activation events, shown on the next slide.

User-Level Scheduler Notifications
– A new virtual processor is allocated
– A scheduler activation has blocked
– A scheduler activation has unblocked
– A scheduler activation has been preempted

Communication and Invocation
What communication primitives does it supply?
Which protocols does it support, and how open is the communication implementation?
What steps are taken to make communication as efficient as possible?
What support is provided for high-latency and disconnected operation?

Invocation Overhead
In a typical employment situation, a worker might be expected to complete a number of tasks each day that contribute toward the goals of the employer. To accomplish those tasks, the worker might also have to perform tasks, such as commuting to work, that do not contribute directly to the employer's goals.
A process likewise has tasks that are within the scope of the process and tasks that do not contribute directly to its goals. Invocation can be compared to commuting: it is an overhead expense.

Invocation
The invocation of a local process takes place within local memory and may involve a few tens of instruction cycles.
Invocation of a remote process involves network activity and possibly access to files, and may require billions of instruction cycles on processors running at speeds measured in gigahertz.
These invocation activities are external to the desired processing and increase costs without adding value.

Delay Factors – Address Space Changes
[Figure slide: title only; no text content recoverable.]

Invocation Performance – Delay Factors
Domain transitions
– Address space changes
Network communication
– Bandwidth
– Number of packets
Thread scheduling
Context switching

Latency
Invocation costs are the delays required to set up communications plus the non-goal-performing overhead of invocations.
These fixed overhead costs measure the latency of the connection.
Substantial effort goes into minimizing and reducing latency costs in distributed applications.

Invocation Costs as a Percentage of Throughput
As more work is accomplished for a fixed amount of overhead, the overhead is less of a concern as a percentage of total costs.
With very small data sizes, most of the system time may be spent in overhead activities.
With large data transfers, the overhead costs may be negligible as a percentage of total costs.
The next slide illustrates this as a graph of RPC delay against packet size in RPC transfers.

RPC Delay Against Parameter Size
[Figure: RPC delay plotted against requested data size in bytes (0 to 2000), with the packet size marked; delay grows with the amount of data per request.]

Lightweight RPC
The RPCs run on the same machine.
One way to reduce the overhead of a remote invocation is to share some of the costs.
– While process invocation is expensive, some activities can be done once and then reused, while others must be done for each communication.
Lightweight RPC attempts to minimize overhead by sharing process activities within parent processes that are pre-established.
– Overhead costs can be reduced by as much as two-thirds.

Lightweight RPC
[Figure: lightweight RPC between a client and server on one machine, using a shared argument stack (the A stack): 1. the client's user stub copies the arguments; 2. it traps to the kernel; 3. the kernel makes an upcall into the server stub; 4. the server executes the procedure and copies the results; 5. control returns to the client via another trap.]

Concurrent Invocation
Another approach to reducing communication overhead is to reduce the number of messages sent to establish the connection, and to continue processing while communicating instead of suspending threads until a message is received.
Serialized and concurrent invocations are compared on the next slide.

Serialized and Concurrent Invocation Timing
[Figure: timelines for a client and server. In serialized invocations the client marshals and sends each request, waits to receive and unmarshal the reply, and processes the results before issuing the next request. In concurrent invocations the client marshals and sends several requests before the replies arrive, overlapping its own processing with the server's execution and the message transmission.]

Delay Factors – Network Communication
Bandwidth
Packet size
– Delay increases almost linearly
Packet initialization
– E.g., headers, checksums
Marshalling and data copying
– Across address spaces
– Across protocol layers
Synchronization

Characteristics of an Open Distributed System
Run only the system software at each computer that is necessary for it to carry out its particular role.
Allow the software implementing any particular service to be changed independently of other facilities.
Allow for alternatives of the same service to be provided, when this is required to suit different users or applications.
Introduce new services without harming the integrity of existing ones.

Kernel Architecture
Monolithic kernels that serve all the functions of the O/S may not be ideally suited for distributed applications.
– They are massive, undifferentiated, and intractable.
An alternative is to have the kernel perform the most basic abstractions and allow microkernels that can be adapted for specialized functions to manage system resources.

Monolithic Kernel and Microkernel
[Figure: a monolithic kernel, where servers S1 to S4 are part of the kernel code and data, versus a microkernel, where the servers run as dynamically loaded server programs outside a small kernel.]

Role of Microkernel
[Figure: the microkernel runs directly on the hardware and supports middleware via subsystems such as language support subsystems and an OS emulation subsystem.]

Multiuser O/S and Semaphores
With many processes on a single hardware device, multiple processes need controls that enable the CPU (or CPUs) to switch between tasks.
A semaphore can be thought of as an integer with two operations, down and up.
– The down operation checks to see if the value is > 0.
  If it is, it decrements the value and continues.
  If it is not, the calling process is blocked.
– The up operation does the opposite.
  It first checks to see if there are any blocked processes that could not complete the down operation earlier.

Semaphores
– If so, it unblocks one of them and continues.
– Otherwise, it increments the semaphore value.
Once an up or down operation begins, no other operation can access the semaphore until the operation is complete or the process blocks.
Semaphores are not a complete solution to preventing processes from interfering with each other.
For that reason, we have monitors.
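A small sketch of the down/up pairing using POSIX unnamed semaphores (sem_wait is the down operation and sem_post the up; Linux and pthreads assumed, compile with -pthread): the semaphore starts at 0, so the waiting thread blocks until the main thread posts:

```c
/* Sketch: down (sem_wait) blocks while the value is 0; up (sem_post) unblocks it. */
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t ready;                        /* counting semaphore, initial value 0 */

static void *waiter(void *arg)
{
    (void)arg;
    printf("waiter: down (blocks until value > 0)\n");
    sem_wait(&ready);                      /* the "down" operation */
    printf("waiter: unblocked, continuing\n");
    return NULL;
}

int main(void)
{
    sem_init(&ready, 0, 0);                /* value starts at 0 */

    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);

    sleep(1);                              /* let the waiter block first */
    printf("main  : up (unblocks the waiter)\n");
    sem_post(&ready);                      /* the "up" operation */

    pthread_join(t, NULL);
    sem_destroy(&ready);
    return 0;
}
```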

Monitors
A monitor is a construct similar to an object, containing variables and procedures.
– The variables can only be accessed by calling one of the procedures.
– The monitor allows only one process at a time to execute a procedure.
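Monitors appear directly in languages such as Java (synchronized methods); as a rough C approximation (pthreads assumed), the sketch below hides a shared counter behind procedures that each take an internal mutex, so only one thread executes a monitor procedure at a time and the variable is never touched directly:

```c
/* Sketch: a monitor-like construct in C, with private data, public procedures,
 * and mutual exclusion enforced inside every procedure (pthreads assumed). */
#include <stdio.h>
#include <pthread.h>

/* "Monitor" state: accessible only through the procedures below. */
static int counter = 0;
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;

void counter_increment(void)
{
    pthread_mutex_lock(&monitor_lock);     /* only one thread inside at a time */
    counter++;
    pthread_mutex_unlock(&monitor_lock);
}

int counter_read(void)
{
    pthread_mutex_lock(&monitor_lock);
    int value = counter;
    pthread_mutex_unlock(&monitor_lock);
    return value;
}

int main(void)
{
    counter_increment();
    counter_increment();
    printf("counter = %d\n", counter_read());
    return 0;
}
```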

What Can the O/S Do For Me?
In a course on distributed systems, it is logical to look at operating systems and network operating systems from the perspective of the services we require for a distributed system.
– What does a server require?
– What does a client require?

What Does a Server Do?
Waits for client requests
Serves many requests at the same time
Gives priority to important tasks
Manages background tasks
Keeps on going like the Energizer bunny
Gobbles up memory, CPU cycles, and disk space

Care and Feeding of a Server
Servers tend to need a high level of concurrency.
– This requires task management, which is best done by an operating system.
– Using the stack concept, we separate support services from the application services that the server performs.
– Many services are provided by Operating Systems (OS) and their extension, Network Operating Systems (NOS).

Basic Services of an O/S
Task Preemption
Task Priority
Semaphores
Local/Remote Inter-Process Communications (IPC)
File System Management
Security Features
Threads
Intertask Protection
High-Performance Multi-user Files
Memory Management
Dynamic Run-Time Extensions

Extended Services
Extended services are a category of middleware that exploit the potential of networks to distribute applications.
– DBMS, TP monitors, and objects
– Distributed computing environment
– Network operating systems
– Communications services

Service Needs
Ubiquitous communications
Access to network file and print services
Binary large objects (BLOBs)
Global directories and network Yellow Pages
Authentication and authorization services
System management
Network time
Database and transaction services
Internet services
Object-oriented services

Summary
Middleware and O/S
Kernels
– Monolithic vs. microkernels
Processes and threads
RPC
– Delays
– Lightweight RPC