finalosrev


Subject: Operating Systems Revision

Prof. Dr. Hanafy M. Ismail

Question 1

a) List five services provided by an operating system. Explain how each provides convenience to the users. Explain also in which cases it would be impossible for user-level programs to provide these services.

1) Program execution. The operating system loads the contents (or sections) of a file into memory and begins its execution. A user-level program could not be trusted to properly allocate CPU time.

2) I/O operations. Disks, tapes, serial lines, and other devices must be communicated with at a very low level. The user need only specify the device and the operation to perform on it, while the system converts that request into device- or controller-specific commands. User-level programs cannot be trusted to access only devices they should have access to and to access them only when they are otherwise unused.

3) File-system manipulation. There are many details in file creation, deletion, allocation, and naming that users should not have to perform. Blocks of disk space are used by files and must be tracked. Deleting a file requires removing the file name information and freeing the allocated blocks. Protections must also be checked to assure proper file access. User programs could neither ensure adherence to protection methods nor be trusted to allocate only free blocks and deallocate blocks on file deletion.

4) Communications. Message passing between systems requires messages to be turned into packets of information, sent to the network controller, transmitted across a communications medium, and reassembled by the destination system. Packet ordering and data correction must take place. Again, user programs might not coordinate access to the network device, or they might receive packets destined for other processes.

5) Error detection. Error detection occurs at both the hardware and software levels. At the hardware level, all data transfers must be inspected to ensure that data have not been corrupted in transit. All data on media must be checked to be sure they have not changed since they were written to the media. At the software level, media must be checked for data consistency; for instance, whether the number of allocated and unallocated blocks of storage match the total number on the device. These errors are frequently process-independent (for instance, the corruption of data on a disk), so there must be a global program (the operating system) that handles all types of errors. Also, by having errors processed by the operating system, processes need not contain code to catch and correct all the errors possible on a system.


b) List three services provided by an operating system for ensuring the efficient operation of the system itself via resource sharing?

1) Resource allocation - When multiple users or multiple jobs are running concurrently, resources must be allocated to each of them.

2) Accounting - To keep track of which users use how much and what kinds of computer resources

3) Protection and security - The owners of information stored in a multiuser or networked computer system may want to control use of that information; concurrent processes should not interfere with each other.

i. Protection involves ensuring that all access to system resources is controlled

ii. Security of the system from outsiders requires user authentication.

c) What are two fundamental approaches for users to interface with the operating system?

1) Command-line interface (CLI) or command interpreter. Primarily fetches a command from the user and executes it.

2) Graphical user interface (GUI). It provides a user-friendly desktop interface where the mouse is moved to position its pointer on images or icons on the screen.

d) Explain briefly how a system call is implemented. Illustrate your answer with an example.

1) Typically, a number is associated with each system call.

2) The system-call interface maintains a table indexed according to these numbers.

3) The system-call interface invokes the intended system call in the OS kernel and returns the status of the system call and any return values.

4) The caller need know nothing about how the system call is implemented. Most details of the OS interface are hidden from the programmer by the API.


e) There are many variations in process and job control. For example: a single-tasking system (as in MS-DOS) and a multi-tasking system (as in FreeBSD). Illustrate program execution in both cases.

For a single-tasking system (as in MS-DOS):

The MS-DOS operating system has a command interpreter that is invoked when the computer is started. To run a program:

1) It loads the program into memory, writing over most of itself to give the program as much memory as possible.

2) It sets the instruction pointer to the first instruction of the program.

3) The program runs, and either an error causes a trap, or the program executes a system call to terminate.

4) Then, the small portion of the command interpreter that was not overwritten resumes execution; it reloads the rest of the command interpreter from disk.

For a multi-tasking system (as in FreeBSD):

When a user logs on to the system, the shell of the user's choice is run. This shell is similar to the MS-DOS shell in that it accepts commands and executes programs that the user requests. However, the command interpreter may continue running while another program is executed.

1) To start a new process, the shell executes a fork() system call.

2) The selected program is loaded into memory via an exec() system call, and the program is executed.

3) The shell then either waits for the process to finish or runs the process "in the background" (in this case, the shell immediately requests another command).


When a process is running in the background, it cannot receive input directly from the keyboard, because the shell is using this resource; I/O is done through files or through a GUI. Meanwhile, the user is free to ask the shell to run other programs. When the process is done, it executes an exit() system call to terminate, returning a status code of 0 or a nonzero error code.

f) Illustrate the layered approach for operating system structure. What are the advantages and disadvantages of this approach?

1) In this approach, the operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.

2) An operating system layer is an implementation of an object made up of data and the operations that can manipulate those data. A layer consists of data structures and a set of routines that can be invoked by higher level layers. A layer can invoke the operations on lower-level layers.

3) The main advantage of the layered approach is simplicity of construction and debugging.

i. With modularity, layers are selected such that each uses functions (operations) and services of only lower-level layers.

ii. If an error is found during debugging of a particular layer, the error must be on that layer, because the layers below it are already debugged.

iii. A layer does not need to know how the operations provided by lower layers are implemented. Each layer hides the existence of certain data structures, operations, and hardware from higher-level layers.

4) The major difficulty with layered approach involves defining appropriately the various layers.

i. For example, the device driver for the backing store (disk space used by the virtual-memory algorithm) must be at a lower level than the memory-management routines, because the memory-management routines require the ability to use the backing store.

ii. Other requirements may not be obvious. The backing-store driver would normally be above the CPU scheduler, because the driver may need to wait for I/O and the CPU can be rescheduled during this time. However, in large systems, the CPU scheduler may have more information about all active processes than can fit in memory. Therefore, this information may need to be swapped in and out of memory, requiring the backing-store driver routine to be below the CPU scheduler.


g) What is the supervisor or kernel mode? What is the user mode? What are the differences? Why are they needed?

Modern CPUs have two execution modes, kernel mode and user mode, indicated by a mode bit. The kernel mode can use all general instructions as well as privileged instructions. Privileged instructions help the CPU access sensitive information (e.g., clear the cache) and carry out vital operations (e.g., I/O). On the other hand, only general instructions are available in user mode. If a privileged instruction is executed in user mode, the CPU treats the instruction as an illegal one and traps to the operating system. An operating system usually runs in kernel mode so that it can have access to all hardware components and no user can affect the execution of the OS. This is for protection purposes.

Question 2

a) Distinguish between a process and a program. Describe the sections of a process in memory.

1) A program is a passive entity, such as a file containing a list of instructions stored on disk (an executable file).

2) A process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.

3) A program becomes a process when an executable file is loaded into memory

4) Although two processes may be associated with the same program, they are considered two separate execution sequences. For example, a user may invoke many copies of the web browser program; each of these is a separate process (the data, heap, and stack sections vary).

5) A process in memory includes:

i. a text section, which contains the program code
ii. the program counter and the contents of the processor's registers, to represent the current activity
iii. a stack, which contains temporary data (such as function parameters, return addresses, and local variables)
iv. a data section, which contains global variables
v. a heap, which is memory dynamically allocated during run time

b) Describe the states of a process. Sketch a diagram of process state transitions.

Process states are:

1) new: The process is being created.
2) running: Instructions are being executed.
3) waiting: The process is waiting for some event to occur (such as I/O completion or reception of a signal).
4) ready: The process is waiting to be assigned to a processor.
5) terminated: The process has finished execution.


c) Each process is represented in the OS by a process control block (PCB). Describe the information included in the PCB.

1) Process state (new, ready, running, waiting, halted).

2) Program counter. The address of the next instruction to be executed for this process.

3) CPU registers. They include accumulators, index registers, stack pointers, and general-purpose registers.

4) CPU-scheduling information. It includes a process priority, pointers to scheduling queues, and any other scheduling parameters.

5) Memory-management information. It includes memory limits and other information to be discussed later.

6) Accounting information. It includes the amount of CPU and real time used, time limits, account numbers, and job or process numbers.

7) I/O status information. It includes the list of I/O devices allocated to the process and a list of open files.

d) Describe the action taken by the kernel to context-switch between processes.

1) When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process

e) What is meant by job queue, ready queue, and device queue?

1) Job queue – set of all processes in the system; as a process enters the system, it is put into the job queue.

2) Ready queue – set of all processes residing in main memory, ready and waiting to execute. It is stored as a linked list. A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field that points to the next PCB in the ready queue.

3) Device queues – set of processes waiting for an I/O device

f) Describe the differences among short-term, medium-term, and long-term scheduling. Sketch a queueing-diagram representation of process scheduling.

1) Long-term scheduler (or job scheduler) – selects processes from the job pool on disk and loads them into main memory for execution (it selects which processes are brought into the ready queue).

2) Short-term scheduler (or CPU scheduler) – selects which ready process should be executed next and allocates CPU to it.

3) Some operating systems (such as time-sharing systems) introduce a medium-term scheduler:

It removes processes from the main memory and thus reduces the degree of multiprogramming.

Later, the process can be reintroduced into memory, and its execution is continued where it left off.

This scheme is called swapping

g) Distinguish between an I/O-bound process and a CPU-bound process.

1) I/O-bound process – spends more time doing I/O than computations; many short CPU bursts.

2) CPU-bound process – spends more time doing computations; few very long CPU bursts.

h) Using the following program, explain what will be output at Line A.

int value = 5;

int main() {
    pid_t pid;
    pid = fork();
    if (pid == 0) { /* child process */
        value += 15;
    }
    else if (pid > 0) { /* parent process */
        wait(NULL);
        cout << " parent value = " << value; // Line A
        exit(0);
    }
}

1) Because the parent and the child processes have their own copies of the data, the output at line A is parent value = 5 (not 20).

i) Define the difference between preemptive and nonpreemptive scheduling.

Preemptive scheduling allows a process to be interrupted in the midst of its execution, taking the CPU away and allocating it to another process. Nonpreemptive scheduling ensures that a process relinquishes control of the CPU only when it finishes its current CPU burst.

j) Consider the following set of processes, with the length of the CPU burst given in milliseconds. The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.

1) Draw four Gantt charts that illustrate the execution of these processes using the following scheduling algorithms: FCFS, SJF, nonpreemptive priority (a smaller priority number implies a higher priority), and RR (quantum = 5)

2) What is the turnaround time of each process for each of the scheduling algorithms in part 1?

3) What is the waiting time of each process for each of the scheduling algorithms in part 1?

4) Which of the algorithms in part 1 results in the minimum average waiting time (over all processes)?

Question 3

a) What are the five major activities of an operating system in regard to process management?

1) The creation and deletion of both user and system processes
2) The suspension and resumption of processes
3) The provision of mechanisms for process synchronization
4) The provision of mechanisms for process communication
5) The provision of mechanisms for deadlock handling

The process table for part j) of Question 2:

Process  Burst Time  Priority
P1       20          3
P2       5           1
P3       10          3
P4       5           4
P5       15          2


b) What are the three major activities of an operating system in regard to memory management?

1) Keep track of which parts of memory are currently being used and by whom.

2) Decide which processes are to be loaded into memory when memory space becomes available.

3) Allocate and deallocate memory space as needed.

c) What are the three major activities of an operating system in regard to secondary-storage management?

1) Free-space management.
2) Storage allocation.
3) Disk scheduling.

d) What is the purpose of system calls?

1) System calls allow user-level processes to request services of the operating system.

e) What is the purpose of the command interpreter? Why is it usually separate from the kernel?

1) It reads commands from the user or from a file of commands and executes them, usually by turning them into one or more system calls. It is usually not part of the kernel since the command interpreter is subject to changes.

f) In Unix systems, what system calls have to be executed by a command interpreter or shell in order to start a new process?

1) In Unix systems, a fork system call followed by an exec system call need to be performed to start a new process. The fork call clones the currently executing process, while the exec call overlays a new process based on a different executable over the calling process.

g) What is the purpose of system programs?

1) System programs can be thought of as bundles of useful system calls. They provide basic functionality to users so that users do not need to write their own programs to solve common problems.

h) What is the main advantage of the layered approach to system design? What are the disadvantages of using the layered approach?

1) As in all cases of modular design, designing an operating system in a modular way has several advantages. The system is easier to debug and modify because changes affect only limited sections of the system rather than touching all sections of the operating system. Information is kept only where it is needed and is accessible only within a defined and restricted area, so any bugs affecting that data must be limited to a specific module or layer.


Question 4

a) What is binding?
Answer: Setting the actual addresses used by each non-relocatable address of a program.

b) What is dynamic loading?
Answer: Loading a routine only when it is called and not yet in memory; the routine is not loaded into memory until the first time it is called.

c) What is the advantage of dynamic loading?
Answer: Routines that are never used are never loaded; thus there is more free memory.

d) What is a relocation register?
Answer: A base register used to give the lowest physical address for a process.

e) What is swapping?
Answer: Copying a process from memory to disk to allow space for other processes.

f) List ways of reducing the context-switch time.
a. Minimize the amount of memory of a process to be swapped.
b. Increase the speed of the disk used for swapped-out processes.
c. Overlap swapping and program execution.

g) List three ways of treating jobs which request too much memory, once started in a given partition.
a. Abort the job with a run-time message.
b. Return control to the job; if it cannot adjust to the smaller memory, abort.
c. Swap the job out.

h) What is the system manager's main problem in using fixed partitions?
Answer: Determining the optimum number of partitions and their sizes.

i) List the common allocation methods. Which is the poorest?
a. First-fit.
b. Best-fit.
c. Worst-fit (the poorest).

j) What is internal fragmentation?
Answer: Wasted space within a partition.

k) What is external fragmentation?
Answer: Wasted space due to empty partitions being too small for current processes.

l) What are variable partitions?
Answer: Partitions that can be moved in location and changed in number.

m) What is compaction? Why use it?
Answer: Movement of processes to eliminate small partitions (holes). It combines smaller holes into fewer larger ones, to allow larger processes to run.


n) What is paging?
Answer: Splitting a program up into a group of fixed, equal-sized partitions, allowing the parts to be non-contiguous.

o) What is a frame?
Answer: A fixed-size block of physical memory, each block of the same size as a page.

p) What is contained in the page table?
Answer: The base address of each frame, and the corresponding page number.

q) How are the page numbers and offset numbers obtained?
Answer: The logical address is split into two parts: the right-hand bits give the offset number, and the left-hand bits give the page number.

r) Describe the page-to-frame translation.
Answer: The logical address is split into a page offset and a page number. The page table is indexed by the page number; the corresponding entry is the frame number, which is combined with the page offset to give the physical address.

s) How many frames are needed for each page?
Answer: One.

t) How much fragmentation occurs with paging? Which type?
Answer: On average, one-half of the last page in each job; this is internal fragmentation.

u) In what order are the frames assigned?
Answer: In the order of the free-frame list.

v) List advantages of paging.
a. Sharing common code.
b. Reducing fragmentation.

w) What table does the operating system use to keep track of frame allocations?
Answer: The frame table.

x) What is segmentation?
Answer: Breaking a program up into its logical segments and allocating space for these segments in memory separately. The segments may be of variable length and need not be allocated contiguously.

y) What is the logical address space? What is the physical address space?
a. A collection of segments or pages; each segment has a name and length, and is numbered. Each page is numbered and has the same length as any other, but is not named.
b. The actual memory regions.

z) What advantages does segmentation have over paging?
a. Can associate protection with critical segments, without protecting other items.
b. Can share code and data with other users, tightly, so that only the desired portions are shared.


Question 5

a) Define the meaning of a race condition. Answer the question first and use an execution sequence to illustrate your answer.

A race condition is a situation in which more than one process or thread is executing and accessing a shared data item concurrently, and the result depends on the order of execution. The following is a very simple counter-updating example. The value of count may be 9, 10, or 11, depending on the order of execution of the machine instructions of count++ and count--.

int count = 10;

process_1(...) {
    // do something
    count++;
}

process_2(...) {
    // do something
    count--;
}

The following execution sequence shows a race condition. There are two threads running concurrently (condition 1). Both processes access the shared variable count at the same time (condition 2). Finally, the computation result depends on the order of execution of the SAVE instructions (condition 3). The table below shows the result being 9; however, if the two SAVE instructions are switched (i.e., B's runs first and A's second), the result would be 11. Since all three conditions of a race condition are met, we have a race condition.

process_1     process_2     Comment
do something  do something  count = 10 initially
LOAD count                  process_1 executes count++
ADD #1
              LOAD count    process_2 executes count--
              SUB #1
SAVE count                  count is 11 in memory
              SAVE count    Now, count is 9 in memory

Stating that “count++ followed by count--” or “count-- followed by count++” would produce different results and hence a race condition is incorrect, because the threads do not access the shared variable count at the same time (i.e., Condition 2).


Question 6

a) What is a sector? A track? A cylinder?

a. Sector: the smallest block that can be read or written on a disk.
b. Track: a collection of sectors all on the same circumference on a single surface.
c. Cylinder: a collection of all tracks of the same radius on a multi-platter disk.

b) How is information on the disk referenced?
Answer: By the drive number, surface, track, and sector.

c) Describe how multi-platter disks are organized into tracks and sectors.

Answer: The disks have from three to twenty platters rotating concentrically together. Each platter has two surfaces (except that outer platters sometimes do not have outer surfaces), giving us N surfaces. These platters are divided up into C concentric cylinders, all rotating together around the same axis. The intersection of a cylinder with a surface is called a track, a circle on a single surface. Each track is divided up into K circular segments called sectors. So, the number of sectors on a disk = N * C * K.

d) What is seek time?
Answer: The time for the read/write head to find the desired cylinder.

e) What is latency time?
Answer: The time for the disk to rotate to the start of the desired sector.

f) What characteristics determine the disk access speed?
a. Seek time: time for the head to reach the specified track.
b. Latency time: determined by the rate of rotation.
c. Transfer time: determined by the rate of rotation, the amount of data to be transferred, and the density of the data on the disk.

g) What is the problem with FCFS disk scheduling?
Answer: The head may swing wildly from track to track, temporarily skipping some tracks that need to be used. Very inefficient.

h) What is SSTF disk scheduling?
Answer: Shortest-Seek-Time-First: it processes I/O in the tracks closest to the current position of the head.

i) What disadvantage does SSTF have?
Answer: It may result in starvation of some requests, if most requests are clustered together within a few tracks but the others are far away.

j) Describe the SCAN algorithm.
Answer: Start with the lowest track with an I/O request, and process requests in track order until the highest is found; then proceed in reverse order.

k) How does the C-SCAN method vary from the SCAN method?
Answer: After reaching the highest track-number request, it returns to the lowest track-number request without processing any on the return route.


Question 7

a) Draw the state diagram of a process from its creation to termination, including all transitions, and briefly elaborate on every state and every transition.

Answer: The following state diagram is taken from my class notes and was discussed in class. Fill in the elaboration for each state and transition by yourself.

b) What are preemptive and non-preemptive scheduling policies? Elaborate your answer.

Answer: With the non-preemptive scheduling policy, scheduling only occurs when a process enters the wait state or terminates. With the preemptive scheduling policy, scheduling also occurs when a process switches from running to ready due to an interrupt, and from waiting to ready (i.e., on I/O completion).

c) Why are the first-come-first-served and shortest-job-next scheduling policies considered as special cases of the priority scheduling policy? What is the major problem of the priority scheduling policy? How can this problem be overcome?

Answer: FIFO and SJN are both priority scheduling policies. The FIFO policy uses arrival time (i.e., earlier means higher priority), and the SJN policy uses CPU burst length (i.e., a shorter CPU burst means higher priority). The major problem of the priority scheduling policy is starvation, meaning lower-priority processes/jobs may not have a chance to run. Aging is a way to overcome starvation. Aging is a technique of gradually increasing the priority of processes/jobs that wait in the system for a long time. With aging, a low-priority process that waits for a long time can eventually receive a priority high enough to run.


Question 8

a) Five processes A, B, C, D and E arrived in this order at the same time with the following CPU burst and priority values. A smaller value means a higher priority.

Fill the entries of the following table with the waiting time and average waiting time for each indicated scheduling policy and each process. Ignore context-switching overhead.

Answer:

The above diagram shows the execution pattern of the round-robin algorithm with time quantum 2, where dashed arrows indicate waiting periods.


b) A system has four processes P1, P2, P3 and P4, and two resource types R1 and R2, each of which has two instances. Suppose further P1 is requesting an instance of R1 and allocated an instance of R2; P2 is allocated an instance of R1; P3 is requesting an instance of R2 and allocated an instance of R1; and P4 is allocated an instance of R2. Do the following two problems: (1) Draw the resource-allocation graph, and (2) Does this system have a deadlock?

Answer: The resource-allocation graph below uses circles and rectangles to represent processes and resources, respectively. A solid arrow from a resource to a process indicates an allocated resource, while a dashed arrow from a process to a resource indicates a resource allocation request (i.e., not yet allocated).

Note that the system is not in a deadlock state even though the resource-allocation graph has a cycle P1 → R1 → P3 → R2 → P1, because this cycle can be removed. If P2, a process that does not require an additional resource, is allowed to run, it will return an instance of R1. Similarly, P4 can run and return an instance of R2. At this point, P1 and P3 can have their requested resources and run to completion. In other words, we have found a safe sequence < P2, P4, P1, P3 >. Of course, there are other safe sequences.

c) Consider the following snapshot of a system with four resource types R1, R2, R3 and R4, and four processes A, B, C and D:

Is the system in a deadlock state? If the system is in a deadlock state, list all processes involved in the deadlock. Show your computation step by step.

Answer: Since Available = [0, 0, 0, 0], only C can run, as its request is [0, 0, 0, 0]. After C completes, it returns its allocation [1, 0, 1, 1], making the new Available = [0, 0, 0, 0] + [1, 0, 1, 1] = [1, 0, 1, 1]. Now, D can run because its request is [1, 0, 1, 0], which is no greater than Available = [1, 0, 1, 1]. After D's completion, it returns its allocation [0, 0, 0, 0]. As a result, Available is still [1, 0, 1, 1]. At this time, neither A nor B can run, because both A and B require one instance of R2, which is not available. Therefore, the system is in a deadlock state, and the involved processes are A and B.

d) Consider the following snapshot of a system:

Is this system in a safe state? Show your computation step-by-step; Answer: The following shows the steps to find a safe sequence (i.e., banker’s algorithm). Note that we always search for a candidate in the order of A, B, C, D and E.

Since Available = [3,4,0,0] is no smaller (component-wise) than E's Need = [3,3,0,0], E can run. After E completes, Available = [3,4,0,0] + [1,0,2,1] = [4,4,2,1].

Since Available = [4,4,2,1] is no smaller than A's Need = [0,0,0,1], A can run. After A completes, Available = [4,4,2,1] + [1,1,0,2] = [5,5,2,3].

Since Available = [5,5,2,3] is no smaller than B's Need = [1,1,2,2], B can run. After B completes, Available = [5,5,2,3] + [0,1,2,3] = [5,6,4,6].

Since Available = [5,6,4,6] is no smaller than D's Need = [5,0,0,0], D can run. After D completes, Available = [5,6,4,6] + [0,0,1,1] = [5,6,5,7].

Since Available = [5,6,5,7] is no smaller than C's Need = [0,4,5,1], C can run. Therefore, if the five processes are run in the order E, A, B, D, C, all of them can finish and the system is safe (i.e., < E, A, B, D, C > is a safe sequence). Note that a safe sequence is not unique.
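The safety check walked through above can be sketched directly in Python. The Need and Allocation rows are taken from the worked steps; C's Allocation row is not quoted in the text, so a hypothetical [0, 0, 0, 0] stands in (it cannot change the outcome, since C is scheduled last and nothing runs after it):

```python
def safe_sequence(available, need, alloc, order):
    """Banker's safety algorithm: return a safe sequence, or None if unsafe."""
    work = list(available)
    finished = set()
    seq = []
    while len(seq) < len(order):
        for p in order:                              # scan candidates in fixed order
            if p not in finished and all(n <= w for n, w in zip(need[p], work)):
                work = [w + a for w, a in zip(work, alloc[p])]  # p finishes, frees its allocation
                finished.add(p)
                seq.append(p)
                break                                # restart the scan from the front
        else:
            return None                              # no runnable candidate: unsafe
    return seq

# Need and Allocation rows quoted in the worked steps above.
need  = {"A": [0, 0, 0, 1], "B": [1, 1, 2, 2], "C": [0, 4, 5, 1],
         "D": [5, 0, 0, 0], "E": [3, 3, 0, 0]}
alloc = {"A": [1, 1, 0, 2], "B": [0, 1, 2, 3], "C": [0, 0, 0, 0],  # C's row: hypothetical
         "D": [0, 0, 1, 1], "E": [1, 0, 2, 1]}

print(safe_sequence([3, 4, 0, 0], need, alloc, "ABCDE"))  # → ['E', 'A', 'B', 'D', 'C']
```

As the text notes, the safe sequence found depends on the scan order and is not unique.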

e) Consider the following snapshot of a system with four resource types R1, R2, R3 and R4, and four processes A, B, C and D:

Is the system in a deadlock state? If the system is in a deadlock state, list all processes involved in the deadlock. Show your computation step-by-step. Answer: Since Available = [0, 0, 0, 0], only C can run, as its request is [0, 0, 0, 0]. After C completes, it returns its allocation [1, 0, 1, 1], making the new Available = [0, 0, 0, 0] + [1, 0, 1, 1] = [1, 0, 1, 1]. Now, D can run because its request [1, 0, 1, 0] is no greater than Available = [1, 0, 1, 1]. After D's completion, it returns its allocation [0, 0, 0, 0]. As a result, Available is still [1, 0, 1, 1]. At this time, neither A nor B can run, because both require one instance of R2, which is not available. Therefore, the system is in a deadlock state, and the involved processes are A and B.
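The reduction argument used in this answer can be sketched as a detection routine. Only C's and D's rows are quoted in the text, so A's and B's Request and Allocation rows below are hypothetical stand-ins chosen to preserve the stated facts (each requests one instance of R2, and no R2 instance is ever released):

```python
def detect_deadlock(available, request, alloc):
    """Reduction-style deadlock detection: repeatedly let any process whose
    outstanding request can be met finish and release its allocation;
    whatever cannot be reduced away is deadlocked."""
    work = list(available)
    unfinished = set(request)
    progressed = True
    while progressed:
        progressed = False
        for p in sorted(unfinished):                 # sorted() snapshots, so discard is safe
            if all(r <= w for r, w in zip(request[p], work)):
                work = [w + a for w, a in zip(work, alloc[p])]
                unfinished.discard(p)
                progressed = True
    return unfinished                                # the deadlocked processes

request = {"A": [0, 1, 0, 0], "B": [0, 1, 0, 0],     # hypothetical: one R2 each
           "C": [0, 0, 0, 0], "D": [1, 0, 1, 0]}     # C's and D's rows from the text
alloc   = {"A": [1, 0, 0, 0], "B": [0, 0, 1, 0],     # hypothetical rows
           "C": [1, 0, 1, 1], "D": [0, 0, 0, 0]}     # C's and D's rows from the text

print(sorted(detect_deadlock([0, 0, 0, 0], request, alloc)))  # → ['A', 'B']
```

The routine reduces C and then D exactly as the answer does, leaving A and B deadlocked.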

Question 9

a) Define external and internal fragments. Consider the following memory management schemes: fixed-size partitions, variable-size partitions, and paging. Which schemes have external fragments, and which schemes have internal fragments? Why?

Answer: An external fragment is an unused memory block between two allocated (i.e., used) ones. An internal fragment is an unused memory area within an allocated memory block. Fixed-size partitions and paging do not have external fragments because all partitions and page frames are pre-allocated with fixed sizes. However, they may have internal fragments, since a process may not use all of the allocated space. Variable-size partitions do not have internal fragments, but they do have external fragments. Note that even though the variable-size partition scheme may allocate a bit more memory than requested, say to fit a boundary-alignment requirement, we still consider its allocation exact.
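As a small numeric illustration of internal fragmentation under paging (the process size here is hypothetical), a process rarely fills its last 4K page:

```python
import math

PAGE = 4096  # 4K page frames, as in part (b) below

def internal_fragment(process_size):
    """Unused bytes in the last page frame allocated to the process."""
    frames = math.ceil(process_size / PAGE)   # whole frames must be allocated
    return frames * PAGE - process_size

print(internal_fragment(11034))  # 3 frames = 12288 bytes allocated → 1254 wasted
print(internal_fragment(8192))   # exact multiple of the page size → 0 wasted
```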

b) A paging system uses 16-bit addresses and 4K pages. The following shows the page tables of two running processes, Process 1 and Process 2. Translate the logical addresses in the table below to their corresponding physical addresses, and fill the table entries with your answers.


Answer: Consider the logical address 11034 generated by process 1. Since the page size is 4K = 4096, logical address 11034 is in page 2 (the integer quotient of 11034/4096), and the offset is 11034 - 2 × 4096 = 2842. From process 1's page table, page 2 is in page frame 1, and, hence, the corresponding physical address is 1 × 4096 + 2842 = 6938. The second logical address, 12345, is translated the same way with process 2's page table.
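The translation in this answer can be sketched in a few lines. The page table below contains only the single entry quoted for process 1 (page 2 → frame 1); process 2's table is not reproduced in the text:

```python
PAGE_SIZE = 4096  # 4K pages: a 16-bit address splits into a 4-bit page number and 12-bit offset

def translate(page_table, logical):
    """Map a logical address to a physical address via the page table."""
    page, offset = divmod(logical, PAGE_SIZE)     # page number and offset within the page
    return page_table[page] * PAGE_SIZE + offset  # KeyError would mean an invalid page

process1 = {2: 1}                  # page 2 -> frame 1, from the answer above
print(translate(process1, 11034))  # → 6938
```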

c) Given memory holes (i.e., unused memory blocks) of 100K, 500K, 200K, 300K and 600K (in address order) as shown below, how would each of the first-fit, next-fit, best-fit and worst-fit algorithms allocate memory requests for 310K, 80K, 350K and 230K (in this order)? The shaded areas are used/allocated regions and are not available. You should clearly write down the size of each memory block and indicate its status (i.e., allocated or free).

i. First-fit:

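The answer tables for this part were figures in the original transcript and did not survive extraction, so the placements can be recomputed. A sketch of all four policies on the given holes; note that next-fit conventions vary, and the one assumed here resumes the scan at the remainder of the hole used last:

```python
def place(holes, requests, policy):
    """Return (chosen hole index per request, final hole sizes)."""
    holes = list(holes)
    rover = 0                                       # next-fit scan position
    chosen = []
    for req in requests:
        fits = [i for i, h in enumerate(holes) if h >= req]
        if not fits:
            chosen.append(None)                     # request cannot be satisfied
            continue
        if policy == "first":
            i = fits[0]
        elif policy == "best":
            i = min(fits, key=lambda j: holes[j])   # tightest fit
        elif policy == "worst":
            i = max(fits, key=lambda j: holes[j])   # loosest fit
        else:                                       # "next"
            later = [j for j in fits if j >= rover]
            i = later[0] if later else fits[0]      # wrap around if needed
            rover = i
        holes[i] -= req                             # carve the request out of the hole
        chosen.append(i)
    return chosen, holes

holes, requests = [100, 500, 200, 300, 600], [310, 80, 350, 230]  # sizes in K
for policy in ("first", "next", "best", "worst"):
    print(policy, place(holes, requests, policy))
```

Under first-fit, for example, 310K goes into the 500K hole (leaving 190K), 80K into the 100K hole, 350K into the 600K hole, and 230K into the 300K hole, so every request is satisfied.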

Question 10

a) What is virtual memory? Answer: A set of techniques and hardware that allow a program to execute even when it is not entirely in memory.

b) List cases where, traditionally, the entire program need not be in memory. Answer: Options of a program that are rarely used; many error-handling sections; large arrays, lists, and tables of which only a small portion is used.

c) List benefits of having only part of a program in memory. Answer: It simplifies the programming task; more user programs can run concurrently in the newly freed memory; there is less swapping of entire programs, and thus less I/O.

d) What is demand paging? Answer: A design in which a page is never swapped into memory unless it is needed.

e) List advantages of demand paging. Answer: It decreases swap time and the amount of physical memory needed, and allows a higher degree of multiprogramming.

f) Why is there a valid/invalid bit? Where is it kept? Answer: To indicate whether an address is invalid or a page is swapped out. It is kept in the page-frame table.

g) What is a page fault? Answer: An interrupt caused by a program needing a specific page that is not yet in memory.

h) List six steps to process a page fault. Answer: Check the page-frame table in the PCB. If the address is invalid, abort the program. If the address is valid but its page is not resident, bring it in: find a free frame, request I/O for the desired page, update the page-frame table in the PCB, and restart the instruction.

i) Indicate the stages of instruction execution at which a page fault can occur. Answer: While fetching the instruction; while fetching the operands; while storing data to memory.

j) How do you compute the effective access time for a demand-paging system? Answer: Let p = probability of a page fault, t = memory access time, and f = page-fault time. Then effective access time = (1 - p) * t + p * f.

k) What factors determine f, the page-fault time? Answer: Time to service the interrupt, time to swap in the page, and time to restart the process.
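The effective-access-time formula can be checked with hypothetical numbers (a 100 ns memory access and an 8 ms page-fault service time; neither figure is from the text):

```python
def effective_access_time(p, t, f):
    """EAT = (1 - p) * t + p * f, with t and f in the same units."""
    return (1 - p) * t + p * f

t = 100           # ns per memory access (hypothetical)
f = 8_000_000     # ns per page fault, i.e. 8 ms (hypothetical)
print(effective_access_time(0.001, t, f))  # one fault per 1000 accesses
```

Even a fault rate of one in a thousand inflates the average access time from 100 ns to roughly 8 µs, which is why the page-fault rate must be kept very low.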