CSCI/CMPE 4334 Operating Systems
Review: Final Exam
Review
• Chapters 1 ~ 12 in your textbook
• Lecture slides
• In-class exercises 1~10 (on the course website)
• Review slides
– Review for the two mid-term exams
– Review for the final exam
Review
• 5 questions (100 points) + 1 bonus question (20 points)
– Process scheduling, concurrent processing
– Memory allocation, memory management policy
– I/O device management
• Question types
– Q/A
Time & Place & Event
• 1:15pm ~ 3pm, May 12 (Tuesday)
• ENGR 1.272
• Closed-book exam
Basic OS Organization
[Figure: Basic OS organization — the Process, Thread & Resource Manager, Memory Manager, Device Manager, and File Manager manage the Processor(s), Main Memory, and Devices]
Chapter 5: Device Management
• Abstract all devices (and files) to a few interfaces
• Device management strategies
• Buffering & Spooling
• Access time of disk devices (see lecture slides and exercises)
– FCFS, SSTF, Scan/Look, Circular Scan/Look
– See examples in Exercise (2)
[Figure: Disk geometry — (a) multi-surface disk, (b) disk surface divided into tracks and sectors, (c) cylinders; a cylinder is the set of tracks at the same position on every surface]
Several techniques are used to try to optimize the time required to service an I/O request:
• FCFS – first come, first served: requests are serviced in the order received; no optimization.
• SSTF – shortest seek time first: find the track "closest" to the current track; could prevent distant requests from being serviced.
• Scan/Look: starts at track 0, moving toward the last track, servicing requests for tracks as it passes them, and reverses direction when it reaches the last track. Look is a variant that ceases the scan in a direction once the last track request in that direction has been handled.
• Circular Scan/Look: similar to Scan/Look but always moves in the same direction and doesn't reverse; relies on the existence of a special homing command in the device hardware.
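The policies above can be compared by total head movement. A minimal Python sketch of FCFS and SSTF; the request queue and starting head position are made-up example values, not taken from the exercises:

```python
# Sketch of two disk-scheduling policies (FCFS and SSTF), compared by
# total head movement in tracks. The request queue and starting track
# are made-up example values.

def fcfs(start, requests):
    """Service requests in arrival order; sum the head movement."""
    total, pos = 0, start
    for track in requests:
        total += abs(track - pos)
        pos = track
    return total

def sstf(start, requests):
    """Always service the pending request closest to the current track."""
    total, pos, pending = 0, start, list(requests)
    while pending:
        track = min(pending, key=lambda t: abs(t - pos))
        pending.remove(track)
        total += abs(track - pos)
        pos = track
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # example request queue
print(fcfs(53, queue))   # 640 tracks of head movement
print(sstf(53, queue))   # 236 tracks of head movement
```

SSTF's much smaller total illustrates the trade-off on the slide: it cuts seek time but, by always choosing the closest track, can keep postponing a distant request.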
Chapter 6: Implementing Processes, Threads, and Resources
• Modern process
– Concurrent processing
– Fully utilize the CPU and I/O devices
• Address space
• Context switching
• State diagram
Process Manager
[Figure: Process Manager — the Process Manager turns a program into a process with a process description, providing an abstract computing environment on top of the CPU and other hardware; it encompasses the scheduler, synchronization, deadlock handling, protection, and resource managers for memory and devices, and cooperates with the File, Memory, and Device Managers]
Chapter 7: Scheduling
• Scheduling methods (see lecture slides and exercises)
– First-come, first-served (FCFS)
– Shortest job first (SJF) or shortest job next (SJN)
– Highest-priority job first
– Job with the closest deadline first
Process Model and Metrics
• P is a set of processes p0, p1, ..., pn-1
• S(pi) is the state of pi ∈ {running, ready, blocked}
• τ(pi) is the service time: the amount of time pi needs to be in the running state before it is completed
• W(pi) is the waiting time: the time pi spends in the ready state before its first transition to the running state
• TTRnd(pi) is the turnaround time: the amount of time between the moment pi first enters the ready state and the moment the process exits the running state for the last time
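These metrics can be computed directly for a non-preemptive FCFS schedule. A minimal sketch, assuming every process enters the ready state at time 0; the service times are made-up example values:

```python
# Sketch: waiting time W(p_i) and turnaround time TTRnd(p_i) under
# non-preemptive FCFS, assuming all processes enter the ready state at
# time 0. Service times tau are made-up example values.

def fcfs_metrics(service_times):
    """Return (waiting, turnaround) lists for FCFS with arrival time 0."""
    waiting, turnaround, clock = [], [], 0
    for tau in service_times:
        waiting.append(clock)       # W(p_i): time spent in the ready state
        clock += tau                # p_i runs to completion
        turnaround.append(clock)    # TTRnd(p_i): ready -> last exit of running
    return waiting, turnaround

tau = [350, 125, 475, 250, 75]      # example service times
W, TTRnd = fcfs_metrics(tau)
print(W)        # [0, 350, 475, 950, 1200]
print(TTRnd)    # [350, 475, 950, 1200, 1275]
print(sum(TTRnd) / len(TTRnd))      # average turnaround: 850.0
```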
Chapter 8: Basic Synchronization
• Critical sections
– Ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section; this property is called mutual exclusion
• Requirements for critical-section solutions
– Mutual exclusion
– Progress
– Bounded waiting
Chapter 8: Basic Synchronization (cont'd)
• Semaphore and its P() and V() functions
– Definition
– Usage
Structure of process Pi:

repeat
  entry section
  critical section
  exit section
  remainder section
until false;
• Mutual exclusion: if process Pi is executing in its critical section, then no other process can be executing in its critical section.
• Progress: if no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
• Bounded waiting: a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
A semaphore, s, is a nonnegative integer variable that can only be changed or tested by these two indivisible (atomic) functions:

P(s): [while (s == 0) {wait}; s = s - 1]
V(s): [s = s + 1]
semaphore mutex = 1;
fork(proc_0, 0);
fork(proc_1, 0);

proc_0() {                          proc_1() {
  while (TRUE) {                      while (TRUE) {
    <compute section>;                  <compute section>;
    P(mutex);                           P(mutex);
    <critical section>;                 <critical section>;
    V(mutex);                           V(mutex);
  }                                   }
}                                   }

Example: protecting a shared balance:

semaphore mutex = 1;
fork(proc_0, 0);
fork(proc_1, 0);

proc_0() {                          proc_1() {
  ...                                 ...
  /* Enter the CS */                  /* Enter the CS */
  P(mutex);                           P(mutex);
  balance += amount;                  balance -= amount;
  V(mutex);                           V(mutex);
  ...                                 ...
}                                   }
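The balance example can be run for real in Python, with threading.Semaphore playing the role of the semaphore (acquire() is P(), release() is V()); the amount and iteration count are made-up example values:

```python
# Runnable analogue of the semaphore pseudocode: two threads update a
# shared balance inside a critical section guarded by a semaphore
# initialized to 1. acquire() is P(), release() is V(). The amount and
# iteration count are made-up example values.
import threading

balance = 0
mutex = threading.Semaphore(1)   # semaphore mutex = 1;

def proc(amount, iterations=100_000):
    global balance
    for _ in range(iterations):
        mutex.acquire()          # P(mutex): enter the critical section
        balance += amount        # critical section
        mutex.release()          # V(mutex): leave the critical section

t0 = threading.Thread(target=proc, args=(+1,))   # proc_0: balance += amount
t1 = threading.Thread(target=proc, args=(-1,))   # proc_1: balance -= amount
t0.start(); t1.start()
t0.join(); t1.join()
print(balance)   # 0 — the semaphore prevented any lost updates
```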
Chapter 9: High-Level Synchronization
• Simultaneous semaphores: Psimultaneous(S1, ..., Sn)
• Events
– wait()
– signal()
• Monitors
– What is a condition variable? How is it used?
– What are the 3 operations on a condition variable?
Chapter 10: Deadlock
• 4 necessary conditions for deadlock
– Mutual exclusion
– Hold and wait
– No preemption
– Circular wait
• How to deal with deadlock in an OS
– Prevention
– Avoidance
– Recovery
Chapter 11: Memory Management
• Functions of the memory manager
– Abstraction
– Allocation
– Address translation
– Virtual memory
• Memory allocation (see lecture slides and exercises)
– Fixed partition method
– Variable partition method
– First-fit, next-fit, best-fit, worst-fit
Chapter 11: Memory Management (cont'd)
• Dynamic address space binding
– Static binding
– Dynamic binding
Primary & Secondary Memory
[Figure: Primary and secondary memory — the CPU can load/store directly from primary (executable) memory, e.g. RAM; the control unit executes code from this transient storage. Secondary memory, e.g. disk or tape, is persistent storage accessed using I/O operations.]
Information can be loaded statically or dynamically.
Functions of Memory Manager
• Allocate primary memory space to processes
• Map the process address space into the allocated portion of the primary memory
• Minimize access times using a cost-effective amount of primary memory
• May use static or dynamic techniques
Memory Manager
• Only a small number of interface functions is provided – usually calls to:
– Request/release primary memory space
– Load programs
– Share blocks of memory
• Provides the following:
– Memory abstraction
– Allocation/deallocation of memory
– Memory isolation
– Memory sharing
Memory Abstraction
• Process address space
– Allows a process to use an abstract set of addresses to reference physical primary memory
[Figure: Process address space mapped onto hardware primary memory; some addresses are mapped to objects other than memory]
Memory layout / organization: how do we divide the memory into blocks for allocation?
• Fixed partition method: divide the memory once, before any bytes are allocated.
• Variable partition method: divide it up as the memory is being allocated.
Memory allocation: select which piece of memory to allocate to a request.
Memory organization and memory allocation are closely related.
[Figure: Partitioned memory — the operating system and regions 0–3 begin at addresses N0–N3; process pi requests ni units of memory]
How to satisfy a request of size n from a list of free blocks:
• First-fit: allocate the first hole that is big enough.
• Next-fit: starting from the previous allocation, choose the next block that is large enough.
• Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.
• Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
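A minimal sketch of how the fit policies pick a hole; the free-list sizes and request size are made-up example values (next-fit is omitted, since it only adds a remembered starting position to first-fit):

```python
# Sketch of first-fit, best-fit, and worst-fit selection over a list of
# free holes. Each policy returns the index of the chosen hole, or None
# if no hole is large enough. Hole sizes are made-up example values.

def first_fit(holes, n):
    """First hole that is big enough."""
    return next((i for i, h in enumerate(holes) if h >= n), None)

def best_fit(holes, n):
    """Smallest hole that is big enough (smallest leftover hole)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Largest hole (largest leftover hole)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # example free list (sizes in KB)
print(first_fit(holes, 212))   # 1 -> the 500 KB hole
print(best_fit(holes, 212))    # 3 -> the 300 KB hole
print(worst_fit(holes, 212))   # 4 -> the 600 KB hole
```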
[Figure: Relocation register — the CPU executes "load R1, 0x02010"; the relative address 0x02010 is added to the relocation register contents 0x10000, and the MAR receives the physical address 0x12010]
• Program loaded at 0x10000 → relocation register = 0x10000
• Program loaded at 0x04000 → relocation register = 0x04000
• We never have to change the load-module addresses!
• Performed automagically by the processor
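The translation itself is just one addition per reference, which is why the hardware can do it transparently. A tiny sketch using the slide's own numbers:

```python
# Sketch of relocation-register address translation: the physical
# address is the relocation (base) register plus the program's
# relative address.

def relocate(relocation_register, relative_address):
    """Hardware adds the base register to every relative address."""
    return relocation_register + relative_address

# "load R1, 0x02010" with the program loaded at 0x10000:
print(hex(relocate(0x10000, 0x02010)))   # 0x12010

# Reloading the same program at 0x04000 only changes the register;
# the load-module address 0x02010 is untouched:
print(hex(relocate(0x04000, 0x02010)))   # 0x6010
```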
[Figure: Swapping — the memory image for pi is swapped out to secondary memory and the image for pj is swapped in]
A characteristic of programs that is very important to the strategy used by virtual memory systems is spatial reference locality:
• Refers to the implicit partitioning of code and data segments due to the functioning of the program (a portion for initializing data, another for reading input, others for computation, etc.)
• Can be used to select which parts of the process should be loaded into primary memory
Chapter 12: Virtual Memory
• Address translation between virtual memory and physical memory
• Terms
– Page
– Page frame
• Paging algorithms
– Static allocation
– Dynamic allocation
Chapter 12: Virtual Memory (cont'd)
• Fetch policy – when a page should be loaded
– Demand paging
• Replacement policy – which page is unloaded (see lecture slides and exercises)
– Policies: random, Belady's optimal algorithm*, LRU*, LFU, FIFO, LRU approximation
• Placement policy – where a page should be loaded
Chapter 12: Virtual Memory (cont'd)
• Thrashing
• Load control

[Figure: The memory image for pi moves between secondary memory and primary memory]
Since binding changes with time, the system uses a dynamic virtual address map, parameterized by time t, from the virtual address space into primary memory.

[Figure: Demand paging — (1) the process references address k in page i of its virtual address space (supervisor space / user space); (2) the virtual memory system looks up (page i, addr k); (3) translates it to (page frame j, addr k); (4) the reference proceeds to (page frame j, addr k) in primary memory. Missing pages are brought in from the paging disk (secondary memory).]
The virtual memory system transfers "blocks" of the address space to/from primary memory:
• Fixed-size blocks: system-defined pages are moved back and forth between primary and secondary memory.
• Variable-size blocks: programmer-defined segments – corresponding to logical fragments – are the unit of movement.
Paging is the commercially dominant form of virtual memory today.
A page is a fixed-size, 2^h, block of virtual addresses.
A page frame is a fixed-size, 2^h, block of physical memory (the same size as a page).
When a virtual address, x, in page i is referenced by the CPU:
• If page i is loaded at page frame j, the virtual address is relocated to page frame j.
• If page i is not loaded, the OS interrupts the process and loads the page into a page frame.
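This split-and-lookup can be sketched in a few lines of Python; the page size and page-table contents here are made-up example values:

```python
# Sketch of virtual-to-physical translation with 2**h-byte pages: the
# virtual address splits into a page number (upper bits) and a line
# number (lower h bits); a page table maps page -> frame. The page
# size and page-table contents are made-up example values.

H = 12                             # h: page size = 2**12 = 4096 bytes
page_table = {0: 5, 1: 2, 2: 7}    # page i -> page frame j (example)

def translate(vaddr):
    """Relocate a virtual address to its physical address."""
    page, line = vaddr >> H, vaddr & ((1 << H) - 1)
    if page not in page_table:     # missing page: the OS would interrupt
        raise LookupError(f"page fault on page {page}")  # and load it
    return (page_table[page] << H) | line

print(hex(translate(0x1ABC)))   # page 1, line 0xABC -> frame 2 -> 0x2abc
```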
[Figure: Page translation — the CPU emits a virtual address consisting of a page # (g bits) and a line # (h bits); the page table maps the page # to a frame # (j bits), and the physical address (frame #, line #) is placed in the MAR to reference memory; a missing page triggers a page fault]
Two basic types of paging algorithms:
• Static allocation
• Dynamic allocation
Three basic policies define any paging algorithm:
• Fetch policy – when a page should be loaded
• Replacement policy – which page is unloaded
• Placement policy – where a page should be loaded
The fetch policy determines when a page should be brought into primary memory.
• We usually don't have prior knowledge about what pages will be needed.
• The majority of paging mechanisms use a demand fetch policy: a page is loaded only when the process references it.
When there is no empty page frame in memory, we need to find one to replace:
• Write the page out to the swap area if it has been changed since it was read in from the swap area; a dirty bit (or modified bit) tracks this.
• Pages that have been changed are referred to as "dirty"; these pages must be written out to disk because the disk version is out of date. This is called "cleaning" the page.
• Deciding which page to remove from memory to make room for a new page requires a page replacement algorithm.
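Two of the replacement policies listed earlier, FIFO and LRU, can be compared by counting page faults on a reference string. A minimal sketch; the reference string and frame count are made-up example values:

```python
# Sketch comparing two page-replacement policies — FIFO and LRU — by
# counting page faults on a reference string. The reference string and
# frame count are made-up example values.
from collections import OrderedDict

def count_faults(refs, frames, policy):
    """policy 'fifo' evicts the oldest-loaded page; 'lru' evicts the
    least recently used one."""
    resident = OrderedDict()   # order tracks load age (fifo) or recency (lru)
    faults = 0
    for page in refs:
        if page in resident:
            if policy == "lru":
                resident.move_to_end(page)   # refresh recency on a hit
            continue
        faults += 1                          # page fault: page not loaded
        if len(resident) == frames:
            resident.popitem(last=False)     # evict the front of the queue
        resident[page] = True
    return faults

refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
print(count_faults(refs, 3, "fifo"))   # 9 faults
print(count_faults(refs, 3, "lru"))    # 7 faults
```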
There are too many processes in memory and no process has enough memory to run. As a result, the page-fault rate is very high and the system spends all of its time handling page-fault interrupts.
Good Luck!
Q/A