Exam2 Review
Bernard Chen, Spring 2007

Deadlock Example
Semaphores A and B, initialized to 1; P0 P1...
Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously:
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
Resource-Allocation Graph
A set of vertices V and a set of edges E.
V is partitioned into two types:
P = {P1, P2, ..., Pn}, the set consisting of all the processes in the system.
R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
Resource-Allocation Graph
E is also partitioned into two types:
request edge: directed edge Pi → Rj
assignment edge: directed edge Rj → Pi
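As a concrete illustration of the two edge types, a resource-allocation graph can be held as sets of directed edges. The sketch below (process/resource names and the sample configuration are invented, not from the slides) builds a small graph in which P1 waits for R1 held by P2, and P2 waits for R2 held by P1:

```python
# Hypothetical resource-allocation graph as two edge sets (sample data invented).
request_edges = {("P1", "R1"), ("P2", "R2")}      # Pi -> Rj: Pi requests Rj
assignment_edges = {("R1", "P2"), ("R2", "P1")}   # Rj -> Pi: Rj is held by Pi

# Merge both edge types into one directed adjacency map:
# P1 -> R1 -> P2 -> R2 -> P1, which forms a cycle.
graph = {}
for src, dst in request_edges | assignment_edges:
    graph.setdefault(src, set()).add(dst)

print(sorted(graph.items()))
```

The cycle in this configuration is exactly the circular-wait condition from the list above.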
7.4 Deadlock Prevention
If we ensure that at least one of the four necessary conditions cannot hold, we can prevent deadlock:
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
Resource-Allocation Graph Scheme
Claim edge Pi → Rj indicates that process Pi may request resource Rj in the future; represented by a dashed line.
Claim edge converts to request edge when a process requests a resource.
Request edge converted to an assignment edge when the resource is allocated to the process.
When a resource is released by a process, assignment edge reconverts to a claim edge.
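The claim → request → assignment life cycle described above can be sketched as a tiny state machine. This is only an illustration (function and edge names are invented); each edge is tagged with its current type:

```python
# Illustrative claim/request/assignment edge transitions (names invented).

def request(edges, p, r):
    # Claim edge converts to a request edge when the process asks for the resource.
    assert edges.get((p, r)) == "claim"
    edges[(p, r)] = "request"

def allocate(edges, p, r):
    # Request edge becomes an assignment edge, now directed Rj -> Pi.
    assert edges.pop((p, r)) == "request"
    edges[(r, p)] = "assignment"

def release(edges, p, r):
    # Assignment edge reconverts to a claim edge when the resource is released.
    assert edges.pop((r, p)) == "assignment"
    edges[(p, r)] = "claim"

edges = {("P1", "R1"): "claim"}   # dashed claim edge declared in advance
request(edges, "P1", "R1")
allocate(edges, "P1", "R1")
release(edges, "P1", "R1")
print(edges)                       # back to a single claim edge
```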
Banker’s Algorithm
Two algorithms need to be discussed:
1. Safety algorithm: checks whether the system is in a safe state
2. Resource-request algorithm
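The safety check can be sketched as follows: the system is safe if the processes can be ordered so that each one's remaining need fits within the currently free resources plus everything released by processes ordered before it. The data set below is a textbook-style example chosen for illustration, not taken from these slides:

```python
# Minimal sketch of the Banker's safety algorithm (example values assumed).

def is_safe(available, allocation, need):
    work = available[:]                 # resources currently free
    finish = [False] * len(allocation)
    order = []
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pi can run to completion; it then releases its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                progressed = True
    return all(finish), order

available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe, order = is_safe(available, allocation, need)
print(safe, order)   # True, one safe ordering of the processes
```

The resource-request algorithm would tentatively grant a request and then run this same check, rolling back if the resulting state is unsafe.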
Wait-for Graph
Maintain a wait-for graph:
1. Nodes are processes.
2. Pi → Pj if Pi is waiting for Pj.
Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock.
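The periodic cycle search can be implemented with a depth-first search that flags a back edge. A minimal sketch (the sample graphs are invented):

```python
# Cycle detection in a wait-for graph via DFS (sample graphs invented).

def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / finished
    color = {node: WHITE for node in graph}

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:    # back edge: cycle found
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 waits for P2, P2 for P3, P3 for P1: deadlock.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))       # False
```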
Binding of Instructions and Data to Memory
Compile time: If memory location known a priori, absolute code can be generated; must recompile code if starting location changes
Load time: Must generate relocatable code if memory location is not known at compile time
Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another.
Logical vs. Physical Address Space
The concept of a logical address space that is bound to a separate physical address space is central to proper memory management
Logical address–generated by the CPU; also referred to as virtual address
Physical address– address seen by the memory unit
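With execution-time binding, the mapping from logical to physical addresses is done by the MMU, in the simplest case with a relocation (base) register and a limit register. A sketch with assumed register values:

```python
# Logical-to-physical translation with base and limit registers (values assumed).

def translate(logical, base, limit):
    if logical >= limit:              # protection check: trap to the OS
        raise MemoryError("address beyond limit: trap to OS")
    return base + logical             # physical = relocation base + logical

print(translate(346, base=14000, limit=12000))   # -> 14346
```

The user program only ever sees logical addresses; the physical address 14346 never appears in the program's own address space.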
Contiguous Allocation
Main memory is usually divided into two partitions:
Resident operating system, usually held in low memory with interrupt vector
User processes then held in high memory
Memory Allocation
The simplest method for memory allocation is to divide memory into several fixed-sized partitions
Initially, all memory is available for user processes and is considered one large block of available memory, a hole.
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes:
First-fit: allocate the first hole that is big enough.
Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. (Produces the smallest leftover hole.)
Worst-fit: allocate the largest hole; must also search the entire list. (Produces the largest leftover hole.)
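The three strategies can be sketched over a free list of hole sizes; each function returns the index of the chosen hole, or None if no hole fits (the hole list and request size below are invented for illustration):

```python
# First-fit / best-fit / worst-fit over a list of free hole sizes (data invented).

def first_fit(holes, n):
    return next((i for i, h in enumerate(holes) if h >= n), None)

def best_fit(holes, n):
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None     # smallest adequate hole

def worst_fit(holes, n):
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None     # largest hole overall

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))   # 1: 500 is the first hole big enough
print(best_fit(holes, 212))    # 3: 300 leaves the smallest leftover
print(worst_fit(holes, 212))   # 4: 600 leaves the largest leftover
```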
Fragmentation
All these strategies for memory allocation suffer from external fragmentation: as processes are loaded into and removed from memory, the free memory space is broken into little pieces.
External fragmentation exists when there is enough total memory space to satisfy a request, but the available spaces are not contiguous.
Fragmentation
If a hole is 20,000 bytes and the next process requests 19,000 bytes, allocating the whole hole to the process loses 1,000 bytes.
This is called internal fragmentation: memory that is internal to a partition but is not being used.
Hardware Support on Paging
If we want to access location i, we must first index into the page table, which requires one memory access.
With this scheme, TWO memory accesses are needed to access a byte.
The standard solution is to use a special, small, fast cache, called Translation look-aside buffer (TLB) or associative memory
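The effect of the TLB can be sketched as a small cache in front of the page table: a hit costs only the access for the byte itself, while a miss pays the extra page-table access and then caches the translation. The page-table contents below are invented for illustration:

```python
# TLB in front of a page table (sample entries invented).

page_table = {0: 5, 1: 9, 2: 3}   # page number -> frame number
tlb = {}                          # small associative cache of recent translations
accesses = {"page_table": 0}

def lookup(page):
    if page in tlb:                        # TLB hit: no page-table access needed
        return tlb[page]
    accesses["page_table"] += 1            # TLB miss: extra memory access
    frame = page_table[page]
    tlb[page] = frame                      # cache the translation for next time
    return frame

lookup(1); lookup(1); lookup(2)
print(accesses["page_table"])   # 2: only the first access to each page missed
```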
Hierarchical Paging
Remember the example: a 32-bit machine with a page size of 4 KB. A logical address is divided into a page number consisting of 20 bits and a page offset consisting of 12 bits.
With two-level paging, the 20-bit page number is itself divided into a 10-bit outer page number and a 10-bit inner page number, giving a 10 | 10 | 12 split of the address.
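The 10 | 10 | 12 split can be extracted with shifts and masks. A sketch (the sample address is invented):

```python
# Split a 32-bit logical address into outer page number p1 (10 bits),
# inner page number p2 (10 bits), and offset d (12 bits).

def split(addr):
    p1 = addr >> 22               # top 10 bits: index into the outer page table
    p2 = (addr >> 12) & 0x3FF     # next 10 bits: index into the inner page table
    d = addr & 0xFFF              # low 12 bits: offset within the 4 KB page
    return p1, p2, d

print(split(0x00403ABC))          # p1 = 1, p2 = 3, d = 0xABC
```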
Segmentation
The user specifies each address by two quantities: a segment name and an offset
<segment-number, offset>
Compare with the paging scheme: there the user specifies only a single address, which is partitioned by the hardware into a page number and an offset, all invisible to the programmer
Segmentation
Although the user can refer to objects in the program by a two-dimensional address, the actual physical address space is still a one-dimensional sequence
Thus, we need to map the segment number to physical addresses
This mapping is effected by a segment table
In order to protect the memory space, each entry in the segment table has a segment base and a segment limit
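Putting the pieces together, translating a <segment-number, offset> pair through a segment table with base/limit protection can be sketched as follows (the table contents are assumed for illustration):

```python
# Segment-table translation with base/limit protection (table contents assumed).

segment_table = [
    {"base": 1400, "limit": 1000},   # segment 0 occupies [1400, 2400)
    {"base": 6300, "limit": 400},    # segment 1 occupies [6300, 6700)
]

def translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:     # offset outside the segment: trap to the OS
        raise MemoryError("offset beyond segment limit")
    return entry["base"] + offset    # physical = segment base + offset

print(translate(1, 53))              # -> 6353
```

The limit check is what makes the two-dimensional address safe: an offset past the end of a segment traps instead of silently reaching another segment's memory.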