ES_UNIT_4
8/12/2019 ES_UNIT_4
UNIT 4
2 Marks
1. WHAT IS AN RTOS?
An RTOS is an operating system for embedded systems that guarantees response times and manages event-controlled processes.
2. WHAT ARE RTOS BASIC SERVICES?
RTOS services:
Basic OS functions - PM, RM, MM, DM, FSM (process, resource, memory, device and file-system management), I/O, etc.
RTOS main functions - real-time task scheduling and latency control.
Time management - time allocation, time slicing and monitoring for efficiency.
Predictability - predicting time behaviour and initiating task synchronization.
Priorities management - allocation and inheritance.
IPC - synchronization of tasks using IPC.

3. WHY DO WE NEED AN RTOS?
We need an RTOS for the following reasons:
Efficient scheduling is needed for multiple tasks with time constraints.
Task synchronization is needed.
Interrupt latency control is needed.

4. ON WHAT OCCASIONS IS AN RTOS NOT NEEDED?
Small-scale embedded systems rarely use an RTOS. Instead of RTOS functions, standard C library functions can be used. Example: malloc(), free(), fopen(), fclose(), etc.
5. WHAT ARE RTOS TASK SCHEDULING MODELS?
Control flow strategy
Data flow strategy
Control-data flow strategy

6. WHAT ARE THE FEATURES OF CONTROL FLOW STRATEGY?
Complete control of inputs and outputs.
A co-operative scheduler adopts this strategy.
Worst-case latencies are well defined.

7. WHAT ARE THE FEATURES OF DATA FLOW STRATEGY?
Interrupt occurrences are predictable.
Task control is not deterministic. Example: a network.
A pre-emptive scheduler adopts this strategy.

8. WHAT ARE THE FEATURES OF CONTROL-DATA FLOW STRATEGY?
Task scheduler functions are designed with predefined time-out delays.
Worst-case latency is deterministic, because the maximum delay is predefined.
Cyclic co-operative scheduling, pre-emptive scheduling, fixed-time scheduling and dynamic real-time scheduling use this strategy.
9. WHAT ARE BASIC FUNCTIONS OF RTOS?
There are various functions available in an RTOS. They are as follows:
Kernel functions
Error handling functions
Service and system clock functions
Device drivers, network stack send and receive functions
Time and delay functions
Initiate and start functions
Task state switching functions
ISR functions
Memory functions
IPC functions

10. WHAT IS THE NEED FOR A TESTED RTOS?
While designing a complex embedded system, we need tested, bug-free code for the following:
Multiple task functions in C or C++.
Real time clock based software timers (RTCSWT).
Software for Co-operative scheduler.
Software for a Pre-emptive scheduler.
Device drivers and Device managers.
Functions for Inter Process Communications.
Network functions.
Error handling and exception handling functions.
Testing and System debugging software.
A readily available RTOS package provides the above functions, and a lot of time is saved by not coding the same.
11. WHAT ARE THE OPTIONS IN RTOS?
There are various options for an RTOS. They are as follows:
Own (in-house) RTOS
Linux-based RTOS
µC/OS-II
pSOS, VxWorks, Nucleus, Win CE, Palm OS

12. WHAT ARE THE DIFFERENT PHASES OF SYSTEM DEVELOPMENT METHODOLOGY?
To understand the reasons for this study we need to take a look at the different phases of the system
development methodology and observe where the characteristics of the RTOS emerge.
Four fundamental development phases come to light:
Analysis: determines WHAT the system or software has to do;
Design: HOW the system or software will satisfy these requirements;
Implementation: DOING IT, i.e. implementing the system or software;
Maintenance: USING IT, i.e. using the system or software.
In a waterfall model, one supposes that all these phases are consecutive. In practice, this is never possible. Most developments end up being chaotic, with all the phases being executed simultaneously. Adopting a pragmatic approach, the methodology used should just be a framework to
guide producing the correct documents, whilst performing appropriate reviews and audits at the right
time.
13. WHAT IS BOUNDED DISPATCH TIME?
When the system is not loaded, there will be just one thread waiting in a ready state to be executed. With higher loads, there might be multiple threads in the ready list. The dispatch time
should be independent of the number of threads in the list.
In a good real-time system design, taking into account the thread priorities, the list is organized when an element is added, so that when a dispatch occurs, the first thread in the list can be taken.
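The priority-ordered ready list described above can be sketched as follows. This is a minimal Python model, not the dispatcher of any particular RTOS; the class and names are invented for illustration:

```python
import bisect

class ReadyList:
    """Priority-ordered ready list: the sorted insert happens when a thread
    becomes ready, so dispatch is O(1) however many threads are waiting."""
    def __init__(self):
        self._items = []          # (priority, name); lower number = higher priority

    def make_ready(self, priority, name):
        bisect.insort(self._items, (priority, name))   # keep the list ordered on insert

    def dispatch(self):
        # The first element is always the highest-priority ready thread.
        return self._items.pop(0)[1] if self._items else None

rl = ReadyList()
rl.make_ready(30, "logger")
rl.make_ready(10, "motor_ctrl")
rl.make_ready(20, "ui")
print(rl.dispatch())   # motor_ctrl
```

The point of the sorted insert is exactly the one the text makes: the cost is paid when a thread becomes ready, keeping the dispatch time bounded and independent of the list length.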
14. WHAT IS THE NEED FOR A MAX NUMBER OF TASKS?
A task, thread or process may be considered as an OS object. Each object in the OS needs some memory space for the object definition. The more complex the object, the more attributes it will
have, and the bigger the definition space will be. If there is, for example, an MMU in the system, the mapping tables are extra attributes of the task, and more system space is needed for all this.
This definition space may be part of the system or part of the task. If it is part of the system, then in most cases the RTOS would like to reserve the maximum space it allocates to these tables. In this case, the maximum number of tasks which may coexist in the system is then a system parameter.
Another approach could be full dynamic allocation of this space. The maximum number of tasks is then only limited by the available memory in the system, shared among object tables, code, etc.
15. WHY IS MINIMUM RAM REQUIRED PER TASK?
Memory footprint is an important issue in an embedded system, despite the cost reductions in
silicon and disk memory these days. The size of the OS, or the system space necessary to run the OS
with all the objects defined, is important. A task needs RAM for the changing parts of the task control block (the task object definition) and for the stack and heap, to be capable of executing the program (which might be in ROM or RAM). It is the RTOS vendor's choice as to the minimum level of RAM to allocate for this. It is also up to the vendor to indicate the size of the minimum application
for the intended purpose of the RTOS.
16. WHAT IS THE NEED FOR MAXIMUM ADDRESSABLE MEMORY SPACE?
Each task can address a certain memory space. Each vendor may have a different memory model, depending on whether it relies on X86 segments or not. This depends largely on the product history of the RTOS, for instance whether it was initially developed for the flat address space of Motorola processors or for the segmented X86 Intel series. The 64K segments of an 8086 processor are an important limitation for modern software. Real-time systems that need to be backward-compatible with this type of hardware constitute a burden for the designer. On the other hand, it should be noted that this limitation is not imposed by the RTOS, making new designs possible outside the 64K-per-module space.
17.DRAW THE TASK STATE TRANSITION DIAGRAM
18.WHEN DOES THE PRIORITY INVERSION OCCURS?
Priority inversion is a situation wherein lower-priority tasks run, blocking higher-priority tasks that are waiting for a resource (mutex). The solution to priority inversion is priority inheritance.
19.What are Classical Problems of Synchronization?
Bounded-buffer problem
Readers and writers problem
Dining-philosophers problem

20. HOW CAN A DEADLOCK BE RECOVERED?
The deadlock condition can be recovered by,
Through pre-emption
Rollback
Killing processes

21. DEFINE PROCESS.
A process is a program that performs a specific function.
22. DEFINE TASK AND TASK STATE.
A task is a program within a process. It has the following states:
1. Ready
2. Running
3. Blocked
4. Idle

23. DEFINE TCB.
TCB stands for Task Control Block, which holds the control information for each task. There is a separate stack and program counter for each task.
24. WHAT IS A THREAD?
A thread, otherwise called a lightweight process (LWP), is a basic unit of CPU utilization; it comprises a thread id, a program counter, a register set and a stack. It shares with other threads belonging to the same process its code section, data section, and operating system resources such as open files and signals.
25. WHAT ARE THE BENEFITS OF MULTITHREADED PROGRAMMING?
The benefits of multithreaded programming can be broken down into four major categories:
Responsiveness
Resource sharing
Economy
Utilization of multiprocessor architectures

26. COMPARE USER THREADS AND KERNEL THREADS.
User threads:
User threads are supported above the kernel and are implemented by a thread library at the user level.
Thread creation and scheduling are done in user space, without kernel intervention; therefore user threads are fast to create and manage.
A blocking system call will cause the entire process to block.
Kernel threads:
Kernel threads are supported directly by the operating system.
Thread creation, scheduling and management are done by the operating system; therefore kernel threads are slower to create and manage compared to user threads.
If a thread performs a blocking system call, the kernel can schedule another thread in the application for execution.
27. DEFINE RTOS.
A real-time operating system (RTOS) is an operating system that has been developed for real-
time applications. It is typically used for embedded applications, such as mobile telephones,
industrial robots, or scientific research equipment.
28. DEFINE TASK AND TASK RATES.
An RTOS facilitates the creation of real-time systems, but does not guarantee that they are real-time; this requires correct development of the system-level software. Nor does an RTOS necessarily have high throughput; rather, through specialized scheduling algorithms and deterministic behaviour, it guarantees that system deadlines can be met. That is, an RTOS is valued more for how quickly it can respond to an event than for the total amount of work it can do. Key factors in evaluating an RTOS are therefore maximal interrupt and thread latency.
29. DEFINE INTER PROCESS COMMUNICATION.
Inter-process communication (IPC) is a set of techniques for the exchange of data among
multiple threads in one or more processes. Processes may be running on one or more computers
connected by a network. IPC techniques are divided into methods for message passing, synchronization, shared memory, and remote procedure calls (RPC). The method of IPC used may vary based on the bandwidth and latency of communication between the threads, and the type of data
being communicated.
30. DEFINE SEMAPHORE.
A semaphore S is a synchronization tool which is an integer value that, apart from
initialization, is accessed only through two standard atomic operations: wait and signal.
Semaphores can be used to deal with the n-process critical-section problem. They can also be used to solve various synchronization problems.
The classic definitions of wait and signal:
wait (S) {
    while (S <= 0)
        ;    /* busy-wait */
    S--;
}
signal (S) {
    S++;
}
35. WHAT IS PRIORITY INHERITANCE?
Priority inheritance is a method for eliminating priority inversion problems. Using this
programming method, a process scheduling algorithm will increase the priority of a process to the maximum priority of any process waiting for any resource on which the process has a resource lock.
36. DEFINE MESSAGE QUEUE.
A message queue is a buffer managed by the operating system. Message queues allow a variable number of messages, each of variable length, to be queued. Tasks and ISRs can send messages to a message queue, and tasks can receive messages from a message queue (if it is non-empty). Queues can use a FIFO (first in, first out) policy or can be based on priorities. Message queues provide an asynchronous communications protocol.
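Both delivery policies mentioned above can be illustrated with Python's standard queue module. This is only a sketch of the two policies; a real RTOS message queue would additionally be usable from ISRs:

```python
import queue

mq = queue.Queue()               # FIFO policy: messages come out in arrival order
mq.put("first")
mq.put("second")
print(mq.get())                  # first

pq = queue.PriorityQueue()       # priority policy: lowest priority number first
pq.put((2, "routine report"))
pq.put((1, "urgent alarm"))
print(pq.get()[1])               # urgent alarm
```

Both queues also block the receiver when empty, which is the asynchronous behaviour the text describes: the sender never waits for the receiver to be ready.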
37. DEFINE MAILBOX AND PIPE.
Mailboxes are software-engineering components used for interprocess communication, or for inter-thread communication within the same process. A mailbox is a combination of a semaphore and a message queue (or pipe).
A message queue is the same as a pipe, with the only difference that a pipe is byte-oriented while a queue can carry messages of any size.
38. DEFINE SOCKET.
A socket is an endpoint for communications between tasks; data is sent from one socket to
another.
39. DEFINE REMOTE PROCEDURE CALL.
Remote Procedure Call (RPC) is a facility that allows a process on one machine to call a procedure that is executed by another process on either the same machine or a remote machine. Internally, RPC uses sockets as the underlying communication mechanism.
40. DEFINE THREAD CANCELLATION & TARGET THREAD.
Thread cancellation is the task of terminating a thread before it has completed. A thread
that is to be cancelled is often referred to as the target thread. For example, if multiple threads are concurrently searching through a database and one thread returns the result, the remaining threads
might be cancelled.
41. WHAT ARE THE DIFFERENT WAYS IN WHICH A THREAD CAN BE CANCELLED?
Cancellation of a target thread may occur in two different scenarios:
Asynchronous cancellation: one thread immediately terminates the target thread.
Deferred cancellation: the target thread periodically checks whether it should terminate, allowing the target thread an opportunity to terminate itself in an orderly fashion.
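Deferred cancellation can be sketched in Python, which (unlike pthreads) has no asynchronous cancellation at all; the Event-based "cancellation point" below is an invented stand-in for pthread_testcancel-style checks:

```python
import threading, time

def worker(cancel_requested, out):
    # Deferred cancellation: the target thread polls for the request at safe
    # points, so it can terminate itself in an orderly fashion.
    for i in range(1000):
        if cancel_requested.is_set():          # a "cancellation point"
            out.append("cancelled at step %d" % i)
            return
        time.sleep(0.001)                      # one unit of work

cancel = threading.Event()
out = []
t = threading.Thread(target=worker, args=(cancel, out))
t.start()
time.sleep(0.02)
cancel.set()       # request cancellation; the thread honours it at its next check
t.join()
print(out[0])      # e.g. "cancelled at step 21"
```

Because the thread only exits at a check it chose, it can release locks and free buffers first, which is exactly why deferred cancellation is preferred over the asynchronous kind.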
42. DEFINE DEADLOCK.
A process requests resources; if the resources are not available at that time, the
process enters a wait state. Waiting processes may never again change state, because the resources
they have requested are held by other waiting processes. This situation is called a deadlock.
43. WHAT ARE CONDITIONS UNDER WHICH A DEADLOCK SITUATION MAY
ARISE?
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1. Mutual exclusion
2. Hold and wait
3. No pre-emption
4. Circular wait

44. WHAT ARE THE VARIOUS SHARED DATA OPERATING SYSTEM SERVICES?
Explain how operating systems provide abstraction from the computer hardware.
Describe the meaning of processes, threads and scheduling in a multitasking operating system.
Describe the role of memory management, explaining the terms memory swapping, memory paging and virtual memory.
Contrast the way that MS-DOS and UNIX implement file systems.
Compare the design of some real operating systems.
11 Marks
1. What Is Priority Inversion? (11)
Priority inversion is a problematic scenario in scheduling where a higher-priority task is indirectly pre-empted by a lower-priority task, effectively "inverting" the relative priorities of the two tasks.
This violates the priority model that high-priority tasks can only be prevented from running by higher-priority tasks, and briefly by low-priority tasks which will quickly complete their use of a resource shared by the high- and low-priority tasks.
Example of a priority inversion
Consider a task L with low priority. This task requires resource R. Consider that L is running and it acquires resource R. Now there is another task H, with high priority, which also requires resource R. Consider that H starts after L has acquired resource R. Now H has to wait until L relinquishes resource R.
Everything works as expected up to this point, but problems arise when a new task M starts with medium priority during this time. Since R is still in use (by L), H cannot run. Since
M is the highest priority unblocked task, it will be scheduled before L. Since L has been
preempted by M, L cannot relinquish R. So M will run till it is finished, then L will run - at least
up to a point where it can relinquish R - and then H will run. Thus, in the above scenario, a task with medium priority ran before a task with high priority, effectively giving us a priority inversion.
In some cases, priority inversion can occur without causing immediate harm; the delayed execution of the high-priority task goes unnoticed, and eventually the low-priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high-priority task is left starved of the resources, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system.
Priority inversion can also reduce the perceived performance of the system. Low priority
tasks usually have a low priority because it is not important for them to finish promptly (for example, they might be a batch job or another non-interactive activity). Similarly, a high-priority task has a high priority because it is more likely to be subject to strict time constraints; it may be providing data to an interactive user, or acting subject to real-time response guarantees. Because priority inversion results in the execution of the low-priority task blocking the high-
priority task, it can lead to reduced system responsiveness, or even the violation of response time
guarantees.
A similar problem called deadline interchange can occur within earliest deadline first
scheduling (EDF).
Solutions
The existence of this problem has been known since the 1970s, but there is no fool-proof
method to predict the situation. There are however many existing solutions, of which the most
common ones are:
Disabling all interrupts to protect critical sections
When disabled interrupts are used to prevent priority inversion, there are only two priorities: preemptible, and interrupts disabled. With no third priority, inversion is impossible. Since there is only one piece of lock data (the interrupt-enable bit), misordered locking is impossible, and so deadlocks cannot occur. Since the critical regions always run to completion, hangs do not occur. Note that this only works if all interrupts are disabled. If only a particular hardware device's interrupt is disabled, priority inversion is reintroduced by the hardware's prioritization of interrupts.
A simple variation, "single shared-flag locking", is used on some systems with multiple CPUs. This scheme provides a single flag in shared memory that is used by all CPUs to lock all inter-processor critical sections with a busy-wait. Interprocessor communications are expensive and slow on most multiple-CPU systems; therefore, most such systems are designed to minimize shared resources. As a result, this scheme actually works well on many practical systems. These methods are widely used in simple embedded systems, where they are prized for their reliability, simplicity and low resource use. These schemes also require clever programming to keep the critical sections very brief, under 100 microseconds in practical systems. Many software engineers consider them impractical in general-purpose computers.
A priority ceiling
With priority ceilings, the shared mutex process (that runs the operating system code) has a characteristic (high) priority of its own, which is assigned to the task locking the mutex. This works well, provided the other high-priority task(s) that try to access the mutex do not have a priority higher than the ceiling priority.
Priority inheritance
Under the policy of priority inheritance, whenever a high-priority task has to wait for some resource shared with an executing low-priority task, the low-priority task is temporarily assigned the priority of the highest waiting task for the duration of its own use of the shared resource, thus keeping medium-priority tasks from pre-empting the (originally) low-priority task, and thereby indirectly delaying the waiting high-priority task as well. Once the resource is released, the low-priority task continues at its original priority level.
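The inheritance rule can be written down as a tiny model. This is a toy sketch, not a scheduler: the Task class and its fields are invented, and "higher number = higher priority" is an assumption made for the example:

```python
class Task:
    def __init__(self, name, base_priority):
        self.name = name
        self.base = base_priority
        self.waiters = []        # tasks currently blocked on a resource this task holds

    def effective_priority(self):
        # Priority inheritance: run at the maximum of our base priority and the
        # priorities of every task waiting on a resource we hold.
        return max([self.base] + [t.effective_priority() for t in self.waiters])

L, M, H = Task("L", 1), Task("M", 2), Task("H", 3)
L.waiters.append(H)              # H blocks on the mutex that L holds
print(L.effective_priority())    # 3: L inherits H's priority, so M (2) cannot pre-empt it

L.waiters.remove(H)              # L releases the mutex
print(L.effective_priority())    # 1: back to its original priority level
```

The recursive max also captures chained inheritance: if L waits on a mutex held by an even lower task, that task inherits H's priority through L.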
2. Explain The Concept Deadlock in Detail? (11)
A set of processes is deadlocked if each process in the set is waiting for an event that only
another process in the set can cause (including itself).
Waiting for an event could be:
Waiting for access to a critical section.
Waiting for a resource.
Note that the resource is usually non-preemptable; pre-emptable resources can be yanked away and given to another process.
Conditions for Deadlock
Mutual exclusion: resources cannot be shared.
Hold and wait: processes request resources incrementally, and hold on to what they've got.
No preemption: resources cannot be forcibly taken from processes. Circular wait: circular chain of waiting, in which each process is waiting for a resource
held by the next process in the chain.
Strategies for dealing with Deadlock
Ignore the problem altogether, i.e. the ostrich algorithm: deadlock may occur very infrequently, and the cost of detection/prevention etc. may not be worth it.
Detection and recovery.
Avoidance by careful resource allocation.
Prevention by structurally negating one of the four necessary conditions.
Deadlock Prevention
The difference from avoidance is that here, the system itself is built in such a way that there are no deadlocks: make sure at least one of the four deadlock conditions is never satisfied. This may, however, be even more conservative than a deadlock-avoidance strategy.
Attacking the mutex condition: never grant exclusive access. But this may not be possible for several resources.
Attacking preemption: not something you want to do.
Attacking the hold-and-wait condition: make a process hold at most one resource at a time, or make all the requests at the beginning (an all-or-nothing policy: if refused, retry), e.g. two-phase locking.
Attacking circular wait: order all the resources. Make sure that requests are issued in the correct order, so that there are no cycles present in the resource graph. Resources are numbered 1 ... n and can be requested only in increasing order, i.e. you cannot request a resource whose number is less than any you may be holding.
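The resource-ordering attack on circular wait can be sketched as follows. Here Python locks stand in for resources, and ordering by id() is an arbitrary but fixed global order invented for the example:

```python
import threading

def acquire_in_order(*locks):
    """Attack circular wait: always take locks in one global order (here, by
    id()), so a cycle can never appear in the resource graph."""
    ordered = sorted(locks, key=id)
    for lk in ordered:
        lk.acquire()
    return ordered

def release_all(ordered):
    for lk in reversed(ordered):
        lk.release()

a, b = threading.Lock(), threading.Lock()
done = []

def worker(first, second, tag):
    for _ in range(200):
        held = acquire_in_order(first, second)   # same global order in every thread
        release_all(held)
    done.append(tag)

t1 = threading.Thread(target=worker, args=(a, b, "t1"))
t2 = threading.Thread(target=worker, args=(b, a, "t2"))  # opposite request order
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))   # ['t1', 't2']: both finish - no deadlock is possible
```

Without the sorting step, the two workers request the locks in opposite orders and this program could deadlock; with it, the circular-wait condition is structurally negated.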
Deadlock Avoidance
Avoid actions that may lead to a deadlock.
Think of it as a state machine moving from 1 state to another as each instruction is executed.
Safe State
Safe state is one where
It is not a deadlocked state
There is some sequence by which all requests can be satisfied.
To avoid deadlocks, we try to make only those transitions that will take us from one safe state to another. We avoid transitions to an unsafe state (a state that is not deadlocked, but is not safe).
eg.
Total number of instances of the resource = 12. For each process, (Max, Allocated, Still Needs):
P0 (10, 5, 5)    P1 (4, 2, 2)    P2 (9, 2, 7)    Free = 3 - Safe
The sequence P1, P0, P2 is a reducible sequence, so the first state is safe.
What if P2 requests one more instance and is allocated it? Free becomes 2 and P2 becomes (9, 3, 6). Now only P1 can finish, leaving 4 free, which satisfies neither P0 (still needs 5) nor P2 (still needs 6). This results in an unsafe state, so we do not allow P2's request to be satisfied.
Banker's Algorithm for Deadlock Avoidance
When a request is made, check to see whether, after the request is satisfied, there is (at least one!) sequence of moves that can satisfy all the requests, i.e. the new state is safe. If so, satisfy the request; otherwise make the request wait.
How do you find if a state is safe?
There are n processes and m resources, with arrays:
Max[n, m], Allocated[n, m], Still_Needs[n, m], Available[m], Temp[m], Done[n]

Temp[j] = Available[j] for all j
Done[i] = FALSE for all i
while (there exists an i such that
        a) Done[i] = FALSE, and
        b) Still_Needs[i, j] <= Temp[j] for all j) {
    Temp[j] += Allocated[i, j] for all j
    Done[i] = TRUE
}
if Done[i] = TRUE for all i, then the state is safe
else the state is unsafe
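The safety test above can be written out and run against the single-resource example from the text. This is a sketch for one resource type (is_safe is an invented helper; the full algorithm uses vectors per resource):

```python
def is_safe(free, max_need, allocated):
    """Banker's safety test for a single resource type: True if some order
    lets every process finish (the reducible-sequence check in the text)."""
    n = len(max_need)
    still_needs = [max_need[i] - allocated[i] for i in range(n)]
    done = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not done[i] and still_needs[i] <= free:
                free += allocated[i]      # process i finishes, releasing its resources
                done[i] = True
                progress = True
    return all(done)

# The example from the text: 12 instances in total, (Max, Allocated):
# P0 (10, 5), P1 (4, 2), P2 (9, 2), Free = 3
print(is_safe(3, [10, 4, 9], [5, 2, 2]))   # True  - safe
# After granting P2 one more instance: Free = 2, P2 allocated 3
print(is_safe(2, [10, 4, 9], [5, 2, 3]))   # False - unsafe, so make the request wait
```

Running it reproduces the text's conclusion: the initial state is safe, and granting P2's extra request would move the system to an unsafe state.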
Detection and Recovery
Is there a deadlock currently?
One resource of each type (1 printer, 1 plotter, 1 terminal etc.)
Check if there is a cycle in the resource graph: for each node N in the graph, do a DFS (depth-first search) with N as the root. If in the DFS you come back to a node already traversed, then there is a cycle.
Multiple resources of each type
m resources, n processes
Max resources in existence = [E1, E2, E3, ..., Em]
Current allocation matrix C, with rows 1..n (processes) and columns 1..m (resources)
Resources currently available = [A1, A2, ..., Am]
Request matrix R, with rows 1..n and columns 1..m
Invariant: for each resource j, the sum of C[i][j] over all processes i, plus A[j], equals E[j]
Define A
When a deadlock is detected, see which resource is needed. Take away the resource from the process currently holding it. Later on, you can restart this process from a checkpointed state, where it may need to reacquire the resource.
Killing processes is another recovery option.
3. Explain Inter Process Communication Briefly? (11)
Inter-process communication (IPC) is a set of interfaces, usually programmed, in order for a programmer to communicate between a series of processes. This allows programs to run concurrently in an operating system.
There are quite a number of methods used in inter-process communications. They are:
Pipes: This allows the flow of data in one direction only. Data from the output end is usually buffered until the input process receives it; the communicating processes must have a common origin.
Named pipes: This is a pipe with a specific name. It can be used by processes that do not share a common process origin. An example is a FIFO, where the data written to the pipe is read first-in, first-out.
Message queuing: This allows messages to be passed between processes using either a single queue or several message queues. This is managed by the system kernel. These messages are coordinated using an application program interface (API).
Semaphores: This is used in solving problems associated with synchronization and avoiding
race conditions. They are integer values which are greater than or equal to zero.
Shared memory: This allows the interchange of data through a defined area of memory. A semaphore value has to be obtained before a process can access the shared memory.
Sockets: This method is mostly used to communicate over a network, between a client and a server. It allows for a standard connection which is computer- and operating-system-independent.
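The first mechanism in the list, the one-directional pipe between related processes, can be sketched with the POSIX pipe/fork model (this sketch is POSIX-only; pipe_demo and the echo protocol are invented for illustration):

```python
import os

def pipe_demo(message: bytes) -> bytes:
    """Parent sends bytes to a child through an anonymous pipe; the child,
    which shares the parent's origin via fork(), echoes them back."""
    p2c_r, p2c_w = os.pipe()       # pipes are one-directional, so we need two:
    c2p_r, c2p_w = os.pipe()       # parent -> child, and child -> parent
    pid = os.fork()
    if pid == 0:                   # child process: read one message, echo it
        os.close(p2c_w); os.close(c2p_r)
        data = os.read(p2c_r, 4096)
        os.write(c2p_w, data)
        os._exit(0)
    os.close(p2c_r); os.close(c2p_w)
    os.write(p2c_w, message)       # data is buffered by the kernel until read
    os.close(p2c_w)
    reply = os.read(c2p_r, 4096)
    os.waitpid(pid, 0)
    return reply

print(pipe_demo(b"hello"))   # b'hello'
```

The two processes share the pipe only because the child inherited the descriptors across fork(), which is exactly the "common origin" restriction the text mentions; a named pipe removes that restriction.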
Mutual exclusion by busy-waiting has a shortcoming: it wastes processor time.
There are interprocess primitives that block instead of wasting processor time.
Some of these are:
Sleep and Wakeup
SLEEP is a system call that causes the caller to block, that is, be suspended until another
process wakes it up. The WAKEUP call has one parameter, the process to be awakened.
The Producer-Consumer Problem
In this case, two processes share a common, fixed-size buffer. One of the processes, the producer, puts information into the buffer, and the other one, the consumer, takes it out. With three or more processes, one wakeup-waiting bit is insufficient; as a patch, a second wakeup-waiting bit could be added (or 8, or 32), but the problem of race conditions will still be there.
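The standard race-free fix uses counting semaphores, as sketched below. The buffer size and names are invented for the example; with blocking semaphores, no wakeup-waiting bits are needed at all:

```python
import threading

BUF_SIZE = 4
buf, consumed = [], []
empty = threading.Semaphore(BUF_SIZE)   # counts free slots in the buffer
full = threading.Semaphore(0)           # counts filled slots
mutex = threading.Lock()                # protects the shared buffer itself

def producer(items):
    for item in items:
        empty.acquire()                 # wait for a free slot (may block)
        with mutex:
            buf.append(item)
        full.release()                  # signal: one more item available

def consumer(n):
    for _ in range(n):
        full.acquire()                  # wait for an item (may block)
        with mutex:
            consumed.append(buf.pop(0))
        empty.release()                 # signal: one more free slot

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)   # [0, 1, 2, ..., 9]: no lost wakeups, no races
```

Because a semaphore remembers posts that arrive while nobody is waiting, the lost-wakeup race of the SLEEP/WAKEUP scheme cannot occur.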
Event Counters
This involves programming without requiring mutual exclusion. An event counter's value can only increase, never decrease. There are three operations defined on an event counter, for example E:
1. Read(E): return the value of E.
2. Advance(E): atomically increment E by 1.
3. Await(E, v): wait until E has a value of v or more.
Two event counters are used. The first one, "in", counts the cumulative number of items that the producer discussed above has put into the buffer since the program started running. The other one, "out", counts the cumulative number of items that the consumer has removed from the buffer so far. Therefore, "in" must be greater than or equal to "out", and must not exceed "out" by more than the size of the buffer. This is a method that works with the pipes discussed above.
Monitors
This is about the best way of achieving mutual exclusion. A monitor is a collection of procedures, variables, and data structures that are grouped together in a special kind of module or package. The monitor uses WAIT and SIGNAL. WAIT indicates to the other process that the buffer is full, and so causes the calling process to block, allowing a process that was earlier prohibited to enter the monitor at this point. SIGNAL allows the other process to be awakened by the process that entered during the WAIT.
4. What are Signals Related IPC Functions? (11)
One way of messaging is to use the OS function signal ( ). It is provided in Unix, Linux and several RTOSes. Unix and Linux OSes use signals profusely and have 31 different types of signals for various events. The task or process sending the signal uses the function signal ( ) with an integer number n in the argument. A signal executes like a software interrupt instruction, INT n or SWI n.
A signal provides the shortest communication. The signal ( ) sends an output n for a process, which enables the OS to unmask a signal mask of a process or task as per n. The task is called the signal handler and has coding similar to that of an ISR. The handler runs in a way similar to a highest-priority ISR. An ISR runs on a hardware interrupt provided that the interrupt is not masked. The handler runs on the signal provided that the signal is not masked.
The signal ( ) forces the OS to first run the signalled process or task, called the signal handler. On return from the signalled or forced task or process, the process which sent the signal runs its code, as happens on a return from an ISR. A signal mask is the software equivalent of the flag in a register that is set on masking a hardware interrupt. Unless masked by a signal mask, the signal allows the execution of the signal handler.
An integer number (for example, n) represents each signal, and that number is associated with a function (or process or task) called the signal handler, an equivalent of the ISR. The signal handler function is called whenever a process communicates that number.
A signal handler is not called directly by code. When the signal is sent from a process, the OS interrupts the process execution and calls the function for signal handling. On return from the signal handler, the process continues as before.
For example, consider signal (5). The signal mask of signal handler 5 is reset. The signal handler and connect functions associate the number 5 with it. The function represented by number 5 is then forced to run.
An advantage of using signals is that, unlike semaphores, they take the shortest possible CPU time to force a handler to run. Signals are interrupts that can be used as an IPC function for synchronizing.
A signal is unlike a semaphore. A semaphore is used as a token or resource key to let another task or process block, or to lock a resource to a particular task or process for a given section of code. A signal is just an interrupt that is shared and used by another interrupt-servicing process. A signal raised by one process forces another process (the signal handler) to interrupt and catch that signal, in case the signal is not masked (i.e. use of that signal handler is not disabled). Signals are to be handled only for forcing the run of very high-priority processes, as they may disrupt the usual schedule and priority-inheritance mechanism. They may also cause reentrancy problems.
An important application of signals is to handle exceptions. (An exception is a process that is executed on a specific reported run-time condition.) A signal reports an error (called an exception) during the running of a task and then lets the scheduler initiate an error-handling process, function or task. An error-handling signal handler handles the different errors arising from other tasks. The device driver functions also use signals to call the handlers (ISRs).
The following are the signal-related IPC functions, which are generally not provided in an RTOS such as µC/OS-II but are provided in an RTOS such as VxWorks, or in an OS such as Unix or Linux.
1. sigHandler ( ): creates the signal context, which corresponds to a signal identified by the signal number, and defines a pointer to the signal context. The signal context saves the registers on the signal.
2. A function to connect an interrupt vector to a signal; it provides the PC value for the signal handler function address.
3. A function signal ( ) to send a signal identified by a number in the argument to a task.
4. Mask the signal.
5. Unmask the signal.
6. Ignore the signal.
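The handler/mask machinery in the list above can be sketched with POSIX signals from Python (POSIX-only sketch; the choice of SIGUSR1 and the caught list are inventions for the example):

```python
import os, signal

caught = []

def handler(signum, frame):
    # Runs like a highest-priority ISR when the (unmasked) signal is delivered.
    caught.append(signum)

signal.signal(signal.SIGUSR1, handler)            # associate number n with a handler
os.kill(os.getpid(), signal.SIGUSR1)              # send the signal, like signal(n)
print(caught == [signal.SIGUSR1])                 # True: the handler was forced to run

signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})    # mask the signal
os.kill(os.getpid(), signal.SIGUSR1)              # delivery is deferred while masked
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})  # unmask: pending signal fires
print(len(caught))                                # 2
```

This shows the three operations the text lists: connecting a number to a handler, masking, and unmasking (a masked signal stays pending and is delivered on unmask, rather than being lost).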
5. What is Semaphore Flag and Explain Mutex as Resource Key? (6+5)
Semaphore Flag
The OS provides the semaphore as a notice or token of an event occurrence. A semaphore facilitates IPC by notifying (through the scheduler) a waiting task section to change to the running state upon an event in the presently running code section, in an ISR or task. A binary semaphore is a token or resource key. The OS also provides for mutex access to a set of code sections in a task (or thread or process). The use of a mutex is such that the priority inversion problem is not solved in some OSes, while it is solved in others. The OS also provides counting semaphores. The OS may provide POSIX-standard P and V semaphores, which can be used for notifying event occurrences, as mutexes, or for counting. A timeout can be defined in the argument of the wait function for the semaphore IPC. An error pointer can also be defined in the arguments of a semaphore IPC function.
The following are the functions, which are generally provided in an OS, for example, COS-II
for the semaphores.
1. OSSemCreate, a semaphore function to create the semaphore in an event control block
(ECB). Initialize it with an initial value.
2. OSSemPost, a function which sends the semaphore notification to the ECB; its value increments on event occurrence (used in ISRs as well as tasks).
3. OSSemPend, a function which waits for the semaphore of an event; its value decrements on taking note of that event occurrence (used in tasks, not in ISRs).
4. OSSemAccept, a function which reads and returns the present semaphore value and, if it shows occurrence of an event (by a non-zero value), takes note of that and decrements the value (no wait; used in ISRs as well as tasks).
5. OSSemQuery, a function which queries the semaphore for an event occurrence or non-occurrence by reading its value; it returns the present semaphore value and a pointer to the data structure OSSemData. The semaphore value does not decrement. OSSemData points to the present value and a table of the tasks waiting for the semaphore (used in tasks).
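The semantics of these calls can be modelled in plain C. This is an illustrative sketch of the semaphore value only, not the μC/OS-II source; the blocking wait of OSSemPend is omitted because it needs the scheduler, so the no-wait sem_accept stands in for taking note of an event.

```c
#include <assert.h>

/* Sketch of counting-semaphore semantics in the style of
 * OSSemCreate / OSSemPost / OSSemAccept / OSSemQuery. */

typedef struct { unsigned count; } sem_sketch_t;

void sem_create(sem_sketch_t *s, unsigned initial) { s->count = initial; }

void sem_post(sem_sketch_t *s) { s->count++; }    /* event occurred */

/* no-wait take: returns 1 and decrements if an event was pending */
int sem_accept(sem_sketch_t *s)
{
    if (s->count > 0) { s->count--; return 1; }
    return 0;
}

/* query: read the value without changing it */
unsigned sem_query(const sem_sketch_t *s) { return s->count; }
```

With an initial value of 1 the same structure behaves as a binary semaphore (resource key).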
Mutex as Resource Key

An OS using a mutex blocks a critical section in a task when the mutex has been taken by another task's critical section; that other task unlocks it on releasing the mutex. The wait for the mutex by the blocked task can be for a specified timeout.

There is a function in the kernel called lock ( ). It locks a process to the resources till that process executes unlock ( ). A wait loop is created, and when the wait is over, the other process waiting for the lock starts. Use of lock ( ) and unlock ( ) involves little overhead compared with the use of OSSemPend ( ) and OSSemPost ( ) when using a mutex. Overhead means the number of operations needed for blocking one process and starting another. However, a process of high priority should not lock out the other processes by blocking an already running task in the following situation: suppose a task is running and only a little is left for its completion, so that its remaining running time is less than the time that would be taken in blocking it and context switching. There is an innovative concept of spin locking in certain OS schedulers, and a spin lock is a powerful tool in the situation described before. The scheduler locking process for a task J waits in a loop to cause the blocking of the running task I, first for a time interval t, then for (t − δt), then (t − 2δt), and so on. When this time interval spins down to 0, the task that requested the lock of the processor unlocks the running task I and blocks it from further running, and the request is granted to task J to unblock and start running, provided task J is of higher priority. A spin lock does not let a running task be blocked instantly, but first tries successively, with decreasing trial periods, before finally blocking the task. A spin lock obviates the need for context switching by pre-emption and for mutex function calls to the OS.
6. Explain Message Queue in Detail? (11)

Some OSes do not distinguish, or make little distinction, between the use of queues, pipes and mailboxes during message communication among processes, while other OSes regard the use of queues as different.
A message queue is an IPC with the following features.
1. An OS provides for inserting and deleting the message pointers or messages.
2. Each queue for the messages or message pointers needs initialization (creation) before using the kernel functions for the queue.
3. Each created queue has an ID.
4. Each queue has a user-definable size (an upper limit for the number of bytes).
5. When an OS call is made to insert a message into the queue, the bytes inserted are as per the pointed number of bytes. For example, for an integer or float variable as pointer, 4 bytes are inserted per call. If the pointer is for an array of eight integers, then 32 bytes will be inserted into the queue. When a message pointer is inserted into the queue, 4 bytes are inserted, assuming 32-bit addresses.
6. When a queue becomes full, there is an error-handling function to handle that.
The OS functions for a queue, for example in μC/OS-II, can be as follows:

1. OSQCreate, a function that creates a queue and initializes it.
2. OSQPost, a function that sends a message into the queue as per the queue tail pointer; it can be used by tasks as well as ISRs.
3. OSQPend, a function that waits for a queue message at the queue, and reads and deletes that message when received (wait; used by tasks only, not by ISRs).
4. OSQAccept, a function that deletes the present message at the queue head after checking for its presence (yes or no); after the deletion the queue head pointer increments (no wait; used by ISRs as well as tasks).
5. OSQFlush, a function that deletes the messages from queue head to tail. After the flush, the queue head and tail point to QTop, the pointer at the start of the queue (used by ISRs and tasks).
6. OSQQuery, a function that queries the queue message block but does not delete the message. It returns a pointer to the message queue *QHEAD if there are messages in the queue, or else returns NULL. It returns a pointer to the structure of the queue data with *QHEAD, the number of queued messages, the size, and a table of tasks waiting for messages from the queue (used in tasks).
7. OSQPostFront, a function that sends a message to the front pointer, *QHEAD. Use of this function is made when a message is urgent or of higher priority than all the previously posted messages in the queue (used in ISRs and tasks).
In certain RTOSes, a queue is given a select option, and the option provided is for priority or FIFO. If the priority option is selected, the task having priority, once started, deletes a queue message first. If the FIFO option is selected, the task pending for the longest period deletes a queue message first.
7. What is the Purpose of Mailbox in Real Time Operating Systems? (11)

A message mailbox is for an IPC message that can be used only by a single destined task. The mailbox message is a message pointer or can be a message. (μC/OS-II provides for sending a message pointer into the box.) The source (mail sender) is the task that sends the message pointer to a created (initialized) mailbox (the box initially has the NULL pointer before a message posts into the box). The destination is the place where the OSMBoxPend function waits for the mailbox message and reads it when received.
A mobile phone LCD display task is an example that uses message mailboxes as an IPC. When the time and date message from a clock process arrives in the mailbox, the time is displayed at the side corner of the top line. When the message from another task is to display a phone number, the number is displayed at the middle of a line. When the message from another task is to display the signal strength at the antenna, it is displayed as the vertical bar on the left.

Another example of using a mailbox is the mailbox for an error-handling task, which handles the different error logs from other tasks.
The following may be the provisions at an OS for IPC functions when using the mailbox.

1. A task may put into the mailbox only a pointer to the message block, or a number of message bytes, as per the OS provisioning.
2. There are three types of mailbox provisions: multiple messages read on a FIFO basis, multiple messages each carrying a priority parameter, and a single message (pointer) per box.

A queue may be assumed to be a special case of a mailbox with provision for multiple messages or message pointers. An OS can provide a queue from which a read (deletion) is on a FIFO basis, or alternatively an OS can provide for multiple mailbox messages with each message having a priority parameter; the read (deletion) is then only on a priority basis when the mailbox holds multiple messages with priorities assigned. Even if the messages are inserted in a different order, the deletion is as per the assigned priority parameter. An OS may also provision for mailbox and queue separately: a mailbox permits one message pointer per box, while the queue permits multiple messages or message pointers. The μC/OS-II RTOS is an example of such an OS.
The RTOS functions for a mailbox for use by the tasks can be the following:

1. OSMBoxCreate creates a box and initializes the mailbox contents with a NULL pointer.
2. OSMBoxPost sends (writes) a message to the box.
3. OSMBoxPend waits for a mailbox message, which is read when received.
4. OSMBoxAccept reads the current message pointer after checking for its presence (yes or no) (no wait).
5. OSMBoxQuery queries the mailbox; the message is read but not deleted.

An ISR can post (but not wait for) a message into the mailbox of a task.
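The one-message mailbox can be modelled in plain C. This is an illustrative sketch, not the μC/OS-II source: the box holds a single message pointer, with NULL meaning "empty", and the blocking OSMBoxPend is omitted since it needs the scheduler.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of mailbox semantics in the style of OSMBoxCreate /
 * OSMBoxPost / OSMBoxAccept / OSMBoxQuery. */

typedef struct { void *msg; } mbox_sketch_t;

void mbox_create(mbox_sketch_t *b) { b->msg = NULL; }   /* initially empty */

/* post: returns 0 on success, -1 if the box already holds a message */
int mbox_post(mbox_sketch_t *b, void *m)
{
    if (b->msg != NULL) return -1;
    b->msg = m;
    return 0;
}

/* no-wait read: return the message pointer and empty the box,
 * or NULL when nothing has been posted */
void *mbox_accept(mbox_sketch_t *b)
{
    void *m = b->msg;
    b->msg = NULL;
    return m;
}

/* query: return the current message pointer without removing it */
void *mbox_query(const mbox_sketch_t *b) { return b->msg; }
```

The single-slot design is exactly what distinguishes a mailbox from a queue, which buffers multiple messages.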
8. What is Pipes? How it is used in Real Time Operating System? (11)

The OS pipe functions are unlike message queue functions. The difference is that pipe functions are similar to the ones used for devices such as files.

A message pipe is a device for inserting (writing) and deleting (reading) messages between two given interconnected tasks or two sets of tasks. Writing to and reading from a pipe is like using the C command fwrite with a file name to write into a named file, and fread with a file name to read from a named file. Pipes are also like Java Piped Input and Output Streams.

1. One task, using the function fwrite in a set of tasks, can insert (write) to the pipe at the back pointer address, *pBACK.
2. Another task, using the function fread in a set of tasks, can delete (read) from the pipe at the front pointer address, *pFRONT.
3. In a pipe there may be no fixed number of bytes per message, but there is an end pointer. A pipe can therefore have a limited number of bytes inserted, with a variable number of bytes per message between the initial and final pointers.
4. A pipe is unidirectional. One thread or task inserts into it and the other one deletes from it.

An example of the need for messaging, and thus for IPC using a pipe, is a network stack. The OS functions for a pipe are the following:
i. pipeDevCreate for creating a device which functions as a pipe.
ii. open ( ) for opening the device to enable its use from the beginning of its allocated buffer; its use is with options and restrictions (or permissions) defined at the time of opening.
iii. connect ( ) for connecting a thread or task inserting bytes into the pipe to the thread or task deleting bytes from the pipe.
iv. write ( ) function for inserting (writing) from the bottom of the empty memory spaces in the buffer allocated to the pipe.
v. read ( ) function for deleting (reading) from the pipe from the bottom of the unread memory spaces in the buffer filled after writing into the pipe.
vi. close ( ) for closing the device, to enable its use from the beginning of its allocated buffer only after opening it again.
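On Unix, the same pattern can be tried with the POSIX pipe() call. This shows the analogue of the device-style pipe described above; pipeDevCreate and connect() are VxWorks-style calls that have no direct POSIX counterpart, and the demo function name is illustrative.

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Sketch of pipe IPC using the POSIX analogue: pipe() creates the
 * unidirectional device, write() inserts bytes at one end and read()
 * deletes them at the other. */

int pipe_demo(void)
{
    int  fd[2];                      /* fd[0]: read end, fd[1]: write end */
    char out[16] = {0};

    if (pipe(fd) != 0) return -1;    /* create the pipe device */

    /* one task would write a byte stream into the pipe ...             */
    if (write(fd[1], "hello", 5) != 5) return -1;

    /* ... and the interconnected task would read the same byte stream  */
    if (read(fd[0], out, sizeof out) != 5) return -1;

    close(fd[0]);                    /* close both ends of the device */
    close(fd[1]);
    return strcmp(out, "hello") == 0 ? 0 : -1;
}
```

Note that the pipe is unidirectional: to send data the other way, a second pipe would be needed, which motivates the sockets discussed next.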
9. Explain About the Virtual Socket? (11)

The use of a pipe between a process at a card and a process at the host will have the following problems.

1. We need the card information to be transferred from a process A as a byte stream to the host machine process B, and B sends messages as a byte stream to A. There is a need for bi-directional communication between A and B.

2. We need A's and B's IDs or ports as well as address information when communicating. These must be specified either for the destination alone or for both source and destination. (It is similar to sending a message in a letter along with the address specification.)

A protocol provides for communication along with the byte information of the address or port of the destination alone, or the addresses or ports of both source and destination. A protocol may provide for the addresses as well as the ports of both source and destination in the case of remote processes (for example, in the IP protocol). Also, there are two types of protocols.

1. There may be the need for a connectionless protocol when sending and receiving message streams. An example of such a protocol is UDP (User Datagram Protocol). The UDP protocol requires a UDP header, which contains the source port (optional) and destination port number, the length of the datagram, and a checksum for the header bytes. Port means a process or task for a specific application; the number specifies the process. Connectionless means there is no connection establishment between source and destination before the actual transfer of the data stream can take place. Datagram means a set of data which is independent and need not be in sequence with the previously sent data. Checksum is a sum over the bytes to enable checking for erroneous data transfer. For remote communication, an address, for example an IP address, is also required in the header.

2. There may be the need for a connection-oriented protocol, for example TCP. A connection-oriented protocol provides that there must first be a connection establishment between the source and destination, and then the actual transfer of the data stream can take place. At the end, there must be a connection termination.
A socket provides a device-like mechanism for bi-directional communication. It provides for using a protocol between the source and destination processes for transferring the bytes; it provides for establishing and closing a connection between the source and destination processes using a protocol for transferring the bytes; and it may provide for listening from multiple sources or multicasting to multiple destinations. Two tasks at two distinct places interconnect locally through sockets. Multiple tasks at multiple distinct places interconnect through sockets to a socket at a server process. The client and server sockets can run on the same CPU or on distant CPUs on the Internet.

Sockets can use different domains. For example, one socket domain can be TCP, another socket domain may be UDP; the card-and-host example socket domains are different.

The use of a socket for IPC is analogous to the use of sockets for an Internet connection between a browser and a web server. A socket provides a bi-directional transfer of messages and may also send a protocol header. The transfer is between two processes, or between multiple clients and a server process. Each socket may have a task source address (similar to a network or IP address) and a port (similar to a process or thread) number. The source and destination sets of tasks (addresses) may be on the same computer or on a network.
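The bi-directional transfer can be sketched with the Unix socketpair() call, which creates two locally connected sockets. This is an illustrative local analogue; a real card-host or client-server link would instead use UDP or TCP sockets with IP addresses and port numbers, and the demo function name is an assumption.

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch of bi-directional socket IPC between two endpoints "A" and
 * "B". Unlike the unidirectional pipe, each end can send and receive. */

int socket_demo(void)
{
    int  sv[2];
    char buf[16] = {0};

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return -1;

    /* "A" sends a byte stream to "B" ...                               */
    if (send(sv[0], "ping", 4, 0) != 4) return -1;
    if (recv(sv[1], buf, sizeof buf, 0) != 4) return -1;
    if (strcmp(buf, "ping") != 0) return -1;

    /* ... and "B" replies to "A" on the same socket pair               */
    memset(buf, 0, sizeof buf);
    if (send(sv[1], "pong", 4, 0) != 4) return -1;
    if (recv(sv[0], buf, sizeof buf, 0) != 4) return -1;

    close(sv[0]);
    close(sv[1]);
    return strcmp(buf, "pong") == 0 ? 0 : -1;
}
```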
10. Explain the structure of OS

An OS provides an environment within which programs are executed. One of the most important aspects of an OS is its ability to multiprogram. Multiprogramming increases CPU utilization by organizing jobs (code and data) so that the CPU always has one to execute.

The OS keeps several jobs in memory. This set of jobs can be a subset of the jobs kept in the job pool, which contains all jobs that enter the system. The OS picks and begins to execute one of the jobs in memory. The job may have to wait for some task, such as an I/O operation, to complete. In a non-multiprogrammed system, the CPU would sit idle; in a multiprogrammed system, the OS simply switches to and executes another job. When that job needs to wait, the CPU is switched to another job, and so on. As long as at least one job needs to execute, the CPU is never idle.

Multiprogrammed systems provide an environment in which the various system resources are utilized effectively, but they do not provide for user interaction with the computer system. Time sharing, or multitasking, is a logical extension of multiprogramming. In time
sharing systems, the CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each program while it is running.

Time sharing requires an interactive computer system which provides direct communication between the user and the system. A time-shared operating system allows many users to share the computer simultaneously. It uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer.

A program loaded into memory and executing is called a process.

Time sharing and multiprogramming require several jobs to be kept simultaneously in memory. Since main memory is too small to accommodate all jobs, the jobs are kept initially on the disk in the job pool. This pool consists of all processes residing on disk awaiting allocation of main memory. If several jobs are ready to be brought into memory and there is not enough space, then the system must choose among them. Making this decision is job scheduling.

Having several programs in memory at the same time requires some form of memory management. If several jobs are ready to run at the same time, the system must choose among them. Making this decision is CPU scheduling.

In a time-sharing system, the operating system must ensure reasonable response time, which is accomplished through swapping, where processes are swapped in and out of main memory to the disk.

Virtual memory is a technique that allows the execution of a process that is not completely in memory. It enables users to run programs that are larger than actual physical memory.
11. Explain process, task and thread

Process Concepts

A process consists of an executable program (codes), the state of which is controlled by the OS. The state during running of a process is represented by the process status (running, blocked, or finished), the process structure (its data, objects and resources), and the process control block (PCB).
It runs when it is scheduled to run by the OS (kernel).
The OS gives the control of the CPU on a process's request (system call).
It runs by executing the instructions, and the continuous changes of its state take place as the program counter (PC) changes.
A process is that executing unit of computation which is controlled by some process (of the OS) for a scheduling mechanism that lets it execute on the CPU, and by some process at the OS for a resource-management mechanism that lets it use the system memory and other system resources such as network, file, display or printer.
An application program can be said to consist of a number of processes.

Example - Mobile phone device embedded software
The software is highly complex: a number of functions, ISRs, processes, threads, multiple physical and virtual device drivers, and several program objects that must be concurrently processed on a single processor.
Voice encoding and convoluting process - the device captures the spoken words through a microphone and generates the digital signals after analog-to-digital conversion; the digits are encoded and convoluted using a CODEC.
Modulating process,
Display process,
GUIs (graphic user interfaces), and
Key input process for provisioning of the user interrupts.

Process Control Block
A data structure having the information using which the OS controls the process state.
Stored in a protected memory area of the kernel.
Consists of the information about the process state.

Information about the process state at the Process Control Block:
Process ID,
process priority,
parent process (if any),
child process (if any), and
address of the next process's PCB which will run,
allocated program-memory address blocks in physical memory and in secondary (virtual) memory for the process codes,
allocated process-specific data address blocks,
allocated process-heap (data generated during the program run) addresses,
allocated process-stack addresses for the functions called during running of the process,
allocated addresses of the CPU register-save area as a process context [the register contents, which define the process context, include the program counter and stack pointer contents],
process-state signal mask [when the mask is set to 0 (active) the process is inhibited from running, and when reset to 1 the process is allowed to run],
signals (messages) dispatch table [process IPC functions],
OS-allocated resource descriptors (for example, file descriptors for open files, device descriptors for open (accessible) devices, device-buffer addresses and status, socket descriptors for open sockets), and
security restrictions and permissions.
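The PCB fields listed above can be pictured as a C structure. The field names and types here are assumptions for illustration only; a real kernel's PCB is far larger and lives in protected kernel memory.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of a PCB-like record mirroring the fields listed above. */
typedef struct pcb {
    int            pid;           /* process ID                        */
    int            priority;      /* process priority                  */
    struct pcb    *parent;        /* parent process, if any            */
    struct pcb    *next;          /* next process's PCB that will run  */
    void          *code_base;     /* allocated program-memory block    */
    void          *data_base;     /* process-specific data block       */
    void          *heap_base;     /* process-heap addresses            */
    void          *stack_base;    /* process-stack addresses           */
    unsigned long  saved_pc;      /* context: program counter          */
    unsigned long  saved_sp;      /* context: stack pointer            */
    unsigned       signal_mask;   /* process-state signal mask         */
    int            state;         /* running / blocked / finished      */
} pcb_sketch_t;

/* a context switch conceptually saves the PC and SP into the PCB */
void save_context(pcb_sketch_t *p, unsigned long pc, unsigned long sp)
{
    p->saved_pc = pc;
    p->saved_sp = sp;
}
```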
Context

The context loads into the CPU registers from memory when the process starts running, and the registers save at the addresses of the register-save area on a context switch to another process.
The present CPU registers, which include the program counter and stack pointer, are called the context.
When the context saves at the PCB-pointed process-stack and register-save area addresses, the running process stops.
Another process's context now loads, and that process runs. This means that the context has switched.
THREADS AND TASKS

Thread Concepts

A thread consists of an executable program (codes), the state of which is controlled by the OS. The state information comprises the thread status (running, blocked, or finished), the thread structure (its data, objects and a subset of the process resources), and the thread stack. A thread is considered a lightweight process and a process-level controlled entity. [Lightweight means its running does not depend on system resources.]

Process - heavyweight
A process is considered a heavyweight process and a kernel-level controlled entity. A process thus can have codes in secondary memory from which the pages can be swapped into the physical primary memory during running of the process. [Heavyweight means its running may depend on system resources.] A process may have a process structure with the virtual memory map, file descriptors, user ID, etc., and can have multiple threads, which share the process structure.

Thread - a process or sub-process within a process that has its own program counter, its own stack pointer and stack, and its own priority parameter for its scheduling by a thread scheduler. Its variables load into the processor registers on context switching. It has its own signal mask at the kernel.

Thread's signal mask
When unmasked, lets the thread activate and run.
When masked, the thread is put into a queue of pending threads.

Thread's Stack
A thread stack is at a memory address block allocated by the OS.

An application program can be said to consist of a number of threads or processes.

Multiprocessing OS
A multiprocessing OS runs more than one process.
When a process consists of multiple threads, it is called a multithreaded process.
A thread can be considered as a daughter process.
A thread defines a minimum unit of a multithreaded process that an OS schedules onto the CPU and allocates other system resources to.

Thread parameters
Each thread has independent parameters - ID, priority, program counter, stack pointer, CPU registers and its present status.
Thread states - starting, running, blocked (sleep) and finished.

Thread's stack
When a function in a thread in the OS is called, the calling function's state is placed on the stack top. When there is a return, the calling function takes the state information from the stack top.
A data structure having the information using which the OS controls the thread state; stored in a protected memory area of the kernel; consists of the information about the thread state.

Thread and Task
Thread is a concept used in Java or Unix.
A thread can either be a sub-process within a process or a process within an application program.
To schedule the multiple processes, there is the concept of forming thread groups and thread libraries.
A task is a process, and the OS does the multitasking.
A task is a kernel-controlled entity, while a thread is a process-controlled entity.
A thread does not call another thread to run. A task also does not directly call another task to run.
Multithreading needs a thread scheduler. Multitasking also needs a task scheduler.
There may or may not be task groups and task libraries in a given OS.
Task and Task States

Task Concepts

An application program can also be said to be a program consisting of tasks and task behaviors in various states that are controlled by the OS.
A task is like a process or thread in an OS.
The term task is used for a process in the RTOSes for embedded systems. For example, VxWorks and μC/OS-II are RTOSes which use the term task.
A task consists of an executable program (codes), the state of which is controlled by the OS; the state during running of a task is represented by information on the task status (running, blocked, or finished), the task structure (its data, objects and resources), and the task control block (TCB).
It runs when it is scheduled to run by the OS (kernel), which gives the control of the CPU on a task request (system call) or a message.
It runs by executing the instructions, and the continuous changes of its state take place as the program counter (PC) changes.
A task is that executing unit of computation which is controlled by some process at the OS scheduling mechanism, which lets it execute on the CPU, and by some process at the OS for a resource-management mechanism that lets it use the system memory and other system resources such as network, file, display or printer.
A task is an independent process. No task can call another task. [It is unlike a C (or C++) function, which can call another function.]
A task can send signal(s) or message(s) that can let another task run.
The OS can only block a running task and let another task gain access to the CPU to run the servicing codes.

Task States
(i) Idle state [not attached or not registered]
(ii) Ready state [attached or registered]
(iii) Running state
(iv) Blocked (waiting) state
(v) Delayed for a preset period
Idle (created) state
The task has been created and memory allotted to its structure; however, it is not ready and is not schedulable by the kernel.

Ready (active) state
The created task is ready and is schedulable by the kernel but is not running at present, as another higher-priority task is scheduled to run and gets the system resources at this instance.

Running state
The task is executing the codes and getting the system resources at this instance. It will run till it needs some IPC (input), waits for an event, or gets pre-empted by another task of higher priority than this one.

Blocked (waiting) state
Execution of the task codes suspends after saving the needed parameters into its context. It needs some IPC (input), or it needs to wait for an event, or it waits for a higher-priority task to block to enable running after blocking.

Deleted (finished) state
The created task has the memory de-allotted from its structure; it frees the memory. The task has to be re-created.

Function
A function is an entity used in any program, function, task or thread for performing a specific set of actions when called; on finishing the action, the control returns to the function-calling entity (a calling function or a task or process or thread).
Each function has an ID (name), has a program counter, and has its stack, which saves when it calls another function; the stack restores on return to the caller.
Functions can be nested: one function calls another, which can call another, and so on, and later the returns are in reverse order.
12. Explain the characteristics of ISRs and Tasks

Interrupt Service Routine
An ISR is a function called on an interrupt from an interrupting source.
Unlike a function, the ISR can have hardware- and software-assigned priorities.
Further, unlike a function, the ISR can have a mask, which inhibits execution on the event when the mask is set and enables execution when the mask is reset.

Task
A task is defined as an executing computational unit that processes on a CPU and the state of which is under the control of the kernel of an operating system.

Uses
ISR - for running, on an event, a specific set of codes for performing a specific set of actions for servicing the interrupt call.
Task - for running codes on context switching to it by the OS, and the codes can be in an endless loop for the event(s).

Calling Source
ISR - an interrupt call for running an ISR can come from hardware or software at any instance.
Task - a call to run the task is from the system (RTOS). The RTOS can let another higher-priority task execute after blocking the present one. It is the RTOS (kernel) only that controls the task scheduling.

Context Saving
ISR - each ISR is an event-driven function code. The code runs on a change in the program counter's instantaneous value. The ISR has a stack for the program counter's instantaneous value and the other values that must be saved. All ISRs can have a common stack in case the OS supports nesting.
Task - each task has a distinct task stack at a distinct memory block for the context (program counter's instantaneous value and other CPU register values in the task control block) that must be saved. Each task has a distinct process structure (TCB) at a distinct memory block.

Response and Synchronization
ISR - a hardware mechanism for responding to an interrupt for the interrupt source
calls, according to the given OS kernel feature, a synchronizing mechanism for the ISRs, and that can be with nesting support by the OS.

Structure
ISR - can be considered as a function which runs on an event at the interrupting source. A pending interrupt is scheduled to run using an interrupt-handling mechanism in the OS; the mechanism can be priority-based scheduling. The system, during running of an ISR, can let another higher-priority ISR run.
Task - is independent and can be considered as a function which is called to run by the OS scheduler using a context-switching and task-scheduling mechanism of the OS. The system, during running of a task, can let another higher-priority task run. The kernel manages the tasks' scheduling.

Global Variables Use
ISR - when using a global variable in it, the interrupts must be disabled, and after finishing the use of the global variable the interrupts are enabled (analogous to the case of a function).
Task - when using a global variable, either the interrupts are disabled and after finishing the use of the global variable the interrupts are enabled, or semaphores or lock functions are used in the critical sections, which can use global variables and memory buffers.

Posting and Sending Parameters
ISR - using IPC functions, can send (post) the signals, tokens or messages. An ISR cannot use mutex protection of the critical sections by waiting for the signals, tokens or messages.
Task - can send (post) the signals and messages; can wait for the signals and messages using the IPC functions; can use mutex or lock protection of a code section by waiting for the token or lock at the section beginning, and post the token or unlock at the section end.
13. Explain memory management function

Memory allocation
When a process is created, the memory manager allocates memory addresses (blocks) to it by mapping the process address space.
Threads of a process share the memory space of the process.
The memory manager of the OS should be secure, robust and well protected - no memory leaks and no stack overflows.
A memory leak means memory allocated to a process or data structure that is never freed, so that the usable free memory shrinks over time. (Writing into a memory block not allocated to a process is a related protection problem, a memory-access violation.)
Stack overflow means the stack exceeding the allocated memory block(s).
Memory Management after Initial Allocation
Memory Managing Strategy for a system
Fixed-blocks allocation
Dynamic -blocks Allocation
Dynamic Page-Allocation
Dynamic Data memory Allocation
Dynamic address-relocation
Multiprocessor Memory Allocation
Memory Protection to OS functions
Memory allocation in RTOSes
An RTOS may disable support for dynamic block allocation, MMU support for dynamic page allocation and dynamic binding, as these increase the latency of servicing the tasks and ISRs.
An RTOS may not support memory protection of the OS functions, as this also increases the latency of servicing the tasks and ISRs. User functions can then run in kernel space and run like kernel functions.
An RTOS may provide for disabling memory protection among the tasks, as such protection increases the memory requirement for each task.
Memory manager functions:
(i) management of the use of the memory address space by a process,
(ii) specific mechanisms to share a memory space,
(iii) specific mechanisms to restrict sharing of a given memory space, and
(iv) optimization of memory access periods by using a hierarchy of memories (caches, primary memory and external secondary magnetic and optical memories).
Remember that the access periods increase in the following order: caches, primary memory, then external secondary magnetic or optical memory.
Fragmentation problems in memory allocation
Fragmentation means non-contiguous memory addresses in the blocks allocated to a process.
Time is spent in first locating the next free memory address before allocating it to the process. A standard memory-allocation scheme scans a linked list of indeterminate length to find a suitable free memory block.
When an allotted block of memory is deallocated, time is spent in first locating the next allocated memory block before deallocating it from the process.
The times for allocation and deallocation of memory blocks are variable (not deterministic) when the block sizes are variable and when the memory is fragmented.
In an RTOS, this leads to unpredictable task performance.
Memory management example: RTOS µC/OS-II
Memory partitioning
A task must create a memory partition, or several memory partitions, by using the function OSMemCreate ( ). The task is then permitted to use the partition or partitions.
A partition consists of several fixed-size memory blocks.
Allocation and deallocation of the fixed-size memory blocks take a fixed time (deterministic).
OSMemGet ( ) — provides a task a memory block or blocks from the partition.
OSMemPut ( ) — releases a memory block or blocks back to the partition.
14. Explain the device management function
Device management involves:
a number of device-driver ISRs in the system — each device or device function has a separate driver, written as per its hardware;
software that manages the device drivers of each device;
providing and executing the modules for managing the devices and their driver ISRs;
effectively operating the devices and adopting an appropriate strategy for obtaining optimal performance from them;
coordinating between the application process, the driver and the device controller.
Device manager
A process sends a request to the driver by an interrupt, and the driver provides the actions by executing an ISR.
The device manager polls the requests at the devices, and the actions occur as per their priorities.
It manages the I/O interrupt (request) queues.
It creates an appropriate kernel interface and API that activates the control-register-specific actions of the device (it activates the device controller through the API and kernel interface).
It manages the physical devices as well as virtual devices, like pipes and sockets, through a common strategy.
Device management has three standard approaches, i.e., three types of device drivers:
(i) programmed I/O, by polling each device for its service need;
(ii) interrupt(s) from the device — the device-driver ISR; and
(iii) DMA operation, used by the device to access memory directly.
Most common is the use of device-driver ISRs.
Device Manager Functions
Device detection and addition
Device deletion
Device allocation and registration
Device detaching and deregistration
Restricting a device to a specific process
Device sharing
Device control
Device access management
Device buffer management
Device queue, circular-queue or blocks-of-queues management
Device-driver updating and upload of new device functions
Backup and restoration
Device types: character (char) devices and block devices
Set of command functions for device management
Commands for a device: create, open, write, read, ioctl, close and delete
I/O control commands for a device: (i) accessing specific partition information, (ii) defining commands and control functions of device registers, and (iii) I/O channel control
Three arguments in ioctl ( )
First argument: defines the chosen device and its function by passing the device descriptor (a number), for example fd or sfd — for example, fd = 1 for read, fd = 2 for write.
Second argument: defines the control option, or use option, for the I/O device — for example, the baud rate or another optional parameter function.
Third argument: the values needed by the defined function are passed as the third argument.
Example: status = ioctl (fd, FIOBAUDRATE, 19200) is an instruction in the RTOS VxWorks.
fd is the device descriptor (an integer returned when the device is opened); FIOBAUDRATE is the function that takes the value 19200 from the argument. This configures the device for operation at a 19200 baud rate.
Device-driver ISR functions
intLock ( ) — to disable device interrupts in the system
intUnlock ( ) — to enable device interrupts
intConnect ( ) — to connect a C function to an interrupt vector; the interrupt-vector address for a device ISR then points to the specified C function
intContext ( ) — to find whether the interrupt was called while an ISR was in execution
Unix OS functions
UNIX device-driver functions give devices and files an analogous implementation as far as possible: open ( ), close ( ), read ( ) and write ( ) are analogous to the file open, close, read and write functions.
APIs and kernel interfaces in BSD (Berkeley sockets for devices): open, close, read, write.
In-kernel commands:
(i) select ( ) — to check whether a read/write will succeed and then select;
(ii) ioctl ( );
(iii) stop ( ) — to cancel the output activity from the device;
(iv) strategy ( ) — to permit a block read or write, or a character read or write.
15. Explain the RTOS task scheduling models.
Common scheduling models
Cooperative scheduling of ready tasks in a circular queue; it closely relates to function-queue scheduling.
Cooperative Scheduling with Precedence Constraints
Cyclic scheduling of periodic tasks, and round-robin time-slice scheduling of equal-priority tasks
Preemptive scheduling
Scheduling using 'Earliest Deadline First' (EDF) precedence
Rate-monotonic scheduling (RMS), giving precedence to the task with the higher rate of event occurrence
Fixed-times scheduling
Scheduling of periodic, sporadic and aperiodic tasks
Advanced scheduling algorithms using probabilistic (stochastic) timed Petri nets or a multi-thread graph, for multiprocessors and complex distributed systems
Round Robin Time Slice Scheduling of Equal Priority Tasks
Equal Priority Tasks
Round robin means that each ready task runs turn by turn, only for a limited time slice, in a cyclic queue.
It is a widely used model in traditional OSes.
Round robin is a hybrid of the clock-driven model (for example, the cyclic model) and the event-driven model (for example, preemptive scheduling).
A real-time system responds to the event within a bounded time limit and within an explicit time.
(Figure: task program contexts at the five instances C1 to C5 in the time-slice scheduler — the programming model for cooperative time-sliced scheduling of the tasks, and the program-counter assignments on the scheduler calls to tasks at two consecutive time slices.)
Cycle time
Each cycle takes the time Tcycle, which is the same for every task:
Tcycle = {N * tslice} + tISR
where N is the number of tasks and tISR is the sum of the execution times of all the ISRs in the cycle.
For the i-th task (i = 1, 2, ..., N − 1, N), let the switching time from one task to another be sti and the task execution time be eti.
Worst-case latency
The worst-case latency is the same for every task in the ready list:
Tworst = {N * tslice} + tISR
where tISR is again the sum of all the ISR execution times.
VoIP Tasks Example
Assume a VoIP (Voice over IP) router.
It routes packets from N sources to N destinations; it has N calls to route.
Each of the N tasks is allotted a time slice and is cyclically executed to route a packet from a source to its destination.
Round Robin
Case 1: Each task is executed once per cycle and finishes within that cycle.
When a task finishes its execution before the maximum time it can take, there is a waiting period between its two runs.
The worst-case latency for any task is then N * tslice.
A task may need periodic execution; the period for the required repeat execution of a task is an integral multiple of tslice.
Case 2 (an alternative model strategy): certain tasks are executed more than once and do not finish in one cycle.
A task that takes an abnormally long time to execute is decomposed into two, four or more tasks.
Then one set of tasks (say, the odd-numbered ones) runs in one time slice, t'slice, and the other set (the even-numbered ones) runs in another time slice, t''slice.
Decomposition of the long-running task into a number of sequential states
The long-running task may instead be decomposed into a number of sequential states, or a number of node-places and transitions, as in a finite state machine (FSM).
One of its states or transitions then runs in the first cycle, the next state in the second cycle, and so on.
This reduces the response times of the remaining tasks, which now execute after each state change rather than after the whole long task.