Os Study Guide
7/28/2019 Os Study Guide
1/105
Chapter 1
Getting Started
Introduction
Welcome to COIT13152, Operating Systems. This chapter has
three main aims
To make you familiar with how COIT13152 will operate,
To revise some pre-requisite knowledge, and
To give an overview of the rest of the course.
There is a significant amount of reading this week.
There are a number of reasons why you should not let the amount
of reading stress you out or get you down. First off, it doesn't help
at all. So, in the words of Douglas Adams, "Don't Panic".
Second, much of the reading this week is a revision of material
you will have seen in other courses, particularly courses which
introduce how the hardware of a computer works. Third, the
remainder of the reading is generally an overview of the material
we are going to be covering this semester.
So, don't try to memorise all of the material you read this week.
Instead, aim to gain a basic understanding of what an operating
system is and what it does. Also you should make sure you have a
good understanding of how the hardware in a basic computer
operates. Lastly it is important that you are familiar with the
study resources available for you in COIT13152 (web site,
CD-ROM, online lectures, online animations etc.) and also what you
need to do to pass COIT13152.
Objectives
On completion of this chapter you will:
be aware of the requirements, resources, assessment and schedule for COIT13152, Operating Systems
have an idea about what an operating system is, what it
does and why it is important for a computing professional
to know about them
have an understanding of the history and development of
operating systems
know what the primary goals of an operating system are
have revised material about the hardware of a computer
-
7/28/2019 Os Study Guide
2/105
have gained an overview of the components and possible
structure of an operating system.
Resources
To complete the first week of work you will need:
Text book chapters 1, 2 and 3
Study guide chapter 1
Course Profile for COIT13152
online lectures 1, 2 and 3 (on the COIT13152 Web site and
CD-ROM)
an Internet connection and an email address
Why Learn About Operating Systems
Let's start by describing what COIT13152 won't do. COIT13152
will not show you how to use Windows 98/NT, UNIX or any other
operating system.
COIT13152 will show you:
how an OS (operating system) works
the algorithms and data structures that make up an OS
the problems, solutions and trade offs in designing an OS
how an operating system influences you as a computing
professional
The aim of COIT13152 is for you to ACHIEVE AN
UNDERSTANDING OF HOW AN OPERATING SYSTEM
WORKS.
So why would you want to learn about that? What possible good
will it do you as a computing professional to know the details of
how virtual memory works or process scheduling?
An operating system is an essential part of a computer. Without
an operating system the computer won't work. If the operating
system is unreliable or inefficient the computer will be unreliable
and inefficient and, more importantly, the people who use the
computer will not be able to perform the tasks they need to.
This is important because the main task of most computing
professionals is to help people use computers to complete tasks as
easily as possible. Either by writing programs (software
engineering) which enable people to complete their tasks or by
maintaining the systems (systems administration) which run these
programs. Knowledge of how operating systems work will help
you:
build software that is efficient and correct
Knowledge of how operating systems work can improve
the efficiency of your program.
decide which operating system you should purchase for a
client
There is a wide choice of operating systems each of which
is suitable for different purposes. Advertisers and
company sales people lie. As a computing professional
you should be able to understand what they are saying and
identify the lies.
figure out how to fix a computer that is behaving badly.
Picture it, your manager complains that his computer is
not very fast. What do you do? The obvious choice, buying
a faster CPU, may not always make any difference.
Buying more RAM might be a better solution. Why?
Knowledge of operating systems and virtual memory
would help with the answer.
The Course Profile
Before going any further please make sure you have read through
the COIT13152 course profile. It contains additional information
about this course.
Reading 1.1
Course Profile for COIT13152
Assessment and the Web Site
While the course profile for COIT13152 describes the assessment
for COIT13152 it does not include copies of actual assignments
you must complete. To obtain copies of the assignments and all
the latest information about COIT13152 you need to refer to the
COIT13152 website.
Reading 1.2
Home page on web site for COIT13152
Using the Mailing Lists
The main COIT13152 mailing list will be used as a forum for
students to ask questions and for announcements from teaching
staff about COIT13152. It is important that you subscribe to this
list and check email regularly. More importantly if you have
general problems with the material in COIT13152 use the mailing
list to ask questions.
This is probably as good a time as any for you to subscribe to the
COIT13152 mailing list. Instructions for doing this are on the web
site.
A Gentle Introduction to Operating Systems
That's enough preparation, let's get stuck into operating systems.
The first step taken by the textbook is to provide you with some
idea of what an operating system is and what it is supposed to do.
Reading 1.3
Textbook, Chapter 1, Introduction and Section 1.1
It's always difficult to teach operating systems. Operating systems
are complex interacting systems that include a number of different
systems. Each of these systems influences the other. This makes
it difficult to decide how to introduce operating systems. For
example, when you introduce process scheduling it is useful to
introduce memory management. However, both topics are too
complex to introduce together.
To add to the complexity of what you will encounter in this
subject are two common drawbacks of the operating systems field:
1 Same word, different meaning (or different word, same meaning)
It is common for different operating systems textbooks and
operating systems makers to use the same word but mean
totally different things. The opposite to this is when they use
two different words to mean the same thing. Learning to know
the difference and recognise it can be frustrating.
2 It depends
In COIT13152 you will be introduced to a number of
algorithms and explanations of how certain things work. They
don't always work this way. One of the reasons for this is that
in explaining these tasks they must be simplified, otherwise
they would be too difficult to learn. Another reason is that in
operating systems there is no one right way to do something.
There are lots of alternatives which each have their own benefits
and drawbacks.
What an operating system does
While finding it hard to define what an operating system does, the
reading offers a number of suggestions including that the
operating system is like government. This analogy includes the
following two major components:
resource manager
The operating system manages the resources of the
computer system and helps share these resources amongst
the objects using the computer.
control program
The operating system controls access to and operation of
some parts of the computer system to ensure reliability
and correctness.
What are the operating system's aims
The reading also suggests two primary aims for an operating
system
convenience for the user
Increasingly the aim with computers is to provide ease-of-use and simplicity for the people using computers to
achieve some task. In today's environment this is perhaps
the most important goal.
efficiency
A few years ago (not as long as you might think) the
primary aim of most operating systems was not user
convenience. Instead, because of the expensive nature of
computer hardware, it was efficiency. To get the most
bang for the bucks spent on the computer. While no longer
a primary aim of an operating system, efficiency is
something that must still be considered.
How things change
The point about the change in the focus of computing is an important
one. Not only just in the study of operating systems but also for
computing professionals in general. Technology is no longer the
most costly and important consideration. Computer hardware is
much cheaper today than before. It is people and their time which
is much more expensive.
When hardware was expensive and the time of people was cheap,
computing people and operating systems had to be smart about
how to get the most out of the hardware. Now that people are
expensive and hardware is cheap computing people and operating
systems have to be smart about how to get the most out of people.
Evolution
There are a number of additional goals that people can attribute to
an operating system. A common goal today is evolvability, the
ability to evolve or adapt to changes in computers.
Operating systems are a very complex collection of programs and
require a considerable amount of time and energy to create. By
their nature operating systems must know about and interact
closely with computer hardware. However, computer hardware
changes very quickly. These changes mean that an operating
system may also have to change and in some cases be completely
rewritten. A good operating system can grow to meet these
changes with a minimum of effort.
History
Operating systems such as Windows 98/NT and Linux didn't
spring straight from the fevered imaginations of modern operating
system designers. Instead they are the culmination of over 50
years experience with computers and operating systems. Actually
many of the underlying concepts and theories behind most modern
operating systems were first developed in the 1960s. That is how
little things have really changed.
To gain an understanding of what operating systems are and what
they do it is helpful to know how operating systems evolved into
what they are today. You are not expected to memorise the
following and be able to quote it back verbatim.
Reading 1.4
Textbook, Chapter 1, Section 1.2 and 1.3
Many of the observations which led to the development of batched
and multi-programmed batched systems still exist today and are
driving existing work. Some of these observations include:
the CPU is much faster than input/output (I/O) devices
it is more efficient if you can overlap the I/O of one job
with the CPU utilisation of another.
An important trend in computing is that as computers become
cheaper and more powerful you can cater more to the needs of
human beings. This is part of what led to the development of
time-sharing and personal computers.
A few years ago many of the concepts introduced in COIT13152,
Operating Systems, such as virtual memory or
multi-programming, could only be seen on large computers. The
personal computers used by students were too small and the
operating systems on these computers too primitive to use many
of the ideas. This is no longer the case. Windows 98/NT, the
most common operating system used by students, includes almost
all the features discussed in COIT13152. This change means that
it is now more important that future computing professionals be
familiar with the concepts introduced in COIT13152.
As you go through COIT13152 you will see a number of concepts
being repeated. One you have seen in readings so far is efficiency
versus features. Remember, one of the aims of an operating
system is to make efficient use of the resources available in a
computer. The limits of the hardware and the desire for efficiency
limits the features which are available. As those limits increase
(computers get more powerful) new features can be added, which
is why most modern computers provide graphical user interfaces
(GUIs).
Alternative Types of Systems
Up until now, most of your computing experience has been with PCs.
It is important that you come to a realisation that there is much
more to computing than personal computers. Many of you may end
up programming services intended for use by mobile phones and
other very non-PC like computers. The following reading
introduces you to some of the different types of operating systems
that are available.
Reading 1.5
Textbook, Chapter 1, Sections 1.4 through 1.10
Time and technology are again starting to catch up with the material
we cover in COIT13152. A few years ago parallel, distributed and
real-time systems were so advanced that most students could
never see the importance of them. These days you can see them
everywhere.
parallel systems
You can buy computers from Gateway and Dell which
have multiple CPUs (central processing units). Multiple
CPU computers are now widely used as disk and print
servers.
distributed systems
Electronic commerce (e-commerce) and the growth of the
Internet is increasing research and use of distributed
systems. In the not too distant future most of you will
have Internet agents that wander around the Internet
performing tasks for you, if you haven't already. A fairly
well-known computer scientist was overheard saying,
"The Web is just another type of distributed system".
real-time systems
Anyone driving a 1999 model Falcon (or Commodore) has
had an interaction with real-time operating systems.
Email Question
The following is an email question asked by a student in a
previous offering of COIT13152.
> Would it be a correct to say that MSDOS uses
> multiprogramming but not multitasking and that
> Unix uses both?
:) One of the "joys" of operating systems is that
there are so many levels. This of course means
there aren't always that many simple questions.
The simple answer to your question is: No it
would not be correct to say this.
The following attempts to explain why it isn't.
Also it tries to explain in a little more detail
the distinction between
"single"-programming
multi-programming
multi-tasking
I doubt that many people have really grasped the
difference as explained by the COIT13152 text.
I'll start off by trying to explain these terms
using operating systems most of you will be
familiar with.
Single programming -> MS-DOS
Multi-programming -> Windows 3.1 (sort of)
Multi-tasking -> UNIX, Windows NT, Windows 95/98
Program Execution
When a program is running (we are going to refer
to such an entity as a process, well most of the
time anyway) it "owns" the CPU. For a program to
be actually running (executing) the CPU is
performing instructions which "belong" to that
program.
The basic instruction/execution cycle is
demonstrated by one of the animations available
from the COIT13152 website.
What does the process do during I/O?
Your programs generally want to do one of two
things
execute on the CPU
perform input/output (I/O)
Whenever the current process (remember, this is the term
we're using for a program that is being executed) performs
some I/O operation (e.g. wants to write/read to disk, use
the printer, display something on the graphics card etc) it
stops using the CPU for quite some time.
The reason for this is that most I/O devices are
incredibly slow in comparison to the CPU. To
give you some idea here is an analogy we have
used in the Rocky tutes.
The CPU and RAM operate at nano-second speeds.
Most common hard-drives operate at millisecond
speeds. Most of you probably don't get just how much slower
hard-drives are than RAM. To help you understand this let's
change the times into
something most people are more familiar with.
Let's say that the CPU/RAM are as fast as Carl
Lewis (a famous American sprinter who won Olympic
Gold Medals).
This means that they can run the 100 metres in
around 10 seconds.
So, if we keep the relative speeds, how fast can
the hard-drive run the 100 meters?
If you do the math you will find that it takes the hard-drive
around 115 days to run the 100 metres. This should give you
some idea of how large the gap between RAM and the HDD is.
This is a problem. You really don't want the CPU
sitting around doing nothing for 115 days while
it waits for some information to be read/written
to disk. This is very wasteful.
Single programming is bad
This is exactly what happens in a single
programming operating system. Single programming
means that there is only ever one program
running. This ends up with a lot of wasted CPU
time and means that it can take longer to run
programs.
This is exactly what happens in DOS. DOS, as an
operating system, does not provide any support
for running more than one program.
There are a couple of fiddles you can do which
allow a DOS computer to run more than one
program but these features are not provided by
the operating system. In fact, you can only do
this because DOS isn't really an operating system
(at least by our definition).
DOS does not prevent programs from directly
accessing hardware.
It is this ability to directly access hardware
which allows these "workarounds" to run more than
one program.
In the end, we'll say that DOS is a single-
programming operating system. It only
supports/allows one program to be running at one
time.
Multi-programming
Solving this inefficiency (having the CPU do
nothing for 115 days) was the reason for the
development of multi-programming.
Rather than have the CPU sit around doing
nothing while some I/O is being done, a
multiprogramming operating system will keep a
pool of processes. When the current running
process asks to do some I/O it "gives up" the
CPU. It is replaced on the CPU with another
process.
Then when the new running process asks to do
some I/O (or some other task which means it
doesn't need the CPU) the multiprogramming
operating system chooses another process to go
onto the CPU.
The important point to note here is that a new
process is only placed onto the CPU when the
current process "gives up" the CPU.
This is a bit like what happens under Windows
3.1.
A problem with multi-programming
There is a problem with multi-programming. If I
include the following code in my process
while ( 1 )
{}
what happens?
Well, this is an endless loop. In a
multiprogramming operating system the current
running process is only replaced on the CPU when
it "gives up" the CPU by doing I/O (or other
tasks).
Once the endless loop gets going that is never
going to happen.
My process will hog the CPU and prevent anyone
else having a go.
Multi-tasking: the solution
Multi-tasking solves this problem.
In a multi-programming operating system the
current process decides when it will give up the
CPU.
In a multi-tasking operating system the operating
system decides when the current process will give
up the CPU.
A multi-tasking operating system makes use of the
system timer, a clock which generates an
interrupt every X time units. When the timer
interrupt occurs the operating system wakes up
and asks a simple question
"Has the current process had enough time on the
CPU?" If the answer is yes, then the operating
system removes the process from the CPU and
replaces it with another one.
This replacement happens many times per second.
Via this model it is a bit more difficult for the
CPU to be hogged.
Another Email Question
> What is the meaning in pg 15 of the text line 1
> multiprocessors can also save money compared to
> multiple single systems. What is the meaning of
> multiple single systems? I thought multiprocessors
> is multiple single systems with many main CPU!
The definition of a single system they are
referring to here is of a single computer:
including CPU, RAM, I/O devices, keyboard,
monitor, hard drive etc.
Your standard PC is an example of a single
system. But it is a single system with one CPU.
A multi-processor machine is a single system that
has many CPUs.
When it refers to multiple single systems it
means many single systems, like your PC, all
joined together using some sort of network which
are working together on the same problem.
An example of this is the Beowulf clusters which
are being worked on. Check out
http://www.beowulf.org/
The idea is similar to the NOW (Network Of
Workstation) idea. http://now.cs.berkeley.edu/
Both projects take standard, "off-the-shelf"
computers and join them together with special
networks.
They can be cheaper than a multiprocessor system
because they are using commodity equipment. The
sheer number of PCs which are sold makes them
very cheap, especially when compared to large
multiprocessor systems which might sell hundreds
if they are lucky.
Summary of the Introduction
Reading 1.6
Textbook, Chapter 1, Section 1.11
Exercises
Textbook, Chapter 1,
Exercises 1, 2, 4, 5, 8, 11
This brings an end to the section of this study guide chapter that
looks at chapter 1 of the textbook. You might find this to be a
good place to take a break from study (if you haven't already).
How a computer works
The operating system provides a link between the software you
write and use and the hardware of the computer system. This
means that the operating system must work closely with, and is
influenced by, computer hardware. To fully understand how an
operating system works it is important that you are familiar with
the operation of computer hardware.
But it is revision
Many of you may already have been introduced to how a
computer operates - even if you have, please take the time to read
through the following readings. Before you can fully understand
how an operating system works it is important that you
understand how the hardware works.
Why is this important? Well, the theory (taken from years of
research into teaching and learning by experts) is that you need to
construct a mental model that helps explain how a concept works.
If your mental model is faulty then you will find it difficult to
understand how a concept works. Since operating systems and
hardware are tightly related if you don't understand how computer
hardware works you won't be able to fully understand how an
operating system works.
So what does that mean for me? Don't try to memorise the facts
included in the following material. Instead, try to picture in your
mind how hardware works. A good test of how well you have
grasped this material is to attempt to explain to someone else (who
doesn't know) how a computer works.
Reading 1.7
Textbook, Chapter 2, Introduction and Section 2.1
Instruction/Execution Cycle
This is probably as good a place as any to revise the instruction
execution cycle. This is the cycle which the CPU of a computer
repeats many times a second. In its simplest form the instruction
execution cycle includes the following steps
fetch the instruction
In this step the CPU copies the content of the RAM
location at the address contained within the program
counter (the PC) into the instruction register (IR). Both
the PC and IR are registers on the CPU.
execute the instruction
In this step the CPU evaluates the instruction now in the
IR and performs the appropriate task.
increment the PC
This isn't really a full-fledged step of the instruction
execution cycle, however it is still important. The PC is
incremented so that it points to the location of the next
instruction to execute.
check interrupts
This is where the hardware checks to see if any interrupts
have occurred. This usually entails checking whether or
not a particular bit has been set on the CPU. If the bit is
set the computer jumps to a particular interrupt handling
routine. Generally speaking the hardware provides a small
part of the interrupt handling routine with the majority of
it provided by the operating system.
An animation of the instruction execution process is available on
the COIT13152 Web site. It's available under online lecture 2 or
via the animations page.
The instruction execution cycle discussed here is a very simplified
version. The IE cycle for modern computers is actually much more
complex and is designed to increase the performance of the system.
Interrupts generally fall into one of four categories:
software
Software interrupts are caused by system calls and
"trap-like" instructions. System calls are how user programs ask
the operating system to do some tasks for it. Some
examples of those tasks include reading/writing to a disk
or getting some memory.
hardware
Hardware interrupts are caused by I/O devices, generally
when they complete some form of I/O. For example, the
disk drive has just finished getting some data from the
disk.
error
A range of error conditions, such as divide by 0 or
memory access errors, result in error interrupts.
Error interrupts usually result in the current process being
killed.
timer
Timer interrupts occur regularly at a set time period and are
generated by the system timer. Timer interrupts are
heavily used in CPU scheduling which is talked about in
the following weeks.
Can you see what would happen if the operating system wasn't
there to handle interrupts? Think about the tasks you or other
computer programmers would have to handle if there wasn't an
operating system to handle interrupts.
I/O Structure
A lot of the effort expended by a computer is in input/output (I/O),
especially in this day of graphical user interfaces (GUIs). The
operating system has a major role to play in managing the I/O a
computer performs. The main reasons for this are
speed
I/O devices, when compared to the rest of computer
hardware, are very, very slow. Something has to balance
and schedule the allocation and use of these hardware
devices, and manage their working together.
variety
I/O devices are the most varied forms of computer
hardware. They range from disk drives through to
keyboards and many other strange devices. These I/O
devices vary in terms of speed, amount of information and
many other characteristics. Something has to hide this
variety, the operating system.
In order to understand this role it is important that you first have
an understanding of how I/O works.
Reading 1.8
Textbook, Chapter 2, Section 2.2
Two of the important points to come out of the previous reading,
which you will find repeated throughout the semester, are
busy waiting is bad
Having the CPU loop around for long periods of time
doing meaningless work is not an efficient use of the
computer's resources. Remember, one of the aims of an
operating system is to provide efficient resource
management.
overlapping I/O and CPU utilisation is good
This is again related to efficient resource management.
Having I/O devices and the CPU doing work at the same
time is making very efficient use of the computer's
resources.
Storage Structure
To do any work a computer must be able to store and retrieve
information. This is where the computer's storage media enter the
picture. The wide variety of characteristics of different storage
media means that the operating system has quite a task to perform.
Reading 1.9
Textbook, Chapter 2, Section 2.3
Storage Hierarchy
A typical computer has a number of different systems for storing
data and programs. The following reading talks about the
hierarchy of these systems and what's involved in balancing the
characteristics of the levels of this hierarchy to achieve reasonable
performance.
Reading 1.10
Textbook, Chapter 2, Section 2.4
Hardware Protection
Throughout the previous readings you have been introduced to the
benefits of overlapping I/O with CPU utilisation. Like most
things there is a down side to the benefits this provides. One of
those problems is the very fact that there are multiple jobs sharing
the system. Each of these jobs must be restricted to its own
resources to prevent inadvertent (or in the case of attempts to
break security, on purpose) intrusions into others. Any form of
protection provided by software (i.e. the operating system) can be
worked around. To be "really safe" this form of protection must
be provided by the hardware.
Reading 1.11
Textbook, Chapter 2, Section 2.5
Network Structure
This reading gives a brief overview of different types of networks.
Reading 1.12
Textbook, Chapter 2, Section 2.6
Exercises for Chapter 2
Exercises
Textbook, Chapter 2, questions 1-10
This is the last use of chapter 2 of the textbook. You might want
to use this as a place to take a break.
Operating-System Structures
Operating systems are amongst the largest and most complex
collections of algorithms and data structures. Windows 2000, the
latest version of Windows NT, has millions of lines of code. It's
obvious that a system this large must be broken up into smaller
components to make it easier to understand, design, implement
and test. The remainder of the reading for this week gives you an
overview of the structure of a typical operating system. It does
this by introducing you to 3 ways of looking at an operating
system
services
The operating system must provide a set of services to
programs.
interface
To access the operating system services user programs
make use of an application programming interface (a set
of functions) called system calls.
structure
How an operating system is designed and structured
influences its speed and expandability.
Operating System Components and Services
Even though you may not be aware of it, as a programmer you
have made use of a number of the services provided by the
operating system. This is one of the reasons why learning about
operating systems is important for programmers. If you are aware
of these services and how they are implemented you can make use
of these services to make programming simpler and your
programs more efficient.
The following reading gives you an overview of the standard
services that must be provided by most operating systems. The
sections in which they are introduced also happen to provide a
good overview of what we will be studying over the rest of the
semester. These sections are process management, memory
management, file management, I/O system and secondary storage
management, and protection and security.
Reading 1.13
Textbook, Chapter 3, Introduction and Sections 3.1 and 3.2
System Calls
As you might have concluded from the previous readings, the
services offered by an operating system are at a very low level. So
low that people don't interact directly with the operating system.
People interact with programs that offer much higher-level
services. It is the programs that make use of the services offered
by the operating system.
Operating system services are made available by system calls.
System calls are essentially function calls (not exactly, but close enough).
The set of system calls offered by an operating system define the
interface used by programs. This means that the only
requirement to run a Windows program is something that
implements the collection of system calls (plus a few extra
libraries) that a Windows program needs. This set of system calls
can be provided by any operating system or even another user
program (with some special configuration). More on this after the
reading.
Reading 1.14
Textbook, Chapter 3, Section 3.3
As was mentioned prior to the reading, the only distinction
between different operating systems that matters to programs
(remember, all programs see of the operating system is its set of
system calls) is the set of system calls each provides. It is this fact
that is making the distinction between one operating system and
another increasingly meaningless. It is now possible for a
particular operating system to run the programs originally
compiled for a completely different operating system.
An example system call interface
In most operating systems each system call is assigned a particular
number. These numbers are defined in one of the source code
files for the operating system. Figure 1.2 shows a section from asource file from a version of the Linux operating system. Linux
currently has about 163 different system calls. In this section you
can see the first 13. Most of these system calls are related to
manipulating files, directories and processes. You should be able
to make some connection between the Linux system calls in
Figure 1.2 and the types of system calls listed in Figure 3.2 of the
textbook.
#define SYS_exit     1
#define SYS_fork     2
#define SYS_read     3
#define SYS_write    4
#define SYS_open     5
#define SYS_close    6
#define SYS_waitpid  7
#define SYS_creat    8
#define SYS_link     9
#define SYS_unlink  10
#define SYS_execve  11
#define SYS_chdir   12
#define SYS_time    13
Figure 1.2. Linux system call numbers
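The numbers in Figure 1.2 can be exercised directly. On a Linux system, the C library's `syscall()` function invokes a system call by its number rather than through the usual wrapper. The sketch below (our illustration, not part of the study guide's original examples, and Linux-specific) shows getpid invoked by number:

```cpp
#include <sys/syscall.h>   // SYS_getpid and the other call numbers
#include <unistd.h>        // syscall(), getpid()

// Invoke the getpid system call by its number rather than through
// the usual C library wrapper.  Works on Linux only.
long pid_via_syscall()
{
    return syscall(SYS_getpid);
}
```

Calling `pid_via_syscall()` and the ordinary `getpid()` in the same process gives the same answer, because both end up executing the same kernel routine.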
A program and its system calls
Even the simplest of programs make heavy use of system calls.
One way to see the system calls used by a process running under
the Linux operating system is to use the strace command.
Figure 1.3 shows a simple C++ program (all it does is print "hello
world" onto the screen) and the output of the strace command
when running that program.
1 #include <iostream.h>
2 void main()
3 {
4     cout << "hello world" << endl;
5 }
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------
   ...                                             getpid
  0.50    0.000006           6         1           personality
------ ----------- ----------- --------- --------- ----------
100.00    0.001191                    53        10 total
As you can see from the output of the strace command, even this simple program used 53 system calls.
The following output from the strace command is from running
the Netscape Web browser, loading one page and then exiting.
You can see how the number of system calls really mounts up.
Over 29,000 system calls just to start Netscape and load a single
Web page.
[david@faile david]$ strace -c /usr/local/netscape/netscape
execve("/usr/local/netscape/netscape", ["/usr/local/netscape/netscape"], [/* 26 vars */]) = 0
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- -----------
 42.92    0.542800         333      1632           write
 16.90    0.213711          82      2614           oldselect
 10.76    0.136065          37      3701        53 read
  6.94    0.087808         570       154           writev
  4.58    0.057954           7      8034           gettimeofday
  4.06    0.051396           7      7621           sigprocmask
  2.56    0.032341          22      1468           brk
  2.25    0.028403        3550         8           fsync
  1.47    0.018642        6214         3           wait4
  1.33    0.016875          83       204        69 stat
  1.05    0.013249          97       137        18 open
  0.97    0.012251          70       174       171 access
  0.93    0.011713           9      1274           ioctl
  0.68    0.008547           7      1247         5 lseek
  0.35    0.004478          47        95           getdents
  0.35    0.004469        1490         3           fork
  0.30    0.003819          41        93           mmap
  0.29    0.003614          28       131           close
  0.24    0.003080         280        11           readv
  0.20    0.002555         134        19         9 connect
  0.18    0.002258          11       205         2 sigreturn
  0.15    0.001900           7       256           time
  0.10    0.001323          26        51           munmap
  0.09    0.001188         119        10           socket
  0.07    0.000850         213         4         1 unlink
  0.05    0.000605           7        84           fcntl
  0.04    0.000532          18        29           mprotect
  0.04    0.000479          10        50           fstat
  0.04    0.000470          26        18           lstat
  0.02    0.000264         132         2           rename
  0.02    0.000218          27         8           pipe
  0.02    0.000207          15        14           uname
  0.01    0.000168         168         1           symlink
  0.01    0.000149           9        17           sigaction
  0.01    0.000113         113         1           ftruncate
  0.00    0.000035          12         3           fchmod
  0.00    0.000026           9         3           geteuid
  0.00    0.000025           6         4           getpid
  0.00    0.000025           6         4           getuid
  0.00    0.000016           8         2           dup2
  0.00    0.000014           7         2           getgid
  0.00    0.000013           7         2           dup
  0.00    0.000012          12         1           setitimer
  0.00    0.000009           9         1           personality
  0.00    0.000008           8         1           setgid
  0.00    0.000007           7         1           setuid
  0.00    0.000006           6         1           getegid
------ ----------- ----------- --------- --------- -----------
100.00    1.264690                   29398       328 total
System Programs
Now we come to an area that is causing a great deal of trouble for
certain parts of the computer industry: system programs. The
following reading attempts to explain system programs.
Reading 1.15
Textbook, Chapter 3, Section 3.4
The question of whether or not user programs are part of the
operating system has been around for quite a while. Over the last
few years the United States Department of Justice has been
seriously examining Microsoft's strategies when it comes to
operating systems and user programs. In particular, how
Microsoft has apparently been trying to use the spread of its
Windows operating systems to gain market share for its Web
browser.
For the purposes of COIT13152 we will use the more restrictive
definition of an operating system. That is, a definition that does
not include system programs as part of the operating system. The
operating system is the data structures and algorithms that reside
under the system call interface.
System Structure and Virtual Machines
Appropriate design of the internals of an operating system is
essential to making the task of creating, debugging and maintaining
such a large collection of code manageable. The following reading
examines some of the details of how operating systems have been
structured.
The section on virtual machines leads into how, using Java, it is
possible to create programs that are truly architecture-independent
and therefore highly portable.
Reading 1.16
Textbook, Chapter 3, Sections 3.5 and 3.6
System Design, Implementation and
Generation
Implementing software as large, complex and fast-changing as an
operating system, over a long period of time and by smart people,
generates a lot of lessons. The following reading introduces a few
of these lessons.
Reading 1.17
Textbook, Chapter 3, Sections 3.7, 3.8 and 3.9, pp 78-83
So how does that help? One example is the separation of
mechanism and policy. This is an example of the separation of
concerns, a principle that has applications in a number of
computing fields. For example, it is very useful in the design of
web pages. A web site has a number of components:
content
The actual information that the site contains.
presentation
How the pages actually look.
structure
How the pages which make up the site are structured.
Most web sites do not separate these concerns; the pages generally
have all three mixed up together. Separating these concerns
increases the flexibility of a web site. If content and presentation
are separate you can apply different presentations to the same
content. For example, you might want to provide different
versions of a web page for people with different browsers without
having to rewrite the pages manually.
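The same separation applies inside an operating system. Here is a hedged sketch (the struct and function names are invented for illustration): the mechanism below picks the "best" job from a ready list, while the policy deciding what "best" means is passed in by the caller.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

struct Job { int pid; int burst; int priority; };

// Mechanism: scan the ready list and return the best job.  The
// policy -- how two jobs are compared -- is supplied by the caller.
Job pick_next(const std::vector<Job> &ready,
              const std::function<bool(const Job &, const Job &)> &better)
{
    return *std::min_element(ready.begin(), ready.end(), better);
}

// Two interchangeable policies for the one mechanism.
bool shortest_job(const Job &a, const Job &b)     { return a.burst < b.burst; }
bool highest_priority(const Job &a, const Job &b) { return a.priority > b.priority; }
```

Changing scheduling policy now means passing a different comparison function; the scanning mechanism is untouched.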
Exercises
Textbook, Chapter 3, questions 1-11
Summary
Well, you've done it. You have finally reached the end of this marathon
study guide chapter. Relax; take it easy, this is the largest amount
of reading you will have to do for the entire semester for
COIT13152, Operating Systems. So what have you learnt?
from chapter 1 of the textbook, you will have gained some
idea of what an operating system is, what it does and how
and why operating systems have developed over the 50
years computers have been around.
from chapter 2 of the textbook, you will have revised
some details about the operation of computer hardware
including the instruction-execution cycle, I/O, interrupts,
context switches, the storage hierarchy and hardware
protection.
from chapter 3 of the textbook, you will have a general
overview of an operating system including the
components of an operating system and their
responsibilities, the services an operating system provides
to user programs, the concept of system calls, the
difference between system programs and the operating
system, and how operating systems are designed and
implemented. Also in here was a look at how operating
systems are structured.
That was the overview. From next week we start looking at some
of the details of the components of an operating system.
Chapter 2
Processes
Introduction
On a modern computer system there can be many programs active
at any one time. It is the responsibility of the CPU (Central
Processing Unit) to actually execute the instructions for these
programs. However, with multiple programs somewhere between
start and finish at the same time, something has to keep track of all
these programs. Tracking these programs includes knowing
where they are up to, what they are doing and what resources they
own. One of the responsibilities of the operating system is to keep
track of this information. It does this using processes.
Almost all the work (and in some operating systems all the work)
performed is done by processes. In this chapter we define what a
process is and how an operating system deals with them. The
concepts this chapter introduces include
process states
During its lifetime a process will move through a number
of different states. Each state has its own characteristics and there are limitations on which state transitions make
sense.
process description
The operating system must be able to keep track of
processes and the resources they own. This information is
stored in a number of operating system data structures
including Process Control Blocks (PCBs).
operations on processes
There are a number of standard operations which are
performed on processes, including creation,
termination and switching.
process scheduling
Typically there are multiple processes on a computer
which must share the computer's resources, particularly
the Central Processing Unit (CPU). Process scheduling
deals with the decisions of how to share those resources
amongst processes.
threads
The concept of a thread is an extension of the process
concept and provides a number of benefits.
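The process description idea above can be sketched as a data structure. The fields below are invented for illustration; real PCBs vary widely between operating systems.

```cpp
#include <string>
#include <vector>

// A toy Process Control Block: identification, state, enough saved
// context to resume execution, and the resources the process owns.
struct PCB {
    int pid;                        // unique process identifier
    std::string state;              // "new", "ready", "running", ...
    unsigned long program_counter;  // where to resume execution
    std::vector<int> open_files;    // one kind of owned resource
};
```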
The importance of the process concept to operating systems
means that the next four chapters of the study guide, four weeks of
COIT13152, deal with process related concepts.
Objectives
At the end of this chapter you should have a good knowledge of
the process concept, including:
the different states a process can be in
what causes process transitions from one state to another
what data structures are used to identify and track
processes
what is a context switch, what is a process switch, how are
these terms related, how are they different
what is a PCB, what is it used for
what is a thread or lightweight process
what advantages threads provide
Resources
Textbook, Chapter 4
Study Guide Chapter 2
Online lecture 4 includes audio and animations
The Process Concept
So what is a process? What are process states? What is a process
control block (PCB)? These and other questions are answered by
the next reading from the text that introduces the concept of a
process.
Reading 2.1
Textbook, Chapter 4, Introduction and Section 4.1
Slide 3 from online lecture 4 includes an animation of Figure 4.1
in the textbook. The understanding of process state transitions is
vital; the animation shows how processes move from one process
state to another. In COIT13152 this is what we will refer to as a
process switch. Later in this chapter you will see how the
COIT13152 definition is different from (and narrower than) the definition
used in the textbook.
IT IS IMPORTANT THAT YOU UNDERSTAND THE
DEFINITION OF PROCESS SWITCH USED IN COIT13152
and that it is slightly different from the definition used in the
textbook.
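The transition diagram in Figure 4.1 can be captured as a table. The sketch below is one plausible encoding of the five-state model; the state names follow the textbook, but the function is ours, not the textbook's.

```cpp
#include <set>
#include <utility>

enum State { NEW, READY, RUNNING, WAITING, TERMINATED };

// Return true when the five-state diagram allows a move from one
// state to the other.
bool valid_transition(State from, State to)
{
    static const std::set<std::pair<State, State>> allowed = {
        { NEW,     READY },       // process admitted
        { READY,   RUNNING },     // dispatched by the scheduler
        { RUNNING, READY },       // interrupted, e.g. timer expiry
        { RUNNING, WAITING },     // requests I/O or an event
        { WAITING, READY },       // I/O or event completes
        { RUNNING, TERMINATED },  // exits
    };
    return allowed.count({ from, to }) > 0;
}
```

Note what the table forbids: a waiting process can never jump straight onto the CPU, it must pass through the ready queue first.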
The Operating System and Process States
One important aspect of the animation is that the operating system
is responsible for moving processes from one state to another.
You will see how later in the study guide.
When and how does the operating system run? In the animation you will see Olive, the dinosaur which is COIT13152's "mascot",
moving onto the CPU. Olive is a representation of the operating
system, or at least parts of the operating system.
At the very start of this chapter it was stated that "almost all the
work done on a computer is performed by processes". Is the
operating system a process?
The answer to this question is, "it depends". It depends on the
particular operating system you are talking about. Different
operating systems are implemented in different ways.
For the operating system in our animation, the answer is that the
operating system is not a process. This is represented by the fact
that Olive, the representation of the operating system, does not use
any of the process queues. In other operating systems there may
be a number of different processes which are used to perform
individual operating system tasks, including switching processes
from one state to another.
What this means is that if the operating system shown in the
animation of slide 3 used the "operating system as a process"
approach then Olive (the representation of the operating system)
would appear in some of the queues when it isn't on the CPU. In
the current animation the operating system is not a process. This
is why Olive just appears to hang around nowhere in particular
when she isn't on the CPU.
It depends
This is one of the examples of "it depends" that you will see
repeated throughout COIT13152. There are many different
operating systems which all perform these tasks in different ways.
In COIT13152, because you are just learning about operating
systems, we can't introduce all the different ways operating
systems perform tasks. It would be too complex.
Instead the textbook introduces the concepts in a simplified and
generalised way. This approach introduces you to the important
concepts of each part of an operating system without
overwhelming you with complexity. However, it is important that
you realise that very few operating systems will implement
processes, CPU scheduling or any other operating system related
concept in exactly the way discussed in COIT13152.
Process Scheduling and Operations
Process scheduling is one of the major tasks an operating system
must perform in its role as resource manager. The aim of process
scheduling is to maximise usage of the CPU and other system
resources such as I/O devices by switching between processes.
The following reading gives a brief overview of process
scheduling and some of the events which cause process switching. Remember, the COIT13152 definition of a process switch is
slightly different from that used by the textbook. This difference
is explained soon.
Reading 2.2
Textbook, Chapter 4, Section 4.2
The number of running processes on a computer always equals the
number of CPUs. Remember, the definition of the running state is
that the process is actually on the CPU being executed. Only one
process can be using a CPU at any one time. So the number of running processes equals the number of CPUs. Traditionally most
computer systems you will have seen have only the one main
CPU. However, the trend today is making computer systems with
multiple CPUs much more common, especially as servers for
organisations.
All the other processes in a system (remember there could be
hundreds even thousands) are in queues associated with the other
states discussed in Reading 2.2. This means that the processes
could be
blocked
Waiting for some I/O event to occur, e.g. the disk to finish
reading some information.
ready
All ready to execute but just waiting for its turn on the
CPU.
Process switch versus context switch
Process and context switching are closely related concepts, but
they are not the same, even though the textbook sometimes treats
them as the same. It is important that you understand that there is most definitely a difference between a process and a context
switch. In COIT13152 you will be expected to know this
difference and use it accordingly.
Context
First, let's define what the context actually is. All CPUs contain a
number of registers that provide information about the current
state of execution. Some of these were mentioned in the previous
study guide chapter and include such registers as the program
counter, instruction register and processor status word. It is this information that specifies the current execution state of the CPU.
Context Switch
A context switch involves saving the current context of the CPU
and replacing it with another context. What this does is saves the
current program/process, allows you to run another
program/process and then at a later stage return back to the
original process by restoring its context. As far as the original
program/process knows, nothing interesting has happened.
Context switches usually happen whenever there is an interrupt.
There are four general types of interrupts, defined in the previous
study guide chapter, software, hardware, error and timer.
The animation on slide 3 of online lecture #4 includes some very
obvious context switches. Whenever Olive, the dinosaur, is
placed onto the CPU it performs a context switch. The context of
the process that was running on the CPU is saved somewhere in
RAM and the context for the operating system (or at least the
context for the section of the operating system which will handle
the process switch), represented by Olive, is placed onto the CPU.
Once Olive has performed the necessary work, such as creating a
new process, another context switch occurs, saving Olive's context
and placing the context of another process onto the CPU.
Context is a hardware concept; it is tied to the CPU of the system.
Context switches are performed hundreds, thousands and even
millions of times a second and are usually supported by hardware
in some way.
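In miniature, a context switch is just a save followed by a restore. The struct below is a toy holding only the registers this chapter has mentioned; the field names are invented for illustration.

```cpp
#include <cstdint>

// A toy CPU context: just the registers this chapter has mentioned.
struct Context {
    std::uint64_t program_counter;
    std::uint64_t instruction_register;
    std::uint64_t status_word;
};

// Save the current CPU context into old_ctx, then load new_ctx.
// The displaced computation can later be resumed from old_ctx.
void context_switch(Context &cpu, Context &old_ctx, const Context &new_ctx)
{
    old_ctx = cpu;   // save the running context to memory
    cpu = new_ctx;   // restore the incoming context
}
```

Running the switch again with the arguments reversed puts the original computation back on the CPU, which is exactly the "nothing interesting has happened" property described above.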
Process switch
Process switching is where the current running process is replaced
by another process. This occurs in a number of situations
including
the current running process finishes and moves from the
running to the zombie/dead state
the current running process requests some I/O and moves
from the running to the waiting state
the current running process uses up its time on the CPU
and moves from the running to the ready state
You will have seen examples of this occurring in the animation on
slide 3 of lecture 4.
Since it involves processes (remember, processes are an operating
system construct), a process switch relies heavily on operating
system code and data structures. The steps involved in a process
switch include
interrupt occurs
All process switches will be initiated by an interrupt. When a
process finishes it usually executes a system call
(implemented as a software interrupt) called exit whichdestroys the process. When a processes share of time on the
CPU is finished this is indicated by a timer interrupt. When a
process moves from the blocked state to the ready state it is
usually because of an I/O interrupt. Moving from running to
blocked is usually because of a request to do I/O (a software
interrupt).
Since the process switch almost always starts with an interrupt, the first thing that happens is a context switch from
the current running process to the interrupt handler of the
operating system.
save the context of the current running process
restore the context of the operating system, in particular
the interrupt handler
The interrupt handler determines which part of the
operating system will handle the interrupt. Timer,
software and I/O interrupts will be handled by different
sections of the operating system. However, some part of these operating system sections will eventually initiate a
process switch which uses exactly the same steps.
change the PCB of the old running process to represent the
change in state
The process could be going from running to blocked,
zombie, or ready.
move the PCB of the old running process to the queue
associated with its new state
In many cases this will entail simply changing a pointer.
Remember, it is important that this process be very
efficient so it can be done quickly. Copying large amounts
of data (e.g. the PCB) every time there is a process switch
just wouldn't make good sense.
select another process from the ready queue
This is a very important step and one we examine in more
detail next week. It is important because it can take a long
time (by far the largest portion of a process switch) but
also because which process is chosen can directly
influence the performance of the system.
change the PCB of the chosen process to represent its new
state
"move" the new process from its current queue to running
replace the context of the kernel with that of the new
process
After this step the new running process starts off from
where it left off the last time it executed.
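The bookkeeping in those steps can be sketched in a few lines. This is a deliberately simplified model (FCFS selection, invented names); note that PCBs move between queues by pointer, they are never copied.

```cpp
#include <deque>
#include <string>

struct PCB { int pid; std::string state; };

std::deque<PCB *> ready_queue;   // PCBs waiting for the CPU
PCB *running = nullptr;          // the one running process

// A toy process switch: demote the running process to ready, then
// dispatch the PCB at the head of the ready queue.
void process_switch()
{
    if (running != nullptr) {
        running->state = "ready";        // update the old PCB's state
        ready_queue.push_back(running);  // move it (a pointer) to a queue
    }
    running = nullptr;
    if (!ready_queue.empty()) {
        running = ready_queue.front();   // select another process
        ready_queue.pop_front();
        running->state = "running";      // update the chosen PCB
    }
}
```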
Process and context switch: the relationship
A context switch doesn't necessarily mean a process switch occurs.
For example, one of the types of interrupts is the timer interrupt.
Most computer systems have a timer that generates an interrupt at
regular intervals. What usually happens at a timer interrupt is the
following
there is a context switch to an interrupt handler (part of the
operating system)
the interrupt handler decides whether the timer interrupt means that anything should change
if it decides no, then it restores the context of the
previously running process.
This is an example of where a context switch does not mean a
process switch.
However a process switch always involves a context switch
(perhaps even more than one). If you look through the rough list
of steps of what occurs during a process switch you will see that
there are a number of context switches within a process switch.
Operations on processes
So far we've looked at the states a process can be in and how a
process can move from one state to another. What we haven't
done so far is examine how processes are created and terminated.
This is where the next reading comes in.
Reading 2.3
Textbook, Chapter 4, Section 4.3
Creating a new process: a program
So how do you create a new process? The previous reading
mentioned the system calls fork and execve. The following
source code is for a simple UNIX C++ program which displays
its unique process identifier (PID) (remember, each process must
have a unique identifier so that the operating system can tell one
from the other), creates a child process, and has the child process
display its PID.
Explanation
The following is a quick explanation of the lines in the program.
Lines 1 and 2 are simple include statements which make sure we
have the appropriate include files for the cout (iostream.h),
fork (unistd.h) and getpid (unistd.h) functions.
Line 3 declares the two integer variables we are going to use to
store the process identifiers for the two processes we create.
Remember, in modern operating systems every process has a
unique process identifier, a number. The operating system uses
this process identifier to uniquely identify (that's a surprise) each
process. In this example we will be creating two processes: the
parent and the child.
Line 4 is where we actually get the process identifier for the parent
process. We use the standard UNIX library call getpid which
returns the integer identifier.
In Line 5 we simply display onto the screen a short message
indicating the process identifier for the parent process.
Line 6 is the "guts" of this program. It is also the line which will
be different from anything most of you will have seen before.
This line actually gets "executed" twice. Once in the parent
process and once in the child process. The sequence goes
something like this
the parent process executes the fork library call
fork actually returns twice, once for the parent and once for
the child. It is the fork library call which actually creates the
child process. The new child process is almost identical to the
parent process. It will have the same code and a copy of the
same data. Even its context will be a copy of the parent's. This means that the child is executing the same
program as the parent AND is up to the same place (just after
the fork). However, there is a slight difference.
the parent after fork
The fork function actually returns twice. Once for the parent
(which called the fork function) and once for the child process
(which the fork function created). In the parent process the
return value of the fork function will be equal to the process
identifier of the new child process. This makes it possible for
the program in the parent process to know who its children are.
So in our example program the parent process proceeds to
execute line 8.
the child after fork
The return value of the fork function in the child process is
equal to 0. So in our example program the child process will
proceed to execute line 7.
Line 7 is only ever executed by the child process and all it does is
display a message onto the screen saying that it is the child process.
Lines 8 and 9 are only ever executed by the parent process. All they do is display a message showing that the parent process
knows the process identifier of the child process.
1 #include <iostream.h>
2 #include <unistd.h>
3 void main() { int child_pid, parent_pid;
4     parent_pid = getpid();
5     cout << "parent pid is " << parent_pid << endl;
6     child_pid = fork();
7     if ( child_pid == 0 ) cout << "I am the child" << endl;
8     else cout << "parent: my child's pid is "
9               << child_pid << endl; }
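A variation on the program described above, assuming a POSIX system: here the parent uses the waitpid library call to block until the child finishes and to collect its exit status. The exit code 7 is an arbitrary value chosen for the demonstration.

```cpp
#include <iostream>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Fork a child, let it exit with code 7, and have the parent wait
// for it.  Returns the child's exit code, or -1 on any failure.
int fork_and_wait()
{
    pid_t child = fork();
    if (child < 0)
        return -1;                       // fork failed
    if (child == 0) {                    // child's copy of the program
        std::cout << "child pid is " << getpid() << std::endl;
        _exit(7);
    }
    int status = 0;                      // parent's copy continues here
    if (waitpid(child, &status, 0) != child)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Without the waitpid call the parent could finish first, leaving the child briefly in the zombie state described earlier.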
Reading 2.6
Textbook, Chapter 4, Section 4.6
Summary
Well, that's an overview of the process concept. Since the process
is how the operating system tracks all the work being done within
a computer system, it is a foundation principle of operating
systems. You will find much of the following weeks' work
difficult if you do not fully understand the process concept,
especially the difference between, and what happens during, a
process switch and a context switch.
Reading 2.7
Textbook, Chapter 4, Section 4.7
Exercises
Textbook Chapter 4: 1, 2, 4, 5, 6, 7 and 9
Threads
Until a few years ago the concept of a thread wasn't all that widely
implemented. However, over the last few years almost all the
modern operating systems, including versions of UNIX and
Windows NT, support multi-threading. What is a thread? How is it different from a process? Why are we worried about threads?
These are some of the questions answered in the following
reading.
Reading 2.8
Textbook Chapter 5
The concept of the process can be represented as having two
characteristics
resource ownership, and
Processes will own files, shared memory, I/O devices and
other resources from a computer system.
dispatch
Basically the location of execution (sometimes known as
the thread of execution), the current context and a stack.
In summary, this is where the process is currently up to.
Up until this section of the study guide we have been talking about
the older, heavyweight, process concept. This idea of a process,
which encompassed both resource ownership and dispatch, is the
traditional process concept which has been used for many years.
The move to threads/lightweight processes divides these two
characteristics into two separate entities. Resource ownership
remains with some form of process. The unit of dispatch is used
to define threads. Normally a process, the resource ownership
concept, will be associated with a number of threads which all
share the resources of the process.
Multi-threading provides a number of benefits including
easy sharing of resources
All the threads for a single process share the resources of
that process. This can be useful in a number of
programming problems, such as file and Web servers.
efficiency
The main reason for this is that it requires less time and
resources to switch between threads (basically a context
switch) than it does to switch between heavyweight
processes (a process switch).
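The resource-sharing benefit is easy to demonstrate with the standard C++ thread library (which post-dates this study guide; this is our sketch, not the textbook's code). Two threads of the one process update the same variable, something two separate processes could not do without explicitly arranged shared memory.

```cpp
#include <mutex>
#include <thread>

int shared_counter = 0;     // lives in the process; visible to all its threads
std::mutex counter_lock;    // serialises access to the shared counter

void add_n(int n)
{
    for (int i = 0; i < n; ++i) {
        std::lock_guard<std::mutex> guard(counter_lock);
        ++shared_counter;   // both threads touch the same memory
    }
}

// Run two threads over the shared counter and return the final value.
int run_two_threads()
{
    std::thread a(add_n, 1000);
    std::thread b(add_n, 1000);
    a.join();
    b.join();
    return shared_counter;
}
```

The mutex matters: without it the two threads' increments could interleave and updates would be lost, a problem taken up in the later chapters on synchronisation.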
Email Question
> I find the use of the term "user thread" a
little
> difficult to understand. I would have thought
that, if as I
> take it the "user" is a person, a "user
thread" would be very
> slow, because it would have to wait for
keyboard input. It is
> obvious that the textbook cannot mean this.
What the book is talking about is
user LEVEL threads
and its related concept
kernel LEVEL threads
The missing LEVEL may be what is throwing you.
Kernel level refers to stuff/operations/work
which happens within the kernel. This "stuff"
will be done with the system mode bit set. When
that system mode bit is set we are "in the
kernel".
User level refers to "stuff" which happens when
the system mode bit is NOT set, i.e. we are outside the kernel.
It doesn't mean users as in people.
User level is where the processes you and I run execute;
it is their permission level.
One of the advantages of user level threads is
that they can be a little more efficient because they
execute "at the same level" as the process.
There is no need to switch to kernel mode.
Chapter 3
CPU Scheduling
Introduction
In its role as resource manager the operating system plays an
important part in ensuring the efficiency of a computer. One of
the most important resources in a computer is the CPU. CPU
scheduling is the way in which the operating system manages the
CPU and in multi-programming systems shares it amongst
multiple processes. This chapter examines how and why CPU scheduling is used. It also introduces some of the algorithms that
are used to perform CPU scheduling.
CPU scheduling is how the operating system selects the next
process to become the running process during a process switch.
Remember, if there is only one processor, there can only be one
running process at any instant in time.
Objectives
By the end of this chapter you should be aware of
the scheduling algorithms FCFS, round-robin, SJF and the
use of feedback queues
indefinite blocking (also known as starvation and
indefinite postponement), why it occurs and how to solve
the problem
the difference between an I/O bound and CPU bound
process
the different criteria used to measure scheduling
algorithms
the difference between pre-emptive and non-pre-emptive
scheduling algorithms
Resources
Textbook, Chapter 6
Study Guide Chapter 3
Online Lecture 5
The Basics
The following reading introduces many of the basic concepts
which drive the design and development of CPU scheduling
algorithms.
Reading 3.1
Textbook, Chapter 6, Introduction and Section 6.1
Scheduling Criteria
There are a wide range of algorithms which can be used to
implement CPU scheduling. How do you decide which algorithm
is the most appropriate for your situation? There is a range of
criteria which can be used to evaluate the performance of these
algorithms. The following reading introduces these criteria.
Reading 3.2
Textbook, Chapter 6, Section 6.2
Scheduling Algorithms
The following reading introduces some of the basic algorithms
which can be used to implement CPU scheduling. The actual
algorithms used in real operating systems are much more
complex adaptations of the algorithms introduced in this reading.
The important part in this reading is being aware of the relative
advantages and disadvantages of each of the algorithms.
Reading 3.3
Textbook, Chapter 6, Sections 6.3 through 6.5
Email Question
> (In) the textbook there is an example of
> a preemptive SJF scheduler....
> Process    Arrival Time    Burst Time
> P1         0               8
> P2 1 4
> P3 2 9
> P4 3 5
>
> The average waiting time has been calculated as:
> ((10-1)+(1-1)+(17-2)+(5-3)) / 4 = 6.5 milliseconds
> I understand that for process P1 we calculate 10-1
> since process P1 started at time=0 and ended at
> time=1 and then recommenced at time=10.
> But what I DON'T understand is how did we get 1-1 for
> P2 in the equation to calculate average time and 17-2
> for P3 and 5-3 for P4.
The book has chosen a difficult-to-understand method of
representing this.
Waiting time is any time a process is ready to
run (i.e. it is in the ready queue) but it can't
get onto the CPU because there is another process
using the CPU. It is said to be waiting for the
CPU.
So to calculate the average waiting time you need
to figure out how long each process was ready to
run, but couldn't.
Let's look at each process in this example, one
by one.
Process 1
arrives at time 0 and executes on the CPU
until time 1
from time 1 to time 10 it is waiting to run
at time 10 it executes until time 17
the waiting time for process 1 is
10 - 1 = 9 milliseconds
Process 2
arrives at time 1 and executes on the CPU
until time 5
at this stage it has used up all its burst time
the waiting time for process 2 is 0
it never had to wait
Process 3
arrives at time 2 but doesn't execute on the CPU until time 17 (it is waiting all this time)
at time 17 it executes until time 26, at which time it is finished
the waiting time for process 3 is 17 - 2 = 15 milliseconds
Process 4
arrives at time 3 but doesn't execute until time 5
at time 5 it executes until time 10
the waiting time for process 4 is 5 - 3 = 2 milliseconds
Algorithm Evaluation
The following reading describes methods which can be used to determine which scheduling algorithm best suits a given system, and some
process scheduling models.
Reading 3.4
Textbook, Chapter 6, Sections 6.6 and 6.7
Deterministic modelling is one of the standard ways of testing
understanding of scheduling algorithms. You will notice in many
of the past exams for COIT13152 that there is almost always an
exam question which is an example of deterministic modelling.
Exercises 6.3 and 6.4 from the textbook are examples of
deterministic modelling.
Summary
Reading 3.5
Textbook, Chapter 6, Section 6.8
Exercises
Textbook Chapter 6
2, 3, 4, 5, 6, 8, 9, 10
Email Question
> I have a couple of questions about question 6.4 (c)
> from the textbook. Firstly, are we supposed to
> create a formula for calculating this or pseudo
> code for the algorithm?
Actually neither. As with parts a and b you are
meant to "create a deterministic model" of this
situation, which is just a verbose way of
saying that you have to simulate the algorithm.
So, just as you walked through the SJF algorithm
in part b to calculate the average turnaround
time, you need to do this again.
However, this time the CPU doesn't start
executing until time 1.0, when all the processes
in this example have arrived.
> Secondly, in the first part of the question
> it mentions that process1 is run at time 0, then
> later in the question it mentions that process1
> and process2 are waiting. I gather from this that
> process3 has already been processed.
Let's take a look at the information about these
processes which was given earlier in the
question
Process  Arrival Time  Burst Time
P1       0.0           8
P2       0.4           4
P3       1.0           1
This means that Process 1 arrives at time 0 and has
a burst time of 8. Since there are no other
processes in the system at this time, process 1 is
placed onto the CPU. It is the shortest job
available, so it goes first.
A burst time of 8 means that process 1 will finish
executing at time 8.0. By this stage both process 2
(arriving at 0.4) and process 3 (arriving at 1.0)
have arrived. At this time the shortest job will be
selected. This will be process 3 because it only
has a burst time of 1.
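The selection rule just described can also be replayed in code. Here is a small Python sketch of non-preemptive SJF (illustrative only; the function name and structure are my own). Once a process is placed on the CPU it runs to completion, and at each completion the shortest available job is chosen next:

```python
# Non-preemptive SJF for the example above: at each scheduling decision,
# pick the arrived process with the smallest burst time and run it to
# completion. A `start` parameter covers variations like part c of the
# exercise, where the CPU does not start executing until time 1.0.

def sjf_nonpreemptive(procs, start=0.0):
    """procs: list of (name, arrival, burst). Returns name -> (begin, finish)."""
    pending = list(procs)
    t = start
    schedule = {}
    while pending:
        arrived = [p for p in pending if p[1] <= t]
        if not arrived:
            # CPU is idle until the next arrival.
            t = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(arrived, key=lambda p: p[2])
        schedule[name] = (t, t + burst)
        t += burst
        pending.remove((name, arrival, burst))
    return schedule

procs = [("P1", 0.0, 8), ("P2", 0.4, 4), ("P3", 1.0, 1)]
sched = sjf_nonpreemptive(procs)
print(sched)   # P1 runs 0-8, then P3 (8-9), then P2 (9-13)
```

Running it confirms the order in the text: P1 first (it is the only arrival at time 0), then P3 at time 8, then P2. Calling `sjf_nonpreemptive(procs, start=1.0)` replays the part c variation.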
Chapter 4
Concurrency and its
Problems
Introduction
Concurrent processes are those that exist at the same time.
Concurrency happens all the time in the modern computer world.
Every time you are writing an email message while waiting for a
Web page to download you are making use of concurrency. Concurrency is a great advantage, but it introduces problems. In
this chapter we examine those problems and ways to solve them.
This chapter, and the next one, are important and different from
other chapters for a number of reasons:
increasing use of concurrency
A few years ago few programmers had to worry about
concurrency and the related problems. Today's modern
operating systems and other software systems make heavy
use of concurrency. It is increasingly likely that you will
have to write concurrent programs.
the OS perspective
Operating systems themselves must solve these
concurrency problems otherwise they will not be able to
perform correctly. Also it is usual for the operating system
to provide some of the tools which can be used to address
these problems.
more than the OS perspective
This and the next chapter involve you in more than
thinking about how the operating system implements
features. You will be expected to write a number of
programs which solve the problems associated with co-
operating, concurrent processes.
Based on previous experience in COIT13152, the concepts
introduced in this chapter are amongst the most difficult you will
face this semester. Concurrency problems can be very subtle and
hard to reproduce. However, understanding these concepts is
somewhat like climbing a hill. It is really difficult to start with,
but once you reach the top it is really easy. The moral of this story
is don't get frustrated. Stick with it, ask questions and write lots of
programs using concurrency. If you do this then eventually it all
will become clear.
To help you gain more experience with these problems there are a
number of concurrency problems (and sample solutions) included
at the end of this chapter. These problems are examples of
assignment and exam questions from over the last few years of
COIT13152.
Email Message
This is a copy of an email message sent to the COIT13152 mailing
list while students were working on the first assignment.
There are a number of concepts in computing that
I call brickwall concepts. When you are first
learning them it can feel like you are beating
your head against a brickwall.
Some example brickwall concepts include pointers
and recursion.
The nice thing about brickwall concepts (trust
me, it is a nice thing) is that if you beat your
head against the brickwall long enough you
break the brickwall. Once you've done this
understanding these concepts will be easy (apart
perhaps from the lingering headache).
Beating your head against the brickwall is an
analogy for reading lots of different
explanations, asking questions, thinking about
the concepts a lot and attempting a great many
practical exercises.
Most of you should currently be grappling with
another brickwall concept in computing:
concurrency.
Trust me, if you beat your head against the
brickwall enough so that you break the wall down
you will understand concurrency and find it quite
simple.
If you don't do enough to break that wall down
you will never really get concurrency.
So please, really get stuck in and beat your head
against the brickwall a lot. Over the next few
days I will be providing additional material in
the way of explanations and exercises. Please
make use of them.
Objectives
By the end of this chapter you will:
be aware of problems with co-operating, concurrent
processes
know what a race condition is
be aware of what a critical section is and why mutual
exclusion must be implemented on a critical section
be familiar with the difficulties of implementing mutual
exclusion with standard programming tools
be able to use tools such as test-and-set instructions and
semaphores to implement mutual exclusion and solve
problems requiring co-operating, concurrent processes
have had an introduction to some of the classical
concurrency problems such as readers/writers and the
dining philosophers
Resources
Textbook, Chapter 7
Study Guide Chapter 4
Online lecture 6 this lecture includes both audio and animations
and can be found on both the COIT13152 Web site and CD-ROM.
Optional: BACI is a system you can use to write concurrent
programs. It is available from the COIT13152 Web site and CD-ROM.
You may find writing BACI programs helps you understand
some of the concepts introduced in this chapter.
What is the problem?
So what is the problem? The program below is a simple example
of the problem. This program is a BACI program but is based
heavily on C++, so hopefully it looks familiar. This one program
starts three processes. Each process then proceeds to write to the
screen the message "hello from process X" (where X is either A, B
or C).
void say_hello( char id )
{
    cout << "hello from process " << id << endl;
}
$bainterp simple
Executing PCODE ...
hello from process Bhello from process hello from process AC
all finished

$bainterp simple
Executing PCODE ...
hello from process Chello from process Bhello from process A
all finished
See how the output is always jumbled. Since all three
processes (A, B and C) are sharing the same resource (the screen)
there is a problem: a race condition. A race condition is where
the end result of the program can differ depending on
which process wins the race.
A more generic statement of the problem is: how can two or more
processes gain access to a global resource in a SAFE and
EFFICIENT manner?
The first stage to solving this conflict of interest is to recognise the
importance of the global resource. This resource then has to be
manipulated with care.
Reading 4.1
Textbook, Chapter 7, Introduction and Section 7.1
Race Conditions
The problem discussed in the previous reading is an example of a
race condition. Here's another explanation.
Whenever two or more processes attempt to do a "complex"
operation on a global resource, there is a chance for disaster. A
complex operation is any operation that modifies the global entity,
but is achieved in a series of distinct, separable steps.
In the previous section we saw an example of a race condition
when three processes are writing quite a large message to the screen. You should be aware that part of the problem is that
writing a large message to the screen actually entails a large
number of machine instructions. Since there are many
instructions it is possible for the process to be interrupted half way
through writing the message.
This problem can be even more subtle. Consider the simple
operation of incrementing a variable in a C++ program:
x = x + 1;
In many computers, this actually consists of three individual and
distinct steps:
read the variable from memory into a CPU register
increment the contents of the register
write the contents of the register back into memory.
What happens when two processes (e.g. process A and process B)
try to carry out this update at approximately the same time?
Usually, nothing bad happens and x ends up being two greater
than it was (since two processes have incremented it). The
problem comes if the three component instructions are interleaved
with each other.
Consider some of the possible permutations in which these six
instructions can be carried out. There are in fact 20 different
combinations. The following shows two of those combinations
which cause problems. In the following read_A means process
A has done the read, inc_B means process B has incremented, and so on.
read_A; inc_A; read_B; inc_B; write_B; write_A;
This results in x being incremented by 1! One of the
increments is lost.
read_A; read_B; inc_A; inc_B; write_B; write_A;
This also results in x being incremented by 1! One of the
increments is lost.
In fact only 2 of the 20 possible permutations result in the correct
answer! These are the two orderings of read_A; inc_A; write_A;
read_B; inc_B; write_B; where one process completes
before the other starts.
These different permutations arise when there is a process switch
between any of the instructions, or if two CPUs are executing the
three instructions at the same time.
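The "only 2 out of 20" claim can be verified by brute force. The following Python sketch (illustrative only; the names are my own) enumerates every interleaving of the two processes' read/increment/write sequences, simulates one private register per process, and counts the outcomes:

```python
# Enumerate every merge of the two three-instruction sequences that
# preserves each process's internal order, then simulate the register
# semantics: read copies x into a private register, inc bumps the
# register, write copies it back to x.

from itertools import combinations

STEPS = ("read", "inc", "write")

def run(interleaving, x=0):
    reg = {"A": None, "B": None}   # one private CPU register per process
    for proc, step in interleaving:
        if step == "read":
            reg[proc] = x
        elif step == "inc":
            reg[proc] += 1
        else:  # write
            x = reg[proc]
    return x

# Choose which 3 of the 6 instruction slots belong to process A;
# process B's instructions fill the remaining slots in order.
interleavings = []
for a_slots in combinations(range(6), 3):
    seq = [None] * 6
    for step, slot in zip(STEPS, a_slots):
        seq[slot] = ("A", step)
    b_steps = iter(STEPS)
    for i in range(6):
        if seq[i] is None:
            seq[i] = ("B", next(b_steps))
    interleavings.append(seq)

results = [run(seq) for seq in interleavings]
print(len(interleavings))   # 20
print(results.count(2))     # 2  (only the two serial orders are correct)
print(results.count(1))     # 18 (every other interleaving loses an update)
```

Every one of the 18 interleaved orders loses an update, because whichever process writes last wrote a value computed from a stale read.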
Note, despite its name, race conditions normally occur when one
process stops! The problems occur when the process stops
between determining that it is 'safe' to do something and actually
doing it.
Any resource that could possibly be altered by ANY two or more processes at the same time has to be protected. All such items, be
they hard disk, files, printers, ... are called critical resources.
Solving the race condition
As you will see in the following readings you can solve this
problem by implementing mutual exclusion. To solve the race
condition for our example BACI program we could implement
mutual exclusion on the say_hello function. The end result of
this would be that only one process would be able to execute the
say_hello function at any one time. At the moment you can
have two or three processes trying to display their message at the
same time. This is how we get the jumbled output.
We do this by making the say_hello function indivisible or
atomic. This means that once a process gets into that function it
can't be interrupted by another. This means, on BACI at least, that
no other process will enter the function.
The simplest way to do this in BACI is to make the say_hello
function atomic. To do this you will need to change the source
code for race.cm. Change the line
void say_hello ( char id )
to
atomic void say_hello( char id )
Now, if you recompile and execute the program you should see
something like the following.
$bainterp simple
Executing PCODE ...
hello from process C
hello from process B
hello from process A
all finished

$bainterp simple
Executing PCODE ...
hello from process C
hello from process A
hello from process B
all finished

$bainterp simple
Executing PCODE ...
hello from process C
hello from process B
hello from process A
all finished
Can you see how using the atomic keyword ensures that only
one process is ever inside the say_hello function, and as a
result solves the problem of our jumbled output? We still have a
race condition though. Notice how the order of the output is
different depending on the speed of execution.
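For readers working outside BACI, the same effect can be achieved with an ordinary mutex. The following Python sketch is my own illustration (the names, and the character-at-a-time write that mimics the many machine instructions behind a single cout, are invented for the example): the whole of say_hello becomes a critical section guarded by a lock, so at most one thread is inside it at a time.

```python
# A mutex-based analogue of BACI's `atomic` keyword. Each thread writes
# its message one character at a time to a shared buffer; holding the
# lock for the whole function keeps the characters of different
# messages from interleaving.

import threading

output = []                 # shared "screen"
screen_lock = threading.Lock()

def say_hello(proc_id):
    with screen_lock:       # enter the critical section
        for ch in f"hello from process {proc_id}":
            output.append(ch)
        output.append("\n")

threads = [threading.Thread(target=say_hello, args=(p,)) for p in "ABC"]
for t in threads:
    t.start()
for t in threads:
    t.join()

lines = "".join(output).splitlines()
print(sorted(lines))
# ['hello from process A', 'hello from process B', 'hello from process C']
```

As with the atomic BACI version, the order in which the three messages appear still varies from run to run (the race on ordering remains), but each message always comes out intact.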
Critical Sections
A critical section is simply a section of code for which it is
necessary to implement mutual exclusion. In the BACI program
we used in the previous section the say_hello function is the
critical section. Mutual exclusion is where access to some
resource is limited to a set number of processes. In some
instances it may be desired to limit access to the resource to just
one process/person. In other instances you may wish to limit
access to a set number of processes/people.
The following reading introduces the requirements for
implementing a critical section and also examines some attempts to solve the critical section problem.
We've already seen one solution to the critical section
problem: the use of the atomic keyword in BACI. However,
the atomic keyword is not available in every language, so we have
to look for other solutions. It can be difficult to understand
how some of these solutions do (or don't) work. It may be
helpful to listen to the online lecture slides for this section.
Reading 4.2
Textbook, Chapter 7, Section 7.2
Hopefully the attempts to solve the critical section problem, to
implement mutual exclusion, have demonstrated to you how
difficult it is to correctly write concurrent programs which co-
operate or share resources. Simply running the programs and
seeing what happens is not sufficient. You may run a set of
programs hundreds of times without noticing a problem caused by
a race condition.
This is one of the reasons you must get into the habit of reading
through your programs and testing them on paper.
The Bakery Algorithm
The following is a copy of an email sent to the COIT13152
mailing list during 1999 which attempts to offer a further
explanation of the bakery algorithm, one of the software-based
solutions to the mutual exclusion problem.
Email Question
In a tute I ran today there seemed to be some common problems
with understanding the Bakery Algorithm. I'm hoping that the
following might make it a little easier.
The algorithm is meant to be similar to the approach used in
bakeries, delis and other stores where you
take a number as you enter
the serving person calls out the next lowest number
The implementation goes something like this (based on the textbook, pp. 162-163):
choosing : array [0..n-1] of boolean;
This simply means