Transcript of Chapter 1.3: Introduction. These slides, originally provided by your authors, have been modified by your instructor.

  • Slide 1
  • Chapter 1.3: Introduction. These slides, originally provided by your authors, have been modified by your instructor.
  • Slide 2
  • 1.2 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. 1.2.3 I/O Structure and DMA. Handling I/O in a computing system is complicated: insufficient attention to I/O, its access patterns, storage volumes, device types, etc. can create significant performance bottlenecks. There are many kinds of devices with varying speeds, and high-speed / large-volume transfers are usually accommodated differently from low-speed / low-volume transfers, with various mechanisms for handling each case. The impact of I/O on the overall performance and throughput of a computing system calls for careful design and implementation. A general-purpose computer system (our target at this time) consists of one or more processors and multiple device controllers for controlling and managing input/output operations. In general, one device (say, a disk) has a single controller assigned to it, but there may, in fact, be many such devices connected to a single device controller.
  • Slide 3
  • 1.3 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Device Controllers Again. In general, device controllers are responsible for moving data to/from storage media and their own local buffer storage. We looked at simple device controllers and low-speed / low-volume devices earlier, when we very briefly presented the concept of an interrupt. For transfers of large amounts of data, that implementation provides horrible performance. We want an approach that transfers large volumes of data and generates an interrupt only when the entire quantity of data has been transferred. Enter: Direct Memory Access (DMA).
  • Slide 4
  • 1.4 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Direct Memory Access (DMA). DMA is used for high-speed I/O devices able to transmit information at close to memory speeds. The device controller transfers blocks of data from its buffer storage directly to main memory without CPU intervention. The CPU is interrupted only when a complete block transfer is done, rather than being interrupted constantly as with keyboard input. The input/output device directly accesses primary memory to effect the data transfer (there is a nice drawing in your textbook). DMA allows the CPU to do other things, like execute other processes!! This significantly increases throughput in the computing system!! Remember: what a computer does best is compute, not input/output!
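
Below is a minimal, hypothetical sketch of what "the CPU only sets up the transfer and then handles one completion interrupt" can look like from the driver side. The register names, offsets, and base address are invented purely for illustration; they are not from the textbook or any real controller.

```c
#include <stdint.h>

/* Hypothetical memory-mapped DMA controller registers (addresses and layout
 * invented for illustration only; a real device defines its own register map). */
#define DMA_BASE        0x40001000u
#define DMA_SRC         (*(volatile uint32_t *)(DMA_BASE + 0x00)) /* device buffer address */
#define DMA_DST         (*(volatile uint32_t *)(DMA_BASE + 0x04)) /* main-memory address   */
#define DMA_COUNT       (*(volatile uint32_t *)(DMA_BASE + 0x08)) /* bytes to transfer     */
#define DMA_CTRL        (*(volatile uint32_t *)(DMA_BASE + 0x0C)) /* start / enable bits   */
#define DMA_CTRL_START  0x1u
#define DMA_CTRL_IRQ_EN 0x2u

/* CPU side: program the controller once, then go do other work. */
void dma_start_read(uint32_t device_buf, uint32_t mem_addr, uint32_t nbytes)
{
    DMA_SRC   = device_buf;
    DMA_DST   = mem_addr;
    DMA_COUNT = nbytes;
    DMA_CTRL  = DMA_CTRL_START | DMA_CTRL_IRQ_EN;  /* controller moves the block itself */
}

/* Invoked once, when the ENTIRE block has been transferred --
 * not once per byte or word as with programmed I/O. */
void dma_complete_isr(void)
{
    /* wake up the process that was waiting for this I/O */
}
```
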
  • Slide 5
  • 1.5 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Computer System Architecture: a first look at the processors. Single-processor (general-purpose) systems: most systems have a single main processor that executes a single instruction set. What is meant by a single instruction set? Explain (not in book). All computers also have other, smaller special-purpose processors with limited instruction sets, such as those in device controllers; their instruction sets essentially deal only with transferring data. These special-purpose processors do not run user programs at all. Oftentimes these processors receive their instructions or initial parameters (addresses, etc.) from the central processor to undertake certain operations; the main processor then lets the specialized processors handle their own execution, buffering, etc. These processors run autonomously and asynchronously from the CPU.
  • Slide 6
  • 1.6 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Computer System Architecture: Multiple Processors (that is, multiple CPUs). Multiprocessor systems are sometimes called parallel systems or tightly coupled systems. They involve complicated issues: how CPUs operate autonomously; how CPUs share memory; how CPUs share processes, or how processes are allocated to specific processors; how devices are shared among processors; and whether all processors are equal or there is a pecking order among them.
  • Slide 7
  • 1.7 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Computer System Architecture: Multiple Processors (that is, multiple CPUs). They do have three features in common. 1. Increased throughput. Be careful: N processors does not mean N times the throughput, as there are contention problems, shared-resource problems, and communication issues among processors that must be accommodated. 2. Economy of scale. It is generally less expensive to have one multiprocessor system than multiple single-processor systems, because the processors can share memory, devices, etc. But there are hosts of related issues with this feature too. 3. Increased reliability. The failure of one CPU, and its load, can be absorbed by the other CPUs. Associated terms here include graceful degradation and fault-tolerant computing, where failures are absorbed by other processors. There are numerous issues here; some will be discussed later.
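
To make "N processors does not mean N times the throughput" concrete, one common bound (not named in the slide, added here only as an illustration) is Amdahl's law: if a fraction s of the work is inherently serial, the speedup with N processors is at most 1 / (s + (1 - s) / N). A tiny C calculation:

```c
#include <stdio.h>

/* Amdahl's law: speedup with n processors when a fraction `serial`
 * of the work cannot be parallelized. */
static double speedup(double serial, int n)
{
    return 1.0 / (serial + (1.0 - serial) / n);
}

int main(void)
{
    /* Even with only 5% serial work, 8 CPUs give well under 8x. */
    for (int n = 1; n <= 8; n *= 2)
        printf("n = %d  speedup = %.2f\n", n, speedup(0.05, n));
    return 0;
}
```
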
  • Slide 8
  • 1.8 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Computer Systems Architecture: the Processors. Two types of multiprocessor systems. Let's look at the processors and their relationships to each other: 1. Asymmetric multiprocessing: master / slave relationships. 2. Symmetric multiprocessing: all processors are peers.
  • Slide 9
  • 1.9 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Asymmetric and Symmetric Processors. In asymmetric multiprocessing (master/slave), processors are assigned specific tasks: one processor acts as the main processor and both schedules and allocates tasks to the other processors. In some configurations, the other processors have permanently assigned tasks. In a symmetric multiprocessing (peer) configuration, all processors are peers. Consider Solaris, a commercial version of Unix designed by Sun Microsystems: there may be many processors, all running the same operating system, and all running simultaneously. This means that if there are n processors, n processes can be in execution at the same time without significant degradation of performance. But this last claim is arguable! There can be significant inefficiencies in this arrangement, and the efficient sharing of common resources can be a source of problems and certainly of complexity. Almost all modern operating systems (Windows, Mac OS, Linux, etc.) provide support for symmetric multiprocessing.
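
As a small, hedged aside: on Linux, the BSDs, and Solaris, a user program can ask how many processors are online with sysconf. This is a common extension rather than strict POSIX, and it is shown only to ground the idea that the OS is managing several CPUs at once.

```c
#include <stdio.h>
#include <unistd.h>

/* Query how many CPUs the OS currently has online.
 * _SC_NPROCESSORS_ONLN is a common extension (Linux, BSD, Solaris),
 * not strict POSIX, so guard against failure. */
int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpus < 1)
        printf("could not determine processor count\n");
    else
        printf("%ld processors online\n", ncpus);
    return 0;
}
```
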
  • Slide 10
  • 1.10 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Running Symmetric / Asymmetric Processors. This may involve hardware or software augmentation, or both. Special hardware may be used to differentiate the processors; alternatively, the software can be written to recognize a single master and multiple slave processors. Different versions of the same operating system can be written to support symmetric or asymmetric processing: SunOS Version 4 provides asymmetric multiprocessing, while SunOS Version 5 (Solaris) provides symmetric multiprocessing, and both are able to run on the same hardware.
  • Slide 11
  • 1.11 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. 1.4 Operating System Structure. Multiprogramming is needed for efficient overall use of the computer: a single user cannot keep the CPU and I/O devices busy at all times. The very nature of the CPU is oriented around computation, and it is fast! I/O often involves electro-mechanical operations (disk), at least in part, and I/O devices are orders of magnitude slower than the CPU. We simply cannot accept this disparity if we are to make efficient use of an expensive resource while providing a satisfying performance / response environment to users. Multiprogramming organizes jobs (code and data) so the CPU always has one to execute. We are trying to get high CPU utilization! Remember: what a CPU does best is compute, so keep the CPU busy!! A subset of the total jobs in the system (the job pool) is kept in primary memory. One job is selected and run via job scheduling; when it has to wait (for I/O, for example), the OS switches the CPU to another job. The CPU is transferred from job to job while each of them awaits some non-computing need, such as an input/output operation, or awaits the CPU itself. Shortcoming: this does not provide significant user interaction with the system.
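
A rough, standard approximation (not taken from these slides) shows why keeping several jobs in memory raises CPU utilization: if each job spends a fraction p of its time waiting on I/O and the waits are assumed independent, the CPU is idle only when all n resident jobs are waiting, so utilization is roughly 1 - p^n.

```c
/* link with -lm */
#include <stdio.h>
#include <math.h>

/* Approximate CPU utilization with n jobs in memory, each spending
 * fraction p of its time waiting on I/O (waits assumed independent). */
static double utilization(double p, int n)
{
    return 1.0 - pow(p, n);
}

int main(void)
{
    double p = 0.80;                       /* I/O-bound jobs: 80% waiting */
    for (int n = 1; n <= 8; n++)
        printf("jobs = %d  CPU utilization = %.0f%%\n", n, 100.0 * utilization(p, n));
    return 0;
}
```
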
  • Slide 12
  • 1.12 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Operating System Structure (continued). Timesharing (multitasking) is a logical extension in which the CPU switches between jobs so frequently that users can interact directly with each job while it is running, creating interactive computing. Response time should be under 1 second. Each user has at least one process in some form of execution, which gives the user the impression of a dedicated resource. Access appears to be simultaneous; not exactly, but the computer is, in fact, being shared among many users at once. In time-sharing, transactions are normally quite short.
  • Slide 13
  • 1.13 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Operating System Structure (continued). A time-shared OS uses CPU scheduling and multiprogramming to provide users with small time slices. A process continues to execute (typically) until it requests I/O (interactive input from the user or I/O from a storage device), terminates, or exhausts its time slice. Such a request is called a system call and results in a trap, but much more on that later. There is much to learn about (we have chapters devoted to these topics, especially Chapter 3 on Process Management): job scheduling (choosing which processes waiting in the job queue are brought into memory), CPU scheduling (deciding which ready job, from the ready queue, to run), and context switching (transferring CPU control from one process to another to effect concurrent execution and efficient use of a computer's resources; a process with some overhead!).
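
On Linux and the BSDs (an OS-specific illustration, not something the slide prescribes), a process can observe context switching happening to it via getrusage: voluntary switches occur when it blocks (for I/O, say), involuntary ones when its time slice runs out.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage ru;

    sleep(1);  /* blocking: the process gives up the CPU voluntarily */

    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        /* ru_nvcsw / ru_nivcsw are maintained on Linux and the BSDs */
        printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
        printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
    }
    return 0;
}
```
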
  • Slide 14
  • 1.14 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Memory Layout for a Multiprogrammed System. In order to bring about multiprogramming and time sharing, several jobs must (ideally) be in memory and ready to run when/if the CPU is dispatched to run them. Swapping is a feature that allows jobs to be loaded into and out of memory (from disk). Virtual memory is a feature that allows processes to execute even when the entire program is not in physical memory; it allows execution of large programs and creates a large, uniform storage array, differentiating logical memory from physical memory. The user is no longer concerned with the physical limitations of real memory. A significant number of very serious problems are encountered when resources are shared, such as deadlocks, races, and the effective management of disk space, permissions, and more!
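
A small POSIX sketch of the logical-versus-physical memory idea, assuming a 64-bit system: mmap reserves a large virtual region up front, and physical frames are supplied only when pages are actually touched. The sizes are arbitrary, and MAP_ANONYMOUS is a widespread extension rather than strict POSIX.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = (size_t)8 * 1024 * 1024 * 1024;  /* 8 GiB of virtual space (64-bit assumed) */

    /* Reserve virtual address space; no physical frames are committed yet. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Touching a page causes a page fault; only then is a frame allocated. */
    memset(p, 0xAB, 4096);          /* first page only */
    printf("mapped %zu bytes, touched one page: %d\n", len, p[0] & 0xff);

    munmap(p, len);
    return 0;
}
```
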
  • Slide 15
  • 1.15 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Operating-System Operations and Modes of Operation; Traps and Interrupts. Modern operating systems are interrupt-driven. Interrupts are hardware initiated. Traps (exceptions) are typically software-generated events: division by zero, a request for an operating system service (read, write, open, fork, and more), or an attempt to access a forbidden area of memory. Interrupts (of various kinds) are handled by interrupt handlers (interrupt service routines), one for each kind of interrupt. Traps resulting from system calls are likewise handled by special routines for those services. Much more in Chapter 3 and beyond.
  • Slide 16
  • 1.16 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Dual-Mode Operation: User and Kernel Modes. Dual-mode operation allows the OS to protect itself and other system components. Application programs must run in user mode; operating system code runs in kernel mode (also called privileged mode or supervisor mode). In kernel mode, privileged instructions can be executed, protected areas of memory (flag registers, status indicators, etc.) can be accessed, and direct communication with devices and other low-level concerns can be accommodated. Importantly, in kernel mode the operating system can execute all available instructions. In user mode, the user requests services from the operating system via system calls: request a read from a file, a write to a file, opening a file, etc. Users are not allowed to directly perform the reading of a file themselves. In user applications, requests for services are issued (read, write) that result in system calls, and these cause a transition to kernel mode. Typically, the mode bit is set (to 0) so that privileged instructions can be executed; when the privileged work is completed, the OS transitions back to user mode (sets the mode bit back to 1).
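
A plain user-mode program that grounds the point above: each open, read, write, and close below is a system call, and each one crosses into kernel mode and back. These are standard POSIX calls; the file path is arbitrary, any readable file will do.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[128];

    int fd = open("/etc/hostname", O_RDONLY);   /* system call -> kernel mode */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    ssize_t n = read(fd, buf, sizeof buf);      /* system call -> kernel mode */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);   /* system call -> kernel mode */

    close(fd);                                  /* system call -> kernel mode */
    return 0;                                   /* everything else runs in user mode */
}
```
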
  • Slide 17
  • 1.17 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Dual-Mode Operation: All about Protection! We must protect the OS from accidental or inappropriate access by user programs. Privileged instructions can only be executed in kernel mode (mode bit = 0). Unauthorized attempts to execute privileged instructions cause a trap to the OS; much more later. Please recognize that application programs usually make many system calls (like requesting an I/O) during standard execution, and these usually take the form of a trap. Interrupts and traps are mechanisms used to provide needed services to users and the operating system while providing protection, monitoring, and control.
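
A deliberately failing, x86-and-GCC/Clang-specific sketch of "unauthorized attempts to execute privileged instructions cause a trap": hlt is a privileged x86 instruction, so executing it in user mode makes the hardware trap to the OS, which on Linux typically terminates the process with SIGSEGV.

```c
/* x86-only demonstration: executing a privileged instruction in user mode.
 * The hardware raises a general-protection fault; the OS (Linux) then
 * delivers a fatal signal rather than letting the instruction run. */
int main(void)
{
    __asm__ volatile ("hlt");   /* privileged: allowed only in kernel mode */
    return 0;                   /* never reached */
}
```
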
  • Slide 18
  • 1.18 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Dual-Mode Operation: All about Protection! Note: attempts to execute privileged instructions (mode violations) are detected by the hardware, which will trap to the operating system; the trap transfers control through the interrupt vector. Remember: an interrupt is a hardware-generated change of flow within the system. An interrupt handler is then executed to deal with the cause of the interrupt, and control is returned to the interrupted context and instruction. Remember: a trap is a software-generated interrupt. An interrupt can be used to signal the completion of an I/O, for example; a trap can be used to call operating system routines or to catch arithmetic overflow errors.
  • Slide 19
  • 1.19 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Example: Trap versus Interrupt, a bit more. An interrupt is the OS reaction to an event that occurs outside of the current program's execution. A trap is the OS reaction to an event that occurs as a result of the current program's execution. So, if the program divides by 0, that is caught via a trap; if the time slice is exhausted, the current process is stopped via an interrupt. The timer is external to the program, whereas the division by 0 is internal to the program.
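
On many Unix-like systems (x86 Linux in particular) the divide-by-zero trap described above is reflected back to the process as the SIGFPE signal, which a program can observe. This is only a demonstration: integer division by zero is undefined behavior in C, and the volatile divisor merely keeps the compiler from optimizing the division away.

```c
#include <signal.h>
#include <unistd.h>

static void on_fpe(int sig)
{
    (void)sig;
    /* Async-signal-safe reporting, then exit: the faulting instruction
     * cannot simply be resumed. */
    const char msg[] = "trap: divide by zero (SIGFPE)\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main(void)
{
    signal(SIGFPE, on_fpe);

    volatile int zero = 0;      /* volatile so the division really executes */
    volatile int x = 42 / zero; /* hardware trap -> kernel -> SIGFPE        */
    (void)x;
    return 0;
}
```
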
  • Slide 20
  • 1.20 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Transition from User to Kernel Mode. We need to be able to manage the CPU and related resources. A user process is running and performing computations. The user process attempts a divide by zero; this results in a trap to the operating system, which handles the error in kernel mode.
  • Slide 21
  • 1.21 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Overview of Management Activities: processor, memory, storage, and input/output subsystems.
  • Slide 22
  • 1.22 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Process Management. A process is a program in execution; it is a unit of work within the system. A program is a passive entity (it could be the contents of a file stored on disk); a process is an active entity. Upon initiation, a process needs resources to accomplish its task: CPU, memory, I/O, files, and initialization data (e.g., parameters). When, for example, the operating system is given the name of a file, a number of system calls will be executed to open, read, display, etc. as required. Process termination requires the OS to reclaim any reusable resources.
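
A minimal POSIX sketch of the life cycle described above: the parent creates a process with fork, the child becomes a different program with exec, and the parent reclaims its resources with waitpid. The "ls" program is just an arbitrary example of something to run.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* create a new process            */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                     /* child: run a different program  */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* only reached if exec fails      */
        _exit(127);
    }

    int status;
    waitpid(pid, &status, 0);           /* parent: reclaim child resources */
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}
```
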
  • Slide 23
  • 1.23 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Single-threaded and Multi-threaded Processes. Single-threaded processes (for now) have a single program counter specifying the location of the next instruction to execute; the process executes instructions one at a time, until completion. (Discuss program counter / instruction register.) Multi-threaded processes have one program counter per thread, a different animal! Typically the operating system has many processes running concurrently: processes may be user processes or operating system processes, running on one or more CPUs. So very much more later.
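
A small pthreads sketch of "one program counter per thread": two threads execute the same function concurrently inside one process, each with its own execution state. Compile with -pthread; threads are treated properly much later in the course.

```c
#include <pthread.h>
#include <stdio.h>

/* Each thread executes this function with its own program counter and stack. */
static void *worker(void *arg)
{
    const char *name = arg;
    for (int i = 0; i < 3; i++)
        printf("thread %s: step %d\n", name, i);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "A");
    pthread_create(&t2, NULL, worker, "B");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```
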
  • Slide 24
  • 1.24 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Process Management Activities. First and foremost, the operating system is a manager of resources. This is primo! As a manager of system resources, the OS must: create and delete both user and system processes; suspend and resume processes; provide mechanisms for process synchronization; provide mechanisms for process communication; and provide mechanisms for deadlock handling. Briefly discuss; much more detail in the chapter on Process Management!
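
One concrete mechanism behind "provide mechanisms for process communication" is a pipe. This POSIX sketch (one of several possible IPC mechanisms, chosen only as an example) has a parent send a short message to its child.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {                 /* fds[0]: read end, fds[1]: write end */
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                      /* child: read the parent's message */
        char buf[64];
        close(fds[1]);
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        return 0;
    }

    const char msg[] = "hello from parent";
    close(fds[0]);                       /* parent: write, then wait for child */
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    waitpid(pid, NULL, 0);
    return 0;
}
```
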
  • Slide 25
  • 1.25 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Memory Management. Main memory (primary memory; RAM) is a large array of words or bytes, ranging in size from hundreds of thousands to billions! Instructions must be in memory in order to execute, as discussed: the CPU reads instructions from memory one at a time, decodes them, and executes them using registers in the CPU. Primary memory is the only large storage device the CPU can access directly. Data on disk must first be transferred into memory by CPU-initiated I/O calls; instructions for, say, user processes must likewise be transferred from disk into memory before they can be fetched and executed by the CPU.
  • Slide 26
  • 1.26 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. More. Program instructions and data must be mapped to absolute addresses in memory so that unambiguous storage locations can be accessed. A number of mechanisms are used to support this need, for example base and displacement registers, and more. Upon program termination, the memory space is freed up for reuse (NOT CLEARED!). In the chapter on Memory Management, we will discuss several memory management schemes!
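
A toy model of the base-and-displacement idea (entirely illustrative; no real MMU works from C structs like this): every logical address is checked against a limit and, if legal, relocated by adding the base.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Toy relocation model: a process's logical addresses 0..limit-1 are
 * mapped to physical addresses base..base+limit-1. */
struct relocation {
    uint32_t base;
    uint32_t limit;
};

static bool translate(struct relocation r, uint32_t logical, uint32_t *physical)
{
    if (logical >= r.limit)
        return false;              /* hardware would trap: addressing error */
    *physical = r.base + logical;  /* displacement added to the base        */
    return true;
}

int main(void)
{
    struct relocation r = { .base = 0x300000, .limit = 0x20000 };
    uint32_t phys;

    if (translate(r, 0x1F00, &phys))
        printf("logical 0x1F00 -> physical 0x%X\n", phys);
    if (!translate(r, 0x30000, &phys))
        printf("logical 0x30000 -> trap (beyond limit)\n");
    return 0;
}
```
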
  • Slide 27
  • 1.27 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Storage Management. The OS provides a uniform, logical view of information storage: it abstracts physical properties into a logical storage unit, the file. Each medium (disk, tape, etc.) is controlled by a device (i.e., disk drive, tape drive), and these vary in access speed, capacity, data-transfer rate, and access method (sequential or random). We oftentimes speak of disk directories (hierarchical arrangements of files or other directories); File Allocation Tables (FATs), holding names, locations, sizes, and permissions; and Volume Tables of Contents (VTOCs), holding names, locations, logical record sizes, physical blocks of storage, permissions, security, etc. All are used to manage the disk resources.
  • Slide 28
  • 1.28 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. More on Storage Management: File-System Management. A file is a collection of related information defined by its creator, usually program files (source and object) and data files. A data file may be numeric, alphabetic, alphanumeric, or binary, and may be free-form (text) or formatted with fixed fields. Files are usually organized into directories, which may be hierarchical or linked (coarse and fine directories; indexed, sequential, and data sets), and on most systems directories carry access control to determine permissions. OS activities include: creating and deleting files and directories; providing primitives to manipulate files and directories / subdirectories; supplying utility programs; mapping files onto secondary storage (logical to physical names and locations); and backing files up onto stable (non-volatile) storage media.
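
The "creating and deleting files and directories" activities above map onto ordinary system-call primitives. A small POSIX sketch follows; the path names are invented for the example.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    /* Create a directory and a file inside it (paths invented for the demo). */
    if (mkdir("demo_dir", 0755) < 0) { perror("mkdir"); return 1; }

    int fd = open("demo_dir/notes.txt", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello\n", 6);
    close(fd);

    /* Delete the file, then the (now empty) directory. */
    unlink("demo_dir/notes.txt");
    rmdir("demo_dir");
    return 0;
}
```
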
  • Slide 29
  • 1.29 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Mass-Storage Management. Mass storage holds data that does not fit in main memory or data that must be kept for a long period of time. It is non-volatile (permanent) and is called secondary storage. OS programs (compilers, assemblers, database engines, etc.) as well as user programs are typically stored on disk. Proper storage management is essential to the smooth operation of a computing system; much of the speed of the overall computing system often hinges on quick, high-capacity disk subsystems and the related search / storage algorithms. Tertiary storage includes optical storage and magnetic tape: huge, cheap (slow) storage, great for archival files (EOM, EOY, other history files, and storage of large amounts of data). More in Chapter 12.
  • Slide 30
  • 1.30 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. I/O Subsystem. One purpose of the OS is to hide the peculiarities of hardware devices from the user. We don't want to have to worry about the physical layout of disks (bytes per block, blocks per track, etc.), the mapping of logical names to physical locations on disks, Volume Tables of Contents (VTOCs), and so very much more. The I/O subsystem is responsible for memory management of I/O, including buffering (storing data temporarily while it is being transferred), caching (storing parts of data in faster storage for performance), spooling (the overlapping of the output of one job with the input of other jobs), and much more. Device drivers for specific hardware devices know the particulars of the specific devices they access and manage.
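
Buffering is visible even from user space. With C stdio, setvbuf lets many tiny writes be coalesced into a few large transfers before the kernel is asked to do any I/O, a user-level analogue of the kernel-side buffering the slide describes. The file name and buffer size here are arbitrary.

```c
#include <stdio.h>

int main(void)
{
    static char buf[1 << 16];                 /* 64 KiB user-level buffer */

    FILE *f = fopen("out.txt", "w");
    if (!f) { perror("fopen"); return 1; }

    /* Fully buffer: thousands of tiny fprintf calls become a few big writes. */
    setvbuf(f, buf, _IOFBF, sizeof buf);

    for (int i = 0; i < 10000; i++)
        fprintf(f, "line %d\n", i);

    fclose(f);                                /* flushes the buffer */
    return 0;
}
```
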
  • Slide 31
  • 1.31 Silberschatz, Galvin and Gagne ©2005 Operating System Concepts. Remainder of Slides: on your own. Do read the corresponding text and slides. Do answer the questions in the text on these topics.
  • Slide 32
  • End of Chapter 1