ICT Folio...


Description

This folio contains basic information about ICT. The content is drawn from Wikipedia and from other resources that have approved sharing this document on the Internet.

Transcript of ICT Folio...


Computer

The NASA Columbia Supercomputer.

A computer is a machine that manipulates data according to a list of instructions.

The first devices that resemble modern computers date to the mid-20th century (around 1940 - 1945), although the computer concept and various machines similar to computers existed earlier. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers.[1] Modern computers are based on tiny integrated circuits and are millions to billions of times more capable while occupying a fraction of the space.[2] Today, simple computers may be made small enough to fit into a wristwatch and be powered from a watch battery. Personal computers, in various forms, are icons of the Information Age and are what most people think of as "a computer"; however, the most common form of computer in use today is the embedded computer. Embedded computers are small, simple devices that are used to control other devices — for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and children's toys.

The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks given enough time and storage capacity.

History of computing

The Jacquard loom was one of the first programmable devices.

It is difficult to identify any one device as the earliest computer, partly because the term "computer" has been subject to varying interpretations over time. Originally, the term "computer" referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device.

The history of the modern computer begins with two separate technologies - that of automated calculation and that of programmability.


Examples of early mechanical calculating devices included the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150-100 BC). The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers. However, none of those devices fit the modern definition of a computer because they could not be programmed.

Hero of Alexandria (c. 10 – 70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions - and when.[3] This is the essence of programmability. In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer that he called "The Analytical Engine".[4] Due to limited finances, and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.

Large-scale automated data processing of punched cards was performed for the U.S. Census in 1890 by tabulating machines designed by Herman Hollerith and manufactured by the Computing Tabulating Recording Corporation, which later became IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

Defining characteristics of some early digital computers of the 1940s (See History of computing hardware)

Name | First operational | Numeral system | Computing mechanism | Programming | Turing complete
Zuse Z3 (Germany) | May 1941 | Binary | Electro-mechanical | Program-controlled by punched film stock | Yes (1998)
Atanasoff–Berry Computer (USA) | Summer 1941 | Binary | Electronic | Not programmable (single purpose) | No
Colossus (UK) | December 1943 | Binary | Electronic | Program-controlled by patch cables and switches | No
Harvard Mark I – IBM ASCC (USA) | 1944 | Decimal | Electro-mechanical | Program-controlled by 24-channel punched paper tape (but no conditional branch) | Yes (1998)
ENIAC (USA) | November 1945 | Decimal | Electronic | Program-controlled by patch cables and switches | Yes
Manchester Small-Scale Experimental Machine (UK) | June 1948 | Binary | Electronic | Stored-program in Williams cathode ray tube memory | Yes
Modified ENIAC (USA) | September 1948 | Decimal | Electronic | Program-controlled by patch cables and switches plus a primitive read-only stored programming mechanism using the Function Tables as program ROM | Yes
EDSAC (UK) | May 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes
Manchester Mark I (UK) | October 1949 | Binary | Electronic | Williams cathode ray tube memory and magnetic drum memory | Yes
CSIRAC (Australia) | November 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes

A succession of steadily more powerful and flexible computing devices were constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult (Shannon 1940). Notable achievements include:

EDSAC was one of the first computers to implement the stored program (von Neumann) architecture.

Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, which in retrospect makes it the world's first operational computer.

The non-programmable Atanasoff–Berry Computer (1941) which used vacuum tube based computation, binary numbers, and regenerative capacitor memory.

The secret British Colossus computers (1943),[5] which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. It was used for breaking German wartime codes.

The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.

The U.S. Army's Ballistics Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the stored program architecture or von Neumann architecture. This design was first formally described by John von Neumann in the paper "First Draft of a Report on the EDVAC", published in 1945. A number of projects to develop computers based on the stored program architecture commenced around this time, the first of these being completed in Great Britain. The first to be demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM) or "Baby". However, the EDSAC, completed a year after SSEM, was perhaps the first practical implementation of the stored program design. Shortly thereafter, the machine originally described by von Neumann's paper—EDVAC—was completed but did not see full-time use for an additional two years.

Nearly all modern computers implement some form of the stored program architecture, making it the single trait by which the word "computer" is now defined. By this standard, many earlier devices would no longer be called computers by today's definition, but are usually referred to as such in their historical context. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture. The design made the universal computer a practical reality.

Microprocessors are miniaturized devices that often implement stored program CPUs.

Vacuum tube-based computers were in use throughout the 1950s. Vacuum tubes were largely replaced in the 1960s by transistor-based computers. When compared with tubes, transistors are smaller, faster, cheaper, use less power, and are more reliable. In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, caused another generation of decreased size and cost, and another generation of increased speed and reliability. By the 1980s, computers became sufficiently small and cheap to replace simple mechanical controls in domestic appliances such as washing machines. The 1980s also witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.

Stored program architecture

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that a list of instructions (the program) can be given to the computer and it will store them and carry them out at some time in the future.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time—with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. For example:

        mov #0,sum     ; set sum to 0
        mov #1,num     ; set num to 1
loop:   add num,sum    ; add num to sum
        add #1,num     ; add 1 to num
        cmp num,#1000  ; compare num to 1000
        ble loop       ; if num <= 1000, go back to 'loop'
        halt           ; end of program. stop running


Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second.[6]

However, computers cannot "think" for themselves, in the sense that they only solve problems in exactly the way they are programmed to. An intelligent human faced with the above addition task might soon realize that instead of actually adding up all the numbers one can simply use the equation

1 + 2 + 3 + ... + 1,000 = (1,000 × 1,001) / 2

and arrive at the correct answer (500,500) with little work.[7] In other words, a computer programmed to add up the numbers one by one as in the example above would do exactly that without regard to efficiency or alternative solutions.
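As an illustration, the same comparison can be sketched in a high-level language such as Python (the function names below are chosen purely for this example); the first function mirrors the assembly program above, the second uses the closed-form formula.

# Two ways to add the integers from 1 to 1,000.

def sum_by_loop(n):
    # Add the numbers one at a time, as the assembly program above does.
    total = 0
    num = 1
    while num <= n:
        total = total + num
        num = num + 1
    return total

def sum_by_formula(n):
    # Use the closed-form expression n * (n + 1) / 2 instead.
    return n * (n + 1) // 2

print(sum_by_loop(1000), sum_by_formula(1000))   # both print 500500

Either way the computer arrives at 500,500; the difference lies only in how much work the program asks it to do.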

Programs

A 1970s punched card containing one line from a FORTRAN program. The card reads: "Z(1) = Y + W(1)" and is labelled "PROJ039" for identification purposes.

In practical terms, a computer program might include anywhere from a dozen instructions to many millions of instructions for something like a word processor or a web browser. A typical modern computer can execute billions of instructions every second and nearly never make a mistake over years of operation.

Large computer programs may take teams of computer programmers years to write, and the probability of the entire program having been written completely in the manner intended is low. Errors in computer programs are called bugs. Sometimes bugs are benign and do not affect the usefulness of the program; in other cases they might cause the program to fail completely (crash); in yet other cases there may be subtle problems. Sometimes otherwise benign bugs may be used for malicious intent, creating a security exploit. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[8]

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just as if they were numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.

While it is possible to write computer programs as long lists of numbers (machine language) and this technique was used with many early computers,[9] it is extremely tedious to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember—a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.[10]
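As a rough sketch of what an assembler does, the following Python fragment translates mnemonics into numeric opcodes for a hypothetical three-instruction machine; the opcode numbers are invented for this example, and real assemblers for architectures such as ARM or x86 are far more elaborate.

# Made-up opcode numbers for a hypothetical three-instruction machine.
OPCODES = {"ADD": 1, "SUB": 2, "JUMP": 3}

def assemble(source_lines):
    # Translate lines such as "ADD 7" into (opcode, operand) machine words.
    program = []
    for line in source_lines:
        mnemonic, operand = line.split()
        program.append((OPCODES[mnemonic], int(operand)))
    return program

print(assemble(["ADD 7", "SUB 2", "JUMP 0"]))   # [(1, 7), (2, 2), (3, 0)]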

Though considerably easier than in machine language, writing long programs in assembly language is often difficult and error prone. Therefore, most complicated programs are written in more abstract high-level programming languages that are able to express the needs of the computer programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[11] Since high level languages are more abstract than assembly language, it is possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

The task of developing large software systems is an immense intellectual effort. Producing software with an acceptably high reliability on a predictable schedule and budget has proved historically to be a great challenge; the academic and professional discipline of software engineering concentrates specifically on this problem.


Example

A traffic light showing red.

Suppose a computer is being employed to drive a traffic light. A simple stored program might say:

1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. Jump to instruction number (2)

With this set of instructions, the computer would cycle the light continually through red, green, yellow and back to red again until told to stop running the program.

However, suppose there is a simple on/off switch connected to the computer that is intended to be used to make the light flash red while some maintenance operation is being performed. The program might then instruct the computer to:

1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. If the maintenance switch is NOT turned on then jump to instruction number 2
12. Turn on the red light
13. Wait for one second
14. Turn off the red light
15. Wait for one second
16. Jump to instruction number 11

In this manner, the computer is either running the instructions from number (2) to (11) over and over, or it is running the instructions from (11) down to (16) over and over, depending on the position of the switch.[12]
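For readers who prefer code to numbered steps, the second instruction list can be sketched in Python roughly as follows. The lights and the maintenance switch are stand-ins (a dictionary and a Boolean flag) rather than real hardware, and the function names are chosen only for this illustration.

import time

lights = {"red": False, "green": False, "yellow": False}
maintenance_switch_on = False   # in a real controller this would be read from the physical switch

def set_light(colour, on):
    lights[colour] = on

def run_traffic_light():
    for colour in lights:             # 1. turn off all of the lights
        set_light(colour, False)
    while True:
        set_light("red", True)        # 2-4. red phase
        time.sleep(60)
        set_light("red", False)
        set_light("green", True)      # 5-7. green phase
        time.sleep(60)
        set_light("green", False)
        set_light("yellow", True)     # 8-10. yellow phase
        time.sleep(2)
        set_light("yellow", False)
        while maintenance_switch_on:  # 11-16. flash red until the switch is released
            set_light("red", True)
            time.sleep(1)
            set_light("red", False)
            time.sleep(1)

Like the numbered list it mirrors, this sketch only checks the maintenance switch once per cycle, which is exactly the bug discussed in note 12.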

How computers work

A general purpose computer has four main sections: the arithmetic and logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by busses, often made of groups of wires.

The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.

Control unit

The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer.[13] Control systems in advanced computers may change the order of some instructions so as to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[14]

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.

The control system's function is as follows—note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:


1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).

It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program - and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer that runs a microcode program that causes all of these events to happen.
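The fetch-decode-execute cycle described above can be made concrete with a toy interpreter written in Python. The instruction set, opcode numbers and memory layout below are invented for illustration and do not correspond to any real CPU; note that the program is simply a run of numbers stored in the same memory as the data it manipulates.

memory = [0] * 32
# Program: LOAD the value 5 into the accumulator, ADD 3, STORE the result at cell 20, HALT.
memory[0:8] = [1, 5,   2, 3,   3, 20,   0, 0]   # instructions are just numbers

def run(memory):
    pc = 0                  # program counter
    acc = 0                 # a single accumulator register
    while True:
        opcode, operand = memory[pc], memory[pc + 1]   # fetch the next instruction
        pc += 2                                        # increment the program counter
        if opcode == 0:                                # HALT
            return acc
        elif opcode == 1:                              # LOAD an immediate value
            acc = operand
        elif opcode == 2:                              # ADD an immediate value (the "ALU" step)
            acc = acc + operand
        elif opcode == 3:                              # STORE the accumulator to a memory cell
            memory[operand] = acc

print(run(memory))   # 8
print(memory[20])    # 8, written back into memory by the STORE instruction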

Arithmetic/logic unit (ALU)

The ALU is capable of performing two classes of operations: arithmetic and logic.

The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting, or might include multiplying or dividing, trigonometric functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?").

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and processing boolean logic.
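The following Python function is a loose sketch of the two classes of ALU operation just described; a hardware ALU works on fixed-width binary words rather than Python integers, and the operation names here are arbitrary.

def alu(op, a, b):
    # Arithmetic operations
    if op == "ADD": return a + b
    if op == "SUB": return a - b
    # Logic (bitwise Boolean) operations
    if op == "AND": return a & b
    if op == "OR":  return a | b
    if op == "XOR": return a ^ b
    # Comparison returning a Boolean truth value
    if op == "GT":  return a > b
    raise ValueError("unsupported operation")

print(alu("ADD", 2, 3))    # 5
print(alu("GT", 64, 65))   # False, i.e. "is 64 greater than 65?" -> no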


Superscalar computers contain multiple ALUs so that they can process several instructions at the same time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.

Memory

Magnetic core memory was popular main memory for computers through the 1960s until it was completely replaced by semiconductor memory.

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is up to the software to give significance to what the memory sees as nothing but a series of numbers.

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers; either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
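The byte ranges and two's complement representation mentioned above can be illustrated with Python's built-in integer/byte conversions:

print((255).to_bytes(1, "big"))                      # b'\xff'  (unsigned range 0..255)
print((-128).to_bytes(1, "big", signed=True))        # b'\x80'  (signed range -128..+127)
print(int.from_bytes(b"\xff", "big", signed=True))   # -1, read back as two's complement
print((1000).to_bytes(2, "big"))                     # b'\x03\xe8', two bytes for a larger number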

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. Since data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM so its use is restricted to applications where high speeds are not required.[15]

In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

Input/output (I/O)

Hard disks are common I/O devices used with computers.

I/O is the means by which a computer receives information from the outside world and sends results back. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

Multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.

Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly - in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run at the same time without unacceptable speed loss.
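A much-simplified picture of time-sharing can be given in Python by treating each "program" as a generator and letting a small scheduler hand out time slices in turn; real operating systems switch preemptively using hardware timer interrupts rather than cooperative yields.

def count_program(name, limit):
    # A stand-in "program" that does a little work and then gives up the processor.
    for i in range(limit):
        print(name, "step", i)
        yield

programs = [count_program("A", 3), count_program("B", 3)]
while programs:
    current = programs.pop(0)
    try:
        next(current)              # run one "time slice" of this program
        programs.append(current)   # put it back at the end of the queue
    except StopIteration:
        pass                       # this program has finished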

Multiprocessing

Cray designed many supercomputers that used multiprocessing heavily.

Some computers may divide their work among two or more separate CPUs, creating a multiprocessing configuration. Traditionally, this technique was utilized only in large and powerful computers such as supercomputers, mainframe computers and servers. However, multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers have become widely available and are beginning to see increased usage in lower-end markets as a result.

Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[16] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.

Networking and the Internet

Visualization of a portion of the routes on the Internet.

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.

In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET. The technologies that made the ARPANET possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

Further topics

Hardware


The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.

History of computing hardware

First Generation (Mechanical/Electromechanical)
Calculators: Antikythera mechanism, Difference Engine, Norden bombsight
Programmable Devices: Jacquard loom, Analytical Engine, Harvard Mark I, Z3

Second Generation (Vacuum Tubes)
Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
Programmable Devices: Colossus, ENIAC, Manchester Small-Scale Experimental Machine, EDSAC, Manchester Mark I, CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22

Third Generation (Discrete transistors and SSI, MSI, LSI integrated circuits)
Mainframes: IBM 7090, IBM 7080, System/360, BUNCH
Minicomputers: PDP-8, PDP-11, System/32, System/36

Fourth Generation (VLSI integrated circuits)
Minicomputers: VAX, IBM System i
4-bit microcomputers: Intel 4004, Intel 4040
8-bit microcomputers: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502, Zilog Z80
16-bit microcomputers: 8088, Zilog Z8000, WDC 65816/65802
32-bit microcomputers: 80386, Pentium, 68000, ARM architecture
64-bit microcomputers[17]: x86-64, PowerPC, MIPS, SPARC
Embedded computers: 8048, 8051
Personal computers: Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet computer, Wearable computer

Theoretical/experimental: Quantum computer, Chemical computer, DNA computing, Optical computer, Spintronics-based computer

Other Hardware Topics

Peripheral devices (Input/output)
Input: Mouse, Keyboard, Joystick, Image scanner
Output: Monitor, Printer
Both: Floppy disk drive, Hard disk, Optical disc drive, Teleprinter

Computer buses
Short range: RS-232, SCSI, PCI, USB
Long range (Computer networking): Ethernet, ATM, FDDI

Software

Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. When software is stored in hardware that cannot easily be modified (such as BIOS ROM in an IBM PC compatible), it is sometimes called "firmware" to indicate that it falls into an uncertain area somewhere between hardware and software.

Computer software

Operating system
Unix/BSD: UNIX System V, AIX, HP-UX, Solaris (SunOS), IRIX, List of BSD operating systems
GNU/Linux: List of Linux distributions, Comparison of Linux distributions
Microsoft Windows: Windows 95, Windows 98, Windows NT, Windows 2000, Windows XP, Windows Vista, Windows CE
DOS: 86-DOS (QDOS), PC-DOS, MS-DOS, FreeDOS
Mac OS: Mac OS classic, Mac OS X
Embedded and real-time: List of embedded operating systems
Experimental: Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs

Library
Multimedia: DirectX, OpenGL, OpenAL
Programming library: C standard library, Standard Template Library

Data
Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP
File format: HTML, XML, JPEG, MPEG, PNG

User interface
Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM
Text user interface: Command line interface, shells

Application
Office suite: Word processing, Desktop publishing, Presentation program, Database management system, Scheduling & Time management, Spreadsheet, Accounting software
Internet Access: Browser, E-mail client, Web server, Mail transfer agent, Instant messaging
Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant management, Robotic manufacturing, Supply chain management
Graphics: Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor, 3D computer graphics, Video editing, Image processing
Audio: Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
Software Engineering: Compiler, Assembler, Interpreter, Debugger, Text Editor, Integrated development environment, Performance analysis, Revision control, Software configuration management
Educational: Edutainment, Educational game, Serious game, Flight simulator
Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer, Interactive fiction
Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/Package management systems, File manager

Programming languages

Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine language by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of different programming languages—some intended to be general purpose, others useful only for highly specialized applications.

Programming Languages
Lists of programming languages: Timeline of programming languages, Categorical list of programming languages, Generational list of programming languages, Alphabetical list of programming languages, Non-English-based programming languages
Commonly used Assembly languages: ARM, MIPS, x86
Commonly used High level languages: BASIC, C, C++, C#, COBOL, Fortran, Java, Lisp, Pascal
Commonly used Scripting languages: Bourne script, JavaScript, Python, Ruby, PHP, Perl

Professions and organizations

As the use of computers has spread throughout society, there are an increasing number of careers involving computers. Following the theme of hardware, software and firmware, the brains of people who work in the industry are sometimes known irreverently as wetware or "meatware".

Computer-related professions

Hardware-related

Electrical engineering, Electronics engineering, Computer engineering, Telecommunications engineering, Optical engineering, Nanoscale engineering

Software-related

Computer science, Human-computer interaction, Information technology, Software engineering, Scientific computing, Web design, Desktop publishing

The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.

Organizations
Standards groups: ANSI, IEC, IEEE, IETF, ISO, W3C
Professional societies: ACM, ACM Special Interest Groups, IET, IFIP
Free/Open source software groups: Free Software Foundation, Mozilla Foundation, Apache Software Foundation

Notes

1. In 1946, ENIAC consumed an estimated 174 kW. By comparison, a typical personal computer may use around 400 W; over four hundred times less. (Kempf 1961)

2. Early computers such as Colossus and ENIAC were able to process between 5 and 100 operations per second. A modern "commodity" microprocessor (as of 2007) can process billions of operations per second, and many of these operations are more complicated and useful than early computer operations.

3. Heron of Alexandria. Retrieved on 2008-01-15.

4. The Analytical Engine should not be confused with Babbage's difference engine, which was a non-programmable mechanical calculator.


5. B. Jack Copeland, ed., Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford University Press, 2006

6. This program was written similarly to those for the PDP-11 minicomputer and shows some typical things a computer can do. All the text after the semicolons consists of comments for the benefit of human readers. These have no significance to the computer and are ignored. (Digital Equipment Corporation 1972)

7. Attempts are often made to create programs that can overcome this fundamental limitation of computers. Software that mimics learning and adaptation is part of artificial intelligence.

8. It is not universally true that bugs are solely due to programmer oversight. Computer hardware may fail or may itself have a fundamental problem that produces unexpected results in certain situations. For instance, the Pentium FDIV bug caused some Intel microprocessors in the early 1990s to produce inaccurate results for certain floating point division operations. This was caused by a flaw in the microprocessor design and resulted in a partial recall of the affected devices.

9. Even some later computers were commonly programmed directly in machine code. Some minicomputers like the DEC PDP-8 could be programmed directly from a panel of switches. However, this method was usually used only as part of the booting process. Most modern computers boot entirely automatically by reading a boot program from some non-volatile memory.

10. However, there is sometimes some form of machine language compatibility between different computers. An x86-64 compatible microprocessor like the AMD Athlon 64 is able to run most of the same programs that an Intel Core 2 microprocessor can, as well as programs designed for earlier microprocessors like the Intel Pentiums and Intel 80486. This contrasts with very early commercial computers, which were often one-of-a-kind and totally incompatible with other computers.

11. High level languages are also often interpreted rather than compiled. Interpreted languages are translated into machine code on the fly by another program called an interpreter.

12. Although this is a simple program, it contains a software bug. If the traffic signal is showing red when someone switches the "flash red" switch, it will cycle through green once more before starting to flash red as instructed. This bug is quite easy to fix by changing the program to repeatedly test the switch throughout each "wait" period—but writing large programs that have no bugs is exceedingly difficult.

13. The control unit's role in interpreting instructions has varied somewhat in the past. While the control unit is solely responsible for instruction interpretation in most modern computers, this is not always the case. Many computers include some instructions that may only be partially interpreted by the control system and partially interpreted by another device. This is especially the case with specialized computing hardware that may be partially self-contained. For example, EDVAC, the first modern stored program computer to be designed, used a central control unit that only interpreted four instructions. All of the arithmetic-related instructions were passed on to its arithmetic unit and further decoded there.


14. Instructions often occupy more than one memory address, so the program counter usually increases by the number of memory locations required to store one instruction.

15. Flash memory also may only be rewritten a limited number of times before wearing out, making it less useful for heavy random access usage. (Verma 1988)

16. However, it is also very common to construct supercomputers out of many pieces of cheap commodity hardware; usually individual computers connected by networks. These so-called computer clusters can often provide supercomputer performance at a much lower cost than customized designs. While custom architectures are still used for most of the most powerful supercomputers, there has been a proliferation of cluster computers in recent years. (TOP500 2006)

17. Most major 64-bit instruction set architectures are extensions of earlier designs. All of the architectures listed in this table existed in 32-bit forms before their 64-bit incarnations were introduced.

History of computing hardware


The history of computing hardware covers the history of computer hardware, its architecture, and its impact on software. Originally calculations were computed by humans, who were called computers,[1] as a job title. See the history of computing article for methods intended for pen and paper, with or without the aid of tables. For a detailed timeline of events, see the computing timeline article.

The von Neumann architecture unifies our current computing hardware implementations.[2] The major elements of computing hardware are input,[3] output,[4] control[5] and datapath (which together make a processor),[6] and memory.[7] They have undergone successive refinement or improvement over the history of computing hardware. Beginning with mechanical mechanisms, the hardware then started using analogs for a computation, including water and even air as the analog quantities: analog computers have used lengths, pressures, voltages, and currents to represent the results of calculations.[8] Eventually the voltages or currents were standardized and digital computers were developed over a period of evolution dating back centuries. Digital computing elements have ranged from mechanical gears, to electromechanical relays, to vacuum tubes, to transistors, and to integrated circuits, all of which are currently implementing the von Neumann architecture.[9]

Since digital computers rely on digital storage, and tend to be limited by the size and speed of memory, the history of computer data storage is tied to the development of computers. The degree of improvement in computing hardware has triggered world-wide use of the technology. Even as performance has improved, the price has declined,[10] until computers have become commodities, accessible to ever-increasing sectors[11] of the world's population. Computing hardware thus became a platform for uses other than computation, such as automation, communication, control, entertainment, and education.


Each field in turn has imposed its own requirements on the hardware, which has evolved in response to those requirements.[12]

Earliest calculators

Suanpan (the number represented on this abacus is 6,302,715,408)

Devices have been used to aid computation for thousands of years, using one-to-one correspondence with our fingers.[13] The earliest counting device was probably a form of tally stick. Later record keeping aids throughout the Fertile Crescent included clay shapes, which represented counts of items, probably livestock or grains, sealed in containers.[14] The abacus was used for arithmetic tasks. The Roman abacus was used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.[15]

A number of analog computers were constructed in ancient and medieval times to perform astronomical calculations. These include the Antikythera mechanism and the astrolabe from ancient Greece (c. 150-100 BC), and are generally regarded as the first mechanical computers.[16] Other early versions of mechanical devices used to perform some type of calculations include the planisphere; some of the inventions of Abū Rayhān al-Bīrūnī (c. AD 1000); the Equatorium of Abū Ishāq Ibrāhīm al-Zarqālī (c. AD 1015); the astronomical analog computers of other medieval Muslim astronomers and engineers, and the Astronomical Clock Tower of Su Song during the Song Dynasty.

Scottish mathematician and physicist John Napier noted multiplication and division of numbers could be performed by addition and subtraction, respectively, of logarithms of those numbers. While producing the first logarithmic tables Napier needed to perform many multiplications, and it was at this point that he designed Napier's bones, an abacus-like device used for multiplication and division.[17] Since real numbers can be represented as distances or intervals on a line, the slide rule was invented in the 1620s to allow multiplication and division operations to be carried out significantly faster than was previously possible.[18] Slide rules were used by generations of engineers and other mathematically inclined professional workers, until the invention of the pocket calculator. The engineers in the Apollo program to send a man to the moon made many of their calculations on slide rules, which were accurate to three or four significant figures.[19]
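Napier's observation can be restated in modern terms: because log(a × b) = log(a) + log(b), a multiplication can be carried out by adding logarithms and then taking the antilogarithm, which is what tables of logarithms and, later, slide rules mechanized. A small numerical check in Python:

import math

a, b = 37.0, 68.0
log_sum = math.log10(a) + math.log10(b)
print(10 ** log_sum)   # approximately 2516.0
print(a * b)           # 2516.0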


A mechanical calculator from 1914. Note the lever used to rotate the gears.

German polymath Wilhelm Schickard built the first digital mechanical calculator in 1623, and thus became the father of the computing era.[20] Since his calculator used techniques such as cogs and gears first developed for clocks, it was also called a 'calculating clock'. It was put to practical use by his friend Johannes Kepler, who revolutionized astronomy. An original calculator by Pascal (1640) is preserved in the Zwinger Museum. Machines by Blaise Pascal (the Pascaline, 1642) and Gottfried Wilhelm von Leibniz (1671) followed. Leibniz once said "It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used."[21]

Around 1820, Charles Xavier Thomas created the first successful, mass-produced mechanical calculator, the Thomas Arithmometer, that could add, subtract, multiply, and divide. It was mainly based on Leibniz' work. Mechanical calculators, like the base-ten addiator, the comptometer, the Monroe, the Curta and the Addo-X remained in use until the 1970s. Leibniz also described the binary numeral system,[22] a central ingredient of all modern computers. However, up to the 1940s, many subsequent designs (including Charles Babbage's machines of the 1800s and even ENIAC of 1945) were based on the decimal system;[23] ENIAC's ring counters emulated the operation of the digit wheels of a mechanical adding machine.

1801: punched card technology

As early as 1725 Basile Bouchon used a perforated paper loop in a loom to establish the pattern to be reproduced on cloth, and in 1726 his co-worker Jean-Baptiste Falcon improved on his design by using perforated paper cards attached to one another for efficiency in adapting and changing the program. The Bouchon-Falcon loom was semi-automatic and required manual feed of the program. In 1801, Joseph-Marie Jacquard developed a loom in which the pattern being woven was controlled by punched cards. The series of cards could be changed without changing the mechanical design of the loom. This was a landmark point in programmability.


Punched card system of a music machine. Also referred to as Book music, a one-stop European medium for organs

In 1833, Charles Babbage moved on from developing his difference engine to developing a more complete design, the analytical engine, which would draw directly on Jacquard's punched cards for its programming.[24] In 1835, Babbage described his analytical engine. It was the plan of a general-purpose programmable computer, employing punch cards for input and a steam engine for power. One crucial invention was to use gears for the function served by the beads of an abacus. In a real sense, computers all contain automatic abacuses (technically called the arithmetic logic unit or floating-point unit). His initial idea was to use punch-cards to control a machine that could calculate and print logarithmic tables with huge precision (a specific purpose machine). Babbage's idea soon developed into a general-purpose programmable computer, his analytical engine. While his design was sound and the plans were probably correct, or at least debuggable, the project was slowed by various problems. Babbage was a difficult man to work with and argued with anyone who didn't respect his ideas. All the parts for his machine had to be made by hand. Small errors in each item can sometimes sum up to large discrepancies in a machine with thousands of parts, which required these parts to be much better than the usual tolerances needed at the time. The project dissolved in disputes with the artisan who built parts and was ended with the depletion of government funding. Ada Lovelace, Lord Byron's daughter, translated and added notes to the "Sketch of the Analytical Engine" by Federico Luigi, Conte Menabrea.[25]

A reconstruction of the Difference Engine II, an earlier, more limited design, has been operational since 1991 at the London Science Museum. With a few trivial changes, it works as Babbage designed it and shows that Babbage was right in theory. The museum used computer-operated machine tools to construct the necessary parts, following tolerances which a machinist of the period would have been able to achieve. The failure of Babbage to complete the engine can be chiefly attributed to difficulties not only related to politics and financing, but also to his desire to develop an increasingly sophisticated computer.[26] Following in the footsteps of Babbage, although unaware of his earlier work, was Percy Ludgate, an accountant from Dublin, Ireland. He independently designed a programmable mechanical computer, which he described in a work that was published in 1909.

IBM 407 tabulating machine (1961).

Punched card with the extended alphabet.

In 1890, the United States Census Bureau used punched cards, sorting machines, and tabulating machines designed by Herman Hollerith to handle the flood of data from the decennial census mandated by the Constitution.[27] Hollerith's company eventually became the core of IBM. IBM developed punch card technology into a powerful tool for business data-processing and produced an extensive line of specialized unit record equipment. By 1950, the IBM card had become ubiquitous in industry and government. The warning printed on most cards intended for circulation as documents (checks, for example), "Do not fold, spindle or mutilate," became a motto for the post-World War II era.[28]

Leslie Comrie's articles on punched card methods and W. J. Eckert's 1940 publication Punched Card Methods in Scientific Computation described techniques which were sufficiently advanced to solve differential equations[29] or perform multiplication and division using floating point representations, all on punched cards and unit record machines. In the image of the tabulator above, note the patch panel, visible on the right side of the tabulator, with a row of toggle switches above it. The Thomas J. Watson Astronomical Computing Bureau at Columbia University performed astronomical calculations representing the state of the art in computing.[30]


Computer programming in the punch card era revolved around the computer center. Computer users, for example science and engineering students at universities, would submit their programming assignments to their local computer center in the form of a stack of cards, one card per program line. They then had to wait for the program to be queued for processing, compiled, and executed. In due course a printout of any results, marked with the submitter's identification, would be placed in an output tray outside the computer center. In many cases these results would comprise solely a printout of error messages regarding program syntax and the like, necessitating another edit-compile-run cycle.[31] Punched cards are still used and manufactured to this day, and their distinctive dimensions[32] (and 80-column capacity) can still be recognized in forms, records, and programs around the world.

1930s–1960s: desktop calculators

The Curta calculator can also do multiplication and division

By the 1900s, earlier mechanical calculators, cash registers, accounting machines, and so on were redesigned to use electric motors, with gear position as the representation for the state of a variable. By the 1920s Lewis Fry Richardson's interest in weather prediction led him to propose numerical analysis to model the weather; to this day, the most powerful computers on Earth are needed to adequately model its weather using the Navier-Stokes equations.[33]

Companies like Friden, Marchant Calculator and Monroe made desktop mechanical calculators from the 1930s that could add, subtract, multiply and divide. The word "computer" was a job title assigned to people who used these calculators to perform mathematical calculations. During the Manhattan Project, future Nobel laureate Richard Feynman supervised a roomful of human computers, many of them women mathematicians, who understood the differential equations which were being solved for the war effort. Even the renowned Stanisław Ulam was pressed into service after the war to translate the mathematics into computable approximations for the hydrogen bomb.

In 1948, the Curta was introduced. This was a small, portable, mechanical calculator that was about the size of a pepper grinder. Over time, during the 1950s and 1960s, a variety of brands of mechanical calculator appeared on the market. The first all-electronic desktop calculator was the British ANITA Mk.VII, which used a Nixie tube display and 177 subminiature thyratron tubes. In June 1963, Friden introduced the four-function EC-130. It had an all-transistor design, 13-digit capacity on a 5-inch (130 mm) CRT, and introduced reverse Polish notation (RPN) to the calculator market at a price of $2200. The model EC-132 added square root and reciprocal functions. In 1965, Wang Laboratories produced the LOCI-2, a 10-digit transistorized desktop calculator that used a Nixie tube display and could compute logarithms.

Advanced analog computers

Cambridge differential analyzer, 1938

Before World War II, mechanical and electrical analog computers were considered the "state of the art", and many thought they were the future of computing. Analog computers take advantage of the strong similarities between the mathematics of small-scale properties — the position and motion of wheels or the voltage and current of electronic components — and the mathematics of other physical phenomena, e.g. ballistic trajectories, inertia, resonance, energy transfer, momentum, etc.[34]

Modeling physical phenomena with electrical voltages and currents[35][36][37] as the analog quantities yields great advantages over using mechanical models:

1) Electrical components are smaller and cheaper; they're more easily constructed and exercised.
2) Though otherwise similar, electrical phenomena can be made to occur in conveniently short time frames.

Centrally, these analog systems work by creating electrical analogs of other systems, allowing users to predict the behavior of the systems of interest by observing the electrical analogs. The most useful of the analogies was the way the small-scale behavior could be represented with integral and differential equations, and could thus be used to solve those equations. An ingenious example of such a machine, using water as the analog quantity, was the water integrator built in 1928; an electrical example is the Mallock machine built in 1941. A planimeter is a device which does integrals, using distance as the analog quantity. Until the 1980s, HVAC systems used air both as the analog quantity and the controlling element. Unlike modern digital computers, analog computers are not very flexible, and need to be reconfigured (i.e., reprogrammed) manually to switch them from working on one problem to another. Analog computers had an advantage over early digital computers in that they could be used to solve complex problems using behavioral analogues while the earliest attempts at digital computers were quite limited.

A Smith Chart is a well-known nomogram.

Since computers were rare in this era, the solutions were often hard-coded into paper forms such as graphs and nomograms,[38] which could then produce analog solutions to these problems, such as the distribution of pressures and temperatures in a heating system. Some of the most widely deployed analog computers included devices for aiming weapons, such as the Norden bombsight[39] and fire-control systems[40] such as Arthur Pollen's Argo system for naval vessels. Some stayed in use for decades after WWII; the Mark I Fire Control Computer was deployed by the United States Navy on a variety of ships from destroyers to battleships. Other analog computers included the Heathkit EC-1 and the hydraulic MONIAC Computer, which modeled econometric flows.[41]

The art of analog computing reached its zenith with the differential analyzer,[42] invented in 1876 by James Thomson and built by H. W. Nieman and Vannevar Bush at MIT starting in 1927. Fewer than a dozen of these devices were ever built; the most powerful was constructed at the University of Pennsylvania's Moore School of Electrical Engineering, where the ENIAC was built. Digital electronic computers like the ENIAC spelled the end for most analog computing machines, but hybrid analog computers, controlled by digital electronics, remained in substantial use into the 1950s and 1960s, and later in some specialized applications. In a digital device, the limitation is the number of digits of precision it carries;[43] in an analog device, the limitation is accuracy.[44] As electronics progressed during the twentieth century, the problems of operating at low voltages while maintaining high signal-to-noise ratios[45] were steadily addressed; after all, a digital circuit is a specialized form of analog circuit intended to operate at standardized settings (and, continuing in the same vein, logic gates can be realized as forms of digital circuits). As digital computers became faster and gained larger memories (e.g., RAM or internal storage), they almost entirely displaced analog computers, and computer programming, or coding, arose as another human profession.


Early digital computers

Punched tape programs would be much longer than the short fragment shown.

The era of modern computing began with a flurry of development before and during World War II, as electronic circuit elements [46] replaced mechanical equivalents and digital calculations replaced analog calculations. Machines such as the Z3, the Atanasoff–Berry Computer, the Colossus computers, and the ENIAC were built by hand using circuits containing relays or valves (vacuum tubes), and often used punched cards or punched paper tape for input and as the main (non-volatile) storage medium.

In this era, a number of different machines were produced with steadily advancing capabilities. At the beginning of this period, nothing remotely resembling a modern computer existed, except in the long-lost plans of Charles Babbage and the mathematical musings of Alan Turing and others. At the end of the era, devices like the EDSAC had been built, and are universally agreed to be digital computers. Defining a single point in the series as the "first computer" misses many subtleties (see the table "Defining characteristics of some early digital computers of the 1940s" below).

Alan Turing's 1936 paper[47] proved enormously influential in computing and computer science in two ways. Its main purpose was to prove that there were problems (namely the halting problem) that could not be solved by any sequential process. In doing so, Turing provided a definition of a universal computer which executes a program stored on tape. This construct came to be called a Turing machine; it replaces Kurt Gödel's more cumbersome universal language based on arithmetic. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. This limited type of Turing completeness is sometimes viewed as a threshold capability separating general-purpose computers from their special-purpose predecessors.


Design of the von Neumann architecture (1947)

For a computing machine to be a practical general-purpose computer, there must be some convenient read-write mechanism, punched tape, for example. With a knowledge of Alan Turing's theoretical "universal computing machine", John von Neumann defined an architecture which uses the same memory both to store programs and data: virtually all contemporary computers use this architecture (or some variant). While it is theoretically possible to implement a full computer entirely mechanically (as Babbage's design showed), electronics made possible the speed and later the miniaturization that characterize modern computers.

There were three parallel streams of computer development in the World War II era; the first stream largely ignored, and the second stream deliberately kept secret. The first was the German work of Konrad Zuse. The second was the secret development of the Colossus computers in the UK. Neither of these had much influence on the various computing projects in the United States. The third stream of computer development, Eckert and Mauchly's ENIAC and EDVAC, was widely publicized.[48][49]

Program-controlled computers

A reproduction of Zuse's Z1 computer.

Working in isolation in Germany, Konrad Zuse started construction in 1936 of his first Z-series calculators featuring memory and (initially limited) programmability. Zuse's purely mechanical, but already binary Z1, finished in 1938, never worked reliably due to problems with the precision of parts.

Zuse's subsequent machine, the Z3,[50] was finished in 1941. It was based on telephone relays and did work satisfactorily. The Z3 thus became the first functional program-controlled, all-purpose, digital computer. In many ways it was quite similar to modern machines, pioneering numerous advances, such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. This is sometimes viewed as the main reason why Zuse succeeded where Babbage failed.

Programs were fed into the Z3 on punched film. Conditional jumps were missing, but since the 1990s it has been proved theoretically that the Z3 was still a universal computer (ignoring its physical storage size limitations). In two 1936 patent applications, Konrad Zuse also anticipated that machine instructions could be stored in the same storage used for data – the key insight of what became known as the von Neumann architecture, first implemented in the later British EDSAC design (1949). Zuse also claimed to have designed the first higher-level programming language, Plankalkül, in 1945 (published in 1948), although it was implemented for the first time in 2000 by a team around Raúl Rojas at the Free University of Berlin – five years after Zuse died.

Zuse suffered setbacks during World War II when some of his machines were destroyed in the course of Allied bombing campaigns. Apparently his work remained largely unknown to engineers in the UK and US until much later, although at least IBM was aware of it as it financed his post-war startup company in 1946 in return for an option on Zuse's patents.

Colossus

Colossus was used to break German ciphers during World War II.

During World War II, the British at Bletchley Park (40 miles north of London) achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was attacked with the help of electro-mechanical machines called bombes. The bombe, designed by Alan Turing and Gordon Welchman after the Polish cryptographic bomba of Marian Rejewski (1938), came into use in 1941.[51] Bombes ruled out possible Enigma settings by performing chains of logical deductions implemented electrically. Most possibilities led to a contradiction, and the few remaining could be tested by hand.

The Germans also developed a series of teleprinter encryption systems, quite different from Enigma. The Lorenz SZ 40/42 machine was used for high-level Army communications, termed "Tunny" by the British. The first intercepts of Lorenz messages began in 1941. As part of an attack on Tunny, Professor Max Newman and his colleagues helped specify the Colossus.[52] The Mk I Colossus was built between March and December 1943 by Tommy Flowers and his colleagues at the Post Office Research Station at Dollis Hill in London and then shipped to Bletchley Park.

Colossus was the first totally electronic computing device. The Colossus used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Details of their existence, design, and use were kept secret well into the 1970s. Winston Churchill personally issued an order for their destruction into pieces no larger than a man's hand. Due to this secrecy the Colossi were not included in many histories of computing. A reconstructed copy of one of the Colossus machines is now on display at Bletchley Park.

American developments

In 1937, Claude Shannon produced his master's thesis[53] at MIT, which implemented Boolean algebra using electronic relays and switches for the first time in history. Entitled A Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practical digital circuit design. George Stibitz completed a relay-based computer he dubbed the "Model K" at Bell Labs in November 1937. Bell Labs authorized a full research program in late 1938 with Stibitz at the helm. Their Complex Number Calculator,[54] completed January 8, 1940, was able to calculate complex numbers. In a demonstration to the American Mathematical Society conference at Dartmouth College on September 11, 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines by a teletype. It was the first computing machine ever used remotely, in this case over a phone line. Some participants in the conference who witnessed the demonstration were John von Neumann, John Mauchly, and Norbert Wiener, who wrote about it in their memoirs.

Atanasoff–Berry Computer replica on the first floor of the Durham Center, Iowa State University.

In 1939, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed the Atanasoff–Berry Computer (ABC),[55] a special-purpose digital electronic calculator for solving systems of linear equations. (The original goal was to solve 29 simultaneous equations in 29 unknowns. However, because the punched card mechanism encountered errors in operation, the completed machine was able to solve only a few equations.) The design used over 300 vacuum tubes for high speed and employed capacitors fixed in a mechanically rotating drum for memory.


Though the ABC machine was not programmable, it was the first to use electronic circuits. ENIAC co-inventor John Mauchly examined the ABC in June 1941, and its influence on the design of the later ENIAC machine is a matter of contention among computer historians. The ABC was largely forgotten until it became the focus of the lawsuit Honeywell v. Sperry Rand, the ruling of which invalidated the ENIAC patent (and several others) as, among many reasons, having been anticipated by Atanasoff's work.

In 1939, development began at IBM's Endicott laboratories on the Harvard Mark I. Known officially as the Automatic Sequence Controlled Calculator,[56] the Mark I was a general purpose electro-mechanical computer built with IBM financing and with assistance from IBM personnel, under the direction of Harvard mathematician Howard Aiken. Its design was influenced by Babbage's Analytical Engine, using decimal arithmetic and storage wheels and rotary switches in addition to electromagnetic relays. It was programmable via punched paper tape, and contained several calculation units working in parallel. Later versions contained several paper tape readers and the machine could switch between readers based on a condition. Nevertheless, the machine was not quite Turing-complete. The Mark I was moved to Harvard University and began operation in May 1944.

ENIAC

ENIAC performed ballistics trajectory calculations with 160 kW of power.

The US-built ENIAC (Electronic Numerical Integrator and Computer) was the first electronic general-purpose computer. Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, it was 1,000 times faster than the Harvard Mark I. ENIAC's development and construction lasted from 1943 to full operation at the end of 1945.

When its design was proposed, many researchers believed that the thousands of delicate valves (i.e. vacuum tubes) would burn out often enough that the ENIAC would be so frequently down for repairs as to be useless. It was, however, capable of up to thousands of operations per second for hours at a time between valve failures. It proved to potential customers that electronics could be useful for large-scale computing, and that public demonstration of feasibility proved crucial in winning support for later machines.

ENIAC was unambiguously a Turing-complete device. A "program" on the ENIAC, however, was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that evolved from it. To program it meant to rewire it.[57] (Improvements completed in 1948 made it possible to execute stored programs set in function table memory, which made programming less a "one-off" effort, and more systematic.) It was possible to run operations in parallel, as it could be wired to operate multiple accumulators simultaneously. Thus the sequential operation which is the hallmark of a von Neumann machine occurred after ENIAC.

First-generation von Neumann machines

Even before the ENIAC was finished, Eckert and Mauchly recognized its limitations and started the design of a stored-program computer, EDVAC. John von Neumann was credited with a widely-circulated report describing the EDVAC design in which both the programs and working data were stored in a single, unified store. This basic design, denoted the von Neumann architecture, would serve as the foundation for the world-wide development of ENIAC's successors.[58]

Nine-track magnetic tape.

Magnetic core memory. Each core is one bit.

In this generation of equipment, temporary or working storage was provided by acoustic delay lines, which used the propagation time of sound through a medium such as liquid mercury (or through a wire) to briefly store data. A series of acoustic pulses was sent along a tube; after a time, as each pulse reached the end of the tube, the circuitry detected whether it represented a 1 or a 0 and caused the oscillator to re-send it. Other machines used Williams tubes, which use the ability of a television picture tube to store and retrieve data. By 1954, magnetic core memory[59] was rapidly displacing most other forms of temporary storage, and dominated the field through the mid-1970s.


The first working von Neumann machine was the Manchester "Baby" or Small-Scale Experimental Machine, developed by Frederic C. Williams and Tom Kilburn and built at the University of Manchester in 1948;[60] it was followed in 1949 by the Manchester Mark I computer, which functioned as a complete system using the Williams tube and magnetic drum for memory, and also introduced index registers.[61] The other contender for the title "first digital stored program computer" had been EDSAC, designed and constructed at the University of Cambridge. Operational less than one year after the Manchester "Baby", it was also capable of tackling real problems. EDSAC was actually inspired by plans for EDVAC (Electronic Discrete Variable Automatic Computer), the successor to ENIAC; these plans were already in place by the time ENIAC was successfully operational. Unlike ENIAC, which used parallel processing, EDVAC used a single processing unit. This design was simpler, and it was the first to be implemented in each succeeding wave of miniaturization and increased reliability. Some view the Manchester Mark I / EDSAC / EDVAC as the "Eves" from which nearly all current computers derive their architecture. Manchester University's machine became the prototype for the Ferranti Mark I. The first Ferranti Mark I machine was delivered to the University in February 1951, and at least nine others were sold between 1951 and 1957.

The first universal programmable computer in the Soviet Union was created by a team of scientists under the direction of Sergei Alekseyevich Lebedev at the Kiev Institute of Electrotechnology, Soviet Union (now Ukraine). The computer MESM (МЭСМ, Small Electronic Calculating Machine) became operational in 1950. It had about 6,000 vacuum tubes and consumed 25 kW of power. It could perform approximately 3,000 operations per second. Another early machine was CSIRAC, an Australian design that ran its first test program in 1949. CSIRAC is the oldest computer still in existence and the first to have been used to play digital music.[62]

In October 1947, the directors of J. Lyons & Company, a British catering company famous for its teashops but with strong interests in new office management techniques, decided to take an active role in promoting the commercial development of computers. By 1951 the LEO I computer was operational and ran the world's first regular routine office computer job. In November 1951, the J. Lyons company began weekly operation of a bakery valuations job on the LEO (Lyons Electronic Office). This was the first business application to go live on a stored program computer.

Defining characteristics of some early digital computers of the 1940s (See History of computing hardware)

Name | First operational | Numeral system | Computing mechanism | Programming | Turing complete
Zuse Z3 (Germany) | May 1941 | Binary | Electro-mechanical | Program-controlled by punched film stock | Yes (1998)
Atanasoff–Berry Computer (USA) | Summer 1941 | Binary | Electronic | Not programmable—single purpose | No
Colossus (UK) | December 1943 | Binary | Electronic | Program-controlled by patch cables and switches | No
Harvard Mark I – IBM ASCC (USA) | 1944 | Decimal | Electro-mechanical | Program-controlled by 24-channel punched paper tape (but no conditional branch) | Yes (1998)
ENIAC (USA) | November 1945 | Decimal | Electronic | Program-controlled by patch cables and switches | Yes
Manchester Small-Scale Experimental Machine (UK) | June 1948 | Binary | Electronic | Stored-program in Williams cathode ray tube memory | Yes
Modified ENIAC (USA) | September 1948 | Decimal | Electronic | Program-controlled by patch cables and switches plus a primitive read-only stored programming mechanism using the Function Tables as program ROM | Yes
EDSAC (UK) | May 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes
Manchester Mark I (UK) | October 1949 | Binary | Electronic | Williams cathode ray tube memory and magnetic drum memory | Yes
CSIRAC (Australia) | November 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes

In June 1951, the UNIVAC I (Universal Automatic Computer) was delivered to the U.S. Census Bureau. Remington Rand eventually sold 46 machines at more than $1 million each. UNIVAC was the first 'mass produced' computer; all predecessors had been 'one-off' units. It used 5,200 vacuum tubes and consumed 125 kW of power. It used a mercury delay line capable of storing 1,000 words of 11 decimal digits plus sign (72-bit words) for memory. Unlike IBM machines, it was not equipped with a punched card reader but with 1930s-style metal magnetic tape input, making it incompatible with some existing commercial data stores. High-speed punched paper tape and modern-style magnetic tapes were used for input/output by other computers of the era.[63]

In 1952, IBM publicly announced the IBM 701 Electronic Data Processing Machine, the first in its successful 700/7000 series and its first IBM mainframe computer. The IBM 704, introduced in 1954, used magnetic core memory, which became the standard for large machines. The first implemented high-level general purpose programming language, Fortran, was also being developed at IBM for the 704 during 1955 and 1956 and released in early 1957. (Konrad Zuse's 1945 design of the high-level language Plankalkül was not implemented at that time.) A user group, SHARE, was founded in 1955 to share software and experiences with the IBM 701; this group, which exists to this day, was a progenitor of open source.

IBM 650 front panel wiring.

IBM introduced a smaller, more affordable computer in 1954 that proved very popular. The IBM 650 weighed over 900 kg, the attached power supply weighed around 1350 kg, and both were held in separate cabinets of roughly 1.5 meters by 0.9 meters by 1.8 meters. It cost $500,000 or could be leased for $3,500 a month. Its drum memory was originally only 2000 ten-digit words, and required arcane programming for efficient computing. Memory limitations such as this were to dominate programming for decades afterward, until the evolution of hardware capabilities and a programming model that were more sympathetic to software development.

In 1955, Maurice Wilkes invented microprogramming,[64] which was later widely used in the CPUs and floating-point units of mainframe and other computers, such as the IBM 360 series. Microprogramming allows the base instruction set to be defined or extended by built-in programs (now called firmware or microcode).[65][66]

In 1956, IBM sold its first magnetic disk system, RAMAC (Random Access Method of Accounting and Control). It used 50 24-inch (610 mm) metal disks, with 100 tracks per side. It could store 5 megabytes of data and cost $10,000 per megabyte. (As of 2008, magnetic storage, in the form of hard disks, costs less than one 50th of a cent per megabyte).

Second generation: transistors

Die of a KSY34 high-frequency NPN bipolar junction transistor; the base and emitter are connected via bonded wires.

In the second half of the 1950s bipolar junction transistors (BJTs)[67] replaced vacuum tubes. Their use gave rise to the "second generation" computers. Initially, it was believed that very few computers would ever be produced or used.[68] This was due in part to their size, cost, and the skill required to operate or interpret their results. Transistors[69] greatly reduced computers' size, initial cost and operating cost. The bipolar junction transistor[70] was invented in 1947.[71] If no electrical current flows through the base-emitter path of a bipolar transistor, the transistor's collector-emitter path blocks electrical current (and the transistor is said to "turn full off"). If sufficient current flows through the base-emitter path of a transistor, that transistor's collector-emitter path also passes current (and the transistor is said to "turn full on"). Current flow or current blockage represent binary 1 (true) or 0 (false), respectively.[72] Compared to vacuum tubes, transistors have many advantages: they are less expensive to manufacture and are ten times faster, switching from the condition 1 to 0 in millionths or billionths of a second. Transistor volume is measured in cubic millimeters compared to vacuum tubes' cubic centimeters. Transistors' lower operating temperature increased their reliability, compared to vacuum tubes. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space.

Typically, second-generation computers[73][74] were composed of large numbers of printed circuit boards, such as the IBM Standard Modular System,[75] each carrying one to four logic gates or flip-flops. A second generation computer, the IBM 1401, captured about one third of the world market. IBM installed more than one hundred thousand 1401s between 1960 and 1964. This period also saw the only Italian attempt: the ELEA by Olivetti, produced in 110 units.

This RAMAC DASD is being restored at the Computer History Museum.

Transistorized electronics improved not only the CPU (Central Processing Unit), but also the peripheral devices. The IBM 350 RAMAC was introduced in 1956 and was the world's first disk drive. Second generation disk data storage units were able to store tens of millions of letters and digits. Multiple peripherals could be connected to the CPU, increasing the total memory capacity to hundreds of millions of characters. Next to the fixed disk storage units, connected to the CPU via high-speed data transmission, were removable disk data storage units. A removable disk stack could be exchanged with another stack in a few seconds. Even though the removable disks' capacity was smaller than that of fixed disks, their interchangeability guaranteed a nearly unlimited quantity of data close at hand. Magnetic tape provided archival capability for this data, at a lower cost than disk.

Many second generation CPUs delegated peripheral device communications to a secondary processor. For example, while the communication processor controlled card reading and punching, the main CPU executed calculations and binary branch instructions. One databus would bear data between the main CPU and core memory at the CPU's fetch-execute cycle rate, and other databusses would typically serve the peripheral devices. On the PDP-1, the core memory's cycle time was 5 microseconds; consequently most arithmetic instructions took 10 microseconds (100,000 operations per second) because most operations took at least two memory cycles; one for the instruction, one for the operand data fetch.

During the second generation remote terminal units (often in the form of teletype machines like a Friden Flexowriter) saw greatly increased use. Telephone connections provided sufficient speed for early remote terminals and allowed hundreds of kilometers separation between remote-terminals and the computing center. Eventually these stand-alone computer networks would be generalized into an interconnected network of networks — the Internet.[76]

Post-1960: third generation and beyond


Intel 8742 eight-bit microcontroller IC.

The explosion in the use of computers began with 'Third Generation' computers. These relied on Jack St. Clair Kilby's[77] and Robert Noyce's[78] independent invention of the integrated circuit (or microchip), which later led to the invention of the microprocessor[79] by Ted Hoff and Federico Faggin at Intel.[80] The Intel 8742, for example, is an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O, all in the same chip.

During the 1960s there was considerable overlap between second and third generation technologies.[81] IBM implemented its IBM Solid Logic Technology modules in hybrid circuits for the IBM System/360 in 1964. As late as 1975, Sperry Univac continued the manufacture of second-generation machines such as the UNIVAC 494. The Burroughs large systems such as the B5000 were stack machines, which allowed for simpler programming. These pushdown automata were later also implemented in minicomputers and microprocessors, which influenced programming language design. Minicomputers served as low-cost computer centers for industry, business and universities.[82] It became possible to simulate analog circuits with the Simulation Program with Integrated Circuit Emphasis, or SPICE (1971), on minicomputers, one of the programs for electronic design automation (EDA). The microprocessor led to the development of the microcomputer: small, low-cost computers that could be owned by individuals and small businesses. Microcomputers, the first of which appeared in the 1970s, became ubiquitous in the 1980s and beyond. Steve Wozniak, co-founder of Apple Computer, is credited with developing the first mass-market home computers. However, his first computer, the Apple I, came out some time after the KIM-1 and Altair 8800, and the first Apple computer with graphic and sound capabilities came out well after the Commodore PET. Computing has evolved with microcomputer architectures, with features added from their larger brethren, now dominant in most market segments.

In the twenty-first century, multi-core CPUs became commercially available. Content-addressable memory (CAM) has become inexpensive enough to be used in networking, although no computer system has yet implemented hardware CAMs for use in programming languages. Currently, CAMs (or associative arrays) in software are programming-language-specific. Semiconductor memory cell arrays are very regular structures, and manufacturers prove their processes on them; this allows price reductions on memory products. After semiconductor memories became commodities, computer software became less labor-intensive; programming languages became less arcane and more understandable to code in.[83] When CMOS field-effect-transistor-based logic gates supplanted bipolar transistors, computer power consumption could decrease dramatically (a CMOS FET draws current mainly during transitions between logic states, unlike the continuous current draw of a BJT). This has allowed computing to become a commodity which is now ubiquitous, embedded in many forms, from the Internet, on satellites, aircraft, automobiles, and ships, and in televisions, cellphones and household appliances, in work-oriented tools and equipment, and in robots, toys, and games. Computing hardware and its software have even become a metaphor for the operation of the universe.[84]

Systems as complicated as computers require very high reliability. ENIAC remained in continuous operation from 1947 to 1955, eight years, before being shut down. Although a vacuum tube might fail, it would be replaced without bringing down the system. By the simple strategy of never shutting down ENIAC, the failures were dramatically reduced. Hot-pluggable hard disks, like the hot-pluggable vacuum tubes of yesteryear, continue the tradition of repair during continuous operation. Semiconductor memories routinely have no errors when they operate, although operating systems like Unix have employed memory tests on start-up to detect failing hardware. Most storage hardware is reliable enough today for consumers to routinely skip the memory tests during start-up. Today, the requirement of reliable performance is made even more stringent when server farms are the delivery platform. Google has managed this by using fault-tolerant software to recover from hardware failures, and is even working on the concept of replacing entire server farms on-the-fly, during a service event.[85]

In the late twentieth century and early twenty-first century, there has been convergence in the techniques of computer hardware and computer software, called reconfigurable computing.[86][87] The concept is to take the software developed for various virtual machines written in the extant programming languages and implement it in hardware, using VHDL, Verilog, or other hardware description languages. These soft microprocessors can execute programs in the C programming language, for example.

An indication of the rapidity of development of this field can be inferred from the history of the seminal article.[88] By the time anyone had time to write anything down, it was obsolete. After 1945, others read John von Neumann's First Draft of a Report on the EDVAC and immediately started implementing their own systems. To this day, the pace of development has continued worldwide.

Operating system

An operating system (OS) is the software component of a computer system that is responsible for the management and coordination of activities and the sharing of the resources of the computer. The operating system acts as a host for application programs that are run on the machine. As a host, one of the purposes of an operating system is to handle the details of the operation of the hardware. This relieves application programs from having to manage these details and makes it easier to write applications. Almost all computers, including hand-held computers, desktop computers, supercomputers, and even modern video game consoles, use an operating system of some type. See also Computer systems architecture.

Operating systems offer a number of services to application programs and users. Applications access these services through application programming interfaces (APIs) or system calls. By invoking these interfaces, the application can request a service from the operating system, pass parameters, and receive the results of the operation. Users may also interact with the operating system by typing commands or using a graphical user interface (GUI, commonly pronounced “gooey”). For hand-held and desktop computers, the GUI is generally considered part of the operating system. For large multiuser systems, the GUI is generally implemented as an application program that runs outside the operating system. See also Computer programming; Human-computer interaction.
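As a rough, hedged illustration (not tied to any particular operating system's internals), the Python snippet below calls a few functions from the standard os module, which on most platforms are thin wrappers over system calls: the application asks for a service, and the kernel does the work and hands back the result.

```python
import os

# Each of these calls crosses into the kernel through a system call (or a
# close equivalent) and returns the result to the application.
print("current process id:", os.getpid())

info = os.stat(".")     # ask the kernel about the current directory
print("directory metadata: size =", info.st_size, "bytes, mode =", oct(info.st_mode))

# os.write() maps closely to the low-level write service; file descriptor 1
# is standard output under the usual convention.
os.write(1, b"hello from a system-call wrapper\n")
```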

Common contemporary operating systems include Microsoft Windows, Mac OS X, Linux and Solaris. Microsoft Windows has a significant majority of market share in the desktop and notebook computer markets, while the server and embedded device markets are split amongst several operating systems. [1] [2]

Operating System: A Technology

An operating system is a collection of technologies which are designed to allow the computer to perform certain functions. These technologies may or may not be present in every operating system, and there are often differences in how they are implemented. However, as stated above, most modern operating systems are derived from common design ancestors, and are therefore basically similar.

Boot-strapping

In most cases, the operating system is not the first code to run on the computer at startup (boot) time. The initial code executing on the computer is usually loaded from firmware, which is stored in read only memory (ROM). This is sometimes called the BIOS or boot ROM.

The firmware loads and executes code located on a removable disk or hard drive, and contained within the first sector of the drive, referred to as the boot sector. The code stored on the boot sector is called the boot loader, and is responsible for loading the operating system's kernel from disk and starting it running.

Some simple boot loaders are designed to locate one specific operating system and load it, although many modern ones have the capacity to allow the user to choose from a number of operating systems.
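As a conceptual sketch of the hand-off described above, the following Python snippet inspects the first sector of a hypothetical raw disk image (the file name disk.img is a placeholder, not something this document provides) and checks for the classic PC/BIOS boot-sector signature; real firmware performs essentially this test before jumping to the boot loader.

```python
# Conceptual sketch only: examine the first sector of a raw disk image.
SECTOR_SIZE = 512

with open("disk.img", "rb") as image:          # placeholder path
    boot_sector = image.read(SECTOR_SIZE)

# On BIOS/MBR systems, a sector is treated as bootable only if its last
# two bytes are 0x55 0xAA; the bytes before the partition table hold the
# first-stage boot loader code that the firmware will execute.
if len(boot_sector) == SECTOR_SIZE and boot_sector[510:512] == b"\x55\xaa":
    print("boot signature found: firmware would jump to this boot loader")
else:
    print("no boot signature: firmware would try the next device")
```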

Program execution

An operating system's most basic function is to support the running of programs by the users. On a multiprogramming operating system, running programs are commonly referred to as processes. Process management refers to the facilities provided by the operating system to support the creation, execution, and destruction of processes, and to facilitate various interactions, and limit others.

The operating system's kernel in conjunction with underlying hardware must support this functionality.

Executing a program involves the creation of a process by the operating system. The kernel creates a process by setting aside or allocating some memory, loading program code from a disk or another part of memory into the newly allocated space, and starting it running.
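A minimal sketch of this, using Python's standard subprocess module: the library asks the operating system to create a new process, and the kernel allocates memory for it, loads the program's code, and starts it running.

```python
import subprocess
import sys

# Ask the operating system to create a new process running another copy
# of the Python interpreter with a tiny one-line program.
child = subprocess.Popen(
    [sys.executable, "-c", "print('hello from the child process')"]
)

child.wait()                         # block until the child terminates
print("child exited with status", child.returncode)
```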

Operating system kernels store various information about running processes. This information might include the following (a short sketch of inspecting some of these values follows the list):

- A unique identifier, called a process identifier (PID).
- A list of memory the program is using, or is allowed to access.
- The PID of the program which requested its execution, or the parent process ID (PPID).
- The filename and/or path from which the program was loaded.
- A register file, containing the last values of all CPU registers.
- A program counter, indicating the position in the program.
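A minimal, hedged sketch of querying a few of these values from user space; the /proc portion assumes a Linux system, where the kernel exposes per-process records as files.

```python
import os

# Every running process can ask the kernel for some of this bookkeeping.
print("PID :", os.getpid())     # unique identifier of this process
print("PPID:", os.getppid())    # identifier of the parent that started it

# Linux-specific: the kernel publishes further per-process details under /proc.
status_path = f"/proc/{os.getpid()}/status"
if os.path.exists(status_path):
    with open(status_path) as status:
        for line in status:
            if line.startswith(("Name:", "State:", "VmSize:")):
                print(line.strip())
```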

Interrupts

Interrupts are central to operating systems as they allow the operating system to deal with the unexpected activities of running programs and the world outside the computer. Interrupt-based programming is one of the most basic forms of time-sharing, being directly supported by most CPUs. Interrupts provide a computer with a way of automatically running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when that event takes place.

When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, and its registers and program counter are saved. This is analogous to placing a bookmark in a book when someone is interrupted by a phone call. This task requires no operating system as such, but only that the interrupt be configured at an earlier time.

In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware, or from the running program. When a hardware device triggers an interrupt, the operating system's kernel decides how to deal with this event, generally by running some processing code, or ignoring it. The processing of hardware interrupts is a task that is usually delegated to software called device drivers, which may be either part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means.

A program may also trigger an interrupt to the operating system; such software interrupts are very similar in function. If a program wishes to access hardware, for example, it may interrupt the operating system's kernel, which causes control to be passed back to the kernel. The kernel may then process the request, which may contain instructions to be passed on to hardware, or to a device driver. When a program wishes to allocate more memory, launch or communicate with another program, or signal that it no longer needs the CPU, it does so through interrupts.
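Signals are the closest thing ordinary user code gets to this interrupt model, so the hedged sketch below uses them as an analogy rather than as real hardware interrupts: a handler is registered in advance, and it runs asynchronously when the event (here, Ctrl-C delivering SIGINT) arrives.

```python
import signal
import time

# Register a handler ahead of time, just as an interrupt handler is
# configured before the interrupt ever fires.
def on_interrupt(signum, frame):
    print(f"\nreceived signal {signum}: saving state and shutting down")
    raise SystemExit(0)

signal.signal(signal.SIGINT, on_interrupt)

print("working... press Ctrl-C to deliver SIGINT")
while True:
    time.sleep(1)          # ordinary work; the handler can run at any point
```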

Protected mode and supervisor mode

Modern CPUs support something called dual mode operation. CPUs with this capability use two modes: protected mode and supervisor mode, which allow certain CPU functions to be controlled and affected only by the operating system kernel. Here, protected mode does not refer specifically to the 80286 (Intel's x86 16-bit microprocessor) CPU feature, although its protected mode is very similar to it. CPUs might have other modes similar to 80286 protected mode as well, such as the virtual 8086 mode of the 80386 (Intel's x86 32-bit microprocessor or i386).

However, the term is used here more generally in operating system theory to refer to all modes which limit the capabilities of programs running in that mode, providing things like virtual memory addressing and limiting access to hardware in a manner determined by a program running in supervisor mode. Similar modes have existed in supercomputers, minicomputers, and mainframes as they are essential to fully supporting UNIX-like multi-user operating systems.

When a computer first starts up, it is automatically running in supervisor mode. The first few programs to run on the computer (the BIOS, the boot loader and the operating system) have unlimited access to hardware. However, when the operating system passes control to another program, it can place the CPU into protected mode.

In protected mode, programs may have access to a more limited set of the CPU's instructions. A user program may leave protected mode only by triggering an interrupt, causing control to be passed back to the kernel. In this way the operating system can maintain exclusive control over things like access to hardware, and memory.

The term "protected mode resource" generally refers to one or more CPU registers, which contain information that the running program isn't allowed to alter. Attempts to alter these resources generally causes a switch to supervisor mode.

Memory management

Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already used by another program. Since programs time share, each program must have independent access to memory.

Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager and do not exceed their allocated memory. This system of memory management is almost never seen anymore, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposely alter another program's memory or may affect the operation of the operating system itself. With cooperative memory management it takes only one misbehaving program to crash the system.

Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation, and paging. All methods require some level of hardware support (such as the 80286 MMU) which doesn't exist in all computers.

In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses will trigger an interrupt which will cause the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation or Seg-V for short, and since it is usually a sign of a misbehaving program, the kernel will generally kill the offending program, and report the error.
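A hedged, Unix-specific sketch of exactly this sequence: a child process deliberately touches an address outside every mapping the kernel has given it, the hardware faults, and the kernel kills the child with SIGSEGV while the parent reports what happened. (ctypes is used here only as a way to perform an out-of-bounds access on purpose.)

```python
import ctypes
import os
import signal

pid = os.fork()                       # Unix-specific: create a child process
if pid == 0:
    # Child: dereference address 0, which lies outside its address space.
    ctypes.string_at(0)               # triggers the hardware memory fault
    os._exit(0)                       # never reached
else:
    _, status = os.waitpid(pid, 0)
    if os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGSEGV:
        print("child killed by the kernel: segmentation violation (Seg-V)")
    else:
        print("child ended with status", status)
```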

Windows 3.1-Me had some level of memory protection, but programs could easily circumvent the need to use it. Under Windows 9x all MS-DOS applications ran in supervisor mode, giving them almost unlimited control over the computer. A general protection fault would be produced indicating a segmentation violation had occurred, however the system would often crash anyway.

Swapping and virtual memory

The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose which memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.

If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel will be interrupted in the same way as it would if the program exceeded its allocated memory (see the section on memory management). Under UNIX this kind of interrupt is referred to as a page fault.


When the kernel detects a page fault it will generally adjust the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.

In modern operating systems, application memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
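Page faults are normally invisible, but on Unix-like systems the kernel keeps per-process counts that user code can read. The hedged sketch below (Unix-only, since it uses the resource module) allocates and touches a block of memory and then reports how many minor faults (resolved from RAM) and major faults (requiring disk I/O) the kernel serviced.

```python
import resource

def fault_counts():
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_minflt, usage.ru_majflt      # minor, major page faults

minor_before, major_before = fault_counts()

# Allocate roughly 200 MiB and write one byte per 4 KiB page so that
# every page is actually touched and therefore actually mapped in.
block = bytearray(200 * 1024 * 1024)
for offset in range(0, len(block), 4096):
    block[offset] = 1

minor_after, major_after = fault_counts()
print("minor page faults:", minor_after - minor_before)
print("major page faults (needed disk I/O):", major_after - major_before)
```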

Methods of Multitasking

Multitasking refers to the running of multiple independent computer programs on the same computer, giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time sharing, which means that each program uses a share of the computer's time to execute.

An operating system kernel contains a piece of software called a scheduler, which determines how much time each program will spend executing and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program to access the CPU and memory. At a later time, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This passing of control between the kernel and applications is called a context switch.

An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malfunctioning program may prevent any other programs from using the CPU.
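A minimal sketch of cooperative multitasking, using Python generators as the "tasks": each task runs until it voluntarily yields, and a toy round-robin scheduler decides who goes next. Exactly as the paragraph above warns, a task that never yields would starve all the others.

```python
from collections import deque

def task(name, steps):
    """A toy task: does a little work, then voluntarily yields control."""
    for step in range(steps):
        print(f"{name}: step {step}")
        yield                         # the explicit "return control" point

def run(tasks):
    ready = deque(tasks)              # simple round-robin ready queue
    while ready:
        current = ready.popleft()
        try:
            next(current)             # let the task run until it yields
            ready.append(current)     # then put it at the back of the queue
        except StopIteration:
            pass                      # task finished; drop it

run([task("A", 3), task("B", 2), task("C", 1)])
```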

The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.)
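For contrast, a hedged, Unix-only sketch of the timer mechanism that makes preemption possible: a periodic timer "interrupt" (here delivered as SIGALRM) fires no matter what the running code is doing, which is the hook a preemptive scheduler uses to take the CPU back.

```python
import signal
import time

def on_timer(signum, frame):
    # In a real kernel this is where the scheduler could preempt the task.
    print("timer interrupt: the scheduler could switch tasks here")

signal.signal(signal.SIGALRM, on_timer)           # Unix-specific signal
signal.setitimer(signal.ITIMER_REAL, 0.5, 0.5)    # fire every 0.5 seconds

deadline = time.time() + 2.0
while time.time() < deadline:
    pass                     # a busy task that never yields voluntarily

signal.setitimer(signal.ITIMER_REAL, 0, 0)        # cancel the timer
```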

On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).


Very early UNIX did not provide preemptive multitasking, as it was not supported on the hardware. As it was designed from minicomputer concepts for multiple users accessing the system via remote terminals, it was quickly migrated to hardware which did allow preemptive multitasking, and was adapted to support these features.

Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well. Under Windows Vista, the introduction of the Windows Display Driver Model (WDDM) accomplishes this for display drivers, and in Linux, the preemptible kernel model introduced in version 2.6 allows all device drivers and some other parts of kernel code to take advantage of preemptive multitasking.

Under Windows prior to Windows Vista and Linux prior to version 2.6, all driver execution was co-operative, meaning that if a driver entered an infinite loop it would freeze the system.

Disk access and file systems

Access to files stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and better use of the drive's available space. The specific way files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree.

Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system.

While many simpler operating systems support a limited range of options for accessing storage systems, more modern operating systems like UNIX and Linux support a technology known as a virtual file system or VFS. A modern operating system like UNIX allows a wide array of storage devices, regardless of their design or file systems, to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices with an infinite variety of file systems installed on them, through the use of specific device drivers and file system drivers.

A connected storage device such as a hard drive will be accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX this is the language of block devices.
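A small, hedged illustration of that uniform interface on Unix-like systems: the snippet reads from /dev/urandom, a character device chosen here only because it is safe to read without special privileges (reading a real block device such as a whole disk usually requires root), yet it is opened and read exactly like an ordinary file because its driver presents it through the standard interface.

```python
# Unix-oriented sketch: a device file is accessed through the same
# open/read interface as an ordinary file; the driver does the translation.
with open("/dev/urandom", "rb") as device:
    sample = device.read(16)

print("16 bytes from a device, via the ordinary file API:", sample.hex())
```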


When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates.
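A minimal sketch of that portable view from a program's side: walking part of a directory tree and asking for per-file metadata through the standard API, with no knowledge of which on-disk file system format is actually underneath.

```python
import os
import time

# Walk only the top level of the current directory and print a little
# metadata for a few files; the file system driver hides the on-disk format.
for dirpath, dirnames, filenames in os.walk("."):
    for name in filenames[:5]:                    # keep the output short
        path = os.path.join(dirpath, name)
        st = os.stat(path)
        print(f"{path}: {st.st_size} bytes, "
              f"modified {time.ctime(st.st_mtime)}, "
              f"permissions {oct(st.st_mode & 0o777)}")
    break                                         # do not descend further
```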

Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes make the implementation of a single interface for every file system a daunting task. While UNIX and Linux systems generally have support for a wide variety of file systems, proprietary operating systems such as Microsoft Windows tend to limit the user to a single file system for each task. For example, the Windows operating system can only be installed on NTFS, and CDs and DVDs can only be recorded using UDF or ISO 9660.

Device drivers

A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and, on the other end, the requisite interfaces to the operating system and software applications. It is a specialized, hardware-dependent computer program, which is also operating system specific, that enables another program (typically an operating system, an applications software package, or a program running under the operating system kernel) to interact transparently with a hardware device. It usually provides the interrupt handling required for asynchronous, time-dependent hardware interfacing.

The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Manufacturers also release newer models that provide more reliable or better performance, and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these OS-mandated function calls into device-specific calls. In theory, a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver ensures that the device appears to operate as usual from the operating system's point of view.
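
As a loose analogy (real drivers are written in C against kernel-defined tables of function pointers, and every name below is invented), the pattern looks like this: the OS fixes one interface per device class, and each vendor's driver fills it in for its own hardware.

    # Loose analogy: the OS defines one interface per device class,
    # and each vendor's driver implements it for its own hardware.
    from abc import ABC, abstractmethod

    class NetworkDriver(ABC):
        """Interface the (imaginary) OS dictates for all network cards."""
        @abstractmethod
        def send_packet(self, data: bytes) -> None: ...

    class VendorACard(NetworkDriver):
        def send_packet(self, data: bytes) -> None:
            print("Vendor A: writing", len(data), "bytes to register block 0x10")

    class VendorBCard(NetworkDriver):
        def send_packet(self, data: bytes) -> None:
            print("Vendor B: queueing", len(data), "bytes in a DMA ring")

    def os_send(driver: NetworkDriver, data: bytes) -> None:
        # The OS only knows the generic call; the driver translates it
        # into whatever its specific hardware requires.
        driver.send_packet(data)

    os_send(VendorACard(), b"hello")
    os_send(VendorBCard(), b"hello")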

Networking

Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH which allows networked users direct access to a computer's command line interface.

Client/server networking involves a program on one computer, the client, which connects via a network to another computer, called a server. Servers, often running UNIX or Linux, offer (or host) various services to other network computers and users. These services are usually provided through ports, or numbered access points, beyond the server's network address. Each port number is usually associated with a maximum of one running program, which is responsible for handling requests to that port. This program, called a daemon, is a user program that can in turn access the local hardware resources of the computer by passing requests to the operating system kernel.
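
A minimal sketch of that exchange, using Python's standard socket module, is shown below; the loopback address and port 5000 are arbitrary choices, and a real daemon would loop forever serving many clients rather than handling a single request.

    # Minimal client/server exchange over one TCP port (sketch only).
    import socket, threading, time

    def server():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("127.0.0.1", 5000))     # the numbered access point (port)
            srv.listen(1)
            conn, _ = srv.accept()            # wait for one client to connect
            with conn:
                request = conn.recv(1024)
                conn.sendall(b"echo: " + request)

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                           # crude: give the server time to start

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect(("127.0.0.1", 5000))
        client.sendall(b"hello server")
        print(client.recv(1024).decode())     # prints: echo: hello server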

Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols on Windows. Specific protocols for specific tasks may also be supported, such as NFS for file access. Protocols like ESound (esd) can easily be extended over the network to provide sound from local applications on a remote system's sound hardware.

Security

A computer being secure depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel.

The operating system must be capable of distinguishing between requests which should be allowed to be processed and others which should not. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication. Often a username must be supplied, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all.
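
The password case can be sketched as follows: only a salted hash of the password is stored, and a login attempt is checked by recomputing the hash. This is a simplification; real systems use deliberately slow functions such as bcrypt or scrypt rather than a single SHA-256 round, and the user name and password below are made up.

    # Simplified username/password check: only a salted hash is stored,
    # never the password itself.
    import hashlib, os, hmac

    def make_record(password: str):
        salt = os.urandom(16)                                   # random per-user salt
        digest = hashlib.sha256(salt + password.encode()).digest()
        return salt, digest

    def authenticate(password: str, record) -> bool:
        salt, stored = record
        candidate = hashlib.sha256(salt + password.encode()).digest()
        return hmac.compare_digest(candidate, stored)           # constant-time compare

    users = {"alice": make_record("correct horse")}             # example account
    print(authenticate("correct horse", users["alice"]))        # True
    print(authenticate("wrong guess", users["alice"]))          # False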

In addition to the allow/disallow model of security, a system with a high level of security will also offer auditing options. These allow tracking of requests for access to resources (such as "who has been reading this file?").
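
Conceptually an audit trail is just an append-only log of access requests that can be queried later, as in this small illustrative sketch (in a real operating system the log would be kept by the kernel and written to protected storage):

    # Sketch of an audit trail: every access request is recorded so that
    # questions like "who has been reading this file?" can be answered later.
    import time

    audit_log = []                     # would be persistent in a real system

    def record_access(user, resource, action):
        audit_log.append((time.time(), user, resource, action))

    def who_read(resource):
        return sorted({user for _, user, res, action in audit_log
                       if res == resource and action == "read"})

    record_access("alice", "/etc/passwd", "read")
    record_access("bob", "/etc/passwd", "read")
    record_access("bob", "/home/alice/diary", "write")
    print(who_read("/etc/passwd"))     # ['alice', 'bob']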

Internal security, or security from an already running program, is only possible if all potentially harmful requests must be carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured. Microsoft Windows has been heavily criticized for many years for its almost total inability to protect one running program from another; however, since Windows is not generally used as a server, this has been considered less of a problem. In recent years, Microsoft has added limited user accounts and more secure logins. However, most people still operate their computers using Administrator accounts, which negates any possible internal security improvements brought about by these changes.

Linux and UNIX both have two-tier security, which limits any system-wide changes to the root user, a special user account on all UNIX-like systems. While the root user has unlimited permission to make system changes, programs running as a regular user are limited in where they can save files and what hardware they can access. This limits the damage that a regular user can do to the computer while still providing them with plenty of freedom to do everything except make system-wide changes. The user's settings are stored in an area of the computer's file system called the user's home directory, which is also provided as a location where the user may store their work, similar to My Documents on a Windows system. Should a user have to install software or make system-wide changes, they must enter the root password for the computer, which allows them to launch certain programs as the root user.
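
The kind of check a privileged tool performs can be sketched as follows (Unix-only, with the function and package names made up for illustration): the effective user ID is compared against 0, the root user, before a system-wide change is attempted.

    # Refuse a system-wide change unless running as root (Unix-only sketch).
    import os, sys

    def install_system_package(name):
        if os.geteuid() != 0:                 # user ID 0 is the root account
            sys.exit("must be root to install " + name + "; try running it via sudo")
        print("installing", name, "system-wide ...")   # placeholder for real work

    # install_system_package("example-package")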

Though regular user accounts on Unix-like operating systems provide plenty of freedom for day-to-day activities, this requirement to enter passwords for various operations can be frustrating for some users. For example, suppose a person who is used to a Windows environment is exposed to Ubuntu Linux for the first time. They may question why so many operations in this version of Linux require a root password. In their Windows environment they were allowed to change, delete, create, and rename files anywhere on their system. Installing new software or changing system settings required no special permissions because they logged on automatically as the administrator. This is, however, considered to be a security risk, as it makes it extremely easy to accidentally delete important files and for viruses to infect the operating system. With Windows Vista, Microsoft attempted to make improvements in this area of security. Whether it succeeded is a matter of debate; many feel that Vista requires verification for many activities that would rarely or never compromise the system.

External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed on to applications or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC), a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select computer systems being considered for the processing, storage and retrieval of sensitive or classified information.

Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and run an insecure service, such as Telnet or FTP, without being exposed by it, because the firewall denies all traffic trying to connect to the service on that port.
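
The decision a software firewall makes for each connection can be sketched as a simple rule lookup keyed on the destination port, with a default-deny policy; the rules and port numbers below are examples only, not any particular firewall's syntax.

    # Sketch of software-firewall rule matching: decide per connection whether
    # to allow or deny, based on the destination port (all values are examples).
    RULES = [
        {"port": 22, "action": "allow"},    # e.g. permit SSH
        {"port": 23, "action": "deny"},     # e.g. block Telnet
    ]
    DEFAULT_ACTION = "deny"                 # deny anything not explicitly allowed

    def decide(destination_port):
        for rule in RULES:
            if rule["port"] == destination_port:
                return rule["action"]
        return DEFAULT_ACTION

    for port in (22, 23, 8080):
        print(port, "->", decide(port))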

An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead to either emulate a processor or provide a host for a p-code based system such as Java.
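
The essence of the p-code approach is that the host interprets the guest's instructions instead of letting them run directly on the CPU, so the guest can only do what the interpreter chooses to allow. A toy illustration, with an invented instruction set:

    # Tiny "p-code" interpreter: the host program, not the CPU, executes the
    # guest's instructions, so the guest can only do what the interpreter allows.
    def run(program):
        stack = []
        for op, *args in program:
            if op == "push":
                stack.append(args[0])
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "print":
                print(stack.pop())
            else:
                raise ValueError(f"operation {op!r} not permitted")   # sandbox boundary

    run([("push", 2), ("push", 3), ("add",), ("print",)])   # prints 5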

Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, inclusive of bypassing auditing.

File system support in modern operating systems

Support for file systems is highly varied among modern operating systems although there are several common file systems which almost all operating systems include support and drivers for.

Under Linux and UNIX

Many Linux distributions support some or all of ext2, ext3, ReiserFS, Reiser4, GFS, GFS2, OCFS, OCFS2, and NILFS. The ext file systems, namely ext2 and ext3, are based on the original Linux file system. Others have been developed by companies to meet their specific needs, by hobbyists, or adapted from UNIX, Microsoft Windows, and other operating systems. Linux has full support for XFS and JFS, along with FAT (the MS-DOS file system) and HFS, the primary file system for the Macintosh.

In recent years support for Microsoft Windows NT's NTFS file system has appeared in Linux, and is now comparable to the support available for other native UNIX file systems. ISO 9660 and UDF, the standard file systems used on CDs, DVDs, and Blu-ray discs, are also supported. It is possible to install Linux on the majority of these file systems. Unlike other operating systems, Linux and UNIX allow any file system to be used regardless of the media it is stored on, whether a hard drive, a CD or DVD, or even a file located on another file system.

Under Microsoft Windows

Microsoft Windows presently supports NTFS and FAT file systems, along with network file systems shared from other computers, and the ISO 9660 and UDF file systems used for CDs, DVDs, and other optical discs such as Blu-ray. Under Windows, each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system which the operating system can be installed on. NTFS is the most efficient and reliable of the Windows file systems, comparing closely in performance to Linux's XFS; details of its internal design are not publicly documented. Windows Embedded CE 6.0 introduced exFAT, a file system more suitable for flash drives.

Under Mac OS

Mac OS X supports HFS+ with journaling as its primary file system. It is derived from the Hierarchical File System of the earlier Mac OS. Mac OS X has facilities to read and write FAT, NTFS, UDF, and other file systems, but cannot be installed to them. Due to its UNIX heritage Mac OS X now supports virtually all the file systems supported by the UNIX VFS.

Special Purpose File Systems

FAT file systems are commonly found on floppy disks, flash memory cards, digital cameras, and many other portable devices because of their relative simplicity. The performance of FAT compares poorly to most other file systems, as it uses overly simplistic data structures, making file operations time-consuming, and it makes poor use of disk space in situations where many small files are present. ISO 9660 and Universal Disk Format (UDF) are two common formats that target compact discs and DVDs. Mount Rainier is a newer extension to UDF, supported by Linux 2.6 kernels and Windows Vista, that facilitates rewriting to DVDs in the same fashion as has been possible with floppy disks.

Journalized File Systems

File systems may provide journaling, which provides safe recovery in the event of a system crash. A journaled file system writes some information twice: first to the journal, which is a log of file system operations, then to its proper place in the ordinary file system. Journaling is handled by the file system driver, and keeps track of each operation taking place that changes the contents of the disk. In the event of a crash, the system can recover to a consistent state by replaying a portion of the journal. Many UNIX file systems provide journaling, including ReiserFS, JFS, and Ext3.

In contrast, non-journaled file systems typically need to be examined in their entirety by a utility such as fsck or chkdsk for any inconsistencies after an unclean shutdown. Soft updates is an alternative to journaling that avoids the redundant writes by carefully ordering the update operations. Log-structured file systems and ZFS also differ from traditional journaled file systems in that they avoid inconsistencies by always writing new copies of the data, eschewing in-place updates.
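
The write-ahead idea can be sketched in a few lines: record the intended change in the journal first, apply it second, and after a crash replay any journaled change whose completion record is missing. The sketch below uses in-memory Python structures purely for illustration; a real file system keeps the journal on disk and logs block-level changes rather than whole files.

    # Sketch of write-ahead journaling: record the intended change first,
    # apply it second, and replay the journal after a crash.
    journal = []      # log of intended operations (on disk in a real system)
    disk = {}         # the "ordinary file system"

    def write_file(name, data):
        journal.append(("write", name, data))   # 1. record the intention
        disk[name] = data                       # 2. do the real update
        journal.append(("commit", name, None))  # 3. mark it complete

    def recover():
        # Re-apply any write whose commit record is missing (crashed mid-update).
        pending = {}
        for op, name, data in journal:
            if op == "write":
                pending[name] = data
            elif op == "commit":
                pending.pop(name, None)
        for name, data in pending.items():
            disk[name] = data                   # replay the unfinished operation

    write_file("notes.txt", "hello")
    journal.append(("write", "report.txt", "draft"))   # simulate a crash here: the
    recover()                                          # write was journalled but
    print(disk)                                        # never reached the "disk"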

Graphical user interfaces

Most modern computer systems support graphical user interfaces (GUI), and often include them. In some computer systems, such as the original implementations of Microsoft Windows and the Mac OS, the GUI is integrated into the kernel.

While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel. In the 1980s, UNIX, VMS, and many other operating systems were built this way. Linux and Mac OS X are also built this way. Modern releases of Microsoft Windows such as Windows Vista implement a graphics subsystem that is mostly in user space, whereas in versions between Windows NT 4.0 and Windows Server 2003 the graphics drawing routines existed mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.

Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE is a commonly-found setup on most Unix and Unix-like (BSD, Linux, Minix) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.

Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, and an effort to standardize in the 1990s on COSE and CDE largely failed, eventually eclipsed by the widespread adoption of GNOME and KDE. Prior to open source-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).

Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 2001.

History

The first computers did not have operating systems. By the early 1960s, commercial computer vendors were supplying quite extensive tools for streamlining the development, scheduling, and execution of jobs on batch processing systems. Examples were produced by UNIVAC and Control Data Corporation, amongst others.

MS-DOS provided many operating system like features, such as disk access. However many DOS programs bypassed it entirely and ran directly on hardware.

The operating systems originally deployed on mainframes, and, much later, the original microcomputer operating systems, only supported one program at a time, requiring only a very basic scheduler. Each program was in complete control of the machine while it was running. Multitasking (timesharing) first came to mainframes in the 1960s.

In 1969-70, UNIX first appeared on the PDP-7 and later the PDP-11. It soon became capable of providing cross-platform time sharing using preemptive multitasking, advanced memory management, memory protection, and a host of other advanced features. UNIX soon gained popularity as an operating system for mainframes and minicomputers alike.

IBM microcomputers, including the IBM PC and the IBM PC XT, could run Microsoft Xenix, a UNIX-like operating system from the early 1980s. Xenix was heavily marketed by Microsoft as a multi-user alternative to its single-user MS-DOS operating system. The CPUs of these personal computers could not provide kernel memory protection or dual mode operation, so Microsoft Xenix relied on cooperative multitasking and had no protected memory.

The 80286-based IBM PC AT was the first computer technically capable of using dual mode operation, and providing memory protection.

Classic Mac OS, and Microsoft Windows 1.0-Me supported only cooperative multitasking, and were very limited in their abilities to take advantage of protected memory. Application programs running on these operating systems must yield CPU time to the scheduler when they are not using it, either by default, or by calling a function.

Windows NT's underlying operating system kernel was designed by essentially the same team as Digital Equipment Corporation's VMS. It provided protected mode operation for all user programs, kernel memory protection, preemptive multi-tasking, virtual file system support, and a host of other features.

Classic AmigaOS and Windows 1.0-Me did not properly track resources allocated by processes at runtime. If a process had to be terminated, the resources might not be freed up for new programs until the machine was restarted.

The AmigaOS did have preemptive multitasking.

Mainframes

Through the 1960s, many major features were pioneered in the field of operating systems. The development of the IBM System/360 produced a family of mainframe computers available in widely differing capacities and price points, for which a single operating system, OS/360, was planned (rather than developing ad-hoc programs for every individual model). This concept of a single OS spanning an entire product line was crucial for the success of System/360 and, in fact, IBM's current mainframe operating systems are distant descendants of this original system; applications written for OS/360 can still be run on modern machines. In the mid-1970s, MVS, the descendant of OS/360, offered an early implementation of using RAM as a transparent cache for disk-resident data.

OS/360 also pioneered a number of concepts that, in some cases, are still not seen outside of the mainframe arena. For instance, in OS/360, when a program is started, the operating system keeps track of all of the system resources that are used including storage, locks, data files, and so on. When the process is terminated for any reason, all of these resources are re-claimed by the operating system. An alternative CP-67 system started a whole line of operating systems focused on the concept of virtual machines.

Control Data Corporation developed the SCOPE operating system in the 1960s, for batch processing. In cooperation with the University of Minnesota, the KRONOS and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. Plato was remarkably innovative for its time, featuring real-time chat, and multi-user graphical games.

Burroughs Corporation introduced the B5000 in 1961 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages, with no machine language or assembler, and indeed the MCP was the first OS to be written exclusively in a high-level language: ESPOL, a dialect of ALGOL. MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. MCP is still in use today in the Unisys ClearPath/MCP line of computers.

UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems. Like all early main-frame systems, this was a batch-oriented system that managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.

General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed to General Comprehensive Operating System (GCOS).

Digital Equipment Corporation developed many operating systems for its various computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Prior to the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community.

In the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying architectures to appear to be the same as others in a series. In fact, most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations. But soon other means of achieving application compatibility proved to be more significant.

The enormous investment in software for these systems made since the 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. Notable supported mainframe operating systems include:

Burroughs MCP -- B5000, 1961 to Unisys Clearpath/MCP, present.
IBM OS/360 -- IBM System/360, 1966 to IBM z/OS, present.
IBM CP-67 -- IBM System/360, 1967 to IBM z/VM, present.
UNIVAC EXEC 8 -- UNIVAC 1108, 1964 to Unisys Clearpath IX, present.

Microcomputers

The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as monitors. One notable early disk-based operating system was CP/M, which was supported on many early microcomputers and was closely imitated by MS-DOS, which became wildly popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM-DOS or PC-DOS), its successors making Microsoft one of the world's most profitable companies. In the 1980s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative graphical user interface (GUI), the Mac OS.

The introduction of the Intel 80386 CPU chip, with 32-bit architecture and paging capabilities, provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the Unix-like NEXTSTEP operating system. NEXTSTEP would later be acquired by Apple Inc. and used, along with code from FreeBSD, as the core of Mac OS X.

Minix, an academic teaching tool which could be run on early PCs, would inspire another reimplementation of Unix, called Linux. Started by computer science student Linus Torvalds, with cooperation from volunteers over the Internet, the project developed a kernel which was combined with tools from the GNU Project. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.

Examples

Microsoft Windows

The Microsoft Windows family of operating systems originated as an add-on to the older MS-DOS operating system for the IBM PC. Modern versions are based on the newer Windows NT kernel that was originally intended for OS/2 and borrowed from VMS. Windows runs on x86, x86-64 and Itanium processors. Earlier versions also ran on the DEC Alpha, MIPS, Fairchild (later Intergraph) Clipper and PowerPC architectures (some work was done to port it to the SPARC architecture).

As of June 2008, Microsoft Windows holds a large amount of the worldwide desktop market share. Windows is also used on servers, supporting applications such as web servers and database servers. In recent years, Microsoft has spent significant marketing and research & development money to demonstrate that Windows is capable of running any enterprise application, which has resulted in consistent price/performance records (see the TPC) and significant acceptance in the enterprise market.

The most widely used version of the Microsoft Windows family is Windows XP, released on October 25, 2001.

In November 2006, after more than five years of development work, Microsoft released Windows Vista, a major new version of the Microsoft Windows family which contains a large number of new features and architectural changes. Chief amongst these are a new user interface and visual style called Windows Aero, a number of new security features such as User Account Control, and a few new multimedia applications such as Windows DVD Maker.

Microsoft has announced that a new version, codenamed Windows 7, will be released in late 2009 to mid-2010.

Plan 9

Ken Thompson, Dennis Ritchie and Douglas McIlroy at Bell Labs designed and developed the C programming language to build the operating system Unix. Programmers at Bell Labs went on to develop Plan 9 and Inferno, which were engineered for modern distributed environments. Plan 9 was designed from the start to be a networked operating system, and had graphics built-in, unlike Unix, which added these features to the design later. Plan 9 has yet to become as popular as Unix derivatives, but it has an expanding community of developers. It is currently released under the Lucent Public License. Inferno was sold to Vita Nuova Holdings and has been released under a GPL/MIT license.

Unix and Unix-like operating systems

Ubuntu, a popular Linux distribution, one of many available

Ken Thompson wrote B, mainly based on BCPL, which he used to write Unix, based on his experience in the MULTICS project. B was replaced by C, and Unix developed into a large, complex family of inter-related operating systems which have been influential in every modern operating system (see History).

The Unix-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group which licenses it for use with any operating system that has been shown to conform to their definitions. "Unix-like" is commonly used to refer to the large set of operating systems which resemble the original Unix.

Unix-like systems run on a wide variety of machine architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free software Unix variants, such as GNU, Linux and BSD, are popular in these areas. The market share for Linux is divided between many different distributions. Enterprise-class distributions by Red Hat or Novell are used by corporations, but some home users may use those products. Historically, home users typically installed a distribution themselves, but in 2007 Dell began to offer the Ubuntu Linux distribution on home PCs, and Walmart now offers a low-end computer with GOS v2. [3] [4] Linux on the desktop is also popular in the developer and hobbyist operating system development communities. (see below)

Market share statistics for freely available operating systems are usually inaccurate since most free operating systems are not purchased, making usage under-represented. On the other hand, market share statistics based on total downloads of free operating systems are often inflated, as there is no economic disincentive to acquire multiple operating systems so users can download multiple systems, test them, and decide which they like best.

Some Unix variants like HP's HP-UX and IBM's AIX are designed to run only on that vendor's hardware. Others, such as Solaris, can run on multiple types of hardware, including x86 servers and PCs. Apple's Mac OS X, a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD, has replaced Apple's earlier (non-Unix) Mac OS.

Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.

Mac OS X

Mac OS X is a line of proprietary, graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. Mac OS X is the successor to the original Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, Mac OS X is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997.

The operating system was first released in 1999 as Mac OS X Server 1.0, with a desktop-oriented version (Mac OS X v10.0) following in March 2001. Since then, five more distinct "end-user" and "server" editions of Mac OS X have been released, the most recent being Mac OS X v10.5, which was first made available in October 2007. Releases of Mac OS X are named after big cats; Mac OS X v10.5 is usually referred to by Apple and users as "Leopard".

The server edition, Mac OS X Server, is architecturally identical to its desktop counterpart but usually runs on Apple's line of Macintosh server hardware. Mac OS X Server includes workgroup management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others.

Real-time operating systems

A real-time operating system (RTOS) is a multitasking operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems.

An early example of a large-scale real-time operating system was Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System.

Embedded systems

Embedded systems use a variety of dedicated operating systems. In some cases, the "operating system" software is directly linked to the application to produce a monolithic special-purpose program. In the simplest embedded systems, there is no distinction between the OS and the application.

Embedded systems that have fixed deadlines use a real-time operating system such as VxWorks, eCos, QNX, and RTLinux.

Some embedded systems use operating systems such as Palm OS, Windows CE, BSD, and Linux, although such operating systems do not support real-time computing.

Windows CE shares similar APIs to desktop Windows but shares none of desktop Windows' codebase.

However Linux has recently pulled ahead as a leader in embedded operating systems, due to its lack of royalties, vast capabilities, high performance, and potentially tiny memory footprint.

Hobby operating system development

Operating system development, or OSDev for short, as a hobby has a large cult-like following. Some operating systems, such as Linux, started out as hobby operating system projects. The design and implementation of an operating system requires skill and determination, and the term can cover anything from a basic "Hello World" boot loader to a fully featured kernel. One classical example of this is the Minix operating system, an OS that was designed as a teaching tool but was heavily used by hobbyists before Linux eclipsed it in popularity.

Other

Older operating systems which are still used in niche markets include OS/2 from IBM; Mac OS, the non-Unix precursor to Apple's Mac OS X; BeOS; and XTS-300. Some, most notably AmigaOS and RISC OS, continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard.

Research and development of new operating systems continues. GNU Hurd is designed to be backwards compatible with Unix, but with enhanced functionality and a microkernel architecture. Singularity is a project at Microsoft Research to develop an operating system with better memory protection based on the .Net managed code model. Operating systems development follows the same model used by other software development, which involves maintainers, version control "trees"[5], forks, "patches", and specifications. Following the AT&T-Berkeley lawsuit, new unencumbered systems were based on 4.4BSD, which forked into the FreeBSD and NetBSD efforts to replace missing code after the Unix wars. Recent forks include DragonFly BSD and Darwin, both derived from BSD Unix [6].

Computer hardware

Computer hardware is the physical part of a computer, including its digital circuitry, as distinguished from the computer software that executes within the hardware. The hardware of a computer is infrequently changed, in comparison with software and data, which are "soft" in the sense that they are readily created, modified or erased on the computer. Firmware is a special type of software that rarely, if ever, needs to be changed and so is stored on hardware devices such as read-only memory (ROM) where it is not readily changed (and is, therefore, "firm" rather than just "soft").

Most computer hardware is not seen by normal users. It is in embedded systems in automobiles, microwave ovens, electrocardiograph machines, compact disc players, and other devices. Personal computers, the computer hardware familiar to most people, form only a small minority of computers (about 0.2% of all new computers produced in 2003). See Market statistics.

Typical PC hardware

A typical personal computer consists of a case or chassis in a tower shape (desktop) and the following parts:

Internals of typical personal computer

Inside a Custom Computer

Motherboard

System Unit - the "body" or mainframe of the computer, through which all other components interface.

Central processing unit (CPU) - Performs most of the calculations which enable a computer to function, sometimes referred to as the "brain" of the computer.

o Computer fan - Used to lower the temperature of the computer; a fan is almost always attached to the CPU, and the computer case will generally have several fans to maintain a constant airflow. Liquid cooling can also be used to cool a computer, though it focuses more on individual parts rather than the overall temperature inside the chassis.

Random Access Memory (RAM) - Fast-access memory that is cleared when the computer is powered-down. RAM attaches directly to the motherboard, and is used to store programs that are currently running.

Firmware, such as the Basic Input-Output System (BIOS) or, in newer systems, Extensible Firmware Interface (EFI) compliant firmware, is loaded from read-only memory (ROM).

Internal Buses - Connections to various internal components.
o PCI
o PCI-E
o USB
o HyperTransport
o CSI (expected in 2008)
o AGP (being phased out)
o VLB (outdated)

External Bus Controllers - used to connect to external peripherals, such as printers and input devices. These ports may also be based upon expansion cards, attached to the internal buses.

o parallel port (outdated)
o serial port (outdated)
o USB
o firewire
o SCSI (on servers and older machines)
o PS/2 (for mice and keyboards, being phased out and replaced by USB)
o ISA (outdated)
o EISA (outdated)
o MCA (outdated)

Power supply

A case that holds a transformer, voltage control circuitry, and (usually) a cooling fan, and supplies power to run the rest of the computer. The most common older types of power supply were AT and Baby AT, but the current standards for PCs are ATX and micro ATX; both are switch-mode power supplies (SMPS).

Storage controllers

Controllers for hard disk, CD-ROM, and other drives (such as internal Zip and Jaz drives) on a PC are conventionally IDE/ATA; the controllers sit directly on the motherboard (on-board) or on expansion cards, such as a disk array controller. IDE is usually integrated, unlike SCSI, which is found mostly in servers. The floppy drive interface is a legacy MFM interface which is now slowly disappearing. All these interfaces are gradually being phased out in favor of SATA and SAS.

Some brands of storage controller products

Video display controller

Produces the output for the visual display unit. This will either be built into the motherboard or attached in its own separate slot (PCI, PCI-E, PCI-E 2.0, or AGP), in the form of a Graphics Card.

Some brands of video display controller products

Removable media devices

CD (compact disc) - the most common type of removable media, inexpensive but has a short life-span.

o CD-ROM Drive - a device used for reading data from a CD.
o CD Writer - a device used for both reading and writing data to and from a CD.

DVD (digital versatile disc) - a popular type of removable media that is the same dimensions as a CD but stores up to 6 times as much information. It is the most common way of transferring digital video.

o DVD-ROM Drive - a device used for reading data from a DVD.
o DVD Writer - a device used for both reading and writing data to and from a DVD.
o DVD-RAM Drive - a device used for rapid writing and reading of data from a special type of DVD.

Blu-ray - a high-density optical disc format for the storage of digital information, including high-definition video.

o BD-ROM Drive - a device used for reading data from a Blu-ray disc.
o BD Writer - a device used for both reading and writing data to and from a Blu-ray disc.

HD DVD - a high-density optical disc format and successor to the standard DVD. It was a discontinued competitor to the Blu-ray format.

Floppy disk - an outdated storage device consisting of a thin disk of a flexible magnetic storage medium.

Zip drive - an outdated medium-capacity removable disk storage system, first introduced by Iomega in 1994.

USB flash drive - a flash memory data storage device integrated with a USB interface, typically small, lightweight, removable and rewritable.

Tape drive - a device that reads and writes data on a magnetic tape, usually used for long term storage.

Internal storage

Hardware that keeps data inside the computer for later use and remains persistent even when the computer has no power.

Hard disk - for medium-term storage of data.

Solid-state drive - a device similar to a hard disk, but containing no moving parts.

Disk array controller - a device to manage several hard disks, to achieve performance or reliability improvement.

Sound card

Enables the computer to output sound to audio devices, as well as accept input from a microphone. Most modern computers have sound cards built-in to the motherboard, though it is common for a user to install a separate sound card as an upgrade.

Networking

Connects the computer to the Internet and/or other computers.

Modem - for dial-up connections.

Network card - for DSL/Cable internet, and/or connecting to other computers.

Direct Cable Connection - use of a null modem connecting two computers together using their serial ports, or a Laplink cable connecting two computers together with their parallel ports.

Other peripherals

In addition, hardware devices can include external components of a computer system. The following are either standard or very common.

Wheel mouse

Includes various input and output devices, usually external to the computer system.

Input

Text input devices

o Keyboard - a device to input text and characters by depressing buttons (referred to as keys), similar to a typewriter. The most common English-language key layout is the QWERTY layout.

Pointing devices

o Mouse - a pointing device that detects two-dimensional motion relative to its supporting surface.
o Trackball - a pointing device consisting of an exposed protruding ball housed in a socket that detects rotation about two axes.
o Xbox 360 Controller - a controller used for the Xbox 360 which, with the use of the application Switchblade(tm), can be used as an additional pointing device with the left or right thumbstick.

Gaming devices

o Joystick - a general control device that consists of a handheld stick that pivots around one end, to detect angles in two or three dimensions.
o Gamepad - a general game controller held in the hand that relies on the digits (especially thumbs) to provide input.
o Game controller - a specific type of controller specialized for certain gaming purposes.

Image, video input devices

o Image scanner - a device that provides input by analyzing images, printed text, handwriting, or an object.
o Webcam - a low-resolution video camera used to provide visual input that can be easily transferred over the internet.

Audio input devices

o Microphone - an acoustic sensor that provides input by converting sound into an electrical signal.

Output

Image, video output devices

o Printer - a peripheral device that produces a hard (usually paper) copy of a document.
o Monitor - a device that displays a video signal, similar to a television, to provide the user with information and an interface with which to interact.

Audio output devices

o Speakers - a device that converts analog audio signals into the equivalent air vibrations in order to make audible sound.
o Headset - a device similar in functionality to computer speakers, used mainly so as not to disturb others nearby.

ICT Scrap Book

Group : ______________
Group Members : ______________________

____________________________________________________________________________________

Class : ___________
Teacher : _________________