Micro Excellent


Introduction to Microcontrollers

EMK 310 Theme 1

Compiled by Prof T Hanekom

November 2010

PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information.

PDF generated at: Mon, 29 Nov 2010 10:17:49 UTC


Contents

Chapter 1: Number systems
1.1 Binary numeral system
1.2 Binary-coded decimal
1.3 ASCII
1.4 Floating point
1.5 FLOPS

Chapter 2: Embedded systems
2.1 Embedded system
2.2 Microprocessor
2.3 Microcontroller
2.4 Instruction cycle
2.5 Computer memory
2.6 Memory-mapped I/O
2.7 Chip select
2.8 Reduced instruction set computing
2.9 Complex instruction set computing

References
Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License


Number systems

Binary numeral system


The binary numeral system, or base-2 number system, represents numeric values using two symbols, 0 and 1. More specifically, the usual base-2 system is a positional notation with a radix of 2. Owing to its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used internally by all modern computers.


History

The Indian scholar Pingala (circa 5th–2nd centuries BC) developed mathematical concepts for describing prosody, and in so doing presented the first known description of a binary numeral system.[1] [2] He used binary numbers in the form of short and long syllables (the latter equal in length to two short syllables), making it similar to Morse code.[3] [4]

A set of eight trigrams and a set of 64 hexagrams, analogous to the three-bit and six-bit binary numerals, were known in ancient China through the classic text I Ching. In the 11th century, scholar and philosopher Shao Yong developed a method for arranging the hexagrams which corresponds to the sequence 0 to 63, as represented in binary, with yin as 0, yang as 1 and the least significant bit on top. There is, however, no evidence that Shao understood binary computation. The ordering is also the lexicographical order on sextuples of elements chosen from a two-element set.[5]

Similar sets of binary combinations have also been used in traditional African divination systems such as Ifá as well as in medieval Western geomancy. The base-2 system utilized in geomancy had long been widely applied in sub-Saharan Africa.

In 1605 Francis Bacon discussed a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text.[6] Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of a twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature".[6] (See Bacon's cipher.)

The modern binary number system was fully documented by Gottfried Leibniz in his article Explication de l'Arithmétique Binaire[7] (1703). Leibniz's system uses 0 and 1, like the modern binary numeral system. As a Sinophile, Leibniz was aware of the I Ching and noted with fascination how its hexagrams correspond to the binary numbers from 0 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired.[8]

In 1854, British mathematician George Boole published a landmark paper detailing an algebraic system of logic that would become known as Boolean algebra. His logical calculus was to become instrumental in the design of digital electronic circuitry.[9]

In 1937, Claude Shannon produced his master's thesis at MIT that implemented Boolean algebra and binary arithmetic using electronic relays and switches for the first time in history. Entitled A Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practical digital circuit design.[10]

In November 1937, George Stibitz, then working at Bell Labs, completed a relay-based computer he dubbed the "Model K" (for "Kitchen", where he had assembled it), which calculated using binary addition.[11] Bell Labs thus authorized a full research programme in late 1938 with Stibitz at the helm. Their Complex Number Computer, completed January 8, 1940, was able to calculate complex numbers. In a demonstration to the American Mathematical Society conference at Dartmouth College on September 11, 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines by a teletype. It was the first computing machine ever used remotely over a phone line. Some participants of the conference who witnessed the demonstration were John von Neumann, John Mauchly and Norbert Wiener, who wrote about it in his memoirs.[12] [13] [14]


Representation

A binary number can be represented by any sequence of bits (binary digits), which in turn may be represented by any mechanism capable of being in two mutually exclusive states. The following sequences of symbols could all be interpreted as the binary numeric value of 667:

1 0 1 0 0 1 1 0 1 1

| − | − − | | − | |

x o x o o x x o x x

y n y n n y y n y y

[Figure: A binary clock might use LEDs to express binary values. In this clock, each column of LEDs shows a binary-coded decimal numeral of the traditional sexagesimal time.]

The numeric value represented in each case is dependent upon the value assigned to each symbol. In a computer, the numeric values may be represented by two different voltages; on a magnetic disk, magnetic polarities may be used. A "positive", "yes", or "on" state is not necessarily equivalent to the numerical value of one; it depends on the architecture in use.

In keeping with customary representation of numerals using Arabic numerals, binary numbers are commonly written using the symbols 0 and 1. When written, binary numerals are often subscripted, prefixed or suffixed in order to indicate their base, or radix. The following notations are equivalent:

100101 binary (explicit statement of format)
100101b (a suffix indicating binary format)
100101B (a suffix indicating binary format)
bin 100101 (a prefix indicating binary format)
100101₂ (a subscript indicating base-2 (binary) notation)
%100101 (a prefix indicating binary format)
0b100101 (a prefix indicating binary format, common in programming languages)

When spoken, binary numerals are usually read digit-by-digit, in order to distinguish them from decimal numbers. For example, the binary numeral 100 is pronounced one zero zero, rather than one hundred, to make its binary nature explicit, and for purposes of correctness. Since the binary numeral 100 is equal to the decimal value four, it would be confusing to refer to the numeral as one hundred.

Counting in binary


Decimal Binary

0 0

1 1

2 10

3 11

4 100

5 101

6 110

7 111

8 1000

9 1001

10 1010

11 1011

12 1100

13 1101

14 1110

15 1111

16 10000

Counting in binary is similar to counting in any other number system. Beginning with a single digit, counting proceeds through each symbol, in increasing order. Decimal counting uses the symbols 0 through 9, while binary only uses the symbols 0 and 1.

When the symbols for the first digit are exhausted, the next-higher digit (to the left) is incremented, and counting starts over at 0. In decimal, counting proceeds like so:

000, 001, 002, ... 007, 008, 009, (rightmost digit starts over, and next digit is incremented)
010, 011, 012, ...
   ...
090, 091, 092, ... 097, 098, 099, (rightmost two digits start over, and next digit is incremented)
100, 101, 102, ...

After a digit reaches 9, an increment resets it to 0 but also causes an increment of the next digit to the left. In binary, counting is the same except that only the two symbols 0 and 1 are used. Thus after a digit reaches 1 in binary, an increment resets it to 0 but also causes an increment of the next digit to the left:

0000,
0001, (rightmost digit starts over, and next digit is incremented)
0010, 0011, (rightmost two digits start over, and next digit is incremented)
0100, 0101, 0110, 0111, (rightmost three digits start over, and the next digit is incremented)
1000, 1001, ...

Since binary is a base-2 system, each digit represents an increasing power of 2, with the rightmost digit representing 2⁰, the next representing 2¹, then 2², and so on. To determine the decimal representation of a binary number, simply take the sum of the products of the binary digits and the powers of 2 which they represent. For example, the binary number:


100101

is converted to decimal form by:

[(1) × 2⁵] + [(0) × 2⁴] + [(0) × 2³] + [(1) × 2²] + [(0) × 2¹] + [(1) × 2⁰] =
[1 × 32] + [0 × 16] + [0 × 8] + [1 × 4] + [0 × 2] + [1 × 1] = 37

To create higher numbers, additional digits are simply added to the left side of the binary representation.
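The same weighted-digit sum is easy to express in code. Below is a minimal C sketch (the function name bin_to_uint and the use of a character string of 0s and 1s are illustrative choices, not part of the original text); it accumulates the value with the same doubling trick described later under the Horner scheme:

#include <stdio.h>

/* Convert a string of '0'/'1' characters to an unsigned integer.
   Doubling the running value once per digit gives each earlier bit
   its correct power-of-two weight. */
unsigned bin_to_uint(const char *bits)
{
    unsigned value = 0;
    for (; *bits != '\0'; ++bits)
        value = value * 2 + (unsigned)(*bits - '0');
    return value;
}

int main(void)
{
    printf("%u\n", bin_to_uint("100101"));   /* prints 37 */
    return 0;
}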

Fractions in binary

Fractions in binary only terminate if the denominator has 2 as the only prime factor. As a result, 1/10 does not have a finite binary representation, and this causes 10 × 0.1 to be not precisely equal to 1 in floating-point arithmetic. As an example, to interpret the binary expression for 1/3 = .010101..., this means: 1/3 = 0 × 2⁻¹ + 1 × 2⁻² + 0 × 2⁻³ + 1 × 2⁻⁴ + ... = 0.3125 + ... An exact value cannot be found with a sum of a finite number of inverse powers of two; the zeros and ones alternate forever.

Fraction Decimal Binary Fractional Approx.

1/1 1 or 0.9999... 1 or 0.1111... 1/1

1/2 0.5 0.1 1/2

1/3 0.333... 0.010101... 1/4+1/16+1/64...

1/4 0.25 0.01 1/4

1/5 0.2 0.00110011... 1/8+1/16+1/128...

1/6 0.1666... 0.0010101... 1/8+1/32+1/128...

1/7 0.142857142857... 0.001001... 1/8+1/64+1/512...

1/8 0.125 0.001 1/8

1/9 0.111... 0.000111000111... 1/16+1/32+1/64...

1/10 0.1 0.000110011... 1/16+1/32+1/256...

1/11 0.090909... 0.00010111010001011101... 1/16+1/64+1/128...

1/12 0.08333... 0.00010101... 1/16+1/64+1/256...

1/13 0.076923076923... 0.000100111011000100111011... 1/16+1/128+1/256...

1/14 0.0714285714285... 0.0001001001... 1/16+1/128+1/1024...

1/15 0.0666... 0.00010001... 1/16+1/256...

1/16 0.0625 0.0001 1/16


Binary arithmetic

Arithmetic in binary is much like arithmetic in other numeral systems. Addition, subtraction, multiplication, and division can be performed on binary numerals.

Addition

[Figure: The circuit diagram for a binary half adder, which adds two bits together, producing sum and carry bits.]

The simplest arithmetic operation in binary is addition. Adding two single-digit binary numbers is relatively simple, using a form of carrying:

0 + 0 → 0
0 + 1 → 1
1 + 0 → 1
1 + 1 → 0, carry 1 (since 1 + 1 = 0 + 1 × binary 10)

Adding two "1" digits produces a digit "0", while 1 will have to beadded to the next column. This is similar to what happens indecimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix(10), the digit to the left is incremented:

5 + 5 → 0, carry 1 (since 5 + 5 = 0 + 1 × 10)
7 + 9 → 6, carry 1 (since 7 + 9 = 6 + 1 × 10)

This is known as carrying. When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary:

1 1 1 1 1 (carried digits)

0 1 1 0 1

+ 1 0 1 1 1

-------------

= 1 0 0 1 0 0

In this example, two numerals are being added together: 01101₂ (13₁₀) and 10111₂ (23₁₀). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 10₂. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100₂ (36 decimal).

When computers must add two numbers, the rule that x xor y = (x + y) mod 2 for any two bits x and y allows for very fast calculation, as well.

A simplification for many binary addition problems is the Long Carry Method or Brookhouse Method of Binary Addition. This method is generally useful in any binary addition where one of the numbers has a long string of "1" digits. For example, the following large binary numbers can be added in two simple steps without multiple carries from one place to the next.

Standard carry method:

  1 1 1   1 1 1 1 1        (carried digits)
      1 1 1 0 1 1 1 1 1 0
  +   1 0 1 0 1 1 0 0 1 1
  -----------------------
  = 1 1 0 0 1 1 1 0 0 0 1

Long Carry Method:

      1 1 1 0 1 1 1 1 1 0
  +   1 0 1 0 1 1 0 0 1 1      add crossed-out digits first
  + 1 0 0 0 1 0 0 0 0 0 0      = sum of crossed-out digits
  -----------------------      now add remaining digits
  = 1 1 0 0 1 1 1 0 0 0 1

In this example, two numerals are being added together: 1110111110₂ (958₁₀) and 1010110011₂ (691₁₀). The top row shows the carry bits used. Instead of the standard carry from one column to the next, the lowest place-valued "1" with a "1" in the corresponding place value beneath it may be added, and a "1" may be carried to one digit past the end of the series. These numbers must be crossed off since they are already added. Then simply add that result to the uncancelled digits in the second row. Proceeding like this gives the final answer 11001110001₂ (1649₁₀).
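The per-column rules above translate directly into bitwise operators: the sum bit of two bits is their XOR, and the carry bit is their AND shifted one column left. The following C sketch (illustrative only; a processor's own add instruction does all of this in hardware) repeats that half-adder step until no carries remain:

#include <stdio.h>

/* Add two words using only the column rules:
   sum bit = x XOR y, carry = x AND y shifted into the next column. */
unsigned add(unsigned x, unsigned y)
{
    while (y != 0) {
        unsigned carry = (x & y) << 1;  /* carries for the next column */
        x = x ^ y;                      /* column sums without carries */
        y = carry;                      /* fold the carries back in */
    }
    return x;
}

int main(void)
{
    printf("%u\n", add(13, 23));  /* 01101 + 10111 = 100100, i.e. 36 */
    return 0;
}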

Addition table

0 1

0 0 1

1 1 10

Subtraction

Subtraction works in much the same way:

0 − 0 → 0
0 − 1 → 1, borrow 1
1 − 0 → 1
1 − 1 → 0

Subtracting a "1" digit from a "0" digit produces the digit "1", while 1 will have to be subtracted from the nextcolumn. This is known as borrowing. The principle is the same as for carrying. When the result of a subtraction isless than 0, the least possible value of a digit, the procedure is to "borrow" the deficit divided by the radix (that is,10/10) from the left, subtracting it from the next positional value.

      *   * * *     (starred columns are borrowed from)
    1 1 0 1 1 1 0
  −     1 0 1 1 1
  ---------------
  = 1 0 1 0 1 1 1

Subtracting a positive number is equivalent to adding a negative number of equal absolute value; computers typically use two's complement notation to represent negative values. This notation eliminates the need for a separate "subtract" operation. Using two's complement notation, subtraction can be summarized by the following formula:

A − B = A + not B + 1
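The formula can be checked with a couple of lines of C; ~b is the bitwise NOT of B, so a + ~b + 1 wraps around to exactly A − B on unsigned machine words (a minimal sketch, assuming ordinary unsigned wraparound semantics):

#include <stdio.h>

int main(void)
{
    unsigned a = 36, b = 23;
    unsigned diff = a + ~b + 1;   /* A + not B + 1 equals A - B */
    printf("%u\n", diff);         /* prints 13 */
    return 0;
}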

For further details, see two's complement.


Multiplication

Multiplication in binary is similar to its decimal counterpart. Two numbers A and B can be multiplied by partial products: for each digit in B, the product of that digit and A is calculated and written on a new line, shifted leftward so that its rightmost digit lines up with the digit in B that was used. The sum of all these partial products gives the final result.

Since there are only two digits in binary, there are only two possible outcomes of each partial multiplication:
• If the digit in B is 0, the partial product is also 0
• If the digit in B is 1, the partial product is equal to A

For example, the binary numbers 1011 and 1010 are multiplied as follows:

          1 0 1 1   (A)
        × 1 0 1 0   (B)
        ---------
          0 0 0 0   ← Corresponds to a zero in B
+       1 0 1 1     ← Corresponds to a one in B
+     0 0 0 0
+   1 0 1 1
---------------
= 1 1 0 1 1 1 0
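The partial-product scheme is the classic shift-and-add loop: each 1 bit of B contributes a copy of A shifted into that bit's position. A minimal C sketch (a plain illustration of the method above, not Booth's algorithm):

#include <stdio.h>

/* Multiply by summing shifted partial products. */
unsigned mul(unsigned a, unsigned b)
{
    unsigned product = 0;
    while (b != 0) {
        if (b & 1)        /* this digit of B is 1: partial product = A */
            product += a;
        a <<= 1;          /* shift A left to line up with the next digit */
        b >>= 1;
    }
    return product;
}

int main(void)
{
    printf("%u\n", mul(11, 10));  /* 1011 x 1010 = 1101110, i.e. 110 */
    return 0;
}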

Binary numbers can also be multiplied with bits after a binary point:

            1 0 1 . 1 0 1   (A) (5.625 in decimal)
          × 1 1 0 . 0 1     (B) (6.25 in decimal)
          -------------
            1 0 1 1 0 1     ← Corresponds to a one in B
+         0 0 0 0 0 0       ← Corresponds to a zero in B
+       0 0 0 0 0 0
+     1 0 1 1 0 1
+   1 0 1 1 0 1
-------------------------
= 1 0 0 0 1 1 . 0 0 1 0 1   (35.15625 in decimal)

See also Booth's multiplication algorithm.

Multiplication table

0 1

0 0 0

1 0 1

Division

Binary division is again similar to its decimal counterpart. Here, the divisor is 101₂, or 5 decimal, while the dividend is 11011₂, or 27 decimal. The procedure is the same as that of decimal long division; here, the divisor 101₂ goes into the first three digits 110₂ of the dividend one time, so a "1" is written on the top line. This result is multiplied by the divisor, and subtracted from the first three digits of the dividend; the next digit (a "1") is included to obtain a new three-digit sequence:


          1
    ___________
1 0 1 ) 1 1 0 1 1
      − 1 0 1
        -----
        0 1 1

The procedure is then repeated with the new sequence, continuing until the digits in the dividend have been exhausted:

      1 0 1
    ___________
1 0 1 ) 1 1 0 1 1
      − 1 0 1
        -----
        0 1 1
      − 0 0 0
        -----
        1 1 1
      − 1 0 1
        -----
          1 0

Thus, the quotient of 11011₂ divided by 101₂ is 101₂, as shown on the top line, while the remainder, shown on the bottom line, is 10₂. In decimal, 27 divided by 5 is 5, with a remainder of 2.
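The same long-division procedure, bringing down one bit of the dividend at a time and subtracting the divisor whenever it fits, looks like this in C (a sketch for 32-bit unsigned operands, assuming a non-zero divisor; divmod is an illustrative name):

#include <stdio.h>

/* Binary long division: shift one dividend bit at a time into the
   running remainder; when the divisor fits, subtract it and write
   a 1 into the quotient, exactly as on the top line above. */
void divmod(unsigned dividend, unsigned divisor,
            unsigned *quotient, unsigned *remainder)
{
    unsigned q = 0, r = 0;
    for (int i = 31; i >= 0; --i) {
        r = (r << 1) | ((dividend >> i) & 1);  /* bring down next bit */
        q <<= 1;
        if (r >= divisor) {
            r -= divisor;
            q |= 1;
        }
    }
    *quotient = q;
    *remainder = r;
}

int main(void)
{
    unsigned q, r;
    divmod(27, 5, &q, &r);
    printf("%u remainder %u\n", q, r);  /* 5 remainder 2 */
    return 0;
}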

Bitwise operations

Though not directly related to the numerical interpretation of binary symbols, sequences of bits may be manipulated using Boolean logical operators. When a string of binary symbols is manipulated in this way, it is called a bitwise operation; the logical operators AND, OR, and XOR may be performed on corresponding bits in two binary numerals provided as input. The logical NOT operation may be performed on individual bits in a single binary numeral provided as input. Sometimes, such operations may be used as arithmetic short-cuts, and may have other computational benefits as well. For example, an arithmetic shift left of a binary number is the equivalent of multiplication by a (positive, integral) power of 2.
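In C these operators are written &, |, ^ and ~, with shifts << and >>. A short sketch showing them on 6-bit values, including the shift-as-multiplication identity mentioned above:

#include <stdio.h>

int main(void)
{
    unsigned a = 0x2C, b = 0x35;   /* 101100 and 110101 in binary */
    printf("%02X\n", a & b);       /* AND -> 100100 (0x24) */
    printf("%02X\n", a | b);       /* OR  -> 111101 (0x3D) */
    printf("%02X\n", a ^ b);       /* XOR -> 011001 (0x19) */
    printf("%02X\n", ~a & 0x3F);   /* NOT, masked to 6 bits -> 010011 (0x13) */
    printf("%u\n", a << 2);        /* shift left by 2 = multiply by 4 */
    return 0;
}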

Conversion to and from other numeral systems

Decimal

To convert from a base-10 integer numeral to its base-2 (binary) equivalent, the number is divided by two, and the remainder is the least-significant bit. The (integer) result is again divided by two; its remainder is the next most significant bit. This process repeats until the result of further division becomes zero.

Conversion from base-2 to base-10 proceeds by applying the preceding algorithm, so to speak, in reverse. The bits of the binary number are used one by one, starting with the most significant (leftmost) bit. Beginning with the value 0, repeatedly double the prior value and add the next bit to produce the next value. This can be organized in a multi-column table. For example, to convert 10010101101₂ to decimal:


Prior value × 2 + Next Bit Next value

0 × 2 + 1 = 1

1 × 2 + 0 = 2

2 × 2 + 0 = 4

4 × 2 + 1 = 9

9 × 2 + 0 = 18

18 × 2 + 1 = 37

37 × 2 + 0 = 74

74 × 2 + 1 = 149

149 × 2 + 1 = 299

299 × 2 + 0 = 598

598 × 2 + 1 = 1197

The result is 1197₁₀. Note that the first Prior Value of 0 is simply an initial decimal value. This method is an application of the Horner scheme.

Binary  1 0 0 1 0 1 0 1 1 0 1

Decimal  1×2¹⁰ + 0×2⁹ + 0×2⁸ + 1×2⁷ + 0×2⁶ + 1×2⁵ + 0×2⁴ + 1×2³ + 1×2² + 0×2¹ + 1×2⁰ = 1197
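The divide-by-two direction can be sketched in C as well; each remainder is one bit, produced least significant first (print_binary is an illustrative name):

#include <stdio.h>

/* Integer to binary digits by repeated division by two:
   each remainder is the next bit, least significant first. */
void print_binary(unsigned n)
{
    char bits[33];
    int i = 0;
    do {
        bits[i++] = (char)('0' + n % 2);
        n /= 2;
    } while (n != 0);
    while (i--)            /* emit most significant bit first */
        putchar(bits[i]);
    putchar('\n');
}

int main(void)
{
    print_binary(1197);    /* prints 10010101101 */
    return 0;
}

(The opposite direction, doubling and adding each bit, is the loop already shown in the conversion sketch earlier in this chapter.)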

The fractional parts of a number are converted with similar methods. They are again based on the equivalence of shifting with doubling or halving.

In a fractional binary number such as 0.11010110101₂, the first digit is 1/2, the second 1/4, etc. So if there is a 1 in the first place after the radix point, then the number is at least 1/2, and vice versa. Double that number is at least 1. This suggests the algorithm: repeatedly double the number to be converted, record if the result is at least 1, and then throw away the integer part. For example, 1/3 (0.333...₁₀), in binary, is:

Converting                        Result
1/3 = 0.333...                    0.
0.333... × 2 = 0.666... < 1       0.0
0.666... × 2 = 1.333... ≥ 1       0.01
0.333... × 2 = 0.666... < 1       0.010
0.666... × 2 = 1.333... ≥ 1       0.0101

Thus the repeating decimal fraction 0.333... is equivalent to the repeating binary fraction 0.01... .

Or for example, 0.1₁₀, in binary, is:


Converting Result

0.1 0.

0.1 × 2 = 0.2 < 1 0.0

0.2 × 2 = 0.4 < 1 0.00

0.4 × 2 = 0.8 < 1 0.000

0.8 × 2 = 1.6 ≥ 1 0.0001

0.6 × 2 = 1.2 ≥ 1 0.00011

0.2 × 2 = 0.4 < 1 0.000110

0.4 × 2 = 0.8 < 1 0.0001100

0.8 × 2 = 1.6 ≥ 1 0.00011001

0.6 × 2 = 1.2 ≥ 1 0.000110011

0.2 × 2 = 0.4 < 1 0.0001100110

This is also a repeating binary fraction 0.00011... . It may come as a surprise that terminating decimal fractions can have repeating expansions in binary. It is for this reason that many are surprised to discover that 0.1 + ... + 0.1 (10 additions) differs from 1 in floating-point arithmetic. In fact, the only binary fractions with terminating expansions are of the form of an integer divided by a power of 2, which 1/10 is not.

The final conversion is from binary to decimal fractions. The only difficulty arises with repeating fractions, but otherwise the method is to shift the fraction to an integer, convert it as above, and then divide by the appropriate power of two in the decimal base. For example:

x             = 1100.101110...
x × 2⁶        = 1100101110.01110...
x × 2         = 11001.01110...
x × (2⁶ − 2)  = 1100101110₂ − 11001₂ = 1100010101₂
x             = 1100010101₂ / 111110₂
              = (789/62)₁₀
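The repeated-doubling rule from the tables above is equally short in code. This C sketch converts 0.1₁₀ to its first ten binary digits; note that the double literal 0.1 is itself already rounded to binary, which is exactly the phenomenon discussed above:

#include <stdio.h>

/* Decimal fraction in [0,1) to binary digits by repeated doubling:
   emit a 1 and drop the integer part whenever the doubled value reaches 1. */
int main(void)
{
    double x = 0.1;
    printf("0.");
    for (int i = 0; i < 10; ++i) {
        x *= 2.0;
        if (x >= 1.0) {
            putchar('1');
            x -= 1.0;
        } else {
            putchar('0');
        }
    }
    printf("...\n");   /* prints 0.0001100110... */
    return 0;
}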

Another way of converting from binary to decimal, often quicker for a person familiar with hexadecimal, is to do so indirectly: first convert the number from binary into hexadecimal, and then from hexadecimal into decimal.

For very large numbers, these simple methods are inefficient because they perform a large number of multiplications or divisions where one operand is very large. A simple divide-and-conquer algorithm is more effective asymptotically: given a binary number, it is divided by 10ᵏ, where k is chosen so that the quotient roughly equals the remainder; then each of these pieces is converted to decimal and the two are concatenated. Given a decimal number, it can be split into two pieces of about the same size, each of which is converted to binary, whereupon the first converted piece is multiplied by 10ᵏ and added to the second converted piece, where k is the number of decimal digits in the second, least-significant piece before conversion.


Hexadecimal

0hex = 0dec = 0oct 0 0 0 0

1hex = 1dec = 1oct 0 0 0 1

2hex = 2dec = 2oct 0 0 1 0

3hex = 3dec = 3oct 0 0 1 1

4hex = 4dec = 4oct 0 1 0 0

5hex = 5dec = 5oct 0 1 0 1

6hex = 6dec = 6oct 0 1 1 0

7hex = 7dec = 7oct 0 1 1 1

8hex = 8dec = 10oct 1 0 0 0

9hex = 9dec = 11oct 1 0 0 1

Ahex = 10dec = 12oct 1 0 1 0

Bhex = 11dec = 13oct 1 0 1 1

Chex = 12dec = 14oct 1 1 0 0

Dhex = 13dec = 15oct 1 1 0 1

Ehex = 14dec = 16oct 1 1 1 0

Fhex = 15dec = 17oct 1 1 1 1

Binary may be converted to and from hexadecimal somewhat more easily. This is because the radix of the hexadecimal system (16) is a power of the radix of the binary system (2). More specifically, 16 = 2⁴, so it takes four digits of binary to represent one digit of hexadecimal, as shown in the table above.

To convert a hexadecimal number into its binary equivalent, simply substitute the corresponding binary digits:

3A₁₆ = 0011 1010₂
E7₁₆ = 1110 0111₂

To convert a binary number into its hexadecimal equivalent, divide it into groups of four bits. If the number of bits isn't a multiple of four, simply insert extra 0 bits at the left (called padding). For example:

1010010₂ = 0101 0010 grouped with padding = 52₁₆
11011101₂ = 1101 1101 grouped = DD₁₆

To convert a hexadecimal number into its decimal equivalent, multiply the decimal equivalent of each hexadecimal digit by the corresponding power of 16 and add the resulting values:

C0E7₁₆ = (12 × 16³) + (0 × 16²) + (14 × 16¹) + (7 × 16⁰) = (12 × 4096) + (0 × 256) + (14 × 16) + (7 × 1) = 49,383₁₀
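Because each hex digit is exactly one 4-bit group, the conversion is pure masking and shifting; no arithmetic on the whole number is needed. A C sketch printing the nibble groups of C0E7:

#include <stdio.h>

int main(void)
{
    unsigned x = 0xC0E7;
    for (int shift = 12; shift >= 0; shift -= 4) {
        unsigned nibble = (x >> shift) & 0xF;     /* one hex digit */
        for (int b = 3; b >= 0; --b)              /* its four binary digits */
            putchar('0' + (int)((nibble >> b) & 1));
        putchar(' ');
    }
    printf("= %u decimal\n", x);  /* 1100 0000 1110 0111 = 49383 */
    return 0;
}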


Octal

Binary is also easily converted to the octal numeral system, since octal uses a radix of 8, which is a power of two (namely, 2³, so it takes exactly three binary digits to represent an octal digit). The correspondence between octal and binary numerals is the same as for the first eight digits of hexadecimal in the table above. Binary 000 is equivalent to the octal digit 0, binary 111 is equivalent to octal 7, and so forth.

Octal Binary

0 000

1 001

2 010

3 011

4 100

5 101

6 110

7 111

Converting from octal to binary proceeds in the same fashion as it does for hexadecimal:

65₈ = 110 101₂
17₈ = 001 111₂

And from binary to octal:

101100₂ = 101 100₂ grouped = 54₈
10011₂ = 010 011₂ grouped with padding = 23₈

And from octal to decimal:

65₈ = (6 × 8¹) + (5 × 8⁰) = (6 × 8) + (5 × 1) = 53₁₀
127₈ = (1 × 8²) + (2 × 8¹) + (7 × 8⁰) = (1 × 64) + (2 × 8) + (7 × 1) = 87₁₀

Representing real numbers

Non-integers can be represented by using negative powers, which are set off from the other digits by means of a radix point (called a decimal point in the decimal system). For example, the binary number 11.01₂ thus means:

1 × 2¹ (1 × 2 = 2) plus

1 × 2⁰ (1 × 1 = 1) plus

0 × 2⁻¹ (0 × ½ = 0) plus

1 × 2⁻² (1 × ¼ = 0.25)

For a total of 3.25 decimal.

All dyadic rational numbers have a terminating binary numeral: the binary representation has a finite number of terms after the radix point. Other rational numbers have a binary representation, but instead of terminating, they recur, with a finite sequence of digits repeating indefinitely. For instance

1/3 = 0.333...₁₀ = 0.0101010101...₂


12/17 = 0.70588...₁₀ = 0.10110100 10110100 10110100...₂

The phenomenon that the binary representation of any rational is either terminating or recurring also occurs in other radix-based numeral systems. See, for instance, the explanation in decimal. Another similarity is the existence of alternative representations for any terminating representation, relying on the fact that 0.111111... is the sum of the geometric series 2⁻¹ + 2⁻² + 2⁻³ + ..., which is 1.

Binary numerals which neither terminate nor recur represent irrational numbers. For instance,
• 0.10100100010000100000100... does have a pattern, but it is not a fixed-length recurring pattern, so the number is irrational
• 1.0110101000001001111001100110011111110... is the binary representation of √2, the square root of 2, another irrational. It has no discernible pattern. See irrational number.

On-line live converters and calculators

• On-line converter for all types of binary numbers (including single and double precision IEEE754 numbers) [15]
• On-line converter for any base [16]
• Online binary calculator [17] supports addition, subtraction, multiplication and division
• Binary converter with direct access to bits [18]

See also

• Binary-coded decimal
• Finger binary
• Gray code
• Linear feedback shift register
• Offset binary
• Quibinary
• Reduction of summands
• Redundant binary representation
• SZTAKI Desktop Grid searches for generalized binary number systems up to dimension 11.
• Two's complement

Notes

[1] Sanchez, Julio; Canton, Maria P. (2007). Microcontroller Programming: The Microchip PIC. Boca Raton, Florida: CRC Press. p. 37. ISBN 0-8493-7189-9.
[2] W. S. Anglin and J. Lambek, The Heritage of Thales, Springer, 1995, ISBN 0-387-94544-X.
[3] Binary Numbers in Ancient India (http://home.ica.net/~roymanju/Binary.htm)
[4] Math for Poets and Drummers (http://www.sju.edu/~rhall/Rhythms/Poets/arcadia.pdf) (PDF, 145 KB)
[5] Ryan, James A. (January 1996). "Leibniz' Binary System and Shao Yong's 'Yijing'" (http://www.jstor.org/stable/1399337). Philosophy East and West (University of Hawaii Press) 46 (1): 59–90. doi:10.2307/1399337. Retrieved July 6, 2010.
[6] Bacon, Francis. "The Advancement of Learning" (http://home.hiwaay.net/~paul/bacon/advancement/book6ch1.html). London. Chapter 1.
[7] Leibniz G., Explication de l'Arithmétique Binaire, Die Mathematische Schriften, ed. C. Gerhardt, Berlin 1879, vol. 7, p. 223; Engl. transl. (http://www.leibniz-translations.com/binary.htm)
[8] Aiton, Eric J. (1985). Leibniz: A Biography. Taylor & Francis. pp. 245–8. ISBN 0-85274-470-6.
[9] Boole, George (2009) [1854]. An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities (http://www.gutenberg.org/etext/15114) (Macmillan, Dover Publications, reprinted with corrections [1958] ed.). New York: Cambridge University Press. ISBN 9781108001533.
[10] Shannon, Claude Elwood (1940). A Symbolic Analysis of Relay and Switching Circuits (http://hdl.handle.net/1721.1/11173). Cambridge: Massachusetts Institute of Technology.


[11] "National Inventors Hall of Fame – George R. Stibitz" (http:/ / www. invent. org/ hall_of_fame/ 140. html). 20 August 2008. . Retrieved 5July 2010.

[12] "George Stibitz : Bio" (http:/ / stibitz. denison. edu/ bio. html). Math & Computer Science Department, Denison University. 30 April 2004. .Retrieved 5 July 2010.

[13] "Pioneers – The people and ideas that made a difference – George Stibitz (1904–1995)" (http:/ / www. kerryr. net/ pioneers/ stibitz. htm).Kerry Redshaw. 20 February 2006. . Retrieved 5 July 2010.

[14] "George Robert Stibitz – Obituary" (http:/ / ei. cs. vt. edu/ ~history/ Stibitz. html). Computer History Association of California. 6 February1995. . Retrieved 5 July 2010.

[15] http:/ / www. binaryconvert. com[16] http:/ / www. digitconvert. com/[17] http:/ / www. miniwebtool. com/ binary-calculator/[18] http:/ / calc. 50x. eu/

References

• Sanchez, Julio; Canton, Maria P. (2007), Microcontroller Programming: The Microchip PIC, Boca Raton, FL: CRC Press, p. 37, ISBN 0-8493-7189-9.

External links

• A brief overview of Leibniz and the connection to binary numbers (http://www.kerryr.net/pioneers/leibniz.htm)
• Binary System (http://www.cut-the-knot.org/do_you_know/BinaryHistory.shtml) at cut-the-knot
• Conversion of Fractions (http://www.cut-the-knot.org/blue/frac_conv.shtml) at cut-the-knot
• Binary Digits (http://www.mathsisfun.com/binary-digits.html) at Math Is Fun
• How to Convert from Decimal to Binary (http://www.wikihow.com/Convert-from-Decimal-to-Binary) at wikiHow
• Learning exercise for children at CircuitDesign.info (http://www.circuitdesign.info/blog/2008/06/the-binary-number-system-part-2-binary-weighting/)
• Binary Counter with Kids (http://gwydir.demon.co.uk/jo/numbers/binary/kids.htm)
• "Magic" Card Trick (http://gwydir.demon.co.uk/jo/numbers/binary/cards.htm)
• Quick reference on how to read binary (http://www.mycomputeraid.com/networking-support/general-networking-support/howto-read-binary-basics/)


Binary-coded decimal

In computing and electronic systems, binary-coded decimal (BCD) (sometimes called natural binary-coded decimal, NBCD) or, in its most common modern implementation, packed decimal, is an encoding for decimal numbers in which each digit is represented by its own binary sequence. Its main virtue is that it allows easy conversion to decimal digits for printing or display, and allows faster decimal calculations. Its drawbacks are a small increase in the complexity of circuits needed to implement mathematical operations. Uncompressed BCD is also a relatively inefficient encoding; it occupies more space than a purely binary representation.

In BCD, a digit is usually represented by four bits which, in general, represent the decimal digits 0 through 9. Other bit combinations are sometimes used for a sign or for other indications (e.g., error or overflow).

Although uncompressed BCD is not as widely used as it once was, decimal fixed-point and floating-point are still important and continue to be used in financial, commercial, and industrial computing.[1]

Recent decimal floating-point representations use base-10 exponents, but not BCD encodings. Current hardware implementations, however, convert the compressed decimal encodings to BCD internally before carrying out computations. Software implementations of decimal arithmetic typically use BCD or some other base-10ⁿ representation, depending on the operation.

Basics

To encode a decimal number using the common BCD encoding, each decimal digit is stored in a 4-bit nibble:

Decimal: 0 1 2 3 4 5 6 7 8 9

BCD: 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001

Thus, the BCD encoding for the number 127 would be:

0001 0010 0111

Whereas the pure binary number would be:

0111 1111

Since most computers store data in 8-bit bytes, there are two common ways of storing 4-bit BCD digits in those bytes:

• each digit is stored in one nibble of a byte, with the other nibble being set to all zeros, all ones (as in the EBCDIC code), or to 0011 (as in the ASCII code)
• two digits are stored in each byte.

Unlike binary-encoded numbers, BCD-encoded numbers can easily be displayed by mapping each of the nibbles to a different character. Converting a binary-encoded number to decimal for display is much harder, as this generally involves integer multiplication or divide operations. BCD also avoids problems where fractions that can be represented exactly in decimal cannot be represented in binary (e.g., one-tenth).

BCD in Electronics

BCD is very common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic, and not containing a microprocessor. By utilizing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware: a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing to such a display would require complex circuitry.


Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to a simpler overall system than converting to binary.

The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, smaller code results when representing numbers internally in BCD format, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature BCD arithmetic modes, which assist when writing routines that manipulate BCD quantities.

Packed BCD

A common variation of the two-digits-per-byte encoding, in use since the 1960s or earlier and implemented in all IBM mainframe hardware since then, is called packed BCD (or simply packed decimal). All of the upper bytes of a multi-byte word plus the upper four bits (nibble) of the lowest byte are used to store decimal integers. The lower four bits of the lowest byte are used as the sign flag. As an example, a 32-bit word contains 4 bytes or 8 nibbles. Packed BCD uses the upper 7 nibbles to store the integers of a 7-digit decimal value and uses the lowest nibble to indicate the sign of those integers.

Standard sign values are 1100 (hex C) for positive (+) and 1101 (D) for negative (−). This convention was derived from abbreviations for accounting terms (Credit and Debit), as packed decimal coding was widely used in accounting systems. Other allowed signs are 1010 (A) and 1110 (E) for positive and 1011 (B) for negative. Some implementations also provide unsigned BCD values with a sign nibble of 1111 (F). In packed BCD, the number 127 is represented by 0001 0010 0111 1100 (127C) and −127 is represented by 0001 0010 0111 1101 (127D).

Sign digit   BCD (8 4 2 1)   Sign   Notes

A 1 0 1 0 +

B 1 0 1 1 −

C 1 1 0 0 + Preferred

D 1 1 0 1 − Preferred

E 1 1 1 0 +

F 1 1 1 1 + Unsigned

No matter how many bytes wide a word is, there is always an even number of nibbles because each byte has two of them. Therefore, a word of n bytes can contain up to 2n − 1 decimal digits, which is always an odd number of digits. A decimal number with d digits requires ½(d + 1) bytes of storage space.

For example, a 4-byte (32-bit) word can hold seven decimal digits plus a sign, and can represent values ranging from −9,999,999 to +9,999,999. Thus the number −1,234,567 is 7 digits wide and is encoded as:

0001 0010 0011 0100 0101 0110 0111 1101

1 2 3 4 5 6 7 −

(Note that, like character strings, the first byte of the packed decimal, the one with the most significant two digits, is usually stored in the lowest address in memory, independent of the endianness of the machine.)

In contrast, a 4-byte binary two's complement integer can represent values from −2,147,483,648 to +2,147,483,647. While packed BCD does not make optimal use of storage (about 1/6 of the memory used is wasted), conversion to ASCII, EBCDIC, or the various encodings of Unicode is still trivial, as no arithmetic operations are required. The extra storage requirements are usually offset by the need for the accuracy and compatibility with calculator or hand calculation that fixed-point decimal arithmetic provides. Denser packings of BCD exist which avoid the storage penalty and also need no arithmetic operations for common conversions.
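A sketch of the packed layout described above, building a 32-bit word with seven digits and a trailing sign nibble (C for +, D for −); the function name pack_bcd is illustrative:

#include <stdio.h>

/* Encode a value with magnitude below 10^7 as 32-bit packed BCD:
   seven decimal digits in the upper nibbles, sign nibble lowest. */
unsigned pack_bcd(long v)
{
    unsigned word = (v < 0) ? 0xDu : 0xCu;   /* sign nibble */
    unsigned long n = (unsigned long)(v < 0 ? -v : v);
    for (int i = 1; i <= 7; ++i) {
        word |= (unsigned)(n % 10) << (4 * i);
        n /= 10;
    }
    return word;
}

int main(void)
{
    printf("%08X\n", pack_bcd(1234567));   /* prints 1234567C */
    printf("%08X\n", pack_bcd(-1234567));  /* prints 1234567D */
    return 0;
}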


Packed BCD is supported in the COBOL programming language as the "COMPUTATIONAL-3" data type. Besides the IBM System/360 and later compatible mainframes, packed BCD was implemented in the native instruction set of the original VAX processors from Digital Equipment Corporation.

Fixed-point packed decimal

Fixed-point decimal numbers are supported by some programming languages (such as COBOL and PL/I). These languages allow the programmer to specify an implicit decimal point in front of one of the digits. For example, a packed decimal value encoded with the bytes 12 34 56 7C represents the fixed-point value +1,234.567 when the implied decimal point is located between the 4th and 5th digits:

12 34 56 7C

12 34.56 7+

The decimal point is not actually stored in memory, as the packed BCD storage format does not provide for it. Its location is simply known to the compiler, and the generated code acts accordingly for the various arithmetic operations.

Higher-density encodings

If a decimal digit requires four bits, then three decimal digits require 12 bits. However, since 2¹⁰ (1,024) is greater than 10³ (1,000), if three decimal digits are encoded together, only 10 bits are needed. Two such encodings are Chen-Ho encoding and Densely Packed Decimal. The latter has the advantage that subsets of the encoding encode two digits in the optimal seven bits and one digit in four bits, as in regular BCD.

Zoned decimal

Some implementations, for example IBM mainframe systems, support zoned decimal numeric representations. Each decimal digit is stored in one byte, with the lower four bits encoding the digit in BCD form. The upper four bits, called the "zone" bits, are usually set to a fixed value so that the byte holds a character value corresponding to the digit. EBCDIC systems use a zone value of 1111 (hex F); this yields bytes in the range F0 to F9 (hex), which are the EBCDIC codes for the characters "0" through "9". Similarly, ASCII systems use a zone value of 0011 (hex 3), giving character codes 30 to 39 (hex).

For signed zoned decimal values, the rightmost (least significant) zone nibble holds the sign digit, which is the same set of values that are used for signed packed decimal numbers (see above). Thus a zoned decimal value encoded as the hex bytes F1 F2 D3 represents the signed decimal value −123:

F1 F2 D3

1 2 −3
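A sketch of that zoned layout in C, producing the F1 F2 D3 bytes shown above (to_zoned is an illustrative name; the F zone and C/D sign zones follow the EBCDIC convention described in the text):

#include <stdio.h>

/* Build an EBCDIC signed zoned decimal: zone F on every digit,
   then overwrite the last digit's zone with the sign (C = +, D = -). */
void to_zoned(int v, unsigned char *out, int ndigits)
{
    unsigned char sign = (v < 0) ? 0xD0 : 0xC0;
    unsigned n = (unsigned)(v < 0 ? -v : v);
    for (int i = ndigits - 1; i >= 0; --i) {
        out[i] = (unsigned char)(0xF0 | (n % 10));
        n /= 10;
    }
    out[ndigits - 1] = (unsigned char)((out[ndigits - 1] & 0x0F) | sign);
}

int main(void)
{
    unsigned char z[3];
    to_zoned(-123, z, 3);
    printf("%02X %02X %02X\n", z[0], z[1], z[2]);  /* prints F1 F2 D3 */
    return 0;
}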

EBCDIC zoned decimal conversion table


BCD Digit EBCDIC Character Hexadecimal

0+ { (*) \ (*) C0 A0 E0

1+ A ~ (*) C1 A1 E1

2+ B s S C2 A2 E2

3+ C t T C3 A3 E3

4+ D u U C4 A4 E4

5+ E v V C5 A5 E5

6+ F w W C6 A6 E6

7+ G x X C7 A7 E7

8+ H y Y C8 A8 E8

9+ I z Z C9 A9 E9

0− }  (*) ^  (*) D0 B0

1− J D1 B1

2− K D2 B2

3− L D3 B3

4− M D4 B4

5− N D5 B5

6− O D6 B6

7− P D7 B7

8− Q D8 B8

9− R D9 B9

(*) Note: These characters vary depending on the local character code page.

Fixed-point zoned decimal

Some languages (such as COBOL and PL/I) directly support fixed-point zoned decimal values, assigning an implicit decimal point at some location between the decimal digits of a number. For example, given a six-byte signed zoned decimal value with an implied decimal point to the right of the fourth digit, the hex bytes F1 F2 F7 F9 F5 C0 represent the value +1,279.50:

F1 F2 F7 F9 F5 C0

1 2 7 9. 5 +0

IBM and BCD

IBM used the terms binary-coded decimal and BCD for 6-bit alphameric codes that represented numbers, upper-case letters and special characters. Some variation of BCD alphamerics was used in most early IBM computers, including the IBM 1620, IBM 1400 series, and non-Decimal Architecture members of the IBM 700/7000 series.

The IBM 1400 series were character-addressable machines, each location being six bits labeled B, A, 8, 4, 2 and 1, plus an odd parity check bit (C) and a word mark bit (M). For encoding digits 1 through 9, B and A were zero and the digit value was represented by standard 4-bit BCD in bits 8 through 1. For most other characters, bits B and A were derived simply from the "12", "11", and "0" "zone punches" in the punched card character code, and bits 8 through 1 from the 1 through 9 punches. A "12 zone" punch set both B and A, an "11 zone" set B, and a "0 zone" (a 0 punch combined with any others) set A.


Thus the letter A, (12,1) in the punched card format, was encoded (B,A,1), and the currency symbol $, (11,8,3) in the punched card, as (B,8,3). This allowed the circuitry that converted between the punched card format and the internal storage format to be very simple, with only a few special cases. One important special case was digit 0, represented by a lone 0 punch in the card, and (8,2) in core memory.[2]

The memory of the IBM 1620 was organized into 5-bit addressable digits, the usual 8, 4, 2, 1 plus F, used as a flag bit. BCD alphamerics were encoded using digit pairs, with the "zone" in the even-addressed digit and the "digit" in the odd-addressed digit, the "zone" being related to the 12, 11, and 0 "zone punches" as in the 1400 series. Input/Output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes.

In the Decimal Architecture IBM 7070, IBM 7072, and IBM 7074, alphamerics were encoded using digit pairs (using two-out-of-five code in the digits, not BCD) of the 10-digit word, with the "zone" in the left digit and the "digit" in the right digit. Input/Output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes.

With the introduction of System/360, IBM expanded 6-bit BCD alphamerics to 8-bit EBCDIC, allowing the addition of many more characters (e.g., lowercase letters). A variable-length Packed BCD numeric data type was also implemented.

Today, BCD data is still heavily used in IBM processors and databases, such as IBM DB2, mainframes, and Power6. In these products, the BCD is usually zoned BCD (as in EBCDIC or ASCII), Packed BCD (two decimal digits per byte), or "pure" BCD encoding (one decimal digit stored as BCD in the low four bits of each byte). All of these are used within hardware registers and processing units, and in software.

Other computers and BCD

Other computers such as the Digital Equipment Corporation VAX-11 series could also use BCD for numeric data and could perform arithmetic directly on packed BCD data. The MicroVAX and later VAX implementations dropped this ability from the CPU but retained code compatibility with earlier machines by implementing the missing instructions in an operating system-supplied software library. In more recent computers such capabilities are almost always implemented in software rather than the CPU's instruction set, but BCD numeric data is still extremely common in commercial and financial applications.

Addition with BCD

It is possible to perform addition in BCD by first adding in binary, and then converting to BCD afterwards. Conversion of the simple sum of two digits can be done by adding 6 (that is, 16 − 10) when the result has a value greater than 9. For example:

1001 + 1000 = 10001 = 0001 0001

9 + 8 = 17 = 1 1

In BCD, there cannot exist a value greater than 9 (1001) per nibble. To correct this, 6 (0110) is added to that sum to get the correct first two digits:

0001 0001 + 0000 0110 = 0001 0111

1 1 + 0 6 = 1 7

which gives two nibbles, 0001 and 0111, which correspond to the digits "1" and "7". This yields "17" in BCD, which is the correct result. This technique can be extended to adding multiple digits, by adding in groups from right to left, propagating the second digit as a carry, always comparing the 5-bit result of each digit-pair sum to 9.
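The add-6 correction is easy to mechanize. This C sketch adds two packed-BCD bytes (two digits each), fixing up each nibble that overflowed past 9, as in the 9 + 8 example above (bcd_add is an illustrative name; inputs are assumed to be valid BCD):

#include <stdio.h>

/* Add two packed-BCD bytes: add in binary, then add 6 to any nibble
   that exceeded 9 or generated a carry, to bring it back into BCD. */
unsigned char bcd_add(unsigned char a, unsigned char b)
{
    unsigned sum = (unsigned)a + b;
    if ((sum & 0x0F) > 9 || (sum & 0x0F) < (a & 0x0F))
        sum += 0x06;                 /* correct the low digit */
    if ((sum & 0xF0) > 0x90 || sum > 0xFF)
        sum += 0x60;                 /* correct the high digit */
    return (unsigned char)sum;       /* carry out of the byte is dropped */
}

int main(void)
{
    printf("%02X\n", bcd_add(0x09, 0x08));  /* prints 17 */
    printf("%02X\n", bcd_add(0x45, 0x38));  /* prints 83 */
    return 0;
}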


Subtraction with BCD

Subtraction is done by adding the ten's complement of the subtrahend. To represent the sign of a number in BCD, the number 0000 is used to represent a positive number, and 1001 is used to represent a negative number. The remaining 14 combinations are invalid signs. To illustrate signed BCD subtraction, consider the following problem: 357 − 432. In signed BCD, 357 is 0000 0011 0101 0111. The ten's complement of 432 can be obtained by taking the nine's complement of 432, and then adding one. So, 999 − 432 = 567, and 567 + 1 = 568. By preceding 568 in BCD by the negative sign code, the number −432 can be represented. So, −432 in signed BCD is 1001 0101 0110 1000.

Now that both numbers are represented in signed BCD, they can be added together:

0000 0011 0101 0111 + 1001 0101 0110 1000 = 1001 1000 1011 1111

0 3 5 7 + 9 5 6 8 = 9 8 11 15

Since BCD is a form of decimal representation, several of the digit sums above are invalid. In the event that an invalid entry (any BCD digit greater than 1001) exists, 6 is added to generate a carry bit and cause the sum to become a valid entry. The reason for adding 6 is that there are 16 possible 4-bit BCD values (since 2⁴ = 16), but only 10 values are valid (0000 through 1001). So adding 6 to the invalid entries results in the following:

1001 1000 1011 1111 + 0000 0000 0110 0110 = 1001 1001 0010 0101

9 8 11 15 + 0 0 6 6 = 9 9 2 5

Thus the result of the subtraction is 1001 1001 0010 0101 (−925). To check the answer, note that the first digit is the sign digit, which is negative. This seems to be correct, since 357 − 432 should result in a negative number. To check the rest of the digits, represent them in decimal. 1001 0010 0101 is 925. The ten's complement of 925 is 1000 − 925 = 999 − 925 + 1 = 074 + 1 = 75, so the calculated answer is −75. To check, perform standard subtraction to verify that 357 − 432 is −75.

Note that in the event that there are a different number of nibbles being added together (such as 1053 − 122), the number with the fewest digits must first be padded with zeros before taking the ten's complement or subtracting. So, with 1053 − 122, 122 would have to first be represented as 0122, and the ten's complement of 0122 would have to be calculated.

Background

The binary-coded decimal scheme described in this article is the most common encoding, but there are many others. The method here can be referred to as Simple Binary-Coded Decimal (SBCD) or BCD 8421. In the headers to the table, the '8 4 2 1', etc., indicates the weight of each bit shown; note that in the fifth column two of the weights are negative. Both ASCII and EBCDIC character codes for the digits are examples of zoned BCD, and are also shown in the table.

The following table represents decimal digits from 0 to 9 in various BCD systems:


Digit | BCD 8 4 2 1 | Excess-3 (Stibitz code) | BCD 2 4 2 1 (Aiken code) | BCD 8 4 −2 −1 | IBM 702, 705, 7080, 1401 (8 4 2 1) | ASCII (0000 8421) | EBCDIC (0000 8421)

0 0000 0011 0000 0000 1010 0011 0000 1111 0000

1 0001 0100 0001 0111 0001 0011 0001 1111 0001

2 0010 0101 0010 0110 0010 0011 0010 1111 0010

3 0011 0110 0011 0101 0011 0011 0011 1111 0011

4 0100 0111 0100 0100 0100 0011 0100 1111 0100

5 0101 1000 1011 1011 0101 0011 0101 1111 0101

6 0110 1001 1100 1010 0110 0011 0110 1111 0110

7 0111 1010 1101 1001 0111 0011 0111 1111 0111

8 1000 1011 1110 1000 1000 0011 1000 1111 1000

9 1001 1100 1111 1111 1001 0011 1001 1111 1001

Legal history

In the 1972 case Gottschalk v. Benson, the U.S. Supreme Court overturned a lower court decision which had allowed a patent for converting BCD-encoded numbers to binary on a computer. This was an important case in determining the patentability of software and algorithms.

Comparison with pure binary

Advantages

• Many non-integral values, such as decimal 0.2, have an infinite place-value representation in binary (.001100110011...) but have a finite place-value in binary-coded decimal (0.0010). Consequently a system based on binary-coded decimal representations of decimal fractions avoids errors representing and calculating such values.
• Scaling by a factor of 10 (or a power of 10) is simple; this is useful when a decimal scaling factor is needed to represent a non-integer quantity (e.g., in financial calculations).
• Rounding at a decimal digit boundary is simpler. Addition and subtraction in decimal does not require rounding.
• Alignment of two decimal numbers (for example 1.3 + 27.08) is a simple, exact shift.
• Conversion to a character form or for display (e.g., to a text-based format such as XML, or to drive signals for a seven-segment display) is a simple per-digit mapping, and can be done in linear (O(n)) time. Conversion from pure binary involves relatively complex logic that spans digits, and for large numbers no linear-time conversion algorithm is known (see Binary numeral system).


Disadvantages

• Some operations are more complex to implement. Adders require extra logic to cause them to wrap and generate a carry early. 15–20% more circuitry is needed for BCD add compared to pure binary. Multiplication requires the use of algorithms that are somewhat more complex than shift-mask-add (a binary multiplication, requiring binary shifts and adds or the equivalent, per digit or group of digits, is required).
• Standard BCD requires four bits per digit, roughly 20% more space than a binary encoding. When packed so that three digits are encoded in ten bits, the storage overhead is reduced to about 0.34%, at the expense of an encoding that is unaligned with the 8-bit byte boundaries common on existing hardware, resulting in slower implementations on these systems.
• Practical existing implementations of BCD are typically slower than operations on binary representations, especially on embedded systems, due to limited processor support for native BCD operations.

Application

The BIOS in many personal computers stores the date and time in BCD because the MC6818 real-time clock chip used in the original IBM PC AT motherboard provided the time encoded in BCD. This form is easily converted into ASCII for display.[3]

The Atari 8-bit family of computers used BCD to implement floating-point algorithms. The MOS 6502 processor used has a BCD mode that affects the addition and subtraction instructions.

Early models of the PlayStation 3 store the date and time in BCD. This led to a worldwide outage of the console on 1 March 2010. The last two digits of the year stored as BCD were misinterpreted as 16, causing an error in the unit's date and rendering most functionality inoperable.

Representational variations

Various BCD implementations exist that employ other representations for numbers. Programmable calculators manufactured by Texas Instruments, Hewlett-Packard, and others typically employ a floating-point BCD format, typically with two or three digits for the (decimal) exponent. The extra bits of the sign digit may be used to indicate special numeric values, such as infinity, underflow/overflow, and error (a blinking display).

Signed variations

Signed decimal values may be represented in several ways. The COBOL programming language, for example, supports a total of five zoned decimal formats, each one encoding the numeric sign in a different way:

Type Description Example

Unsigned No sign nibble F1 F2 F3

Signed trailing (canonical format) Sign nibble in the last (least significant) byte F1 F2 C3

Signed leading Sign nibble in the first (most significant) byte C1 F2 F3

Signed trailing separate Separate sign character byte ('+' or '−') following the digit bytes F1 F2 F3 2B

Signed leading separate Separate sign character byte ('+' or '−') preceding the digit bytes 2B F1 F2 F3


Alternative encodings

If errors in representation and computation are more important than the speed of conversion to and from display, a scaled binary representation may be used, which stores a decimal number as a binary-encoded integer and a binary-encoded signed decimal exponent. For example, 0.2 can be represented as 2 × 10⁻¹.

This representation allows rapid multiplication and division, but may require shifting by a power of 10 during addition and subtraction to align the decimal points. It is appropriate for applications with a fixed number of decimal places that do not then require this adjustment, particularly financial applications where 2 or 4 digits after the decimal point are usually enough. Indeed, this is almost a form of fixed-point arithmetic, since the position of the radix point is implied.

Chen-Ho encoding provides a Boolean transformation for converting groups of three BCD-encoded digits to and from 10-bit values that can be efficiently encoded in hardware with only 2 or 3 gate delays. Densely Packed Decimal is a similar scheme that is used for most of the significand, except the lead digit, for one of the two alternative decimal encodings specified in the IEEE 754-2008 standard.

See also

• Bi-quinary coded decimal
• Chen-Ho encoding
• Densely packed decimal
• Double dabble, an algorithm for converting binary numbers to BCD
• Gray code
• Year 2010 problem

References[1] "General Decimal Arithmetic" (http:/ / speleotrove. com/ decimal/ ). .[2] IBM BM 1401/1440/1460/1410/7010 Character Code Chart in BCD Order (http:/ / ed-thelen. org/ 1401Project/ Van1401-CodeChart. pdf)[3] http:/ / www. se. ecu. edu. au/ units/ ens1242/ lectures/ ens_Notes_08. pdf

• Arithmetic Operations in Digital Computers, R. K. Richards, 397 pp., D. Van Nostrand Co., NY, 1955
• Schmid, Hermann, Decimal Computation, ISBN 0-471-76180-X, 266 pp., Wiley, 1974
• Superoptimizer: A Look at the Smallest Program, Henry Massalin, ACM SIGPLAN Notices, Vol. 22 #10 (Proceedings of the Second International Conference on Architectural Support for Programming Languages and Operating Systems), pp. 122–126, ACM, also IEEE Computer Society Press #87CH2440-6, October 1987
• VLSI Designs for Redundant Binary-Coded Decimal Addition, Behrooz Shirazi, David Y. Y. Yun, and Chang N. Zhang, IEEE Seventh Annual International Phoenix Conference on Computers and Communications, 1988, pp. 52–56, IEEE, March 1988
• Fundamentals of Digital Logic by Brown and Vranesic, 2003
• Modified Carry Look Ahead BCD Adder With CMOS and Reversible Logic Implementation, Himanshu Thapliyal and Hamid R. Arabnia, Proceedings of the 2006 International Conference on Computer Design (CDES'06), ISBN 1-60132-009-4, pp. 64–69, CSREA Press, November 2006
• Reversible Implementation of Densely-Packed-Decimal Converter to and from Binary-Coded-Decimal Format Using in IEEE-754R, A. Kaivani, A. Zaker Alhosseini, S. Gorgin, and M. Fazlali, 9th International Conference on Information Technology (ICIT'06), pp. 273–276, IEEE, December 2006
• See also the Decimal Arithmetic Bibliography (http://speleotrove.com/decimal/decbibindex.html)


External links

• IBM: Chen-Ho encoding (http://speleotrove.com/decimal/chen-ho.html)
• IBM: Densely Packed Decimal (http://speleotrove.com/decimal/DPDecimal.html)
• Convert BCD to decimal, binary and hexadecimal and vice versa (http://www.unitjuggler.com/convert-numbersystems-from-decimal-to-bcd.html)
• BCD for Java (https://code.google.com/p/bcd4j/)

ASCII

All 128 ASCII characters, including non-printable characters (represented by their abbreviation).

The 95 ASCII graphic characters are numbered from 0x20 to 0x7E (32 to 126 decimal). The space character is considered a non-printing graphic.[1]

The American Standard Code for Information Interchange (acronym: ASCII; pronounced /ˈæski/ ASS-kee)[2] is a character-encoding scheme based on the ordering of the English alphabet. ASCII codes represent text in computers, communications equipment, and other devices that use text. Most modern character-encoding schemes are based on ASCII, though they support many more characters than did ASCII.

US-ASCII is the Internet Assigned Numbers Authority (IANA) preferred charset name for ASCII.

Historically, ASCII developed from telegraphic codes. Its first commercial use was as a seven-bit teleprinter code promoted by Bell data services. Work on ASCII formally began on October 6, 1960, with the first meeting of the American Standards Association's (ASA) X3.2 subcommittee. The first edition of the standard was published during 1963,[3] [4] a major revision during 1967,[5] and the most recent update during 1986.[6] Compared to earlier telegraph codes, the proposed Bell code and ASCII were both ordered for more convenient sorting (i.e., alphabetization) of lists, and added features for devices other than teleprinters.

ASCII includes definitions for 128 characters: 33 are non-printing control characters (now mostly obsolete) that affect how text and space are processed;[7] 94 are printable characters, and the space is considered an invisible graphic.[8] The most commonly used character encoding on the World Wide Web was US-ASCII[9] until December 2007, when it was surpassed by UTF-8.[10] [11] [12]


History

The US ASCII 1968 Code Chart was structured with two columns of control characters, a column with special characters, a column with numbers, and four columns of letters.

The American Standard Code for Information Interchange (ASCII) was developed under the auspices of a committee of the American Standards Association, called the X3 committee, by its X3.2 (later X3L2) subcommittee, and later by that subcommittee's X3.2.4 working group. The ASA became the United States of America Standards Institute or USASI[13] and ultimately the American National Standards Institute.

The X3.2 subcommittee designed ASCII based on earlier teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit patterns and character symbols (i.e. graphemes and control characters). This allows digital devices to communicate with each other and to process, store, and communicate character-oriented information such as written language. Before ASCII was developed, the encodings in use included 26 alphabetic characters, 10 numerical digits, and from 11 to 25 special graphic symbols. To include all these, and control characters compatible with the Comité Consultatif International Téléphonique et Télégraphique standard, Fieldata, and early EBCDIC, more than 64 codes were required for ASCII.

The committee debated the possibility of a shift key function (like the Baudot code), which would allow more than 64 codes to be represented by six bits. In a shifted code, some character codes determine choices between options for the following character codes. It allows compact encoding, but is less reliable for data transmission; an error in transmitting the shift code typically makes a long part of the transmission unreadable. The standards committee decided against shifting, and so ASCII required at least a seven-bit code.[14]

The committee considered an eight-bit code, since eight bits would allow two four-bit patterns to efficiently encode two digits with binary coded decimal. However, it would require all data transmission to send eight bits when seven could suffice. The committee voted to use a seven-bit code to minimize costs associated with data transmission. Since perforated tape at the time could record eight bits in one position, it also allowed for a parity bit for error checking if desired.[15] Machines with octets as the native data type that did not use parity checking typically set the eighth bit to 0.[16]
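A small illustrative C sketch (not part of the standard; teleprinter hardware did this electrically) of using the spare eighth bit for even parity over the seven data bits:

    #include <stdio.h>

    /* Set bit 7 of a 7-bit code so the whole octet carries an even
       number of 1-bits (even parity). */
    static unsigned char add_even_parity(unsigned char c)
    {
        unsigned char ones = 0;
        for (int i = 0; i < 7; i++)
            ones ^= (c >> i) & 1;   /* XOR = parity of the data bits */
        return (unsigned char)(c | (ones << 7));
    }

    int main(void)
    {
        /* 'C' = 0x43 has three 1-bits, so the parity bit is set. */
        printf("0x%02X\n", add_even_parity('C'));   /* prints 0xC3 */
        return 0;
    }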

The code itself was patterned so that most control codes were together, and all graphic codes were together, for ease of identification. The first two columns (32 positions) were reserved for control characters.[17] The "space" character had to come before graphics to make sorting easier, so it became position 0x20;[18] for the same reason, many special signs commonly used as separators were placed before digits. The committee decided it was important to support uppercase 64-character alphabets, and chose to pattern ASCII so it could be reduced easily to a usable 64-character set of graphic codes.[19] Lowercase letters were therefore not interleaved with uppercase. To keep options available for lowercase letters and other graphics, the special and numeric codes were arranged before the letters, and the letter 'A' was placed in position 0x41 to match the draft of the corresponding British standard.[20] The digits 0–9 were arranged so they correspond to values in binary prefixed with 011, making conversion with binary-coded decimal straightforward.
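The 011-prefix arrangement of the digits is easy to demonstrate; a short C sketch (illustrative only):

    #include <stdio.h>

    int main(void)
    {
        /* ASCII digits run from 011 0000 ('0' = 0x30) to 011 1001
           ('9' = 0x39): the low nibble of a digit character is its
           BCD value, and subtracting '0' gives the number directly. */
        char ch = '7';                 /* 0x37 = 011 0111 */
        int from_sub   = ch - '0';     /* 7 */
        int low_nibble = ch & 0x0F;    /* 7, the BCD digit */
        printf("%d %d\n", from_sub, low_nibble);
        return 0;
    }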


Many of the non-alphanumeric characters were positioned to correspond to their shifted position on typewriters. Thus #, $ and % were placed to correspond to 3, 4, and 5 in the adjacent column. The parentheses could not correspond to 9 and 0, however, because the place corresponding to 0 was taken by the space character. Since many European typewriters placed the parentheses with 8 and 9, those corresponding positions were chosen for the parentheses. The @ symbol was not used in continental Europe and the committee expected it would be replaced by an accented À in the French variation, so the @ was placed in position 0x40, next to the letter A.[21]

The control codes deemed essential for data transmission were the start of message (SOM), end of address (EOA), end of message (EOM), end of transmission (EOT), "who are you?" (WRU), "are you?" (RU), a reserved device control (DC0), synchronous idle (SYNC), and acknowledge (ACK). These were positioned to maximize the Hamming distance between their bit patterns.[22]

With the other special characters and control codes filled in, ASCII was published as ASA X3.4-1963, leaving 28 code positions without any assigned meaning, reserved for future standardization, and one unassigned control code.[23] There was some debate at the time whether there should be more control characters rather than the lowercase alphabet.[24] The indecision did not last long: during May 1963 the CCITT Working Party on the New Telegraph Alphabet proposed to assign lowercase characters to columns 6 and 7,[25] and the International Organization for Standardization TC 97 SC 2 voted during October to incorporate the change into its draft standard.[26] The X3.2.4 task group voted its approval for the change to ASCII at its May 1963 meeting.[27] Locating the lowercase letters in columns 6 and 7 caused the characters to differ in bit pattern from the uppercase letters by a single bit, which simplified case-insensitive character matching and the construction of keyboards and printers.

The X3 committee made other changes, including other new characters (the brace and vertical line characters),[28] renaming some control characters (SOM became start of header (SOH)) and moving or removing others (RU was removed).[29] ASCII was subsequently updated as USASI X3.4-1967, then USASI X3.4-1968, ANSI X3.4-1977, and finally, ANSI X3.4-1986 (the first two are occasionally retronamed ANSI X3.4-1967 and ANSI X3.4-1968). The X3 committee also addressed how ASCII should be transmitted (least significant bit first), and how it should be recorded on perforated tape. They proposed a 9-track standard for magnetic tape, and attempted to deal with some forms of punched card formats.

ASCII itself was first used commercially during 1963 as a seven-bit teleprinter code for American Telephone & Telegraph's TWX (Teletype Wide-area eXchange) network. TWX originally used the earlier five-bit Baudot code, which was also used by the competing Telex teleprinter system. Bob Bemer introduced features such as the escape sequence.[3] His British colleague Hugh McGregor Ross helped to popularize this work; according to Bemer, "so much so that the code that was to become ASCII was first called the Bemer-Ross Code in Europe".[30] Because of his extensive work on ASCII, Bemer has been called "the father of ASCII."[31]
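The single-bit case difference noted above (the bit has value 0x20, since columns 6 and 7 sit 0x20 above columns 4 and 5) can be sketched in C; the trick is valid only for bytes already known to be ASCII letters:

    #include <stdio.h>

    int main(void)
    {
        /* Each lowercase letter is exactly one bit away from its
           uppercase partner. */
        char upper = 'G';              /* 0x47 */
        char lower = upper | 0x20;     /* 'g' = 0x67 */

        /* Case-insensitive match: compare with bit 5 forced on. */
        int same = ((upper | 0x20) == (lower | 0x20));
        printf("%c %c %d\n", upper, lower, same);   /* G g 1 */
        return 0;
    }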

On March 11, 1968, U.S. President Lyndon B. Johnson mandated that all computers purchased by the United States federal government support ASCII, stating:

I have also approved recommendations of the Secretary of Commerce regarding standards for recording the Standard Code for Information Interchange on magnetic tapes and paper tapes when they are used in computer operations. All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have the capability to use the Standard Code for Information Interchange and the formats prescribed by the magnetic tape and paper tape standards when these media are used.[32]

Other international standards bodies have ratified character encodings such as ISO/IEC 646 that are identical or nearly identical to ASCII, with extensions for characters outside the English alphabet and symbols used outside the United States, such as the symbol for the United Kingdom's pound sterling (£). Almost every country needed an adapted version of ASCII, since ASCII suited the needs of only the USA and a few other countries. For example, Canada had its own version that supported French characters. Other adapted encodings include ISCII (India), VISCII (Vietnam), and YUSCII (Yugoslavia). Although these encodings are sometimes referred to as ASCII, true ASCII is defined strictly only by the ANSI standard.

ASCII was incorporated into the Unicode character set as the first 128 symbols, so the ASCII characters have the same numeric codes in both sets. This allows UTF-8 to be backward compatible with ASCII, a significant advantage.

ASCII control characters

ASCII reserves the first 32 codes (numbers 0–31 decimal) for control characters: codes originally intended not to represent printable information, but rather to control devices (such as printers) that make use of ASCII, or to provide meta-information about data streams such as those stored on magnetic tape. For example, character 10 represents the "line feed" function (which causes a printer to advance its paper), and character 8 represents "backspace". RFC 2822 refers to control characters that do not include carriage return, line feed or white space as non-whitespace control characters.[33] Except for the control characters that prescribe elementary line-oriented formatting, ASCII does not define any mechanism for describing the structure or appearance of text within a document. Other schemes, such as markup languages, address page and document layout and formatting.

The original ASCII standard used only short descriptive phrases for each control character. The ambiguity this caused was sometimes intentional (where a character would be used slightly differently on a terminal link than on a data stream) and sometimes accidental (such as what "delete" means).

Probably the most influential single device on the interpretation of these characters was the ASR-33 Teletype series, which was a printing terminal with an available paper tape reader/punch option. Paper tape was a very popular medium for long-term program storage through the 1980s, less costly and in some ways less fragile than magnetic tape. In particular, the Teletype 33 machine assignments for codes 17 (Control-Q, DC1, also known as XON), 19 (Control-S, DC3, also known as XOFF), and 127 (Delete) became de facto standards. Because the keytop for the O key also showed a left-arrow symbol (from ASCII-1963, which had this character instead of underscore), a noncompliant use of code 15 (Control-O, Shift In) interpreted as "delete previous character" was also adopted by many early timesharing systems but eventually became neglected.

The use of Control-S (XOFF, an abbreviation for transmit off) as a "handshaking" signal warning a sender to stop transmission because of impending overflow, and Control-Q (XON, "transmit on") to resume sending, persists to this day in many systems as a manual output control technique. On some systems Control-S retains its meaning but Control-Q is replaced by a second Control-S to resume output.

Code 127 is officially named "delete" but the Teletype label was "rubout". Since the original standard did not give detailed interpretation for most control codes, interpretations of this code varied. The original Teletype meaning, and the intent of the standard, was to make it an ignored character, the same as NUL (all zeroes). This was useful specifically for paper tape, because punching the all-ones bit pattern on top of an existing mark would obliterate it. Tapes designed to be "hand edited" could even be produced with spaces of extra NULs (blank tape) so that a block of characters could be "rubbed out" and then replacements put into the empty space.

As video terminals began to replace printing ones, the value of the "rubout" character was lost. DEC systems, for example, interpreted "Delete" to mean "remove the character before the cursor," and this interpretation also became common in Unix systems. Most other systems used "Backspace" for that meaning and used "Delete" to mean "remove the character at the cursor". That latter interpretation is the most common now.

Many more of the control codes have been given meanings quite different from their original ones. The "escape" character (ESC, code 27), for example, was intended originally to allow sending other control characters as literals instead of invoking their meaning. This is the same meaning of "escape" encountered in URL encodings, C language strings, and other systems where certain characters have a reserved meaning. Over time this meaning has been co-opted and has eventually been changed. In modern use, an ESC sent to the terminal usually indicates the start of a command sequence, usually in the form of a so-called "ANSI escape code" (or, more properly, a "Control Sequence Introducer") beginning with ESC followed by a "[" (left-bracket) character. An ESC sent from the terminal is most often used as an out-of-band character used to terminate an operation, as in the TECO and vi text editors. In graphical user interface (GUI) and windowing systems, ESC generally causes an application to abort its current operation or to exit (terminate) altogether.

The inherent ambiguity of many control characters, combined with their historical usage, created problems when transferring "plain text" files between systems. The best example of this is the newline problem on various operating systems. Teletypes required that a line of text be terminated with both "Carriage Return" and "Linefeed". The first returns the printing carriage to the beginning of the line and the second advances to the next line without moving the carriage. However, requiring two characters to mark the end of a line introduced unnecessary complexity and questions as to how to interpret each character when encountered alone. To simplify matters, plain text files on Unix and Amiga systems use line feeds alone to separate lines. Similarly, older Macintosh systems, among others, use only carriage returns in plain text files. Various IBM operating systems used both characters to mark the end of a line, perhaps for compatibility with teletypes. This de facto standard was copied into CP/M and then into MS-DOS and eventually into Microsoft Windows. Transmission of text over the Internet, for protocols such as E-mail and the World Wide Web, uses both characters.

Some operating systems such as the pre-VMS DEC operating systems, along with CP/M, tracked file length only in units of disk blocks and used Control-Z (SUB) to mark the end of the actual text in the file. For this reason, EOF, or end-of-file, was used colloquially and conventionally as a TLA for Control-Z instead of SUBstitute. For a variety of reasons, the end-of-text code, ETX aka Control-C, was inappropriate, and using Z as the control code to end a file is analogous to it ending the alphabet, a very convenient mnemonic aid. Text strings ending with the null character are known as ASCIZ, ASCIIZ or C strings.
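Before the chart, a small illustrative C sketch of the caret notation used in the table below: a control code and its "^X" letter differ only in bit 6 (0x40), so toggling that bit converts between them.

    #include <stdio.h>

    int main(void)
    {
        unsigned char esc = 27;                    /* ESC */
        printf("ESC is ^%c\n", esc ^ 0x40);        /* prints ^[ */

        /* Going the other way: the XON/XOFF flow-control pair. */
        printf("Ctrl-Q = %d, Ctrl-S = %d\n",
               'Q' ^ 0x40, 'S' ^ 0x40);            /* 17 (DC1), 19 (DC3) */
        return 0;
    }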

Binary Oct Dec Hex Abbr Symbol[34] Caret[35] C escape[36] Description

000 0000 000 0 00 NUL ␀ ^@ \0 Null character

000 0001 001 1 01 SOH ␁ ^A Start of Header

000 0010 002 2 02 STX ␂ ^B Start of Text

000 0011 003 3 03 ETX ␃ ^C End of Text

000 0100 004 4 04 EOT ␄ ^D End of Transmission

000 0101 005 5 05 ENQ ␅ ^E Enquiry

000 0110 006 6 06 ACK ␆ ^F Acknowledgment

000 0111 007 7 07 BEL ␇ ^G \a Bell

000 1000 010 8 08 BS ␈ ^H \b Backspace[37] [38]

000 1001 011 9 09 HT ␉ ^I \t Horizontal Tab[39]

000 1010 012 10 0A LF ␊ ^J \n Line feed

000 1011 013 11 0B VT ␋ ^K \v Vertical Tab

000 1100 014 12 0C FF ␌ ^L \f Form feed

000 1101 015 13 0D CR ␍ ^M \r Carriage return[40]

000 1110 016 14 0E SO ␎ ^N Shift Out

000 1111 017 15 0F SI ␏ ^O Shift In

001 0000 020 16 10 DLE ␐ ^P Data Link Escape

001 0001 021 17 11 DC1 ␑ ^Q Device Control 1 (oft. XON)


001 0010 022 18 12 DC2 ␒ ^R Device Control 2

001 0011 023 19 13 DC3 ␓ ^S Device Control 3 (oft. XOFF)

001 0100 024 20 14 DC4 ␔ ^T Device Control 4

001 0101 025 21 15 NAK ␕ ^U Negative Acknowledgement

001 0110 026 22 16 SYN ␖ ^V Synchronous idle

001 0111 027 23 17 ETB ␗ ^W End of Transmission Block

001 1000 030 24 18 CAN ␘ ^X Cancel

001 1001 031 25 19 EM ␙ ^Y End of Medium

001 1010 032 26 1A SUB ␚ ^Z Substitute

001 1011 033 27 1B ESC ␛ ^[ \e[41] Escape[42]

001 1100 034 28 1C FS ␜ ^\ File Separator

001 1101 035 29 1D GS ␝ ^] Group Separator

001 1110 036 30 1E RS ␞ ^^[43] Record Separator

001 1111 037 31 1F US ␟ ^_ Unit Separator

111 1111 177 127 7F DEL ␡ ^? Delete[44] [38]

[1] "RFC 20 : ASCII format for Network Interchange" (http:/ / tools. ietf. org/ html/ rfc20), ANSI X3.4-1968, October 16, 1969.[2] Audio pronunciation for ASCII (http:/ / www. m-w. com/ cgi-bin/ audio. pl?ascii001. wav=ASCII). Merriam Webster. Accessed 2008-04-14.[3] Mary Brandel (July 6, 1999). 1963: The Debut of ASCII (http:/ / edition. cnn. com/ TECH/ computing/ 9907/ 06/ 1963. idg/ index. html):

CNN. Accessed 2008-04-14.[4] American Standard Code for Information Interchange, ASA X3.4-1963, American Standards Association, June 17, 1963[5] USA Standard Code for Information Interchange, USAS X3.4-1967, United States of America Standards Institute, July 7, 1967[6] American National Standard for Information Systems — Coded Character Sets — 7-Bit American National Standard Code for Information

Interchange (7-Bit ASCII), ANSI X3.4-1986, American National Standards Institute, Inc., March 26, 1986[7] International Organization for Standardization (December 1, 1975). " The set of control characters for ISO 646 (http:/ / www. itscj. ipsj. or.

jp/ ISO-IR/ 001. pdf)". Internet Assigned Numbers Authority Registry. Alternate U.S. version: (http:/ / www. itscj. ipsj. or. jp/ ISO-IR/ 006.pdf). Accessed 2008-04-14.

[8] Mackenzie, p.223.[9] Internet Assigned Numbers Authority (May 14, 2007). " Character Sets (http:/ / www. iana. org/ assignments/ character-sets)". Accessed

2008-04-14.[10] Dubost, Karl (May 6, 2008). "utf-8 Growth On The Web" (http:/ / www. w3. org/ QA/ 2008/ 05/ utf8-web-growth. html). W3C Blog. World

Wide Web Consortium. . Retrieved 2010-08-15.[11] Davis, Mark (May 5, 2008). "Moving to Unicode 5.1" (http:/ / googleblog. blogspot. com/ 2008/ 05/ moving-to-unicode-51. html). Official

Google Blog. Google. . Retrieved 2010-08-15.[12] Davis, Mark (Jan 28, 2010). "Unicode nearing 50% of the web" (http:/ / googleblog. blogspot. com/ 2010/ 01/ unicode-nearing-50-of-web.

html). Official Google Blog. Google. . Retrieved 2010-08-15.[13] Mackenzie, p.211.[14] Decision 4. Mackenzie, p.215.[15] Decision 5. Mackenzie, p.217.[16] Sawyer A. Sawyer and Steven George Krantz (January 1, 1995). A Tex Primer for Scientists. CRC Press. ISBN 0-8493-7159-7. p.13.[17] Decision 8,9. Mackenzie, p.220.[18] Decision 10. Mackenzie, p.237.[19] Decision 14. Mackenzie, p.228.[20] Decision 18. Mackenzie, p.238.[21] Mackenzie, p.243.[22] Mackenzie, p.243-245.[23] Mackenzie, p.66, 245.


[24] Mackenzie, p. 435.
[25] Brief Report: Meeting of CCITT Working Party on the New Telegraph Alphabet, May 13–15, 1963.
[26] Report of ISO/TC/97/SC 2 – Meeting of October 29–31, 1963.
[27] Report on Task Group X3.2.4, June 11, 1963, Pentagon Building, Washington, DC.
[28] Report of Meeting No. 8, Task Group X3.2.4, December 17 and 18, 1963.
[29] Mackenzie, pp. 247–248.
[30] Bob Bemer (n.d.). Bemer meets Europe (http://www.trailing-edge.com/~bobbemer/EUROPE.HTM). Trailing-edge.com. Accessed 2008-04-14. Employed at IBM at that time.
[31] "Biography of Robert William Bemer" (http://www.thocp.net/biographies/bemer_bob.htm).
[32] Lyndon B. Johnson (March 11, 1968). Memorandum Approving the Adoption by the Federal Government of a Standard Code for Information Interchange (http://www.presidency.ucsb.edu/ws/index.php?pid=28724). The American Presidency Project. Accessed 2008-04-14.
[33] RFC 2822 (April 2001). "NO-WS-CTL".
[34] The Unicode characters from the area U+2400 to U+2421 are reserved for representing control characters when it is necessary to print or display them rather than have them perform their intended function. Some browsers may not display these properly.
[35] Caret notation is often used to represent control characters. It also indicates the key sequence traditionally used to input the character on most text terminals: the caret (^) that begins these sequences represents holding down the "Ctrl" key while typing the second character.
[36] Character escape codes in the C programming language and many other languages influenced by it, such as Java and Perl (though not all implementations necessarily support all escape codes).
[37] The Backspace character can also be entered by pressing the "Backspace", "Bksp", or ← key on some systems.
[38] The ambiguity of Backspace is due to early terminals designed assuming the main use of the keyboard would be to manually punch paper tape while not connected to a computer. To delete the previous character, you had to back up the paper tape punch, which for mechanical and simplicity reasons was a button on the punch itself and not the keyboard, then type the rubout character. They therefore placed a key producing rubout at the location used on typewriters for backspace. When systems used these terminals and provided command-line editing, they had to use the "rubout" code to perform a backspace, and often did not interpret the backspace character (they might echo "^H" for backspace). Other terminals not designed for paper tape made the key at this location produce Backspace, and systems designed for these used that character to back up. Since the delete code often produced a backspace effect, this also forced terminal manufacturers to make any "Delete" key produce something other than the Delete character.
[39] The Tab character can also be entered by pressing the "Tab" key on most systems.
[40] The Carriage Return character can also be entered by pressing the "Return", "Ret", "Enter", or ↵ key on most systems.
[41] The '\e' escape sequence is not part of ISO C and many other language specifications. However, it is understood by several compilers.
[42] The Escape character can also be entered by pressing the "Escape" or "Esc" key on some systems.
[43] ^^ means Control-Caret (pressing the "Ctrl" and "^" keys), not Control-Control.
[44] The Delete character can sometimes be entered by pressing the "Backspace", "Bksp", or ← key on some systems.

ASCII printable characters

Codes 0x21 to 0x7E, known as the printable characters, represent letters, digits, punctuation marks, and a few miscellaneous symbols.

Code 0x20, the space character, denotes the space between words, as produced by the space-bar of a keyboard. Since the space character is considered an invisible graphic (rather than a control character)[8] and thus would not normally be visible, it is represented here by Unicode character U+2420 "␠"; Unicode characters U+2422 "␢" or U+2423 "␣" are also available for use when a visible representation of a space is necessary.

Code 0x7F corresponds to the non-printable "Delete" (DEL) control character and is therefore omitted from this chart; it is covered in the previous section's chart.
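A short C sketch of the boundaries just described (0x20 through 0x7E graphic, 0x7F excluded); the function name is illustrative, not from any library:

    #include <stdio.h>

    /* ASCII graphic range: 0x20 (space) through 0x7E ('~');
       0x7F (DEL) and everything below 0x20 are control codes. */
    static int ascii_graphic(unsigned char c)
    {
        return c >= 0x20 && c <= 0x7E;
    }

    int main(void)
    {
        for (int c = 0; c < 128; c++)
            if (ascii_graphic((unsigned char)c))
                putchar(c);        /* prints the 95 graphic characters */
        putchar('\n');
        return 0;
    }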


Binary Oct Dec Hex Glyph

010 0000 040 32 20 ␠

010 0001 041 33 21 !

010 0010 042 34 22 "

010 0011 043 35 23 #

010 0100 044 36 24 $

010 0101 045 37 25 %

010 0110 046 38 26 &

010 0111 047 39 27 '

010 1000 050 40 28 (

010 1001 051 41 29 )

010 1010 052 42 2A *

010 1011 053 43 2B +

010 1100 054 44 2C ,

010 1101 055 45 2D -

010 1110 056 46 2E .

010 1111 057 47 2F /

011 0000 060 48 30 0

011 0001 061 49 31 1

011 0010 062 50 32 2

011 0011 063 51 33 3

011 0100 064 52 34 4

011 0101 065 53 35 5

011 0110 066 54 36 6

011 0111 067 55 37 7

011 1000 070 56 38 8

011 1001 071 57 39 9

011 1010 072 58 3A :

011 1011 073 59 3B ;

011 1100 074 60 3C <

011 1101 075 61 3D =

011 1110 076 62 3E >

011 1111 077 63 3F ?


Binary Oct Dec Hex Glyph

100 0000 100 64 40 @

100 0001 101 65 41 A

100 0010 102 66 42 B

100 0011 103 67 43 C

100 0100 104 68 44 D

100 0101 105 69 45 E

100 0110 106 70 46 F

100 0111 107 71 47 G

100 1000 110 72 48 H

100 1001 111 73 49 I

100 1010 112 74 4A J

100 1011 113 75 4B K

100 1100 114 76 4C L

100 1101 115 77 4D M

100 1110 116 78 4E N

100 1111 117 79 4F O

101 0000 120 80 50 P

101 0001 121 81 51 Q

101 0010 122 82 52 R

101 0011 123 83 53 S

101 0100 124 84 54 T

101 0101 125 85 55 U

101 0110 126 86 56 V

101 0111 127 87 57 W

101 1000 130 88 58 X

101 1001 131 89 59 Y

101 1010 132 90 5A Z

101 1011 133 91 5B [

101 1100 134 92 5C \

101 1101 135 93 5D ]

101 1110 136 94 5E ^

101 1111 137 95 5F _


Binary Oct Dec Hex Glyph

110 0000 140 96 60 `

110 0001 141 97 61 a

110 0010 142 98 62 b

110 0011 143 99 63 c

110 0100 144 100 64 d

110 0101 145 101 65 e

110 0110 146 102 66 f

110 0111 147 103 67 g

110 1000 150 104 68 h

110 1001 151 105 69 i

110 1010 152 106 6A j

110 1011 153 107 6B k

110 1100 154 108 6C l

110 1101 155 109 6D m

110 1110 156 110 6E n

110 1111 157 111 6F o

111 0000 160 112 70 p

111 0001 161 113 71 q

111 0010 162 114 72 r

111 0011 163 115 73 s

111 0100 164 116 74 t

111 0101 165 117 75 u

111 0110 166 118 76 v

111 0111 167 119 77 w

111 1000 170 120 78 x

111 1001 171 121 79 y

111 1010 172 122 7A z

111 1011 173 123 7B {

111 1100 174 124 7C |

111 1101 175 125 7D }

111 1110 176 126 7E ~


Aliases

A June 1992 RFC[1] and the Internet Assigned Numbers Authority registry of character sets[9] recognize the following case-insensitive aliases for ASCII as suitable for use on the Internet:
• ANSI_X3.4-1968 (canonical name)
• iso-ir-6
• ANSI_X3.4-1986
• ISO_646.irv:1991
• ASCII (with ASCII-7 and ASCII-8 variants)
• ISO646-US
• US-ASCII (preferred MIME name)[9]
• us
• IBM367
• cp367
• csASCII

Of these, the IANA encourages use of the name "US-ASCII" for Internet uses of ASCII. One often finds this in the optional "charset" parameter in the Content-Type header of some MIME messages, in the equivalent "meta" element of some HTML documents, and in the encoding declaration part of the prologue of some XML documents.
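Illustrative (hypothetical) document fragments showing the three contexts just mentioned:

    Content-Type: text/plain; charset=US-ASCII
    <meta http-equiv="Content-Type" content="text/html; charset=US-ASCII">
    <?xml version="1.0" encoding="US-ASCII"?>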

Variants

As computer technology spread throughout the world, different standards bodies and corporations developed many variations of ASCII to facilitate the expression of non-English languages that used Roman-based alphabets. One could class some of these variations as "ASCII extensions", although some misuse that term to represent all variants, including those that do not preserve ASCII's character-map in the 7-bit range.

The PETSCII code Commodore International used for their 8-bit systems is probably unique among post-1970 codes in being based on ASCII-1963, instead of the more common ASCII-1967, such as found on the ZX Spectrum computer. Atari and Galaksija computers also used ASCII variants.

Incompatibility vs interoperability

From early in its development,[2] ASCII was intended to be just one of several national variants of an international character code standard, ultimately published as ISO/IEC 646 (1972), which would share most characters in common but assign other locally-useful characters to several code points reserved for "national use." However, the four years that elapsed between the publication of ASCII-1963 and ISO's first acceptance of an international recommendation during 1967[3] caused ASCII's choices for the national use characters to seem to be de facto standards for the world, causing confusion and incompatibility once other countries did begin to make their own assignments to these code points.

ISO/IEC 646, like ASCII, was a 7-bit character set. It did not make any additional codes available, so the same code points encoded different characters in different countries. Escape codes were defined to indicate which national variant applied to a piece of text, but they were rarely used, so it was often impossible to know what variant to work with and therefore which character a code represented, and text-processing systems could generally cope with only one variant anyway.

Because the bracket and brace characters of ASCII were assigned to "national use" code points that were used for accented letters in other national variants of ISO/IEC 646, a German, French, or Swedish, etc., programmer using their national variant of ISO/IEC 646, rather than ASCII, had to write, and thus read, something such as

    ä aÄiÜ='Ön'; ü

instead of

    { a[i]='\n'; }

C trigraphs were created to solve this problem for ANSI C (see the sketch below), although their late introduction and inconsistent implementation in compilers limited their use.
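A hedged sketch of the trigraph workaround; it assumes a compiler that still honors trigraphs (e.g. gcc in strict -std=c89/-std=c99 modes or with -trigraphs):

    #include <stdio.h>

    /* ??< ??> ??( ??) are trigraphs for { } [ ], letting the statement
       above be written without the national-use code points. */
    int main(void)
    ??<
        char a??(1??);
        a??(0??) = '\n';
        putchar(a??(0??));   /* prints a newline */
        return 0;
    ??>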

Eventually, as 8-, 16-, and 32-bit computers began to replace 18- and 36-bit computers as the norm, it became common to use an 8-bit byte to store each character in memory, providing an opportunity for extended, 8-bit relatives of ASCII, with the 128 additional characters providing room to avoid most of the ambiguity that had been necessary in 7-bit codes.

For example, IBM developed 8-bit code pages, such as code page 437, which replaced the control characters with graphic symbols such as smiley faces, and mapped additional graphic characters to the upper 128 positions. Operating systems such as DOS supported these code pages, and manufacturers of IBM PCs supported them in hardware. Digital Equipment Corporation developed the Multinational Character Set (DEC-MCS) for use in the popular VT220 terminal.

Eight-bit standards such as ISO/IEC 8859 (derived from the DEC-MCS) and Mac OS Roman developed as true extensions of ASCII, leaving the original character-mapping intact, but adding additional character definitions after the first 128 (i.e., 7-bit) characters. This enabled representation of characters used in a broader range of languages. Because there were several competing 8-bit code standards, they continued to suffer from incompatibilities and limitations. Still, ISO-8859-1 (Latin 1), its variant Windows-1252 (often mislabeled as ISO-8859-1), and the original 7-bit ASCII remain the most common character encodings in use today.

Unicode

Unicode and the ISO/IEC 10646 Universal Character Set (UCS) have a much wider array of characters, and their various encoding forms have begun to supplant ISO/IEC 8859 and ASCII rapidly in many environments. While ASCII is limited to 128 characters, Unicode and the UCS support more characters by separating the concepts of unique identification (using natural numbers called code points) and encoding (to 8-, 16- or 32-bit binary formats, called UTF-8, UTF-16 and UTF-32).

To allow backward compatibility, the 128 ASCII and 256 ISO-8859-1 (Latin 1) characters are assigned Unicode/UCS code points that are the same as their codes in the earlier standards. Therefore, ASCII can be considered a 7-bit encoding scheme for a very small subset of Unicode/UCS, and, conversely, the UTF-8 encoding forms are binary-compatible with ASCII for code points below 128, meaning all ASCII is valid UTF-8. The other encoding forms resemble ASCII in how they represent the first 128 characters of Unicode, but use 16 or 32 bits per character, so they require conversion for compatibility. (Similarly, UCS-2 is upwards compatible with UTF-16.)

Order

ASCII-code order is also called ASCIIbetical order.[4] Collation of data is sometimes done in this order rather than "standard" alphabetical order (collating sequence). The main deviations in ASCII order are:
• All uppercase letters come before lowercase letters; for example, "Z" sorts before "a".
• Digits and many punctuation marks come before letters; for example, "4" sorts before "one".

An intermediate order, which can easily be programmed on a computer, converts uppercase letters to lowercase before comparing ASCII values.
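A short C sketch of ASCIIbetical ordering; strcmp compares byte values, so it gives this order directly, and the data here is invented for illustration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* qsort comparator: byte-value (ASCIIbetical) string order. */
    static int cmp(const void *a, const void *b)
    {
        return strcmp(*(const char *const *)a, *(const char *const *)b);
    }

    int main(void)
    {
        const char *w[] = { "apple", "Zebra", "4ever", "Banana" };
        qsort(w, 4, sizeof w[0], cmp);
        /* Digits < uppercase < lowercase, so the result is:
           4ever, Banana, Zebra, apple. The "intermediate order"
           would tolower() both strings inside cmp first. */
        for (int i = 0; i < 4; i++)
            printf("%s\n", w[i]);
        return 0;
    }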


See also
• 3568 ASCII, an asteroid named after the character encoding
• ASCII art
• HTML decimal character rendering
• Extended ASCII

References

[1] RFC 1345 (June 1992).
[2] "Specific Criteria," attachment to memo from R. W. Reach, "X3-2 Meeting – September 14 and 15," September 18, 1961.
[3] R. Maréchal, ISO/TC 97 – Computers and Information Processing: Acceptance of Draft ISO Recommendation No. 1052, December 22, 1967.
[4] ASCIIbetical definition (http://www.pcmag.com/encyclopedia_term/0,2542,t=ASCIIbetical&i=38025,00.asp). PC Magazine. Accessed 2008-04-14.

Further reading
• R.W. Bemer, "A Proposal for Character Code Compatibility," Communications of the ACM, Vol. 3, No. 2, February 1960, pp. 71–72.
• R.W. Bemer, "The Babel of Codes Prior to ASCII: The 1960 Survey of Coded Character Sets: The Reasons for ASCII" (http://www.trailing-edge.com/~bobbemer/SURVEY.HTM), May 23, 2003 (from H.J. Smith, Jr., F.A. Williams, "Survey of punched card codes", Communications of the ACM 3, 639 & 642, December 1960).
• G.S. Robinson & C. Cargill (October 1996). "History and impact of computer standards". Computer, Vol. 29, No. 10, pp. 79–85.
• American National Standards Institute, et al. (1977). American National Standard Code for Information Interchange. The Institute.
• Charles E. Mackenzie (1980). Coded Character Sets, History and Development. Addison-Wesley. ISBN 0-201-14460-3.

External links
• A history of ASCII, its roots and predecessors (http://www.wps.com/projects/codes/index.html) by Tom Jennings (October 29, 2004) (accessed 2005-12-17)
• The ASCII subset of Unicode (http://www.unicode.org/charts/PDF/U0000.pdf)
• The Evolution of Character Codes, 1874–1968 (http://www.pobox.com/~enf/ascii/ascii.pdf)
• Scanned copy of American Standard Code for Information Interchange ASA standard X3.4-1963 (http://wps.com/projects/codes/X3.4-1963/index.html)
• ASCII (http://www.dmoz.org/Computers/Software/Globalization/Character_Encoding/Latin/ASCII/) at the Open Directory Project


Floating point

In computing, floating point describes a system for representing numbers that would be too large or too small to be represented as integers. Numbers are in general represented approximately to a fixed number of significant digits and scaled using an exponent. The base for the scaling is normally 2, 10 or 16. The typical number that can be represented exactly is of the form:

    significant digits × base^exponent

The term floating point refers to the fact that the radix point (decimal point, or, more commonly in computers, binary point) can "float"; that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated separately in the internal representation, and floating-point representation can thus be thought of as a computer realization of scientific notation. Over the years, several different floating-point representations have been used in computers; however, for the last ten years the most commonly encountered representation is that defined by the IEEE 754 Standard.

The advantage of floating-point representation over fixed-point (and integer) representation is that it can support a much wider range of values. For example, a fixed-point representation that has seven decimal digits with two decimal places can represent the numbers 12345.67, 123.45, 1.23 and so on, whereas a floating-point representation (such as the IEEE 754 decimal32 format) with seven decimal digits could in addition represent 1.234567, 123456.7, 0.00001234567, 1234567000000000, and so on. The floating-point format needs slightly more storage (to encode the position of the radix point), so when stored in the same space, floating-point numbers achieve their greater range at the expense of precision.

The speed of floating-point operations is an important measure of performance for computers in many application domains. It is measured in FLOPS.

Overview

A number representation (called a numeral system in mathematics) specifies some way of storing a number that may be encoded as a string of digits. The arithmetic is defined as a set of actions on the representation that simulate classical arithmetic operations.

There are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character (dot or comma) there. If the radix point is omitted, then it is implicitly assumed to lie at the right (least significant) end of the string (that is, the number is an integer). In fixed-point systems, some specific assumption is made about where the radix point is located in the string. For example, the convention could be that the string consists of 8 decimal digits with the decimal point in the middle, so that "00012345" has a value of 1.2345.

In scientific notation, the given number is scaled by a power of 10 so that it lies within a certain range, typically between 1 and 10, with the radix point appearing immediately after the first digit. The scaling factor, as a power of ten, is then indicated separately at the end of the number. For example, the revolution period of Jupiter's moon Io is 152853.5047 seconds. This is represented in standard-form scientific notation as 1.528535047 × 10^5 seconds.

Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of:
• A signed digit string of a given length in a given base (or radix). This is known as the significand, or sometimes the mantissa (see below) or coefficient. The radix point is not explicitly included, but is implicitly assumed to always lie in a certain position within the significand, often just after or just before the most significant digit, or to the right of the rightmost digit. This article will generally follow the convention that the radix point is just after the most significant (leftmost) digit. The length of the significand determines the precision to which numbers can be represented.


• A signed integer exponent, also referred to as the characteristic or scale, which modifies the magnitude of the number.

The significand is multiplied by the base raised to the power of the exponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent: to the right if the exponent is positive, or to the left if the exponent is negative.

Using base-10 (the familiar decimal notation) as an example, the number 152853.5047, which has ten decimal digits of precision, is represented as the significand 1528535047 together with an exponent of 5 (if the implied position of the radix point is after the first most significant digit, here 1). To recover the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 10^5 to give 1.528535047 × 10^5, or 152853.5047. In storing such a number, the base (10) need not be stored, since it will be the same for all numbers used, and can thus be inferred. It could as easily be written 1.528535047 E 5 (and sometimes is), where "E" is taken to mean "multiplied by ten to the power of", as long as the convention is known to all parties.

Symbolically, this final value is

    s × b^e

where s is the value of the significand (after taking into account the implied radix point), b is the base, and e is the exponent. Equivalently, this is

    s × b^(e − p + 1)

where s here means the integer value of the entire significand, ignoring any implied decimal point, and p is the precision, that is, the number of digits in the significand.

Historically, different bases have been used for representing floating-point numbers, with base 2 (binary) being the most common, followed by base 10 (decimal), and other less common varieties such as base 16 (hexadecimal notation). Floating-point numbers are rational numbers, because they can be represented as one integer divided by another. The base, however, determines the fractions that can be represented. For instance, 1/5 cannot be represented exactly as a floating-point number using a binary base, but can be represented exactly using a decimal base.

The way in which the significand, exponent and sign bits are internally stored on a computer is implementation-dependent. The common IEEE formats are described in detail later and elsewhere, but as an example, in the binary single-precision (32-bit) floating-point representation p = 24, and so the significand is a string of 24 bits (1s and 0s). For instance, the number π's first 33 bits are 11001001 00001111 11011010 10100010 0. Rounding to 24 bits in binary mode means attributing the 24th bit the value of the 25th, which yields 11001001 00001111 11011011. When this is stored using the IEEE 754 encoding, it becomes the significand s with e = 1 (where s is assumed to have a binary point to the right of the first bit) after a left-adjustment (or normalization) during which leading or padding zeros are truncated, should there be any; they do not matter in any case. Then, since the first bit of a non-zero binary significand is always 1, it need not be stored, giving an extra bit of precision. To calculate π, the formula is

    (1 + sum over n of bit_n × 2^−n) × 2^e,   n = 1 … 23

where n is the normalized significand's nth bit from the left. Normalization, which is reversed when 1 is added above, can be thought of as a form of compression; it allows a binary significand to be compressed into a field one bit shorter than the maximum precision, at the expense of extra processing.

The word "mantissa" is often used as a synonym for significand. Many people do not consider this usage to be correct, because the mantissa is traditionally defined as the fractional part of a logarithm, while the characteristic is the integer part. This terminology comes from the way logarithm tables were used before computers became commonplace. Log tables were actually tables of mantissas. Therefore, a mantissa is the logarithm of the significand.

Some other computer representations for non-integral numbers

Floating-point representation, in particular the standard IEEE format, is by far the most common way of representing an approximation to real numbers in computers, because it is efficiently handled in most large computer processors. However, there are alternatives:
• Fixed-point representation uses integer hardware operations controlled by a software implementation of a specific convention about the location of the binary or decimal point, for example, 6 bits or digits from the right (see the sketch after this list). The hardware to manipulate these representations is less costly than floating-point, and is also commonly used to perform integer operations. Binary fixed point is usually used in special-purpose applications on embedded processors that can only do integer arithmetic, but decimal fixed point is common in commercial applications.
• Binary-coded decimal is an encoding for decimal numbers in which each digit is represented by its own binary sequence.
• Where greater precision is desired, floating-point arithmetic can be implemented (typically in software) with variable-length significands (and sometimes exponents) that are sized depending on actual need and depending on how the calculation proceeds. This is called arbitrary-precision arithmetic.
• Some numbers (e.g., 1/3 and 0.1) cannot be represented exactly in binary floating-point no matter what the precision. Software packages that perform rational arithmetic represent numbers as fractions with integral numerator and denominator, and can therefore represent any rational number exactly. Such packages generally need to use "bignum" arithmetic for the individual integers.
• Computer algebra systems such as Mathematica and Maxima can often handle irrational numbers in a completely "formal" way, without dealing with a specific encoding of the significand. Such programs can evaluate expressions involving these quantities exactly, because they "know" the underlying mathematics.
• A representation based on natural logarithms is sometimes used in FPGA-based applications where most arithmetic operations are multiplication or division.[1] Like floating-point representation, this solution has precision for smaller numbers, as well as a wide range.
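A minimal binary fixed-point sketch in C, assuming the "6 bits from the right" convention mentioned in the first bullet; the type name q6 is invented for illustration:

    #include <stdio.h>
    #include <stdint.h>

    typedef int32_t q6;               /* value scaled by 2^6 */
    #define Q 6

    static q6     from_double(double x) { return (q6)(x * (1 << Q)); }
    static double to_double(q6 x)       { return (double)x / (1 << Q); }

    /* Add/subtract are plain integer ops; multiply must rescale,
       since the product of two scaled values carries 2^(2Q). */
    static q6 q6_mul(q6 a, q6 b) { return (q6)(((int64_t)a * b) >> Q); }

    int main(void)
    {
        q6 a = from_double(2.5), b = from_double(1.25);
        printf("%g\n", to_double(a + b));        /* 3.75 */
        printf("%g\n", to_double(q6_mul(a, b))); /* 3.125 */
        return 0;
    }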

Range of floating-point numbers

By allowing the radix point to be adjustable, floating-point notation allows calculations over a wide range of magnitudes, using a fixed number of digits, while maintaining good precision. For example, in a decimal floating-point system with three digits, the multiplication that humans would write as

    0.12 × 0.12 = 0.0144

would be expressed as

    (1.2 × 10^−1) × (1.2 × 10^−1) = (1.44 × 10^−2).

In a fixed-point system with the decimal point at the left, it would be

    0.120 × 0.120 = 0.014.

A digit of the result was lost because of the inability of the digits and decimal point to 'float' relative to each other within the digit string.

The range of floating-point numbers depends on the number of bits or digits used for representation of the significand (the significant digits of the number) and for the exponent. On a typical computer system, a 'double precision' (64-bit) binary floating-point number has a coefficient of 53 bits (one of which is implied), an exponent of 11 bits, and one sign bit. Positive floating-point numbers in this format have an approximate range of 10^−308 to 10^308 (because 308 is approximately 1023 × log10(2), since the range of the exponent is [−1022, 1023]). The complete range of the format is from about −10^308 through +10^308 (see IEEE 754).
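These double-precision limits can be inspected portably through the standard <float.h> macros; a small C check:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        printf("smallest normal : %g\n", DBL_MIN);  /* ~2.2e-308 */
        printf("largest finite  : %g\n", DBL_MAX);  /* ~1.8e+308 */
        printf("decimal digits  : %d\n", DBL_DIG);  /* 15 */
        return 0;
    }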


The number of normalized floating-point numbers in a system F(B, P, L, U) (where B is the base of the system, P is the precision of the system to P numbers, L is the smallest exponent representable in the system, and U is the largest exponent used in the system) is

    2 (B − 1) B^(P−1) (U − L + 1).

There is a smallest positive normalized floating-point number,

    Underflow level = UFL = B^L,

which has a 1 as the leading digit and 0 for the remaining digits of the significand, and the smallest possible value for the exponent. There is a largest floating-point number,

    Overflow level = OFL = (1 − B^−P) B^(U+1),

which has B − 1 as the value for each digit of the significand and the largest possible value for the exponent. For IEEE double precision (B = 2, P = 53, L = −1022, U = 1023), for instance, UFL = 2^−1022 ≈ 2.2 × 10^−308, matching the range quoted above. In addition, there are representable values strictly between −UFL and UFL: namely, zero and negative zero, as well as subnormal numbers.

History

In 1938, Konrad Zuse of Berlin completed the "Z1", the first mechanical binary programmable computer. It worked with 22-bit binary floating-point numbers having a 7-bit signed exponent, a 15-bit significand (including one implicit bit), and a sign bit. The memory used sliding metal parts to store 64 words of such numbers. The relay-based Z3, completed in 1941, implemented floating-point arithmetic exceptions, with representations for plus and minus infinity and undefined.

The first commercial computer with floating-point hardware was Zuse's Z4 computer, designed in 1942–1945. The Bell Laboratories Mark V computer implemented decimal floating point in 1946. The mass-produced vacuum-tube-based IBM 704 followed a decade later in 1954; it introduced the use of a biased exponent. For many decades after that, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", or to have "scientific computing" capability. It was not until 1989 that general-purpose computers had floating-point capability in hardware as standard.

The UNIVAC 1100/2200 series, introduced in 1962, supported two floating-point formats. Single precision used 36 bits, organized into a 1-bit sign, an 8-bit exponent, and a 27-bit significand. Double precision used 72 bits, organized as a 1-bit sign, an 11-bit exponent, and a 60-bit significand. The IBM 7094, introduced the same year, also supported single and double precision, with slightly different formats.

Prior to the IEEE-754 standard, computers used many different forms of floating point. These differed in the word sizes, the format of the representations, and the rounding behavior of operations. These differing systems implemented different parts of the arithmetic in hardware and software, with varying accuracy.

The IEEE-754 standard was created in the early 1980s, after word sizes of 32 bits (or 16 or 64) had been generally settled upon. This was based on a proposal from Intel, who were designing the i8087 numerical coprocessor.[2]

Among the innovations are these:
• A precisely specified encoding of the bits, so that all compliant computers would interpret bit patterns the same way. This made it possible to transfer floating-point numbers from one computer to another.
• A precisely specified behavior of the arithmetic operations. This meant that a given program, with given data, would always produce the same result on any compliant computer. This helped reduce the almost mystical reputation that floating-point computation had for seemingly nondeterministic behavior.
• The ability of exceptional conditions (overflow, divide by zero, etc.) to propagate through a computation in a benign manner and be handled by the software in a controlled way.


IEEE 754: floating point in modern computers

The IEEE has standardized the computer representation for binary floating-point numbers in IEEE 754. This standard is followed by almost all modern machines. Notable exceptions include IBM mainframes, which support IBM's own format (in addition to the IEEE 754 binary and decimal formats), and Cray vector machines, where the T90 series had an IEEE version, but the SV1 still uses Cray floating-point format.

Floating point precisions

IEEE 754:16-bit: Half (binary16)32-bit: Single (binary32), decimal3264-bit: Double (binary64), decimal64128-bit: Quadruple (binary128),decimal128Other:Minifloat · Extended precisionArbitrary-precision

The standard provides for many closely-related formats, differing in only a few details. Five of these formats are called basic formats, and two of these are especially widely used in computer hardware and languages:
• Single precision, called "float" in the C language family, and "real" or "real*4" in Fortran. This is a binary format that occupies 32 bits (4 bytes), and its significand has a precision of 24 bits (about 7 decimal digits).
• Double precision, called "double" in the C language family, and "double precision" or "real*8" in Fortran. This is a binary format that occupies 64 bits (8 bytes), and its significand has a precision of 53 bits (about 16 decimal digits).

The other basic formats are quadruple precision (128-bit) binary, as well as decimal floating point (64-bit) and "double" (128-bit) decimal floating point.

Less common formats include:
• Extended precision format, an 80-bit floating-point value. Sometimes "long double" is used for this in the C language family, though "long double" may be a synonym for "double" or may stand for quadruple precision.
• Half, also called float16, a 16-bit floating-point value.

Any integer with absolute value less than or equal to 2^24 can be exactly represented in the single precision format, and any integer with absolute value less than or equal to 2^53 can be exactly represented in the double precision format. Furthermore, a wide range of powers of 2 times such a number can be represented. These properties are sometimes used for purely integer data, to get 53-bit integers on platforms that have double precision floats but only 32-bit integers.

The standard specifies some special values, and their representation: positive infinity (+∞), negative infinity (−∞), a negative zero (−0) distinct from ordinary ("positive") zero, and "not a number" values (NaNs).

Comparison of floating-point numbers, as defined by the IEEE standard, is a bit different from usual integer comparison. Negative and positive zero compare equal, and every NaN compares unequal to every value, including itself. Apart from these special cases, more significant bits are stored before less significant bits. All values except NaN are strictly smaller than +∞ and strictly greater than −∞.

To a rough approximation, the bit representation of an IEEE binary floating-point number is proportional to its base-2 logarithm, with an average error of about 3%. (This is because the exponent field is in the more significant part of the datum.) This can be exploited in some applications, such as volume ramping in digital sound processing.

Although the 32-bit ("single") and 64-bit ("double") formats are by far the most common, the standard actually allows for many different precision levels. Computer hardware (for example, the Intel Pentium series and the Motorola 68000 series) often provides an 80-bit extended precision format, with a 15-bit exponent, a 64-bit significand, and no hidden bit.

There is controversy about the failure of most programming languages to make these extended precision formats available to programmers (although C and related programming languages usually provide these formats via the long double type on such hardware). System vendors may also provide additional extended formats (e.g. 128 bits) emulated in software.

A project for revising the IEEE 754 standard was started in 2000 (see IEEE 754 revision); it was completed and approved in June 2008. It includes decimal floating-point formats and a 16-bit floating-point format ("binary16"). binary16 has the same structure and rules as the older formats, with 1 sign bit, 5 exponent bits and 10 trailing significand bits. It is being used in the NVIDIA Cg graphics language, and in the openEXR standard.[3]

Internal representation

Floating-point numbers are typically packed into a computer datum as the sign bit, the exponent field, and the significand (mantissa), from left to right. For the IEEE 754 binary formats they are apportioned as follows:

Type Sign Exponent Significand Total bits Exponent bias Bits precision

Half (IEEE 754-2008) 1 5 10 16 15 11

Single 1 8 23 32 127 24

Double 1 11 52 64 1023 53

Quad 1 15 112 128 16383 113

While the exponent can be positive or negative, in binary formats it is stored as an unsigned number that has a fixed "bias" added to it. Values of all 0s in this field are reserved for the zeros and subnormal numbers; values of all 1s are reserved for the infinities and NaNs. The exponent range for normalized numbers is [−126, 127] for single precision, [−1022, 1023] for double, or [−16382, 16383] for quad. Normalized numbers exclude subnormal values, zeros, infinities, and NaNs.

In the IEEE binary interchange formats the leading 1 bit of a normalized significand is not actually stored in the computer datum. It is called the "hidden" or "implicit" bit. Because of this, single precision format actually has a significand with 24 bits of precision, double precision format has 53, and quad has 113.

For example, it was shown above that π, rounded to 24 bits of precision, has:
• sign = 0 ; e = 1 ; s = 110010010000111111011011 (including the hidden bit)

The sum of the exponent bias (127) and the exponent (1) is 128, so this is represented in single precision format as
• 0 10000000 10010010000111111011011 (excluding the hidden bit) = 40490FDB [4] as a hexadecimal number.
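A small C check of that encoding, assuming the platform's float is IEEE 754 binary32 (true on virtually all current hardware) and that M_PI is available (POSIX, not strict ISO C):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <math.h>

    int main(void)
    {
        float pi = (float)M_PI;
        uint32_t bits;
        memcpy(&bits, &pi, sizeof bits);       /* well-defined type pun */
        printf("0x%08X\n", (unsigned)bits);    /* prints 0x40490FDB */

        /* Fields: sign (1 bit), biased exponent (8), fraction (23). */
        printf("sign=%u exp=%u frac=0x%06X\n",
               (unsigned)(bits >> 31),
               (unsigned)((bits >> 23) & 0xFF), /* 128 = bias 127 + e 1 */
               (unsigned)(bits & 0x7FFFFF));
        return 0;
    }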

Special values

Signed zero

In the IEEE 754 standard, zero is signed, meaning that there exist both a "positive zero" (+0) and a "negative zero" (−0). In most run-time environments, positive zero is usually printed as "0", while negative zero may be printed as "-0". The two values behave as equal in numerical comparisons, but some operations return different results for +0 and −0. For instance, 1/(−0) returns negative infinity (exactly), while 1/+0 returns positive infinity (exactly); these two operations are, however, accompanied by a "divide by zero" exception. A sign-symmetric arccot operation will give different results for +0 and −0 without any exception. The difference between +0 and −0 is mostly noticeable for complex operations at so-called branch cuts.
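A hedged C demonstration of these behaviors; atan2 stands in here for the sign-symmetric operation (arccot) mentioned in the text:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double pz = 0.0, nz = -0.0;
        printf("%d\n", pz == nz);            /* 1: +0 and -0 compare equal */
        printf("%g\n", 1.0 / pz);            /* inf  (exactly) */
        printf("%g\n", 1.0 / nz);            /* -inf (exactly) */
        printf("%g %g\n",
               atan2(pz, -1.0),              /*  pi: the zeros' signs    */
               atan2(nz, -1.0));             /* -pi: change the result   */
        return 0;
    }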


Subnormal numbers

Subnormal values fill the underflow gap with values whose absolute distance from one another is the same as for adjacent values just outside the underflow gap. This is an improvement over the older practice of having just zero in the underflow gap, where underflowing results were replaced by zero (flush to zero).

Modern floating point hardware usually handles subnormal values (as well as normal values), and does not require software emulation for subnormals.

Infinities

The infinities of the extended real number line can be represented in IEEE floating point datatypes, just like ordinary floating point values like 1, 1.5, etc. They are not error values in any way, though they are often (but not always, as it depends on the rounding) used as replacement values when there is an overflow. Upon a divide by zero exception, a positive or negative infinity is returned as an exact result. An infinity can also be introduced as a numeral (like C's "INFINITY" macro, or "∞" if the programming language allows that syntax).

IEEE 754 requires infinities to be handled in a reasonable way, such as
• (+∞) + (+7) = (+∞)
• (+∞) × (−2) = (−∞)
• (+∞) × 0 = NaN – there is no meaningful thing to do
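These rules can be observed directly in C, whose <math.h> provides the INFINITY macro mentioned above (a small sketch of ours):

#include <math.h>
#include <stdio.h>

int main(void) {
    double inf = INFINITY;      /* C99 numeral for +infinity */
    printf("%g\n", inf + 7.0);  /* inf */
    printf("%g\n", inf * -2.0); /* -inf */
    printf("%g\n", inf * 0.0);  /* nan: no meaningful result exists */
    return 0;
}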

NaNs

IEEE 754 specifies a special value called "Not a Number" (NaN) to be returned as the result of certain "invalid" operations, such as 0/0, ∞×0, or sqrt(−1). There are actually two kinds of NaNs, signaling and quiet. Using a signaling NaN in any arithmetic operation (including numerical comparisons) will cause an "invalid" exception. Using a quiet NaN merely causes the result to be NaN too.

The representation of NaNs specified by the standard has some unspecified bits that could be used to encode the type of error, but there is no standard for that encoding. In theory, signaling NaNs could be used by a runtime system to extend the floating-point numbers with other special values, without slowing down the computations with ordinary values. Such extensions do not seem to be common, though.
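A quiet NaN and its propagation can be demonstrated in a few lines of C (a sketch, assuming IEEE 754 arithmetic):

#include <math.h>
#include <stdio.h>

int main(void) {
    volatile double zero = 0.0; /* volatile deters compile-time folding */
    double n = zero / zero;     /* an invalid operation yields a quiet NaN */
    printf("%g\n", n + 1.0);    /* nan: NaN propagates through arithmetic */
    printf("%d\n", n == n);     /* 0: NaN compares unequal even to itself */
    printf("%d\n", isnan(n));   /* 1: the portable way to test for NaN */
    return 0;
}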

Representable numbers, conversion and rounding

By their nature, all numbers expressed in floating-point format are rational numbers with a terminating expansion in the relevant base (for example, a terminating decimal expansion in base-10, or a terminating binary expansion in base-2). Irrational numbers, such as π or √2, or non-terminating rational numbers, must be approximated. The number of digits (or bits) of precision also limits the set of rational numbers that can be represented exactly. For example, the number 123456789 clearly cannot be exactly represented if only eight decimal digits of precision are available.

When a number is represented in some format (such as a character string) which is not a native floating-point representation supported in a computer implementation, then it will require a conversion before it can be used in that implementation. If the number can be represented exactly in the floating-point format then the conversion is exact. If there is not an exact representation then the conversion requires a choice of which floating-point number to use to represent the original value. The representation chosen will have a different value from the original, and the value thus adjusted is called the rounded value.

Whether or not a rational number has a terminating expansion depends on the base. For example, in base-10 the number 1/2 has a terminating expansion (0.5) while the number 1/3 does not (0.333...). In base-2 only rationals with denominators that are powers of 2 (such as 1/2 or 3/16) are terminating. Any rational with a denominator that has a prime factor other than 2 will have an infinite binary expansion. This means that numbers which appear to be short and exact when written in decimal format may need to be approximated when converted to binary floating-point. For example, the decimal number 0.1 is not representable in binary floating-point of any finite precision; the exact binary representation would have a "1100" sequence continuing endlessly:

e = −4; s = 1100110011001100110011001100110011...,

where, as previously, s is the significand and e is the exponent. When rounded to 24 bits this becomes

e = −4; s = 110011001100110011001101,

which is actually 0.100000001490116119384765625 in decimal.

As a further example, the real number π, represented in binary as an infinite series of bits, is

11.0010010000111111011010101000100010000101101000110000100011010011...

but is

11.0010010000111111011011

when approximated by rounding to a precision of 24 bits. In binary single-precision floating-point, this is represented as s = 1.10010010000111111011011 with e = 1. This has a decimal value of

3.1415927410125732421875,

whereas a more accurate approximation of the true value of π is

3.1415926535897932384626433832795...

The result of rounding differs from the true value by about 0.03 parts per million, and matches the decimal representation of π in the first 7 digits. The difference is the discretization error and is limited by the machine epsilon.

The arithmetical difference between two consecutive representable floating-point numbers which have the same exponent is called a unit in the last place (ULP). For example, if there is no representable number lying between the representable numbers 1.45a70c22 and 1.45a70c24 (written in hexadecimal), the ULP is 2×16^−8, or 2^−31. For numbers with an exponent of 0, a ULP is exactly 2^−23 or about 10^−7 in single precision, and about 10^−16 in double precision. The mandated behavior of IEEE-compliant hardware is that the result be within one-half of a ULP.
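The rounded value of 0.1 quoted above can be reproduced in C simply by printing the stored value with more digits than the format can honor (a sketch assuming IEEE 754 single and double formats):

#include <stdio.h>

int main(void) {
    printf("%.30f\n", (double)0.1f); /* 0.100000001490116119384765625000 */
    printf("%.30f\n", 0.1);          /* 0.100000000000000005551115123126 */
    return 0;
}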

Rounding modes

Rounding is used when the exact result of a floating-point operation (or a conversion to floating-point format) would need more digits than there are digits in the significand. There are several different rounding schemes (or rounding modes). Historically, truncation was the typical approach. Since the introduction of IEEE 754, the default method (round to nearest, ties to even, sometimes called Banker's Rounding) is more commonly used. This method rounds the ideal (infinitely precise) result of an arithmetic operation to the nearest representable value, and gives that representation as the result.[5] In the case of a tie, the value that would make the significand end in an even digit is chosen. The IEEE 754 standard requires the same rounding to be applied to all fundamental algebraic operations, including square root and conversions, when there is a numeric (non-NaN) result. It means that the results of IEEE 754 operations are completely determined in all bits of the result, except for the representation of NaNs. ("Library" functions such as cosine and log are not mandated.)

Alternative rounding options are also available. IEEE 754 specifies the following rounding modes:
• round to nearest, where ties round to the nearest even digit in the required position (the default and by far the most common mode)
• round to nearest, where ties round away from zero (optional for binary floating-point and commonly used in decimal)
• round up (toward +∞; negative results thus round toward zero)
• round down (toward −∞; negative results thus round away from zero)
• round toward zero (truncation; it is similar to the common behavior of float-to-integer conversions, which convert −3.9 to −3)

Alternative modes are useful when the amount of error being introduced must be bounded. Applications that require a bounded error are multi-precision floating-point, and interval arithmetic.

A further use of rounding is when a number is explicitly rounded to a certain number of decimal (or binary) places, as when rounding a result to euros and cents (two decimal places).
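C99 exposes these rounding modes through <fenv.h>. The following sketch performs the same division under two different modes; note that the FENV_ACCESS pragma is required by the standard but not honored by every compiler:

#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

int main(void) {
    volatile double x = 1.0, y = 3.0; /* volatile deters constant folding */

    fesetround(FE_DOWNWARD);          /* round toward -infinity */
    printf("%.20f\n", x / y);
    fesetround(FE_UPWARD);            /* round toward +infinity */
    printf("%.20f\n", x / y);         /* differs in the last place */

    fesetround(FE_TONEAREST);         /* restore the default mode */
    return 0;
}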

Floating-point arithmetic operations

For ease of presentation and understanding, decimal radix with 7 digit precision will be used in the examples, as in the IEEE 754 decimal32 format. The fundamental principles are the same in any radix or precision, except that normalization is optional (it does not affect the numerical value of the result). Here, s denotes the significand and e denotes the exponent.

Addition and subtraction

A simple method to add floating-point numbers is to first represent them with the same exponent. In the example below, the second number is shifted right by three digits, and we then proceed with the usual addition method:

123456.7 = 1.234567 × 10^5

101.7654 = 1.017654 × 10^2 = 0.001017654 × 10^5

Hence:

123456.7 + 101.7654 = (1.234567 × 10^5) + (1.017654 × 10^2)

= (1.234567 × 10^5) + (0.001017654 × 10^5)

= (1.234567 + 0.001017654) × 10^5

= 1.235584654 × 10^5

In detail:

e=5; s=1.234567 (123456.7)

+ e=2; s=1.017654 (101.7654)

e=5; s=1.234567

+ e=5; s=0.001017654 (after shifting)

--------------------

e=5; s=1.235584654 (true sum: 123558.4654)

This is the true result, the exact sum of the operands. It will be rounded to seven digits and then normalized if necessary. The final result is

e=5; s=1.235585 (final sum: 123558.5)

Note that the low 3 digits of the second operand (654) are essentially lost. This is round-off error. In extreme cases, the sum of two non-zero numbers may be equal to one of them:

e=5; s=1.234567

+ e=−3; s=9.876543

e=5; s=1.234567

+ e=5; s=0.00000009876543 (after shifting)


----------------------

e=5; s=1.23456709876543 (true sum)

e=5; s=1.234567 (after rounding/normalization)

Another problem of loss of significance occurs when two close numbers are subtracted. In the following example, e = 5; s = 1.234571 and e = 5; s = 1.234567 are representations of the rationals 123457.1467 and 123456.659.

e=5; s=1.234571

− e=5; s=1.234567
----------------

e=5; s=0.000004

e=−1; s=4.000000 (after rounding/normalization)

The best representation of this difference is e = −1; s = 4.877000, which differs by more than 20% from e = −1; s = 4.000000. In extreme cases, the final result may be zero even though an exact calculation may be several million. This cancellation illustrates the danger in assuming that all of the digits of a computed result are meaningful. Dealing with the consequences of these errors is a topic in numerical analysis; see also Accuracy problems.
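The same cancellation can be reproduced in C single precision (about 7 significant decimal digits), using the rationals from the example above (a sketch of ours):

#include <stdio.h>

int main(void) {
    float a = 123457.1467f; /* rounds to 123457.1484375 in single precision */
    float b = 123456.659f;  /* rounds to 123456.65625 */
    printf("%f\n", a - b);  /* prints 0.492188; the true difference
                               is 0.4877 */
    return 0;
}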

Multiplication and division

To multiply, the significands are multiplied while the exponents are added, and the result is rounded and normalized.

e=3; s=4.734612

× e=5; s=5.417242

-----------------------

e=8; s=25.648538980104 (true product)

e=8; s=25.64854 (after rounding)

e=9; s=2.564854 (after normalization)

Division is done similarly, but is more complicated. There are no cancellation or absorption problems with multiplication or division, though small errors may accumulate as operations are performed repeatedly.[6] In practice, the way these operations are carried out in digital logic can be quite complex (see Booth's multiplication algorithm and digital division).[7] For a fast, simple method, see the Horner method.

Dealing with exceptional cases

Floating-point computation in a computer can run into three kinds of problems:
• An operation can be mathematically illegal, such as division by zero.
• An operation can be legal in principle, but not supported by the specific format, for example, calculating the square root of −1 or the inverse sine of 2 (both of which result in complex numbers).
• An operation can be legal in principle, but the result can be impossible to represent in the specified format, because the exponent is too large or too small to encode in the exponent field. Such an event is called an overflow (exponent too large), underflow (exponent too small) or denormalization (precision loss).

Prior to the IEEE standard, such conditions usually caused the program to terminate, or triggered some kind of trap that the programmer might be able to catch. How this worked was system-dependent, meaning that floating-point programs were not portable.

The original IEEE 754 standard (from 1984) took a first step towards a standard way for IEEE 754 based operations to record that an error occurred. Here we are ignoring trapping (optional in the 1984 version) and "alternate exception handling modes" (replacing trapping in the 2008 version, but still optional), and just looking at the required default method of handling exceptions according to IEEE 754. Arithmetic exceptions are (by default) required to be recorded in "sticky" error indicator bits. That they are "sticky" means that they are not reset by the next (arithmetic) operation, but stay set until explicitly reset. By default, an operation always returns a result according to specification without interrupting computation. For instance, 1/0 returns +∞, while also setting the divide-by-zero error bit.

The original IEEE 754 standard, however, failed to recommend operations to handle such sets of arithmetic error bits. So while these were implemented in hardware, initially programming language implementations did not automatically provide a means to access them (apart from assembler). Over time some programming language standards (e.g., C and Fortran) have been updated to specify methods to access and change status and error bits. The 2008 version of the IEEE 754 standard now specifies a few operations for accessing and handling the arithmetic error bits. The programming model is based on a single thread of execution, and use of these bits by multiple threads has to be handled by a means outside of the standard.

IEEE 754 specifies five arithmetic errors that are to be recorded in "sticky bits":
• inexact, set if the rounded (and returned) value is different from the mathematically exact result of the operation.
• underflow, set if the rounded value is tiny (as specified in IEEE 754) and inexact (or maybe limited to if it has denormalisation loss, as per the 1984 version of IEEE 754), returning a subnormal value including the zeros.
• overflow, set if the absolute value of the rounded value is too large to be represented. An infinity or maximal finite value is returned, depending on which rounding is used.
• divide-by-zero, set if the result is infinite given finite operands, returning an infinity, either +∞ or −∞.
• invalid, set if a real-valued result cannot be returned, e.g. sqrt(−1) or 0/0, returning a quiet NaN.
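In C, the sticky bits are reachable through <fenv.h> (C99). A minimal sketch, again subject to the compiler honoring FENV_ACCESS:

#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

int main(void) {
    volatile double zero = 0.0;       /* volatile deters constant folding */
    feclearexcept(FE_ALL_EXCEPT);

    double r = 1.0 / zero;            /* returns +inf, sets divide-by-zero */
    if (fetestexcept(FE_DIVBYZERO))
        printf("divide-by-zero flag set, result = %g\n", r);

    double s = 2.0 * 3.0;             /* exact operation: flag stays set */
    (void)s;
    if (fetestexcept(FE_DIVBYZERO))
        printf("the flag is sticky: still set after an exact operation\n");
    return 0;
}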

Accuracy problems

The fact that floating-point numbers cannot precisely represent all real numbers, and that floating-point operations cannot precisely represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.

For example, the non-representability of 0.1 and 0.01 (in binary) means that the result of attempting to square 0.1 is neither 0.01 nor the representable number closest to it. In 24-bit (single precision) representation, 0.1 (decimal) was given previously as e = −4; s = 110011001100110011001101, which is

0.100000001490116119384765625 exactly.

Squaring this number gives

0.010000000298023226097399174250313080847263336181640625 exactly.

Squaring it with single-precision floating-point hardware (with rounding) gives

0.010000000707805156707763671875 exactly.

But the representable number closest to 0.01 is

0.009999999776482582092285156250 exactly.

Also, the non-representability of π (and π/2) means that an attempted computation of tan(π/2) will not yield a result of infinity, nor will it even overflow. It is simply not possible for standard floating-point hardware to attempt to compute tan(π/2), because π/2 cannot be represented exactly. This computation in C:

#include <math.h>

/* Enough digits to be sure we get the correct approximation. */
double pi = 3.1415926535897932384626433832795;
double z = tan(pi/2.0);

will give a result of 16331239353195370.0. In single precision (using the tanf function), the result will be −22877332.0.


By the same token, an attempted computation of sin(π) will not yield zero. The result will be (approximately) 0.1225 × 10^−15 in double precision, or −0.8742 × 10^−7 in single precision.[8]

While floating-point addition and multiplication are both commutative (a + b = b + a and a×b = b×a), they are not necessarily associative. That is, (a + b) + c is not necessarily equal to a + (b + c). Using 7-digit decimal arithmetic:

a = 1234.567, b = 45.67834, c = 0.0004

(a + b) + c:

1234.567 (a)

+ 45.67834 (b)

____________

1280.24534 rounds to 1280.245

1280.245 (a + b)

+ 0.0004 (c)

____________

1280.2454 rounds to 1280.245 <--- (a + b) + c

a + (b + c):

45.67834 (b)

+ 0.0004 (c)

____________

45.67874

45.67874 (b + c)

+ 1234.567 (a)

____________

1280.24574 rounds to 1280.246 <--- a + (b + c)
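The same effect is easy to provoke in binary arithmetic. A C sketch of ours using single precision (not the 7-digit decimal arithmetic above, but the same phenomenon):

#include <stdio.h>

int main(void) {
    float big = 1.0e20f;
    printf("%g\n", (big - big) + 1.0f);  /* 1: difference taken first */
    printf("%g\n", big + (-big + 1.0f)); /* 0: the 1 is absorbed by big
                                            before it can contribute */
    return 0;
}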

They are also not necessarily distributive. That is, (a + b) × c may not be the same as a×c + b×c:

1234.567 × 3.333333 = 4115.223

1.234567 × 3.333333 = 4.115223

4115.223 + 4.115223 = 4119.338

but

1234.567 + 1.234567 = 1235.802

1235.802 × 3.333333 = 4119.340

In addition to loss of significance, inability to represent numbers such as π and 0.1 exactly, and other slight inaccuracies, the following phenomena may occur:
• Cancellation: subtraction of nearly equal operands may cause extreme loss of accuracy. This is perhaps the most common and serious accuracy problem.
• Conversions to integer are not intuitive: converting (63.0/9.0) to integer yields 7, but converting (0.63/0.09) may yield 6. This is because conversions generally truncate rather than round. Floor and ceiling functions may produce answers which are off by one from the intuitively expected value.
• Limited exponent range: results might overflow yielding infinity, or underflow yielding a subnormal number or zero. In these cases precision will be lost.
• Testing for safe division is problematic: checking that the divisor is not zero does not guarantee that a division will not overflow.
• Testing for equality is problematic: two computational sequences that are mathematically equal may well produce different floating-point values. Programmers often perform comparisons within some tolerance (often a decimal constant, itself not accurately represented), but that doesn't necessarily make the problem go away.

Machine precision"Machine precision" is a quantity that characterizes the accuracy of a floating point system. It is also known as unitroundoff or machine epsilon. Usually denoted Εmach, its value depends on the particular rounding being used.With rounding to zero,

whereas rounding to nearest,

This is important since it bounds the relative error in representing any non-zero real number x within the normalizedrange of a floating point system:

Minimizing the effect of accuracy problems

Because of the issues noted above, naive use of floating-point arithmetic can lead to many problems. The creation of thoroughly robust floating-point software is a complicated undertaking, and a good understanding of numerical analysis is essential.

In addition to careful design of programs, careful handling by the compiler is required. Certain "optimizations" that compilers might make (for example, reordering operations) can work against the goals of well-behaved software. There is some controversy about the failings of compilers and language designs in this area. See the external references at the bottom of this article.

Binary floating-point arithmetic is at its best when it is simply being used to measure real-world quantities over a wide range of scales (such as the orbital period of Io or the mass of the proton), and at its worst when it is expected to model the interactions of quantities expressed as decimal strings that are expected to be exact. An example of the latter case is financial calculations. For this reason, financial software tends not to use a binary floating-point number representation.[9] The "decimal" data type of the C# and Python programming languages, and the IEEE 754-2008 decimal floating-point standard, are designed to avoid the problems of binary floating-point representations when applied to human-entered exact decimal values, and to make the arithmetic always behave as expected when numbers are printed in decimal.

Small errors in floating-point arithmetic can grow when mathematical algorithms perform operations an enormous number of times. A few examples are matrix inversion, eigenvector computation, and differential equation solving. These algorithms must be very carefully designed if they are to work well.

Expectations from mathematics may not be realised in the field of floating-point computation. For example, identities that hold exactly for real numbers cannot be counted on when the quantities involved are the result of floating-point computation.

A detailed treatment of the techniques for writing high-quality floating-point software is beyond the scope of this article, and the reader is referred to the references at the bottom of this article. Descriptions of a few simple techniques follow.

The use of the equality test (if (x==y) ...) is usually not recommended when expectations are based on results from pure mathematics. Such tests are sometimes replaced with "fuzzy" comparisons (if (abs(x-y) < epsilon) ..., where epsilon is sufficiently small and tailored to the application, such as 1.0E−13). The wisdom of doing this varies greatly. It is often better to organize the code in such a way that such tests are unnecessary.

An awareness of when loss of significance can occur is useful. For example, if one is adding a very large number of numbers, the individual addends are very small compared with the sum. This can lead to loss of significance. A typical addition would then be something like

3253.671

+ 3.141276

--------

3256.812

The low 3 digits of the addends are effectively lost. Suppose, for example, that one needs to add many numbers, all approximately equal to 3. After 1000 of them have been added, the running sum is about 3000; the lost digits are not regained. The Kahan summation algorithm may be used to reduce the errors (a minimal sketch is given at the end of this section).

Computations may be rearranged in a way that is mathematically equivalent but less prone to error. As an example, Archimedes approximated π by calculating the perimeters of polygons inscribing and circumscribing a circle, starting with hexagons, and successively doubling the number of sides. The recurrence formula for the circumscribed polygon is:

t(0) = 1/√3
t(i+1) = (√(t(i)² + 1) − 1) / t(i)      (first form)
       = t(i) / (√(t(i)² + 1) + 1)      (second form)
π ≈ 6 × 2^i × t(i), converging as i increases

Here is a computation using IEEE "double" (a significand with 53 bits of precision) arithmetic:

 i    6 × 2^i × t(i), first form    6 × 2^i × t(i), second form

 0    '.4641016151377543863         '.4641016151377543863
 1    '.2153903091734710173         '.2153903091734723496
 2    '596599420974940120           '596599420975006733
 3    '60862151314012979            '60862151314352708
 4    '27145996453136334            '27145996453689225
 5    '8730499801259536             '8730499798241950
 6    '6627470548084133             '6627470568494473
 7    '6101765997805905             '6101766046906629
 8    '70343230776862               '70343215275928
 9    '37488171150615               '37487713536668
10    '9278733740748                '9273850979885
11    '7256228504127                '7220386148377
12    '717412858693                 '707019992125
13    '189011456060                 '78678454728
14    '717412858693                 '46593073709
15    '19358822321783               '8571730119
16    '717412858693                 '6566394222
17    '810075796233302              '6065061913
18    '717412858693                 '939728836
19    '4061547378810956             '908393901
20    '05434924008406305            '900560168
21    '00068646912273617            '8608396
22    '349453756585929919           '8122118
23    '00068646912273617            '95552
24    '.2245152435345525443         '68907
25                                  ''62246
26                                  ''62246
27                                  ''62246
28                                  ''62246

(Within each entry, a leading apostrophe abbreviates the leading digits that the value shares with the true value of π; a double apostrophe marks a longer shared prefix.)

The true value is π = 3.14159265358979323846264338327...

While the two forms of the recurrence formula are clearly equivalent, the first subtracts 1 from a number extremely close to 1, leading to huge cancellation errors. Note that, as the recurrence is applied repeatedly, the accuracy improves at first, but then deteriorates. It never gets better than about 8 digits, even though 53-bit arithmetic should be capable of about 16 digits of precision. When the second form of the recurrence is used, the value converges to 15 digits of precision.
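For reference, here is a minimal C sketch of the Kahan summation algorithm mentioned earlier in this section (the function name kahan_sum is ours): a running compensation term recaptures the low-order digits that plain summation discards.

#include <stdio.h>

float kahan_sum(const float *x, int n) {
    float sum = 0.0f, comp = 0.0f; /* comp accumulates the lost low bits */
    for (int i = 0; i < n; i++) {
        float y = x[i] - comp;     /* corrected next term */
        float t = sum + y;         /* low bits of y are lost here... */
        comp = (t - sum) - y;      /* ...and recovered algebraically */
        sum = t;
    }
    return sum;
}

int main(void) {
    enum { N = 10000 };
    float xs[N];
    for (int i = 0; i < N; i++) xs[i] = 3.1415f;

    float naive = 0.0f;
    for (int i = 0; i < N; i++) naive += xs[i];

    printf("naive: %.3f\n", naive);            /* drifts by many ULPs */
    printf("kahan: %.3f\n", kahan_sum(xs, N)); /* close to 31415.0 */
    return 0;
}

Note that compiler optimizations which relax IEEE semantics (such as -ffast-math) can eliminate the compensation term and defeat the algorithm.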

See also

• Computable number
• Decimal floating point
• double precision
• Fixed-point arithmetic
• FLOPS
• half precision
• IEEE 754 – Standard for Binary Floating-Point Arithmetic
• IBM Floating Point Architecture
• Microsoft Binary Format
• minifloat
• Q (number format) for constant resolution
• quad precision
• Significant digits
• single precision
• Gal's accurate tables
• Coprocessor

Notes and references

[1] Haohuan Fu, Oskar Mencer, Wayne Luk (June 2010). "Comparing Floating-point and Logarithmic Number Representations for Reconfigurable Acceleration" (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4042464). IEEE Conference on Field Programmable Technology: 337. doi:10.1109/FPT.2006.270342.
[2] Severance, Charles (20 Feb 1998). "An Interview with the Old Man of Floating-Point" (http://www.eecs.berkeley.edu/~wkahan/ieee754status/754story.html).
[3] openEXR (http://www.openexr.com/about.html)
[4] http://babbage.cs.qc.edu/IEEE-754/32bit.html
[5] Computer hardware doesn't necessarily compute the exact value; it simply has to produce the equivalent rounded result as though it had computed the infinitely precise result.
[6] Goldberg, David (1991). "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (http://docs.sun.com/source/806-3568/ncg_goldberg.html). ACM Computing Surveys 23: 5–48. doi:10.1145/103162.103163. Retrieved 2010-09-02.
[7] The enormous complexity of modern division algorithms once led to a famous error. An early version of the Intel Pentium chip was shipped with a division instruction that, on rare occasions, gave slightly incorrect results. Many computers had been shipped before the error was discovered. Until the defective computers were replaced, patched versions of compilers were developed that could avoid the failing cases. See Pentium FDIV bug.
[8] But an attempted computation of cos(π) yields −1 exactly. Since the derivative is nearly zero near π, the effect of the inaccuracy in the argument is far smaller than the spacing of the floating-point numbers around −1, and the rounded result is exact.
[9] General Decimal Arithmetic (http://speleotrove.com/decimal/)


Further reading

• What Every Computer Scientist Should Know About Floating-Point Arithmetic (http://docs.sun.com/source/806-3568/ncg_goldberg.html), by David Goldberg, published in the March, 1991 issue of Computing Surveys.
• Donald Knuth. The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89684-2. Section 4.2: Floating Point Arithmetic, pp. 214–264.
• Press et al. Numerical Recipes in C++. The Art of Scientific Computing, ISBN 0-521-75033-4.

External links

• Kahan, William and Darcy, Joseph (2001). How Java's floating-point hurts everyone everywhere. Retrieved September 5, 2003 from http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf.
• Survey of Floating-Point Formats (http://www.mrob.com/pub/math/floatformats.html). This page gives a very brief summary of floating-point formats that have been used over the years.
• The pitfalls of verifying floating-point computations (http://hal.archives-ouvertes.fr/hal-00128124/en/), by David Monniaux, also printed in ACM Transactions on Programming Languages and Systems (TOPLAS), May 2008: a compendium of non-intuitive behaviours of floating-point on popular architectures, with implications for program verification and testing.
• The www.opencores.org website (http://www.opencores.org) contains open source floating point IP cores for the implementation of floating point operators in FPGA or ASIC devices. The project double_fpu contains verilog source code of a double precision floating point unit. The project fpuvhdl contains vhdl source code of a single precision floating point unit.


FLOPS

Computer Performance

Name         FLOPS
yottaFLOPS   10^24
zettaFLOPS   10^21
exaFLOPS     10^18
petaFLOPS    10^15
teraFLOPS    10^12
gigaFLOPS    10^9
megaFLOPS    10^6
kiloFLOPS    10^3

In computing, FLOPS (or flops or flop/s) is an acronym meaning FLoating point OPerations per Second. FLOPS is a measure of a computer's performance, especially in fields of scientific calculations that make heavy use of floating point calculations, similar to the older, simpler measure of instructions per second. Since the final S stands for "second", conservative speakers consider "FLOPS" as both the singular and plural of the term, although the singular "FLOP" is frequently encountered. Alternatively, the singular FLOP (or flop) is used as an abbreviation for "FLoating-point OPeration", and a flop count is a count of these operations (e.g., required by a given algorithm or computer program). In this context, "flops" is simply the plural rather than a rate.

NEC's SX-9 supercomputer was the world's first vector processor to exceed 100 gigaFLOPS per single core. IBM's supercomputer dubbed Roadrunner was the first to reach a sustained performance of 1 petaFLOPS measured by the Linpack benchmark. As of June 2010, the 500 fastest supercomputers in the world combine for 32.4 petaFLOPS of computing power.[1]

For comparison, a hand-held calculator must perform relatively few FLOPS. Each calculation request, such as to add or subtract two numbers, requires only a single operation, so there is rarely any need for its response time to exceed what the operator can physically use. A computer response time below 0.1 second in a calculation context is usually perceived as instantaneous by a human operator,[2] so a simple calculator needs only about 10 FLOPS to be considered functional.

Measuring performance

In order for FLOPS to be useful as a measure of floating-point performance, a standard benchmark must be available on all computers of interest. One example is the LINPACK benchmark.

There are many factors in computer performance other than raw floating-point computing speed, such as I/O performance, interprocessor communication, cache coherence, and the memory hierarchy. This means that supercomputers are in general only capable of a fraction of their "theoretical peak" FLOPS throughput (obtained by adding together the theoretical peak FLOPS performance of every element of the system). Even when operating on large highly parallel problems, their performance will be bursty, mostly due to the residual effects of Amdahl's law. Real benchmarks therefore measure both peak actual FLOPS performance as well as sustained FLOPS performance.

Supercomputer ratings, like TOP500, usually derive theoretical peak FLOPS as a product of the number of cores, the cycles per second each core runs at, and the number of double-precision FLOPs each core can ideally perform per cycle, thanks to SIMD or otherwise. Although different processor architectures achieve different degrees of parallelism on a single core, most mainstream ones, like recent Xeon and Itanium models, claim a factor of four. Some ratings adopted this factor as a given constant and use it to compute peak values for all architectures, often leading to a large difference from sustained performance.

For ordinary (non-scientific) applications, integer operations (measured in MIPS) are far more common. Measuring floating point operation speed, therefore, does not predict accurately how the processor will perform on just any problem. However, for many scientific jobs, such as analysis of data, a FLOPS rating is effective.

Historically, the earliest reliably documented serious use of the floating point operation as a metric appears to be an AEC justification to Congress for purchasing a Control Data CDC 6600 in the mid-1960s.

The terminology is currently so confusing that until April 24, 2006, U.S. export control was based upon measurement of "Composite Theoretical Performance" (CTP) in millions of "Theoretical Operations Per Second" or MTOPS. On that date, however, the U.S. Department of Commerce's Bureau of Industry and Security amended the Export Administration Regulations to base controls on Adjusted Peak Performance (APP) in Weighted TeraFLOPS (WT).
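As a worked illustration of the cores × clock × FLOPs-per-cycle product described above, the following C fragment uses purely hypothetical figures (the numbers are illustrative only, not taken from any rating):

#include <stdio.h>

int main(void) {
    double cores = 4.0;            /* hypothetical 4-core CPU */
    double clock_hz = 3.0e9;       /* 3.0 GHz */
    double flops_per_cycle = 4.0;  /* e.g. via SIMD; machine-dependent */
    double peak = cores * clock_hz * flops_per_cycle;
    printf("theoretical peak: %.1f GFLOPS\n", peak / 1.0e9); /* 48.0 */
    return 0;
}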

Records

In June 2006, a new computer was announced by the Japanese research institute RIKEN, the MDGRAPE-3. The computer's performance tops out at one petaFLOPS, almost two times faster than the Blue Gene/L, but MDGRAPE-3 is not a general purpose computer, which is why it does not appear in the Top500.org list. It has special-purpose pipelines for simulating molecular dynamics.

In 2007, Intel Corporation unveiled the experimental multi-core POLARIS chip, which achieves 1 TFLOPS at 3.13 GHz. The 80-core chip can raise this result to 2 TFLOPS at 6.26 GHz, although the thermal dissipation at this frequency exceeds 190 watts.[3]

On June 26, 2007, IBM announced the second generation of its top supercomputer, dubbed Blue Gene/P and designed to continuously operate at speeds exceeding one petaFLOPS. When configured to do so, it can reach speeds in excess of three petaFLOPS.[4]

In June 2007, Top500.org reported the fastest computer in the world to be the IBM Blue Gene/L supercomputer, measuring a peak of 596 TFLOPS.[5] The Cray XT4 hit second place with 101.7 TFLOPS.

On October 25, 2007, NEC Corporation of Japan issued a press release[6] announcing its SX series model SX-9, claiming it to be the world's fastest vector supercomputer. The SX-9 features the first CPU capable of a peak vector performance of 102.4 gigaFLOPS per single core.

On February 4, 2008, the NSF and the University of Texas opened full scale research runs on an AMD, Sun supercomputer named Ranger,[7] the most powerful supercomputing system in the world for open science research, which operates at a sustained speed of half a petaFLOPS.

On May 25, 2008, an American military supercomputer built by IBM, named Roadrunner, reached the computing milestone of one petaFLOPS by processing more than 1.026 quadrillion calculations per second. It headed the June 2008[8] and November 2008[9] TOP500 lists of the most powerful supercomputers (excluding grid computers). The computer is located at Los Alamos National Laboratory in New Mexico, and its name refers to the state bird of New Mexico, the Greater Roadrunner.[10]

In June 2008, AMD released the ATI Radeon HD 4800 series, which are reported to be the first GPUs to achieve the one teraFLOPS scale. On August 12, 2008, AMD released the ATI Radeon HD 4870X2 graphics card with two Radeon R770 GPUs totaling 2.4 teraFLOPS.

In November 2008, an upgrade to the Cray XT Jaguar supercomputer at the Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) raised the system's computing power to a peak 1.64 petaFLOPS, or over a quadrillion mathematical calculations per second, making Jaguar the world's first petaFLOPS system dedicated to open research. In early 2009 the supercomputer was named after a mythical creature, Kraken. Kraken was declared the world's fastest university-managed supercomputer and sixth fastest overall in the 2009 TOP500 list, which is the global standard for ranking supercomputers. Kraken was upgraded in 2010, making it faster and more powerful.

In 2009, the Cray Jaguar performed at 1.75 petaFLOPS, beating the IBM Roadrunner for the number one spot on the TOP500 list.[11]

In October 2010, China unveiled the Tianhe-I, a supercomputer that operates at a peak computing rate of 2.5 petaFLOPS.[12] [13]

As of 2010, the fastest six-core PC processor has a theoretical peak performance of 107.55 GFLOPS (Intel Core i7 980 XE) in double precision calculations. GPUs are considerably more powerful. For example, NVIDIA Tesla C2050 GPU computing processors perform around 515 GFLOPS[14] in double precision calculations, while the AMD FireStream 9270 peaks at 240 GFLOPS.[15] In single precision performance, NVIDIA Tesla C2050 computing processors perform around 1.03 TFLOPS, while AMD FireStream 9270 cards peak at 1.2 TFLOPS. Both NVIDIA and AMD's consumer gaming GPUs may reach higher FLOPS. For example, AMD's Hemlock XT 5970[15] reaches 928 GFLOPS in double precision calculations with two GPUs on board, while the NVIDIA GTX 480 reaches 672 GFLOPS[14] with one GPU on board.

Distributed computing uses the Internet to link personal computers to achieve more FLOPS:
• Folding@Home is, as of April 2010, sustaining over 6.2 PFLOPS (measured in "x86" FLOPS, the standard used by the other distributed computing projects in this section, which differs from "native" FLOPS),[16] the first computing project of any kind to cross the 1, 2, 3, 4 and 5 petaFLOPS milestones. This level of performance is primarily enabled by the cumulative effort of a vast array of PlayStation 3, CPU, and powerful GPU units.[17]
• The entire BOINC network averages about 5.1 PFLOPS as of April 21, 2010.[18]
• As of April 2010, MilkyWay@Home computes at over 1.6 PFLOPS, with a large amount of this work coming from GPUs.[19]
• As of April 2010, SETI@Home, which began in 1999, averages more than 730 TFLOPS.[20]
• As of April 2010, Einstein@Home is crunching more than 210 TFLOPS.[21]
• As of April 2010, GIMPS, which began in 1996, is sustaining 44 TFLOPS.[22]

Future developments

In May 2008, a collaboration was announced between NASA, SGI and Intel to build a 1 PFLOPS computer, Pleiades, in 2009, scaling up to 10 PFLOPS by 2012.[23] At the same time, IBM intends to build a 20 PFLOPS supercomputer, Sequoia, at Lawrence Livermore National Laboratory by 2011. IBM also intends to have an exaFLOPS supercomputer functional by the 2010s to support the Square Kilometre Array. It will be built in either South Africa or Australia, depending on who wins the bid for the SKA.[24]

Given the current speed of progress, supercomputers are projected to reach 1 exaFLOPS (EFLOPS) in 2019.[25]

Cray, Inc. announced in December 2009 a plan to build a 1 EFLOPS supercomputer before 2020.[26] Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaFLOPS (ZFLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[27] Such systems might be built around 2030.[28]

On March 4, 2010, Solomon Assefa et al. of IBM published a paper in the journal Nature revealing their discovery/invention of ultra-fast and noise-free nanophotonic avalanche photodetectors, which are poised to bring about the exaFLOPS light circuit era.[29] [30] [31] "We are now working on integrating all of our devices onto a microprocessor alongside transistors," revealed Assefa.[32] "The Avalanche Photodetector achievement, which is the last in a series of prior reports from IBM Research, is the last piece of the puzzle that completes the development of the 'nanophotonics toolbox' of devices necessary to build the on-chip interconnects."[30] "With optical communications embedded into the processor chips, the prospect of building power-efficient computer systems with performance at the Exaflop level might not be a very distant future."[30]


Cost of computing

Hardware costs

The following is a list of examples of computers that demonstrates how drastically performance has increased and price has decreased. The "cost per GFLOPS" is the cost for a set of hardware that would theoretically operate at one billion floating point operations per second. During the era when no single computing platform was able to achieve one GFLOPS, this table lists the total cost for multiple instances of a fast computing platform whose speeds sum to one GFLOPS. Otherwise, the least expensive computing platform able to achieve one GFLOPS is listed.

Date | Approximate cost per GFLOPS | Technology | Comments

1961 | US$1,100,000,000,000 ($1.1 trillion), or US$1,100 per FLOPS | About 17 million IBM 1620 units costing $64,000 each | The 1620's multiplication operation takes 17.7 ms.[33]
1984 | US$15,000,000 | Cray X-MP |
1997 | US$30,000 | Two 16-processor Beowulf clusters with Pentium Pro microprocessors[34] |
April 2000 | $1,000 | Bunyip Beowulf cluster[35] | Bunyip was the first sub-US$1/MFLOPS computing technology. It won the Gordon Bell Prize in 2000.
May 2000 | $640 | KLAT2[36] | KLAT2 was the first computing technology which scaled to large applications while staying under US$1/MFLOPS.[37]
August 2003 | $82 | KASY0[38] | KASY0 was the first sub-US$100/GFLOPS computing technology.[39]
March 2007 | $0.42 | Ambric AM2045[40] |
September 2009 | $0.13 | ATI Radeon R800[41] | The first high-performance 40 nm GPU from ATI. It can reach speeds of 3.04 TFLOPS when running at 950 MHz. Price per GFLOPS is slightly inaccurate as it is single precision and includes only the cost of the card.
November 2009[42] | $0.59 (double precision); $0.14 (single precision) | AMD Radeon HD 5970 Hemlock | The first high-performance dual-GPU graphics card from AMD, which breaks the 1 TFLOPS mark in double-precision floating point. It can reach speeds of 4.64 TFLOPS (single precision) and 0.928 TFLOPS (double precision) when running at 2 × 725 MHz.[43] [44] Price per GFLOPS is slightly inaccurate as it includes only the cost of the card ($640), which will drop in time.

The trend toward placing ever more transistors inexpensively on an integrated circuit follows Moore's law. This trend explains the rising speed and falling cost of computer processing.

Operation costs

In energy cost, according to the Green500 list, as of June 2010 the most efficient TOP500 supercomputer runs at 773.38 MFLOPS per watt. This translates to an energy requirement of 1.29 watts per GFLOPS; the energy requirement is much greater for less efficient supercomputers.

Hardware costs for low cost supercomputers may be less significant than energy costs when running continuously for several years. A PlayStation 3 (PS3) 120 GiB (45 nm Cell) costs $299 (as of September 2009) and consumes 250 watts,[45] or $219 of electricity each year if operated 24 hours per day, conservatively assuming U.S. national average residential electric rates of $0.10/kWh[46] (0.250 kW × 24 h × 365 d × 0.10 $/kWh = $219 per year).


Floating point operation and integer operation

FLOPS measures the floating-point computing ability of a computer; an example of a floating-point operation is the evaluation of a mathematical equation. FLOPS is a good indicator of performance for DSPs, supercomputers, robotic motion control, and scientific simulations. MIPS is used to measure the integer performance of a computer; examples of integer operations are data movement (A to B) or value testing (if A = B, then C). MIPS is an adequate performance benchmark when a computer is used for database queries, word processing, spreadsheets, or running multiple virtual operating systems.[47] [48] Frank H. McMahon, of the Lawrence Livermore National Laboratory (LLNL), invented the terms FLOPS and MFLOPS (MegaFLOPS) so that he could compare the so-called supercomputers of the day by the number of floating point calculations they performed per second. This was much better than using the prevalent MIPS (Millions of Instructions Per Second) to compare computers, as this statistic usually had little bearing on the arithmetic capability of the machine.

Fixed point (Integer). These designations refer to the format used to store and manipulate numeric representations of data. Fixed-point formats are designed to represent and manipulate integers – positive and negative whole numbers – for example in 16 bits, yielding up to 65,536 (2^16) possible bit patterns.[49]

Floating-point (Real Numbers). The encoding scheme for floating point numbers is more complicated than for fixed point. The basic idea is the same as used in scientific notation, where a mantissa is multiplied by ten raised to some exponent. For instance, 5.4321 × 10^6, where 5.4321 is the mantissa and 6 is the exponent. Scientific notation is exceptional at representing very large and very small numbers. For example: 1.2 × 10^50, the number of atoms in the earth, or 2.6 × 10^−23, the distance a turtle crawls in one second, compared to the diameter of our galaxy. Notice that numbers represented in scientific notation are normalized so that there is only a single nonzero digit left of the decimal point. This is achieved by adjusting the exponent as needed. Floating point representation is similar to scientific notation, except everything is carried out in base two, rather than base ten. While several similar formats are in use, the most common is ANSI/IEEE Std. 754-1985. This standard defines the format for 32-bit numbers called single precision, as well as 64-bit numbers called double precision. Floating point can support a much wider range of values than fixed point, with the ability to represent very small numbers and very large numbers.

With fixed-point notation, the gaps between adjacent numbers always equal a value of one, whereas in floating-point notation, gaps between adjacent numbers are not uniformly spaced – the gap between any two adjacent numbers is approximately ten million times smaller than the value of the numbers (ANSI/IEEE Std. 754 standard format), with large gaps between large numbers and small gaps between small numbers.[50]

Dynamic Range and Precision. The exponentiation inherent in floating-point computation assures a much larger dynamic range – the largest and smallest numbers that can be represented – which is especially important when processing data sets which are extremely large or where the range may be unpredictable. As such, floating-point processors are ideally suited for computationally intensive applications. It is also important to consider fixed and floating-point formats in the context of precision – the size of the gaps between numbers. Every time a processor generates a new number via a mathematical calculation, that number must be rounded to the nearest value that can be stored via the format in use. Rounding and/or truncating numbers during processing naturally yields quantization error or 'noise' – the deviation between actual values and quantized values. Since the gaps between adjacent numbers can be much larger with fixed-point processing when compared to floating-point processing, round-off error can be much more pronounced. As such, floating-point processing yields much greater precision than fixed-point processing, distinguishing floating-point processors as the ideal CPU when computing accuracy is a critical requirement.[51]


See also

• Gordon Bell Prize
• Orders of magnitude (computing)

References[1] "Number of Processors share for 06/2010" (http:/ / www. top500. org/ stats/ list/ 35/ procclass). TOP500 Supercomputing Site. . Retrieved

June 1, 2010.[2] "Response Times: The Three Important Limits" (http:/ / www. useit. com/ papers/ responsetime. html). Jakob Nielsen. . Retrieved June 11,

2008.[3] http:/ / www. bit-tech. net/ hardware/ 2007/ 04/ 30/ the_arrival_of_teraflop_computing/ 2[4] "June 2008" (http:/ / www. top500. org/ lists/ 2008/ 06). TOP500. . Retrieved July 8, 2008.[5] "29th TOP500 List of World's Fastest Supercomputers Released" (http:/ / top500. org/ news/ 2007/ 06/ 23/

29th_top500_list_world_s_fastest_supercomputers_released). Top500.org. June 23, 2007. . Retrieved July 8, 2008.[6] "NEC Launches World's Fastest Vector Supercomputer, SX-9" (http:/ / www. nec. co. jp/ press/ en/ 0710/ 2501. html). NEC. October 25,

2007. . Retrieved July 8, 2008.[7] "University of Texas at Austin, Texas Advanced Computing Center" (http:/ / www. tacc. utexas. edu/ resources/ hpcsystems/ ). . Retrieved

September 13, 2010. "Any researcher at a U.S. institution can submit a proposal to request an allocation of cycles on the system."[8] Sharon Gaudin (June 9, 2008). "IBM's Roadrunner smashes 4-minute mile of supercomputing" (http:/ / www. computerworld. com/ action/

article. do?command=viewArticleBasic& taxonomyName=hardware& articleId=9095318& taxonomyId=12& intsrc=kc_top).Computerworld. . Retrieved June 10, 2008.

[9] Austin ISC08 (http:/ / www. top500. org/ lists/ 2008/ 11/ press-release)[10] Fildes, Jonathan (June 9, 2008). "Supercomputer sets petaflop pace" (http:/ / news. bbc. co. uk/ 1/ hi/ technology/ 7443557. stm). BBC

News. . Retrieved July 8, 2008.[11] http:/ / www. forbes. com/ 2009/ 11/ 15/ supercomputer-ibm-jaguar-technology-cio-network-cray. html?feed=rss_popstories[12] http:/ / www. bbc. co. uk/ news/ technology-11644252[13] http:/ / www. popsci. com/ technology/ article/ 2010-10/ china-unveils-2507-petaflop-supercomputer-worlds-fastest[14] http:/ / www. nvidia. com/ object/ product_tesla_C2050_C2070_us. html[15] http:/ / www. amd. com/ us/ products/ workstation/ firestream/ firestream-9270/ pages/ firestream-9270. aspx[16] "Client statistics by OS" (http:/ / fah-web. stanford. edu/ cgi-bin/ main. py?qtype=osstats). Folding@Home. March 4, 2010. . Retrieved

March 5, 2010.[17] Staff (November 6, 2008). "Sony Computer Entertainment's Support for Folding@home Project on PlayStation3 Receives This Year's

"Good Design Gold Award"" (http:/ / www. scei. co. jp/ corporate/ release/ 081106de. html). Sony Computer Entertainment Inc.. SonyComputer Entertainment Inc. (Sony Computer Entertainment Inc.). . Retrieved December 11, 2008.

[18] "Credit overview" (http:/ / www. boincstats. com/ stats/ project_graph. php?pr=bo). BOINC. . Retrieved April 21, 2010.[19] "MilkyWay@Home Credit overview" (http:/ / boincstats. com/ stats/ project_graph. php?pr=milkyway). BOINC. . Retrieved April 21, 2010.[20] "SETI@Home Credit overview" (http:/ / www. boincstats. com/ stats/ project_graph. php?pr=sah). BOINC. . Retrieved April 21, 2010.[21] "Einstein@Home Credit overview" (http:/ / de. boincstats. com/ stats/ project_graph. php?pr=einstein). BOINC. . Retrieved April 21, 2010.[22] "Internet PrimeNet Server Parallel Technology for the Great Internet Mersenne Prime Search" (http:/ / www. mersenne. org/ primenet).

GIMPS. . Retrieved April 21, 2010[23] "NASA collaborates with Intel and SGI on forthcoming petaflops super computers" (http:/ / www. heise. de/ english/ newsticker/ news/

107683). Heise online. May 9, 2008. .[24] Lohman, Tim (September 18, 2009). "SKA telescope to provide a billion PCs' worth of processing" (http:/ / www. computerworld. com. au/

article/ 319128/ ska_telescope_provide_billion_pcs_worth_processing). ComputerWorld. .[25] Thibodeau, Patrick (June 10, 2008). "IBM breaks petaflop barrier" (http:/ / www. infoworld. com/ article/ 08/ 06/ 10/

IBM_breaks_petaflop_barrier_1. html). InfoWorld. .[26] Cray studies exascale computing in Europe: (http:/ / eetimes. com/ news/ latest/ showArticle. jhtml?articleID=222000288)[27] DeBenedictis, Erik P. (2005). "Reversible logic for supercomputing" (http:/ / portal. acm. org/ citation. cfm?id=1062325). Proceedings of

the 2nd conference on Computing frontiers. New York, NY: ACM Press. pp. 391–402. ISBN 1595930191. .[28] "IDF: Intel says Moore's Law holds until 2029" (http:/ / www. h-online. com/ newsticker/ news/ item/

IDF-Intel-says-Moore-s-Law-holds-until-2029-734779. html). Heise Online. April 4, 2008. .[29] Assefa, Solomon; Xia, Fengnian; Vlasov, Yurii A. (2010). "Reinventing germanium avalanche photodetector for nanophotonic on-chip

optical interconnects". Nature 464 (7285): 80–84. doi:10.1038/nature08813. PMID 20203606.[30] http:/ / www. tadias. com/ 03/ 08/ 2010/ research-discovery-by-ethiopian-scientist-at-ibm/[31] http:/ / domino. research. ibm. com/ comm/ research_projects. nsf/ pages/ photonics. index. html[32] http:/ / physicsworld. com/ cws/ article/ news/ 41904[33] IBM 1961 BRL Report (http:/ / ed-thelen. org/ comp-hist/ BRL61-ibm1401. html)[34] Loki and Hyglac (http:/ / loki-www. lanl. gov/ papers/ sc97/ )

Page 62: Micro Excellent

FLOPS 60

[35] http:/ / tsg. anu. edu. au/ Projects/ Beowulf/[36] http:/ / aggregate. org/ KLAT2/[37] The Aggregate (http:/ / aggregate. org/ KLAT2/ )[38] http:/ / aggregate. org/ KASY0/[39] The Aggregate - KASY0 (http:/ / aggregate. org/ KASY0/ )[40] Halfill, Tom R. (October 10, 2006). "Ambric’s New Parallel Processor" (http:/ / web. archive. org/ web/ 20080627111128/ http:/ / www.

ambric. com/ pdf/ MPR_Ambric_Article_10-06_204101. pdf). Microprocessor Report (Reed Electronics Group): 1–9. Archived from204101.qxd the original (http:/ / www. ambric. com/ pdf/ MPR_Ambric_Article_10-06_204101. pdf) on June 27, 2008. . Retrieved July 8,2008.

[41] Valich, Theo (September 29, 2009). "The fastest ATI 5870 card achieves 3TFLOPS!" (http:/ / www. brightsideofnews. com/ news/ 2009/ 9/29/ the-fastest-ati-5870-card-achieves-3tflops!. aspx). Bright Side of News. . Retrieved September 29, 2009.

[42] (mfi) (October 29, 2009). "ATI will launch the Radeon HD 5970 on 2009, Nov. 19th." (http:/ / www. heise. de/ newsticker/ meldung/Grafik-Geruechte-Radeon-HD-5970-kommt-am-19-November-845774. html) (in German). heise.de. . Retrieved September 1, 2010.

[43] (mfi) (November 18, 2009). "AMD Radeon HD 5970: Teraflops-Monster" (http:/ / www. heise. de/ newsticker/ meldung/AMD-Radeon-HD-5970-Teraflops-Monster-und-3D-Leistungskoenigin-862230. html) (in German). heise.de. . Retrieved September 1, 2010.

[44] AMD. "ATI Radeon™ HD 5970 Graphics Specifications" (http:/ / www. amd. com/ us/ products/ desktop/ graphics/ ati-radeon-hd-5000/hd-5970/ Pages/ ati-radeon-hd-5970-overview. aspx#2). AMD. .

[45] Sony Computer Entertainment Inc. (October 30, 2007). "PlayStation.com - PLAYSTATION 3 - Systems - TechSpecs" (http:/ / www. us.playstation. com/ PS3/ Systems/ TechSpecs/ techspecs120gb. html/ ). Sony Computer Entertainment Inc.. . Retrieved September 20, 2009.

[46] "Average Retail Price of Electricity to Ultimate Customers by End-Use Sector, by State" (http:/ / www. eia. doe. gov/ cneaf/ electricity/epm/ table5_6_a. html). Energy Information Administration. June 10, 2008. . Retrieved July 8, 2008.

[47] Floating point vs fixed-point. (http:/ / www. dspguide. com/ ch28/ 4. htm) Retrieved on December 25, 2009.[48] Data manipulation and math calculation. (http:/ / www. dspguide. com/ ch28/ 1. htm) Retrieved on December 25, 2009.[49] Integer (http:/ / www. dspguide. com/ ch4/ 2. htm) Retrieved on December 25, 2009.[50] Floating Point (http:/ / www. dspguide. com/ ch4/ 3. htm) Retrieved on December 25, 2009.[51] Summary: Fixed point (integer) vs Floating point (http:/ / www. analog. com/ en/ embedded-processing-dsp/ content/

Fixed-Point_vs_Floating-Point_DSP/ fca. html) Retrieved on December 25, 2009.

External links

• Current Einstein@Home benchmark (http://einstein.phys.uwm.edu/server_status.php)
• BOINC projects global benchmark (http://www.boincstats.com/stats/project_graph.php?pr=bo)
• Current GIMPS throughput (http://mersenne.org/primenet/)
• Top500.org (http://top500.org)
• LinuxHPC.org (http://www.LinuxHPC.org) Linux High Performance Computing and Clustering Portal
• WinHPC.org (http://www.WinHPC.org) Windows High Performance Computing and Clustering Portal
• Oscar Linux-cluster ranking list by CPUs/types and respective FLOPS (http://svn.oscar.openclustergroup.org/php/clusters_register.php?sort=rpeak)
• Information on how to calculate "Composite Theoretical Performance" (CTP) (http://www.mosis.org/forms/mosis_forms/ECCN_CTP_Computation.pdf)
• Information on the Oak Ridge National Laboratory Cray XT system (http://investors.cray.com/phoenix.zhtml?c=98390&p=irol-newsArticle&ID=873357&highlight=)
• Infiscale Cluster Portal - Free GPL HPC (http://www.perceus.org/portal/)
• Source code, pre-compiled versions and results for PCs (http://www.roylongbottom.org.uk/index.htm) - Linpack, Livermore Loops, Whetstone MFLOPS
• PC CPU Performance Comparisons %MFLOPS/MHz - CPU, Caches and RAM (http://www.roylongbottom.org.uk/cpuspeed.htm)
• Xeon export compliance metrics (http://www.intel.com/support/processors/xeon/sb/CS-020863.htm), including GFLOPS
• IBM Brings NVIDIA Tesla GPUs Onboard (May 2010) (http://www.hpcwire.com/features/IBM-Brings-NVIDIA-GPUs-Onboard-94190024.html)


Embedded systems

Embedded system

Picture of the internals of an ADSL modem/router, a modern example of an embedded system. Labelled parts include a microprocessor (4), RAM (6), and flash memory (7).

An embedded system is a computer system designed to perform one or a few dedicated functions,[1] [2] often with real-time computing constraints. It is embedded as part of a complete device, often including hardware and mechanical parts. By contrast, a general-purpose computer, such as a personal computer (PC), is designed to be flexible and to meet a wide range of end-user needs. Embedded systems control many devices in common use today.[3]

Embedded systems are controlled by one or more main processing cores that are typically either microcontrollers or digital signal processors (DSPs).[4] The key characteristic, however, is being dedicated to handling a particular task, which may require very powerful processors. For example, air traffic control systems may usefully be viewed as embedded, even though they involve mainframe computers and dedicated regional and national networks between airports and radar sites (each radar probably includes one or more embedded systems of its own).

Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase its reliability and performance. Some embedded systems are mass-produced, benefiting from economies of scale.

Physically, embedded systems range from portable devices such as digital watches and MP3 players, to large stationary installations like traffic lights, factory controllers, or the systems controlling nuclear power plants. Complexity varies from low, with a single microcontroller chip, to very high, with multiple units, peripherals and networks mounted inside a large chassis or enclosure.

In general, "embedded system" is not a strictly definable term, as most systems have some element of extensibility or programmability. For example, handheld computers share some elements with embedded systems, such as the operating systems and microprocessors which power them, but they allow different applications to be loaded and peripherals to be connected. Moreover, even systems which don't expose programmability as a primary feature generally need to support software updates. On a continuum from "general purpose" to "embedded", large application systems will have subcomponents at most points even if the system as a whole is "designed to perform one or a few dedicated functions", and is thus appropriate to call "embedded".


Variety of embedded systems

PC Engines' ALIX.1C Mini-ITX embedded board with an x86 AMD Geode LX 800 together with Compact Flash, miniPCI and PCI slots, 44-pin IDE interface, audio, USB and 256 MB RAM

An embedded RouterBoard 112 with U.FL-RSMA pigtail and R52 miniPCI Wi-Fi card, widely used by wireless Internet service providers (WISPs) in the Czech Republic.

Embedded systems span all aspects of modern life and there are many examples of their use. Telecommunications systems employ numerous embedded systems, from telephone switches for the network to mobile phones at the end-user. Computer networking uses dedicated routers and network bridges to route data.

Consumer electronics include personal digital assistants (PDAs), MP3 players, mobile phones, videogame consoles, digital cameras, DVD players, GPS receivers, and printers. Many household appliances, such as microwave ovens, washing machines and dishwashers, include embedded systems to provide flexibility, efficiency and features. Advanced HVAC systems use networked thermostats to control temperature more accurately and efficiently, varying with time of day and season. Home automation uses wired and wireless networking to control lights, climate, security, audio/visual equipment, surveillance, etc., all of which use embedded devices for sensing and control.

Transportation systems, from flight to automobiles, increasingly use embedded systems. New airplanes contain advanced avionics, such as inertial guidance systems and GPS receivers, that also have considerable safety requirements. Various electric motors (brushless DC motors, induction motors and DC motors) use electric/electronic motor controllers. Automobiles, electric vehicles, and hybrid vehicles increasingly use embedded systems to maximize efficiency and reduce pollution. Other automotive safety systems include the anti-lock braking system (ABS), Electronic Stability Control (ESC/ESP), traction control (TCS) and automatic four-wheel drive.

Medical equipment is continuing to advance, with more embedded systems for vital signs monitoring, electronic stethoscopes for amplifying sounds, and various medical imaging modalities (PET, SPECT, CT, MRI) for non-invasive internal inspection.

Embedded systems are especially suited for use in transportation, fire safety, safety and security, medical applications and life-critical systems, as these systems can be isolated from hacking and thus be more reliable. For fire safety, the systems can be designed to have a greater ability to handle higher temperatures and continue to operate. In dealing with security, the embedded systems can be self-sufficient and able to cope with cut electrical and communication systems.[5]

In addition to commonly described embedded systems based on small computers, a new class of miniature wireless devices called motes is quickly gaining popularity as the field of wireless sensor networking rises. Wireless sensor networking (WSN) makes use of miniaturization made possible by advanced IC design to couple full wireless


subsystems to sophisticated sensors, enabling people and companies to measure a myriad of things in the physical world and act on this information through IT monitoring and control systems. These motes are completely self-contained, and will typically run off a battery source for many years before the batteries need to be changed or charged.

History

One of the first recognizably modern embedded systems was the Apollo Guidance Computer, developed by Charles Stark Draper at the MIT Instrumentation Laboratory. At the project's inception, the Apollo guidance computer was considered the riskiest item in the Apollo project, as it employed the then newly developed monolithic integrated circuits to reduce the size and weight. An early mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. It was built from transistor logic and had a hard disk for main memory. When the Minuteman II went into production in 1966, the D-17 was replaced with a new computer that was the first high-volume use of integrated circuits. This program alone reduced prices on quad NAND gate ICs from $1000 each to $3 each, permitting their use in commercial products.

Since these early applications in the 1960s, embedded systems have come down in price and there has been a dramatic rise in processing power and functionality. The first microprocessor, for example, the Intel 4004, was designed for calculators and other small systems, but still required many external memory and support chips. In 1978 the National Electrical Manufacturers Association released a "standard" for programmable microcontrollers, covering almost any computer-based controller, such as single-board computers, and numerical and event-based controllers.

As the cost of microprocessors and microcontrollers fell, it became feasible to replace expensive knob-based analog components such as potentiometers and variable capacitors with up/down buttons or knobs read out by a microprocessor, even in some consumer products. By the mid-1980s, most of the common previously external system components had been integrated into the same chip as the processor, and this modern form of the microcontroller allowed an even more widespread use; by the end of the decade, microcontrollers were the norm rather than the exception for almost all electronics devices.

The integration of microcontrollers has further increased the applications for which embedded systems are used into areas where traditionally a computer would not have been considered. A general-purpose and comparatively low-cost microcontroller may often be programmed to fulfill the same role as a large number of separate components. Although in this context an embedded system is usually more complex than a traditional solution, most of the complexity is contained within the microcontroller itself. Very few additional components may be needed, and most of the design effort is in the software. The intangible nature of software makes it much easier to prototype and test new revisions compared with the design and construction of a new circuit not using an embedded processor.


Characteristics

Gumstix Overo COM, a tiny, OMAP-based embedded computer-on-module with Wi-Fi and Bluetooth.

1. Embedded systems are designed to do some specific task, rather than be a general-purpose computer for multiple tasks. Some also have real-time performance constraints that must be met, for reasons such as safety and usability; others may have low or no performance requirements, allowing the system hardware to be simplified to reduce costs.

2. Embedded systems are not always standalone devices. Many embedded systems consist of small, computerized parts within a larger device that serves a more general purpose. For example, the Gibson Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot Guitar is, of course, to play music.[6]

Similarly, an embedded system in an automobile provides a specific function as a subsystem of the car itself.

e-con Systems eSOM270 & eSOM300 Computer on Modules

3. The program instructions written for embedded systems are referred to as firmware, and are stored in read-only memory or flash memory chips. They run with limited computer hardware resources: little memory, and a small or non-existent keyboard and/or screen.

User interface

Embedded system text user interface using MicroVGA

Embedded systems range from no user interface at all (dedicated only to one task) to complex graphical user interfaces that resemble modern desktop operating systems. Simple embedded devices use buttons, LEDs, and graphic or character LCDs (for example the popular HD44780 LCD) with a simple menu system.

More sophisticated devices use a graphical screen with touch sensing or screen-edge buttons, which provide flexibility while minimizing the space used: the meaning of the buttons can change with the screen, and selection involves the natural behavior of pointing at what is desired. Handheld systems often have a screen with a "joystick button" for a pointing device.

Some systems provide a user interface remotely with the help of a serial (e.g. RS-232, USB, I²C, etc.) or network (e.g. Ethernet) connection. In spite of the potentially necessary proprietary client software and/or specialist cables that are needed, this approach usually gives a lot of advantages: it extends the capabilities of the embedded system, avoids the cost


of a display, simplifies the board support package (BSP), and allows a rich user interface to be built on the PC. A good example of this is the combination of an embedded web server running on an embedded device (such as an IP camera or a network router). The user interface is displayed in a web browser on a PC connected to the device, therefore needing no bespoke software to be installed.

Processors in embedded systems

Embedded processors can be broken into two broad categories: ordinary microprocessors (μP) and microcontrollers (μC), which have many more peripherals on chip, reducing cost and size. In contrast to the personal computer and server markets, a fairly large number of basic CPU architectures are used; there are Von Neumann as well as various degrees of Harvard architectures, RISC as well as non-RISC and VLIW; word lengths vary from 4 bits to 64 bits and beyond (mainly in DSP processors), although the most typical remain 8/16-bit. Most architectures come in a large number of different variants and shapes, many of which are also manufactured by several different companies.

A long but still not exhaustive list of common architectures is: 65816, 65C02, 68HC08, 68HC11, 68k, 8051, ARM, AVR, AVR32, Blackfin, C167, Coldfire, COP8, Cortus APS3, eZ8, eZ80, FR-V, H8, HT48, M16C, M32C, MIPS, MSP430, PIC, PowerPC, R8C, SHARC, SPARC, ST6, SuperH, TLCS-47, TLCS-870, TLCS-900, Tricore, V850, x86, XE8000, Z80, AsAP, etc.

Ready-made computer boards

PC/104 and PC/104+ are examples of standards for ready-made computer boards intended for small, low-volume embedded and ruggedized systems, mostly x86-based. These often use DOS, Linux, NetBSD, or an embedded real-time operating system such as MicroC/OS-II, QNX or VxWorks. Sometimes these boards use non-x86 processors.

In certain applications, where small size or power efficiency are not primary concerns, the components used may be compatible with those used in general-purpose x86 personal computers. Boards such as the VIA EPIA range help to bridge the gap by being PC-compatible but highly integrated, physically smaller, or having other attributes making them attractive to embedded engineers. The advantage of this approach is that low-cost commodity components may be used along with the same software development tools used for general software development. Systems built in this way are still regarded as embedded, since they are integrated into larger devices and fulfill a single role. Examples of devices that may adopt this approach are ATMs and arcade machines, which contain code specific to the application.

However, most ready-made embedded systems boards are not PC-centered and do not use the ISA or PCI busses. When a system-on-a-chip processor is involved, there may be little benefit to having a standardized bus connecting discrete components, and the environment for both hardware and software tools may be very different.

One common design style uses a small system module, perhaps the size of a business card, holding high-density BGA chips such as an ARM-based system-on-a-chip processor and peripherals, external flash memory for storage, and DRAM for runtime memory. The module vendor will usually provide boot software and make sure there is a selection of operating systems, usually including Linux and some real-time choices. These modules can be manufactured in high volume, by organizations familiar with their specialized testing issues, and combined with much lower volume custom mainboards with application-specific external peripherals. Gumstix product lines are a Linux-centric example of this model.


ASIC and FPGA solutions

A common configuration for very-high-volume embedded systems is the system on a chip (SoC), which contains a complete system consisting of multiple processors, multipliers, caches and interfaces on a single chip. SoCs can be implemented as an application-specific integrated circuit (ASIC) or using a field-programmable gate array (FPGA).

Peripherals

Embedded systems talk with the outside world via peripherals, such as:
• Serial Communication Interfaces (SCI): RS-232, RS-422, RS-485, etc.
• Synchronous Serial Communication Interfaces: I²C, SPI, SSC and ESSI (Enhanced Synchronous Serial Interface)
• Universal Serial Bus (USB)
• Multi Media Cards (SD cards, CompactFlash, etc.)
• Networks: Ethernet, Controller Area Network, LonWorks, etc.
• Timers: PLL(s), Capture/Compare and Time Processing Units
• Discrete I/O: aka General Purpose Input/Output (GPIO)
• Analog to Digital/Digital to Analog (ADC/DAC)
• Debugging: JTAG, ISP, ICSP, BDM Port, BITP, and DP9 ports

Tools

As with other software, embedded system designers use compilers, assemblers, and debuggers to develop embedded system software. However, they may also use some more specific tools:
• In-circuit debuggers or emulators (see next section).
• Utilities to add a checksum or CRC to a program, so the embedded system can check if the program is valid (a minimal checksum sketch appears at the end of this section).
• For systems using digital signal processing, developers may use a math workbench such as Scilab / Scicos, MATLAB / Simulink, EICASLAB, MathCad, Mathematica, or FlowStone DSP to simulate the mathematics. They might also use libraries for both the host and target which eliminate developing DSP routines, as done in DSPnano RTOS and Unison Operating System.
• Custom compilers and linkers may be used to improve optimisation for the particular hardware.
• An embedded system may have its own special language or design tool, or add enhancements to an existing language such as Forth or Basic.
• Another alternative is to add a real-time operating system or embedded operating system, which may have DSP capabilities like DSPnano RTOS.
• Modeling and code-generating tools, often based on state machines.

Software tools can come from several sources:
• Software companies that specialize in the embedded market
• Ported from the GNU software development tools
• Sometimes, development tools for a personal computer can be used if the embedded processor is a close relative to a common PC processor

As the complexity of embedded systems grows, higher-level tools and operating systems are migrating into machinery where it makes sense. For example, cellphones, personal digital assistants and other consumer computers often need significant software that is purchased or provided by a person other than the manufacturer of the electronics. In these systems, an open programming environment such as Linux, NetBSD, OSGi or Embedded Java is required so that the third-party software provider can sell to a large market.
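As a concrete illustration of the checksum utility mentioned above, the following minimal C sketch recomputes a simple additive checksum over a firmware image at startup. The function names and image layout are illustrative assumptions rather than any particular toolchain's convention, and production systems more commonly use a CRC for stronger error detection.

    #include <stdint.h>
    #include <stddef.h>

    /* Sum every byte of the image into a 16-bit accumulator. */
    static uint16_t image_checksum(const uint8_t *data, size_t len)
    {
        uint16_t sum = 0;
        while (len--)
            sum += *data++;
        return sum;
    }

    /* Compare against the checksum stored alongside the image,
       e.g. appended by the build tool that prepared the binary. */
    int firmware_is_valid(const uint8_t *image, size_t len, uint16_t stored_sum)
    {
        return image_checksum(image, len) == stored_sum;
    }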


Debugging

Embedded debugging may be performed at different levels, depending on the facilities available. From simplest to most sophisticated, they can be roughly grouped into the following areas:
• Interactive resident debugging, using the simple shell provided by the embedded operating system (e.g. Forth and Basic).
• External debugging using logging or serial port output to trace operation, using either a monitor in flash or a debug server like the Remedy Debugger, which even works for heterogeneous multicore systems.
• An in-circuit debugger (ICD), a hardware device that connects to the microprocessor via a JTAG or Nexus interface. This allows the operation of the microprocessor to be controlled externally, but is typically restricted to specific debugging capabilities in the processor.
• An in-circuit emulator replaces the microprocessor with a simulated equivalent, providing full control over all aspects of the microprocessor.
• A complete emulator provides a simulation of all aspects of the hardware, allowing all of it to be controlled and modified, and allowing debugging on a normal PC.

Unless restricted to external debugging, the programmer can typically load and run software through the tools, view the code running in the processor, and start or stop its operation. The view of the code may be as assembly code or source code.

Because an embedded system is often composed of a wide variety of elements, the debugging strategy may vary. For instance, debugging a software- (and microprocessor-) centric embedded system is different from debugging an embedded system where most of the processing is performed by peripherals (DSP, FPGA, co-processor). An increasing number of embedded systems today use more than one single processor core. A common problem with multi-core development is the proper synchronization of software execution. In such a case, the embedded system designer may wish to check the data traffic on the busses between the processor cores, which requires very low-level debugging, at signal/bus level, with a logic analyzer, for instance.

Reliability

Embedded systems often reside in machines that are expected to run continuously for years without errors, and in some cases recover by themselves if an error occurs. Therefore the software is usually developed and tested more carefully than that for personal computers, and unreliable mechanical moving parts such as disk drives, switches or buttons are avoided.

Specific reliability issues may include:
1. The system cannot safely be shut down for repair, or it is too inaccessible to repair. Examples include space systems, undersea cables, navigational beacons, bore-hole systems, and automobiles.
2. The system must be kept running for safety reasons. "Limp modes" are less tolerable. Often backups are selected by an operator. Examples include aircraft navigation, reactor control systems, safety-critical chemical factory controls, train signals, and engines on single-engine aircraft.
3. The system will lose large amounts of money when shut down: telephone switches, factory controls, bridge and elevator controls, funds transfer and market making, automated sales and service.

A variety of techniques are used, sometimes in combination, to recover from errors, both software bugs such as memory leaks and soft errors in the hardware:
• A watchdog timer that resets the computer unless the software periodically notifies the watchdog (see the sketch after this list)
• Subsystems with redundant spares that can be switched over to
• Software "limp modes" that provide partial function
• Designing with a Trusted Computing Base (TCB) architecture,[7] which ensures a highly secure and reliable system environment


• An embedded hypervisor is able to provide secure encapsulation for any subsystem component, so that a compromised software component cannot interfere with other subsystems or privileged-level system software. This encapsulation keeps faults from propagating from one subsystem to another, improving reliability. This may also allow a subsystem to be automatically shut down and restarted on fault detection.

• Immunity Aware Programming
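The following minimal C sketch makes the watchdog idea above concrete. The register address and the WDT_RELOAD and WDT_KICK_KEY names are illustrative assumptions, not the interface of any particular microcontroller; real parts expose an equivalent reload register under a vendor-specific name.

    #include <stdint.h>

    /* Hypothetical memory-mapped watchdog reload register. */
    #define WDT_RELOAD   (*(volatile uint32_t *)0x40001000u)
    #define WDT_KICK_KEY 0x5A5Au

    static void system_do_work(void) { /* application work goes here */ }

    int main(void)
    {
        for (;;) {
            system_do_work();           /* must finish within the watchdog timeout */
            WDT_RELOAD = WDT_KICK_KEY;  /* "kick" the watchdog; if this line is
                                           never reached, the counter expires and
                                           the hardware resets the processor */
        }
    }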

High vs low volume

For high-volume systems such as portable music players or mobile phones, minimizing cost is usually the primary design consideration. Engineers typically select hardware that is just "good enough" to implement the necessary functions.

For low-volume or prototype embedded systems, general-purpose computers may be adapted by limiting the programs or by replacing the operating system with a real-time operating system.

Embedded software architectures

There are several different types of software architecture in common use.

Simple control loop

In this design, the software simply has a loop. The loop calls subroutines, each of which manages a part of the hardware or software.
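A minimal sketch of such a loop in C; the subroutine names are illustrative placeholders for whatever parts of the hardware or software the application manages.

    /* Each subroutine services one part of the system and returns quickly. */
    static void check_buttons(void)  { /* poll inputs */ }
    static void run_control(void)    { /* control law / application logic */ }
    static void update_display(void) { /* refresh outputs */ }

    int main(void)
    {
        for (;;) {                /* the loop itself is the whole architecture */
            check_buttons();
            run_control();
            update_display();
        }
    }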

Interrupt controlled system

Some embedded systems are predominantly interrupt controlled. This means that tasks performed by the system are triggered by different kinds of events. An interrupt could be generated, for example, by a timer at a predefined frequency, or by a serial port controller receiving a byte.

These kinds of systems are used if event handlers need low latency and the event handlers are short and simple. Usually these kinds of systems run a simple task in a main loop also, but this task is not very sensitive to unexpected delays.

Sometimes the interrupt handler will add longer tasks to a queue structure. Later, after the interrupt handler has finished, these tasks are executed by the main loop. This method brings the system close to a multitasking kernel with discrete processes.
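A sketch of the deferred-work pattern described above, in C. The handler name and the flag convention are illustrative assumptions; on real hardware the handler would be installed in the interrupt vector table and the byte read from the serial port's data register.

    #include <stdbool.h>

    static volatile bool data_ready = false;  /* set in the ISR, cleared in main */
    static volatile unsigned char rx_byte;

    void uart_rx_isr(void)          /* kept short: record the event and return */
    {
        rx_byte = 0;                /* placeholder for reading the data register */
        data_ready = true;
    }

    int main(void)
    {
        for (;;) {
            if (data_ready) {       /* the longer work runs in the main loop */
                data_ready = false;
                /* process rx_byte here */
            }
            /* simple background task, tolerant of unexpected delays */
        }
    }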

Cooperative multitasking

A nonpreemptive multitasking system is very similar to the simple control loop scheme, except that the loop is hidden in an API. The programmer defines a series of tasks, and each task gets its own environment to "run" in. When a task is idle, it calls an idle routine, usually called "pause", "wait", "yield", "nop" (which stands for no operation), etc.

The advantages and disadvantages are very similar to the control loop, except that adding new software is easier, by simply writing a new task, or adding to the queue-interpreter.
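A compact sketch of the idea in C, using run-to-completion tasks that yield by returning; the task table is an illustrative assumption, and real cooperative kernels usually hide this loop behind an API as described above.

    typedef void (*task_fn)(void);

    static void task_sensor(void) { /* sample a sensor, then return (yield) */ }
    static void task_comms(void)  { /* service the radio, then return (yield) */ }

    static task_fn tasks[] = { task_sensor, task_comms };

    int main(void)
    {
        unsigned i = 0;
        for (;;) {
            tasks[i]();           /* run one task until it voluntarily returns */
            i = (i + 1) % (sizeof tasks / sizeof tasks[0]);   /* round-robin */
        }
    }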


Preemptive multitasking or multi-threading

In this type of system, a low-level piece of code switches between tasks or threads based on a timer (connected to an interrupt). This is the level at which the system is generally considered to have an "operating system" kernel. Depending on how much functionality is required, it introduces more or less of the complexities of managing multiple tasks running conceptually in parallel.

As any code can potentially damage the data of another task (except in larger systems using an MMU), programs must be carefully designed and tested, and access to shared data must be controlled by some synchronization strategy, such as message queues, semaphores or a non-blocking synchronization scheme (a short mutex sketch follows).

Because of these complexities, it is common for organizations to use a real-time operating system (RTOS), allowing the application programmers to concentrate on device functionality rather than operating system services, at least for large systems; smaller systems often cannot afford the overhead associated with a generic real-time system, due to limitations regarding memory size, performance, and/or battery life. The choice that an RTOS is required brings in its own issues, however, as the selection must be made prior to starting the application development process. This timing forces developers to choose the embedded operating system for their device based upon current requirements, and so restricts future options to a large extent.[8] The restriction of future options becomes more of an issue as product life decreases. Additionally, the level of complexity is continuously growing as devices are required to manage many variables such as serial, USB, TCP/IP, Bluetooth, Wireless LAN, trunk radio, multiple channels, data and voice, enhanced graphics, multiple states, multiple threads, numerous wait states and so on. These trends are leading to the uptake of embedded middleware in addition to a real-time operating system.
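As a sketch of the shared-data problem above, the following C fragment guards a counter with a mutex, using the POSIX threads API as a stand-in for whatever primitive the chosen RTOS provides under its own name.

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int shared_counter;        /* data touched by more than one thread */

    void increment_counter(void)
    {
        pthread_mutex_lock(&lock);    /* only one thread may hold the lock */
        shared_counter++;             /* the read-modify-write is now atomic
                                         with respect to other users of lock */
        pthread_mutex_unlock(&lock);
    }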

Microkernels and exokernels

A microkernel is a logical step up from a real-time OS. The usual arrangement is that the operating system kernel allocates memory and switches the CPU to different threads of execution. User-mode processes implement major functions such as file systems, network interfaces, etc.

In general, microkernels succeed when the task switching and intertask communication are fast, and fail when they are slow.

Exokernels communicate efficiently by normal subroutine calls. The hardware, and all the software in the system, are available to, and extensible by, application programmers.

Monolithic kernels

In this case, a relatively large kernel with sophisticated capabilities is adapted to suit an embedded environment. This gives programmers an environment similar to a desktop operating system like Linux or Microsoft Windows, and is therefore very productive for development; on the downside, it requires considerably more hardware resources, is often more expensive, and, because of the complexity of these kernels, can be less predictable and reliable.

Common examples of embedded monolithic kernels are Embedded Linux and Windows CE.

Despite the increased cost in hardware, this type of embedded system is increasing in popularity, especially on the more powerful embedded devices such as wireless routers and GPS navigation systems. Here are some of the reasons:
• Ports to common embedded chip sets are available.
• They permit re-use of publicly available code for device drivers, web servers, firewalls, and other code.
• Development systems can start out with broad feature-sets, and then the distribution can be configured to exclude unneeded functionality, and save the expense of the memory that it would consume.
• Many engineers believe that running application code in user mode is more reliable and easier to debug, and that therefore the development process is easier and the code more portable.


• Many embedded systems lack the tight real-time requirements of a control system, and a system such as Embedded Linux may be fast enough to respond to many applications.
• Features requiring faster response than can be guaranteed can often be placed in hardware.
• Many RTOS products have a per-unit cost. When used on a product that is or will become a commodity, that cost is significant.

Exotic custom operating systems

A small fraction of embedded systems require safe, timely, reliable or efficient behavior unobtainable with any of the above architectures. In this case an organization builds a system to suit. In some cases, the system may be partitioned into a "mechanism controller" using special techniques, and a "display controller" with a conventional operating system. A communication system passes data between the two.

Additional software components

In addition to the core operating system, many embedded systems have additional upper-layer software components. These components include networking protocol stacks like CAN, TCP/IP, FTP, HTTP, and HTTPS, as well as storage capabilities like FAT and flash memory management systems. If the embedded device has audio and video capabilities, then the appropriate drivers and codecs will be present in the system. In the case of monolithic kernels, many of these software layers are included. In the RTOS category, the availability of the additional software components depends upon the commercial offering.

See also
• Communications server
• Cyber-physical system
• DSP
• Electronic Control Unit
• Embedded Hypervisor
• Embedded operating systems
• Embedded software
• Firmware
• FPGA
• Information appliance
• Microprocessor
• Microcontroller
• Programming languages
• Real-time operating system
• Software engineering
• System on a chip
• System on module
• Ubiquitous computing


References
[1] Barr, Michael. "Embedded Systems Glossary" (http://www.netrino.com/Embedded-Systems/Glossary). Netrino Technical Library. Retrieved 2007-04-21.
[2] Heath, Steve (2003). Embedded systems design (http://books.google.com/books?id=BjNZXwH7HlkC&pg=PA2). EDN series for design engineers (2nd ed.). Newnes. p. 2. ISBN 9780750655460. "An embedded system is a microprocessor based system that is built to control a function or a range of functions."
[3] Barr, Michael; Massa, Anthony J. (2006). "Introduction" (http://books.google.com/books?id=nPZaPJrw_L0C&pg=PA1). Programming embedded systems: with C and GNU development tools. O'Reilly. pp. 1–2. ISBN 9780596009830.
[4] Giovino, Bill. "Microcontroller.com - Embedded Systems supersite" (http://www.microcontroller.com/).
[5] http://www.embeddedsystem.com
[6] "Under the Hood: Robot Guitar embeds autotuning" (http://www.embedded.com/underthehood/207401418), David Carey, TechOnline EE Times (04/22/08, 11:10:00 AM EDT), Embedded Systems Design - Embedded.com.
[7] "Your System is Secure? Prove it!" (http://www.usenix.org/publications/login/2007-12/pdfs/heiser.pdf), Gernot Heiser, December 2007, Vol. 2 No. 6, pp. 35–38, ;login: The USENIX Magazine.
[8] "Working across Multiple Embedded Platforms" (http://www.clarinox.com/docs/whitepapers/Whitepaper_06_CrossPlatformDiscussion.pdf). Clarinox. Retrieved 2010-08-17.

External links
• Designing Embedded Hardware (http://www.oreilly.com/catalog/dbhardware2/), John Catsoulis, O'Reilly, May 2005, ISBN 0-596-00755-8.

Microprocessor

Intel 4004, the first general-purpose, commercial microprocessor

A microprocessor incorporates most or all of the functions of a computer's central processing unit (CPU) on a single integrated circuit (IC, or microchip).[1] The first microprocessors emerged in the early 1970s and were used for electronic calculators, using binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, various kinds of automation, etc., followed soon after. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers from the mid-1970s on.

During the 1960s, computer processors were often constructed out of small and medium-scale ICs containing from tens to a few hundred transistors. The integration of a whole CPU onto a single chip greatly reduced the cost of processing power. From these humble beginnings, continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.

Since the early 1970s, the increase in capacity of microprocessors has been a consequence of Moore's Law, which suggests that the number of transistors that can be fitted onto a chip doubles every two years. Although originally calculated as a doubling every year,[2] Moore later refined the period to two years.[3] It is often incorrectly quoted as a doubling of transistors every 18 months.

In the late 1990s, and in the high-performance microprocessor segment, heat generation (TDP), due to switching losses, static current leakage, and other factors, emerged as a leading developmental constraint.[4]
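As a worked illustration of the two-year doubling rule, transistor count grows as n(t) = n0 * 2^(t/2) for t in years. The short C program below tabulates this growth from the roughly 2300 transistors of the Intel 4004; the smooth curve is an idealization for illustration, not data from this article.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double n0 = 2300.0;     /* approximate 1971 Intel 4004 count */
        for (int t = 0; t <= 10; t += 2)
            printf("after %2d years: about %.0f transistors\n",
                   t, n0 * pow(2.0, t / 2.0));
        return 0;
    }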


Firsts

Three projects delivered a microprocessor at about the same time: Intel's 4004, the Texas Instruments (TI) TMS 1000, and Garrett AiResearch's Central Air Data Computer (CADC).

Intel 4004

The 4004 with cover removed (left) and as actually used (right).

The Intel 4004 is generally regarded as the first microprocessor,[5] [6] and cost thousands of dollars.[7]

The first known advertisement for the 4004 is dated November 1971 and appeared in Electronic News.[8]

The project that produced the 4004 originated in 1969, when Busicom, a Japanese calculator manufacturer, asked Intel to build a chipset for high-performance desktop calculators. Busicom's original design called for a programmable chip set consisting of seven different chips. Three of the chips were to make a special-purpose CPU with its program stored in ROM and its data stored in shift-register read-write memory. Ted Hoff, the Intel engineer assigned to evaluate the project, believed the Busicom design could be simplified by using dynamic RAM storage for data, rather than shift-register memory, and a more traditional general-purpose CPU architecture. Hoff came up with a four-chip architectural proposal: a ROM chip for storing the programs, a dynamic RAM chip for storing data, a simple I/O device and a 4-bit central processing unit (CPU). Although not a chip designer, he felt the CPU could be integrated into a single chip. This chip would later be called the 4004 microprocessor.

The architecture and specifications of the 4004 came from the interaction of Hoff with Stanley Mazor, a software engineer reporting to him, and with Busicom engineer Masatoshi Shima, during 1969. In April 1970, Intel hired Federico Faggin to lead the design of the four-chip set. Faggin, who originally developed the silicon gate technology (SGT) in 1968 at Fairchild Semiconductor[9] and designed the world's first commercial integrated circuit using SGT, the Fairchild 3708, had the correct background to lead the project, since it was SGT that made it possible to implement a single-chip CPU with the proper speed, power dissipation and cost. Faggin also developed the new methodology for random logic design, based on silicon gate, that made the 4004 possible. Production units of the 4004 were first delivered to Busicom in March 1971 and shipped to other customers in late 1971.

TMS 1000

The Smithsonian Institution says TI engineers Gary Boone and Michael Cochran succeeded in creating the first microcontroller (also called a microcomputer) in 1971. The result of their work was the TMS 1000, which went commercial in 1974.[10]

TI developed the 4-bit TMS 1000 and stressed pre-programmed embedded applications, introducing a version called the TMS1802NC on September 17, 1971, which implemented a calculator on a chip.

TI filed for the patent on the microprocessor. Gary Boone was awarded U.S. Patent 3757306 [11] for the single-chip microprocessor architecture on September 4, 1973. It may never be known which company actually had the first working microprocessor running on the lab bench. In both 1971 and 1976, Intel and TI entered into broad patent cross-licensing agreements, with Intel paying royalties to TI for the microprocessor patent. A history of these events is contained in court documentation from a legal dispute between Cyrix and Intel, with TI as intervenor and owner of the microprocessor patent.

A computer-on-a-chip is a variation of a microprocessor that combines the microprocessor core (CPU), some program memory and read/write memory, and I/O (input/output) lines onto one chip. The computer-on-a-chip patent, called the "microcomputer patent" at the time, U.S. Patent 4074351 [12], was awarded to Gary Boone and Michael J. Cochran of TI. Aside from this patent, the standard meaning of microcomputer is a computer using one or more


microprocessors as its CPU(s), while the concept defined in the patent is more akin to a microcontroller.

Pico/General Instrument

The PICO1/GI250 chip introduced in 1971. This was designed by Pico Electronics (Glenrothes, Scotland) and manufactured by General Instrument of Hicksville NY.

In 1971 Pico Electronics[13] and General Instrument (GI) introduced their first collaboration in ICs, a complete single-chip calculator IC for the Monroe/Litton Royal Digital III calculator. This chip could also arguably lay claim to be one of the first microprocessors or microcontrollers, having ROM, RAM and a RISC instruction set on-chip. The layout for the four layers of the PMOS process was hand drawn at x500 scale on mylar film, a significant task at the time given the complexity of the chip.

Pico was a spinout by five GI design engineers whose vision was to create single-chip calculator ICs. They had significant previous design experience on multiple calculator chipsets with both GI and Marconi-Elliott.[14] The key team members had originally been tasked by Elliott Automation to create an 8-bit computer in MOS and had helped establish a MOS Research Laboratory in Glenrothes, Scotland in 1967.

Pico was a spinout by five GI design engineers whosevision was to create single chip calculator ICs. Theyhad significant previous design experience on multiplecalculator chipsets with both GI andMarconi-Elliott.[14] The key team members hadoriginally been tasked by Elliott Automation to createan 8 bit computer in MOS and had helped establish aMOS Research Laboratory in Glenrothes, Scotland in1967.

Calculators were becoming the largest single market for semiconductors, and Pico and GI went on to have significant success in this burgeoning market. GI continued to innovate in microprocessors and microcontrollers with products including the PIC1600, PIC1640 and PIC1650. In 1987 the GI Microelectronics business was spun out into the very successful PIC microcontroller business.

CADC

In 1968, Garrett AiResearch (which employed designers Ray Holt and Steve Geller) was invited to produce a digital computer to compete with electromechanical systems then under development for the main flight control computer in the US Navy's new F-14 Tomcat fighter. The design was complete by 1970, and used a MOS-based chipset as the core CPU. The design was significantly (approximately 20 times) smaller and much more reliable than the mechanical systems it competed against, and was used in all of the early Tomcat models. This system contained "a 20-bit, pipelined, parallel multi-microprocessor". The Navy refused to allow publication of the design until 1997. For this reason the CADC, and the MP944 chipset it used, are fairly unknown.[15]

Ray Holt graduated from California Polytechnic University in 1968, and began his computer design career with the CADC. From its inception, it was shrouded in secrecy until 1998 when, at Holt's request, the US Navy allowed the documents into the public domain. Since then several have debated whether this was the first microprocessor. Holt has stated that no one has compared this microprocessor with those that came later.[16] According to Parab et al. (2007), "The scientific papers and literature published around 1971 reveal that the MP944 digital processor used for the F-14 Tomcat aircraft of the US Navy qualifies as the first microprocessor. Although interesting, it was not a single-chip processor, and was not general purpose – it was more like a set of parallel building blocks you could use to make a special-purpose DSP form. It indicates that today's industry theme of converging DSP-microcontroller architectures was started in 1971."[17] This convergence of DSP and microcontroller architectures is known as a digital signal controller.


Gilbert Hyatt

Gilbert Hyatt was awarded a patent claiming an invention pre-dating both TI and Intel, describing a "microcontroller".[18] The patent was later invalidated, but not before substantial royalties were paid out.[19] [20]

Four-Phase Systems AL1

The Four-Phase Systems AL1 was an 8-bit bit-slice chip containing eight registers and an ALU.[21] It was designed by Lee Boysel in 1969.[22] [23] [24] At the time, it formed part of a nine-chip, 24-bit CPU with three AL1s, but it was later called a microprocessor when, in response to 1990s litigation by Texas Instruments, a demonstration system was constructed where a single AL1 formed part of a courtroom demonstration computer system, together with RAM, ROM, and an input-output device.[25]

8-bit designs

The Intel 4004 was followed in 1972 by the Intel 8008, the world's first 8-bit microprocessor. According to A History of Modern Computing (MIT Press), pp. 220–21, Intel entered into a contract with Computer Terminals Corporation, later called Datapoint, of San Antonio TX, for a chip for a terminal they were designing. Datapoint later decided not to use the chip, and Intel marketed it as the 8008 in April 1972. It was the basis for the famous "Mark-8" computer kit advertised in the magazine Radio-Electronics in 1974.

The 8008 was the precursor to the very successful Intel 8080 (1974), Zilog Z80 (1976), and derivative Intel 8-bit processors. The competing Motorola 6800 was released in August 1974 and the similar MOS Technology 6502 in 1975 (designed largely by the same people). The 6502 rivaled the Z80 in popularity during the 1980s.

A low overall cost, small packaging, simple computer bus requirements, and sometimes the integration of extra circuitry (e.g. the Z80's built-in memory refresh circuitry) allowed the home computer "revolution" to accelerate sharply in the early 1980s. This delivered such inexpensive machines as the Sinclair ZX-81, which sold for US$99.

The Western Design Center, Inc. (WDC) introduced the CMOS 65C02 in 1982 and licensed the design to several firms. It was used as the CPU in the Apple IIe and IIc personal computers as well as in medical implantable-grade pacemakers and defibrillators, and in automotive, industrial and consumer devices. WDC pioneered the licensing of microprocessor designs, later followed by ARM and other microprocessor Intellectual Property (IP) providers in the 1990s.

Motorola introduced the MC6809 in 1978, an ambitious and well thought-through 8-bit design, source compatible with the 6800 and implemented using purely hard-wired logic. (Subsequent 16-bit microprocessors typically used microcode to some extent, as CISC design requirements were getting too complex for purely hard-wired logic.)

Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to its innovative and powerful instruction set architecture.

A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (aka CDP1802, RCA COSMAC), introduced in 1976, which was used onboard the Galileo probe to Jupiter (launched 1989, arrived 1995). RCA COSMAC was the first to implement CMOS technology. The CDP1802 was used because it could be run at very low power, and because a variant was available fabricated using a special production process (silicon on sapphire), providing much better protection against cosmic radiation and electrostatic discharge than that of any other processor of the era. Thus, the SOS version of the 1802 was said to be the first radiation-hardened microprocessor.

The RCA 1802 had what is called a static design, meaning that the clock frequency could be made arbitrarily low, even to 0 Hz, a total stop condition. This let the Galileo spacecraft use minimum electric power for long uneventful stretches of a voyage. Timers and/or sensors would awaken or improve the performance of the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication.


12-bit designs

The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a range of peripheral support and memory ICs. The microprocessor recognised the DEC PDP-8 minicomputer instruction set. As such it was sometimes referred to as the CMOS-PDP8. Since it was also produced by Harris Corporation, it was also known as the Harris HM-6100. By virtue of its CMOS technology and associated benefits, the 6100 was being incorporated into some military designs until the early 1980s.

16-bit designs

The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8.

Other early multi-chip 16-bit microprocessors include one used by Digital Equipment Corporation (DEC) in the LSI-11 OEM board set and the packaged PDP 11/03 minicomputer, and the Fairchild Semiconductor MicroFlame 9440, both of which were introduced in the 1975 to 1976 timeframe.

In 1975, National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by an NMOS version, the INS8900.

Another early single-chip 16-bit microprocessor was TI's TMS 9900, which was also compatible with their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package, moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110.

The Western Design Center, Inc. (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.

Intel followed a different path, having no minicomputers to emulate, and instead "upsized" their 8080 design into the 16-bit Intel 8086, the first member of the x86 family, which powers most modern PC-type computers. Intel introduced the 8086 as a cost-effective way of porting software from the 8080 lines, and succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an external 8-bit data bus, was the microprocessor in the first IBM PC, the model 5150. Following up their 8086 and 8088, Intel released the 80186, 80286 and, in 1985, the 32-bit 80386, cementing their PC market dominance with the processor family's backwards compatibility. The 8086 and 80186 had a crude method of segmentation, while the 80286 introduced a full-featured segmented memory management unit (MMU), and the 80386 introduced a flat 32-bit memory model with paged memory management.


32-bit designs

Upper interconnect layers on an Intel 80486DX2 die.

16-bit designs had only been on the market briefly when 32-bit implementations started to appear.

The most significant of the 32-bit designs is the MC68000, introduced in 1979. The 68K, as it was widely known, had 32-bit registers but used 16-bit internal data paths and a 16-bit external data bus to reduce pin count, and supported only 24-bit addresses. Motorola generally described it as a 16-bit processor, though it clearly has a 32-bit architecture. The combination of high performance, large (16 megabytes, or 2^24 bytes) memory space and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did a host of other designs in the mid-1980s, including the Atari ST and Commodore Amiga.

The world's first single-chip fully 32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980 and general production in 1982.[26] [27] After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world's first desktop supermicrocomputer; in the "Companion", the world's first 32-bit laptop computer; and in "Alexander", the world's first book-sized supermicrocomputer, featuring ROM-pack memory cartridges similar to today's gaming consoles. All these systems ran the UNIX System V operating system.

Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a commercial success. It had an advanced capability-based object-oriented architecture, but poor performance compared to contemporary architectures such as Intel's own 80286 (introduced 1982), which was almost four times as fast on typical benchmark tests. However, the results for the iAPX 432 were partly due to a rushed and therefore suboptimal Ada compiler.

The ARM first appeared in 1985. This is a RISC processor design, which has since come to dominate the 32-bit embedded systems processor space, due in large part to its power efficiency, its licensing model, and its wide selection of system development tools. Semiconductor manufacturers generally license cores such as the ARM11 and integrate them into their own system-on-a-chip products; only a few such vendors are licensed to modify the ARM cores. Most cell phones include an ARM processor, as do a wide variety of other products. There are microcontroller-oriented ARM cores without virtual memory support, as well as SMP applications processors with virtual memory.

Motorola's success with the 68000 led to the MC68010, which added virtual memory support. The MC68020, introduced in 1985, added full 32-bit data and address busses. The 68020 became hugely popular in the Unix supermicrocomputer market, and many small companies (e.g., Altos, Charles River Data Systems) produced desktop-size systems. The MC68030 was introduced next, improving upon the previous design by integrating the MMU into the chip. The continued success led to the MC68040, which included an FPU for better math performance. A 68050 failed to achieve its performance goals and was not released, and the follow-up MC68060 was released into a market saturated by much faster RISC designs. The 68K family faded from the desktop in the early 1990s.

Other large companies designed the 68020 and follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were Intel Pentiums in PCs.[28] The ColdFire processor cores are derivatives of the venerable 68020.


During this time (early to mid-1980s), National Semiconductor introduced a very similar 16-bit-pinout, 32-bit internal microprocessor called the NS 16032 (later renamed 32016); the full 32-bit version was named the NS 32032. Later, the NS 32132 was introduced, which allowed two CPUs to reside on the same memory bus with built-in arbitration. The NS32016/32 outperformed the MC68000/10, but the NS32332, which arrived at approximately the same time as the MC68020, did not have enough performance. The third-generation chip, the NS32532, was different. It had about double the performance of the MC68030, which was released around the same time. The appearance of RISC processors like the AM29000 and MC88000 (now both dead) influenced the architecture of the final core, the NS32764. Technically advanced, using a superscalar RISC core, internally overclocked, with a 64-bit bus, it was still capable of executing Series 32000 instructions through real-time translation.

When National Semiconductor decided to leave the Unix market, the chip was redesigned into the Swordfish embedded processor with a set of on-chip peripherals. The chip turned out to be too expensive for the laser printer market and was killed. The design team went to Intel and there designed the Pentium processor, which is very similar to the NS32764 core internally. The big success of the Series 32000 was in the laser printer market, where the NS32CG16 with microcoded BitBlt instructions had very good price/performance and was adopted by large companies like Canon. By the mid-1980s, Sequent introduced the first symmetric multiprocessor (SMP) server-class computer using the NS 32032. This was one of the design's few wins, and it disappeared in the late 1980s. The MIPS R2000 (1984) and R3000 (1989) were highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers by SGI, among others. Other designs included the interesting Zilog Z80000, which arrived too late to market to stand a chance and disappeared quickly.

In the late 1980s, "microprocessor wars" started killing off some of the microprocessors. Apparently, with only one major design win, Sequent, the NS 32032 just faded out of existence, and Sequent switched to Intel microprocessors.

From 1985 to 2003, the 32-bit x86 architectures became increasingly dominant in desktop, laptop, and server markets, and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix built later versions of the architecture based on their own designs. During this span, these processors increased in complexity (transistor count) and capability (instructions/second) by at least three orders of magnitude. Intel's Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the public at large.

64-bit designs in personal computers

While 64-bit microprocessor designs have been in use in several markets since the early 1990s, the early 2000s saw the introduction of 64-bit microprocessors targeted at the PC market.

With AMD's introduction of a 64-bit architecture backwards-compatible with x86, x86-64 (also called AMD64), in September 2003, followed by Intel's near fully compatible 64-bit extensions (first called IA-32e or EM64T, later renamed Intel 64), the 64-bit desktop era began. Both versions can run 32-bit legacy applications without any performance penalty as well as new 64-bit software. With operating systems Windows XP x64, Windows Vista x64, Windows 7 x64, Linux, BSD and Mac OS X that run 64-bit natively, the software is also geared to fully utilize the capabilities of such processors. The move to 64 bits is more than just an increase in register size from IA-32, as it also doubles the number of general-purpose registers.

The move to 64 bits by PowerPC processors had been intended since the processors' design in the early 90s and was not a major cause of incompatibility. Existing integer registers are extended, as are all related data pathways, but, as was the case with IA-32, both floating-point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal.


Multicore designs

Front of Pentium D dual-core processor

Back of Pentium D dual-core processor

A different approach to improving a computer's performance is to add extra processors, as in symmetric multiprocessing designs, which have been popular in servers and workstations since the early 1990s. Keeping up with Moore's Law is becoming increasingly challenging as chip-making technologies approach their physical limits.

In response, microprocessor manufacturers look for other ways to improve performance, in order to hold on to the momentum of constant upgrades in the market.

A multi-core processor is simply a single chip containing more than one microprocessor core, effectively multiplying the potential performance by the number of cores (as long as the operating system and software are designed to take advantage of more than one processor). Some components, such as the bus interface and second-level cache, may be shared between cores. Because the cores are physically very close, they interface at much faster clock rates compared to discrete multiprocessor systems, improving overall system performance.

In 2005, the first personal computer dual-core processors were announced, and as of 2009 dual-core and quad-core processors are widely used in servers, workstations and PCs, while six- and eight-core processors will be available for high-end applications in both the home and professional environments.

Sun Microsystems has released the Niagara and Niagara 2 chips, both of which feature an eight-core design. The Niagara 2 supports more threads and operates at 1.6 GHz.

High-end Intel Xeon processors that are on the LGA771 socket are DP (dual processor) capable, as is the Intel Core 2 Extreme QX9775, also used in the Mac Pro by Apple and the Intel Skulltrail motherboard. With the transition to the LGA1366 and LGA1156 sockets and the Intel i7 and i5 chips, quad-core is now considered mainstream; with the release of the i7-980X, six-core processors are now well within reach.

RISC

In the mid-1980s to early 1990s, a crop of new high-performance Reduced Instruction Set Computer (RISC) microprocessors appeared, influenced by discrete RISC-like CPU designs such as the IBM 801 and others. RISC microprocessors were initially used in special-purpose machines and Unix workstations, but then gained wide acceptance in other roles.

In 1986, HP released its first system with a PA-RISC CPU. The first commercial RISC microprocessor design was released either by MIPS Computer Systems, the 32-bit R2000 (the R1000 was not released), or by Acorn Computers, the 32-bit ARM2 in 1987. The R3000 made the design truly practical, and the R4000 introduced the world's first commercially available 64-bit RISC microprocessor. Competing projects would result in the IBM POWER and Sun SPARC architectures. Soon every major vendor was releasing a RISC design, including the AT&T CRISP, AMD 29000, Intel i860 and Intel i960, Motorola 88000, and DEC Alpha.

As of 2007, two 64-bit RISC architectures are still produced in volume for non-embedded applications: SPARC and Power ISA.


Special-purpose designs

A microprocessor is a general-purpose system. Several specialized processing devices have followed from the technology. Microcontrollers integrate a microprocessor with peripheral devices for control of embedded systems. A digital signal processor (DSP) is specialized for signal processing. Graphics processing units may have no, limited, or general programming facilities. For example, GPUs through the 1990s were mostly non-programmable and have only recently gained limited facilities like programmable vertex shaders.

Market statistics

In 2003, about $44 billion (USD) worth of microprocessors were manufactured and sold.[29] Although about half of that money was spent on CPUs used in desktop or laptop personal computers, those account for only about 0.2% of all CPUs sold. About 55% of all CPUs sold in the world are 8-bit microcontrollers, over two billion of which were sold in 1997.[30]

As of 2002, less than 10% of all the CPUs sold in the world are 32-bit or more. Of all the 32-bit CPUs sold, about 2% are used in desktop or laptop personal computers. Most microprocessors are used in embedded control applications such as household appliances, automobiles, and computer peripherals. Taken as a whole, the average price for a microprocessor, microcontroller, or DSP is just over $6.[31]

About ten billion CPUs were manufactured in 2008. About 98% of new CPUs produced each year are embedded.[32]

See also
• Microprocessor chronology
• List of instruction sets
• List of microprocessors
• Comparison of CPU architectures
• Arithmetic logic unit
• Central processing unit
• Floating point unit

Notes and references
[1] Osborne, Adam (1980). An Introduction to Microcomputers. Volume 1: Basic Concepts (2nd ed.). Berkeley, California: Osborne-McGraw Hill. ISBN 0-931988-34-9.
[2] Moore, Gordon (19 April 1965). "Cramming more components onto integrated circuits" (ftp://download.intel.com/museum/Moores_Law/Articles-Press_Releases/Gordon_Moore_1965_Article.pdf) (PDF). Electronics 38 (8). Retrieved 2009-12-23.
[3] Excerpts from A Conversation with Gordon Moore: Moore's Law (ftp://download.intel.com/museum/Moores_Law/Video-Transcripts/Excepts_A_Conversation_with_Gordon_Moore.pdf) (PDF). Intel. 2005. Retrieved 2009-12-23.
[4] Hodgin, Rick (3 December 2007). "Six fold reduction in semiconductor power loss, a faster, lower heat process technology" (http://www.tgdaily.com/content/view/35094/113/). TG Daily (DD&M). Retrieved 2009-12-23.
[5] Mack, Pamela E. (30 November 2005). "The Microcomputer Revolution" (http://www.clemson.edu/caah/history/FacultyPages/PamMack/lec122/micro.htm). Retrieved 2009-12-23.
[6] History in the Computing Curriculum (http://www.hofstra.edu/pdf/CompHist_9812tla6.PDF) (PDF). Retrieved 2009-12-23.
[7] Karam, Andrew P. (2000). "Advances in Microprocessor Technology". In Schlager, Neil; Lauer, Josh. Science and Its Times. Farmington Hills, MI: The Gale Group. pp. 525–528.
[8] Faggin, Federico; Hoff, Marcian E., Jr.; Mazor, Stanley; Shima, Masatoshi (December 1996). "The History of the 4004" (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=546561). IEEE Micro 16 (6): 10–20.
[9] Faggin, F.; Klein, T.; Vadasz, L. (23 October 1968). "Insulated Gate Field Effect Transistor Integrated Circuits with Silicon Gates" (http://www.intel4004.com/images/iedm_covart.jpg) (JPEG image). International Electron Devices Meeting. IEEE Electron Devices Group. Retrieved 2009-12-23.
[10] Augarten, Stan (1983). The Most Widely Used Computer on a Chip: The TMS 1000 (http://smithsonianchips.si.edu/augarten/p38.htm). New Haven and New York: Ticknor & Fields. ISBN 0-89919-195-9. Retrieved 2009-12-23.
[11] http://www.google.com/patents?vid=3757306
[12] http://www.google.com/patents?vid=4074351
[13] McGonigal, James (20 September 2006). "Microprocessor History: Foundations in Glenrothes, Scotland" (http://www.spingal.plus.com/micro). McGonigal personal website. Retrieved 2009-12-23.
[14] Tout, Nigel. "ANITA at its Zenith" (http://anita-calculators.info/html/anita_at_its_zenith.html). Bell Punch Company and the ANITA calculators. Retrieved 2010-07-25.
[15] Holt, Ray M. "World's First Microprocessor Chip Set" (http://web.archive.org/web/20100725060322/http://www.microcomputerhistory.com/index.php). Ray M. Holt website. Archived from the original (http://www.microcomputerhistory.com) on 2010-07-25. Retrieved 2010-07-25.
[16] Holt, Ray. "Lecture: Microprocessor Design and Development for the US Navy F14 Fighter Jet" (http://www.pdl.cmu.edu/SDI/2001/092701.html). Room 8220, Wean Hall, Carnegie Mellon University, Pittsburgh, PA, US (27 September 2001). Retrieved 2010-07-25.
[17] Parab, Jivan S.; Shelake, Vinod G.; Kamat, Rajanish K.; Naik, Gourish M. (2007). Exploring C for Microcontrollers: A Hands on Approach (http://ee.sharif.edu/~sakhtar3/books/Exploring C for Microcontrollers.pdf) (PDF). Springer. p. 4. ISBN 978-1-4020-6067-0. Retrieved 2010-07-25.
[18] Hyatt, Gilbert P., "Single chip integrated circuit computer architecture", Patent 4942516 (http://www.google.com/patents/about?id=cNcbAAAAEBAJ), issued July 17, 1990.
[19] "The Gilbert Hyatt Patent" (http://www.intel4004.com/hyatt.htm). intel4004.com. Federico Faggin. Retrieved 2009-12-23.
[20] Crouch, Dennis (1 July 2007). "Written Description: CAFC Finds Prima Facie Rejection (Hyatt v. Dudas (Fed. Cir. 2007))" (http://www.patentlyo.com/patent/2007/07/hyatt-v-dudas-f.html). Patently-O blog. Retrieved 2009-12-23.
[21] Bassett, Ross (2003). "When is a Microprocessor not a Microprocessor? The Industrial Construction of Semiconductor Innovation" (http://books.google.com/books?id=rsRJTiu1h9MC). In Finn, Bernard. Exposing Electronics. Michigan State University Press. p. 121. ISBN 0-87013-658-5.
[22] "1971 - Microprocessor Integrates CPU Function onto a Single Chip" (http://www.computerhistory.org/semiconductor/timeline/1971-MPU.html). The Silicon Engine. Computer History Museum. Retrieved 2010-07-25.
[23] Schaller, Robert R. (15 April 2004). "Dissertation: Technological Innovation in the Semiconductor Industry: A Case Study of the International Technology Roadmap for Semiconductors" (http://web.archive.org/web/20061219012629/http://home.comcast.net/~gordonepeterson2/schaller_dissertation_2004.pdf) (PDF). George Mason University. Archived from the original (http://home.comcast.net/~gordonepeterson2/schaller_dissertation_2004.pdf) on 2006-12-19. Retrieved 2010-07-25.
[24] RW (3 March 1995). "Interview with Gordon E. Moore" (http://www-sul.stanford.edu/depts/hasrg/histsci/silicongenesis/moore-ntb.html). LAIR History of Science and Technology Collections. Los Altos Hills, California: Stanford University.
[25] Bassett 2003. pp. 115, 122.
[26] "Shoji, M. Bibliography" (http://cm.bell-labs.com/cm/cs/bib/shoji.bib). Bell Laboratories. 7 October 1998. Retrieved 2009-12-23.
[27] "Timeline: 1982–1984" (http://www.bell-labs.com/org/physicalsciences/timeline/span23.html). Physical Sciences & Communications at Bell Labs. Bell Labs, Alcatel-Lucent. 17 January 2001. Retrieved 2009-12-23.
[28] Turley, Jim (July 1998). "MCore: Does Motorola Need Another Processor Family?" (http://web.archive.org/web/19980702003323/http://www.embedded.com/98/9807sr.htm). Embedded Systems Design. TechInsights (United Business Media). Archived from the original (http://www.embedded.com/98/9807sr.htm) on 1998-07-02. Retrieved 2009-12-23.
[29] World Semiconductor Trade Statistics. "WSTS Semiconductor Market Forecast World Release Date: 1 June 2004 - 6:00 UTC" (http://web.archive.org/web/20041207091926/http://www.wsts.org/press.html). Press release. Archived from the original (http://www.wsts.org/press.html) on 2004-12-07.
[30] Cantrell, Tom (1998). "Microchip on the March" (http://web.archive.org/web/20070220134759/http://www.circuitcellar.com/library/designforum/silicon_update/3/index.asp). Archived from the original (http://www.circuitcellar.com/library/designforum/silicon_update/3/) on 2007-02-20.
[31] Turley, Jim (18 December 2002). "The Two Percent Solution" (http://www.embedded.com/shared/printableArticle.jhtml?articleID=9900861). Embedded Systems Design. TechInsights (United Business Media). Retrieved 2009-12-23.
[32] Barr, Michael (1 August 2009). "Real men program in C" (http://www.embedded.com/columns/barrcode/218600142?pgno=2). Embedded Systems Design. TechInsights (United Business Media). p. 2. Retrieved 2009-12-23.
• Ray, A. K.; Bhurchand, K. M. Advanced Microprocessors and Peripherals. India: Tata McGraw-Hill.


External links
• Dirk Oppelt. "The CPU Collection" (http://www.cpu-collection.de/). Retrieved 2009-12-23.
• Gennadiy Shvets. "CPU-World" (http://www.cpu-world.com/). Retrieved 2009-12-23.
• Jérôme Cremet. "The Gecko's CPU Library" (http://gecko54000.free.fr/). Retrieved 2009-12-23.
• "How Microprocessors Work" (http://computer.howstuffworks.com/microprocessor.htm). Retrieved 2009-12-23.
• William Blair. "IC Die Photography" (http://diephotos.blogspot.com/). Retrieved 2009-12-23.
• John Bayko (December 2003). "Great Microprocessors of the Past and Present" (http://jbayko.sasktelwebsite.net/cpu.html). Retrieved 2009-12-23.
• Wade Warner (22 December 2004). "Great moments in microprocessor history" (http://www-106.ibm.com/developerworks/library/pa-microhist.html?ca=dgr-mw08MicroHistory). IBM. Retrieved 2009-12-23.
• Ray M. Holt. "theDocuments" (http://firstmicroprocessor.com/?page_id=17). World's First Microprocessor. Retrieved 2009-12-23.

Microcontroller

The die from an Intel 8742, an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip.

A microcontroller (sometimes abbreviated µC, uC or MCU) is a small computer on a single integrated circuit containing a processor core, memory, and programmable input/output peripherals. Program memory in the form of NOR flash or OTP ROM is also often included on chip, as well as a typically small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general-purpose applications.

Microcontrollers are used in automatically controlled products and devices, such as automobile engine control systems, implantable medical devices, remote controls, office machines, appliances, power tools, and toys. By reducing the size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices, microcontrollers make it economical to digitally control even more devices and processes. Mixed-signal microcontrollers are common, integrating the analog components needed to control non-digital electronic systems.

Some microcontrollers may use four-bit words and operate at clock rates as low as 4 kHz for low power consumption (milliwatts or microwatts). They will generally have the ability to retain functionality while waiting for an event such as a button press or other interrupt; power consumption while sleeping (CPU clock and most peripherals off) may be just nanowatts, making many of them well suited for long-lasting battery applications. Other microcontrollers may serve performance-critical roles, where they may need to act more like a digital signal processor (DSP), with higher clock speeds and power consumption.


Embedded design

A microcontroller can be considered a self-contained system with a processor, memory and peripherals and can be used as an embedded system.[1] The majority of microcontrollers in use today are embedded in other machinery, such as automobiles, telephones, appliances, and peripherals for computer systems. These are called embedded systems. While some embedded systems are very sophisticated, many have minimal requirements for memory and program length, with no operating system and low software complexity. Typical input and output devices include switches, relays, solenoids, LEDs, small or custom LCD displays, radio frequency devices, and sensors for data such as temperature, humidity, and light level. Embedded systems usually have no keyboard, screen, disks, printers, or other recognizable I/O devices of a personal computer, and may lack human interaction devices of any kind.

Interrupts

Microcontrollers must provide real-time (predictable, though not necessarily fast) response to events in the embedded system they are controlling. When certain events occur, an interrupt system can signal the processor to suspend processing the current instruction sequence and to begin an interrupt service routine (ISR, or "interrupt handler"). The ISR will perform any processing required based on the source of the interrupt before returning to the original instruction sequence. Possible interrupt sources are device dependent, and often include events such as an internal timer overflow, completion of an analog-to-digital conversion, a logic-level change on an input such as from a button being pressed, and data received on a communication link. Where power consumption is important, as in battery-operated devices, interrupts may also wake a microcontroller from a low-power sleep state where the processor is halted until required to do something by a peripheral event.
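To make the shape of this concrete, the following minimal C sketch shows a timer overflow handler. The register name, flag bit, and address are invented for illustration, and the way a routine is marked and registered as an ISR is toolchain-specific (indicated here only by a comment), so treat this as a sketch rather than code for any particular part.

    #include <stdint.h>

    /* Hypothetical memory-mapped timer registers; a real part's data
     * sheet defines the actual addresses and bit layout. */
    #define TIMER0_FLAGS (*(volatile uint8_t *)0x0040u)
    #define TIMER0_OVF   0x01u   /* overflow interrupt flag (invented) */

    volatile uint32_t tick_count = 0;   /* shared with main-line code */

    /* On a real toolchain this function would carry a vendor-specific
     * ISR qualifier or be placed in the interrupt vector table. */
    void timer0_isr(void)
    {
        if (TIMER0_FLAGS & TIMER0_OVF) {
            TIMER0_FLAGS &= (uint8_t)~TIMER0_OVF; /* acknowledge source */
            tick_count++;                         /* minimal processing */
        }
        /* returning resumes the original instruction sequence */
    }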

Programs

Microcontroller programs must fit in the available on-chip program memory, since it would be costly to provide a system with external, expandable memory. Compilers and assemblers are used to convert high-level language and assembly language code into a compact machine code for storage in the microcontroller's memory. Depending on the device, the program memory may be permanent, read-only memory that can only be programmed at the factory, or it may be field-alterable flash or erasable read-only memory.

Other microcontroller features

Microcontrollers usually contain from several to dozens of general-purpose input/output pins (GPIO). GPIO pins are software configurable to either an input or an output state. When GPIO pins are configured to an input state, they are often used to read sensors or external signals. Configured to the output state, GPIO pins can drive external devices such as LEDs or motors.

Many embedded systems need to read sensors that produce analog signals. This is the purpose of the analog-to-digital converter (ADC). Since processors are built to interpret and process digital data, i.e. 1s and 0s, they are not able to do anything with the analog signals that may be sent to them by a device. So the analog-to-digital converter is used to convert the incoming data into a form that the processor can recognize. A less common feature on some microcontrollers is a digital-to-analog converter (DAC) that allows the processor to output analog signals or voltage levels.

In addition to the converters, many embedded microprocessors include a variety of timers as well. One of the most common types of timers is the Programmable Interval Timer (PIT). A PIT may either count down from some value to zero, or up to the capacity of the count register, overflowing to zero. Once it reaches zero, it sends an interrupt to the processor indicating that it has finished counting. This is useful for devices such as thermostats, which periodically test the temperature around them to see if they need to turn the air conditioner or heater on or off.
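As a sketch of how such peripherals typically look to software, the following C fragment drives a GPIO pin and performs one polled ADC conversion. All register names, addresses, and bit positions are invented for illustration; a real device's vendor header defines the actual ones.

    #include <stdint.h>

    /* Invented register map for a hypothetical microcontroller. */
    #define GPIO_DIR   (*(volatile uint8_t  *)0x1000u) /* 1=output, 0=input */
    #define GPIO_OUT   (*(volatile uint8_t  *)0x1001u)
    #define ADC_CTRL   (*(volatile uint8_t  *)0x1010u)
    #define ADC_START  0x01u
    #define ADC_DONE   0x80u
    #define ADC_RESULT (*(volatile uint16_t *)0x1012u)

    void led_init(void) { GPIO_DIR |= 0x01u; }      /* pin 0 as output */

    void led_set(int on)
    {
        if (on) GPIO_OUT |= 0x01u;                  /* drive pin high  */
        else    GPIO_OUT &= (uint8_t)~0x01u;        /* drive pin low   */
    }

    uint16_t adc_read(void)
    {
        ADC_CTRL = ADC_START;             /* start one conversion        */
        while (!(ADC_CTRL & ADC_DONE))    /* busy-wait until it finishes */
            ;
        return ADC_RESULT;                /* digitized analog input      */
    }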


A Time Processing Unit (TPU) is a sophisticated timer. In addition to counting down, the TPU can detect input events, generate output events, and perform other useful operations.

A dedicated Pulse Width Modulation (PWM) block makes it possible for the CPU to control power converters, resistive loads, motors, etc., without using lots of CPU resources in tight timer loops.

A Universal Asynchronous Receiver/Transmitter (UART) block makes it possible to receive and transmit data over a serial line with very little load on the CPU. Dedicated on-chip hardware also often includes capabilities to communicate with other devices (chips) in digital formats such as I2C and Serial Peripheral Interface (SPI).
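A polled UART transmit routine illustrates how little the CPU has to do when serialization happens in hardware; the status and data register names and the ready flag below are, again, invented stand-ins for a specific part's definitions.

    #include <stdint.h>

    #define UART_STATUS (*(volatile uint8_t *)0x2000u) /* invented        */
    #define UART_TX_RDY 0x20u   /* transmit buffer empty flag (invented) */
    #define UART_DATA   (*(volatile uint8_t *)0x2001u)

    /* The UART hardware shifts each byte out on the serial line; the
     * CPU only waits for buffer space rather than timing each bit. */
    void uart_puts(const char *s)
    {
        while (*s) {
            while (!(UART_STATUS & UART_TX_RDY))
                ;                         /* wait for room in TX buffer */
            UART_DATA = (uint8_t)*s++;
        }
    }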

Higher integration

In contrast to general-purpose CPUs, micro-controllers may not implement an external address or data bus, as they integrate RAM and non-volatile memory on the same chip as the CPU. Using fewer pins, the chip can be placed in a much smaller, cheaper package.

Integrating the memory and other peripherals on a single chip and testing them as a unit increases the cost of that chip, but often results in decreased net cost of the embedded system as a whole. Even if the cost of a CPU that has integrated peripherals is slightly more than the cost of a CPU and external peripherals, having fewer chips typically allows a smaller and cheaper circuit board, and reduces the labor required to assemble and test the circuit board.

A micro-controller is a single integrated circuit, commonly with the following features:
• central processing unit - ranging from small and simple 4-bit processors to complex 32- or 64-bit processors
• discrete input and output bits, allowing control or detection of the logic state of an individual package pin
• serial input/output such as serial ports (UARTs)
• other serial communications interfaces like I²C, Serial Peripheral Interface and Controller Area Network for system interconnect
• peripherals such as timers, event counters, PWM generators, and watchdog timers
• volatile memory (RAM) for data storage
• ROM, EPROM, EEPROM or Flash memory for program and operating parameter storage
• clock generator - often an oscillator for a quartz timing crystal, resonator or RC circuit
• many include analog-to-digital converters
• in-circuit programming and debugging support

This integration drastically reduces the number of chips and the amount of wiring and circuit board space that would be needed to produce equivalent systems using separate chips. Furthermore, and on low pin count devices in particular, each pin may interface to several internal peripherals, with the pin function selected by software. This allows a part to be used in a wider variety of applications than if pins had dedicated functions. Micro-controllers have proved to be highly popular in embedded systems since their introduction in the 1970s.

Some microcontrollers use a Harvard architecture: separate memory buses for instructions and data, allowing accesses to take place concurrently. Where a Harvard architecture is used, instruction words for the processor may be a different bit size than the length of internal memory and registers; for example, 12-bit instructions used with 8-bit data registers.

The decision of which peripherals to integrate is often difficult. Microcontroller vendors often trade operating frequencies and system design flexibility against time-to-market requirements from their customers and overall lower system cost. Manufacturers have to balance the need to minimize the chip size against additional functionality.

Microcontroller architectures vary widely. Some designs include general-purpose microprocessor cores, with one or more ROM, RAM, or I/O functions integrated onto the package. Other designs are purpose-built for control applications. A micro-controller instruction set usually has many instructions intended for bit-wise operations to make control programs more compact.[2] For example, a general-purpose processor might require several instructions to test a bit in a register and branch if the bit is set, where a micro-controller could have a single instruction to provide that commonly required function.


Microcontrollers typically do not have a math coprocessor, so floating-point arithmetic is performed by software.
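Because software floating point is slow, fixed-point arithmetic is a common substitute on small parts. A minimal, host-runnable sketch of a 16.16 fixed-point multiply:

    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t fix16_t;          /* 16 integer bits, 16 fraction bits */
    #define FIX_ONE (1L << 16)

    /* Widen to 64 bits for the product, then drop 16 fraction bits. */
    static fix16_t fix_mul(fix16_t a, fix16_t b)
    {
        return (fix16_t)(((int64_t)a * b) >> 16);
    }

    int main(void)
    {
        fix16_t x = 3 * FIX_ONE + FIX_ONE / 2;           /* 3.5  */
        fix16_t y = 2 * FIX_ONE + FIX_ONE / 4;           /* 2.25 */
        printf("%f\n", fix_mul(x, y) / (double)FIX_ONE); /* 7.875000 */
        return 0;
    }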

Volumes

About 55% of all CPUs sold in the world are 8-bit microcontrollers and microprocessors. According to Semico, over four billion 8-bit microcontrollers were sold in 2006.[3]

A typical home in a developed country is likely to have only four general-purpose microprocessors but around three dozen microcontrollers. A typical mid-range automobile has as many as 30 or more microcontrollers. They can also be found in many electrical devices such as washing machines, microwave ovens, and telephones.

A PIC 18F8720 microcontroller in an 80-pin TQFP package.

Manufacturers have often produced special versions of their microcontrollers in order to help the hardware and software development of the target system. Originally these included EPROM versions that have a "window" on the top of the device through which program memory can be erased by ultraviolet light, ready for reprogramming after a programming ("burn") and test cycle. Since 1998, EPROM versions are rare and have been replaced by EEPROM and flash, which are easier to use (can be erased electronically) and cheaper to manufacture.

Other versions may be available where the ROM is accessed as an external device rather than as internal memory; however, these are becoming increasingly rare due to the widespread availability of cheap microcontroller programmers.

The use of field-programmable devices on a microcontroller may allow field update of the firmware or permit late factory revisions to products that have been assembled but not yet shipped. Programmable memory also reduces the lead time required for deployment of a new product.

Where hundreds of thousands of identical devices are required, using parts programmed at the time of manufacture can be an economical option. These "mask programmed" parts have the program laid down in the same way as the logic of the chip, at the same time.

Programming environments

Microcontrollers were originally programmed only in assembly language, but various high-level programming languages are now also in common use to target microcontrollers. These languages are either designed specially for the purpose, or versions of general-purpose languages such as the C programming language. Compilers for general-purpose languages will typically have some restrictions as well as enhancements to better support the unique characteristics of microcontrollers. Some microcontrollers have environments to aid developing certain types of applications. Microcontroller vendors often make tools freely available to make it easier to adopt their hardware.

Many microcontrollers are so quirky that they effectively require their own non-standard dialects of C, such as SDCC for the 8051, which prevent using standard tools (such as code libraries or static analysis tools) even for code unrelated to hardware features. Interpreters are often used to hide such low-level quirks.

Interpreter firmware is also available for some microcontrollers. For example, BASIC on the early Intel 8052 microcontrollers,[4] and BASIC and FORTH on the Zilog Z8[5] as well as some modern devices. Typically these interpreters support interactive programming.

Simulators are available for some microcontrollers, such as in Microchip's MPLAB environment. These allow a developer to analyze what the behavior of the microcontroller and their program should be if they were using the actual part.


A simulator will show the internal processor state and also that of the outputs, as well as allowing input signals to be generated. While most simulators are limited by being unable to simulate much other hardware in a system, they can exercise conditions that may otherwise be hard to reproduce at will in the physical implementation, and can be the quickest way to debug and analyze problems.

Recent microcontrollers are often integrated with on-chip debug circuitry that, when accessed by an in-circuit emulator via JTAG, allows debugging of the firmware with a debugger.

Types of microcontrollers

As of 2008 there are several dozen microcontroller architectures and vendors, including:
• Freescale 68HC11 (8-bit)
• Intel 8051
• Silicon Laboratories Pipelined 8051 Microcontrollers
• ARM processors (from many vendors) using ARM7 or Cortex-M3 cores are generally microcontrollers
• STMicroelectronics STM8 [6] (8-bit), ST10 (16-bit) and STM32 [6] (32-bit)
• Atmel AVR (8-bit), AVR32 (32-bit), and AT91SAM (32-bit)
• Freescale ColdFire (32-bit) and S08 (8-bit)
• Hitachi H8, Hitachi SuperH (32-bit)
• Hyperstone E1/E2 (32-bit, first full integration of RISC and DSP on one processor core [1996] [7])
• Infineon: 8-, 16- and 32-bit microcontrollers for automotive and industrial applications[8]
• MIPS (32-bit PIC32)
• NEC V850 (32-bit)
• PIC (8-bit PIC16, PIC18, 16-bit dsPIC33 / PIC24)
• PowerPC ISE
• PSoC (Programmable System-on-Chip)
• Rabbit 2000 (8-bit)
• Texas Instruments MSP430 (16-bit), C2000 (32-bit), and Stellaris (32-bit)
• Toshiba TLCS-870 (8-bit/16-bit)
• Zilog eZ8 (16-bit), eZ80 (8-bit)

and many others, some of which are used in a very narrow range of applications or are more like applications processors than microcontrollers. The microcontroller market is extremely fragmented, with numerous vendors, technologies, and markets. Note that many vendors sell (or have sold) multiple architectures.

Interrupt latency

In contrast to general-purpose computers, microcontrollers used in embedded systems often seek to optimize interrupt latency over instruction throughput. Issues include both reducing the latency and making it more predictable (to support real-time control).

When an electronic device causes an interrupt, the intermediate results (registers) have to be saved before the software responsible for handling the interrupt can run. They must also be restored after that software is finished. If there are more registers, this saving and restoring process takes more time, increasing the latency. Ways to reduce such context save/restore latency include having relatively few registers in the central processing unit (undesirable because it slows down most non-interrupt processing substantially), or at least having the hardware not save them all (this fails if the software then needs to compensate by saving the rest "manually"). Another technique involves spending silicon gates on "shadow registers": one or more duplicate registers used only by the interrupt software, perhaps supporting a dedicated stack.

Other factors affecting interrupt latency include:

Page 88: Micro Excellent

Microcontroller 86

• Cycles needed to complete current CPU activities. To minimize those costs, microcontrollers tend to have short pipelines (often three instructions or less), small write buffers, and ensure that longer instructions are continuable or restartable. RISC design principles ensure that most instructions take the same number of cycles, helping avoid the need for most such continuation/restart logic.

• The length of any critical section that needs to be interrupted. Entry to a critical section restricts concurrent data structure access. When a data structure must be accessed by an interrupt handler, the critical section must block that interrupt. Accordingly, interrupt latency is increased by however long that interrupt is blocked. When there are hard external constraints on system latency, developers often need tools to measure interrupt latencies and track down which critical sections cause slowdowns.
  • One common technique just blocks all interrupts for the duration of the critical section. This is easy to implement, but sometimes critical sections get uncomfortably long.
  • A more complex technique just blocks the interrupts that may trigger access to that data structure. This is often based on interrupt priorities, which tend to not correspond well to the relevant system data structures. Accordingly, this technique is used mostly in very constrained environments.

  • Processors may have hardware support for some critical sections. Examples include supporting atomic access to bits or bytes within a word, or other atomic access primitives like the LDREX/STREX exclusive access primitives introduced in the ARMv6 architecture.

• Interrupt nesting. Some microcontrollers allow higher-priority interrupts to interrupt lower-priority ones. This allows software to manage latency by giving time-critical interrupts higher priority (and thus lower and more predictable latency) than less-critical ones.

• Trigger rate. When interrupts occur back-to-back, microcontrollers may avoid an extra context save/restore cycle by a form of tail call optimization.

Lower end microcontrollers tend to support fewer interrupt latency controls than higher end ones.
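The simplest of the techniques above, blocking all interrupts around a shared data structure, looks like this in C. The enable/disable functions are stand-ins for whatever intrinsic the toolchain provides (many ARM toolchains, for example, expose __disable_irq() and __enable_irq()):

    #include <stdint.h>

    /* Stand-ins for toolchain-specific interrupt-masking intrinsics. */
    extern void disable_interrupts(void);
    extern void enable_interrupts(void);

    volatile uint32_t event_count;    /* also written by an ISR */

    /* While interrupts are masked the handler cannot run, so this
     * read-modify-write cannot be torn. Every cycle spent here adds
     * directly to worst-case interrupt latency, so keep it short. */
    uint32_t take_events(void)
    {
        uint32_t n;
        disable_interrupts();
        n = event_count;
        event_count = 0;
        enable_interrupts();
        return n;
    }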

History

The first single-chip microprocessor was the 4-bit Intel 4004, released in 1971. It was followed by the Intel 8008 and other more capable microprocessors over the next several years. These, however, all required external chips to implement a working system, raising total system cost and making it impossible to economically computerize appliances.

The first computer system on a chip optimized for control applications was the Intel 8048, released in 1975, with both RAM and ROM on the same chip. This chip would find its way into over one billion PC keyboards, and numerous other applications. At that time Intel's President, Luke J. Valenter, stated that the microcontroller was one of the most successful products in the company's history, and he expanded the division's budget by over 25%.

Most microcontrollers at this time had two variants. One had an erasable EPROM program memory, which was significantly more expensive than the PROM variant, which was only programmable once.

In 1993, the introduction of EEPROM memory allowed microcontrollers (beginning with the Microchip PIC16C84)[9] to be electrically erased quickly without the expensive package required for EPROM, allowing both rapid prototyping and in-system programming. The same year, Atmel introduced the first microcontroller using Flash memory.[10]

Other companies rapidly followed suit, with both memory types.

Cost has plummeted over time, with the cheapest 8-bit microcontrollers being available for under $0.25 in quantity (thousands) in 2009, and some 32-bit microcontrollers around $1 for similar quantities. Nowadays microcontrollers are cheap and readily available for hobbyists, with large online communities around certain processors.

Page 89: Micro Excellent

Microcontroller 87

In the future, MRAM could potentially be used in microcontrollers as it has infinite endurance and its incremental semiconductor wafer process cost is relatively low.

Microcontroller embedded memory technology

Since the emergence of microcontrollers, many different memory technologies have been used. Almost all microcontrollers have at least two different kinds of memory: a non-volatile memory for storing firmware and a read-write memory for temporary data.

Data

From the earliest microcontrollers to today, six-transistor SRAM is almost always used as the read/write working memory, with a few more transistors per bit used in the register file. MRAM could potentially replace it, as it is 4 to 10 times denser, which would make it more cost effective.

In addition to the SRAM, some microcontrollers also have internal EEPROM for data storage; and even ones that do not have any (or not enough) are often connected to an external serial EEPROM chip (such as the BASIC Stamp) or an external serial flash memory chip. A few recent microcontrollers, beginning in 2003, have "self-programmable" flash memory.[10]

Firmware

The earliest microcontrollers used hard-wired or mask ROM to store firmware. Later microcontrollers (such as the early versions of the Freescale 68HC11 and early PIC microcontrollers) had quartz windows that allowed ultraviolet light in to erase the EPROM. The Microchip PIC16C84, introduced in 1993,[11] was the first microcontroller to use EEPROM to store firmware. Also in 1993, Atmel introduced the first microcontroller using NOR Flash memory to store firmware.[10]

PSoC microcontrollers, introduced in 2002, store firmware in SONOS flash memory. MRAM could potentially be used to store firmware.

See also
• In-circuit emulator
• List of common microcontrollers
• Microarchitecture
• Microbotics
• Programmable logic controller
• PSoC
• Single-board microcontroller


Notes
[1] Heath, Steve (2003). Embedded systems design (http://books.google.com/books?id=BjNZXwH7HlkC&pg=PA11). EDN series for design engineers (2nd ed.). Newnes. pp. 11–12. ISBN 9780750655460.
[2] Easy Way to build a microcontroller project (http://www.popsci.com/diy/article/2009-01/dot-dot-programming)
[3] http://www.semico.com
[4] "8052-Basic Microcontrollers" (http://www.lvr.com/microc.htm) by Jan Axelson, 1994
[5] "Optimizing the Zilog Z8 Forth Microcontroller for Rapid Prototyping" by Robert Edwards, 1987, page 3. http://www.ornl.gov/info/reports/1987/3445602791343.pdf
[6] http://www.st.com/mcu/
[7] http://www.hyperstone.com/profile_overview_en,546.html
[8] www.infineon.com/mcu (http://www.infineon.com/mcu)
[9] http://microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=2018&mcparam=en013082
[10] "Atmel's Self-Programming Flash Microcontrollers" (http://www.atmel.com/dyn/resources/prod_documents/doc2464.pdf) by Odd Jostein Svendsli, 2003
[11] "Microchip unveils PIC16C84, a reprogrammable EEPROM-based 8-bit microcontroller" (http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=2018&mcparam=en013082), 1993

External links
• Microcontroller (http://www.dmoz.org/Business/Electronics_and_Electrical/Control_Systems/Microcontroller/) at the Open Directory Project
• Microcontroller.com - Embedded Systems industry website with tutorials and dedicated resources (http://www.microcontroller.com)
• Embedded Systems Design (http://www.embedded.com/mag.htm) magazine


Instruction cycle

An instruction cycle (sometimes called the fetch-and-execute cycle, fetch-decode-execute cycle, or FDX) is the basic operation cycle of a computer. It is the process by which a computer retrieves a program instruction from its memory, determines what actions the instruction requires, and carries out those actions. This cycle is repeated continuously by the central processing unit (CPU), from bootup to when the computer is shut down.

A diagram of the Fetch Execute Cycle.

Circuits used

The circuits used in the CPU during the cycle are:
• Program Counter (PC) - an incrementing counter that keeps track of the memory address of the next instruction to be executed
• Memory Address Register (MAR) - holds the address in memory of the next instruction to be executed
• Memory Data Register (MDR) - a two-way register that holds data fetched from memory (and ready for the CPU to process) or data waiting to be stored in memory
• Current Instruction Register (CIR) - a temporary holding ground for the instruction that has just been fetched from memory
• Control Unit (CU) - decodes the program instruction in the CIR, selecting machine resources such as a data source register and a particular arithmetic operation, and coordinates activation of those resources
• Arithmetic logic unit (ALU) - performs mathematical and logical operations

Instruction cycle

Each computer's CPU can have different cycles based on different instruction sets, but all will be similar to the following cycle:

1. Fetch the instruction
The next instruction is fetched from the memory address held in the program counter and placed in the instruction register.

2. Decode the instruction
The instruction decoder interprets the instruction. If the instruction has an indirect address, the effective address is read from main memory, and any required data is fetched from main memory to be processed and then placed into data registers. During this phase the instruction inside the IR (instruction register) is decoded.

3. Execute the instruction
The CU passes the decoded information as a sequence of control signals to the relevant function units of the CPU to perform the actions required by the instruction, such as reading values from registers, passing them to the ALU to perform mathematical or logic functions on them, and writing the result back to a register. If the ALU is involved, it sends a condition signal back to the CU.


4. Store results
The result generated by the operation is stored in main memory or sent to an output device. Based on the condition of any feedback from the ALU, the program counter may be updated to a different address from which the next instruction will be fetched. The cycle is then repeated.

Fetch cycle
Steps 1 and 2 of the instruction cycle are called the fetch cycle. These steps are the same for each instruction. The fetch cycle retrieves the instruction word, which contains an opcode and an operand.

Execute cycle
Steps 3 and 4 of the instruction cycle are part of the execute cycle. These steps change with each instruction. The actions performed fall into broad categories: processor-memory transfers, in which data is transferred between the CPU and memory or an I/O module; data processing, which applies mathematical or logical operations to data; and control alterations, such as a jump operation, which change the sequence of execution. A single instruction may combine several of these actions.

Initiating the cycle
The cycle starts immediately when power is applied to the system, using an initial PC value that is predefined for the system architecture (in Intel IA-32 CPUs, for instance, the predefined PC value is 0xfffffff0). Typically this address points to instructions in a read-only memory (ROM) which begin the process of loading the operating system. (That loading process is called booting.)[1]

The Fetch-Execute cycle in Transfer Notation
Expressed in register transfer notation:

    MAR ← [PC]
    MDR ← [[MAR]]      (fetch the instruction from memory)
    PC  ← [PC] + 1     (increment the PC for the next cycle)
    CIR ← [MDR]        (copy the instruction into the instruction register)

The registers used above, besides the ones described earlier, are the Memory Address Register (MAR) and the Memory Data Register (MDR), which are used (at least conceptually) in the accessing of memory.
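The cycle can also be made concrete in software. The following host-runnable C sketch implements a toy machine (the four opcodes and the memory layout are invented purely for illustration) whose main loop performs fetch, PC increment, decode, and execute as distinct steps:

    #include <stdint.h>
    #include <stdio.h>

    enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };   /* invented opcodes */

    int main(void)
    {
        /* program: acc = mem[16]; acc += mem[17]; mem[18] = acc; halt */
        uint8_t mem[32] = { OP_LOAD, 16, OP_ADD, 17, OP_STORE, 18, OP_HALT, 0 };
        uint8_t pc = 0, acc = 0;
        mem[16] = 5;
        mem[17] = 7;

        for (;;) {
            uint8_t opcode  = mem[pc];       /* fetch from memory at PC  */
            uint8_t operand = mem[pc + 1];
            pc += 2;                         /* increment PC for next cycle */
            switch (opcode) {                /* decode, then execute     */
            case OP_LOAD:  acc = mem[operand];  break;
            case OP_ADD:   acc += mem[operand]; break;
            case OP_STORE: mem[operand] = acc;  break;
            case OP_HALT:  printf("result: %u\n", mem[18]); /* prints 12 */
                           return 0;
            }
        }
    }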

References
[1] Bosky Agarwal (2004). "Instruction Fetch Execute Cycle" (http://www.cs.montana.edu/~bosky/cs518/ife/IFE.pdf). Retrieved 2010-07-09.


Computer memory

Computer memory types

Volatile

• DRAM, e.g. DDR SDRAM• SRAM• Upcoming

• T-RAM• Z-RAM• TTRAM

• Historical

• Delay line memory• Selectron tube• Williams tube

Non-volatile

• ROM

• PROM• EPROM• EEPROM

• Flash memory• FeRAM• MRAM• PRAM• Upcoming

• CBRAM• SONOS• RRAM• Racetrack memory• NRAM• Millipede

• Historical

• Drum memory• Magnetic core memory• Plated wire memory• Bubble memory• Twistor memory

In computing, memory refers to the state information of a computing system, as it is kept active in some physical structure. The term "memory" is used for the information in physical systems which are fast (i.e. RAM), as a distinction from physical systems which are slow to access (i.e. data storage). By design, the term "memory" refers to temporary state devices, whereas the term "storage" is reserved for permanent data. Advances in storage technology have blurred the distinction somewhat: memory kept on what is conventionally a storage system is called "virtual memory".

Colloquially, computer memory refers to the physical devices used to store data or programs (sequences of instructions) on a temporary or permanent basis for use in an electronic digital computer. Computers represent information in binary code, written as sequences of 0s and 1s. Each binary digit (or "bit") may be stored by any physical system that can be in either of two stable states, to represent 0 and 1. Such a system is called bistable. This could be an on-off switch, an electrical capacitor that can store or lose a charge, a magnet with its polarity up or down, or a surface that can have a pit or not. Today, capacitors and transistors, functioning as tiny electrical switches, are used for temporary storage, and either disks or tape with a magnetic coating, or plastic discs with patterns of pits, are used for long-term storage.


Computer memory usually refers to the semiconductor technology that is used to store information in electronic devices. Current primary computer memory makes use of integrated circuits consisting of silicon-based transistors. There are two main types of memory: volatile and non-volatile.

History

Detail of the back of a section of ENIAC, showing vacuum tubes

In the early 1940s, memory technology mostly permitted a capacity of a few bytes. The first programmable digital computer, the ENIAC, using thousands of octal-base radio vacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits which were held in the vacuum tube accumulators.

The next significant advance in computer memory was acoustic delay line memory, developed by J. Presper Eckert in the early 1940s. Through the construction of a glass tube filled with mercury and plugged at each end with a quartz crystal, delay lines could store bits of information within the quartz and transfer it through sound waves propagating through mercury. Delay line memory would be limited to a capacity of up to a few hundred thousand bits to remain efficient.

Two alternatives to the delay line, the Williams tube and Selectron tube, were developed in 1946, both using electron beams in glass tubes as means of storage. Using cathode ray tubes, Fred Williams would invent the Williams tube, which would be the first random-access computer memory. The Williams tube would prove advantageous over the Selectron tube because of its greater capacity (the Selectron was limited to 256 bits, while the Williams tube could store thousands) and its lower cost. The Williams tube would nevertheless prove to be frustratingly sensitive to environmental disturbances.

Efforts began in the late 1940s to find non-volatile memory. Jay Forrester, Jan A. Rajchman and An Wang would be credited with the development of magnetic core memory, which allowed for recall of memory after power loss. Magnetic core memory would become the dominant form of memory until the development of transistor-based memory in the late 1960s.

Volatile memory

Volatile memory is computer memory that requires power to maintain the stored information. Current semiconductor volatile memory technology is usually either static RAM (see SRAM) or dynamic RAM (see DRAM). Static RAM exhibits data remanence, but is still volatile, since all data is lost when memory is not powered. Dynamic RAM, by contrast, loses its data unless it is periodically refreshed. Upcoming volatile memory technologies that hope to replace or compete with SRAM and DRAM include Z-RAM, TTRAM and A-RAM.

Non-volatile memory

Non-volatile memory is computer memory that can retain the stored information even when not powered. Examples of non-volatile memory include read-only memory (see ROM), flash memory, most types of magnetic computer storage devices (e.g. hard disks, floppy discs and magnetic tape), optical discs, and early computer storage methods such as paper tape and punched cards. Upcoming non-volatile memory technologies include FeRAM, CBRAM, PRAM, SONOS, RRAM, Racetrack memory, NRAM and Millipede.


See also
• Virtual memory
• Semiconductor memory
• Memory geometry

External links
• http://computer.howstuffworks.com/computer-memory.htm
• http://www.kingston.com/tools/umg/pdf/umg.pdf

Memory-mapped I/O

Memory-mapped I/O (MMIO) and port I/O (also called port-mapped I/O (PMIO) or isolated I/O) are two complementary methods of performing input/output between the CPU and peripheral devices in a computer. Another method, not discussed in this article, is using dedicated I/O processors, commonly known as channels on mainframe computers, which execute their own instructions.

Memory-mapped I/O (not to be confused with memory-mapped file I/O) uses the same address bus to address both memory and I/O devices, and the CPU instructions used to access the memory are also used for accessing devices. In order to accommodate the I/O devices, areas of the CPU's addressable space must be reserved for I/O. The reservation might be temporary (the Commodore 64 could bank switch between its I/O devices and regular memory) or permanent. Each I/O device monitors the CPU's address bus and responds to any CPU access of its assigned address space, connecting the data bus to the desired device's hardware register.

Port-mapped I/O uses a special class of CPU instructions specifically for performing I/O. This is generally found on Intel microprocessors, specifically the IN and OUT instructions, which can read and write one to four bytes (outb, outw, outl) to an I/O device. I/O devices have a separate address space from general memory, accomplished either by an extra "I/O" pin on the CPU's physical interface or by an entire bus dedicated to I/O. Because the address space for I/O is isolated from that for main memory, this is sometimes referred to as isolated I/O.

A device's direct memory access (DMA) is not affected by those CPU-to-device communication methods; in particular, it is not affected by memory mapping. This is because, by definition, DMA is a memory-to-device communication method that bypasses the CPU.

Hardware interrupts are yet another communication method between the CPU and peripheral devices. However, they are always treated separately, for a number of reasons. An interrupt is device-initiated, as opposed to the methods mentioned above, which are CPU-initiated. It is also unidirectional, as information flows only from device to CPU. Lastly, each interrupt line carries only one bit of information with a fixed meaning, namely "an event that requires attention has occurred in a device on this interrupt line".


Relative merits of the two I/O methods

The main advantage of using port-mapped I/O is on CPUs with a limited addressing capability. Because port-mapped I/O separates I/O access from memory access, the full address space can be used for memory. It is also obvious to a person reading an assembly language program listing (or even, in rare instances, analyzing machine language) when I/O is being performed, due to the special instructions that can only be used for that purpose.

I/O operations can slow memory access if the address and data buses are shared. This is because the peripheral device is usually much slower than main memory. In some architectures, port-mapped I/O operates via a dedicated I/O bus, alleviating the problem.

There are two major advantages of using memory-mapped I/O. One of them is that, by discarding the extra complexity that port I/O brings, a CPU requires less internal logic and is thus cheaper, faster, easier to build, consumes less power and can be physically smaller; this follows the basic tenets of reduced instruction set computing, and is also advantageous in embedded systems. The other advantage is that, because regular memory instructions are used to address devices, all of the CPU's addressing modes are available for the I/O as well as the memory, and instructions that perform an ALU operation directly on a memory operand (loading an operand from a memory location, storing the result to a memory location, or both) can be used with I/O device registers as well. In contrast, port-mapped I/O instructions are often very limited, often providing only for plain load and store operations between CPU registers and I/O ports, so that, for example, to add a constant to a port-mapped device register would require three instructions: read the port to a CPU register, add the constant to the CPU register, and write the result back to the port.

As 16-bit processors have become obsolete and been replaced with 32-bit and 64-bit processors in general use, reserving ranges of memory address space for I/O is less of a problem, as the memory address space of the processor is usually much larger than the required space for all memory and I/O devices in a system. Therefore, it has become more frequently practical to take advantage of the benefits of memory-mapped I/O. However, even with address space no longer being a major concern, neither I/O mapping method is universally superior to the other, and there will be cases where using port-mapped I/O is still preferable.

A final reason that memory-mapped I/O is preferred in x86-based architectures is that the instructions that perform port-based I/O are limited to one or two registers: EAX, AX, and AL are the only registers that data can be moved into or out of, and either a byte-sized immediate value in the instruction or a value in register DX determines which port is the source or destination port of the transfer.[1] [2] Since any general-purpose register can send or receive data to or from memory and memory-mapped I/O devices, memory-mapped I/O uses fewer instructions and can run faster than port I/O. AMD did not extend the port I/O instructions when defining the x86-64 architecture to support 64-bit ports, so 64-bit transfers cannot be performed using port I/O.[3]
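The contrast can be sketched in C for an x86 target. The device address and port number below are invented, and the inb/outb wrappers follow the common GCC inline-assembly pattern for the IN and OUT instructions; this is an illustration of the idea, not code for a particular device.

    #include <stdint.h>

    /* Memory-mapped register (invented address): ordinary C memory
     * operations apply, so read-modify-write is one statement. */
    #define DEV_REG (*(volatile uint8_t *)0xA0000000u)

    void mmio_add_four(void) { DEV_REG += 4; }

    /* Port-mapped I/O needs the dedicated IN/OUT instructions. */
    static uint8_t inb(uint16_t port)
    {
        uint8_t v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }
    static void outb(uint16_t port, uint8_t v)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(port));
    }

    void pmio_add_four(void)
    {
        uint8_t v = inb(0x70);  /* 1: read the port into a register */
        v += 4;                 /* 2: add the constant in the CPU   */
        outb(0x70, v);          /* 3: write the result back         */
    }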

Memory barriers

Memory-mapped I/O is the cause of memory barriers in older generations of computers: the 640 KiB barrier is due to the IBM PC placing the Upper Memory Area in the 640–1024 KiB range (of its 20-bit memory addressing), while the 3 GB barrier is due to similar memory mapping in 32-bit architectures in the 3–4 GB range.

Example

Consider a simple system built around an 8-bit microprocessor. Such a CPU might provide 16 address lines, allowing it to address up to 64 kibibytes (KiB) of memory. On such a system, perhaps the first 32 KiB of address space would be allotted to random access memory (RAM), another 16 KiB to read-only memory (ROM) and the remainder to a variety of other devices such as timers, counters, video display chips, sound generating devices, and so forth. The hardware of the system is arranged so that devices on the address bus will only respond to particular addresses which are intended for them; all other addresses are ignored. This is the job of the address decoding circuitry, and it is this that establishes the memory map of the system.


Thus we might end up with a memory map like so:

Device                                      Address range (hexadecimal)   Size
RAM                                         0000 - 7FFF                   32 KiB
General purpose I/O                         8000 - 80FF                   256 bytes
Sound controller                            9000 - 90FF                   256 bytes
Video controller/text-mapped display RAM    A000 - A7FF                   2 KiB
ROM                                         C000 - FFFF                   16 KiB

Note that this memory map contains gaps; that is also quite common.

Assuming the fourth register of the video controller sets the background colour of the screen, the CPU can set this colour by writing a value to the memory location A003 using its standard memory write instruction. Using the same method, graphs can be displayed on a screen by writing character values into a special area of RAM within the video controller. Prior to cheap RAM that enabled bit-mapped displays, this character cell method was a popular technique for computer video displays (see Text user interface).
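In C, such a memory-mapped register access is an ordinary store through a pointer. Using the map above, and assuming (as in the text) that the fourth video-controller register at A003 holds the background colour:

    #include <stdint.h>

    /* Fourth video controller register from the memory map above.
     * "volatile" prevents the compiler from optimizing away or
     * reordering accesses to the device. */
    #define VIDEO_BG_COLOUR (*(volatile uint8_t *)0xA003u)

    void set_background(uint8_t colour)
    {
        VIDEO_BG_COLOUR = colour;   /* a standard memory-write instruction */
    }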

Basic types of address decoding
• Exhaustive - 1:1 mapping of unique addresses to one hardware register (physical memory location).
• Partial - n:1 mapping of n unique addresses to one hardware register. Partial decoding allows a memory location to have more than one address, allowing the programmer to reference a memory location using n different addresses. It may also be done just to simplify the decoding hardware, when not all of the CPU's address space is needed. Synonyms: foldback, multiply-mapped, partially-mapped.
• Linear - Address lines are used directly without any decoding logic. This is done with devices such as RAMs and ROMs that have a sequence of address inputs, and with peripheral chips that have a similar sequence of inputs for addressing a bank of registers. Linear addressing is rarely used alone (only when there are few devices on the bus, as using purely linear addressing for more than one device usually wastes a lot of address space) but instead is combined with one of the other methods to select a device or group of devices within which the linear addressing selects a single register or memory location.

Incomplete address decoding

Addresses may be decoded completely or incompletely by a device.
• Complete decoding involves checking every line of the address bus, causing an open data bus when the CPU accesses an unmapped region of memory. (Note that even with incomplete decoding, decoded partial regions may not be associated with any device, leaving the data bus open when those regions are accessed.)
• Incomplete decoding, or partial decoding, uses simpler and often cheaper logic that examines only some address lines. Such simple decoding circuitry might allow a device to respond to several different addresses, effectively creating virtual copies of the device at different places in the memory map. All of these copies refer to the same real device, so there is no particular advantage in doing this, except to simplify the decoder (or possibly the software that uses the device). This is also known as address aliasing[4] [5] (aliasing has other meanings in computing). Commonly, the decoding itself is programmable, so the system can reconfigure its own memory map as required, though this is a newer development and generally in conflict with the intent of being cheaper.
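A small host-runnable C sketch shows why partial decoding creates aliases: if the decoder examines only the top three address lines of a 16-bit bus, every address sharing those bits selects the same device.

    #include <stdint.h>
    #include <stdio.h>

    /* Partial decoder: only lines A15..A13 are checked, so a device
     * "at" 0x8000 actually answers the whole 0x8000-0x9FFF range,
     * i.e. 8 KiB of aliases for what may be a single register. */
    static int selects_device(uint16_t addr)
    {
        return (addr & 0xE000u) == 0x8000u;
    }

    int main(void)
    {
        printf("%d %d %d\n",
               selects_device(0x8000),   /* 1: intended address      */
               selects_device(0x9FFF),   /* 1: alias, same device    */
               selects_device(0xA000));  /* 0: outside decoded block */
        return 0;
    }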


References[1] "Intel® 64 and IA-32 Architectures Software Developer’s Manual: Volume 2A: Instruction Set Reference, A-M" (http:/ / www. intel. com/

Assets/ PDF/ manual/ 253666. pdf) (PDF). Intel® 64 and IA-32 Architectures Software Developer’s Manual. Intel Corporation. June 2010. pp.3–520. . Retrieved 21-08-2010.

[2] "Intel® 64 and IA-32 Architectures Software Developer’s Manual: Volume 2B: Instruction Set Reference, N-Z" (http:/ / www. intel. com/Assets/ PDF/ manual/ 253667. pdf) (PDF). Intel® 64 and IA-32 Architectures Software Developer’s Manual. Intel Corporation. June 2010. pp.4–22. . Retrieved 21-08-2010.

[3] "AMD64 Architecture Programmer's Manual: Volume 3: General-Purpose and System Instructions" (http:/ / support. amd. com/ us/Processor_TechDocs/ 24594. pdf) (PDF). AMD64 Architecture Programmer's Manual. Advanced Micro Devices. November 2009. pp. 117,181. . Retrieved 21-08-2010.

[4] Microsoft (December 4, 2001). "Partial Address Decoding and I/O Space in Windows Operating Systems" (http:/ / www. microsoft. com/whdc/ system/ sysinternals/ partialaddress. mspx#EFD). .

[5] HP. "Address aliasing" (http:/ / docs. hp. com/ en/ A3725-96022/ ch03s03. html). .

See also
• mmap, not to be confused with memory-mapped I/O
• Early examples of computers with port-mapped I/O
  • PDP-8
  • Nova
• PDP-11, an early example of a computer architecture using memory-mapped I/O
• Unibus, a dedicated I/O bus used by the PDP-11
• University lecture notes about computer I/O (http://www.cs.nmsu.edu/~pfeiffer/classes/473/notes/io.html)
• Input/Output Base Address
• Bank switching

Chip select

An example SPI bus with a master and three slave select lines. Note that all four chips share the SCLK, MISO, and MOSI lines, but each slave has its own slave select.

Chip select (CS) or slave select (SS) is the name of a control line in digital electronics used to select one chip out of several connected to the same computer bus, usually utilizing three-state logic.

One bus that uses the chip/slave select is the Serial Peripheral Interface Bus.

When an engineer needs to connect several devices to the same set of input wires (e.g., a computer bus), but retain the ability to send and receive data or commands to each device independently of the others on the bus, a chip select can be used. The chip select is a command pin on most ICs which connects the input pins on the device to the internal circuitry of that device, and similarly for the output pins.


When the chip select pin is held in the inactive state, the chip or device is "deaf", and pays no heed to changes in the state of its input pins; its outputs are high impedance, so other chips can drive those signals. When the chip select pin is held in the active state, the chip or device assumes that any input changes it "hears" are meant for it, and responds as if it is the only chip on the bus. Because the other chips have their chip select pins in the inactive state, they are holding their outputs in the high impedance state, so the single selected chip can drive its outputs.

In short, the chip select is an access-enable switch. "ON" means the device responds to changes on its input pins (such as data or address information for a RAM device) and drives any output pins (possibly not at the same time), while "OFF" tells the device to ignore the outside world for both inputs and outputs.
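In software the pattern is: assert one chip's select line, exchange data, then deassert it. A bit-banged SPI write sketch with invented GPIO register and pin assignments (select is active-low, as is conventional for SPI):

    #include <stdint.h>

    #define GPIO_OUT (*(volatile uint8_t *)0x3000u)  /* invented register */
    #define PIN_SCLK 0x01u
    #define PIN_MOSI 0x02u
    #define PIN_SS0  0x10u   /* slave 0 select, active-low */

    static void spi_send_byte(uint8_t b)
    {
        for (int i = 7; i >= 0; i--) {               /* MSB first */
            if (b & (1u << i)) GPIO_OUT |= PIN_MOSI;
            else               GPIO_OUT &= (uint8_t)~PIN_MOSI;
            GPIO_OUT |= PIN_SCLK;                    /* clock the bit in */
            GPIO_OUT &= (uint8_t)~PIN_SCLK;
        }
    }

    void spi_write_slave0(uint8_t b)
    {
        GPIO_OUT &= (uint8_t)~PIN_SS0;  /* assert select: slave 0 listens */
        spi_send_byte(b);               /* unselected slaves stay "deaf"  */
        GPIO_OUT |= PIN_SS0;            /* deassert select                */
    }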

Reduced instruction set computing

Reduced instruction set computing, or RISC (pronounced /ˈrɪsk/), is a CPU design strategy based on the insight that simplified (as opposed to complex) instructions can provide higher performance if this simplicity enables much faster execution of each instruction. A computer based on this strategy is a reduced instruction set computer (also RISC). There are many proposals for precise definitions,[1] but the term is slowly being replaced by the more descriptive load-store architecture. Well-known RISC families include DEC Alpha, AMD 29k, ARC, ARM, Atmel AVR, MIPS, PA-RISC, Power (including PowerPC), SuperH, and SPARC.

Some aspects attributed to the first RISC-labeled designs around 1975 include the observations that the memory-restricted compilers of the time were often unable to take advantage of features intended to facilitate manual assembly coding, and that complex addressing modes take many cycles to perform due to the required additional memory accesses. It was argued that such functions would be better performed by sequences of simpler instructions if this could yield implementations small enough to leave room for many registers,[2] reducing the number of slow memory accesses. In these simple designs, most instructions are of uniform length and similar structure, arithmetic operations are restricted to CPU registers, and only separate load and store instructions access memory. These properties enable a better balancing of pipeline stages than before, making RISC pipelines significantly more efficient and allowing higher clock frequencies.

Non-RISC design philosophy

In the early days of the computer industry, programming was done in assembly language or machine code, which encouraged powerful and easy-to-use instructions. CPU designers therefore tried to make instructions that would do as much work as feasible. With the advent of higher-level languages, computer architects also started to create dedicated instructions to directly implement certain central mechanisms of such languages. Another general goal was to provide every possible addressing mode for every instruction, known as orthogonality, to ease compiler implementation. Arithmetic operations could therefore often have results as well as operands directly in memory (in addition to register or immediate).

The attitude at the time was that hardware design was more mature than compiler design, so this was in itself also a reason to implement parts of the functionality in hardware or microcode rather than in a memory-constrained compiler (or its generated code) alone. This design philosophy became retroactively termed complex instruction set computing (CISC) after the RISC philosophy came onto the scene.

CPUs also had relatively few registers, for several reasons:
• More registers also implies more time-consuming saving and restoring of register contents on the machine stack.
• A large number of registers requires a large number of instruction bits as register specifiers, meaning less dense code (see below).
• CPU registers are more expensive than external memory locations; large register sets were cumbersome with limited circuit boards or chip integration.

An important force encouraging complexity was very limited main memories (on the order of kilobytes). It was therefore advantageous for the density of information held in computer programs to be high, leading to features such as highly encoded, variable-length instructions doing data loading as well as calculation (as mentioned above). These issues were of higher priority than the ease of decoding such instructions.

An equally important reason was that main memories were quite slow (a common type was ferrite core memory); by using dense information packing, one could reduce the frequency with which the CPU had to access this slow resource. Modern computers face similar limiting factors: main memories are slow compared to the CPU, and the fast cache memories employed to overcome this are limited in size. This may partly explain why highly encoded instruction sets have proven to be as useful as RISC designs in modern computers.
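The decoding burden that such dense, variable-length encodings impose can be seen in a toy C sketch; the opcodes and their lengths below are invented purely for illustration.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Toy variable-length encoding (invented for illustration): the first
 * byte is the opcode, and the opcode alone determines how many operand
 * bytes follow.  A decoder cannot locate instruction N+1 until it has
 * at least partially decoded instruction N, a serial dependency that
 * fixed-length formats avoid. */
static size_t insn_length(uint8_t opcode)
{
    switch (opcode) {
    case 0x01: return 1;  /* e.g. NOP: opcode only                */
    case 0x02: return 2;  /* e.g. load immediate: 1 operand byte  */
    case 0x03: return 3;  /* e.g. jump: 2-byte address            */
    default:   return 1;
    }
}

int main(void)
{
    const uint8_t code[] = { 0x02, 0x2A, 0x01, 0x03, 0x34, 0x12, 0x01 };
    size_t pc = 0;
    while (pc < sizeof code) {
        size_t len = insn_length(code[pc]);
        printf("insn at %zu: opcode 0x%02X, %zu byte(s)\n",
               pc, code[pc], len);
        pc += len;   /* the next boundary depends on this decode */
    }
    return 0;
}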

RISC design philosophy

In the mid-1970s researchers (particularly John Cocke) at IBM (and similar projects elsewhere) demonstrated that the majority of combinations of these orthogonal addressing modes and instructions were not used by most programs generated by compilers available at the time. It proved difficult in many cases to write a compiler with more than limited ability to take advantage of the features provided by conventional CPUs.

It was also discovered that, on microcoded implementations of certain architectures, complex operations tended to be slower than a sequence of simpler operations doing the same thing. This was in part an effect of the fact that many designs were rushed, with little time to optimize or tune every instruction, but only those used most often. One infamous example was the VAX's INDEX instruction.[3]

As mentioned elsewhere, core memory had long since been slower than many CPU designs. The advent of semiconductor memory reduced this difference, but it was still apparent that more registers (and later caches) would allow higher CPU operating frequencies. Additional registers would require sizeable chip or board areas which, at the time (1975), could be made available if the complexity of the CPU logic was reduced.

Yet another impetus of both RISC and other designs came from practical measurements on real-world programs. Andrew Tanenbaum summed up many of these, demonstrating that processors often had oversized immediates. For instance, he showed that 98% of all the constants in a program would fit in 13 bits, yet many CPU designs dedicated 16 or 32 bits to store them. This suggests that, to reduce the number of memory accesses, a fixed-length machine could store constants in unused bits of the instruction word itself, so that they would be immediately ready when the CPU needs them (much like immediate addressing in a conventional design). This required small opcodes in order to leave room for a reasonably sized constant in a 32-bit instruction word.

Since many real-world programs spend most of their time executing simple operations, some researchers decided to focus on making those operations as fast as possible. The clock rate of a CPU is limited by the time it takes to execute the slowest sub-operation of any instruction; decreasing that cycle time often accelerates the execution of other instructions.[4] The focus on "reduced instructions" led to the resulting machine being called a "reduced instruction set computer" (RISC). The goal was to make instructions so simple that they could easily be pipelined, in order to achieve single-clock throughput at high frequencies.

Later it was noted that one of the most significant characteristics of RISC processors was that external memory was only accessible by a load or store instruction. All other instructions were limited to internal registers. This simplified many aspects of processor design: allowing instructions to be fixed-length, simplifying pipelines, and isolating the logic for dealing with the delay in completing a memory access (cache miss, etc.) to only two instructions. This led to RISC designs being referred to as load/store architectures.[5]
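Tanenbaum's 13-bit figure is easy to check for any given constant: a 13-bit two's-complement immediate field covers the range -4096 to 4095. The following small C sketch tests sample values against that range.

#include <stdint.h>
#include <stdio.h>

/* A 13-bit two's-complement immediate field encodes values in the
 * range [-2^12, 2^12 - 1] = [-4096, 4095]. */
static int fits_in_13_bits(int32_t value)
{
    return value >= -4096 && value <= 4095;
}

int main(void)
{
    int samples[] = { 0, 1, -1, 100, 4095, 4096, -4096, -4097, 65536 };
    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; i++)
        printf("%6d fits in 13 bits: %s\n", samples[i],
               fits_in_13_bits(samples[i]) ? "yes" : "no");
    return 0;
}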

Instruction set size and alternative terminology

A common misunderstanding of the phrase "reduced instruction set computer" is the mistaken idea that instructions are simply eliminated, resulting in a smaller set of instructions. In fact, over the years, RISC instruction sets have grown in size, and today many of them have a larger set of instructions than many CISC CPUs.[6] [7] Some RISC processors such as the INMOS Transputer have instruction sets as large as, say, the CISC IBM System/370; and conversely, the DEC PDP-8 (clearly a CISC CPU because many of its instructions involve multiple memory accesses) has only 8 basic instructions, plus a few extended instructions.

The term "reduced" in that phrase was intended to describe the fact that the amount of work any single instruction accomplishes is reduced (at most a single data memory cycle) compared to the "complex instructions" of CISC CPUs that may require dozens of data memory cycles in order to execute a single instruction.[8] In particular, RISC processors typically have separate instructions for I/O and data processing; as a consequence, industry observers have started using the terms "register-register" or "load-store" to describe RISC processors.

Some CPUs have been retroactively dubbed RISC: a Byte magazine article once referred to the 6502 as "the original RISC processor" due to its simplistic and nearly orthogonal instruction set (most instructions work with most addressing modes) as well as its 256 zero-page "registers". The 6502 is not a load/store design, however: arithmetic operations may read memory, and instructions like INC and ROL even modify memory. Furthermore, orthogonality is equally often associated with "CISC". The 6502 may be regarded as similar to RISC (and early machines) in the fact that it uses no microcode sequencing. However, the well-known fact that it employed longer but fewer clock cycles compared to many contemporary microprocessors was due to a more asynchronous design with less subdivision of internal machine cycles. This is similar to early machines, but not to RISC.

Some CPUs have been specifically designed to have a very small set of instructions, but these designs are very different from classic RISC designs, so they have been given other names such as minimal instruction set computer (MISC), zero instruction set computer (ZISC), one instruction set computer (OISC), transport triggered architecture (TTA), etc.

Alternatives

RISC was developed as an alternative to what is now known as CISC. Over the years, other strategies have been implemented as alternatives to RISC and CISC. Some examples are VLIW, MISC, OISC, massive parallel processing, systolic arrays, reconfigurable computing, and dataflow architecture.

Typical characteristics of RISC

For any given level of general performance, a RISC chip will typically have far fewer transistors dedicated to the core logic, which originally allowed designers to increase the size of the register set and increase internal parallelism. Other features typically found in RISC architectures are:
• Uniform instruction format, using a single word with the opcode in the same bit positions in every instruction, demanding less decoding (see the decoding sketch after this list);
• Identical general-purpose registers, allowing any register to be used in any context and simplifying compiler design (although normally there are separate floating-point registers);
• Simple addressing modes, with complex addressing performed via sequences of arithmetic and/or load-store operations;
• Few data types in hardware; some CISCs have byte string instructions or support complex numbers, and this is so far unlikely to be found on a RISC.
Exceptions abound, of course, within both CISC and RISC.
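To show why a uniform format demands less decoding, here is a minimal C sketch that extracts the fields of a fixed 32-bit instruction word. The field layout is invented for illustration (it loosely resembles a MIPS-style immediate format, but it is not any particular architecture's documented encoding).

#include <stdint.h>
#include <stdio.h>

/* Invented fixed 32-bit layout: | opcode:6 | rs:5 | rt:5 | imm:16 |
 * Because every field sits at the same bit position in every
 * instruction, decoding is just a few shifts and masks, and the next
 * instruction is always exactly 4 bytes away. */
static void decode(uint32_t insn)
{
    uint32_t opcode = (insn >> 26) & 0x3Fu;
    uint32_t rs     = (insn >> 21) & 0x1Fu;
    uint32_t rt     = (insn >> 16) & 0x1Fu;
    uint32_t imm    =  insn        & 0xFFFFu;
    printf("opcode=%u rs=%u rt=%u imm=0x%04X\n", opcode, rs, rt, imm);
}

int main(void)
{
    decode(0x8C820004u);   /* arbitrary example word */
    return 0;
}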

RISC designs are also more likely to feature a Harvard memory model, where the instruction stream and the data stream are conceptually separated; this means that modifying the memory where code is held might not have any effect on the instructions executed by the processor (because the CPU has a separate instruction and data cache), at least until a special synchronization instruction is issued. On the upside, this allows both caches to be accessed simultaneously, which can often improve performance.

Many early RISC designs also shared the characteristic of having a branch delay slot. A branch delay slot is an instruction space immediately following a jump or branch. The instruction in this space is executed whether or not the branch is taken (in other words, the effect of the branch is delayed). This instruction keeps the ALU of the CPU busy for the extra time normally needed to perform a branch. Nowadays the branch delay slot is considered an unfortunate side effect of a particular strategy for implementing some RISC designs, and modern RISC designs generally do away with it (such as PowerPC, more recent versions of SPARC, and MIPS).
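The delay-slot behaviour can be demonstrated with a toy fetch-execute loop in C; the three-operation instruction set below is invented for this sketch and does not model any real processor.

#include <stdio.h>

/* Toy machine with a one-instruction branch delay slot.  Invented ops:
 * 0 = halt, 1 = print operand, 2 = branch to operand (delayed). */
struct insn { int op, arg; };

int main(void)
{
    struct insn prog[] = {
        { 1, 10 },  /* 0: print 10                                */
        { 2, 4  },  /* 1: branch to 4, delayed by one instruction */
        { 1, 20 },  /* 2: delay slot; executes although it sits   */
                    /*    after the branch                        */
        { 1, 30 },  /* 3: skipped                                 */
        { 1, 40 },  /* 4: branch target                           */
        { 0, 0  },  /* 5: halt                                    */
    };
    int pc = 0, delayed_target = -1, running = 1;

    while (running) {
        struct insn i = prog[pc];
        int next = pc + 1;
        if (delayed_target >= 0) {   /* currently in a delay slot:  */
            next = delayed_target;   /* the branch takes effect     */
            delayed_target = -1;     /* after this instruction      */
        }
        switch (i.op) {
        case 0: running = 0;            break;
        case 1: printf("%d\n", i.arg);  break;
        case 2: delayed_target = i.arg; break;
        }
        pc = next;
    }
    return 0;   /* prints 10, 20, 40: the delay-slot print ran */
}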

Early RISC

The first system that would today be known as RISC was the CDC 6600 supercomputer, designed in 1964, a decade before the term was invented. The CDC 6600 had a load-store architecture with only two addressing modes (register+register, and register+immediate constant) and 74 opcodes (whereas an Intel 8086 has 400). The 6600 had eleven pipelined functional units for arithmetic and logic, plus five load units and two store units; the memory had multiple banks so all load-store units could operate at the same time. The basic clock cycle/instruction issue rate was 10 times faster than the memory access time. Jim Thornton and Seymour Cray designed it as a number-crunching CPU supported by 10 simple computers called "peripheral processors" to handle I/O and other operating system functions.[9] Thus the joking comment later that the acronym RISC actually stood for "Really Invented by Seymour Cray".

Another early load-store machine was the Data General Nova minicomputer, designed in 1968 by Edson de Castro. It had an almost pure RISC instruction set, remarkably similar to that of today's ARM processors; however, it has not been cited as having influenced the ARM designers, although Novas were in use at the University of Cambridge Computer Laboratory in the early 1980s.

The earliest attempt to make a chip-based RISC CPU was a project at IBM which started in 1975. Named after the building where the project ran, the work led to the IBM 801 CPU family, which was used widely inside IBM hardware. The 801 was eventually produced in single-chip form as the ROMP in 1981, which stood for 'Research OPD [Office Products Division] Micro Processor'. As the name implies, this CPU was designed for "mini" tasks, and when IBM released the IBM RT-PC based on the design in 1986, the performance was not acceptable. Nevertheless, the 801 inspired several research projects, including new ones at IBM that would eventually lead to their POWER system.

The most public RISC designs, however, were the results of university research programs run with funding from the DARPA VLSI Program. The VLSI Program, practically unknown today, led to a huge number of advances in chip design, fabrication, and even computer graphics.

UC Berkeley's RISC project started in 1980 under the direction of David Patterson and Carlo H. Sequin, based on gaining performance through the use of pipelining and an aggressive use of a technique known as register windowing. In a normal CPU one has a small number of registers, and a program can use any register at any time. In a CPU with register windows, there are a huge number of registers, e.g. 128, but programs can only use a small number of them, e.g. 8, at any one time. A program that limits itself to 8 registers per procedure can make very fast procedure calls: the call simply moves the window "down" by 8, to the set of 8 registers used by that procedure, and the return moves the window back. (On a normal CPU, most calls must save at least a few registers' values to the stack in order to use those registers as working space, and restore their values on return.)

The RISC project delivered the RISC-I processor in 1982. Consisting of only 44,420 transistors (compared with averages of about 100,000 in newer CISC designs of the era), RISC-I had only 32 instructions, and yet completely outperformed any other single-chip design. They followed this up with the 40,760-transistor, 39-instruction RISC-II in 1983, which ran over three times as fast as RISC-I.

At about the same time, John L. Hennessy started a similar project called MIPS at Stanford University in 1981. MIPS focused almost entirely on the pipeline, making sure it could be run as "full" as possible. Although pipelining was already in use in other designs, several features of the MIPS chip made its pipeline far faster. The most important, and perhaps annoying, of these features was the demand that all instructions be able to complete in one cycle. This demand allowed the pipeline to be run at much higher data rates (there was no need for induced delays) and is responsible for much of the processor's performance. However, it also had the negative side effect of eliminating many potentially useful instructions, like a multiply or a divide.

In the early years, the RISC efforts were well known, but largely confined to the university labs that had created them. The Berkeley effort became so well known that it eventually became the name for the entire concept. Many in the computer industry argued that the performance benefits were unlikely to translate into real-world settings due to the decreased memory efficiency of multiple instructions, and that this was the reason no one was using them. But starting in 1986, all of the RISC research projects started delivering products.
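The register-window call mechanism described above can be sketched in a few lines of C. The sizes (a file of 128 registers, windows of 8) follow the figures quoted in the text; the simple wrap-around and the absence of overlapping windows and spill handling are simplifications of what real designs do.

#include <stdio.h>

/* Sketch of register windowing: a file of 128 registers of which a
 * procedure sees only a window of 8.  A call just moves the window
 * pointer; no register values are copied to a stack.  (Real designs
 * overlap windows to pass arguments and spill to memory when the file
 * wraps around; both are omitted here.) */
#define NREGS  128
#define WINDOW 8

static int regfile[NREGS];
static int wp = 0;                 /* current window base */

static int *reg(int r)             /* register r of the current window */
{
    return &regfile[(wp + r) % NREGS];
}

static void call(void) { wp = (wp + WINDOW) % NREGS; }
static void ret(void)  { wp = (wp - WINDOW + NREGS) % NREGS; }

int main(void)
{
    *reg(0) = 111;     /* caller's r0 */
    call();
    *reg(0) = 222;     /* callee's r0: a different physical register */
    ret();
    printf("caller's r0 is still %d\n", *reg(0));   /* prints 111 */
    return 0;
}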

Later RISC

Berkeley's research was not directly commercialized, but the RISC-II design was used by Sun Microsystems to develop the SPARC, by Pyramid Technology to develop their line of mid-range multi-processor machines, and by almost every other company a few years later. It was Sun's use of a RISC chip in their new machines that demonstrated that RISC's benefits were real, and their machines quickly outpaced the competition and essentially took over the entire workstation market.

John Hennessy left Stanford (temporarily) to commercialize the MIPS design, starting the company known as MIPS Computer Systems. Their first design was a second-generation MIPS chip known as the R2000. MIPS designs went on to become one of the most used RISC chips when they were included in the PlayStation and Nintendo 64 game consoles. Today they are one of the most common embedded processors in use for high-end applications.

IBM learned from the RT-PC failure and went on to design the RS/6000 based on their new POWER architecture. They then moved their existing AS/400 systems to POWER chips, and found much to their surprise that even the very complex instruction set ran considerably faster. POWER would also find itself moving "down" in scale to produce the PowerPC design, which eliminated many of the "IBM only" instructions and created a single-chip implementation. Today the PowerPC is one of the most commonly used CPUs for automotive applications (some cars have more than 10 of them inside). It was also the CPU used in most Apple Macintosh machines from 1994 to 2006. (Starting in February 2006, Apple switched their main production line to Intel x86 processors.)

Almost all other vendors quickly joined. From the UK, similar research efforts resulted in the INMOS transputer, the Acorn Archimedes and the Advanced RISC Machine line, which is a huge success today. Companies with existing CISC designs also quickly joined the revolution. Intel released the i860 and i960 by the late 1980s, although they were not very successful. Motorola built a new design called the 88000 in homage to their famed CISC 68000, but it saw almost no use and they eventually abandoned it and joined IBM to produce the PowerPC. AMD released their 29000, which would go on to become the most popular RISC design of the early 1990s.

Today the vast majority of all 32-bit CPUs in use are RISC CPUs and microcontrollers. RISC design techniques offer power in even small sizes, and thus have become dominant for low-power 32-bit CPUs. Embedded systems are by far the largest market for processors: while a family may own one or two PCs, their car(s), cell phones, and other devices may contain a total of dozens of embedded processors. RISC had also completely taken over the market for larger workstations for much of the 90s (until taken back by inexpensive PC-based solutions). After the release of the Sun SPARCstation, the other vendors rushed to compete with RISC-based solutions of their own. The high-end server market today is almost completely RISC-based, and the #1 spot among supercomputers as of 2008 is held by IBM's Roadrunner system, which uses Power Architecture-based Cell processors[10] to provide most of its computing power, although many other supercomputers use x86 CISC processors instead.[11]

RISC and x86

However, despite many successes, RISC has made few inroads into the desktop PC and commodity server markets, where Intel's x86 platform remains the dominant processor architecture. There are three main reasons for this:
1. The very large base of proprietary PC applications are written for x86, whereas no RISC platform has a similar installed base, and this meant PC users were locked into the x86.
2. Although RISC was indeed able to scale up in performance quite quickly and cheaply, Intel took advantage of its large market by spending vast amounts of money on processor development. Intel could spend many times as much as any RISC manufacturer on improving low-level design and manufacturing. The same could not be said about smaller firms like Cyrix and NexGen, but they realized that they could apply (tightly) pipelined design practices also to the x86 architecture, just like in the 486 and Pentium. The 6x86 and MII series did exactly this, but were more advanced: they implemented superscalar speculative execution via register renaming, directly at the x86 semantic level. Others, like the Nx586 and AMD K5, did the same, but indirectly, via dynamic microcode buffering and semi-independent superscalar scheduling and instruction dispatch at the micro-operation level (older or simpler 'CISC' designs typically execute rigid micro-operation sequences directly). The first available chip deploying such dynamic buffering and scheduling techniques was the NexGen Nx586, released in 1994; the AMD K5 was severely delayed and released in 1995.
3. Later, more powerful processors such as the Intel P6, AMD K6, AMD K7, and Pentium 4 employed similar dynamic buffering and scheduling principles and implemented loosely coupled superscalar (and speculative) execution of micro-operation sequences generated from several parallel x86 decoding stages. Today, these ideas have been further refined (some x86 pairs are instead merged into a more complex micro-operation, for example) and are still used by modern x86 processors such as the Intel Core 2 and AMD K8.

While early RISC designs were significantly different from contemporary CISC designs, by 2000 the highest-performing CPUs in the RISC line were almost indistinguishable from the highest-performing CPUs in the CISC line.[12] [13] [14]

A number of vendors, including Qualcomm, are attempting to enter the PC market with ARM-based devices dubbed smartbooks, riding off the netbook trend and rising acceptance of Linux distributions, a number of which already have ARM builds.[15] Other companies are choosing to use Windows CE.[16]

Diminishing benefits for desktops and servers

Over time, improvements in chip fabrication techniques have improved performance exponentially, according to Moore's law, whereas architectural improvements have been comparatively small. Modern CISC implementations have adopted many of the performance improvements introduced by RISC, such as single-clock throughput of simple instructions. Compilers have also become more sophisticated, and are better able to exploit complex as well as simple instructions on CISC architectures, often carefully optimizing both instruction selection and instruction and data ordering in pipelines and caches. The RISC-CISC distinction has blurred significantly in practice.

Expanding benefits for mobile and embedded devices

The hardware translation from x86 instructions into RISC-like operations, which costs relatively little in microprocessors for desktops and servers because Moore's law provides more transistors, becomes significant in area and energy for mobile and embedded devices. Hence, ARM processors dominate cell phones and tablets today, just as x86 processors dominate PCs.

RISC success stories

RISC designs have led to a number of successful platforms and architectures, some of the larger ones being:
• ARM: The ARM architecture dominates the market for low-power and low-cost embedded systems (typically 100–500 MHz in 2008). ARM Ltd., which licenses intellectual property rather than manufacturing chips, reported that 10 billion licensed chips had been shipped as of early 2008.[17] The various generations, variants and implementations of the ARM core are deployed in over 90% of mobile electronics devices, including almost all modern mobile phones, mp3 players and portable video players. Some high-profile examples are:
  • Apple iPods (custom ARM7TDMI SoC)
  • Apple iPhone and iPod Touch (Samsung ARM1176JZF, ARM Cortex-A8, Apple A4)
  • Apple iPad (Apple A4 ARM-based SoC)
  • Palm and PocketPC PDAs and smartphones (Marvell XScale family, Samsung SC32442 - ARM9)
  • RIM BlackBerry smartphone/email devices
  • Microsoft Windows Mobile devices
  • Nintendo Game Boy Advance (ARM7TDMI)
  • Nintendo DS (ARM7TDMI, ARM946E-S)
  • Sony Network Walkman (Sony in-house ARM-based chip)
  • T-Mobile G1 (HTC Dream Android, Qualcomm MSM7201A ARM11 @ 528 MHz)
• PowerPC: The PowerPC architecture is a popular RISC-based architecture that dominates performance- and power-constrained embedded device markets such as communication equipment (routers, switches), storage equipment, etc. It is also used in the Nintendo Wii and the GameCube.
• MIPS's MIPS line, found in most SGI computers and the PlayStation, PlayStation 2, Nintendo 64 (discontinued) and PlayStation Portable game consoles, and residential gateways like the Linksys WRT54G series.
• IBM's and Freescale's (formerly Motorola SPS) Power Architecture, used in all of IBM's supercomputers, midrange servers and workstations, in Apple's PowerPC-based Macintosh computers (discontinued), in Nintendo's GameCube and Wii, Microsoft's Xbox 360 and Sony's PlayStation 3 game consoles, EMC's DMX range of the Symmetrix SAN, and in many embedded applications like printers and cars.
• SPARC, by Oracle (previously Sun Microsystems) and Fujitsu.
• Hewlett-Packard's PA-RISC, also known as HP-PA, discontinued December 31, 2008.
• Alpha, used in single-board computers, workstations, servers and supercomputers from Digital Equipment Corporation, Compaq and HP, discontinued as of 2007.
• XAP processor, used in many low-power wireless (Bluetooth, Wi-Fi) chips from CSR.
• Hitachi's SuperH, originally in wide use in the Sega Super 32X, Saturn and Dreamcast, now at the heart of many consumer electronics devices. The SuperH is the base platform for the Mitsubishi-Hitachi joint semiconductor group. The two groups merged in 2002, dropping Mitsubishi's own RISC architecture, the M32R.
• Atmel AVR, used in a variety of products ranging from Xbox handheld controllers to BMW cars.

See also
• Addressing mode
• Complex instruction set computer
• Very long instruction word
• Minimal instruction set computer
• Zero instruction set computer
• One instruction set computer
• NISC (no-instruction-set computer)
• Microprocessor
• Instruction set
• Computer architecture
• Classic RISC pipeline

Notes and references
[1] Stanford sophomore students defined RISC (http://www-cs-faculty.stanford.edu/~eroberts/courses/soco/projects/2000-01/risc/whatis/index.html) as "a type of microprocessor architecture that utilizes a small, highly-optimized set of instructions, rather than a more specialized set of instructions often found in other types of architectures".
[2] In place of complex logic or microcode; transistors were a scarce resource then.
[3] Patterson, D. A. and Ditzel, D. R. 1980. The case for the reduced instruction set computer. SIGARCH Comput. Archit. News 8, 6 (October 1980), 25–33. DOI: http://doi.acm.org/10.1145/641914.641917
[4] "Microprocessors From the Programmer's Perspective" (http://www.ddj.com/architect/184408418) by Andrew Schulman, 1990.
[5] Kevin Dowd. High Performance Computing. O'Reilly & Associates, Inc. 1993.
[6] "RISC vs. CISC: the Post-RISC Era" (http://arstechnica.com/cpu/4q99/risc-cisc/rvc-5.html#Branch) by Jon "Hannibal" Stokes (Ars Technica).
[7] "RISC versus CISC" (http://www.borrett.id.au/computing/art-1991-06-02.htm) by Lloyd Borrett, Australian Personal Computer, June 1991.
[8] "Guide to RISC Processors for Programmers and Engineers", Chapter 3: "RISC Principles" (http://www.springerlink.com/content/u5t457g61q637v66/) by Sivarama P. Dandamudi, 2005, ISBN 978-0-387-21017-9: "the main goal was not to reduce the number of instructions, but the complexity".
[9] Grishman, Ralph. Assembly Language Programming for the Control Data 6000 Series. Algorithmics Press. 1974. p. 12.
[10] TOP500 List, June 2008 (1–100) (http://www.top500.org/list/2008/06/100)
[11] TOP500 Processor Family share for 11/2007 (http://www.top500.org/charts/list/30/procfam)
[12] "Schaum's Outline of Computer Architecture" (http://books.google.com/books?id=24V00tD7HeAC) by Nicholas P. Carter, 2002, p. 96, ISBN 007136207X.
[13] "CISC, RISC, and DSP Microprocessors" (http://www.ifp.uiuc.edu/~jones/RISCvCISCvDSP.pdf) by Douglas L. Jones, 2000.
[14] "A History of Apple's Operating Systems" (http://www.kernelthread.com/mac/oshistory/5.html) by Amit Singh: "the line between RISC and CISC has been growing fuzzier over the years."
[15] "Meet smartbooks" (http://www.hellosmartbook.com/index.php)
[16] Citation needed.
[17] "ARM Ships 10 Billionth Processor" (http://www.efytimes.com/efytimes/24375/news.htm). 28 January 2008. EFYTimes.

External links
• RISC vs. CISC (http://www-cs-faculty.stanford.edu/~eroberts/courses/soco/projects/2000-01/risc/risccisc/)
• What is RISC (http://www-cs-faculty.stanford.edu/~eroberts/courses/soco/projects/2000-01/risc/whatis/)
• RISC vs. CISC from historical perspective (http://www.cpushack.net/CPU/cpuAppendA.html)

Complex instruction set computing

A complex instruction set computer (CISC) (pronounced /ˈsɪsk/) is a computer whose single instructions can execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store) and/or are capable of multi-step operations or addressing modes within single instructions. The term was retroactively coined in contrast to reduced instruction set computer (RISC). Examples of CISC instruction set architectures are System/360 through z/Architecture, PDP-11, VAX, Motorola 68k, and x86.
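The difference from a load-store design can be shown in miniature. In the C sketch below, one hypothetical CISC-style "add to memory" operation is re-expressed as the explicit load/add/store sequence a RISC would require; both functions act on the same simulated memory, and everything here is invented for illustration.

#include <stdint.h>
#include <stdio.h>

static uint8_t mem[16];   /* toy memory shared by both versions */

/* CISC style: one instruction such as "ADD [dst], [src]" may read two
 * memory operands and write the result back, all within a single
 * instruction. */
static void cisc_add(int dst, int src)
{
    mem[dst] = mem[dst] + mem[src];
}

/* RISC (load-store) style: only loads and stores touch memory, so the
 * same effect takes an explicit sequence through registers. */
static void risc_add(int dst, int src)
{
    uint8_t r1 = mem[dst];   /* LOAD  r1, [dst] */
    uint8_t r2 = mem[src];   /* LOAD  r2, [src] */
    r1 = r1 + r2;            /* ADD   r1, r2    */
    mem[dst] = r1;           /* STORE r1, [dst] */
}

int main(void)
{
    mem[0] = 5; mem[1] = 7;
    cisc_add(0, 1);          /* mem[0] becomes 12 */
    risc_add(0, 1);          /* mem[0] becomes 19 */
    printf("mem[0] = %u\n", mem[0]);
    return 0;
}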

Historical design context

Incitements and benefits

Before the RISC philosophy became prominent, many computer architects tried to bridge the so-called semantic gap, i.e. to design instruction sets that directly supported high-level programming constructs such as procedure calls, loop control, and complex addressing modes, allowing data structure and array accesses to be combined into single instructions. Instructions are also typically highly encoded in order to further enhance code density. The compact nature of such instruction sets results in smaller program sizes and fewer (slow) main memory accesses, which at the time (the early 1960s and onwards) resulted in tremendous savings on the cost of computer memory and disc storage, as well as faster execution. It also meant good programming productivity even in assembly language, as high-level languages such as Fortran or Algol were not always available or appropriate (microprocessors in this category are sometimes still programmed in assembly language for certain types of critical applications).

New instructions

In the 1970s, analysis of high-level languages indicated some complex machine language implementations, and it was determined that new instructions could improve performance. Some instructions were added that were never intended to be used in assembly language but fit well with compiled high-level languages. Compilers were updated to take advantage of these instructions. The benefits of semantically rich instructions with compact encodings can be seen in modern processors as well, particularly in the high-performance segment where caches are a central component (as opposed to most embedded systems). This is because these fast, but complex and expensive, memories are inherently limited in size, making compact code beneficial. Of course, the fundamental reason they are needed is that main memories (i.e. dynamic RAM today) remain slow compared to a (high-performance) CPU core.

Design issues

While many designs achieved the aim of higher throughput at lower cost and also allowed high-level language constructs to be expressed by fewer instructions, it was observed that this was not always the case. For instance, low-end versions of complex architectures (i.e. using less hardware) could lead to situations where it was possible to improve performance by not using a complex instruction (such as a procedure call or enter instruction), but instead using a sequence of simpler instructions.

One reason for this was that architects (microcode writers) sometimes "over-designed" assembly language instructions, including features which were not possible to implement efficiently on the basic hardware available. These could, for instance, be "side effects" (above conventional flags), such as the setting of a register or memory location that was perhaps seldom used; if this was done via ordinary (non-duplicated) internal buses, or even the external bus, it would demand extra cycles every time, and thus be quite inefficient.

Even in balanced high-performance designs, highly encoded and (relatively) high-level instructions could be complicated to decode and execute efficiently within a limited transistor budget. Such architectures therefore required a great deal of work on the part of the processor designer in cases where a simpler, but (typically) slower, solution based on decode tables and/or microcode sequencing is not appropriate. At a time when transistors and other components were a limited resource, this also left fewer components and less area for other types of performance optimizations.

The RISC idea

The circuitry that performs the actions defined by the microcode in many (but not all) CISC processors is, in itself, a processor which in many ways is reminiscent in structure of very early CPU designs. This gave rise to ideas to return to simpler processor designs in order to make it more feasible to cope without (then relatively large and expensive) ROM tables, or even without PLA structures, for sequencing and/or decoding. At the same time, simplicity and regularity would make it easier to implement overlapping processor stages (pipelining) at the machine code level (i.e. the level seen by compilers). The first (retroactively) RISC-labeled processor (the IBM 801, from IBM's Watson Research Center, mid-1970s) was therefore a tightly pipelined machine originally intended to be used as an internal microcode kernel, or engine, in a CISC design. At the time, pipelining at the machine code level was already used in some high-performance CISC computers in order to reduce the instruction cycle time, but it was fairly complicated to implement within the limited component count and wiring complexity that was feasible at the time. (Microcode execution, on the other hand, could be more or less pipelined, depending on the particular design.)

Superscalar

In a more modern context, the complex variable-length encoding used by some of the typical CISC architectures makes it complicated, but still feasible, to build a superscalar implementation of a CISC programming model directly; the in-order superscalar original Pentium and the out-of-order superscalar Cyrix 6x86 are well-known examples of this. The frequent memory accesses for operands of a typical CISC machine may limit the instruction-level parallelism that can be extracted from the code, although this is strongly mediated by the fast cache structures used in modern designs, as well as by other measures. Due to inherently compact and semantically rich instructions, the average amount of work performed per machine code unit (i.e. per byte or bit) is higher for a CISC than for a RISC processor, which may give it a significant advantage in a modern cache-based implementation. (Whether the downsides versus the upsides justify a complex design or not is food for a never-ending debate in certain circles.)

Transistors for logic, PLAs, and microcode are no longer scarce resources; only large high-speed cache memories are limited by the maximum number of transistors today. Although complex, the transistor count of CISC decoders does not grow exponentially like the total number of transistors per processor (the majority typically used for caches). Together with better tools and enhanced technologies, this has led to new implementations of highly encoded and variable-length designs without load-store limitations (i.e. non-RISC). This governs re-implementations of older architectures such as the ubiquitous x86 (see below) as well as new designs for microcontrollers for embedded systems, and similar uses. The superscalar complexity in the case of modern x86 was solved with dynamically issued and buffered micro-operations, i.e. indirect and dynamic superscalar execution; the Pentium Pro and AMD K5 are early examples of this. This allows a fairly simple superscalar design to be located after the (fairly complex) decoders (and buffers), giving, so to speak, the best of both worlds in many respects.

CISC and RISC terms

The terms CISC and RISC have become less meaningful with the continued evolution of both CISC and RISC designs and implementations. The first highly (or tightly) pipelined x86 implementations, the 486 designs from Intel, AMD, Cyrix, and IBM, supported every instruction that their predecessors did, but achieved maximum efficiency only on a fairly simple x86 subset that was only a little more than a typical RISC instruction set (i.e. without typical RISC load-store limitations). The Intel P5 Pentium generation was a superscalar version of these principles. However, modern x86 processors also (typically) decode and split instructions into dynamic sequences of internal buffered micro-operations, which not only helps execute a larger subset of instructions in a pipelined (overlapping) fashion, but also facilitates more advanced extraction of parallelism out of the code stream, for even higher performance.

See also
• CPU
• RISC
• ZISC
• VLIW
• CPU design
• Computer architecture

References
• Tanenbaum, Andrew S. (2006). Structured Computer Organization, Fifth Edition. Pearson Education, Inc., Upper Saddle River, NJ.

External links
• RISC vs. CISC comparison [1]

This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.

References
[1] http://www.pic24micro.com/cisc_vs_risc.html


Microprocessor  Source: http://en.wikipedia.org/w/index.php?oldid=399119198  Contributors: 1111mol, 16@r, 194.230.175.xxx, 209.239.196.xxx, 7, 95jb14, A little insignificant,ActiveSelective, Adityagaur 7, Aero14, Ageton, Ahoerstemeier, Alain, Alansohn, Aldie, Alexvent, Alf Boggis, Alokchakrabarti, Altes2009, Alureiter, Aluvus, Andrejj, Andy Dingley, AngelicWraith, Anomalocaris, [email protected], AnthonyQBachler, Apantomimehorse, Apyule, Arch dude, Archenzo, Arisa, Arnero, Arun dalvi, Asnidhin, Axeman89,BBrad31, Beerden, Begoon, Berek, Berles, Biker Biker, Blainster, BobKawanaka, Bobblewik, Bongwarrior, Bookandcoffee, Brainmachine, Brandon, Brouhaha, Bubba73, C.Fred, CWY2190,Calabraxthis, Caltas, Cameltrader, Can't sleep, clown will eat me, CanadianLinuxUser, CanisRufus, Canthusus, Capricorn42, Cbturner46, Cenarium, Cherfr, Chris the speller, Circularshift,ClanCC, Clockwork Soul, Colin99, Conskeptical, Conversion script, CoolFox, Crazysunshine, Crusadeonilliteracy, Crystallina, Cxk271, DARTH SIDIOUS 2, DSLITEDS, DabMachine,Damueo, DanielVonEhren, Dasarianandkumar009, Davhorn, David Gerard, David Jordan, DavidCary, Db099221, Dcljr, Deagle AP, Deor, DerHexer, Dispenser, Djd1219, Donreed, Dougweller,DragonHawk, Ds13, Dureo, ENIAC, Echo95, Edd 123, Edetic, Edunsi, Elfguy, Elipongo, Elockid, Endothermic, Enochhwang, Enviroboy, Epbr123, FatalError, Felixdakat, Ferkelparade,Figureskatingfan, Fir0002, Fnagaton, Fourohfour, Frazzydee, Fyngyrz, GVP Webmaster, Gachet, Gaius Cornelius, Galoubet, Gary King, Gcalac, Geniac, Ghettoblaster, Giftlite, GlasGhost, Glen,Glenn, Goochelaar, Gorank4, Grafikm fr, Greenrd, Grendelkhan, Grim23, Grunt, Guy Harris, Gökhan, Hadal, Haham hanuka, Hanandre, Hannes Hirzel, Hehkuviini, Henk.muller, HenkeB,Henriok, Herbertxu, Hitchhiker89, Hooperbloob, Horsten, Hpa, IOOI, Idleguy, Ilya, Inkling, Insuperable, Iridescent, Island, Ixfd64, J.delanoy, JJthe13, JLaTondre, Jagged 85, Jake Wartenberg,Jamo spingal, Jaranda, Jason Michael Smithson, Jdforrester, Jeff G., Jeffreyarcand, Jerryobject, JiFish, Jklamo, Jo7hs2, John a s, JohnCD, JonHarder, Jonathan Hall, Jondel, Jossi, Jpbowen, Jpk,Jrockley, Jsc83, Juliancolton, Junexgamer, Jusdafax, Just James, Jwy, Jóna Þórunn, KLWhitehead, Kateshortforbob, Kbdank71, Kerotan, KerryVeenstra, Kingpin13, Kjkolb, Klingoncowboy4,Kmweber, KnowledgeOfSelf, Komap, Kozuch, Krótki, Leon7, Letdorf, Letowskie, Lexein, Liao, Lights, Lilg132, Lindosland, LouI, LouScheffer, Luna Santin, MFNickster, Mac, Mack2,MafiotuL, Mahjongg, Maian, Majorly, MaltaGC, Mandarax, Mani1, Maniago, Mannafredo, Mantror, Manxarbiter, MarkSweep, MarsRover, Matsuiny2004, Matt Britt, Matticus78, MauryMarkowitz, Mav, Maxis ftw, Maxx.T, Mb3k, Mckaysalisbury, Mcleodm, Mdd4696, Mehrunes Dagon, Meldor, Mendiwe, Michael Hardy, Mike Van Emmerik, Mikef, Milkmandan,Minesweeper, Mintleaf, Mirror Vax, Misibacsi, Mjpieters, Moggie2002, MooresLaw, Moxfyre, Mr random, Mramz88, Mrwrite, Muhherfuhher, Murray Langton, Musabhusaini, Mxn, Mysid,MysticMetal, NPalmius, Nadavspi, Nandadeep11, Nansars, NapoliRoma, Nate Silva, Navanee2u, Navy blue84, Neckelmann, NellieBly, NeoPhyteRep, Netsnipe, Nihonjoe, Ninly, Norm mit,NuclearWarfare, OlEnglish, Oliverdl, Olivier, Orichalque, Oystein, Pankaj139, Paul23234545, Persian Poet Gal, Phil [email protected], Philip Trueman, Pingveno, Pixel8, Poppafuze,Premvvnc, Proofreader77, QcRef87, Qwerty189, RA0808, RJaguar3, Rada, RainbowOfLight, Ram-Man, Rama's Arrow, Randydeluxe, Raul654, Ray Van De Walker, Rayholt, Razorpit,RedWolf, Reisio, RexNL, Rich Farmbrough, 
Riki, Rilak, Rjhanson54, Rmrfstar, Roadrunner, Roberrd, RobertG, Robofish, Rror, Rubikcube, Rōnin, SMC1991, Salvio giuliano, Sanchom,Sarenne, Sbierwagen, SchuminWeb, Scipius, Sclear02, Scootey, Search4Lancer, Seav, Senthilece19, SharpenedSkills, Sherool, Shirik, Shvytejimas, Sietse Snel, SimonP, Snowolf,SoCalSuperEagle, Sp, Spacepotato, Spiff, Spitfire19, SplingyRanger, Squash Racket, Srce, SreekumarC, Srinivasasha, Stan Shebs, Stdazi, Stickee, Sunshine4921, Swtpc6800, Sylvain Mielot,TEG24601, Tasc, Thdremily1, The Anonymous One, The Thing That Should Not Be, The1physicist, TheJosh, Theresa knott, Think outside the box, Thurinym, Tigga en, Tim Ivorson, Tinkar,Tobias Bergemann, Tomekire, Tony1, Top2Cat1, Toussaint, Toytoy, Tpbradbury, Tvdm, Twang, Uncle G, UtherSRG, ValC, Valery Beaud, VampWillow, Versus22, Vinnothkumar, Violetriga,Viskonsas, Vivekyas, Vladmihaisima, Vmanthi, Vonfraginoff, Voyagerfan5761, Warpozio, Wavelength, Wayward, Webshared, Wernher, Wjw0111, WojPob, Wtshymanski, Wutsje, YamamotoIchiro, Yarvin, Yes4us, Zajacik, Zundark, Zvika, 855 anonymous edits

Microcontroller  Source: http://en.wikipedia.org/w/index.php?oldid=399366019  Contributors: 1exec1, A Sengupta, A. B., A8UDI, A930913, ANONYMOUS COWARD0xC0DE, Abrech, Adam850, Addshore, Alansohn, Allan McInnes, Andreas Groß, AndrewHowse, Andries, Andy Dingley, Anon user, Anonymous101, Aspwiki, Atlant, Attilios, AxelBoldt, Axlq, Az1568, Ben b, Ben-Zin, Benhoyt, Blanchardb, Bobblewik, Bobo192, Bogey97, Bookandcoffee, Brewsum, C628, CALR, CSWarren, Calamarain, Calltech, Cambrasa, CanisRufus, Cathy Linton, Cbmeeks, Cburnett, CesarB, Ceyockey, Chaosdruid, Cheung1303, Chris 73, Chris the speller, Chzz, Cje, Clara Anthony, Coasterlover1994, Codinghead, Colin Douglas Howell, Cometstyles, Conversion script, Corwin8, CosineKitty, Cpiral, CyrilB, DJ LoPaTa, DStoykov, Davewild, DavidCary, Derek Ross, Dhiraj1984, Dicklyon, Discospinster, Diwakarbista, Docu, Doodle77, Drvanthorp, Dyl, ESkog, Easwarno1, Ebde, Egil, Electron20, Electron9, Erianna, Ewlyahoocom, Falcon8765, Femto, Foadsabori, Foobaz, Frappucino, Fuadamjala, Furby100, Fæ, GRAHAMUK, Gaius Cornelius, Gapaddict, Gcalac, Geekstuff, Gejigeji, Giftlite, Glass Sword, GlassCobra, Glenn, GoldenMeadows, Hdante, HenkeB, Henriok, Heron, Hobiadami, Hooperbloob, Ianhopkins, Ilikefood, InTheCastle, Ionutzmovie, Izx, J04n, JPats, Jaho, Jasoneth, JidGom, Jidan, Jim1138, Jiuguang Wang, Jmicko, Jni, Joel Saks, JonHarder, Jonik, Jouthus, Juhuang, Kanishafa, Kerdip, Ketiltrout, Kevinkor2, Kinema, Kingpin13, Kozuch, Kphowell, Krishnavedala, Kuru, Kushagraalankar, Kwertii, Langermannia, LanguidMandala, Lbokma, Lightmouse, Lindosland, Lisatwo, LouScheffer, M 3bdelqader, Mac, Madison Alex, Maelwys, Mahjongg, Malikqasim, Maluc, Matt Britt, Matt Crypto, Maury Markowitz, Mbessey, Mcunerd, Menswear, Microco, Mikespedia, Mikiemike, Mlutfi, Moggie2002, MrOllie, Na7id, Nasukaren, Neoflame, Netaimaid, Nishkid64, Oborg, Obsessiveatbest, Omegatron, Otisjimmy1, PATCheS, PJohnson, Parallelized, PeterJohnBishop,
Petter73, Pfagerburg, Pfrulas, Philip Trueman, PierreAbbat, Pion, PoPCulture69, Qratman, Quintote, R. S. Shaw, Radagast83, RainbowOfLight, RastaKins, Reedy, Reinderien, RexNL, Rhobite,Rich Farmbrough, Rilak, Rjwilmsi, Rmrfstar, Roscoe x, Sagie, Sarenne, Scarian, Scope creep, Secondwatch, Seidenstud, Shenme, Skarebo, Sloman, Snigbrook, SoccerCore11, Solipsist,Spacepotato, Speedevil, Stan Shebs, StephenBuxton, Stevemcl10, Suresh1686, Suruena, Tartarus, Tevildo, Thaiio, The Thing That Should Not Be, The.happy.hippy, TheMandarin, TheParallax,Theresa knott, Thisara.d.m, Timichal, Timmh, Tom d perkins, Tomayres, Travelbird, Usman365, Vanuan, Viskr, Voidxor, Vwollan, Wernher, WikiPediaAid, Willem7, Wizardman,Wtshymanski, Yakudza, Yrogerg, Zedomax, Zodon, Zvn, ZyMOS, 568 anonymous edits

Instruction cycle  Source: http://en.wikipedia.org/w/index.php?oldid=398958982  Contributors: Adi4094, Al Lemos, Amaltheus, Atwin, Bernhard Bauer, Bookandcoffee, Brianjd, Bubba hotep,Burn, Carmichael95, Chaolinliu, Cpl Syx, Dex1337, Duomillia, Eamonnca1, EdGl, Eleuther, Euryalus, Falsedef, Fashionable.politics, FatalError, Grarynofreak, Insaniac99, Jab416171, Jmlk17,JonHarder, Judicatus, Jxr, Kbdank71, Kingpin13, Kk2mkk, Kku, Klosterdev, KurtRaschke, Kyattei, Leszek Jańczuk, Leuqarte, Ligulem, MER-C, Magioladitis, Mattymayhem, Mcoupal,Nalyd357, Nneonneo, OliviaGuest, OnePt618, Pac72, Phantomsteve, Proberts2003, R. S. Shaw, RJFJR, Ratbum, Rich Farmbrough, Rjwilmsi, Ronhjones, Sdornan, Sheps11, Sir Nicholas deMimsy-Porpington, Snowolf, SpuriousQ, UnknownzD, WikHead, 204 anonymous edits

Computer memory  Source: http://en.wikipedia.org/w/index.php?oldid=398089056  Contributors: 216.237.32.xxx, Aaagmnr, AzaToth, Baxter9, Blehfu, Conversion script, Cpiral, DARTHSIDIOUS 2, DMacks, Danoishan, Doniago, Espoo, Fraxtil, Fyyer, GoingBatty, Graham87, HumphreyW, Icairns, JamesAM, K.Nevelsteen, Keith Lehwald, Kevinkor2, King Semsem, Lenehey,Logan, Mange01, Matt Britt, Memorytn, Mindmatrix, Mlucas300, MusicCyborg23, Niczar, Nordeide, Nubiatech, Oicumayberight, Ovpjuggalor, PM800, Rebroad, Redirect fixer, Rwwww,Shanky98, Stevertigo, TheOtherSiguy, TheoClarke, Xedisdying, 72 anonymous edits

Memory-mapped I/O  Source: http://en.wikipedia.org/w/index.php?oldid=394699274  Contributors: (, A8UDI, Alfio, Angela, Asce88, Atlant, BaxterG4, Bpringlemeir, CanisRufus, Chris G,Damian Yerrick, Ewlyahoocom, Furrykef, GRAHAMUK, Gazpacho, Glennimoss, Haeleth, Intgr, IvanLanin, Ixfd64, Jengelh, Jesse Viviano, Jhansonxi, JonHarder, Kbdank71, Kne1p,Kubanczyk, Kyz, Leotohill, Mulad, Mysidia, Nbarth, Neilc, Nixdorf, OS2Warp, Omegatron, Philip Trueman, R. S. Shaw, Radagast83, Rick Sidwell, Sander123, Stuart Morrow, TakuyaMurata,Tambal1210, Template namespace initialisation script, Thomaslw, Toresbe, Vanessadannenberg, Wernher, Wiki alf, Xezbeth, 89 anonymous edits

Chip select  Source: http://en.wikipedia.org/w/index.php?oldid=383735761  Contributors: Cburnett, Firetwister, Greenshift, Jeff Wheeler, Mild Bill Hiccup, Rich Farmbrough, 8 anonymous edits

Reduced instruction set computing  Source: http://en.wikipedia.org/w/index.php?oldid=399068912  Contributors: 15.253, 16@r, 18.94, 209.239.198.xxx, 62.253.64.xxx, Adam Bishop,AgadaUrbanit, Alecv, Andre Engels, Andrew.baine, Aninhumer, Anss123, Autarchprinceps, AvayaLive, Bcaff05, Beanyk, Betacommand, Bobanater, Bobblewik, Brianski, Bryan Derksen,Btwied, C xong, C. A. Russell, Cambrant, Capricorn42, Cbturner46, Charles Matthews, Christan80, Cliffster1, Cmdrjameson, Conversion script, Corti, Cybermaster, Damian Yerrick, Darkink,Davewho2, David Gerard, David Shay, DavidCary, DavidConner, Davipo, Dbfirs, Dcoetzee, DeadEyeArrow, Derek Ross, Dkanter, Dmsar, Donreed, Dr zepsuj, DragonHawk, Drcwright, Drj,Dro Kulix, Dyl, Eclipsed aurora, EdgeOfEpsilon, Eloquence, EnOreg, Eras-mus, Evice, Finlay McWalter, Fonzy, Frap, Fredrik, Fromageestciel, Fujimuji, Furrykef, G3pro, GCFreak2, GaiusCornelius, Gazno, Gesslein, Gjs238, GregLindahl, GregorB, Guy Harris, Hadal, Hardyplants, Heirpixel, HenkeB, Henriok, Hephaestos, HubmaN, ISC PB, Iain.mcclatchie, Ianw, Imroy,IvanLanin, JVz, Jack1956, Jamesmusik, Jasongagich, Jay.slovak, Jengod, Jesse Viviano, Jevansen, Jiang, JoanneB, Johncatsoulis, JonHarder, Josh Grosse, JulesH, Kaszeta, Kate, Kbdank71,Kelly Martin, Kevin, Kman543210, Knutux, Koper, Koyaanis Qatsi, Kristof vt, Kwamikagami, Kwertii, Labalius, Larowebr, Leszek Jańczuk, Levin, Liao, Liftarn, Ligulem, Littleman TAMU,Lorkki, Lquilter, MER-C, MFH, Marcosw, Mark Richards, MarkMLl, Matsuiny2004, MattGiuca, Mattpat, Maurreen, Maury Markowitz, Mav, Mdz, MehrdadAfshari, Michael Hardy,Micky750k, Mike4ty4, MikeCapone, Mikeblas, Milan Keršláger, Mintleaf, Miremare, Modster, Moxfyre, MrPrada, MrStalker, Mrand, Mrwojo, Murray Langton, Nasukaren, Nate Silva, Neilc,Nikevich, Nurg, OCNative, Optakeover, Orichalque, Parklandspanaway, Paul D. Anderson, Paul Foxworthy, Pgquiles, Phil webster, Philippe, Pixel8, Plr4ever, PrimeHunter, Ptoboley,QTCaptain, Quuxplusone, Qwertyus, R. S. Shaw, RAMChYLD, RadicalBender, Radimvice, Rat144, Raysonho, RedWolf, Rehnn83, Remi0o, Retodon8, Rilak, Robert Merkel, Romanm,Rwwww, Saaya, Sbierwagen, Scepia, Scootey, Self-Perfection, Senpai71, Shieldforyoureyes, Shirifan, Sietse Snel, Simetrical, SimonW, Snoyes, Solipsist, Sonu mangla, SpeedyGonsales,SpuriousQ, Stan Shebs, Stephan Leeds, Stewartadcock, StuartBrady, Surturz, Susvolans, T-bonham, The Appleton, TheMandarin, Thorpe, Thumperward, Thunderbrand, TimBentley,Tksharpless, Toresbe, Toussaint, UncleDouggie, UnicornTapestry, Unyoyega, Uriyan, VampWillow, Vishwastengse, Watcharakorn, Wcooley, Weeniewhite, Weevil, Wehe, Wernher, Wik,Worthawholebean, Wws, Xyb, Yurik, Zachlipton, ZeroOne, ^demon, 382 anonymous edits

Complex instruction set computing  Source: http://en.wikipedia.org/w/index.php?oldid=397530536  Contributors: 209.239.198.xxx, Alimentarywatson, Andrejj, Arndbergmann, Blazar,Buybooks Marius, CanisRufus, Carbuncle, Cassie Puma, Collabi, Conversion script, DMTagatac, DaleDe, Davnor, DmitryKo, Dyl, Ejrrjs, Epbr123, Eras-mus, Ergbert, Ethancleary, EvanCarroll,Eyreland, Fejesjoco, Flying Bishop, Frap, Galain, Gardar Rurak, Graham Chapman, Guy Harris, HenkeB, James Foster, Jason Quinn, Joanjoc, JonHarder, Jpfagerback, Karl-Henner, Kbdank71,Kelly Martin, Kwertii, Liao, Lion10, MFH, MattGiuca, Mike4ty4, Mudlock, Murray Langton, NapoliRoma, Neilc, Nikto parcheesy, Optakeover, OrgasGirl, PS2pcGAMER, Pgquiles,PokeYourHeadOff, Prodego, Qbeep, Quuxplusone, R'n'B, R. S. Shaw, Rdnk, RekishiEJ, Rich Farmbrough, Rilak, Robert Merkel, Rwwww, Saaya, SimonP, Skittleys, Slady, Sopoforic, StephanLeeds, Swiftly, Template namespace initialisation script, Tesi1700, Thincat, Thomas PL, Tirppa, TutterMouse, UnicornTapestry, Urhixidur, VampWillow, Virtualphtn, Whaa?, WhiteTimberwolf,Wiki alf, Wws, 102 anonymous edits

Image Sources, Licenses and Contributors

Image:Binary clock.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Binary_clock.svg  License: Attribution  Contributors: Alexander Jones & Eric Pierce
Image:Half Adder.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Half_Adder.svg  License: Public Domain  Contributors: inductiveload
Image:ASCII Code Chart.svg  Source: http://en.wikipedia.org/w/index.php?title=File:ASCII_Code_Chart.svg  License: Public Domain  Contributors: User:Anomie
Image:ASCII Code Chart-Quick ref card.png  Source: http://en.wikipedia.org/w/index.php?title=File:ASCII_Code_Chart-Quick_ref_card.png  License: Public Domain  Contributors: User:LWChris
Image:ADSL modem router internals labeled.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:ADSL_modem_router_internals_labeled.jpg  License: Public Domain  Contributors: User:Mike1024
Image:Alix.1C board with AMD Geode LX 800 (PC Engines).jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Alix.1C_board_with_AMD_Geode_LX_800_(PC_Engines).jpg  License: GNU Free Documentation License  Contributors: User:Kozuch
Image:RouterBoard 112 with U.FL-RSMA pigtail and R52 miniPCI Wi-Fi card.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:RouterBoard_112_with_U.FL-RSMA_pigtail_and_R52_miniPCI_Wi-Fi_card.jpg  License: GNU Free Documentation License  Contributors: User:Kozuch
Image:Overo_with_coin.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Overo_with_coin.jpg  License: Public Domain  Contributors: User:JustinC474
File:ESOM270 eSOM300 Computer on Modules.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:ESOM270_eSOM300_Computer_on_Modules.jpg  License: Public Domain  Contributors: User:Lakshmin
Image:MicroVGA TUI demoapp.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:MicroVGA_TUI_demoapp.jpg  License: GNU Free Documentation License  Contributors: Martin Hinner
Image:Intel 4004.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Intel_4004.jpg  License: GNU Free Documentation License  Contributors: Original uploader was LucaDetomi at it.wikipedia
File:C4004.JPG.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:C4004.JPG.jpg  License: Public Domain  Contributors: Photo by John Pilge.
File:GI250 PICO1 die photo.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:GI250_PICO1_die_photo.jpg  License: Creative Commons Attribution 3.0  Contributors: User:Jamospingal
Image:80486DX2 200x.png  Source: http://en.wikipedia.org/w/index.php?title=File:80486DX2_200x.png  License: Creative Commons Attribution-Sharealike 2.5  Contributors: User:Uberpenguin
Image:PentiumDFront.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:PentiumDFront.JPG  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Mckaysalisbury
Image:PentiumDBack.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:PentiumDBack.JPG  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Mckaysalisbury
File:153056995 5ef8b01016 o.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:153056995_5ef8b01016_o.jpg  License: Creative Commons Attribution-Sharealike 2.0  Contributors: Ioan Sameli
File:PIC18F8720.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:PIC18F8720.jpg  License: Public Domain  Contributors: Achim Raschka, Zedh, 1 anonymous edits
File:Comp fetch execute cycle.png  Source: http://en.wikipedia.org/w/index.php?title=File:Comp_fetch_execute_cycle.png  License: Creative Commons Attribution 3.0  Contributors: User:Ratbum
Image:ENIAC Penn2.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:ENIAC_Penn2.jpg  License: GNU Free Documentation License  Contributors: Original uploader was TexasDex at en.wikipedia
Image:SPI three slaves.svg  Source: http://en.wikipedia.org/w/index.php?title=File:SPI_three_slaves.svg  License: GNU Free Documentation License  Contributors: Cburnett, 1 anonymous edits

License

Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/