Final Terms1
Transcript of Final Terms1 (8/3/2019)
(1) Flip-flop
In digital circuits, a flip-flop is a term referring to an electronic circuit that has two stable states and thereby is capable of serving as one bit of memory. A flip-flop is usually controlled by one or two control signals and/or a gate or clock signal. The output often includes the complement as well as the normal output.
Uses :-
A single flip-flop can be used to store one bit, or binary digit, of data.
Any one of the flip-flop types can be used to build any of the others.
Many logic synthesis tools will not use any type other than the D flip-flop and D latch.
Level-sensitive latches cause problems with Static Timing Analysis (STA) tools and Design For Test (DFT), so their usage is often discouraged. Many FPGA devices contain only edge-triggered D flip-flops.
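As a quick illustration of the one-bit storage behavior, a positive-edge-triggered D flip-flop can be sketched in Python (an illustrative model, not any particular device):

```python
# Minimal model of a positive-edge-triggered D flip-flop.
# On each rising clock edge the stored bit q takes the value of d;
# at all other times q holds its previous state.
class DFlipFlop:
    def __init__(self):
        self.q = 0          # stored bit (normal output)
        self._prev_clk = 0  # last clock level seen

    def tick(self, d, clk):
        if clk == 1 and self._prev_clk == 0:  # rising edge detected
            self.q = d
        self._prev_clk = clk
        return self.q, 1 - self.q             # Q and its complement

ff = DFlipFlop()
ff.tick(d=1, clk=0)         # no edge: q stays 0
print(ff.tick(d=1, clk=1))  # rising edge: q becomes 1 -> (1, 0)
print(ff.tick(d=0, clk=1))  # clock held high: q unchanged -> (1, 0)
```

Note how the second high-clock call leaves q untouched: only the 0-to-1 transition samples d, which is what distinguishes an edge-triggered flip-flop from a level-sensitive latch.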
(2) Adder
In electronics, an adder or summer is a digital circuit that performs addition of numbers. In modern computers adders reside in the arithmetic logic unit (ALU) where other operations are performed. Although adders can be constructed for many numerical representations, such as binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases where two's complement or one's complement is being used to represent negative numbers, it is trivial to modify an adder into an adder-subtractor. Other signed number representations require a more complex adder.
Types:-
(a) Half adder
A half adder is a logical circuit that performs an addition operation on two one-bit binary numbers, often written as A and B. The half adder output is the sum of the two inputs, usually represented with the signals C (carry) and S (sum), where

S = A XOR B
C = A AND B

As an example, a half adder can be built with an XOR gate and an AND gate, as in the circuit diagram below.
        ___________
A ------|         |------- S
        |  Half   |
        |  Adder  |------- C
B ------|_________|
Inputs    Outputs
A  B      C  S
0  0      0  0
0  1      0  1
1  0      0  1
1  1      1  0
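The table above can be reproduced directly from the two gate equations; a short Python sketch:

```python
# Half adder: S = A XOR B (sum), C = A AND B (carry).
def half_adder(a, b):
    return a ^ b, a & b   # (S, C)

# Print every row of the truth table.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(a, b, c, s)
```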
(b) Full adder
Schematic symbol for a 1-bit full adder, with Cin and Cout drawn on the sides of the block to emphasize their use in a multi-bit adder.
A full adder is a logical circuit that performs an addition operation on three one-bit binary numbers, often written as A, B, and Cin. The full adder produces a two-bit output sum, typically represented with the signals Cout and S, where

S = A XOR B XOR Cin
Cout = (A AND B) OR (Cin AND (A XOR B))
(3) Subtractor
In electronics, a subtractor can be designed using the same approach as that of an adder. The binary subtraction process is summarized below. As with an adder, in the general case of calculations on multi-bit numbers, three bits are involved in performing the subtraction for each bit of the difference: the minuend (Xi), subtrahend (Yi), and a borrow in from the previous (less significant) bit order position (Bi). The outputs are the difference bit (Di) and borrow bit Bi+1, where

Di = Xi XOR Yi XOR Bi
Bi+1 = (NOT Xi AND Yi) OR (NOT Xi AND Bi) OR (Yi AND Bi)

(the K-map for Bi+1 covers minterms 1, 2, 3, and 7).
Subtractors are usually implemented within a binary adder for only a small cost when using the standard two's complement notation, by providing an addition/subtraction selector to the carry-in and to invert the second operand, since -Y = (NOT Y) + 1 (the definition of two's complement negation).
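The addition/subtraction-selector trick can be sketched as a ripple-carry adder/subtractor. The 4-bit width is arbitrary; the point is that XORing the second operand with the mode bit and feeding the mode bit in as the initial carry computes a + ~b + 1 = a - b when subtracting:

```python
# Ripple-carry adder/subtractor sketch (4-bit, width is illustrative).
# When sub=1 each bit of b is inverted (XOR with sub) and sub itself is
# fed in as the initial carry, giving a + ~b + 1 = a - b in two's complement.
def add_sub(a, b, sub, width=4):
    carry = sub
    result = 0
    for i in range(width):
        ai = (a >> i) & 1
        bi = ((b >> i) & 1) ^ sub                 # selector inverts b for subtraction
        s = ai ^ bi ^ carry                        # full-adder sum
        carry = (ai & bi) | (carry & (ai ^ bi))    # full-adder carry
        result |= s << i
    return result                                  # result modulo 2**width

print(add_sub(6, 3, sub=0))  # 6 + 3 -> 9
print(add_sub(6, 3, sub=1))  # 6 - 3 -> 3
```

The final carry-out is simply discarded, which is exactly the wraparound behavior two's complement arithmetic expects.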
Inputs       Outputs
A  B  Ci     Co  S
0  0  0      0   0
1  0  0      0   1
0  1  0      0   1
1  1  0      1   0
0  0  1      0   1
1  0  1      1   0
0  1  1      1   0
1  1  1      1   1
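The full adder table above follows from the two equations S = A XOR B XOR Cin and Cout = (A AND B) OR (Cin AND (A XOR B)); a short check against ordinary arithmetic:

```python
# Full adder: S = A ^ B ^ Cin, Cout = (A & B) | (Cin & (A ^ B)).
def full_adder(a, b, cin):
    p = a ^ b                              # "propagate" term A XOR B
    return p ^ cin, (a & b) | (cin & p)    # (S, Cout)

# Cross-check every row of the table: the pair (Cout, S) read as a
# two-bit number must equal the arithmetic sum A + B + Cin.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
```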
(a) Half Subtractor
The half subtractor is a combinational circuit which is used to perform subtraction of two bits. It has two inputs, X (minuend) and Y (subtrahend), and two outputs, D (difference) and B (borrow). From the truth table one can draw the Karnaugh maps for "difference" and "borrow". So the logic equations are:

D = X XOR Y
B = (NOT X) AND Y
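With D = X XOR Y and B = (NOT X) AND Y, the half subtractor is two gates; a quick sketch that prints the full truth table:

```python
# Half subtractor: D = X XOR Y (difference), B = (NOT X) AND Y (borrow).
def half_subtractor(x, y):
    return x ^ y, (1 - x) & y   # (difference, borrow)

for x in (0, 1):
    for y in (0, 1):
        print(x, y, half_subtractor(x, y))
```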
(b) Full Subtractor
The full subtractor is a combinational circuit which is used to perform subtraction of three bits. It has three inputs, X (minuend), Y (subtrahend), and Z (borrow in), and two outputs, D (difference) and B (borrow out).
An easy way to write the truth table:
D = X - Y - Z (don't bother about sign)
B = 1 if X < (Y + Z)
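The arithmetic shortcut (difference is X - Y - Z taken mod 2, borrow whenever X < Y + Z) agrees with the gate-level equations. A small check, with the borrow expression assumed from the standard full-subtractor K-map:

```python
# Full subtractor, gate level (borrow expression from the standard K-map):
# D = X ^ Y ^ Z,  B = (~X)Y + (~X)Z + YZ.
def full_subtractor(x, y, z):
    d = x ^ y ^ z
    b = ((1 - x) & y) | ((1 - x) & z) | (y & z)
    return d, b

# Cross-check against the arithmetic shortcut:
# D = (X - Y - Z) mod 2, and B = 1 exactly when X < Y + Z.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            d, b = full_subtractor(x, y, z)
            assert d == (x - y - z) % 2
            assert b == int(x < y + z)
```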
(4) Opcode
In computer technology, an opcode (operation code) is the portion of a machine language instruction that specifies the operation to be performed. Their specification and format are laid out in the instruction set architecture of the processor in question (which may be a general CPU or a more specialized processing unit). Apart from the opcode itself, an instruction normally also has one or more specifiers for operands (i.e. data) on which the operation should act, although some operations may have implicit operands, or none at all. There are instruction sets with nearly uniform fields for opcode and operand specifiers, as well as others (the x86 architecture, for instance) with a more complicated, variable-length structure. Depending on the architecture, the operands may be register values, values in the stack, other memory values, I/O ports, etc., specified and accessed using more or less complex addressing modes. The types of operations include arithmetic, data copying, logical operations, and program control, as well as special instructions (such as CPUID and others).
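As an illustration, extracting the opcode and operand specifiers from a uniform-field instruction word is simple masking and shifting. The 16-bit format below is hypothetical, not any real ISA:

```python
# Hypothetical fixed-format 16-bit instruction word:
#   bits 15-12: opcode, bits 11-8: destination register, bits 7-0: immediate.
# (Field widths are illustrative only.)
def decode(word):
    opcode = (word >> 12) & 0xF   # operation to perform
    reg    = (word >> 8) & 0xF    # operand specifier: register number
    imm    = word & 0xFF          # operand specifier: immediate value
    return opcode, reg, imm

print(decode(0x12FF))  # -> (1, 2, 255)
```

Variable-length encodings like x86 cannot be decoded with one mask like this; the opcode bytes must first be parsed to learn how long the rest of the instruction is.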
(5) Instruction Code
In digital computers, a set of instructions that assigns algorithms for solving problems in a digital
computer; the principal part of the machine language. Problem-solving programs are compiled
according to definite rules with the aid of an instruction code. An instruction code is usually
presented in the form of tables that give mnemonic designations of the instructions according to their
structure, as well as declarations of format, restrictions on usage, and all the computer operations
controlled by the instructions.
An instruction code is not the same as an operations system. Two computers that have the same operations systems may differ in their instruction codes: for example, in their instruction address or instruction content (the complex of operations that are combined by each operation code). The efficiency in solving different problems depends to a considerable extent on the instruction code's capability of producing the necessary algorithms. Consequently, this is one of the fundamental parameters determining the structure of a digital computer. An instruction code is selected by
simulation of the structural scheme of the planned computer, by experimental programming using the
chosen code, and by evaluation and comparison of the results. In general-purpose digital computers
of low and medium capacity, the number of different operations in the instruction code varies from
32 to 64, and in high-performance computers it is in the range of 100 or more.
In modern digital computers the instruction code can be replaced or reorganized within certain limits
by using micro programmed control and extended by connecting to the computer additional
hardware, needed, for example, to process data in decimal notation when solving problems in
economics.
An instruction code is an intermediate step between the programmer's language and the computer's performance in executing a problem-solving program. Therefore, the program for the solution is executed in two stages: translation into the instruction language and conversion of the instructions into a control sequence of signals. The two-stage process simplifies the structure of the computer.
(6) Reduced Instruction Set Computing
Reduced instruction set computing, or RISC (pronounced /rɪsk/), is a CPU design strategy based on the insight that simplified (as opposed to complex) instructions can provide higher performance if this simplicity enables much faster execution of each instruction. A computer based on this strategy is a reduced instruction set computer (also RISC). There are many proposals for precise definitions[1], but the term is slowly being replaced by the more descriptive load-store architecture. Well-known RISC families include DEC Alpha, AMD 29k, ARC, ARM, Atmel AVR, MIPS, PA-RISC, Power (including PowerPC), SuperH, and SPARC.
Some aspects attributed to the first RISC-labeled designs around 1975 include the observations that the memory-restricted compilers of the time were often unable to take advantage of features intended to facilitate manual assembly coding, and that complex addressing modes take many cycles to perform due to the required additional memory accesses. It was argued that such functions would be better performed by sequences of simpler instructions if this could yield implementations small enough to leave room for many registers,[2] reducing the number of slow memory accesses. In these simple designs, most instructions are of uniform length and similar structure, arithmetic operations are restricted to CPU registers, and only separate load and store instructions access memory. These properties enable a better balancing of pipeline stages than before, making RISC pipelines significantly more efficient and allowing higher clock frequencies.
(7) Associative Memory
Associative memory (content-addressable memory, CAM): a memory that is capable of determining whether a given datum (the search word) is contained in one of its addresses or locations. This may be accomplished by a number of mechanisms. In some cases parallel combinational logic is applied at each word in the memory and a test is made simultaneously for coincidence with the search word. In other cases the search word and all of the words in the memory are shifted serially in synchronism; a single bit of the search word is then compared to the same bit of all of the memory words using as many single-bit coincidence circuits as there are words in the memory. Amplifications of the associative memory technique allow for masking the search word or requiring only a close match as opposed to an exact match. Small parallel associative memories are used in cache memory and virtual memory mapping applications.
Since parallel operations on many words are expensive (in hardware), a variety of stratagems are
used to approximate associative memory operation without actually carrying out the full test
described here. One of these uses hashing to generate a best guess for a conventional address
followed by a test of the contents of that address.
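The hashing stratagem can be sketched in Python, where a dictionary supplies the best-guess address and the memory contents at that address are then verified (names and data are illustrative):

```python
# Approximating content-addressable lookup on conventional RAM:
# a hash-based index maps each word to a guessed address, and the
# contents of that address are then checked against the search word.
memory = ["cat", "dog", "fox", "owl"]
index = {word: addr for addr, word in enumerate(memory)}  # built in advance

def cam_lookup(search_word):
    addr = index.get(search_word)        # best-guess address via hashing
    if addr is not None and memory[addr] == search_word:  # verify contents
        return addr
    return None

print(cam_lookup("fox"))   # -> 2
print(cam_lookup("emu"))   # -> None
```

A true parallel CAM compares the search word against every stored word simultaneously in hardware; the dictionary merely approximates that behavior in a single probe.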
An associative memory is also described as a storage unit of digital computers in which selection (entry) is performed not according to a concrete address but rather according to a preset combination (association) of attributes characteristic of the desired information. Such attributes can be part of a word (number), attached to it for detection
among other words, certain features of the word itself (for example, the presence of specific codes in
its digits), the absolute value of a word, its presence in a preset range, and so on.
The operation of an associative memory is based on the representation of all information in the form of a sequence of zones according to properties and characteristic attributes. In this case the retrieval of information is reduced to the determination of the zone according to the preset attributes by means of scanning and comparison of those attributes with the attributes that are stored in the associative memory. There are two basic methods of realizing the associative memory. The first is the construction of a memory with storage cells that have the capability of performing simultaneously the functions of storage, nondestructive reading, and comparison. Such a method of realizing an associative memory is called network parallel-associative; that is, the required sets of attributes are preserved in all the memory cells, and the information that possesses a given set of attributes is searched for simultaneously and independently over the entire storage capacity. Card indexes for edge-punched cards are prototypes of such an associative memory. Thin-film cryotrons, transfluxors, biaxes, magnetic thin films, and so on are used as storage elements of network-realized associative memories.
The second method of realizing an associative memory is the programmed organization (modeling)
of the memory. It consists of the establishment of associative connections between the information
contained in the memory by means of ordered arrangement of the information in the form of
sequential chains or groups (lists) connected by linkage addresses whose codes are stored in the same
memory cells. This procedure is the more suitable for practical realization in dealing with large
volumes of information because it provides for the use of conventional accumulators with address
reference.
The use of an associative memory considerably facilitates the programming and solution of
informational-logical problems and accelerates by hundreds (thousands) of times the speed of
retrieval, analysis, classification, and processing of data.
(8) Direct Memory Access
Direct memory access (DMA) is a feature of modern computers and microprocessors that allows
certain hardware subsystems within the computer to access system memory for reading and/or
writing independently of the central processing unit. Many hardware systems use DMA including
disk drive controllers, graphics cards, network cards and sound cards. DMA is also used for intra-chip data transfer in multi-core processors, especially in multiprocessor systems-on-chip, where each processing element is equipped with a local memory (often called scratchpad memory) and DMA is used for transferring data between the local memory and the main memory. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Similarly, a processing element inside a multi-core processor can transfer data to and from its local memory without occupying its processor time, allowing computation and data transfer to proceed concurrently.
Without DMA, using programmed input/output (PIO) mode for communication with peripheral
devices, or load/store instructions in the case of multicore chips, the CPU is typically fully occupied
for the entire duration of the read or write operation, and is thus unavailable to perform other work.
With DMA, the CPU initiates the transfer, does other operations while the transfer is in progress, and receives an interrupt from the DMA controller once the operation is done. This is especially useful in real-time computing applications where not stalling behind concurrent operations
is critical. Another and related application area is various forms of stream processing where it is
essential to have data processing and transfer in parallel, in order to achieve sufficient throughput.
Principle
DMA is an essential feature of all modern computers, as it allows devices to transfer data without
subjecting the CPU to a heavy overhead. Otherwise, the CPU would have to copy each piece of data
from the source to the destination, making itself unavailable for other tasks. This situation is
aggravated because access to I/O devices over a peripheral bus is generally slower than normal
system RAM. With DMA, the CPU gets freed from this overhead and can do useful tasks during data
transfer (though the CPU bus would be partly blocked by DMA). In the same way, a DMA engine in
an embedded processor allows its processing element to issue a data transfer and carries on its own
task while the data transfer is being performed.
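The initiate / work-concurrently / completion-interrupt pattern can be caricatured with a worker thread standing in for the DMA engine (a toy model only; real DMA hardware copies data without any second instruction stream):

```python
import threading

# Toy illustration of the DMA idea: a worker thread copies a buffer while
# the "CPU" keeps doing other work, and an Event callback plays the role
# of the completion interrupt. All names here are illustrative.
done = threading.Event()

def dma_transfer(src, dst, on_complete):
    dst[:] = src            # the "device-to-memory" block copy
    on_complete()           # raise the completion "interrupt"

src = list(range(1000))
dst = [0] * 1000
engine = threading.Thread(target=dma_transfer, args=(src, dst, done.set))
engine.start()                               # CPU initiates the transfer...
busy_work = sum(i * i for i in range(100))   # ...and does other work meanwhile
done.wait()                                  # handle the completion interrupt
engine.join()
print(dst == src)                            # -> True
```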
A DMA transfer copies a block of memory from one device to another. While the CPU initiates the
transfer by issuing a DMA command, it does not execute it. For so-called "third party" DMA, as is normally used with the ISA bus, the transfer is performed by a DMA controller which is typically part of the motherboard chipset. More advanced bus designs such as PCI typically use bus-mastering DMA, where the device takes control of the bus and performs the transfer itself. In an embedded processor or multiprocessor system-on-chip, it is a DMA engine connected to the on-chip bus that
actually administers the transfer of the data, in coordination with the flow control mechanisms of the
on-chip bus.
A typical usage of DMA is copying a block of memory from system RAM to or from a buffer on the
device. Such an operation usually does not stall the processor, which as a result can be scheduled to
perform other tasks unless those tasks include a read from or write to memory. DMA is essential to
high performance embedded systems. It is also essential in providing so-called zero-copy
implementations of peripheral device drivers as well as functionalities such as network packet
routing, audio playback and streaming video. Multicore embedded processors (in the form of
multiprocessor system-on-chip) often use one or more DMA engines in combination with scratchpad
memories for both increased efficiency and lower power consumption. In computer clusters for high-performance computing, DMA among multiple computing nodes is often used under the name of remote DMA. Two control signals are used to request and acknowledge a DMA transfer in a microprocessor-based system: the HOLD pin is used to request a DMA action, and the HLDA pin is an output that acknowledges the DMA action.
(9) Coprocessors
A coprocessor is a computer processor used to supplement the functions of the primary processor (the CPU). Operations performed by the coprocessor may be floating point arithmetic, graphics, signal processing, string processing, or encryption. By offloading processor-intensive tasks from the main processor, coprocessors can accelerate system performance. Coprocessors allow a line of computers to be customized, so that customers who do not need the extra performance need not pay for it.
Coprocessors were first seen on mainframe computers, where they added "optional" functionality
such as floating point math support. A more common use was to control input/output channels,
where they were more often called channel controllers.
Intel coprocessors
(Figures: i8087 and i80287 microarchitecture; i80387 microarchitecture.)
The original IBM PC included a socket for the Intel 8087 floating-point coprocessor (aka FPU)
which was a popular option for people using the PC for CAD or mathematics-intensive calculations.
In that architecture, the coprocessor sped up floating-point arithmetic on the order of fiftyfold. Users
that only used the PC for word processing, for example, saved the high cost of the coprocessor,
which would not have accelerated performance of text manipulation operations.
The 8087 was tightly integrated with the 8086/8088 and responded to floating-point machine code
operation codes inserted in the 8088 instruction stream. An 8088 processor without an 8087 would
interpret these instructions as an internal interrupt, which could be directed to trap an error or to trigger emulation of the 8087 instructions in software.
(Figure: Intel 80386 CPU with 80387 math coprocessor.)
Another coprocessor for the 8086/8088 central processor was the 8089 input/output coprocessor. It used the same programming technique as the 8087 for input/output operations, such as transfer of data
from memory to a peripheral device, and so reducing the load on the CPU. But IBM didn't use it in
IBM PC design and Intel stopped development of this type of coprocessor.
During the era of 8- and 16-bit desktop computers another common source of floating-point coprocessors was Weitek. The Intel 80386 microprocessor used an optional "math" coprocessor (the 80387) to perform floating point operations directly in hardware.
The Intel 80486DX processor included floating-point hardware on the chip. Intel released a cost-
reduced processor, the 80486SX, that had no FP hardware, and also sold an 80487SX co-processor
that essentially disabled the main processor when installed, since the 80487SX was a complete
80486DX with a different set of pin connections.
Intel processors later than the 80486 integrated floating-point hardware on the main processor chip;
the advances in integration eliminated the cost advantage of selling the floating point processor as an
optional element. It would be very difficult to adapt circuit-board techniques adequate at 75 MHz
processor speed to meet the time-delay, power consumption, and radio-frequency interference
standards required at gigahertz-range clock speeds. These on-chip floating point processors are still referred to as coprocessors because they operate in parallel with the main CPU.
Motorola coprocessors
The Motorola 68000 family had the 68881/68882 coprocessors which provided similar floating-point
speed acceleration as for the Intel processors. Computers using the 68000 family but not equipped
with the hardware floating point processor could trap and emulate the floating-point instructions in software, which, although slower, allowed one binary version of the program to be distributed for both cases.
(10) Universal set
The term "universal set" is also used for a universe of discourse, that is, the domain with respect to
which the absolute set complement is taken; this entry concerns the set-theoretic notion.
In set theory, a universal set is a set which contains all objects, including itself.[1] In set theory as
usually formulated, the conception of a set of all sets leads to a paradox. The reason lies with
the parameters of Zermelo's axiom of separation: for any formula φ(x) and set A, the set
{x ∈ A : φ(x)}, which contains exactly those elements x of A that satisfy φ, exists. If the
universal set V existed, then Russell's paradox could be recovered by considering
{x ∈ V : x ∉ x}. More generally, for any set A we can prove that {x ∈ A : x ∉ x} is not an element of A.
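The argument sketched above can be written out as a short derivation (a standard reconstruction of
Russell's argument from the axiom of separation; the notation R_A is introduced here for the set the
axiom produces):

```latex
\text{For any set } A,\ \text{separation gives } R_A = \{\, x \in A : x \notin x \,\}. \\
\text{If } R_A \in A, \text{ then } R_A \in R_A \iff R_A \notin R_A, \text{ a contradiction.} \\
\text{Hence } R_A \notin A: \text{ no set contains every set, so a universal set } V \text{ cannot exist.}
```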
A second issue is that the power set of the set of all sets would be a subset of the set of all sets,
provided that both exist. This conflicts with Cantor's theorem that the power set of any set (whether
infinite or not) always has strictly higher cardinality than the set itself.
The idea of a universal set seems intuitively desirable in Zermelo–Fraenkel set theory,
particularly because most versions of this theory do allow the use of quantifiers over all sets (see
universal quantifier). This is handled by allowing carefully circumscribed mention of V and similar
large collections as proper classes. In theories with proper classes the statement V ∈ V is not true
because proper classes cannot be elements.
Set theories with a universal set
There are set theories known to be consistent (if the usual set theory is consistent) in which the
universal set V does exist (and V ∈ V is true). In these theories, Zermelo's axiom of separation does
not hold in general, and the axiom of comprehension of naive set theory is restricted in a different
way.
The most widely studied set theory with a universal set is Willard Van Orman Quine's New
Foundations. Alonzo Church and Arnold Oberschelp also published work on such set theories.
Church speculated that his theory might be extended in a manner consistent with Quine's,[2] but this
is not possible for Oberschelp's, since in it the singleton function is provably a set,[3] which leads
immediately to paradox in New Foundations.[4]
Zermelo–Fraenkel set theory and related set theories, which are based on the idea of the cumulative
hierarchy, do not allow for the existence of a universal set.
(11) Complex Instruction Set Computing
A complex instruction set computer (CISC, pronounced /ˈsɪsk/) is a computer in which single
instructions can execute several low-level operations (such as a load from memory, an arithmetic
operation, and a memory store) and/or are capable of multi-step operations or addressing modes
within single instructions. The term was retroactively coined in contrast to reduced instruction set
computer (RISC).
Examples of CISC instruction set architectures are System/360 through z/Architecture, PDP-11,
VAX, Motorola 68k, and x86.
(12) Paging
In computer operating systems, paging is one of the memory-management schemes by which a
computer can store and retrieve data from secondary storage for use in main memory. In the paging
memory-management scheme, the operating system retrieves data from secondary storage in same-
size blocks called pages. The main advantage of paging is that it allows the physical address space of
a process to be noncontiguous. Before paging, systems had to fit whole programs into storage
contiguously, which caused various storage and fragmentation problems.[1]
Paging is an important part of virtual memory implementation in most contemporary general-purpose
operating systems, allowing them to use disk storage for data that does not fit into physical Random-
access memory (RAM). Paging is usually implemented as architecture-specific code built into the
kernel of the operating system.
The main functions of paging are performed when a program tries to access pages that are not
currently mapped to physical memory (RAM). This situation is known as a page fault. The operating
system must then take control and handle the page fault, in a manner invisible to the program.
Therefore, the operating system must:
1. Determine the location of the data in auxiliary storage.
2. Obtain an empty page frame in RAM to use as a container for the data.
3. Load the requested data into the available page frame.
4. Update the page table to show the new data.
5. Return control to the program, transparently retrying the instruction that caused the page
fault.
Because RAM is faster than auxiliary storage, paging is avoided until there is not enough RAM to
store all the data needed. When this occurs, a page in RAM is moved to auxiliary storage, freeing up
space in RAM for use. Thereafter, whenever the page in secondary storage is needed, a page in RAM
is saved to auxiliary storage so that the requested page can then be loaded into the space left behind
by the old page. Efficient paging systems must determine the page to swap by choosing one that is
least likely to be needed within a short time. There are various page replacement algorithms that try
to do this.
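The behavior of two classic page replacement algorithms can be illustrated with a small simulation.
This is an illustrative sketch (the reference string and frame count are invented, not from the text),
comparing FIFO and LRU replacement by counting page faults:

```python
from collections import OrderedDict, deque

def count_faults_fifo(refs, frames):
    """Count page faults with FIFO replacement: evict the oldest resident page."""
    resident, order, faults = set(), deque(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:          # no free frame: evict oldest
                resident.discard(order.popleft())
            resident.add(page)
            order.append(page)
    return faults

def count_faults_lru(refs, frames):
    """Count page faults with LRU replacement (OrderedDict tracks recency)."""
    recency, faults = OrderedDict(), 0
    for page in refs:
        if page in recency:
            recency.move_to_end(page)            # hit: mark most recently used
        else:
            faults += 1
            if len(recency) == frames:           # evict least recently used
                recency.popitem(last=False)
            recency[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]      # hypothetical reference string
print(count_faults_fifo(refs, 3))                # 9 faults with 3 frames
print(count_faults_lru(refs, 3))                 # 10 faults with 3 frames
```

Neither policy wins universally; which one faults less depends on the access pattern, which is why
real systems approximate "least likely to be needed soon" with heuristics.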
(13) Validation & Verification
Validation:-
1. The process of determining whether a value in a table's data cell fits within the allowable
range or is a member of a set of acceptable values.
2. The process of evaluating software to ensure compliance with established requirements and
design criteria.
Verification:-
The process of evaluating an application to determine whether the work products of a stage of a
software development lifecycle fulfill the requirements established during the previous stage.
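The first sense of validation above (checking that a cell value falls within an allowable range or
belongs to a set of acceptable values) can be sketched as follows; the field values and limits are
invented for illustration:

```python
def validate_cell(value, allowed_range=None, allowed_set=None):
    """Return True if value passes an optional range check and/or set-membership check."""
    if allowed_range is not None:
        lo, hi = allowed_range
        if not (lo <= value <= hi):
            return False                 # outside the allowable range
    if allowed_set is not None and value not in allowed_set:
        return False                     # not one of the acceptable values
    return True

print(validate_cell(42, allowed_range=(0, 120)))           # True: in range
print(validate_cell("teal", allowed_set={"red", "blue"}))  # False: not allowed
```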
(14) Spiral Model
An iterative version of the waterfall software development model. Rather than producing the
entire software product in one linear series of steps, the spiral development model implies that
specific components or sets of components of the software product are brought through each of the
stages in the software development lifecycle before development begins on the next set. Once one
component is completed, the next component in order of applicability is developed. Often, a
dependency tree is used where components that have other components dependent upon them are
developed first.
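The dependency-first ordering described above is essentially a topological sort of the component
graph. A minimal sketch using Python's standard graphlib module (the component names and
dependencies are hypothetical):

```python
from graphlib import TopologicalSorter

# component -> set of components it depends on (hypothetical example)
deps = {
    "ui":      {"auth", "storage"},
    "auth":    {"storage"},
    "storage": set(),
    "reports": {"storage", "auth"},
}

# static_order() yields components so that every dependency precedes its dependents
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Components with no dependents of their own come out first, matching the spiral model's rule that a
component is developed only after everything it depends on is complete.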
(15) Distributed database
A distributed database is a database that is under the control of a central database management
system (DBMS) in which storage devices are not all attached to a common CPU. It may be stored in
multiple computers located in the same physical location, or may be dispersed over a network of
interconnected computers.
Collections of data (e.g. in a database) can be distributed across multiple physical locations. A
distributed database can reside on network servers on the Internet, on corporate intranets or extranets,
or on other company networks. Replication and distribution of databases improve database
performance at end-user worksites.
To ensure that distributed databases are up to date and current, there are two processes:
replication and duplication. Replication involves using specialized software that looks for changes in
the distributed database. Once the changes have been identified, the replication process makes all
the databases look the same. The replication process can be very complex and time consuming
depending on the size and number of the distributed databases, and can also require a lot of
time and computer resources. Duplication, on the other hand, is not as complicated. It basically
identifies one database as a master and then duplicates that database. The duplication process is
normally done at a set time after hours. This is to ensure that each distributed location has the same
data. In the duplication process, changes are allowed to the master database only. This is to ensure
that local data will not be overwritten. Both of these processes can keep the data current in all
distributed locations.
Besides distributed database replication and fragmentation, there are many other distributed database
design technologies. For example, local autonomy, synchronous and asynchronous distributed
database technologies. These technologies' implementation can and does depend on the needs of the
business and the sensitivity/confidentiality of the data to be stored in the database, and hence the
price the business is willing to spend on ensuring data security, consistency and integrity.
(16) Denormalization
Denormalization is the process of attempting to optimize the read performance of a database by
adding redundant data or by grouping data. In some cases, denormalization helps cover up the
inefficiencies inherent in relational database software. A relational normalized database imposes a
heavy access load over physical storage of data even if it is well tuned for high performance.
A normalized design will often store different but related pieces of information in separate logical
tables (called relations). If these relations are stored physically as separate disk files, completing a
database query that draws information from several relations (a join operation) can be slow. If many
relations are joined, it may be prohibitively slow. There are two strategies for dealing with this. The
preferred method is to keep the logical design normalized, but allow the database management
system (DBMS) to store additional redundant information on disk to optimize query response. In this
case it is the DBMS software's responsibility to ensure that any redundant copies are kept consistent.
This method is often implemented in SQL as indexed views (Microsoft SQL Server) or materialized
views (Oracle). A view represents information in a format convenient for querying, and the index
ensures that queries against the view are optimized.
The more usual approach is to denormalize the logical data design. With care this can achieve a
similar improvement in query response, but at a cost: it is now the database designer's responsibility
to ensure that the denormalized database does not become inconsistent. This is done by creating rules
in the database, called constraints, that specify how the redundant copies of information must be kept
synchronized. It is the increase in logical complexity of the database design and the added
complexity of the additional constraints that make this approach hazardous. Moreover, constraints
introduce a trade-off, speeding up reads (SELECT in SQL) while slowing down writes (INSERT,
UPDATE, and DELETE). This means a denormalized database under heavy write load may actually
offer worse performance than its functionally equivalent normalized counterpart.
A denormalized data model is not the same as a data model that has not been normalized:
denormalization should take place only after a satisfactory level of normalization has been achieved
and any required constraints and/or rules have been created to deal with the inherent anomalies in the
design. For example, all the relations should be in third normal form, and any relations with join and
multi-valued dependencies handled appropriately.
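As a concrete illustration of the designer-managed approach, the sketch below (an invented schema,
using SQLite via Python's sqlite3 module) keeps a redundant order_count column on the customers
table, with a trigger playing the role of the constraint that keeps the redundant copy synchronized:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (
    id INTEGER PRIMARY KEY,
    name TEXT,
    order_count INTEGER NOT NULL DEFAULT 0   -- redundant, denormalized data
);
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(id)
);
-- The 'constraint' keeping the redundant counter consistent on every write:
CREATE TRIGGER bump AFTER INSERT ON orders BEGIN
    UPDATE customers SET order_count = order_count + 1
    WHERE id = NEW.customer_id;
END;
""")
con.execute("INSERT INTO customers (id, name) VALUES (1, 'acme')")
con.executemany("INSERT INTO orders (customer_id) VALUES (?)", [(1,), (1,)])

# The read is now a single-row lookup instead of a join plus COUNT(*):
print(con.execute("SELECT order_count FROM customers WHERE id = 1").fetchone()[0])
```

Note the trade-off the text describes: every INSERT into orders now pays for an extra UPDATE,
which is exactly why heavy write loads can make a denormalized design slower overall.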
(17)ADSL (Asymmetric Digital Subscriber Line)
ADSL stands for "Asymmetric Digital Subscriber Line." ADSL is a type of DSL, which
is a method of transferring data over copper telephone lines. While symmetrical DSL (SDSL)
uploads and downloads data at the same speed, ADSL has different maximum data transfer rates for
uploading and downloading data.
For example, an ADSL connection may allow download rates of 1.5 Mbps, while upload speeds may
only reach 256 Kbps. Since most users download much more data than they upload, this difference
usually does not make a noticeable impact on Internet access speeds. However, for Web servers or
other computers that send a lot of data upstream, ADSL would be an inefficient choice.
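Using the example rates above, the asymmetry is easy to quantify. Transferring the same file (a 10 MB
size is assumed here for illustration) takes far longer upstream than downstream:

```python
FILE_BITS = 10 * 8 * 10**6         # a 10-megabyte file, expressed in bits

down_s = FILE_BITS / 1_500_000     # at the 1.5 Mbps downstream rate
up_s   = FILE_BITS / 256_000       # at the 256 Kbps upstream rate

print(round(down_s, 1))            # ~53.3 seconds to download
print(round(up_s, 1))              # ~312.5 seconds to upload the same file
```

These are idealized line-rate figures; protocol overhead makes real transfers somewhat slower, but
the roughly 6:1 ratio is what makes ADSL a poor fit for servers that push data upstream.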
(18) Natural language processing
Natural language processing gives machines the ability to read and understand the languages that
humans speak. Many researchers hope that a sufficiently powerful natural language processing
system would be able to acquire knowledge on its own, by reading the existing text available over
the internet. Some straightforward applications of natural language processing include information
retrieval (or text mining) and machine translation.
Perception
Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar
and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze
visual input. A few selected subproblems are speech recognition, facial recognition, and object
recognition.
(19) Domain
While the term "domain" is often used synonymously with "domain name," it also has a definition
specific to local networks.
A domain contains a group of computers that can be accessed and administered with a common set
of rules. For example, a company may require all local computers to be networked within the same
domain so that each computer can be seen from other computers within the domain or located from a
central server. Setting up a domain may also block outside traffic from accessing computers within
the network, which adds an extra level of security.
While domains can be set up using a variety of networking software, including applications from
Novell and Oracle, Windows users are most likely familiar with Windows Network Domains. This
networking option is built into Windows and allows users to create or join a domain. The domain
may or may not be password-protected. Once connected to the domain, a user may view other
computers within the domain and can browse the shared files and folders available on the connected
systems.
Windows XP users can browse Windows Network Domains by selecting the "My Network Places"
option on the left side of an open window. You can create a new domain by using the Network Setup
Wizard. Mac users running Mac OS X 10.2 or later can also connect to a Windows network by
clicking the "Network" icon on the left side of an open window. This will allow you to browse local
Macintosh and Windows networks using the SMB protocol.
(20) Applet
This is a Java program that can be embedded in a Web page. The difference between a standard Java
application and a Java applet is that an applet can't access system resources on the local computer.
System files and serial devices (modems, printers, scanners, etc.) cannot be called or used by the
applet. This is for security reasons -- nobody wants their system wiped out by a malicious applet on
some wacko's Web site. Applets have helped make the Web more dynamic and entertaining and have
given a helpful boost to the Java programming language.
(21) Primary key
Primary key means main key.
Def.:- A primary key is one which uniquely identifies a row of a table. This key does not allow null
values and also does not allow duplicate values.
for example:
empno empname salary
1 firoz 35000
2 basha 34000
3 chintoo 40000
It will not allow values such as the following (a duplicate key and a missing key):

1 firoz 35000
1 basha 34000
(null) chintoo 35000
Definition : A primary key, also called a primary keyword, is a key in a relational database
that is unique for each record. It is a unique identifier, such as a driver license number, telephone
number (including area code), or vehicle identification number (VIN). A relational table must
always have one and only one primary key. Primary keys typically appear as columns in relational
database tables.
The choice of a primary key in a relational database often depends on the preference of the
administrator. It is possible to change the primary key for a given database when the specific needs
of the users change. For example, the people in a town might be uniquely identified according to
their driver license numbers in one application, but in another situation it might be more convenient
to identify them according to their telephone numbers.
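The uniqueness rule described above can be demonstrated with SQLite, using the empno example
from the text (a sketch; SQLite reports a duplicate primary key as an IntegrityError):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE emp (
    empno   INTEGER PRIMARY KEY,   -- unique by definition
    empname TEXT,
    salary  INTEGER)""")
con.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [(1, "firoz", 35000), (2, "basha", 34000), (3, "chintoo", 40000)])

try:
    con.execute("INSERT INTO emp VALUES (1, 'dup', 0)")   # duplicate empno
except sqlite3.IntegrityError as e:
    print("rejected:", e)                                 # UNIQUE constraint failed
```

The rejected row never enters the table, so every record remains uniquely identifiable by empno.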
(22) Encryption
Encryption is the coding or scrambling of information so that it can only be decoded and read by
someone who has the correct decoding key. Encryption is used in secure Web sites as well as other
media of data transfer. If a third party were to intercept the information you sent via an encrypted
connection, they would not be able to read it. So if you are sending a message over the office
network to your co-worker about how much you hate your job, your boss, and the whole dang
company, it would be a good idea to make sure that you send it over an encrypted line.
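A toy illustration of the symmetric idea (the same key decodes what it encoded): the XOR cipher
below is NOT secure and only shows the encode/decode round trip; real systems use vetted
algorithms such as AES.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'encryption': applying it twice with the same key restores the data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = xor_cipher(b"meet at noon", b"k3y")   # 'k3y' is a made-up key
print(secret != b"meet at noon")               # ciphertext differs from plaintext
print(xor_cipher(secret, b"k3y"))              # same key recovers the message
```

Without the key, an interceptor sees only the scrambled bytes, which is the property the definition
above describes; what separates this toy from real encryption is that XOR with a short repeating key
is trivially breakable.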
(23) Two-phase locking
In databases and transaction processing (transaction management), two-phase locking (2PL) is a
concurrency control locking protocol, or mechanism, which guarantees serializability (e.g., see
Bernstein et al. 1987, Weikum and Vossen 2001). It is also the name of the resulting class (set) of
transaction schedules. Because it uses locks that block processes, 2PL may be subject to deadlocks
that result from the mutual blocking of two or more transactions.
2PL is a super-class of Strong strict two-phase locking (SS2PL), also called rigorousness, which
has been widely utilized for concurrency control in general-purpose database systems since their
early days in the 1970s. SS2PL implementation has many variants. SS2PL was called in the past
strict 2PL, and confusingly it is still called so by some. SS2PL is also a special case (subclass) of
commitment ordering (commit ordering; CO), and inherits many of CO's useful properties.
2PL in its general form, as well as when combined with strictness, i.e., strict 2PL (S2PL), is not
known to be utilized in practice.
Two-phase locking
According to the two-phase locking protocol, a transaction handles its locks in two distinct,
consecutive phases during the transaction's execution:
1. Expanding phase (number of locks can only increase): locks are acquired and no locks are
released.
2. Shrinking phase: locks are released and no locks are acquired.
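The two-phase rule can be made concrete with a small sketch (illustrative only and single-threaded;
a real lock manager also handles lock modes, blocking, and deadlock detection): once a transaction
releases any lock, further acquisitions are refused.

```python
class TwoPhaseTxn:
    """Enforces the 2PL rule: no lock may be acquired after any release."""

    def __init__(self):
        self.held = set()
        self.shrinking = False       # becomes True at the first release

    def acquire(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: acquire after release")
        self.held.add(item)          # expanding phase: locks only grow

    def release(self, item):
        self.shrinking = True        # the expanding phase ends here, permanently
        self.held.discard(item)

t = TwoPhaseTxn()
t.acquire("x")
t.acquire("y")                       # expanding phase
t.release("x")                       # shrinking phase begins
try:
    t.acquire("z")                   # illegal under 2PL
except RuntimeError as e:
    print(e)
```

The point of the rule is that every transaction's lock set peaks at a single moment, which is what
makes schedules of such transactions serializable.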
The serializability property is guaranteed for a schedule with transactions that obey the protocol. The
2PL schedule class is defined as the class of all the schedules comprising transactions with data
access orders that could be generated by the 2PL protocol.
Typically, without explicit knowledge in a transaction of the end of phase-1, it is safely determined
only when a transaction has entered its ready state in all its processes (processing has ended, and it
is ready to be committed; no additional locking is possible). In this case phase-2 can end immediately
(no additional processing is needed), and actually no phase-2 is needed. Also, if several processes
(two or more) are involved, then a synchronization point (similar to atomic commitment) among
them is needed to determine end of phase-1 for all of them (i.e., in the entire distributed transaction),
to start releasing locks in phase-2 (otherwise it is very likely that both 2PL and Serializability are
quickly violated). Such a synchronization point is usually too costly (involving a distributed protocol
similar to atomic commitment), and end of phase-1 is usually postponed to be merged with
transaction end (atomic commitment protocol for a multi-process transaction), and again phase-2 is
not needed. This turns 2PL to SS2PL (see below). All known implementations of 2PL in products
are SS2PL based.
(24) ATA (Advanced Technology Attachment)
ATA stands for "Advanced Technology Attachment." It is a type of disk drive that integrates the
drive controller directly on the drive itself. Computers can use ATA hard drives without a specific
controller to support the drive. The motherboard must still support an ATA connection, but a
separate card (such as a SCSI card for a SCSI hard drive) is not needed. Some different types of
ATA standards include ATA-1, ATA-2 (a.k.a. Fast ATA), ATA-3, Ultra ATA (33 MBps maximum
transfer rate), ATA/66 (66 MBps), and ATA/100 (100 MBps).
The term IDE, or "Integrated Drive Electronics," is also used to refer to ATA drives. Sometimes (to
add extra confusion to people buying hard drives), ATA drives are labeled as "IDE/ATA."
Technically, ATA uses IDE technology, but the important thing to know is that they refer to the
same thing.
(25) Exception Handling
An exception is a problem that arises during the execution of a program. An exception can occur for
many different reasons, including the following:
A user has entered invalid data.
A file that needs to be opened cannot be found.
A network connection has been lost in the middle of communications, or the JVM has run out
of memory.
Some of these exceptions are caused by user error, others by programmer error, and others by
physical resources that have failed in some manner.
To understand how exception handling works in Java, you need to understand the three categories of
exceptions:
Checked exceptions: A checked exception is an exception that is typically a user error or a problem
that cannot be foreseen by the programmer. For example, if a file is to be opened, but the file cannot
be found, an exception occurs. These exceptions cannot simply be ignored at the time of compilation.
Runtime exceptions: A runtime exception is an exception that occurs that probably could have been
avoided by the programmer. As opposed to checked exceptions, runtime exceptions are ignored at
the time of compilation.
Errors: These are not exceptions at all, but problems that arise beyond the control of the user or the
programmer. Errors are typically ignored in your code because you can rarely do anything about an
error. For example, if a stack overflow occurs, an error will arise. They are also ignored at the time
of compilation.
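The categories above are specific to Java, but the basic raise-and-handle mechanics are similar
across languages. A minimal Python sketch of handling a "user entered invalid data" case (Python has
no checked exceptions, so this parallels Java's runtime-exception case; the function and limits are
invented):

```python
def parse_age(text: str) -> int:
    """Raise ValueError for invalid user input instead of crashing the program."""
    age = int(text)                  # raises ValueError on non-numeric input
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

try:
    print(parse_age("42"))           # normal path: prints 42
    parse_age("not a number")        # raises; control jumps to the handler
except ValueError as e:
    print("handled:", e)
```

The handler gives the program a chance to recover (re-prompt the user, log, retry) rather than
terminate, which is the whole point of exception handling.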
(26) Cluster
A group of sectors on a disk. While a sector is the smallest unit that can be accessed on your hard
disk, a cluster is a slightly larger unit that is used to organize and identify files on the disk. Most files
take up several clusters of disk space.
Each cluster has a unique ID, which enables the hard drive to locate all the clusters on the disk. After
reading and writing many files to a disk, some clusters may remain labeled as being used even
though they do not contain any data. These are called "lost clusters" and can be fixed using ScanDisk
on Windows or the Disk Utility program on the Mac. This is why running a disk utility or
defragmentation program may free up space on your hard disk.
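Because files occupy whole clusters, on-disk usage rounds up to a cluster boundary. A quick
illustration, assuming a 4 KB cluster size (cluster sizes vary by file system and volume size):

```python
CLUSTER = 4096                             # assumed cluster size in bytes

def on_disk_size(file_bytes: int) -> int:
    """Space actually consumed: file size rounded up to whole clusters."""
    clusters = -(-file_bytes // CLUSTER)   # ceiling division
    return clusters * CLUSTER

print(on_disk_size(1))        # 4096: even a 1-byte file occupies a full cluster
print(on_disk_size(10_000))   # 12288: a 10,000-byte file needs three clusters
```

The unused tail of the last cluster is called slack space, which is why many small files waste
proportionally more disk than a few large ones.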
(27) OSI
The Open Systems Interconnection (OSI) model is a reference model developed by ISO (International
Organization for Standardization) in 1984, as a conceptual framework of standards for
communication in the network across different equipment and applications by different vendors. It is
now considered the primary architectural model for inter-computing and internetworking
communications. Most of the network communication protocols used today have a structure based on
the OSI model. The OSI model defines the communications process into 7 layers, which divides the
tasks involved with moving information between networked computers into seven smaller, more
manageable task groups. A task or group of tasks is then assigned to each of the seven OSI layers.
Each layer is reasonably self-contained so that the tasks assigned to each layer can be implemented
independently. This enables the solutions offered by one layer to be updated without adversely
affecting the other layers.
OSI 7 Layers Reference Model For Network Communication
[Figure: OSI model with comparison to TCP/IP]
Layer 7:Application Layer
Defines interface to user processes for communication and data transfer in network
Provides standardized services such as virtual terminal, file and job transfer and operations
Layer 6:Presentation Layer
Masks the differences of data formats between dissimilar systems
Specifies architecture-independent data transfer format
Encodes and decodes data; Encrypts and decrypts data; Compresses and decompresses data
Layer 5: Session Layer
Manages user sessions and dialogues
Controls establishment and termination of logic links between users
Reports upper layer errors
Layer 4: Transport Layer
Manages end-to-end message delivery in network
Provides reliable and sequential packet delivery through error recovery and flow control mechanisms
Provides connectionless-oriented packet delivery
Layer 3: Network Layer
Determines how data are transferred between network devices
Routes packets according to unique network device addresses
Provides flow and congestion control to prevent network resource depletion
Layer 2: Data Link Layer
Defines procedures for operating the communication links
Frames packets
Detects and corrects packet transmission errors
Layer 1: Physical Layer
Defines physical means of sending data over network devices
Interfaces between network medium and devices
Defines optical, electrical and mechanical characteristics
(28) Extranet
If you know the difference between the Internet and an intranet, you have an above average
understanding of computer terminology. If you know what an extranet is, you may be in the top
echelon.
An extranet actually combines both the Internet and an intranet. It extends an intranet, or internal
network, to other users over the Internet. Most extranets can be accessed via a Web interface using a
Web browser. Since secure or confidential information is often accessible within an intranet,
extranets typically require authentication for users to access them.
Extranets are often used by companies that need to share selective information with other businesses
or individuals. For example, a supplier may use an extranet to provide inventory data to certain
clients, while not making the information available to the general public. The extranet may also
include a secure means of communication for the company and its clients, such as a support ticket
system or Web-based forum.
(29) COCOMO MODEL
Overview of COCOMO
The COCOMO cost estimation model is used by thousands of software project managers, and is
based on a study of hundreds of software projects. Unlike other cost estimation models, COCOMO is
an open model, so all of the details are published, including:
The underlying cost estimation equations
Every assumption made in the model (e.g. "the project will enjoy good management")
Every definition (e.g. the precise definition of the Product Design phase of a project)
The costs included in an estimate are explicitly stated (e.g. project managers are included,
secretaries aren't)
Because COCOMO is well defined, and because it doesn't rely upon proprietary estimation
algorithms, Costar offers these advantages to its users:
COCOMO estimates are more objective and repeatable than estimates made by methods
relying on proprietary models
COCOMO can be calibrated to reflect your software development environment, and to
produce more accurate estimates
Costar is a faithful implementation of the COCOMO model that is easy to use on small projects, and
yet powerful enough to plan and control large projects.
Typically, you'll start with only a rough description of the software system that you'll be developing,
and you'll use Costar to give you early estimates about the proper schedule and staffing levels. As
you refine your knowledge of the problem, and as you design more of the system, you can use Costar
to produce more and more refined estimates.
Costar allows you to define a software structure to meet your needs. Your initial estimate might be
made on the basis of a system containing 3,000 lines of code. Your second estimate might be more
refined so that you now understand that your system will consist of two subsystems (and you'll have
a more accurate idea about how many lines of code will be in each of the subsystems). Your next
estimate will continue the process -- you can use Costar to define the components of each subsystem.
Costar permits you to continue this process until you arrive at the level of detail that suits your needs.
One word of warning: It is so easy to use Costar to make software cost estimates that it's possible
to misuse it -- every Costar user should spend the time to learn the underlying COCOMO
assumptions and definitions from Software Engineering Economics and Software Cost Estimation
with COCOMO II.
Introduction to the COCOMO Model
The most fundamental calculation in the COCOMO model is the use of the Effort Equation to
estimate the number of Person-Months required to develop a project. Most of the other COCOMO
results, including the estimates for Requirements and Maintenance, are derived from this quantity.
Source Lines of Code
The COCOMO calculations are based on your estimates of a project's size in Source Lines of Code
(SLOC). SLOC is defined such that:
Only Source lines that are DELIVERED as part of the product are included -- test drivers and
other support software are excluded
SOURCE lines are created by the project staff -- code created by applications generators is
excluded
One SLOC is one logical line of code
Declarations are counted as SLOC
Comments are not counted as SLOC
The original COCOMO 81 model was defined in terms of Delivered Source Instructions, which are
very similar to SLOC. The major difference between DSI and SLOC is that a single Source Line of
Code may be several physical lines. For example, an "if-then-else" statement would be counted as
one SLOC, but might be counted as several DSI.
The Scale Drivers
In the COCOMO II model, some of the most important factors contributing to a project's duration
and cost are the Scale Drivers. You set each Scale Driver to describe your project; these Scale
Drivers determine the exponent used in the Effort Equation.
The 5 Scale Drivers are:
Precedentedness
Development Flexibility
Architecture / Risk Resolution
Team Cohesion
Process Maturity
Cost Drivers
COCOMO II has 17 cost drivers - you assess your project, development environment, and team to
set each cost driver. The cost drivers are multiplicative factors that determine the effort required to
complete your software project. For example, if your project will develop software that controls an
airplane's flight, you would set the Required Software Reliability (RELY) cost driver to Very High.
That rating corresponds to an effort multiplier of 1.26, meaning that your project will require 26%
more effort than a typical software project.
COCOMO II defines each of the cost drivers, and the Effort Multiplier associated with each rating.
Check the Costar help for details about the definitions and how to set the cost drivers.
COCOMO II Effort Equation
The COCOMO II model makes its estimates of required effort (measured in Person-Months, PM)
based primarily on your estimate of the software project's size (as measured in thousands of SLOC,
KSLOC):
Effort = 2.94 * EAF * (KSLOC)^E
Where
EAF is the Effort Adjustment Factor derived from the Cost Drivers
E is an exponent derived from the five Scale Drivers
As an example, a project with all Nominal Cost Drivers and Scale Drivers would have an EAF of
1.00 and exponent, E, of 1.0997. Assuming that the project is projected to consist of 8,000 source
lines of code, COCOMO II estimates that 28.9 Person-Months of effort is required to complete it:
Effort = 2.94 * (1.0) * (8)^1.0997 = 28.9 Person-Months
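As a quick illustration, the Effort Equation can be sketched in a few lines of Python. The coefficient 2.94 and the example exponent E = 1.0997 are the figures quoted above; the function and parameter names are my own:

```python
# Sketch of the COCOMO II Effort Equation: Effort = 2.94 * EAF * (KSLOC)^E
def cocomo_effort(ksloc, eaf=1.0, e=1.0997):
    """Estimated effort in Person-Months for a project of
    `ksloc` thousand source lines of code."""
    return 2.94 * eaf * ksloc ** e

# 8,000 SLOC with all Nominal Cost Drivers and Scale Drivers (EAF = 1.00):
print(round(cocomo_effort(8), 1))  # 28.9 Person-Months
```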
Effort Adjustment Factor
The Effort Adjustment Factor in the effort equation is simply the product of the effort multipliers
corresponding to each of the cost drivers for your project.
For example, if your project is rated Very High for Complexity (effort multiplier of 1.34), and Low
for Language & Tools Experience (effort multiplier of 1.09), and all of the other cost drivers are
rated to be Nominal (effort multiplier of 1.00), the EAF is the product of 1.34 and 1.09.
Effort Adjustment Factor = EAF = 1.34 * 1.09 = 1.46
Effort = 2.94 * (1.46) * (8)^1.0997 = 42.3 Person-Months
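A minimal sketch of this EAF calculation, using the multipliers from the example above (driver names abbreviated per COCOMO II conventions; the dictionary structure is my own):

```python
# EAF is simply the product of the effort multipliers for all cost drivers;
# drivers rated Nominal contribute a multiplier of 1.00 and are omitted.
multipliers = {"CPLX (Very High)": 1.34, "LTEX (Low)": 1.09}

eaf = 1.0
for m in multipliers.values():
    eaf *= m

effort = 2.94 * eaf * 8 ** 1.0997
print(round(eaf, 2), round(effort, 1))  # 1.46 and 42.3 Person-Months
```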
COCOMO II Schedule Equation
The COCOMO II schedule equation predicts the number of months required to complete your
software project. The duration of a project is based on the effort predicted by the effort equation:
Duration = 3.67 * (Effort)^SE
Where
Effort is the effort from the COCOMO II effort equation
SE is the schedule equation exponent derived from the five Scale Drivers
Continuing the example, and substituting the exponent of 0.3179 that is calculated from the scale
drivers, yields an estimate of just over a year, and an average staffing of between 3 and 4 people:
Duration = 3.67 * (42.3)^0.3179 = 12.1 months
Average staffing = (42.3 Person-Months) / (12.1 Months) = 3.5 people
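The schedule calculation above can be sketched the same way (the exponent 0.3179 is the example value from the text; names are my own):

```python
# Sketch of the COCOMO II Schedule Equation: Duration = 3.67 * (Effort)^SE
def cocomo_duration(effort_pm, se=0.3179):
    """Project duration in months, given effort in Person-Months."""
    return 3.67 * effort_pm ** se

months = cocomo_duration(42.3)
print(round(months, 1))         # 12.1 months
print(round(42.3 / months, 1))  # average staffing of ~3.5 people
```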
The SCED Cost Driver
The COCOMO cost driver for Required Development Schedule (SCED) is unique, and requires a
special explanation.
The SCED cost driver is used to account for the observation that a project developed on an
accelerated schedule will require more effort than a project developed on its optimum schedule. A
SCED rating of Very Low corresponds to an Effort Multiplier of 1.43 (in the COCOMO II.2000
model) and means that you intend to finish your project in 75% of the optimum schedule (as
determined by a previous COCOMO estimate). Continuing the example used earlier, but assuming
that SCED has a rating of Very Low, COCOMO produces these estimates:
Duration = 75% * 12.1 Months = 9.1 Months
Effort Adjustment Factor = EAF = 1.34 * 1.09 * 1.43 = 2.09
Effort = 2.94 * (2.09) * (8)^1.0997 = 60.4 Person-Months
Average staffing = (60.4 Person-Months) / (9.1 Months) = 6.7 people
Notice that the calculation of duration isn't based directly on the effort (number of Person-Months);
instead it's based on the schedule that would have been required for the project assuming it had
been developed on the nominal schedule. Remember that the SCED cost driver means "accelerated
from the nominal schedule".
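The SCED example can be reproduced in a few lines of Python (all figures are the ones quoted above; the 12.1-month nominal schedule comes from the earlier unaccelerated estimate):

```python
# SCED (Very Low) example: effort picks up the 1.43 multiplier,
# while duration shrinks to 75% of the nominal 12.1-month schedule.
eaf = 1.34 * 1.09 * 1.43            # CPLX * LTEX * SCED = ~2.09
effort = 2.94 * eaf * 8 ** 1.0997   # ~60.4 Person-Months
duration = 0.75 * 12.1              # ~9.1 months
print(round(effort / duration, 1))  # average staffing of ~6.7 people
```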
The Costar command Constraints | Constrain Project displays a dialog box that lets you trade off
duration vs. effort (SCED is set for you automatically). You can use the dialog box to constrain your
project to have a fixed duration, or a fixed cost.
(30) Firmware
Firmware is a software program or set of instructions programmed on a hardware device. It provides
the necessary instructions for how the device communicates with the other computer hardware. But
how can software be programmed onto hardware? Good question. Firmware is typically stored in the
flash ROM of a hardware device. While ROM is "read-only memory," flash ROM can be erased and
rewritten because it is actually a type of flash memory.
Firmware can be thought of as "semi-permanent" since it remains the same unless it is updated by a
firmware updater. You may need to update the firmware of certain devices, such as hard drives and
video cards in order for them to work with a new operating system. CD and DVD drive
manufacturers often make firmware updates available that allow the drives to read faster media.
Sometimes manufacturers release firmware updates that simply make their devices work more
efficiently.
(31) Computer Graphics
A graphic is an image or visual representation of an object. Therefore, computer graphics are simply
images displayed on a computer screen. Graphics are often contrasted with text, which is composed
of characters, such as numbers and letters, rather than images.
Computer graphics can be either two or three-dimensional. Early computers only supported 2D
monochrome graphics, meaning they were black and white (or black and green, depending on the
monitor). Eventually, computers began to support color images. While the first machines only
supported 16 or 256 colors, most computers can now display graphics in millions of colors.
2D graphics come in two flavors: raster and vector. Raster graphics are the most common and are
used for digital photos, Web graphics, icons, and other types of images. They are composed of a
simple grid of pixels, which can each be a different color. Vector graphics, on the other hand, are
made up of paths, which may be lines, shapes, letters, or other scalable objects. They are often used
for creating logos, signs, and other types of drawings. Unlike raster graphics, vector graphics can be
scaled to a larger size without losing quality.
3D graphics started to become popular in the 1990s, along with 3D rendering software such as CAD
and 3D animation programs. By the year 2000, many video games had begun incorporating 3D
graphics, since computers had enough processing power to support them. Most computers now
come with a 3D video card that handles all the 3D processing. This allows even basic home systems
to support advanced 3D games and applications.
(32) Software Testing
Software testing is any activity aimed at evaluating an attribute or capability of a program or system
and determining that it meets its required results. Although crucial to software quality and widely
deployed by programmers and testers, software testing still remains an art, due to limited
understanding of the principles of software. The difficulty in software testing stems from the
complexity of software: we cannot completely test a program with moderate complexity. Testing is
more than just debugging. The purpose of testing can be quality assurance, verification and
validation, or reliability estimation. Testing can be used as a generic metric as well. Correctness
testing and reliability testing are two major areas of testing. Software testing is a trade-off between
budget, time and quality.
Software Testing Types:
Black box testing - Internal system design is not considered in this type of testing. Tests are based
on requirements and functionality.
White box testing - This testing is based on knowledge of the internal logic of an application's
code. Also known as Glass box testing. Internal software and code workings should be known for
this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.
Unit testing - Testing of individual software components or modules. Typically done by the
programmer and not by testers, as it requires detailed knowledge of the internal program design and
code. May require developing test driver modules or test harnesses.
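As a small illustration of the idea, here is a minimal unit test using Python's built-in unittest module. The `add` function is a hypothetical stand-in for a real component under test:

```python
import unittest

def add(a, b):
    """The 'unit' under test - in a real project this would live in its own module."""
    return a + b

class TestAdd(unittest.TestCase):
    # Each test method exercises the unit in isolation with known inputs.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)
```

Running `python -m unittest` discovers and executes these tests; exercising one component in isolation like this is what distinguishes unit testing from the integration-level types described next.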
Incremental integration testing - Bottom-up approach for testing, i.e. continuous testing of an
application as new functionality is added. Application functionality and modules should be
independent enough to test separately. Done by programmers or by testers.
Integration testing - Testing of integrated modules to verify combined functionality after
integration. Modules are typically code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially relevant to client/server and
distributed systems.
Functional testing - This type of testing ignores the internal parts and focuses on whether the
output is as per requirements or not. Black-box type testing geared to the functional requirements
of an application.
System testing - Entire system is tested as per the requirements. Black-box type testing that is based
on overall requirements specifications, covers all combined parts of a system.
End-to-end testing - Similar to system testing; involves testing of a complete application
environment in a situation that mimics real-world use, such as interacting with a database, using
network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing - Testing to determine if a new software version is performing well enough to accept
it for a major testing effort. If the application crashes during initial use, then the system is not stable
enough for further testing and the build or application is assigned to be fixed.
Regression testing - Testing the application as a whole after modification of any module or
functionality. It is difficult to cover the whole system in regression testing, so automation tools are
typically used for these testing types.
Acceptance testing - Normally this type of testing is done to verify that the system meets the
customer-specified requirements. The user or customer does this testing to determine whether to
accept the application.
Load testing - A performance test to check system behavior under load. Testing an application
under heavy loads, such as testing a web site under a range of loads to determine at what point the
system's response time degrades or fails.
Stress testing - The system is stressed beyond its specifications to check how and when it fails.
Performed under heavy load, such as putting in data beyond storage capacity, running complex
database queries, or feeding continuous input to the system or database.
Performance testing - Term often used interchangeably with stress and load testing. Checks
whether the system meets performance requirements. Different performance and load tools are used
for this.
Usability testing - User-friendliness check. Application flow is tested: can a new user understand
the application easily, and is proper help documented wherever the user gets stuck? Basically,
system navigation is checked in this testing.
Install/uninstall testing - Tested for full, partial, or upgrade install/uninstall processes on different
operating systems under different hardware and software environments.
Recovery testing - Testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.
Security testing - Can the system be penetrated by any hacking technique? Testing how well the
system protects against unauthorized internal or external access. Checks whether the system and
database are safe from external attacks.
Comparison testing - Comparison of product strengths and weaknesses with previous versions or
other similar products.
Alpha testing - An in-house virtual user environment can be created for this type of testing.
Testing is done at the end of development. Minor design changes may still be made as a result of
such testing.
Beta testing - Testing typically done by end-users or others. Final testing before releasing the
application for commercial purposes.
(33) Intranet
Contrary to popular belief, this is not simply a misspelling of "Internet." "Intra" means "internal" or
"within," so an Intranet is an internal or private network that can only be accessed within the
confines of a company, university, or organization. "Inter" means "between or among," hence the
difference between the Internet and an Intranet.
Up until the last few years, most corporations used local networks composed of expensive
proprietary hardware and software for their internal communications.
(34) Sockets and Socket Programming
A socket is one of the most fundamental technologies of computer networking. Sockets allow
applications to communicate using standard mechanisms built into network hardware and operating
systems. Although network software may seem to be a relatively new "Web" phenomenon, socket
technology actually has been emp