8/3/2019 MISD1
MISD Architectures
Contents
2. Flynn's Taxonomy of Parallel Computers
2.1 Single Instruction, Single Data stream (SISD)
2.2 Single Instruction, Multiple Data streams (SIMD)
2.3 Multiple Instructions, Single Data stream (MISD)
2.4 Multiple Instruction, Multiple Data streams (MIMD)
2.4.1 Shared Memory Model
2.4.2 Distributed Memory
2. Flynn's Taxonomy of Parallel Computers
Flynn defined the taxonomy of parallel computers [Flynn, 1972] based on the number of instruction
streams and data streams.
- An instruction stream is a sequence of instructions followed from a single program counter.
- A data stream is an address in memory on which the instruction operates.
A control unit fetches instructions from a single program counter, decodes them, and issues them to the
processing element. The processing element is assumed to be a functional unit. Instruction and data
are both supplied from the memory.
The four classifications defined by Flynn are based upon the number of concurrent instruction (or
control) streams and data streams available in the architecture:
Figure 1. Flynn's Taxonomy
2.1 Single Instruction, Single Data stream (SISD)
SISD (single instruction, single data) refers to a computer architecture in which a single
processor, a uniprocessor, executes a single instruction stream to operate on data stored in a single
memory. Even though there is only one stream of instructions, parallelism between the instructions
from the stream can be exploited when the instructions are independent of one another. This
corresponds to the von Neumann architecture.
http://upload.wikimedia.org/wikipedia/commons/a/ae/SISD.svg
It is a type of sequential computer which exploits no parallelism in either the instruction or data
streams. A single control unit (CU) fetches a single instruction stream (IS) from memory. The CU then
generates appropriate control signals to direct a single processing element (PE) to operate on a single
data stream (DS), i.e. one operation at a time.
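The fetch-decode-execute cycle described above can be modeled with a short sketch. The three-instruction mini-ISA here is purely hypothetical, chosen only to show one control unit stepping through one instruction stream, one operation at a time:

```python
# SISD sketch: a single control unit steps through one instruction stream,
# applying one operation to one datum at a time (hypothetical mini-ISA).
program = [("LOAD", 5), ("ADD", 3), ("MUL", 2)]   # the single instruction stream

acc = 0                                  # the single processing element's register
for op, operand in program:              # program counter advances sequentially
    if op == "LOAD":
        acc = operand
    elif op == "ADD":
        acc += operand
    elif op == "MUL":
        acc *= operand

print(acc)   # -> 16
```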
2.2 Single Instruction, Multiple Data streams (SIMD)
The first use of SIMD instructions was in vector supercomputers of the early 1970s such as the CDC Star-
100 and the Texas Instruments ASC, which could operate on a vector of data with a single instruction.
Vector processing was especially popularized by Cray in the 1970s and 1980s. The first widely deployed
desktop SIMD was with Intel's MMX extensions to the x86 architecture in 1996, followed in 1999
by SSE, after IBM and Motorola added AltiVec to the POWER architecture. All of these developments
have been oriented toward support for real-time graphics, and are therefore oriented toward
processing in two, three, or four dimensions, usually with vector lengths of between two and sixteen
words, depending on data type and architecture.
http://en.wikipedia.org/wiki/File:SIMD.svg
SIMD is a parallel architecture in which a single instruction operates on multiple data. Examples of
SIMD architectures can be found in vector processors. As a simple illustration, consider multiplying a
scalar a by an array X. With SISD, we need to execute a loop where each iteration performs a
multiplication between a and one element of array X. With SIMD, the entire operation can be
performed with one scalar-vector multiply instruction without the use of a loop. SIMD is known for its
efficiency in terms of the instruction count needed to perform a computation task.
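The scalar-array illustration above can be sketched as follows. This is only a model of the lockstep semantics: the `simd_scale` helper is a stand-in, and real hardware would issue a single vector multiply instruction rather than a Python list operation:

```python
# SISD: one multiply per loop iteration (one instruction per datum).
a = 3
X = [1, 2, 4, 8]

result_sisd = []
for x in X:                       # each iteration issues one scalar multiply
    result_sisd.append(a * x)

# SIMD (conceptual): one scalar-vector multiply applied to every lane at once.
# On real hardware this is a single vector instruction; here one elementwise
# operation models all lanes updating "in one step".
def simd_scale(scalar, vector):
    return [scalar * v for v in vector]

result_simd = simd_scale(a, X)
print(result_sisd, result_simd)   # -> [3, 6, 12, 24] [3, 6, 12, 24]
```

Note the instruction-count contrast the text describes: the SISD version issues one multiply per element, while the SIMD version expresses the whole computation as one operation.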
One of the major advantages of SIMD systems is that they typically include only those instructions that
can be applied to all of the data in one operation. In other words, if the SIMD system works by loading
eight data points at once, an add operation applied to the data will happen to all eight values at
the same time. Although the same is true for any superscalar processor design, the level of parallelism
in a SIMD system is typically much higher.
The major drawback is that SIMD requires large register files, which increase power consumption and chip area.
2.3 Multiple Instructions, Single Data stream (MISD)
MISD (multiple instruction, single data) is an architecture in which multiple processing elements
execute from different instruction streams, and data is passed from one processing element to the next.
It is a type of parallel computing architecture where many functional units perform different operations
on the same data.
Pipeline architectures belong to this type, though a purist might say that the data is different after
processing by each stage in the pipeline. Fault-tolerant computers executing the same instructions
redundantly in order to detect and mask errors, in a manner known as task replication, may be
considered to belong to this type. Not many instances of this architecture exist, as MIMD and SIMD are
often more appropriate for common data parallel techniques. Specifically, they allow better scaling and
use of computational resources than MISD does.
http://en.wikipedia.org/wiki/File:MISD.svg
However, one prominent example of MISD in computing is the Space Shuttle flight control computers.
Another example of this machine is the systolic array, such as the CMU iWarp [Borkar et al., 1990]. All
the elements in this array are controlled by a global clock. On each cycle, an element will read a piece of
data from one of its neighbors, perform a simple operation (e.g. add the incoming element to a stored
value), and prepare a value to be written to a neighbor on the next step.
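The MISD idea of one data stream flowing through processing elements that each execute a different instruction can be sketched as below. The three stage operations are illustrative, not taken from any real machine:

```python
# MISD sketch: a single data stream passes through processing elements (PEs)
# that each apply a different operation, handing the result to the next PE.
stages = [
    lambda x: x + 1,    # PE0: add a stored value
    lambda x: x * 2,    # PE1: scale
    lambda x: x - 3,    # PE2: offset
]

def misd_pipeline(data_stream, stages):
    results = []
    for x in data_stream:          # each datum visits every PE in turn
        for stage in stages:
            x = stage(x)           # data passed from one PE to the next
        results.append(x)
    return results

print(misd_pipeline([1, 2, 3], stages))   # -> [1, 3, 5]
```

As the text notes, a purist would object that the data is different after each stage, which is exactly what happens here.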
2.4 Multiple Instruction, Multiple Data streams (MIMD)
MIMD (multiple instructions, multiple data) is a technique employed to achieve parallelism. Machines
using MIMD have a number of processors that function asynchronously and independently. At any time,
different processors may be executing different instructions on different pieces of data. MIMD
architectures may be used in a number of application areas such as computer-aided design/computer-
aided manufacturing, simulation, modeling, and as communication switches. MIMD machines can be of
either shared memory or distributed memory categories. Shared memory machines may be of the bus-
based, extended, or hierarchical type. Distributed memory machines may have
hypercube or mesh interconnection schemes.
http://en.wikipedia.org/wiki/File:MIMD.svg
2.4.1 Shared Memory Model
The processors are all connected to a "globally available" memory, by either
software or hardware means. The operating system usually maintains its memory coherence.
From a programmer's point of view, this memory model is better understood than the distributed
memory model. Another advantage is that memory coherence is managed by the operating system and
not by the written program. Two known disadvantages are that scalability beyond thirty-two processors is
difficult, and that the shared memory model is less flexible than the distributed memory model.
There are many examples of shared memory (multiprocessors): UMA (Uniform Memory Access), COMA
(Cache Only Memory Access) and NUMA (Non-Uniform Memory Access).
- Bus-based: MIMD machines with shared memory have processors which share a common, central memory.
In the simplest form, all processors are attached to a bus which connects them to memory.
- Hierarchical: MIMD machines with hierarchical shared memory use a hierarchy of buses to give processors
access to each other's memory. Processors on different boards may communicate through inter-nodal
buses. Buses support communication between boards. With this type of architecture, the
machine may support over a thousand processors.
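The shared-memory model can be sketched with threads updating one globally visible variable. The lock here stands in for the coherence machinery the text attributes to the hardware and operating system:

```python
# MIMD shared-memory sketch: several asynchronous workers update one globally
# visible counter. The lock models serialized access to shared memory.
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:             # protect the shared word from racing updates
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # -> 4000
```

This also shows the advantage the text mentions: the workers communicate implicitly through memory, with no explicit message passing in the program.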
2.4.2 Distributed Memory
In distributed memory MIMD machines, each processor has its own individual memory. No
processor has direct knowledge of any other processor's memory. For data to be shared, it must be
passed from one processor to another as a message. Since there is no shared memory, contention is not
as great a problem with these machines.
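The message-passing discipline just described can be sketched as two nodes with strictly private state. Queues stand in for the interconnection network here; a real distributed-memory machine would use something like MPI for the send and receive:

```python
# Distributed-memory sketch: nodes share no state and exchange data only as
# messages. The Queue models a point-to-point link in the interconnect.
from queue import Queue
from threading import Thread

def node_a(outbox):
    local = [1, 2, 3]               # private memory of node A
    outbox.put(sum(local))          # share data only by sending a message

def node_b(inbox, results):
    msg = inbox.get()               # receive; no direct access to A's memory
    results.append(msg * 10)

link = Queue()
results = []
ta = Thread(target=node_a, args=(link,))
tb = Thread(target=node_b, args=(link, results))
ta.start(); tb.start()
ta.join(); tb.join()

print(results)    # -> [60]
```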
Examples of distributed memory (multi-computers) are MPP (massively parallel processors) and COW
(clusters of workstations). The first is complex and expensive: many supercomputers coupled by
broadband networks. Examples: hypercube and mesh interconnections.
- Hypercube interconnection network: In an MIMD distributed memory machine with a hypercube system
interconnection network containing four processors, a processor and a memory module are placed at each
vertex of a square. The diameter of the system is the minimum number of steps it takes for one processor
to send a message to the processor that is the farthest away.
- Mesh interconnection network: In an MIMD distributed memory machine with a mesh interconnection
network, processors are placed in a two-dimensional grid. Each processor is connected to its four
immediate neighbors. Wraparound connections may be provided at the edges of the mesh.
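The hypercube diameter described above has a simple worked form: if nodes are labelled with binary addresses and each hop flips one address bit, the distance between two nodes is their Hamming distance, so a d-dimensional hypercube (2^d nodes) has diameter d. The four-processor square from the text is the d = 2 case:

```python
# Hypercube diameter sketch: a hop between neighbors flips one address bit,
# so node-to-node distance is the Hamming distance of the binary labels.
from itertools import product

def hamming(a, b):
    return bin(a ^ b).count("1")

def hypercube_diameter(num_nodes):
    nodes = range(num_nodes)
    return max(hamming(a, b) for a, b in product(nodes, nodes))

print(hypercube_diameter(4))    # -> 2 (the four-processor square in the text)
print(hypercube_diameter(8))    # -> 3 (a three-dimensional cube)
```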
References
- http://en.wikipedia.org/wiki/Flynn's_taxonomy
- https://computing.llnl.gov/tutorials/parallel_comp/#Flynn
- http://www.phy.ornl.gov/csep/ca/node11.html
(More pictures can be found at https://computing.llnl.gov/tutorials/parallel_comp/#Flynn)