

Parallel Computing

DCS 860A Topics in Emerging Computer Technologies

DPS 2016, Fall 2014
Dr. Ron Frank & Dr. Tappert

By: Team 1 – DPS 2016 (Leigh Anne Clevenger, Kevin Khan, Mantie Reid, Javid Maghsoudi, Hugh Eng)


Presentation Summary: Parallel Computing

• Introduction: Single Thread, Multi-Thread, Serial Computing, etc.

• Concepts: Software, Memory Architecture, Programming Models

• Operating Systems: Cluster, Beowulf, SMP, AMP, Embedded, HPC, SSI

• Graphics Processing Unit (GPU)

• Parallel Computing Future Outlook

• A Quick Video: Massively Parallel Computation at NASA Goddard

• Closing


Introduction

• Single Thread: processing of one command at a time. A thread is the smallest sequence of programmed instructions that can be managed independently by an operating system's scheduler.

• Multithreading: threads are a subset of a process, so a process can have multiple threads that share its resources. On a multiprocessor or multicore system the threads run concurrently, with each processor/core executing a separate thread (see the sketch after this slide).

• Serial computing: the execution of one instruction at a time. This is the type of computing that we are all familiar with.
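Below is a minimal POSIX-threads sketch in C of the multithreading idea above. It is illustrative rather than from the original slides: the thread count of 4 and the worker function are our choices. One process starts several threads that share its address space; on a multicore system the scheduler can run them on separate cores. Compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread runs this function; all threads share the process's memory. */
    void *worker(void *arg) {
        int id = *(int *)arg;
        printf("thread %d running inside the same process\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t threads[4];
        int ids[4];
        for (int i = 0; i < 4; i++) {
            ids[i] = i;
            pthread_create(&threads[i], NULL, worker, &ids[i]);
        }
        for (int i = 0; i < 4; i++)
            pthread_join(threads[i], NULL);  /* wait for each thread to finish */
        return 0;
    }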

Introduction – cont.

Parallel computing:
• The simultaneous use of multiple processors/cores to solve a problem.
• Problems are broken down into parts that can be solved concurrently.
• Each part is broken into a series of instructions.
• Each instruction can be executed on a different processor/core.
• There is a need for a control mechanism.
• Almost all computers made today are capable of parallel processing from a hardware point of view.
• Most supercomputers today are really clusters of hardware.


Why Parallel Computing?

We are at the limits of single-CPU computing in terms of performance. Parallel computing allows us to solve problems that don't fit on one CPU.

(An example: today's game consoles could not handle both instruction execution and graphics display processing with a single processor.)

Our ability to model real situations requires problems that look at complex, interrelated events occurring at the same time.

Where are we using parallel computing?
• Science and engineering: circuit design, molecular sciences, design of fighter planes, submarines, and other defense systems
• Industrial and commercial: oil exploration, medical imaging, pharmaceutical design, weather forecasting
• Search for Extraterrestrial Intelligence (SETI)
• Web search engines


Introduction – cont.

• Single Instruction, Single Data (SISD): The oldest type of computer, executing only one instruction stream on one data stream in any one clock cycle.

• Single Instruction, Multiple Data (SIMD): All processing units execute the same instruction, but each can work on a different data element (processor arrays, vector pipelines, and most graphics processing units). A sketch follows this slide.

• Multiple Instruction, Single Data (MISD): Each processing unit operates on the same data stream independently, using separate instruction streams (e.g., multiple cryptography algorithms attacking a single coded message).

• Multiple Instruction, Multiple Data (MIMD): Every processor may execute a different instruction stream and work on a different data stream (most supercomputers and networked parallel computer clusters).

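To make the SIMD category concrete, here is a minimal C sketch using x86 SSE intrinsics (our illustrative example; the arrays and their contents are assumptions, not slide content). A single add instruction operates on four data elements at once, which is exactly the "one instruction stream, multiple data" pattern.

    #include <immintrin.h>  /* x86 SSE intrinsics */
    #include <stdio.h>

    int main(void) {
        float a[4] = {1, 2, 3, 4};
        float b[4] = {10, 20, 30, 40};
        float c[4];

        __m128 va = _mm_loadu_ps(a);     /* load four floats at once */
        __m128 vb = _mm_loadu_ps(b);
        __m128 vc = _mm_add_ps(va, vb);  /* one instruction, four data elements */
        _mm_storeu_ps(c, vc);

        for (int i = 0; i < 4; i++)
            printf("%.0f ", c[i]);       /* prints: 11 22 33 44 */
        return 0;
    }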

Parallel Computing – Concepts & Software


Differences: Parallel Computing & Serial Computing:

Serial Computing: software has been written for serial computation:

• A problem is broken into a discrete series of instructions
• Instructions are executed sequentially, one after another
• Execution happens on a single processor
• Only one instruction may execute at any moment in time


Parallel Computing – Concepts & Software – Cont.


Differences: Parallel Computing & Serial Computing:

Parallel Computing: In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:

• A problem is broken into discrete parts that can be solved concurrently
• Each part is further broken down into a series of instructions
• Instructions from each part execute simultaneously on different processors
• An overall control/coordination mechanism is employed

(A minimal sketch of this decomposition follows this slide.)

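A minimal sketch of this decomposition in C with OpenMP (our illustrative example; the loop body is arbitrary). The loop is the problem, its iterations are the discrete parts solved concurrently, and the OpenMP runtime is the overall control/coordination mechanism. Compile with -fopenmp.

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    static double a[N];

    int main(void) {
        /* The runtime splits the iteration space into parts and hands one
           part to each thread; the implicit barrier at the end of the loop
           is the coordination mechanism. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = i * 2.0;

        printf("computed %d elements with up to %d threads\n",
               N, omp_get_max_threads());
        return 0;
    }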

Parallel Computing – Computers


Parallel Computers:

Virtually all stand-alone computers today are parallel from a hardware perspective:
• Multiple functional units (L1 cache, L2 cache, branch, prefetch, decode, floating point, graphics processing (GPU), integer, etc.)
• Multiple execution units/cores
• Multiple hardware threads


Parallel Computing – Concepts & Terminology


von Neumann Architecture:

• Named after the Hungarian mathematician John von Neumann, who first authored the general requirements for an electronic computer in his 1945 papers.
• Also known as the "stored-program computer": both program instructions and data are kept in electronic memory. This differs from earlier computers, which were programmed through "hard wiring."
• Since then, virtually all computers have followed this basic design, comprised of four main components:
  Memory
  Control Unit
  Arithmetic Logic Unit
  Input/Output


Parallel Computing – Concepts & Terminology


Flynn's Classical Taxonomy:

• There are different ways to classify parallel computers.
• One widely used classification, Flynn's taxonomy, distinguishes multi-processor computer architectures along two independent dimensions: Instruction Stream and Data Stream. Each dimension can have only one of two possible states: Single or Multiple.
• Crossing the two dimensions yields the four possible classifications: SISD, SIMD, MISD, and MIMD.

An example of MISD:
• A type of parallel computer
• Each processing unit operates on the data independently via separate instruction streams
• Single Data: a single data stream is fed into multiple processing units


Parallel Computing – Memory Architectures


There are multiple ways of organizing memory architecture:

• Uniform Memory Access (UMA)
• Non-Uniform Memory Access (NUMA)
• Distributed Memory


Parallel Computing – Programming Models


Shared Memory Model (without threads)

• In this programming model, tasks share a common address space, which they read and write to asynchronously.

Threads Model

• This programming model is a type of shared memory programming.

Distributed Memory / Message Passing Model

• Tasks use their own local memory and exchange data by sending and receiving messages (see the MPI sketch after this slide).

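A minimal message-passing sketch in C with MPI (illustrative; the two-rank setup, tag, and payload are our choices). Each task has its own local memory, and data moves only through explicit sends and receives. Run with, e.g., mpirun -np 2 ./a.out.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;  /* illustrative payload */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            printf("rank 0 sent %d\n", value);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }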


Parallel Computing – Programming Models: Data Parallel Model

The data parallel model demonstrates the following characteristics:

• Address space is treated globally
• Most of the parallel work focuses on performing operations on a data set
• The data set is typically organized into a common structure, such as an array or cube



Parallel Computing – An Example

Array Processing: This example demonstrates calculations on 2-dimensional array elements, with the computation on each array element independent of the other array elements.

• The serial program calculates one element at a time, in sequential order. Serial code could be of the form shown below.
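The slide's original listing did not survive extraction; the following is a minimal serial sketch in C in the spirit of the classic LLNL tutorial example (the array size and the per-element function are our assumptions). Link with -lm.

    #include <math.h>

    #define N 1000

    double a[N][N];

    /* Serial version: one element at a time, in sequential order. */
    void compute_serial(void) {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                a[i][j] = sqrt((double)i * j);  /* illustrative per-element work */
    }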

Parallel Solution

• Array elements are distributed so that each processor owns a portion of the array (a subarray); a sketch follows.
• Independent calculation of array elements ensures there is no need for communication between tasks.
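A matching parallel sketch (ours; it uses OpenMP to divide the outer loop among threads, whereas a cluster solution would distribute the subarrays via message passing). It reuses a[][] and N from the serial sketch above; compile with -fopenmp. Because each element is independent, the threads never need to communicate.

    /* Parallel version: iterations of the outer loop are divided among
       threads, so each thread owns and computes its own subarray. */
    void compute_parallel(void) {
        #pragma omp parallel for
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                a[i][j] = sqrt((double)i * j);
    }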


Parallel Computing Operating Systems

Cluster
Each computer has a complete OS; computers can be combined using load-balancing servers for task parallelism, or can cooperate on the computation for a single program.

Beowulf
A cluster built of standard computers with a standard OS, controlled by a server using Parallel Virtual Machine (PVM) and Message Passing Interface (MPI). Client nodes do only what they are directed to do.

Symmetric Multi-Processing (SMP)
All processors are peers, sharing memory and the I/O bus.

Asymmetric Multi-Processing (AMP)
The operating system reserves processors for parallel use; cores may be specialized.

Embedded
Compilers and debuggers for parallel system-on-a-chip (SoC) software designs (e.g., Intel System Studio).


Cluster Operating Systems

High Performance Computing (HPC)
Synchronization of clusters and task scheduling. Example – Blue Gene from IBM.

Single-System Image (SSI)
Multiple computers look like one; Kerrighed provides global process management.


Beowulf Clusters

A low-cost solution for a parallel computing platform:
• Linux on desktops
• Scalable

Construct with:
• Knoppix bootable CDs
• openMosix
• Open Source Cluster Application Resources (OSCAR)

Examples:
• Linux-Windows hybrid HPC cluster
• Scientific simulations
• High-density computing: Green Destiny from Los Alamos National Laboratory


What is a GPU?

• A processor optimized for 2D/3D graphics, video, visual computing, and display.
• A highly parallel, highly multithreaded multiprocessor optimized for visual computing.
• It provides real-time visual interaction with computed objects via graphics, images, and video.
• It serves as both a programmable graphics processor and a scalable parallel computing platform.
• Heterogeneous systems combine a GPU with a CPU.


GPU Graphics Trends

• OpenGL – an open standard for 3D programming
• DirectX – a series of Microsoft multimedia programming interfaces
• New GPUs are developed every 12 to 18 months
• A new idea, visual computing: combines graphics processing and parallel computing
• Heterogeneous systems – CPU + GPU
• The GPU is evolving into a scalable parallel processor
• vGPU renders graphics on a server
• GPU computing: GPGPU and CUDA (a kernel sketch follows this slide)
• The GPU unifies graphics and computing

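To illustrate GPU computing with CUDA, here is a minimal vector-addition kernel sketch in CUDA C (the sizes, names, and use of unified memory are our illustrative choices, not slide content). Thousands of lightweight GPU threads each handle one array element, which is what "scalable parallel processor" means in practice. Build with nvcc.

    #include <cuda_runtime.h>
    #include <stdio.h>

    /* Each GPU thread adds one pair of elements. */
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;

        /* Unified memory keeps the sketch short: one allocation visible
           to both CPU and GPU. */
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);  /* launch many threads */
        cudaDeviceSynchronize();

        printf("c[0] = %.1f\n", c[0]);  /* 3.0 */
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }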

GPU vs. CPU

• GPUs contain a much larger number of dedicated ALUs than CPUs.

• GPUs also contain extensive support for the stream processing paradigm, which is related to SIMD (Single Instruction, Multiple Data) processing.

• Each processing unit on a GPU contains local memory that improves data manipulation and reduces fetch time.


GPU and CPU: The Differences

GPU:
• More transistors devoted to computation instead of caching or flow control
• Suitable for data-intensive computation
• High ratio of arithmetic to memory operations

[Diagram: a CPU die dominated by control logic and cache with a few ALUs, next to a GPU die consisting mostly of ALUs with little cache or control.]


Future Apps in a Concurrent World

Exciting applications in the mass computing market:
• Molecular dynamics simulation
• Video and audio coding and manipulation
• 3D imaging and visualization
• Consumer game physics
• Virtual reality products

Various granularities of parallelism exist, but:
• the programming model must not hinder parallel implementation
• data delivery needs careful management

Introducing domain-specific architecture: CUDA for GPGPU


Parallel Computing Future Outlook

Large parallel supercomputers, referred to as "exascale" computers, will have large data centers with hundreds of thousands of computers coordinating through distributed memory systems by the year 2020.

According to researchers, this type of computing will help conduct studies in genomics, new materials, simulations of fluid dynamics used for atmospheric analysis and weather forecasts, and even the human brain and its behavior.

"Scientific field after field has changed as a result of the availability of prodigious amounts of computation, whether we're talking what you can get on your desk or what the big labs have available. The shockwave won't be fully understood for decades to come."

Future capabilities such as photorealistic graphics, computational perception, and machine learning rely heavily on highly parallel algorithms. Enabling these capabilities will advance a new generation of experiences that expand the scope and efficiency of what users can accomplish in their digital lifestyles and workplaces. These experiences include more natural, immersive, and increasingly multi-sensory interactions that offer multi-dimensional richness and context awareness.


A Quick Video: Massively Parallel Computation at NASA Goddard

"Massively parallel" refers to the use of a large number of processors (or separate computers) to perform a set of coordinated computations in parallel.

https://www.youtube.com/watch?v=s7aBDrho-hA


References:

http://en.wikipedia.org/wiki/Computer_cluster#Parallel_programming
http://electronicdesign.com/digital-ics/symmetric-multiprocessing-vs-asymmetric-processing
http://goparallel.sourceforge.net/embedded-goes-parallel/

E. Betti, M. Cesati, R. Gioiosa, and F. Piermaria, “A global operating system for HPC clusters,” in IEEE International Conference on Cluster Computing and Workshops, 2009. CLUSTER ’09, 2009, pp. 1–10.

M. K. Gobbert, “Configuration and performance of a Beowulf cluster for large-scale scientific simulations,” Computing in Science Engineering, vol. 7, no. 2, pp. 14–26, Mar. 2005.

I. Castaños, I. Garrido, A. Garrido, and G. Sevillano, “Design and implementation of an easy-to-use automated system to build Beowulf parallel computing clusters,” in XXII International Symposium on Information, Communication and Automation Technologies, 2009. ICAT 2009, 2009, pp. 1–6.

M. S. Warren, E. H. Weigle, and W. Feng, “High-Density Computing: A 240-Processor Beowulf in One Cubic Meter,” in Supercomputing, ACM/IEEE 2002 Conference, 2002, pp. 61–61.

S. Liang, V. Holmes, and I. Kureshi, “Hybrid Computer Cluster with High Flexibility,” in 2012 IEEE International Conference on Cluster Computing Workshops (CLUSTER WORKSHOPS), 2012, pp. 128–135.

K. V. Sandhya and G. Raju, “Single System Image clustering using Kerrighed,” in 2011 Third International Conference on Advanced Computing (ICoAC), 2011, pp. 260–264.

W. Luo, A. Xie, and W. Ruan, “The Construction and Test for a Small Beowulf Parallel Computing System,” in 2010 Third International Symposium on Intelligent Information Technology and Security Informatics (IITSI), 2010, pp. 767–770.

Introduction to Parallel Programming Concepts, Research Computing and Cyberinfrastructure:
http://rcc.its.psu.edu/education/workshops/pages/parwork/IntroductiontoParallelProgrammingConcepts.pdf

http://searchsdn.techtarget.com/search/query?q=gpu
http://web.eecs.umich.edu/~qstout/parallel.html

Barney, Blaise. “Introduction to Parallel Computing.” Introduction to Parallel Computing. Lawrence Livermore National Laboratory, 14 July 2014. Web. 24 Sept. 2014. <https://computing.llnl.gov/tutorials/parallel_comp/>.

“Multithreaded Programming Guide,” SunSoft, Sun Microsystems, Inc., 1994. <http://www4.ncsu.edu/~rhee/clas/csc495j/MultithreadedProgrammingGuide_Solaris24.pdf>
