Transcript of Lecture1

Page 1: Lecture1

CS-416 Parallel and Distributed Systems

Jawwad Shamsi

Page 2: Lecture1

Course Outline

• Parallel Computing Concepts
• Parallel Computing Architecture
• Algorithms
• Parallel Programming Environments

Page 3: Lecture1

Introduction

• Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
– To be run using multiple CPUs
– A problem is broken into discrete parts that can be solved concurrently
– Each part is further broken down into a series of instructions
– Instructions from each part execute simultaneously on different resources
(Source: llnl.gov)

Page 4: Lecture1

Types of Processes

• Sequential processes occur in a strict order; it is not possible to do the next step until the current one is completed.
• Parallel processes are those in which many events happen simultaneously.

Page 5: Lecture1

Need for Parallelism

• Huge, complex problems
– Supercomputers

• Hardware
– Use parallelization techniques

Page 6: Lecture1

Motivation

• Solve complex problems in a much shorter time
– Fast CPU
– Large memory
– High-speed interconnect
• The interconnect, or interconnection network, is made up of the wires and cables that define how the multiple processors of a parallel computer are connected to each other and to the memory units.

Page 7: Lecture1

Applications

Large data sets or large equations:
• Seismic operations
• Geological predictions
• Financial markets

Page 8: Lecture1

• Parallel computing: more than one computation at a time using more than one processor.
– If one processor can perform the arithmetic in time t,
– then ideally p processors can perform the arithmetic in time t/p.
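
Written as a formula (this only restates the ideal case above; t is the time on one processor, p the number of processors):

  T_p = \frac{t}{p}, \qquad S(p) = \frac{t}{T_p} = p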

Page 9: Lecture1

Parallel Programming Environments

• MPI
– Distributed Memory

• OpenMP
– Shared Memory

• Hybrid Model
• Threads
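
As a rough sketch of how these environments fit together (an illustrative example, not code from the lecture), the program below uses MPI for the distributed-memory layer and an OpenMP parallel region for the shared-memory layer, i.e. the hybrid model. The build and launch commands are typical but assumed, e.g. mpicc -fopenmp hello.c -o hello, then mpirun -np 4 ./hello.

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                /* MPI: distributed-memory processes */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    /* OpenMP: shared-memory threads inside each MPI process (hybrid model) */
    #pragma omp parallel
    {
        printf("process %d of %d, thread %d of %d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}

MPI processes do not share memory and must communicate by messages, while the OpenMP threads inside one process share that process's memory; using a plain pthreads/threads model instead of OpenMP would play the same shared-memory role.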

Page 12: Lecture1

How Much Parallelism?

• Decomposition: the process of partitioning a computer program into independent pieces that can be run simultaneously (in parallel).
– Data Parallelism
– Task Parallelism
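
A small illustration of the two styles (an assumed OpenMP example, not taken from the slides): in data parallelism the same loop body is applied to different slices of the data, while in task parallelism different, independent kinds of work run at the same time.

#include <stdio.h>
#include <omp.h>

#define N 1000

int main(void) {
    static double a[N], b[N];

    /* Data parallelism: one operation; the iterations (the data)
     * are divided among the threads. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        b[i] = 2.0 * a[i];

    /* Task parallelism: different pieces of work are given
     * to different threads. */
    #pragma omp parallel sections
    {
        #pragma omp section
        printf("section 1: summarize a[]\n");

        #pragma omp section
        printf("section 2: write b[] to a file\n");
    }
    return 0;
}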

Page 13: Lecture1

Data Parallelism

• Same code segment runs concurrently on each processor

• Each processor is assigned its own part of the data to work on
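
A minimal sketch of the same idea with MPI (an assumed example; the problem size N and the work done are arbitrary): every rank runs the same code, but the index range each rank touches is its own part of the data.

#include <stdio.h>
#include <mpi.h>

#define N 1024   /* assumed problem size, divisible by the number of processes */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* same code on every process; each one works only on its own slice */
    int chunk = N / size;
    int lo = rank * chunk;
    int hi = lo + chunk;

    double local = 0.0;
    for (int i = lo; i < hi; i++)
        local += (double)i;

    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of 0..%d = %.0f\n", N - 1, total);

    MPI_Finalize();
    return 0;
}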

Page 14: Lecture1

• SIMD: Single Instruction, Multiple Data
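
For a concrete picture of SIMD (an assumed example using x86 SSE intrinsics, not part of the lecture): the single add instruction below operates on four float elements at once.

#include <stdio.h>
#include <immintrin.h>

int main(void) {
    float a[4] = {1, 2, 3, 4};
    float b[4] = {10, 20, 30, 40};
    float c[4];

    __m128 va = _mm_loadu_ps(a);     /* load four floats into one register */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);  /* single instruction, four data elements */
    _mm_storeu_ps(c, vc);

    for (int i = 0; i < 4; i++)
        printf("%.0f ", c[i]);       /* prints: 11 22 33 44 */
    printf("\n");
    return 0;
}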

Page 15: Lecture1

Increasing Processor Speed

• Greater number of transistors
– Operations can be done in fewer clock cycles

• Increased clock speed
– More operations per unit time

• Example
– 8088/8086: 5 MHz, 29,000 transistors
– E6700 Core 2 Duo: 2.66 GHz, 291 million transistors

Page 16: Lecture1

Multicore

• A multi-core processor is one processor that contains two or more complete functional units. Such chips are now the focus of Intel and AMD. A multi-core chip is a form of SMP.

Page 17: Lecture1

Symmetric Multi-Processing

• SMP (Symmetric Multiprocessing) is where two or more processors have equal access to the same memory. The processors may or may not be on one chip.