Lecture 1

Parallel Computing: An Overview


Transcript of Lecture 1

Page 1: Lecture 1

Parallel Computing

An Overview

Page 2: Lecture 1

Teaching Parallel Computing to Undergraduates

Shardhanand, Assistant Professor

Department of Computer Science

[email protected]

Page 3: Lecture 1

Time Distribution

• Theory (3/4): theoretical aspects of the course, assessed through the Mid Term Exam
• Practical (1/4): practical approach to the course, covered in the Labs

Page 4: Lecture 1

Performance Assessment Criteria

Marks Distribution (total 100 points)

• Lectures: 80 points
  – Course Work (10): Assignment 1 (5), Assignment 2 (5)
  – Exams (70): Mid Term (10), Final (60)
• Labs: 20 points
  – Weekly Labs (10): Lab 1, Lab 2, …, Lab n
  – Final Lab (10)

Page 5: Lecture 1

Pre-Mid Term

Course Coverage

Page 6: Lecture 1

• Overview
  – What is Parallel Computing
  – Why use Parallel Computing
• Concepts and Terminology
  – Von Neumann Architecture
  – Flynn's Classical Taxonomy: SISD, SIMD, MISD, MIMD
• Memory Architecture
  – Shared Memory, Distributed Memory, Hybrid
• Programming Models
  – Shared Memory, Thread Model, MPI Model
• Designing Parallel Programs
  – Partitioning, Communication, Synchronization, Data Dependencies, Load Balancing

Page 7: Lecture 1

Post-Mid Term

Course Coverage

Page 8: Lecture 1

• MPI

• Distributed Systems

• Clusters

• Grid Computing

• Fault tolerance

• Prototypes of Parallel Computers

• Parallel Algorithms and Software

Page 9: Lecture 1

Reading Resources

• Text Book

• Reference Books
  – Specific to the course
  – General to the topic

• Internet Sources

Page 10: Lecture 1

Text Book

• Advanced Computer Architecture: Parallelism, Scalability, Programmability
  By Kai Hwang – McGraw-Hill, Inc.

Page 11: Lecture 1

Reference Books

– Scalable Parallel Computing
  By Kai Hwang & Z. Xu
– Grid Computing: Making The Global Infrastructure a Reality
  By F. Berman, G. Fox, and T. Hey – John Wiley and Sons, 2003

Page 12: Lecture 1

Internet Sources

• http://www.ssuet.edu.pk/~mfarooqui/ParComp
• http://www.top500.org

• http://www.cs.rit.edu/~ncs/parallel.html#books

Page 13: Lecture 1

How to get what we discuss?

• Online Access: http://www.ssuet.edu.pk/~mfarooqui/ParComp
• Soft Copy: http://www.ssuet.edu.pk/~mfarooqui/ParComp
• Hard Copy: will not be provided

Page 14: Lecture 1

Codes of Conduct

• Attend the class and labs regularly; attendance is strictly recorded.

• No relaxation, compensation or adjustment in your attendance.

• Be in uniform (at least in the class).
• Preserve the sanctity of the class, teachers, department and the University.
• Help us in serving you for a better future.

Page 15: Lecture 1

Introduction

• Parallel Computing – Real Life Scenario

• Cost Effectiveness of Parallel Processors

• What is parallel computing?

• Why do parallel computing?

• Types of parallel computing

• What are some limits of parallel computing?

Page 16: Lecture 1

Parallel Computing – Real Life Scenario

• Stacking or reshelving of a set of library books. Assume books are organized into shelves and shelves are grouped into bays.

A single worker can only do this at a certain rate. We can speed it up by employing multiple workers. What is the best strategy?

1. The simple way is to divide the total books equally among the workers. Each worker stacks the books one at a time, but must walk all over the library (see the sketch after this list).

2. An alternative is to assign a fixed, disjoint set of bays to each worker. Each worker is assigned an equal number of books arbitrarily; workers stack books in their own bays or pass each book to the worker responsible for the bay it belongs to.
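
As a rough illustration of strategy 1, here is a minimal sketch in C, assuming POSIX threads. The worker count, the book count, and the stack_book() routine are hypothetical placeholders, not part of the lecture.

    /* Strategy 1: divide the books equally among the workers.
     * Each worker shelves a contiguous block of books. */
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_WORKERS 4
    #define NUM_BOOKS   1000

    static void stack_book(int book) {
        (void)book;   /* placeholder for shelving one book */
    }

    static void *worker(void *arg) {
        int id    = *(int *)arg;
        int chunk = NUM_BOOKS / NUM_WORKERS;
        int start = id * chunk;
        /* the last worker also takes any remainder */
        int end   = (id == NUM_WORKERS - 1) ? NUM_BOOKS : start + chunk;
        for (int b = start; b < end; b++)
            stack_book(b);
        printf("worker %d stacked books %d..%d\n", id, start, end - 1);
        return NULL;
    }

    int main(void) {
        pthread_t threads[NUM_WORKERS];
        int ids[NUM_WORKERS];
        for (int i = 0; i < NUM_WORKERS; i++) {
            ids[i] = i;
            pthread_create(&threads[i], NULL, worker, &ids[i]);
        }
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }

Build with a pthreads-capable compiler, e.g. cc -pthread. Note that strategy 1 needs no communication between workers, which is exactly why each worker may have to walk across the whole library.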

Page 17: Lecture 1

Parallel Computing – Real Life Scenario

• Parallel processing allows us to accomplish a task faster by dividing the work into a set of subtasks assigned to multiple workers.

• Assigning a set of books to workers is task partitioning. Passing books to one another is an example of communication between subtasks (see the sketch below).

• For some problems, assigning work to multiple workers might be more time consuming than doing it locally.

• Some problems may be completely serial, e.g. digging a post hole; these are poorly suited to parallel processing.

• Not all problems are equally amenable to parallel processing.
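
The communication in strategy 2 can be sketched with message passing. The following is a minimal illustration, assuming MPI (introduced after the mid term); the bay number, the owner rule, and the single passed book are all hypothetical simplifications.

    /* Strategy 2: each rank "owns" a disjoint set of bays.
     * A book that belongs to another rank's bay is passed to that rank. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int book_bay = 3;              /* bay this book belongs to (assumed) */
        int owner    = book_bay % size; /* rank responsible for that bay     */

        if (rank == 0 && owner != 0) {
            /* rank 0 found a book for someone else's bay: pass it on */
            MPI_Send(&book_bay, 1, MPI_INT, owner, 0, MPI_COMM_WORLD);
        } else if (rank == owner && owner != 0) {
            int b;
            MPI_Recv(&b, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank %d shelved a book in bay %d\n", rank, b);
        }

        MPI_Finalize();
        return 0;
    }

Run with, e.g., mpirun -np 4; rank 0 plays the worker who finds a book belonging to another worker's bay. The send/receive pair is the "passing of books" described above.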

Page 18: Lecture 1

Weather Modelling and Forecasting

Consider a region of 3000 × 3000 miles with a height of 11 miles. For modelling, partition it into segments of 0.1 × 0.1 × 0.1 cubic miles, giving ~10^11 segments.

Let's take a 2-day period, with the parameters computed every 30 minutes, and assume the computation takes 100 instructions per segment. A single update then takes 10^13 instructions, and the two days require ~10^15 instructions in total. For a serial computer executing 10^10 instructions/sec, this takes ~10^5 seconds, i.e. about 28 hours to predict the next 48 hours!

Now take 1000 processors, each capable of 10^8 instructions/sec. Each processor handles 10^8 segments, i.e. ~10^12 instructions over the 2 days. The calculation is done in about 3 hours!

Currently all major weather forecast centers (US, Europe, Asia) have supercomputers with 1000s of processors.
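
For reference, here is the slide's arithmetic restated in LaTeX, using only the assumptions given above:

    \begin{align*}
    \text{segments} &= \frac{3000 \times 3000 \times 11}{0.1^3} \approx 10^{11} \\
    \text{instructions per update} &= 10^{11} \times 100 = 10^{13} \\
    \text{updates in 48 h} &= 48 / 0.5 = 96 \approx 10^{2}
      \;\Rightarrow\; \text{total} \approx 10^{15} \\
    \text{serial time} &= 10^{15} / 10^{10} = 10^{5}\,\text{s} \approx 28\,\text{h} \\
    \text{parallel time} &= 10^{15} / (1000 \times 10^{8}) = 10^{4}\,\text{s} \approx 3\,\text{h}
    \end{align*}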

Page 19: Lecture 1

What is Parallel Computing?

• Traditionally, software has been written for serial computation:
  – To be executed by a single computer having a single Central Processing Unit (CPU);
  – Problems are solved by a series of instructions, executed one after the other by the CPU. Only one instruction may be executed at any moment in time.

• In the simplest sense, parallel computing is the simultaneous use of multiple computer resources to solve a computational problem.

• The compute resources can include:
  – A single computer with multiple processors;
  – An arbitrary number of computers connected by a network;
  – A combination of both.
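
As a minimal sketch of this definition in C (assuming OpenMP, a shared-memory model that appears later in the course outline), the first loop below runs serially while the second is divided among all available processors; the array size is an arbitrary placeholder.

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N];
        double sum = 0.0;

        /* serial: one CPU executes the iterations one after another */
        for (int i = 0; i < N; i++)
            a[i] = (double)i * 0.5;

        /* parallel: iterations are divided among the available
         * processors and executed simultaneously */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }

Compiled with an OpenMP-capable compiler (e.g. cc -fopenmp), the iterations of the second loop run simultaneously on however many CPUs the machine provides: the "simultaneous use of multiple compute resources" in miniature.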

Page 20: Lecture 1

What is Parallel Computing?

• The computational problem usually demonstrates characteristics such as the ability to be:
  – Broken apart into discrete pieces of work that can be solved simultaneously;
  – Executed as multiple program instructions at any moment in time;
  – Solved in less time with multiple compute resources than with a single compute resource.

• Parallel computing is an evolution of serial computing that attempts to emulate what has always been the state of affairs in the natural world

Page 21: Lecture 1

What is Parallel Computing?

• In the natural world, many complex, interrelated events happen at the same time, yet within a sequence. Some examples:
  – Planetary and galactic orbits
  – Weather and ocean patterns
  – Tectonic plate drift
  – Rush hour traffic in LA
  – Automobile assembly line
  – Daily operations within a business
  – Building a shopping mall
  – Ordering a hamburger at the drive-through

Page 22: Lecture 1

What is Parallel Computing?

• Traditionally, parallel computing has been considered to be "the high end of computing" and has been motivated by numerical simulations of complex systems and "Grand Challenge Problems" such as:
  – weather and climate
  – chemical and nuclear reactions
  – biological (human genome)
  – geological (seismic activity)
  – mechanical devices, from prosthetics to spacecraft
  – electronic circuits
  – manufacturing processes

Page 23: Lecture 1

What is Parallel Computing?

• Today, commercial applications are providing an equal or greater driving force in the development of faster computers. These applications require the processing of large amounts of data in sophisticated ways. Example applications include:
  – parallel databases, data mining
  – web search engines, web-based business services
  – computer-aided diagnosis in medicine
  – management of national and multi-national corporations
  – advanced graphics and virtual reality, particularly in the entertainment industry
  – networked video and multi-media technologies

• Ultimately, parallel computing is an attempt to maximize the infinite but seemingly scarce commodity called time.

Page 24: Lecture 1

Why Use Parallel Computing?

• Save time
• Solve larger problems
• Taking advantage of non-local resources - using available compute resources on a wide area network, or even the Internet, when local compute resources are scarce.
• Cost savings - using multiple "cheap" computing resources instead of paying for time on a supercomputer.
• Overcoming memory constraints - single computers have very finite memory resources. For large problems, using the memories of multiple computers may overcome this obstacle.

Page 25: Lecture 1

Why Use Parallel Computing?

• Both physical and practical reasons pose significant constraints on simply building ever-faster serial computers:

• Transmission speeds - the speed of a serial computer is directly dependent upon how fast data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements.

• Limits to miniaturization - processor technology is allowing an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.

• Economic limitations - it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.