Supercomputing and Science

An Introduction to High Performance Computing

Part I: Overview

Henry Neeman

Goals of These Workshops

- Introduce undergrads, grads, staff and faculty to supercomputing issues
- Provide a common language for discussing supercomputing issues when we meet to work on your research
- NOT: to teach everything you need to know about supercomputing – that can’t be done in a handful of hour-long workshops!

OSCER History

- Aug 2000: founding of the OU High Performance Computing interest group (OUHPC)
- Nov 2000: first meeting of OUHPC and OU Chief Information Officer Dennis Aebersold
- Feb 2001: meeting between OUHPC, the CIO and VP for Research Lee Williams; draft white paper released
- Apr 2001: Henry named Director of HPC for the Department of Information Technology
- July 2001: draft OSCER charter released

Friday August 31 2001

The OU Supercomputing Center for Education & Research is now open for business!

Thanks to everyone who helped make this happen.

Celebration 5:30 today at Brothers (Buchanan just north of Boyd).

What is Supercomputing?

- Supercomputing is the biggest, fastest computing right this minute.
- Likewise, a supercomputer is the biggest, fastest computer right this minute.
- So, the definition of supercomputing is constantly changing.
- The technical term for supercomputing is High Performance Computing (HPC).

What is HPC Used For?

- Numerical simulation of physical phenomena
- Data mining: finding nuggets of information in a vast sea of data
- Visualization: turning a vast sea of data into pictures that a scientist can understand
- … and lots of other stuff

What Is HPC Used For at OU?

Astronomy, Biochemistry, Chemical Engineering, Chemistry, Civil Engineering, Computer Science, Electrical Engineering, Industrial Engineering, Geography, Geophysics, Mathematics, Mechanical Engineering, Meteorology, Microbiology, Physics, Zoology

Note: some of these aren’t using HPC yet, but plan to.

HPC Issues

- The tyranny of the storage hierarchy
- Parallelism: do many things at the same time
  - Instruction-level parallelism: doing multiple operations at the same time (e.g., add, multiply, load and store simultaneously)
  - Multiprocessing: multiple CPUs working on different parts of a problem at the same time
    - Shared Memory Multiprocessing
    - Distributed Multiprocessing
    - Hybrid Multiprocessing

The Tyranny of the Storage Hierarchy

The Storage Hierarchy

From small, fast and expensive (top) to big, slow and cheap (bottom):

- Registers
- Cache memory
- Main memory (RAM)
- Hard disk
- Removable media (e.g., CDROM)
- Internet

Henry’s Laptop

Dell Inspiron 4000 [1]

- Pentium III, 700 MHz, w/256 KB L2 Cache
- 256 MB RAM
- 30 GB Hard Drive
- DVD/CD-RW Drive
- 10/100 Mbps Ethernet
- 56 Kbps Phone Modem

Typical Computer Hardware

- Central Processing Unit
- Primary storage
- Secondary storage
- Input devices
- Output devices

Central Processing Unit

Also called CPU or processor: the “brain”

Parts:
- Control Unit: figures out what to do next – e.g., whether to load data from memory, or to add two values together, or to store data into memory, or to decide which of two possible actions to perform (branching)
- Arithmetic/Logic Unit: performs calculations – e.g., adding, multiplying, checking whether two values are equal
- Registers: where data reside that are being used right now

Primary Storage

Main Memory
- Also called RAM (“Random Access Memory”)
- Where data reside when they’re being used by a program that’s currently running

Cache
- Small area of much faster memory
- Where data reside when they’ve been used recently and/or are about to be used

Primary storage is volatile: values in primary storage disappear when the power is turned off.

Secondary Storage

- Where data and programs reside that are going to be used in the future
- Secondary storage is non-volatile: values don’t disappear when the power is turned off.
- Examples: hard disk, CDROM, DVD, magnetic tape, Zip, Jaz
- Many are portable: you can pop out the CD/tape/Zip/floppy and take it with you

Input/Output

- Input devices – e.g., keyboard, mouse, touchpad, joystick, scanner
- Output devices – e.g., monitor, printer, speakers

Why Does Cache Matter?

The speed of data transfer between main memory and the CPU is much slower than the speed of calculating, so the CPU spends most of its time waiting for data to come in or go out.

[Diagram: CPU and main memory, with the link between them labeled “Bottleneck”.]

Why Have Cache?

Cache is (typically) the same speed as the CPU, so the CPU doesn’t have to wait nearly as long for stuff that’s already in cache: it can do more operations per second!

Henry’s Laptop, Again

Dell Inspiron 4000 [1]

- Pentium III, 700 MHz, w/256 KB L2 Cache
- 256 MB RAM
- 30 GB Hard Drive
- DVD/CD-RW Drive
- 10/100 Mbps Ethernet
- 56 Kbps Phone Modem

Storage Speed, Size, Cost (on Henry’s laptop)

                                   Speed (MB/sec) [peak]      Size (MB)         Cost ($/MB)
Registers (Pentium III 700 MHz)    5,340 [2] (700 MFLOPS*)    112 bytes** [7]   ???
Cache Memory (L2)                  11,200 [3]                 0.25              $400 [8]
Main Memory (100 MHz RAM)          800 [4]                    256               $1.17 [8]
Hard Drive                         100 [5]                    30,000            $0.009 [8]
CD-RW                              3.6 [6]                    unlimited         $0.0015 [9]
Ethernet (100 Mbps)                12                         unlimited         charged per month (typically)
Phone Modem (56 Kbps)              0.007                      unlimited         free (local call)

* MFLOPS: millions of floating point operations per second
** 8 32-bit integer registers, 8 80-bit floating point registers

Storage Use Strategies

- Register reuse: do a lot of work on the same data before working on new data.
- Cache reuse: the program is much more efficient if all of the data and instructions fit in cache; if not, try to use what’s in cache a lot before using anything that isn’t in cache.
- Data locality: try to access data that are near each other in memory before data that are far away (see the sketch below).
- I/O efficiency: do a bunch of I/O all at once rather than a little bit at a time; don’t mix calculations and I/O.
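
To make the data locality point concrete, here is a minimal C sketch (added here, not from the original slides; the function names are illustrative). Both functions compute the same sum over an n × n matrix stored in one block of memory, but the first walks it in the order C lays it out (row-major), reusing each cache line, while the second jumps n elements at a time and wastes most of each line it fetches.

#include <stdio.h>
#include <stdlib.h>

#define N 2000

/* Row-major traversal: consecutive iterations touch consecutive
   memory locations, so each cache line is fully reused. */
double sum_row_major(const double *a, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            sum += a[i * n + j];
    return sum;
}

/* Same arithmetic, but column-major traversal: consecutive iterations
   are n*sizeof(double) bytes apart, so most cache lines are fetched
   and then evicted after a single use. */
double sum_col_major(const double *a, int n) {
    double sum = 0.0;
    for (int j = 0; j < n; j++)
        for (int i = 0; i < n; i++)
            sum += a[i * n + j];
    return sum;
}

int main(void) {
    double *a = malloc((size_t)N * N * sizeof(double));
    for (int i = 0; i < N * N; i++) a[i] = 1.0;
    printf("%f %f\n", sum_row_major(a, N), sum_col_major(a, N));
    free(a);
    return 0;
}

On most machines the row-major version runs several times faster, even though the two loops do exactly the same arithmetic.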

Parallelism, Part I:

Instruction-Level Parallelism

DON’T PANIC!

Kinds of ILP

- Superscalar: perform multiple operations at the same time
- Pipeline: start performing an operation on one piece of data while finishing the same operation on another piece of data
- Superpipeline: perform multiple pipelined operations at the same time
- Vector: load a bunch of pieces of data into registers and operate on all of them

What’s an Instruction?

- Fetch a value from a specific address in main memory into a specific register
- Store a value from a specific register into a specific address in main memory
- Add (subtract, multiply, divide, square root, etc.) two specific registers together and put the result in a specific register
- Determine whether two registers both contain nonzero values (“AND”)
- … and so on

DON’T PANIC!

Scalar Operation

z = a * b + c * d

How would this statement be executed?

1. Load a into register R0
2. Load b into R1
3. Multiply R2 = R0 * R1
4. Load c into R3
5. Load d into R4
6. Multiply R5 = R3 * R4
7. Add R6 = R2 + R5
8. Store R6 into z

Does Order Matter?

z = a * b + c * d

First ordering:
1. Load a into R0
2. Load b into R1
3. Multiply R2 = R0 * R1
4. Load c into R3
5. Load d into R4
6. Multiply R5 = R3 * R4
7. Add R6 = R2 + R5
8. Store R6 into z

Second ordering:
1. Load d into R4
2. Load c into R3
3. Multiply R5 = R3 * R4
4. Load a into R0
5. Load b into R1
6. Multiply R2 = R0 * R1
7. Add R6 = R2 + R5
8. Store R6 into z

In the cases where order doesn’t matter, we say that the operations are independent of one another.

Superscalar Operation

z = a * b + c * d

1. Load a into R0 AND load b into R1
2. Multiply R2 = R0 * R1 AND load c into R3 AND load d into R4
3. Multiply R5 = R3 * R4
4. Add R6 = R2 + R5
5. Store R6 into z

So, we go from 8 operations down to 5.

Superscalar Loops

DO i = 1, n
  z(i) = a(i)*b(i) + c(i)*d(i)
END DO !! i = 1, n

Each of the iterations is completely independent of all of the other iterations; e.g.,
  z(1) = a(1)*b(1) + c(1)*d(1)
has nothing to do with
  z(2) = a(2)*b(2) + c(2)*d(2)

Operations that are independent of each other can be performed in parallel.

Superscalar Loops

for (i = 0; i < n; i++) {
  z[i] = a[i]*b[i] + c[i]*d[i];
} /* for i */

1. Load a[i] into R0 AND load b[i] into R1
2. Multiply R2 = R0 * R1 AND load c[i] into R3 AND load d[i] into R4
3. Multiply R5 = R3 * R4 AND load a[i+1] into R0 AND load b[i+1] into R1
4. Add R6 = R2 + R5 AND load c[i+1] into R3 AND load d[i+1] into R4
5. Store R6 into z[i] AND multiply R2 = R0 * R1
6. etc., etc., etc.

Example: Sun UltraSPARC-III

4-way Superscalar: can execute up to 4 operations at the same time [10]

- 2 integer, memory and/or branch:
  - Up to 2 arithmetic or logical operations, and/or
  - 1 memory access (load or store), and/or
  - 1 branch
- 2 floating point (e.g., add, multiply)

DON’T PANIC!

Pipelining

Pipelining is like an assembly line or a bucket brigade.

- An operation consists of multiple stages.
- After a set of operands complete a particular stage, they move into the next stage.
- Then, another set of operands can move into the stage that was just abandoned.

Pipelining Example

[Diagram: four loop iterations (i = 1 through i = 4) flowing through the five pipeline stages – Instruction Fetch, Instruction Decode, Operand Fetch, Instruction Execution, Result Writeback – over cycles t = 0 through t = 7, each iteration entering the pipeline one cycle behind the previous one.]

If each stage takes, say, one CPU cycle, then once the loop gets going, each iteration of the loop only increases the total time by one cycle. So a loop of length 1000 takes only 1004 cycles. [11]
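
A quick check of that arithmetic (added here, not on the original slide), assuming the 5-stage pipeline above: the first iteration has to pass through all 5 stages, and every later iteration completes one cycle after the one before it, so

  total cycles = stages + (iterations − 1) = 5 + (1000 − 1) = 1004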

Multiply Is Better Than Divide

In most (maybe all) CPU types, adds and subtracts execute very quickly. So do multiplies.

But divides take much longer to execute, typically 5 to 10 times longer than multiplies.

More complicated operations, like square root, exponentials, trigonometric functions and so on, take even longer.

Also, on some CPU types, divides and other complicated operations aren’t pipelined.
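
One common way to act on this (a minimal sketch, not from the original slides; the function names are illustrative): when a loop divides many values by the same number, compute the reciprocal once outside the loop and multiply inside it.

#include <stddef.h>

/* Straightforward version: one divide per element. */
void scale_div(double *x, size_t n, double d) {
    for (size_t i = 0; i < n; i++)
        x[i] = x[i] / d;
}

/* Faster version: one divide total, then cheap (and pipelineable)
   multiplies inside the loop. Results may differ in the last bit or
   two, since x*(1/d) is not bit-for-bit identical to x/d. */
void scale_recip(double *x, size_t n, double d) {
    double r = 1.0 / d;
    for (size_t i = 0; i < n; i++)
        x[i] = x[i] * r;
}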

Superpipelining

Superpipelining is a combination of superscalar and pipelining.

So, a superpipeline is a collection of multiple pipelines that can operate simultaneously. In other words, several different operations can execute simultaneously, and each of these operations can be broken into stages, each of which is filled all the time.

So you can get multiple operations per CPU cycle. For example, a Compaq Alpha 21264 can have up to 80 different operations running at the same time. [12]

DON’T PANIC!

Why You Shouldn’t Panic

In general, the compiler and the CPU will do most of the heavy lifting for instruction-level parallelism.

BUT: you need to be aware of ILP, because how your code is structured affects how much ILP the compiler and the CPU can give you.
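
As one concrete illustration (a sketch added here, not from the slides; the function names are illustrative): summing an array with a single accumulator creates one long chain of dependent adds, while using several independent accumulators gives the compiler and CPU operations they can overlap. (Reordering the adds can change floating point rounding slightly, and some compilers will make this transformation themselves at high optimization levels.)

#include <stddef.h>

/* One accumulator: every add depends on the previous add, so the
   loop's speed is limited by the latency of a single add chain. */
double sum_serial(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Four independent accumulators: the four add chains don't depend on
   one another, so a superscalar/pipelined CPU can overlap them. */
double sum_ilp(const double *a, size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i;
    for (i = 0; i + 3 < n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)   /* leftover elements */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}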

Parallelism, Part II:

Multiprocessing

The Jigsaw Puzzle Analogy

Serial Computing

Suppose I want to do a jigsaw puzzle that has, say, a thousand pieces.

We can imagine that it’ll take me a certain amount of time. Let’s say that I can put the puzzle together in an hour.

Shared Memory Parallelism

If Dan sits across the table from me, then he can work on his half of the puzzle and I can work on mine. Once in a while, we’ll both reach into the pile of pieces at the same time (we’ll contend for the same resource), which will cause a little bit of slowdown. And from time to time we’ll have to work together (communicate) at the interface between his half and mine. But the speedup will be nearly 2-to-1: we might take 35 minutes instead of the ideal 30.
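
In code, this one-table, one-shared-pile picture corresponds to shared memory parallelism, for example with OpenMP threads. A minimal sketch (added here, not from the original slides): each thread takes a chunk of the loop, but all threads read and write the same arrays in the same memory.

#include <stdio.h>

#define N 1000000

int main(void) {
    static double z[N], a[N], b[N], c[N], d[N];
    for (int i = 0; i < N; i++) { a[i] = b[i] = c[i] = d[i] = 1.0; }

    /* Each thread gets a chunk of the iterations; all threads share
       the same arrays in the same memory. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        z[i] = a[i]*b[i] + c[i]*d[i];

    printf("z[0] = %f\n", z[0]);
    return 0;
}

Built with an OpenMP-capable compiler (e.g., gcc -fopenmp), the iterations are divided among threads; without OpenMP the pragma is ignored and the loop simply runs serially.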

The More the Merrier?

Now let’s put Loretta and May on the other two sides of the table. We can each work on a piece of the puzzle, but there’ll be a lot more contention for the shared resource (the pile of puzzle pieces) and a lot more communication at the interfaces. So we’ll get noticeably less than a 4-to-1 speedup, but we’ll still have an improvement, maybe something like 3-to-1: we can get it done in 20 minutes instead of an hour.

Diminishing Returns

If we now put Courtney and Isabella and James and Nilesh on the corners of the table, there’s going to be a whole lot of contention for the shared resource, and a lot of communication at the many interfaces. So the speedup we get will be much less than we’d like; we’ll be lucky to get 5-to-1.

So we can see that adding more and more workers onto a shared resource eventually yields diminishing returns.

Distributed Parallelism

Now let’s try something a little different. Let’s set up two tables, and let’s put me at one of them and Dan at the other. Let’s put half of the puzzle pieces on my table and the other half of the pieces on Dan’s. Now we can work completely independently, without any contention for the shared resource. BUT, the cost of communicating is MUCH higher (we have to get up from our tables and meet), and we need the ability to split up the puzzle pieces reasonably easily (domain decomposition), which may be hard for some puzzles.
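
In code, the two-tables picture corresponds to distributed parallelism, typically written with message passing such as MPI: each process holds only its own piece of the data (domain decomposition), and any sharing happens through explicit messages. A minimal sketch (added here, not from the original slides):

#include <stdio.h>
#include <mpi.h>

#define N 1000000

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Domain decomposition: each process owns only its own slice of
       the problem, in its own local memory. */
    int chunk = N / size;
    double local_sum = 0.0;
    for (int i = 0; i < chunk; i++)
        local_sum += 1.0;             /* stand-in for real local work */

    /* Communication is explicit: combine the partial results. */
    double total;
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %f (from %d processes)\n", total, size);
    MPI_Finalize();
    return 0;
}

Run as several processes (e.g., mpirun -np 4 ./a.out), each process computes on its own slice, and only the final MPI_Reduce involves communication.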

More Distributed Processors

It’s a lot easier to add more processors in distributed parallelism. But, you always have to be aware of the need to decompose the problem and to communicate between the processors. Also, as you add more processors, it may be harder to load balance the amount of work that each processor gets.
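
Simple block load balancing often comes down to a little arithmetic like the following sketch (added here, not from the slides; the function name is illustrative): split n work items over p workers so that the leftover n mod p items are spread one per worker instead of all landing on the last one.

#include <stdio.h>

/* Split n items over p workers as evenly as possible: the first
   (n % p) workers get one extra item, so no worker ever has more
   than one item above the minimum. */
void block_range(int n, int p, int rank, int *start, int *count) {
    int base = n / p;
    int extra = n % p;
    *count = base + (rank < extra ? 1 : 0);
    *start = rank * base + (rank < extra ? rank : extra);
}

int main(void) {
    int start, count;
    for (int rank = 0; rank < 4; rank++) {
        block_range(10, 4, rank, &start, &count);
        printf("worker %d: items %d..%d\n", rank, start, start + count - 1);
    }
    return 0;
}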

Hybrid Parallelism

Hybrid Parallelism is Good

- Some HPC platforms don’t support shared memory parallelism, or not very well. When you run on those machines, you can turn the code’s shared memory parallelism system off.
- Some HPC platforms don’t support distributed parallelism, or not very well. When you run on those machines, you can turn the code’s distributed parallelism system off.
- Some support both kinds well.
- So, when you want to use the newest, fastest supercomputer, you can target what it does well without having to rewrite your code.
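
A common shape for such hybrid codes (a sketch added here, not from the original slides): MPI splits the problem across nodes and OpenMP splits each node’s share across its CPUs, so either layer can be switched off, by compiling without OpenMP or by running a single MPI process, without touching the rest of the code.

#include <stdio.h>
#include <mpi.h>

#define N 1000000

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;        /* distributed layer: this node's slice */
    double local_sum = 0.0;

    /* Shared memory layer: threads on this node split the slice.
       If the code is compiled without OpenMP, the pragma is ignored
       and the loop simply runs serially on each node. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < chunk; i++)
        local_sum += 1.0;        /* stand-in for real local work */

    double total;
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("total = %f\n", total);
    MPI_Finalize();
    return 0;
}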

Why Bother?

Why Bother with HPC at All?

It’s clear that making effective use of HPC takes quite a bit of effort, both learning and programming.

That seems like a lot of trouble to go to just to get your code to run faster.

It’s nice to have a code that used to take a day now run in an hour. But if you can afford to wait a day, what’s the point of HPC?

Why go to all that trouble just to get your code to run faster?

Why HPC is Worth the Bother

What HPC gives you that you won’t get elsewhere is the ability to do bigger, better, more exciting science.

That is, if your code can run faster, then you can tackle much bigger problems in the same amount of time that you used to spend on the smaller problem.

Your Fantasy Problem

For those of you with current research projects:

1. Get together with your research group.
2. Imagine that you had an infinitely large, infinitely fast computer.
3. What problems would you run on it?

References

[1] http://www.dell.com/us/en/dhs/products/model_inspn_2_inspn_4000.htm
[2] http://www.ac3.com.au/edu/hpc-intro/node6.html
[3] http://www.anandtech.com/showdoc.html?i=1460&p=2
[4] http://developer.intel.com/design/chipsets/820/
[5] http://www.toshiba.com/taecdpd/products/features/MK2018gas-Over.shtml
[6] http://www.toshiba.com/taecdpd/techdocs/sdr2002/2002spec.shtml
[7] ftp://download.intel.com/design/Pentium4/manuals/24547003.pdf
[8] http://configure.us.dell.com/dellstore/config.asp?customer_id=19&keycode=6V944&view=1&order_code=40WX
[9] http://www.us.buy.com/retail/computers/category.asp?loc=484
[10] Ruud van der Pas, “The UltraSPARC-III Microprocessor: Architecture Overview,” 2001, p. 23.
[11] Kevin Dowd and Charles Severance, High Performance Computing, 2nd ed., O’Reilly, 1998, p. 16.
[12] “Alpha 21264 Processor” (internal Compaq report), p. 2.