Transcript of: Some of the Software Challenges for Numerical Linear Algebra Libraries to Run at Sustained Petascale Level

Page 1:

Some of the Software Challenges for Numerical Linear Algebra Libraries to Run at Sustained Petascale Level

Jack Dongarra
Innovative Computing Laboratory
University of Tennessee / Oak Ridge National Laboratory

6/13/2007

Page 2:

Outline

• Impact of Multicore
• Using Mixed Precision in Numerical Computing
• Hybrid Architectures
• Self Adapting
• Fault Tolerant Methods

Page 3:

Time to Rethink Software Again

• Must rethink the design of our software
  - Another disruptive technology, similar to what happened with cluster computing and message passing
  - Rethink and rewrite the applications, algorithms, and software
• Numerical libraries, for example, will change
  - Both LAPACK and ScaLAPACK will undergo major changes to accommodate this

Page 4:

Major Changes to Math Software

• Scalar: Fortran code in EISPACK
• Vector: Level 1 BLAS use in LINPACK
• SMP: Level 3 BLAS use in LAPACK
• Distributed Memory: message passing w/ MPI in ScaLAPACK
• Many-Core: event-driven multithreading in PLASMA (Parallel Linear Algebra Software for Multicore Architectures)

Page 5:

Parallelism in LAPACK / ScaLAPACK

[Diagram: the two software stacks. Shared memory: LAPACK over parallel BLAS (ATLAS or specialized BLAS) over threads. Distributed memory: ScaLAPACK over the PBLAS over the BLACS over MPI.]

Two well-known open source software efforts for dense matrix problems.

Page 6:

Steps in the LAPACK LU

• DGETF2 (LAPACK): factor a panel
• DLASWP (LAPACK): backward swap
• DLASWP (LAPACK): forward swap
• DTRSM (BLAS): triangular solve
• DGEMM (BLAS): matrix multiply (most of the work is done here)
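This panel/update decomposition maps directly onto code. Below is a minimal NumPy sketch of the right-looking blocked LU these routines implement; it is illustrative (the function name, block size, and the assumption of a nonsingular matrix are mine, not from the slides), and the comments indicate which LAPACK/BLAS routine performs each step in the real library.

```python
import numpy as np

def blocked_lu(A, nb=64):
    """Sketch of right-looking blocked LU with partial pivoting.
    Returns L, U, piv such that A[piv] == L @ U (assumes A nonsingular)."""
    A = np.array(A, dtype=np.float64)
    n = A.shape[0]
    piv = np.arange(n)
    for k in range(0, n, nb):
        kb = min(nb, n - k)
        # DGETF2: unblocked LU with partial pivoting on the panel A[k:, k:k+kb]
        for j in range(k, k + kb):
            p = j + int(np.argmax(np.abs(A[j:, j])))
            if p != j:
                # swapping whole rows plays the role of the DLASWP backward/
                # forward swaps applied left and right of the panel
                A[[j, p], :] = A[[p, j], :]
                piv[[j, p]] = piv[[p, j]]
            A[j+1:, j] /= A[j, j]
            A[j+1:, j+1:k+kb] -= np.outer(A[j+1:, j], A[j, j+1:k+kb])
        if k + kb < n:
            # DTRSM: triangular solve forms the block row of U
            L11 = np.tril(A[k:k+kb, k:k+kb], -1) + np.eye(kb)
            A[k:k+kb, k+kb:] = np.linalg.solve(L11, A[k:k+kb, k+kb:])
            # DGEMM: trailing-matrix update; most of the work is done here
            A[k+kb:, k+kb:] -= A[k+kb:, k:k+kb] @ A[k:k+kb, k+kb:]
    return np.tril(A, -1) + np.eye(n), np.triu(A), piv

# usage check (illustrative):
# A = np.random.rand(300, 300); L, U, piv = blocked_lu(A, nb=32)
# assert np.allclose(A[piv], L @ U)
```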

Page 7:

LU Timing Profile (4-Core System)

Threads, no lookahead; 1D decomposition.

[Timing chart: time for each component (DGETF2, DLASWP(L), DLASWP(R), DTRSM, DGEMM) across the factorization. Execution proceeds in bulk-synchronous phases, with most of the time spent in DGEMM.]

Page 8:

Adaptive Lookahead - Dynamic

Event-driven multithreading, out-of-order execution. Reorganizing algorithms to use this approach; a sketch follows.
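To make "event-driven, out-of-order execution" concrete, here is a toy task scheduler in Python. It is only a sketch under my own naming (run_dag is hypothetical; a real runtime such as PLASMA's tracks dependencies far more efficiently): each task starts as soon as its own inputs are ready instead of waiting at a global fork-join barrier.

```python
import concurrent.futures as cf

def run_dag(tasks, deps, workers=8):
    """tasks: {name: zero-arg callable}; deps: {name: [prerequisite names]}.
    Runs each task as soon as all of its prerequisites have finished."""
    futures = {}

    def run(name):
        for d in deps.get(name, ()):
            futures[d].result()          # wait only on this task's inputs
        return tasks[name]()

    with cf.ThreadPoolExecutor(max_workers=workers) as pool:
        pending = set(tasks)
        while pending:                   # submit in dependency order
            ready = [t for t in pending
                     if all(d in futures for d in deps.get(t, ()))]
            if not ready:
                raise ValueError("dependency cycle")
            for t in ready:
                futures[t] = pool.submit(run, t)
            pending -= set(ready)
        return {t: f.result() for t, f in futures.items()}

# e.g., one LU step: panel factor -> swaps -> triangular solve -> update
# run_dag({"getf2": f0, "laswp": f1, "trsm": f2, "gemm": f3},
#         {"laswp": ["getf2"], "trsm": ["laswp"], "gemm": ["trsm"]})
```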

Page 9:

Fork-Join vs. Dynamic Execution

[Diagram: fork-join parallel BLAS timeline. Threads T execute a task, synchronize, then execute the next; each join leaves idle time.]

Experiments on Intel's quad-core Clovertown with 2 sockets w/ 8 threads.

Page 10:

Fork-Join vs. Dynamic Execution

[Diagram: the same workload under DAG-based dynamic scheduling. Tasks execute as dependencies allow, with no global joins; the idle time of the fork-join version is the time saved.]

Experiments on Intel's quad-core Clovertown with 2 sockets w/ 8 threads.

Page 11:

Fork-Join vs. Dynamic Execution

Breaking the "hour-glass" pattern of parallel processing.

[Plots: Gflop/s vs. matrix size (0 to 8000) for LU, Cholesky, and QR factorization. In each panel the Dynamic curve sits well above the Fork-Join curve.]

Intel Clovertown: 2.66 GHz clock, 2 sockets, quad-core, 8 cores total.

Page 12:

Intel's Clovertown Quad Core

3 implementations of LU factorization; quad core w/ 2 sockets per board, w/ 8 threads:
1. LAPACK (BLAS fork-join parallelism)
2. ScaLAPACK (message passing using memory copy)
3. DAG-based (dynamic scheduling)

[Plot: Mflop/s vs. problem size (1000 to 15000) for the three implementations; 8-core experiments.]

Page 13:

Multicore, FPGA, Heterogeneity

• What about the potential of FPGA or hybrid core processors?

Page 14:

What about IBM's Cell Processor?

• 9 cores: a PowerPC at 3.2 GHz plus 8 SPEs; about $600
• 204.8 Gflop/s peak! The catch is that this is for 32-bit floating point (single precision, SP)
• 64-bit floating point runs at 14.6 Gflop/s total for all 8 SPEs
  - Divide the SP peak by 14: a factor of 2 because of DP and 7 because of latency issues

The SPEs are fully IEEE 754 compliant in double precision. In single precision, they only implement round-towards-zero, denormalized numbers are flushed to zero, and NaNs are treated like normal numbers. The PowerPC part is fully IEEE compliant.

Page 15:

Moving Data Around on the Cell

[Diagram: each SPE has a 256 KB local store; injection bandwidth is 25.6 GB/s.]

Worst case, memory-bound operations (no reuse of data): 3 data movements (2 in and 1 out) with 2 ops (SAXPY). For the Cell this would be 25.6 GB/s x 2 flops / 12 bytes, about 4.3 Gflop/s.
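A quick check of that bandwidth-bound rate; a sketch, where the 12 bytes follow from three 4-byte single-precision movements per element:

```python
bandwidth = 25.6e9         # bytes/s into an SPE (from the slide)
flops_per_element = 2      # SAXPY: one multiply + one add
bytes_per_element = 3 * 4  # read x, read y, write y; 4 bytes each in SP
rate = bandwidth * flops_per_element / bytes_per_element
print(f"{rate / 1e9:.2f} Gflop/s")   # ~4.27 Gflop/s
```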

Page 16:

On the Way to Understanding How to Use the Cell, Something Else Happened...

Realized we have a similar situation on our commodity processors: SP is 2X faster than DP on many systems.

  Processor               n     SGEMM/DGEMM    n     SGEMV/DGEMV
  AMD Opteron 246        3000      2.00       5000      1.70
  Sun UltraSPARC-IIe     3000      1.64       5000      1.66
  Intel PIII Coppermine  3000      2.03       5000      2.09
  PowerPC 970            3000      2.04       5000      1.44
  Intel Woodcrest        3000      1.81       5000      2.18
  Intel XEON             3000      2.04       5000      1.82
  Intel Centrino Duo     3000      2.71       5000      2.21

Standard Intel Pentium and AMD Opteron have SSE2: 2 flops/cycle DP, 4 flops/cycle SP.
IBM PowerPC has AltiVec: 8 flops/cycle SP, 4 flops/cycle DP (no DP on AltiVec).

Two things going on:
• SP has a higher execution rate, and
• less data to move.

Page 17:

32 or 64 bit Floating Point Precision?

• A long time ago 32-bit floating point was used
  - Still used in scientific apps, but limited
• Most apps use 64-bit floating point
  - Accumulation of round-off error: a 100 Tflop/s computer running for 4 hours performs > 1 exaflop (10^18 ops)
  - Ill-conditioned problems
  - IEEE SP exponent bits too few (8 bits, 10^(+/-38))
  - Critical sections need higher precision
  - Sometimes need extended precision (128-bit fl. pt.)
  - However, some can get by with 32-bit fl. pt. in some parts
• Mixed precision is a possibility
  - Approximate in lower precision and then refine or improve the solution to high precision

Page 18:

Idea Something Like This...

• Exploit 32-bit floating point as much as possible, especially for the bulk of the computation
• Correct or update the solution with selective use of 64-bit floating point to provide a refined result
• Intuitively:
  - Compute a 32-bit result,
  - calculate a correction to the 32-bit result using selected higher precision, and
  - perform the update of the 32-bit result with the correction using high precision.
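A minimal NumPy/SciPy sketch of that scheme; the function name, tolerance, and iteration cap are illustrative assumptions, not from the slides:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, tol=1e-12, max_iter=30):
    """O(n^3) factorization in 32-bit, O(n^2) refinement in 64-bit."""
    lu_piv = lu_factor(A.astype(np.float32))           # bulk of the work, in SP
    x = lu_solve(lu_piv, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                                   # residual in DP
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        c = lu_solve(lu_piv, r.astype(np.float32))      # correction via SP factors
        x += c.astype(np.float64)                       # update in DP
    return x
```

Note that the single-precision factors are reused for every correction, which is why (as the next slide says) only O(n^2) work per step is done in high precision.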

Page 19:

32 and 64 Bit Floating Point Arithmetic

• Iterative refinement for dense systems, Ax = b, can work this way.
  - Wilkinson, Moler, Stewart, & Higham provide error bound results when using working precision.
  - It can be shown that using this approach we can compute the solution to 64-bit floating point precision.
• Requires extra storage; the total is 1.5 times normal.
• O(n^3) work is done in lower precision; O(n^2) work is done in high precision.
• Problems if the matrix is ill-conditioned in SP: O(10^8).

Page 20:

Results for Mixed Precision Iterative Refinement for Dense Ax = b

Architectures (BLAS):
1. Intel Pentium III Coppermine (Goto)
2. Intel Pentium III Katmai (Goto)
3. Sun UltraSPARC IIe (Sunperf)
4. Intel Pentium IV Prescott (Goto)
5. Intel Pentium IV-M Northwood (Goto)
6. AMD Opteron (Goto)
7. Cray X1 (libsci)
8. IBM PowerPC G5 (2.7 GHz) (VecLib)
9. Compaq Alpha EV6 (CXML)
10. IBM SP Power3 (ESSL)
11. SGI Octane (ATLAS)

• Single precision is faster than DP because:
  - Higher parallelism within vector units: 4 ops/cycle (usually) instead of 2 ops/cycle
  - Reduced data motion: 32-bit data instead of 64-bit data
  - Higher locality in cache: more data items in cache

Page 21:

Results for Mixed Precision Iterative Refinement for Dense Ax = b

  Architecture (BLAS-MPI)            #procs    n      DP Solve/SP Solve   DP Solve/Iter Ref   #iter
  AMD Opteron (Goto - OpenMPI MX)      32    22627         1.85                1.79             6
  AMD Opteron (Goto - OpenMPI MX)      64    32000         1.90                1.83             6

Page 22:

IBM Cell 3.2 GHz, Ax = b

[Plot: Gflop/s vs. matrix size (0 to 4500). Curves: SP peak (204 Gflop/s), SGEMM (embarrassingly parallel), SP Ax=b IBM (annotated .30 secs), DP peak (15 Gflop/s), DP Ax=b IBM (annotated 3.9 secs).]

Page 23:

IBM Cell 3.2 GHz, Ax = b

[Same plot with the mixed precision solver DSGESV added: it runs close to the SP Ax=b curve, solving in .47 secs vs. .30 secs for the SP solve and 3.9 secs for the DP solve, an 8.3X speedup over DP.]

Page 24:

LINPACK Benchmark: Potential Realized

Page 25:

Quadruple Precision

Intel Xeon 3.2 GHz, reference implementation of the quad precision BLAS. Accuracy: 10^-32. No more than 3 steps of iterative refinement are needed.

    n    QP Ax=b time (s)   Iter. Refine. DP to QP time (s)   Speedup
   100        0.29                    0.03                       9.5
   200        2.27                    0.10                      20.9
   300        7.61                    0.24                      30.5
   400       17.8                     0.44                      40.4
   500       34.7                     0.69                      49.7
   600       60.1                     1.01                      59.0
   700       94.9                     1.38                      68.7
   800      141.                      1.83                      77.3
   900      201.                      2.33                      86.3
  1000      276.                      2.92                      94.8

• Variable precision factorization (with, say, < 32-bit precision) plus 64-bit refinement produces 64-bit accuracy.

Page 26:

Sparse Direct Solver and Iterative Refinement

MUMPS package based on a multifrontal approach, which generates small dense matrix multiplies.

[Bar chart: speedup over DP on an Opteron (w/ Intel compiler) for single precision and for iterative refinement, across roughly two dozen matrices from Tim Davis's collection (n = 100K to 3M), including G64, Si10H16, airfoil_2d, bcsstk39, cavity26, dawson5, epb3, finan512, heart1, nasasrb, torso2, venkat01, and wathen120, among others.]

Page 27:

Sparse Iterative Methods (PCG)

• Outer/inner iteration
  - Inner iteration in 32-bit floating point
  - Outer iterations using 64-bit floating point
• Outer iteration in 64-bit floating point and inner iteration in 32-bit floating point; a sketch follows.
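A sketch of that outer-DP/inner-SP structure using SciPy's CG as the inner solver. This assumes a recent SciPy (where cg takes rtol), and the function name and tolerances are illustrative:

```python
import numpy as np
from scipy.sparse.linalg import cg

def inner_outer_solve(A, b, inner_tol=1e-4, outer_tol=1e-12, max_outer=50):
    """Outer refinement loop in 64-bit around an inner CG solve in 32-bit."""
    A32 = A.astype(np.float32)
    x = np.zeros_like(b, dtype=np.float64)
    for _ in range(max_outer):
        r = b - A @ x                                    # outer residual in DP
        if np.linalg.norm(r) <= outer_tol * np.linalg.norm(b):
            break
        c, _ = cg(A32, r.astype(np.float32), rtol=inner_tol)  # inner solve in SP
        x += c.astype(np.float64)                        # correction applied in DP
    return x
```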

Page 28:

Mixed Precision Computations for Sparse Inner/Outer-type Iterative Solvers

[Plots, for CG, GMRES, PCG, and PGMRES with diagonal preconditioning:
• Speedups for mixed precision inner-SP/outer-DP (SP/DP) iterative methods vs. DP/DP; higher is better.
• Iterations for mixed precision SP/DP iterative methods vs. DP/DP; lower is better.
Matrix sizes: 11,142; 25,980; 79,275; 230,793; 602,091. Condition numbers: 6,021; 18,000; 39,000; 120,000; 240,000.]

Machine: Intel Woodcrest (3 GHz, 1333 MHz bus). Stopping criterion: residual reduction relative to r0 (10^-12).

Page 29:

Intriguing Potential

• Exploit lower precision as much as possible
  - Payoff in performance: faster floating point, less data to move
• Automatically switch between SP and DP to match the desired accuracy
  - Compute the solution in SP and then a correction to the solution in DP
• Potential for GPUs, FPGAs, special purpose processors
  - What about 16-bit floating point?
  - Use as little as you can get away with and improve the accuracy
• Linear systems and eigenvalue, optimization problems, where Newton's method is used
  - Correction = A\(b - Ax), applied as x = x + correction

Page 30:

How to Deal with Complexity?

• Adaptivity is the key for applications to effectively use available resources whose complexity is exponentially increasing
• Goal: automatically bridge the gap between the application and computers that are rapidly changing and getting more and more complex

Page 31:

Self Adapting Numerical Software

• Optimizing software to exploit the features of a given system has historically been an exercise in hand customization.
  - Time consuming and tedious
  - Hard to predict performance from source code
  - Must be redone for every architecture and compiler
  - Software technology often lags hardware/architecture
  - Best algorithm may depend on input, so some tuning may be needed at run time
• With good reason, scientists expect their computing tools to serve them and not the other way around.
• There is a need for quick/dynamic deployment of optimized routines.
  - ATLAS, PhiPAC, BeBoP, Spiral, FFTW, GCO, ...

Page 32:

Examples of Automatic Performance Tuning

• Dense BLAS: sequential; ATLAS (UTK) & PHiPAC (UCB)
• Fast Fourier Transform (FFT) & variations: FFTW (MIT); sequential and parallel; www.fftw.org
• Digital Signal Processing: SPIRAL, www.spiral.net (CMU)
• MPI Collectives (UCB, UTK)
• More projects, conferences, government reports, ...

Page 33:

Generic Code Optimization

• Can ATLAS-like techniques be applied to arbitrary code?
• What do we mean by ATLAS-like techniques?
  - Blocking
  - Loop unrolling
  - Data prefetch
  - Functional unit scheduling
  - etc.
• Referred to as empirical optimization
  - Generate many variations
  - Pick the best implementation by measuring the performance (a sketch follows)
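A toy version of that generate-and-measure loop; a sketch in which the kernel, candidate block sizes, and problem size are all illustrative (real autotuners like ATLAS generate and compile C variants rather than timing Python):

```python
import time
import numpy as np

def blocked_matmul(A, B, nb):
    """One 'variant': a blocked matrix multiply parameterized by block size."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, nb):
        for k in range(0, n, nb):
            for j in range(0, n, nb):
                C[i:i+nb, j:j+nb] += A[i:i+nb, k:k+nb] @ B[k:k+nb, j:j+nb]
    return C

def autotune(n=512, candidates=(16, 32, 64, 128, 256)):
    """Empirical optimization: time each variant, keep the fastest."""
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    timings = {}
    for nb in candidates:
        t0 = time.perf_counter()
        blocked_matmul(A, B, nb)
        timings[nb] = time.perf_counter() - t0
    return min(timings, key=timings.get)   # best block size found empirically
```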

Page 34:

MADNESS Kernel Tuning

• Code from Robert Harrison at ORNL
• Extract the matrix-vector multiply kernel from the doitgen routine
• Design a specific code generator for small-size matrix-vector multiplication
• Tune the optimal block size and unrolling factor separately for each input size

Page 35:

MFLOPS, Opteron (1.8 GHz)

[Plot: Mflop/s vs. kernel size (1 to 31) for three versions: the auto-tuned C matrix-vector kernel (reaching roughly 2500 Mflop/s), the hand-tuned Fortran multi-resolution kernel, and the reference kernel in C.]

Page 36:

MFLOPS, Woodcrest (3.0 GHz)

[Same comparison on Woodcrest: Mflop/s vs. kernel size (1 to 28). The auto-tuned C matrix-vector kernel reaches roughly 3500-4000 Mflop/s, compared with the hand-tuned Fortran multi-resolution kernel and the reference kernel in C.]

Page 37:

Applying Self Adapting Software

• Numerical and non-numerical applications: BLAS-like ops / message passing collectives
• Static or dynamic determination of the code to be used: performed at make time / every time invoked
• Independent of or dependent on the data presented: same on each data set / depends on properties of the data

Page 38:

Three Ideas for Fault Tolerant Linear Algebra Algorithms

1. Lossless diskless checkpointing for iterative methods
   - Checksum maintained in active processors
   - On failure, roll back to the checkpoint and continue
   - No lost data
   (A sketch of checksum recovery follows.)
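The checksum idea in miniature; a sketch with made-up sizes: one spare "processor" holds the sum of the data blocks, so any single lost block can be rebuilt from the survivors.

```python
import numpy as np

rng = np.random.default_rng(0)
blocks = [rng.standard_normal(4) for _ in range(4)]  # data held by 4 processors
checksum = np.sum(blocks, axis=0)                    # maintained on a spare processor

lost = 2                                             # processor 2 fails
survivors = [b for i, b in enumerate(blocks) if i != lost]
recovered = checksum - np.sum(survivors, axis=0)     # rebuild from the checksum
assert np.allclose(recovered, blocks[lost])          # no lost data
```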

Page 39:

Three Ideas for Fault Tolerant Linear Algebra Algorithms (continued)

2. Lossy approach for iterative methods
   - No checkpoint of computed data maintained
   - On failure, approximate the missing data and carry on
   - Data is lost, but an approximation is used to recover

Page 40:

Three Ideas for Fault Tolerant Linear Algebra Algorithms (continued)

3. Checkpoint-less methods for dense algorithms
   - Checksum maintained as part of the computation
   - No rollback needed; no lost data

Page 41:

Three Ideas for Fault Tolerant Linear Algebra Algorithms

• Large-scale fault tolerance
  - Adaptation: resilience and recovery
  - Predictive techniques for probability of failure
  - Resource classes and capabilities, coupled to application usage modes
  - Resilience implementation mechanisms: adaptive checkpoint frequency, in-memory checkpoints
• By monitoring (e.g., with PAPI), one can identify performance problems and failure probability
• When there is potential of failure, migrate the process to another processor

Page 42:

Multi, Many, ..., Many-More

• Multi-, many-, and many-more-core are here and coming fast
  - Parallelism for the masses
• Programming models need to be more human-centric and engage the full range of issues associated with developing a parallel application on manycore hardware.
• Autotuners should take on a larger, or at least complementary, role to compilers in translating parallel programs.

Page 43:

Summary of Current Unmet Needs

• Performance / portability
• Memory bandwidth / latency
• Fault tolerance
• Adaptability: some degree of autonomy to self-optimize, test, or monitor; able to change mode of operation, static or dynamic
• Better programming models
  - Global shared address space
  - Visible locality
• Maybe coming soon (incremental, yet offering real benefits):
  - Global Address Space (GAS) languages: UPC, Co-Array Fortran, Titanium (Chapel, X10, Fortress)
  - "Minor" extensions to existing languages
  - More convenient than MPI
  - Have performance transparency via explicit remote memory references
• What's needed is a long-term, balanced investment in hardware, software, algorithms, and applications in the HPC ecosystem.

Page 44:

Collaborators / Support

Alfredo Buttari
Julien Langou
Julie Langou
Piotr Luszczek
Jakub Kurzak
Stan Tomov

Page 45:

Users Guide for SC on PS3

• SCOP3: A Rough Guide to Scientific Computing on the PlayStation 3
• See webpage for details