CS 267 Applications of Parallel Computers Dense Linear Algebra


Page 1: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.1 Demmel Fa 2001

CS 267 Applications of Parallel Computers

Dense Linear Algebra

James Demmel

http://www.cs.berkeley.edu/~demmel/cs267_221001.ppt

Page 2: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.2 Demmel Fa 2001

Outline

° Motivation for Dense Linear Algebra (as opposed to sparse)

° Benchmarks

° Review Gaussian Elimination (GE) for solving Ax=b

° Optimizing GE for caches on sequential machines

• using matrix-matrix multiplication (BLAS)

° LAPACK library overview and performance

° Data layouts on parallel machines

° Parallel matrix-matrix multiplication review

° Parallel Gaussian Elimination

° ScaLAPACK library overview

° Eigenvalue problems

° Open Problems

Page 3: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.3 Demmel Fa 2001

Motivation

° 3 Basic Linear Algebra Problems

• Linear Equations: Solve Ax=b for x

• Least Squares: Find x that minimizes Σ r_i^2 where r = Ax - b

• Eigenvalues: Find λ and x where Ax = λx

• Lots of variations depending on structure of A (e.g. symmetry)

° Why dense A, as opposed to sparse A?

• Aren’t “most” large matrices sparse?

• Dense algorithms easier to understand

• Some applications yield large dense matrices

- Ax=b: Computational Electromagnetics

- Ax = λx: Quantum Chemistry

• Benchmarking

- “How fast is your computer?” = “How fast can you solve dense Ax=b?”

• Large sparse matrix algorithms often yield smaller (but still large) dense problems

Page 4: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.4 Demmel Fa 2001

Computational Electromagnetics – Solve Ax=b

•Developed during 1980s, driven by defense applications

•Determine the RCS (radar cross section) of airplane

•Reduce signature of plane (stealth technology)

•Other applications are antenna design, medical equipment

•Two fundamental numerical approaches:

•MOM methods of moments ( frequency domain)

•Large dense matrices

•Finite differences (time domain)

•Even larger sparse matrices

Page 5: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.5 Demmel Fa 2001

Computational Electromagnetics

image: NW Univ. Comp. Electromagnetics Laboratory http://nueml.ece.nwu.edu/

- Discretize surface into triangular facets using standard modeling tools

- Amplitude of currents on surface are unknowns

- Integral equation is discretized into a set of linear equations

Page 6: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.6 Demmel Fa 2001

Computational Electromagnetics (MOM)

After discretization the integral equation

has the form

A x = b where

A is the (dense) impedance matrix,

x is the unknown vector of amplitudes, and

b is the excitation vector.

(see Cwik, Patterson, and Scott, Electromagnetic Scattering on the Intel Touchstone Delta, IEEE Supercomputing ‘92, pp 538 - 542)

Page 7: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.7 Demmel Fa 2001

Results for Parallel Implementation on Intel Delta

Task                                    Time (hours)

Fill (compute n^2 matrix entries)           9.20    (embarrassingly parallel but slow)

Factor (Gaussian Elimination, O(n^3))       8.25    (good parallelism with right algorithm)

Solve (O(n^2))                              2.17    (reasonable parallelism with right algorithm)

Field Calc. (O(n))                          0.12    (embarrassingly parallel and fast)

The problem solved was for a matrix of size 48,672. 2.6 Gflops for Factor - The world record in 1991.

Page 8: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.8 Demmel Fa 2001

Current Records for Solving Dense Systems

Year     System Size    Machine           # Procs    Gflops (Peak)

1950's   O(100)
1995     128,600        Intel Paragon        6768      281   (338)
1996     215,000        Intel ASCI Red       7264     1068  (1453)
1998     148,000        Cray T3E             1488     1127  (1786)
1998     235,000        Intel ASCI Red       9152     1338  (1830)   (200 MHz Ppro)
1999     374,000        SGI ASCI Blue        5040     1608  (2520)
2000     362,000        Intel ASCI Red       9632     2380  (3207)   (333 MHz Xeon)
2001     518,000        IBM ASCI White       8000     7226 (12000)   (375 MHz Power 3)

source: Alan Edelman http://www-math.mit.edu/~edelman/records.html LINPACK Benchmark: http://www.netlib.org/performance/html/PDSreports.html

www.netlib.org, click on Performance DataBase Server

Page 9: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.9 Demmel Fa 2001

Current Records for Solving Small Dense Systems

                                       Megaflops
Machine                              n=100    n=1000     Peak

Fujitsu VPP 5000 (1 proc, 300 MHz)    1156      8784     9600
Cray T90 (32 proc, 450 MHz)                    29360    57600
Cray T90 (4 proc, 450 MHz)            1129      5735     7200
IBM Power 4 (1 proc, 1300 MHz)        1074      2394     5200
…
Dell Itanium (4 proc, 800 MHz)                  7358    12800
Dell Itanium (2 proc, 800 MHz)                  4504     6400
Dell Itanium (1 proc, 800 MHz)         600      2382     3200

source: LINPACK Benchmark: http://www.netlib.org/performance/html/PDSreports.html

www.netlib.org, click on Performance DataBase Server

Page 10: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.10 Demmel Fa 2001

Computational Chemistry – Ax = λx

° Seek energy levels of a molecule, crystal, etc.

• Solve Schroedinger’s Equation for energy levels = eigenvalues

• Discretize to get Ax = λBx, solve for eigenvalues λ and eigenvectors x

• A and B large, symmetric or Hermitian matrices (B positive definite)

• May want some or all eigenvalues and/or eigenvectors

° MP-Quest (Sandia NL)

• Si and sapphire crystals of up to 3072 atoms

• Local Density Approximation to Schroedinger Equation

• A and B up to n=40000, complex Hermitian

• Need all eigenvalues and eigenvectors

• Need to iterate up to 20 times (for self-consistency)

° Implemented on Intel ASCI Red

• 9200 Pentium Pro 200 processors (4600 Duals, a CLUMP)

• Overall application ran at 605 Gflops (out of 1800 Gflops peak)

• Eigensolver ran at 684 Gflops

• www.cs.berkeley.edu/~stanley/gbell/index.html

• Runner-up for Gordon Bell Prize at Supercomputing 98

° Same problem arises in astrophysics...

Page 11: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.11 Demmel Fa 2001

Review of Gaussian Elimination (GE) for solving Ax=b

° Add multiples of each row to later rows to make A upper triangular

° Solve resulting triangular system Ux = c by substitution

… for each column i
… zero it out below the diagonal by adding multiples of row i to later rows
for i = 1 to n-1
    … for each row j below row i
    for j = i+1 to n
        … add a multiple of row i to row j
        for k = i to n
            A(j,k) = A(j,k) - (A(j,i)/A(i,i)) * A(i,k)
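Added for illustration (not from the slides): a minimal runnable Python/NumPy version of this triple loop. The multiplier is hoisted out of the innermost loop (the point of Refine step (1) on the next slide), since recomputing A(j,i)/A(i,i) after A(j,i) has been zeroed would give the wrong update.

import numpy as np

def gaussian_elimination(A):
    # Reduce A to upper triangular form in place, no pivoting.
    # Mirrors the triple loop above, with the multiplier computed once per row j.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for i in range(n - 1):              # for each column i
        for j in range(i + 1, n):       # for each row j below row i
            m = A[j, i] / A[i, i]       # multiplier; assumes A[i, i] != 0
            for k in range(i, n):       # add a multiple of row i to row j
                A[j, k] -= m * A[i, k]
    return A

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
print(gaussian_elimination(A))          # upper triangular result U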

Page 12: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.12 Demmel Fa 2001

Refine GE Algorithm (1)

° Initial Version

° Remove computation of constant A(j,i)/A(i,i) from inner loop

… for each column i
… zero it out below the diagonal by adding multiples of row i to later rows
for i = 1 to n-1
    … for each row j below row i
    for j = i+1 to n
        … add a multiple of row i to row j
        for k = i to n
            A(j,k) = A(j,k) - (A(j,i)/A(i,i)) * A(i,k)

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i to n
            A(j,k) = A(j,k) - m * A(i,k)

Page 13: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.13 Demmel Fa 2001

Refine GE Algorithm (2)

° Last version

° Don’t compute what we already know: zeros below diagonal in column i

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i to n
            A(j,k) = A(j,k) - m * A(i,k)

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - m * A(i,k)

Page 14: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.14 Demmel Fa 2001

Refine GE Algorithm (3)

° Last version

° Store multipliers m below diagonal in zeroed entries for later use

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - m * A(i,k)

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)

Page 15: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.15 Demmel Fa 2001

Refine GE Algorithm (4)

° Last version

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)

° Split Loop

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
    for j = i+1 to n
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)

Page 16: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.16 Demmel Fa 2001

Refine GE Algorithm (5)

° Last version

° Express using matrix operations (BLAS)

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
    for j = i+1 to n
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)

for i = 1 to n-1
    A(i+1:n,i) = A(i+1:n,i) * ( 1 / A(i,i) )
    A(i+1:n,i+1:n) = A(i+1:n,i+1:n) - A(i+1:n,i) * A(i,i+1:n)

Page 17: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.17 Demmel Fa 2001

What GE really computes

° Call the strictly lower triangular matrix of multipliers M, and let L = I+M

° Call the upper triangle of the final matrix U

° Lemma (LU Factorization): If the above algorithm terminates (does not divide by zero) then A = L*U

° Solving A*x=b using GE

• Factorize A = L*U using GE (cost = 2/3 n^3 flops)

• Solve L*y = b for y, using substitution (cost = n^2 flops)

• Solve U*x = y for x, using substitution (cost = n^2 flops)

° Thus A*x = (L*U)*x = L*(U*x) = L*y = b as desired

for i = 1 to n-1
    A(i+1:n,i) = A(i+1:n,i) / A(i,i)
    A(i+1:n,i+1:n) = A(i+1:n,i+1:n) - A(i+1:n,i) * A(i,i+1:n)
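For illustration (not from the slides), here is the factor-then-substitute recipe above in Python/NumPy, a sketch that assumes no pivoting is needed (all pivots nonzero):

import numpy as np

def lu_no_pivot(A):
    # LU factorization without pivoting: A = L @ U, L unit lower triangular.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for i in range(n - 1):
        A[i+1:, i] /= A[i, i]                              # multipliers stored below the diagonal
        A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])  # rank-1 update of trailing matrix
    return np.tril(A, -1) + np.eye(n), np.triu(A)          # L = I + M, U = upper triangle

def solve_lu(L, U, b):
    # Solve A x = b given A = L U, by forward then back substitution.
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                                     # L y = b (forward substitution)
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):                           # U x = y (back substitution)
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([10.0, 12.0])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A), solve_lu(L, U, b))            # True [1. 2.]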

Page 18: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.18 Demmel Fa 2001

Problems with basic GE algorithm

° What if some A(i,i) is zero? Or very small?

• Result may not exist, or be “unstable”, so need to pivot

° Current computation is all BLAS 1 or BLAS 2, but we know that BLAS 3 (matrix multiply) is fastest (earlier lectures…)

for i = 1 to n-1
    A(i+1:n,i) = A(i+1:n,i) / A(i,i)                             … BLAS 1 (scale a vector)
    A(i+1:n,i+1:n) = A(i+1:n,i+1:n) - A(i+1:n,i) * A(i,i+1:n)    … BLAS 2 (rank-1 update)

[Figure: performance of BLAS 1, BLAS 2, and BLAS 3 operations relative to machine peak]

Page 19: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.19 Demmel Fa 2001

Pivoting in Gaussian Elimination

° A = [ 0 1 ]   fails completely, even though A is “easy”
      [ 1 0 ]

° Illustrate problems in 3-decimal digit arithmetic:

A = [ 1e-4  1 ]   and   b = [ 1 ],   correct answer to 3 places is x = [ 1 ]
    [ 1     1 ]             [ 2 ]                                      [ 1 ]

° Result of LU decomposition is

L = [ 1            0 ]  =  [ 1     0 ]    … no roundoff error yet
    [ fl(1/1e-4)   1 ]     [ 1e4   1 ]

U = [ 1e-4   1            ]  =  [ 1e-4    1   ]    … error in 4th decimal place
    [ 0      fl(1-1e4*1)  ]     [ 0      -1e4 ]

Check if A = L*U = [ 1e-4  1 ]    … (2,2) entry entirely wrong
                   [ 1     0 ]

° Algorithm “forgets” (2,2) entry, gets same L and U for all |A(2,2)| < 5

° Numerical instability

° Computed solution x totally inaccurate

° Cure: Pivot (swap rows of A) so entries of L and U bounded
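To make the instability concrete, here is a small added Python/NumPy experiment (my own illustration, not from the slides): in double precision, factoring a matrix with a tiny pivot without pivoting loses the answer, while swapping the rows first (what partial pivoting would do) recovers it.

import numpy as np

def lu_no_pivot(A):
    # LU without pivoting; multipliers stored below the diagonal (same routine as the previous sketch).
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for i in range(n - 1):
        A[i+1:, i] /= A[i, i]
        A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])
    return np.tril(A, -1) + np.eye(n), np.triu(A)

A = np.array([[1e-20, 1.0],
              [1.0,   1.0]])
b = np.array([1.0, 2.0])                 # true answer is close to x = [1, 1]

L, U = lu_no_pivot(A)                    # tiny pivot 1e-20: multiplier is 1e20
x_bad = np.linalg.solve(U, np.linalg.solve(L, b))
print("no pivoting:  ", x_bad)           # roughly [0, 1] -- the first component is lost

P = np.array([[0.0, 1.0], [1.0, 0.0]])   # swap the rows first (partial pivoting)
L, U = lu_no_pivot(P @ A)
x_good = np.linalg.solve(U, np.linalg.solve(L, P @ b))
print("with pivoting:", x_good)          # close to [1, 1]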

Page 20: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.20 Demmel Fa 2001

Gaussian Elimination with Partial Pivoting (GEPP)

° Partial Pivoting: swap rows so that each multiplier satisfies |L(j,i)| = |A(j,i)/A(i,i)| <= 1

for i = 1 to n-1
    find and record k where |A(k,i)| = max{i <= j <= n} |A(j,i)|
        … i.e. largest entry in rest of column i
    if |A(k,i)| = 0
        exit with a warning that A is singular, or nearly so
    elseif k != i
        swap rows i and k of A
    end if
    A(i+1:n,i) = A(i+1:n,i) / A(i,i)        … each quotient lies in [-1,1]
    A(i+1:n,i+1:n) = A(i+1:n,i+1:n) - A(i+1:n,i) * A(i,i+1:n)

° Lemma: This algorithm computes A = P*L*U, where P is a permutation matrix

° Since each entry of L satisfies |L(i,j)| <= 1, this algorithm is considered numerically stable

° For details see LAPACK code at www.netlib.org/lapack/single/sgetf2.f
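Added as an illustration (the authoritative version is the LAPACK routine cited above): a compact Python/NumPy sketch of this loop, returning a permutation vector instead of an explicit P. The function name is my own.

import numpy as np

def gepp(A):
    # GE with partial pivoting: returns piv, L, U with A[piv, :] = L @ U.
    # Mirrors the pseudocode: pick the largest entry in the rest of column i,
    # swap it to the diagonal, then scale and do the rank-1 update.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    piv = np.arange(n)
    for i in range(n - 1):
        k = i + np.argmax(np.abs(A[i:, i]))       # largest entry in rest of column i
        if A[k, i] == 0.0:
            raise ValueError("A is singular, or nearly so")
        if k != i:                                # swap rows i and k of A (and record it)
            A[[i, k], :] = A[[k, i], :]
            piv[[i, k]] = piv[[k, i]]
        A[i+1:, i] /= A[i, i]                     # each quotient lies in [-1, 1]
        A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])
    return piv, np.tril(A, -1) + np.eye(n), np.triu(A)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
piv, L, U = gepp(A)
print(np.allclose(L @ U, A[piv, :]))              # True: P*A = L*U
print(np.max(np.abs(L)))                          # <= 1, as the lemma promises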

Page 21: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.21 Demmel Fa 2001

Converting BLAS2 to BLAS3 in GEPP

° Blocking

• Used to optimize matrix-multiplication

• Harder here because of data dependencies in GEPP

° Delayed Updates

• Save updates to “trailing matrix” from several consecutive BLAS2 updates

• Apply many saved updates simultaneously in one BLAS3 operation

° Same idea works for much of dense linear algebra

• Open questions remain

° Need to choose a block size b

• Algorithm will save and apply b updates

• b must be small enough so that active submatrix consisting of b columns of A fits in cache

• b must be large enough to make BLAS3 fast

Page 22: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.22 Demmel Fa 2001

Blocked GEPP (www.netlib.org/lapack/single/sgetrf.f)

for ib = 1 to n-1 step b       … process matrix b columns at a time
    end = ib + b-1             … point to end of block of b columns
    apply BLAS2 version of GEPP to get A(ib:n , ib:end) = P’ * L’ * U’
    … let LL denote the strict lower triangular part of A(ib:end , ib:end) + I
    A(ib:end , end+1:n) = LL^-1 * A(ib:end , end+1:n)      … update next b rows of U
    A(end+1:n , end+1:n) = A(end+1:n , end+1:n) - A(end+1:n , ib:end) * A(ib:end , end+1:n)
        … apply delayed updates with a single matrix-multiply with inner dimension b

(For a correctness proof, see the online notes.)
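A Python/NumPy sketch of the blocked structure (added for illustration; it drops the pivoting so the three steps stand out: panel factorization, triangular solve for the block row of U, delayed BLAS3 update). Function names are my own.

import numpy as np

def lu_unblocked(A):
    # BLAS2-style LU of a panel, in place, no pivoting (kept simple for clarity).
    m, nb = A.shape
    for i in range(min(m, nb)):
        A[i+1:, i] /= A[i, i]
        A[i+1:, i+1:nb] -= np.outer(A[i+1:, i], A[i, i+1:nb])

def lu_blocked(A, b=32):
    # Blocked right-looking LU (no pivoting); L and U overwrite A.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for ib in range(0, n, b):                 # process matrix b columns at a time
        end = min(ib + b, n)
        lu_unblocked(A[ib:, ib:end])          # factor the panel A(ib:n, ib:end)
        if end < n:
            LL = np.tril(A[ib:end, ib:end], -1) + np.eye(end - ib)
            # update next b rows of U: A(ib:end, end:n) = LL^-1 * A(ib:end, end:n)
            # (a triangular solve, BLAS3 TRSM, in a real implementation)
            A[ib:end, end:] = np.linalg.solve(LL, A[ib:end, end:])
            # apply the delayed updates with one matrix multiply of inner dimension b (BLAS3)
            A[end:, end:] -= A[end:, ib:end] @ A[ib:end, end:]
    return np.tril(A, -1) + np.eye(n), np.triu(A)

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)    # diagonally dominant: safe without pivoting
L, U = lu_blocked(A, b=32)
print(np.allclose(L @ U, A))                               # True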

Page 23: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.23 Demmel Fa 2001

Efficiency of Blocked GEPP

Page 24: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.24 Demmel Fa 2001

Overview of LAPACK

° Standard library for dense/banded linear algebra

• Linear systems: A*x=b

• Least squares problems: min_x || A*x - b ||_2

• Eigenvalue problems: Ax = λx, Ax = λBx

• Singular value decomposition (SVD): A = UΣV^T

° Algorithms reorganized to use BLAS3 as much as possible

° Basis of math libraries on many computers, Matlab 6

° Many algorithmic innovations remain

• Projects available

• Automatic optimization

• Quadtree matrix data structures for locality

• New eigenvalue algorithms

Page 25: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.25 Demmel Fa 2001

Performance of LAPACK (n=1000)

Page 26: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.26 Demmel Fa 2001

Performance of LAPACK (n=100)

Page 27: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.27 Demmel Fa 2001

Recursive Algorithms

° Still uses delayed updates, but organized differently

• (formulas on board)

° Can exploit recursive data layouts

• 3x speedups on least squares for tall, thin matrices

° Theoretically optimal memory hierarchy performance

° See references at• http://lawra.uni-c.dk/lawra/index.html

• http://www.cs.berkeley.edu/~yelick/cs267f01/lectures/Lect14.html

• http://www.cs.umu.se/research/parallel/recursion/recursive-qr/

Page 28: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.28 Demmel Fa 2001

Recursive Algorithms – Limits

° Two kinds of dense matrix decompositions

° One Sided

• Sequence of simple operations applied on left of matrix

• Gaussian Elimination: A = L*U or A = P*L*U

- Symmetric Gaussian Elimination: A = L*D*L^T

- Cholesky: A = L*L^T

• QR Decomposition for Least Squares: A = Q*R

• Can be nearly 100% BLAS 3

• Susceptible to recursive algorithms

° Two Sided

• Sequence of simple operations applied on both sides, alternating

• Eigenvalue algorithms, SVD

• At least ~25% BLAS 2

• Seem impervious to recursive approach

• Some recent progress on SVD (25% vs 50% BLAS2)

Page 29: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.29 Demmel Fa 2001

Parallelizing Gaussian Elimination

° Recall parallelization steps from earlier lecture

• Decomposition: identify enough parallel work, but not too much

• Assignment: load balance work among threads

• Orchestrate: communication and synchronization

• Mapping: which processors execute which threads

° Decomposition

• In BLAS 2 algorithm nearly each flop in inner loop can be done in parallel, so with n^2 processors, need 3n parallel steps

• This is too fine-grained, prefer calls to local matmuls instead

• Need to discuss parallel matrix multiplication

° Assignment

• Which processors are responsible for which submatrices?

for i = 1 to n-1
    A(i+1:n,i) = A(i+1:n,i) / A(i,i)                             … BLAS 1 (scale a vector)
    A(i+1:n,i+1:n) = A(i+1:n,i+1:n) - A(i+1:n,i) * A(i,i+1:n)    … BLAS 2 (rank-1 update)

Page 30: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.30 Demmel Fa 2001

Different Data Layouts for Parallel GE (on 4 procs)

[Figure: candidate data layouts, with annotations]

The winner!

Bad load balance: P0 idle after first n/4 steps

Load balanced, but can’t easily use BLAS2 or BLAS3

Can trade load balance and BLAS2/3 performance by choosing b, but factorization of block column is a bottleneck

Complicated addressing

Page 31: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.31 Demmel Fa 2001

PDGEMM = PBLAS routine for matrix multiply

Observations:
- For fixed N, as P increases, Mflops increases, but with less than 100% efficiency
- For fixed P, as N increases, Mflops (efficiency) rises

DGEMM = BLAS routine for matrix multiply

Maximum speed for PDGEMM = # Procs * speed of DGEMM

Observations (same as above):
- Efficiency always at least 48%
- For fixed N, as P increases, efficiency drops
- For fixed P, as N increases, efficiency increases

Page 32: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.32 Demmel Fa 2001

Review: BLAS 3 (Blocked) GEPP

for ib = 1 to n-1 step b       … process matrix b columns at a time
    end = ib + b-1             … point to end of block of b columns
    apply BLAS2 version of GEPP to get A(ib:n , ib:end) = P’ * L’ * U’
    … let LL denote the strict lower triangular part of A(ib:end , ib:end) + I
    A(ib:end , end+1:n) = LL^-1 * A(ib:end , end+1:n)      … update next b rows of U
    A(end+1:n , end+1:n) = A(end+1:n , end+1:n) - A(end+1:n , ib:end) * A(ib:end , end+1:n)
        … apply delayed updates with a single matrix-multiply with inner dimension b

BLAS 3

Page 33: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.33 Demmel Fa 2001

Review: Row and Column Block Cyclic Layout

Processors and matrix blocks are distributed in a 2D array

pcol-fold parallelism in any column, and calls to the BLAS2 and BLAS3 on matrices of size brow-by-bcol

Serial bottleneck is eased

Need not be symmetric in rows and columns
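For concreteness, a small added Python helper (an illustration, assuming square b-by-b blocks on a pr-by-pc grid; the function name and its parameters are my own, not ScaLAPACK's) that maps a global matrix entry to its owner in a 2D block cyclic layout:

def owner_2d_block_cyclic(i, j, b, pr, pc):
    # Return ((prow, pcol), (local_block_row, local_block_col)) for global entry (i, j).
    # Blocks of size b x b are dealt out cyclically over a pr x pc processor grid,
    # like cards: block (bi, bj) goes to processor (bi mod pr, bj mod pc).
    bi, bj = i // b, j // b                      # which global block the entry lives in
    proc = (bi % pr, bj % pc)                    # owning processor in the grid
    local = (bi // pr, bj // pc)                 # position of that block on its owner
    return proc, local

# Example: 2x2 blocks on a 2x2 grid -- entry (5, 2) lives in global block (2, 1),
# owned by processor (0, 1), stored there as its local block (1, 0).
print(owner_2d_block_cyclic(5, 2, b=2, pr=2, pc=2))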

Page 34: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.34 Demmel Fa 2001

Distributed GE with a 2D Block Cyclic Layout

The block size b in the algorithm and the block sizes brow and bcol in the layout satisfy b = brow = bcol.

Shaded regions indicate busy processors or communication performed.

It is unnecessary to have a barrier between each step of the algorithm; e.g., steps 9, 10, and 11 can be pipelined.

Page 35: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.35 Demmel Fa 2001

Distributed GE with a 2D Block Cyclic Layout

Page 36: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.36 Demmel Fa 2001

Matrix multiply of green = green - blue * pink

Page 37: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.37 Demmel Fa 2001

PDGESV = ScaLAPACK parallel LU routine

Since it can run no faster than its inner loop (PDGEMM), we measure:

Efficiency = Speed(PDGESV) / Speed(PDGEMM)

Observations:
- Efficiency well above 50% for large enough problems
- For fixed N, as P increases, efficiency decreases (just as for PDGEMM)
- For fixed P, as N increases, efficiency increases (just as for PDGEMM)
- From the bottom table, the cost of solving Ax=b is about half that of matrix multiply for large enough matrices. From the flop counts we would expect it to be (2*n^3)/(2/3*n^3) = 3 times faster, but communication makes it a little slower.

Page 38: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.38 Demmel Fa 2001

Page 39: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.39 Demmel Fa 2001

Page 40: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.40 Demmel Fa 2001

Scales well, nearly full machine speed

Page 41: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.41 Demmel Fa 2001

Old version, pre-1998 Gordon Bell Prize

Still have ideas to accelerate. Project available!

Old algorithm, plan to abandon

Page 42: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.42 Demmel Fa 2001

The “Holy Grail” of Eigensolvers for Symmetric matrices

To be propagated throughout LAPACK and ScaLAPACK

Page 43: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.43 Demmel Fa 2001

Have good ideas to speed up. Project available!

Hardest of all to parallelize

Page 44: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.44 Demmel Fa 2001

Out-of-core means matrix lives on disk; too big for main mem

Much harder to hide latency of disk

QR much easier than LU because no pivoting needed for QR

Page 45: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.45 Demmel Fa 2001

A small software project ...

Page 46: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.46 Demmel Fa 2001

Extra Slides

Page 47: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.47 Demmel Fa 2001

Parallelizing Gaussian Elimination

° Recall parallelization steps from Lecture 3

• Decomposition: identify enough parallel work, but not too much

• Assignment: load balance work among threads

• Orchestrate: communication and synchronization

• Mapping: which processors execute which threads

° Decomposition

• In BLAS 2 algorithm nearly each flop in inner loop can be done in parallel, so with n^2 processors, need 3n parallel steps

• This is too fine-grained, prefer calls to local matmuls instead

for i = 1 to n-1
    A(i+1:n,i) = A(i+1:n,i) / A(i,i)                             … BLAS 1 (scale a vector)
    A(i+1:n,i+1:n) = A(i+1:n,i+1:n) - A(i+1:n,i) * A(i,i+1:n)    … BLAS 2 (rank-1 update)

Page 48: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.48 Demmel Fa 2001

Assignment of parallel work in GE

° Think of assigning submatrices to threads, where each thread is responsible for updating the submatrix it owns

• “owner computes” rule natural because of locality

° What should submatrices look like to achieve load balance?

Page 49: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.49 Demmel Fa 2001

Different Data Layouts for Parallel GE (on 4 procs)

[Figure: candidate data layouts, with annotations]

The winner!

Bad load balance: P0 idle after first n/4 steps

Load balanced, but can’t easily use BLAS2 or BLAS3

Can trade load balance and BLAS2/3 performance by choosing b, but factorization of block column is a bottleneck

Complicated addressing

Page 50: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.50 Demmel Fa 2001

The main steps in the solution process are

Fill: computing the matrix elements of A

Factor: factoring the dense matrix A

Solve: solving for one or more excitations b

Field Calc: computing the fields scattered from the object

Computational Electromagnetics (MOM)

Page 51: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.51 Demmel Fa 2001

Analysis of MOM for Parallel Implementation

Task Work Parallelism Parallel Speed

Fill O(n**2) embarrassing low

Factor O(n**3) moderately diff. very high

Solve O(n**2) moderately diff. high

Field Calc. O(n) embarrassing high

Page 52: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.52 Demmel Fa 2001

BLAS 3 (Blocked) GEPP, using Delayed Updates

for ib = 1 to n-1 step b       … process matrix b columns at a time
    end = ib + b-1             … point to end of block of b columns
    apply BLAS2 version of GEPP to get A(ib:n , ib:end) = P’ * L’ * U’
    … let LL denote the strict lower triangular part of A(ib:end , ib:end) + I
    A(ib:end , end+1:n) = LL^-1 * A(ib:end , end+1:n)      … update next b rows of U
    A(end+1:n , end+1:n) = A(end+1:n , end+1:n) - A(end+1:n , ib:end) * A(ib:end , end+1:n)
        … apply delayed updates with a single matrix-multiply with inner dimension b

BLAS 3

Page 53: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.53 Demmel Fa 2001

BLAS2 version of Gaussian Elimination with Partial Pivoting (GEPP)

for i = 1 to n-1
    find and record k where |A(k,i)| = max{i <= j <= n} |A(j,i)|
        … i.e. largest entry in rest of column i
    if |A(k,i)| = 0
        exit with a warning that A is singular, or nearly so
    elseif k != i
        swap rows i and k of A
    end if
    A(i+1:n,i) = A(i+1:n,i) / A(i,i)        … each quotient lies in [-1,1] … BLAS 1
    A(i+1:n,i+1:n) = A(i+1:n,i+1:n) - A(i+1:n,i) * A(i,i+1:n)
        … BLAS 2, most work in this line

Page 54: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.54 Demmel Fa 2001

How to proceed:

° Consider basic parallel matrix multiplication algorithms on simple layouts

• Performance modeling to choose best one

- Time (message) = latency + #words * time-per-word
                 = α + n*β

° Briefly discuss block-cyclic layout

° PBLAS = Parallel BLAS

Page 55: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.55 Demmel Fa 2001

Parallel Matrix Multiply

° Computing C=C+A*B

° Using basic algorithm: 2*n^3 Flops

° Variables are:

• Data layout

• Topology of machine

• Scheduling communication

° Use of performance models for algorithm design

Page 56: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.56 Demmel Fa 2001

1D Layout

° Assume matrices are n x n and n is divisible by p

° A(i) refers to the n by n/p block column that processor i owns (similarly for B(i) and C(i))

° B(i,j) is the n/p by n/p subblock of B in block row i and block column j

• i.e. rows i*n/p through (i+1)*n/p of block column B(j)

° Algorithm uses the formula

C(i) = C(i) + A*B(i) = C(i) + Σ_j A(j)*B(j,i)


Page 57: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.57 Demmel Fa 2001

Matrix Multiply: 1D Layout on Bus or Ring

° Algorithm uses the formula

C(i) = C(i) + A*B(i) = C(i) + Σ_j A(j)*B(j,i)

° First consider a bus-connected machine without broadcast: only one pair of processors can communicate at a time (ethernet)

° Second consider a machine with processors on a ring: all processors may communicate with nearest neighbors simultaneously


Page 58: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.58 Demmel Fa 2001

Naïve MatMul for 1D layout on Bus without Broadcast

Naïve algorithm:

C(myproc) = C(myproc) + A(myproc)*B(myproc,myproc)

for i = 0 to p-1
    for j = 0 to p-1 except i
        if (myproc == i) send A(i) to processor j
        if (myproc == j)
            receive A(i) from processor i
            C(myproc) = C(myproc) + A(i)*B(i,myproc)
        barrier

Cost of inner loop:

    computation:    2*n*(n/p)^2 = 2*n^3/p^2

    communication:  α + β*n^2/p

Page 59: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.59 Demmel Fa 2001

Naïve MatMul (continued)

Cost of inner loop:

    computation:    2*n*(n/p)^2 = 2*n^3/p^2

    communication:  α + β*n^2/p    … approximately

Only 1 pair of processors (i and j) are active on any iteration,
and of those, only i is doing computation
=> the algorithm is almost entirely serial

Running time: (p*(p-1) + 1)*computation + p*(p-1)*communication
            ≈ 2*n^3 + p^2*α + p*n^2*β

This is worse than the serial time and grows with p.

Page 60: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.60 Demmel Fa 2001

Better Matmul for 1D layout on a Processor Ring

° Proc i can communicate with Proc(i-1) and Proc(i+1) simultaneously for all i

Copy A(myproc) into Tmp
C(myproc) = C(myproc) + Tmp*B(myproc , myproc)
for j = 1 to p-1
    Send Tmp to processor myproc+1 mod p
    Receive Tmp from processor myproc-1 mod p
    C(myproc) = C(myproc) + Tmp*B( (myproc-j) mod p , myproc )

° Same idea as for gravity in simple sharks and fish algorithm

° Time of inner loop = 2*(α + β*n^2/p) + 2*n*(n/p)^2

° Total Time = 2*n*(n/p)^2 + (p-1) * Time of inner loop
             ≈ 2*n^3/p + 2*p*α + 2*β*n^2

° Optimal for 1D layout on Ring or Bus, even with Broadcast:

• Perfect speedup for arithmetic

• A(myproc) must move to each other processor; costs at least (p-1) * cost of sending n*(n/p) words

° Parallel Efficiency = 2*n^3 / (p * Total Time)
                      = 1/(1 + α * p^2/(2*n^3) + β * p/(2*n))
                      = 1/(1 + O(p/n))

Grows to 1 as n/p increases (or α and β shrink)
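To check the data movement, here is an added serial Python/NumPy simulation of the ring algorithm (my own illustration; real code would use message passing, here "processor" i just owns block column i of each matrix):

import numpy as np

def matmul_1d_ring(A, B, C, p):
    # Simulate C = C + A*B on p "processors" holding block columns (1D layout).
    # Tmp on processor i starts as A(i) and is shifted one step around the
    # ring each iteration, exactly as in the pseudocode above.
    n = A.shape[0]
    w = n // p                                             # width of one block column
    cols = lambda i: slice(i * w, (i + 1) * w)             # columns owned by processor i
    Tmp = [A[:, cols(i)].copy() for i in range(p)]         # copy A(myproc) into Tmp
    for j in range(p):
        for me in range(p):                                # "forall" processors
            src = (me - j) % p                             # Tmp currently holds A(src)
            C[:, cols(me)] += Tmp[me] @ B[cols(src), cols(me)]   # C(me) += Tmp * B(src, me)
        Tmp = [Tmp[(me - 1) % p] for me in range(p)]       # pass Tmp to the right neighbor
    return C

n, p = 8, 4
rng = np.random.default_rng(2)
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
C = np.zeros((n, n))
print(np.allclose(matmul_1d_ring(A, B, C, p), A @ B))      # True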

Page 61: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.61 Demmel Fa 2001

MatMul with 2D Layout

° Consider processors in 2D grid (physical or logical)

° Processors can communicate with 4 nearest neighbors

• Broadcast along rows and columns

p(0,0) p(0,1) p(0,2)

p(1,0) p(1,1) p(1,2)

p(2,0) p(2,1) p(2,2)

Page 62: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.62 Demmel Fa 2001

Cannon’s Algorithm

… C(i,j) = C(i,j) + Σ_k A(i,k)*B(k,j)

… assume s = sqrt(p) is an integer

forall i = 0 to s-1                       … “skew” A
    left-circular-shift row i of A by i
        … so that A(i,j) is overwritten by A(i, (j+i) mod s)

forall i = 0 to s-1                       … “skew” B
    up-circular-shift column i of B by i
        … so that B(i,j) is overwritten by B((i+j) mod s, j)

for k = 0 to s-1                          … sequential
    forall i = 0 to s-1 and j = 0 to s-1  … all processors in parallel
        C(i,j) = C(i,j) + A(i,j)*B(i,j)
        left-circular-shift each row of A by 1
        up-circular-shift each column of B by 1

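An added serial NumPy simulation of Cannon's algorithm on an s-by-s grid of blocks (my illustration; the block lists play the role of the processors' local memories):

import numpy as np

def cannon(A, B, s):
    # Multiply A @ B by Cannon's algorithm on an s x s grid of blocks (serial simulation).
    n = A.shape[0]
    assert n % s == 0
    b = n // s
    blk = lambda M, i, j: M[i*b:(i+1)*b, j*b:(j+1)*b]
    # Skew: shift row i of A's blocks left by i, and column j of B's blocks up by j.
    Ab = [[blk(A, i, (j + i) % s).copy() for j in range(s)] for i in range(s)]
    Bb = [[blk(B, (i + j) % s, j).copy() for j in range(s)] for i in range(s)]
    C = np.zeros_like(A)
    for _ in range(s):                        # s sequential steps
        for i in range(s):                    # all "processors" (i, j) in parallel
            for j in range(s):
                C[i*b:(i+1)*b, j*b:(j+1)*b] += Ab[i][j] @ Bb[i][j]
        # left-circular-shift each row of A's blocks and up-circular-shift each column of B's blocks by 1
        Ab = [[Ab[i][(j + 1) % s] for j in range(s)] for i in range(s)]
        Bb = [[Bb[(i + 1) % s][j] for j in range(s)] for i in range(s)]
    return C

rng = np.random.default_rng(3)
A, B = rng.standard_normal((6, 6)), rng.standard_normal((6, 6))
print(np.allclose(cannon(A, B, s=3), A @ B))  # True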

Page 63: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.63 Demmel Fa 2001

Communication in Cannon

C(1,2) = A(1,0) * B(0,2) + A(1,1) * B(1,2) + A(1,2) * B(2,2)

Page 64: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.64 Demmel Fa 2001

Cost of Cannon’s Algorithm

forall i = 0 to s-1                           … recall s = sqrt(p)
    left-circular-shift row i of A by i       … cost = s*(α + β*n^2/p)

forall i = 0 to s-1
    up-circular-shift column i of B by i      … cost = s*(α + β*n^2/p)

for k = 0 to s-1
    forall i = 0 to s-1 and j = 0 to s-1
        C(i,j) = C(i,j) + A(i,j)*B(i,j)       … cost = 2*(n/s)^3 = 2*n^3/p^(3/2)
        left-circular-shift each row of A by 1        … cost = α + β*n^2/p
        up-circular-shift each column of B by 1       … cost = α + β*n^2/p

° Total Time = 2*n^3/p + 4*s*α + 4*β*n^2/s

° Parallel Efficiency = 2*n^3 / (p * Total Time)
                      = 1/( 1 + α * 2*(s/n)^3 + β * 2*(s/n) )
                      = 1/( 1 + O(sqrt(p)/n) )

° Grows to 1 as n/s = n/sqrt(p) = sqrt(data per processor) grows

° Better than 1D layout, which had Efficiency = 1/(1 + O(p/n))

Page 65: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.65 Demmel Fa 2001

Drawbacks to Cannon

° Hard to generalize for

• p not a perfect square

• A and B not square

• Dimensions of A, B not perfectly divisible by s=sqrt(p)

• A and B not “aligned” in the way they are stored on processors

• block-cyclic layouts

° Memory hog (extra copies of local matrices)

Page 66: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.66 Demmel Fa 2001

SUMMA = Scalable Universal Matrix Multiply Algorithm

° Slightly less efficient, but simpler and easier to generalize

° Presentation from van de Geijn and Watts

• www.netlib.org/lapack/lawns/lawn96.ps

• Similar ideas appeared many times

° Used in practice in PBLAS = Parallel BLAS

• www.netlib.org/lapack/lawns/lawn100.ps

Page 67: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.67 Demmel Fa 2001

SUMMA

[Figure: C(I,J) is updated using A(I,k) and B(k,J), where I indexes a processor row, J a processor column, and k a block of columns of A / rows of B]

° I, J represent all rows, columns owned by a processor

° k is a single row or column (or a block of b rows or columns)

° C(I,J) = C(I,J) + Σ_k A(I,k)*B(k,J)

° Assume a pr by pc processor grid (pr = pc = 4 above)

for k = 0 to n-1                        … or n/b-1, where b is the block size
                                        …    = # cols in A(I,k) and # rows in B(k,J)
    for all I = 1 to pr                 … in parallel
        owner of A(I,k) broadcasts it to whole processor row
    for all J = 1 to pc                 … in parallel
        owner of B(k,J) broadcasts it to whole processor column
    Receive A(I,k) into Acol
    Receive B(k,J) into Brow
    C( myproc , myproc ) = C( myproc , myproc ) + Acol * Brow
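An added serial NumPy simulation of SUMMA's outer-product structure (my illustration): each step "broadcasts" a block of b columns of A and b rows of B and performs one rank-b update of all of C.

import numpy as np

def summa(A, B, b):
    # C = A @ B built from n/b rank-b updates, which is the computation SUMMA distributes.
    # In the parallel algorithm each processor owns a block of C and receives only its
    # slice of Acol (along its processor row) and Brow (along its processor column);
    # here the "broadcast" is simply a slice of the global matrices.
    n = A.shape[0]
    C = np.zeros((n, B.shape[1]))
    for k in range(0, n, b):
        Acol = A[:, k:k+b]                 # block of b columns of A, broadcast along processor rows
        Brow = B[k:k+b, :]                 # block of b rows of B, broadcast along processor columns
        C += Acol @ Brow                   # every processor updates its own block of C
    return C

rng = np.random.default_rng(4)
A, B = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
print(np.allclose(summa(A, B, b=2), A @ B))    # True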

Page 68: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.68 Demmel Fa 2001

SUMMA performance

for k = 0 to n/b-1
    for all I = 1 to s                  … s = sqrt(p)
        owner of A(I,k) broadcasts it to whole processor row
            … time = log s * (α + β * b*n/s), using a tree
    for all J = 1 to s
        owner of B(k,J) broadcasts it to whole processor column
            … time = log s * (α + β * b*n/s), using a tree
    Receive A(I,k) into Acol
    Receive B(k,J) into Brow
    C( myproc , myproc ) = C( myproc , myproc ) + Acol * Brow
            … time = 2*(n/s)^2*b

° Total time = 2*n^3/p + α * log p * n/b + β * log p * n^2/s

° Parallel Efficiency = 1/(1 + α * log p * p / (2*b*n^2) + β * log p * s/(2*n))

° ~Same β term as Cannon, except for the log p factor; log p grows slowly, so this is ok

° Latency (α) term can be larger, depending on b:
    When b = 1, get α * log p * n
    As b grows to n/s, the term shrinks to α * log p * s (log p times Cannon)

° Temporary storage grows like 2*b*n/s

° Can change b to trade off latency cost with memory

Page 69: CS 267 Applications of Parallel Computers Dense Linear Algebra

CS267 Dense Linear Algebra I.69 Demmel Fa 2001

Summary of Parallel Matrix Multiply Algorithms

° 1D Layout

• Bus without broadcast - slower than serial

• Nearest neighbor communication on a ring (or bus with broadcast): Efficiency = 1/(1 + O(p/n))

° 2D Layout

• Cannon

- Efficiency = 1/(1 + O(p^(1/2)/n))

- Hard to generalize for general p, n, block cyclic, alignment

• SUMMA

- Efficiency = 1/(1 + O(log p * p / (b*n^2) + log p * p^(1/2)/n))

- Very General

- b small => less memory, lower efficiency

- b large => more memory, high efficiency

• Gustavson et al

- Efficiency = 1/(1 + O(p^(1/3)/n)) ??
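A small added Python sketch of the efficiency models summarized above (my own illustration; the α and β values are hypothetical, expressed in units of flop time), comparing the 1D ring, Cannon, and SUMMA estimates:

import math

def eff_1d(n, p, alpha, beta):
    # 1D ring model: 1/(1 + alpha*p^2/(2n^3) + beta*p/(2n))
    return 1.0 / (1.0 + alpha * p**2 / (2 * n**3) + beta * p / (2 * n))

def eff_cannon(n, p, alpha, beta):
    # Cannon model: 1/(1 + 2*alpha*(s/n)^3 + 2*beta*(s/n)), s = sqrt(p)
    s = math.isqrt(p)
    return 1.0 / (1.0 + 2 * alpha * (s / n)**3 + 2 * beta * (s / n))

def eff_summa(n, p, alpha, beta, b):
    # SUMMA model: 1/(1 + alpha*log(p)*p/(2*b*n^2) + beta*log(p)*s/(2n))
    s = math.isqrt(p)
    lg = math.log2(p)
    return 1.0 / (1.0 + alpha * lg * p / (2 * b * n**2) + beta * lg * s / (2 * n))

alpha, beta, p = 1.0e4, 1.0e2, 64          # hypothetical machine parameters
for n in (1000, 4000, 16000):
    print(n,
          round(eff_1d(n, p, alpha, beta), 3),
          round(eff_cannon(n, p, alpha, beta), 3),
          round(eff_summa(n, p, alpha, beta, b=n // math.isqrt(p)), 3))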