CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV)
James Demmel
www.cs.berkeley.edu/~demmel/cs267_Spr14


Page 1

CS267 – Lecture 15

Automatic Performance Tuning and

Sparse-Matrix-Vector-Multiplication (SpMV)

James Demmel

www.cs.berkeley.edu/~demmel/cs267_Spr14

Page 2

Outline

• Motivation for Automatic Performance Tuning
• Results for sparse matrix kernels
• OSKI = Optimized Sparse Kernel Interface
  – pOSKI for multicore
• Tuning Higher Level Algorithms
• Future Work, Class Projects

• BeBOP: Berkeley Benchmarking and Optimization Group
  – Many results shown from current and former members
  – Meet weekly W 12-1:30, in 380 Soda

Page 3

Motivation for Automatic Performance Tuning

• Writing high performance software is hard
  – Make programming easier while getting high speed
• Ideal: program in your favorite high level language (Matlab, Python, PETSc…) and get a high fraction of peak performance
• Reality: Best algorithm (and its implementation) can depend strongly on the problem, computer architecture, compiler,…
  – Best choice can depend on knowing a lot of applied mathematics and computer science
• How much of this can we teach?
• How much of this can we automate?

Page 4

Examples of Automatic Performance Tuning (1)

• Dense BLAS
  – Sequential
  – PHiPAC (UCB), then ATLAS (UTK) (used in Matlab)
  – math-atlas.sourceforge.net/
  – Internal vendor tools
• Fast Fourier Transform (FFT) & variations
  – Sequential and Parallel
  – FFTW (MIT)
  – www.fftw.org
• Digital Signal Processing
  – SPIRAL: www.spiral.net (CMU)
• Communication Collectives (UCB, UTK)
• Rose (LLNL), Bernoulli (Cornell), Telescoping Languages (Rice), …
• More projects, conferences, government reports, …

Page 5

Examples of Automatic Performance Tuning (2)

• What do dense BLAS, FFTs, signal processing, MPI reductions have in common?
  – Can do the tuning off-line: once per architecture, algorithm
  – Can take as much time as necessary (hours, a week…)
  – At run-time, algorithm choice may depend only on a few parameters
    • Matrix dimension, size of FFT, etc.

Page 6

Tuning Register Tile Sizes (Dense Matrix Multiply)

333 MHz Sun Ultra 2i

2-D slice of 3-D space; implementations color-coded by performance in Mflop/s

16 registers, but 2-by-3 tile size fastest

Needle in a haystack

Page 7

Example: Select a Matmul Implementation

Page 8

Example: Support Vector Classification

Page 9

Machine Learning in Automatic Performance Tuning

• References
  – Statistical Models for Empirical Search-Based Performance Tuning, Richard Vuduc, James Demmel, and Jeff A. Bilmes, International Journal of High Performance Computing Applications, 18 (1), pp. 65-94, February 2004.
  – Predicting and Optimizing System Utilization and Performance via Statistical Machine Learning, Archana Ganapathi, Computer Science PhD Thesis, University of California, Berkeley, UCB//EECS-2009-181.

Page 10

Machine Learning in Automatic Performance Tuning

• More references
  – Machine Learning for Predictive Autotuning with Boosted Regression Trees, J. Bergstra et al., Innovative Parallel Computing, 2012.
  – Practical Bayesian Optimization of Machine Learning Algorithms, J. Snoek et al., NIPS 2012.
  – OpenTuner: An Extensible Framework for Program Autotuning, S. Amarasinghe et al., dspace.mit.edu/handle/1721.1/81958.

Page 11

Examples of Automatic Performance Tuning (3)

• What do dense BLAS, FFTs, signal processing, MPI reductions have in common?
  – Can do the tuning off-line: once per architecture, algorithm
  – Can take as much time as necessary (hours, a week…)
  – At run-time, algorithm choice may depend only on a few parameters
    • Matrix dimension, size of FFT, etc.
• Can’t always do off-line tuning
  – Algorithm and implementation may strongly depend on data only known at run-time
  – Ex: Sparse matrix nonzero pattern determines both the best data structure and implementation of sparse-matrix-vector multiplication (SpMV)
  – Part of the search for the best algorithm must be done (very quickly!) at run-time

Page 12

Source: Accelerator Cavity Design Problem (Ko via Husbands)

Page 13

Linear Programming Matrix

Page 14

A Sparse Matrix You Encounter Every Day

Page 15

SpMV with Compressed Sparse Row (CSR) Storage

Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)·x(j)

for each row i
  for k = ptr[i] to ptr[i+1]-1 do
    y[i] = y[i] + val[k]*x[ind[k]]
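For reference, the same CSR kernel written as a small, self-contained C routine (0-based indexing; the function name and signature are illustrative, not part of any library):

/* y <- y + A*x for an n-row sparse matrix A in CSR storage:
 * ptr[i] .. ptr[i+1]-1 index the nonzeros of row i,
 * ind[k] is the column of the k-th stored nonzero, val[k] its value. */
void spmv_csr(int n, const int *ptr, const int *ind,
              const double *val, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        double yi = y[i];                  /* accumulate row i in a register */
        for (int k = ptr[i]; k < ptr[i+1]; k++)
            yi += val[k] * x[ind[k]];      /* indirect, irregular access to x */
        y[i] = yi;
    }
}

Note the two flops (one multiply, one add) per stored nonzero; the loop is memory-bound because val, ind, and the indirectly addressed x all stream through the cache.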

Page 16

Example: The Difficulty of Tuning

• n = 21200
• nnz = 1.5 M
• kernel: SpMV

• Source: NASA structural analysis problem

Page 17

Example: The Difficulty of Tuning

• n = 21200
• nnz = 1.5 M
• kernel: SpMV

• Source: NASA structural analysis problem

• 8x8 dense substructure

Page 18

Taking advantage of block structure in SpMV

• Bottleneck is time to get the matrix from memory
  – Only 2 flops for each nonzero in the matrix
• Don’t store each nonzero with its own index; instead store each nonzero r-by-c block with one index
  – Storage drops by up to 2x, if r·c >> 1, all 32-bit quantities
  – Time to fetch matrix from memory decreases
• Change both data structure and algorithm
  – Need to pick r and c
  – Need to change the algorithm accordingly
• In the example, is r=c=8 the best choice?
  – Minimizes storage, so looks like a good idea…
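To make the r-by-c idea concrete, the sketch below shows the SpMV inner loop for a fixed 2x2 block-CSR (BCSR) layout. The array names (bptr, bind, bval) and the convention that partial blocks are padded with explicit zeros are assumptions for illustration, not OSKI's actual data structure.

/* y <- y + A*x with A stored as 2x2 BCSR: block row I covers rows 2I and 2I+1;
 * bptr[I] .. bptr[I+1]-1 index its blocks, bind[K] is the block-column index,
 * and bval[4K .. 4K+3] holds the 2x2 block in row-major order. */
void spmv_bcsr_2x2(int nblockrows, const int *bptr, const int *bind,
                   const double *bval, const double *x, double *y)
{
    for (int I = 0; I < nblockrows; I++) {
        double y0 = y[2*I], y1 = y[2*I + 1];
        for (int K = bptr[I]; K < bptr[I+1]; K++) {
            const double *b  = &bval[4*K];
            const double *xx = &x[2*bind[K]];
            /* one index per 2x2 block; the four multiply-adds are unrolled */
            y0 += b[0]*xx[0] + b[1]*xx[1];
            y1 += b[2]*xx[0] + b[3]*xx[1];
        }
        y[2*I]     = y0;
        y[2*I + 1] = y1;
    }
}

Compared with CSR, one 32-bit index now covers four values, and the unrolled block multiply reuses x in registers.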

Page 19

Speedups on Itanium 2: The Need for Search

[Figure: register-profile heat maps in Mflop/s, comparing the reference implementation with the best register block size (4x2)]

Page 20

Register Profile: Itanium 2

[Figure: register blocking profile; performance ranges from 190 Mflop/s to 1190 Mflop/s]

Page 21

SpMV Performance (Matrix #2): Generation 1

[Figure: SpMV register profiles on four machines, with fraction of peak achieved: Power3 (13%), Power4 (14%), Itanium 1 (7%), Itanium 2 (31%); per-machine performance ranges span roughly 100 Mflop/s to 1.1 Gflop/s]

Page 22

Register Profiles: IBM and Intel IA-64

[Figure: register profiles on IBM and Intel IA-64 machines, with fraction of peak: Power3 (17%), Power4 (16%), Itanium 1 (8%), Itanium 2 (33%); per-machine performance ranges span roughly 107 Mflop/s to 1.2 Gflop/s]

Page 23

Register Profiles: Sun and Intel x86

[Figure: register profiles on Sun and Intel x86 machines, with fraction of peak: Ultra 2i (11%), Ultra 3 (5%), Pentium III (21%), Pentium III-M (15%); per-machine performance ranges span roughly 35 to 122 Mflop/s]

Page 24

Another example of tuning challenges

• More complicated non-zero structure in general

• N = 16614
• NNZ = 1.1 M

Page 25

Zoom in to top corner

• More complicated non-zero structure in general

• N = 16614
• NNZ = 1.1 M

Page 26

3x3 blocks look natural, but…

• More complicated non-zero structure in general

• Example: 3x3 blocking
  – Logical grid of 3x3 cells

• But would lead to lots of “fill-in”

Page 27

Extra Work Can Improve Efficiency!

• More complicated non-zero structure in general

• Example: 3x3 blocking
  – Logical grid of 3x3 cells
  – Fill in explicit zeros
  – Unroll 3x3 block multiplies
  – “Fill ratio” = 1.5
• On Pentium III: 1.5x speedup!
  – Actual Mflop rate is 1.5² = 2.25x higher

Page 28

Automatic Register Block Size Selection

• Selecting the r x c block size
  – Off-line benchmark
    • Precompute Mflops(r,c) using a dense matrix A for each r x c
    • Once per machine/architecture
  – Run-time “search”
    • Sample A to estimate Fill(r,c) for each r x c
  – Run-time heuristic model
    • Choose r, c to minimize time ~ Fill(r,c) / Mflops(r,c)
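A minimal sketch of that run-time heuristic, assuming the off-line benchmark table Mflops(r,c) and the sampled Fill(r,c) estimates are already in hand; the array layout and the 8x8 search range are illustrative:

/* Choose the register block size (r,c) minimizing estimated time,
 * modeled as Fill(r,c) / Mflops(r,c). */
#define RMAX 8
#define CMAX 8

void choose_block_size(const double mflops[RMAX+1][CMAX+1],
                       const double fill[RMAX+1][CMAX+1],
                       int *rbest, int *cbest)
{
    double best = 1e300;
    *rbest = 1; *cbest = 1;
    for (int r = 1; r <= RMAX; r++)
        for (int c = 1; c <= CMAX; c++) {
            double t = fill[r][c] / mflops[r][c];  /* relative time estimate */
            if (t < best) { best = t; *rbest = r; *cbest = c; }
        }
}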

Page 29

Accurate and Efficient Adaptive Fill Estimation

• Idea: Sample the matrix
  – Fraction of matrix to sample: s ∈ [0,1]
  – Cost ~ O(s * nnz)
  – Control cost by controlling s
• Search at run-time: the constant matters!
• Control s automatically by computing statistical confidence intervals
  – Idea: Monitor variance
• Cost of tuning
  – Lower bound: convert matrix in 5 to 40 unblocked SpMVs
  – Heuristic: 1 to 11 SpMVs
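One way such a sampling estimator can be organized is sketched below (aligned r-row strips, sampling roughly a fraction s of them); this illustrates the idea rather than reproducing OSKI's actual estimator.

#include <stdlib.h>

/* Estimate the fill ratio of r x c blocking by scanning a fraction s of the
 * r-row strips of a CSR matrix: fill = (stored block entries) / (true nnz). */
double estimate_fill(int n, int ncols, const int *ptr, const int *ind,
                     int r, int c, double s)
{
    int nblockcols = (ncols + c - 1) / c;
    char *mark = calloc((size_t)nblockcols, 1);              /* block-column flags */
    long nnz_seen = 0, blocks_seen = 0;
    int stride = (s > 0.0 && s < 1.0) ? (int)(1.0 / s) : 1;  /* every (1/s)-th strip */

    for (int i0 = 0; i0 < n; i0 += r * stride) {
        int touched = 0;
        for (int i = i0; i < i0 + r && i < n; i++)
            for (int k = ptr[i]; k < ptr[i+1]; k++) {
                int J = ind[k] / c;               /* block column of this nonzero */
                nnz_seen++;
                if (!mark[J]) { mark[J] = 1; touched++; }
            }
        blocks_seen += touched;
        for (int i = i0; i < i0 + r && i < n; i++)   /* clear only the marks we set */
            for (int k = ptr[i]; k < ptr[i+1]; k++)
                mark[ind[k] / c] = 0;
    }
    free(mark);
    return nnz_seen ? (double)(blocks_seen * r * c) / (double)nnz_seen : 1.0;
}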

Page 30

Accuracy of the Tuning Heuristics (1/4)

NOTE: “Fair” flops used (ops on explicit zeros not counted as “work”). See p. 375 of Vuduc’s thesis for the matrices.

Page 31

Accuracy of the Tuning Heuristics (2/4)

Page 32

Accuracy of the Tuning Heuristics (2/4) - DGEMV

Page 33

Upper Bounds on Performance for blocked SpMV

• P = (flops) / (time)
  – Flops = 2 * nnz(A)
• Lower bound on time: Two main assumptions
  – 1. Count memory ops only (streaming)
  – 2. Count only compulsory, capacity misses: ignore conflicts
• Account for line sizes
• Account for matrix size and nnz
• Charge a minimum access “latency” α_i at each L_i cache and α_mem at memory
  – e.g., Saavedra-Barrera and PMaC MAPS benchmarks

Time ≥ Σ_i α_i · Hits_i + α_mem · Hits_mem
  where Hits_1 = Loads - Misses_1, Hits_i = Misses_(i-1) - Misses_i for i > 1, and Hits_mem = misses at the last cache level
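Written out as a small C helper, the lower bound above is just a weighted sum of hit counts at each level; the 1-based indexing of alpha[] and misses[] is an assumption for readability.

/* Lower bound on time from the latency model above.  Levels 1..L are caches:
 * alpha[i] is the charged access latency at level i, misses[i] the miss count
 * at level i, alpha_mem the memory latency, loads the total memory operations.
 * Index 0 of alpha[] and misses[] is unused. */
double time_lower_bound(int L, double loads, const double *misses,
                        const double *alpha, double alpha_mem)
{
    double t = alpha[1] * (loads - misses[1]);      /* hits in L1 */
    for (int i = 2; i <= L; i++)
        t += alpha[i] * (misses[i-1] - misses[i]);  /* hits at cache level i */
    t += alpha_mem * misses[L];                     /* accesses served by memory */
    return t;
}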

Page 34

Example: L2 Misses on Itanium 2

Misses measured using PAPI [Browne ’00]

Page 35

Example: Bounds on Itanium 2

Page 36

Example: Bounds on Itanium 2

Page 37

Example: Bounds on Itanium 2

Page 38

Summary of Other Sequential Performance Optimizations

• Optimizations for SpMV
  – Register blocking (RB): up to 4x over CSR
  – Variable block splitting: 2.1x over CSR, 1.8x over RB
  – Diagonals: 2x over CSR
  – Reordering to create dense structure + splitting: 2x over CSR
  – Symmetry: 2.8x over CSR, 2.6x over RB
  – Cache blocking: 2.8x over CSR
  – Multiple vectors (SpMM): 7x over CSR
  – And combinations…
• Sparse triangular solve
  – Hybrid sparse/dense data structure: 1.8x over CSR
• Higher-level kernels
  – A·Aᵀ·x, Aᵀ·A·x: 4x over CSR, 1.8x over RB
  – A²·x: 2x over CSR, 1.5x over RB
  – [A·x, A²·x, A³·x, …, Aᵏ·x]

Page 39

Example: Sparse Triangular Factor

• Raefsky4 (structural problem) + SuperLU + colmmd
• N = 19779, nnz = 12.6 M

Dense trailing triangle: dim = 2268, 20% of total nonzeros (can be as high as 90+%!)
1.8x over CSR

Page 40

Cache Optimizations for A·Aᵀ·x

• Cache-level: Interleave multiplication by A and Aᵀ
  – Only fetch A from memory once
• Register-level: a_iᵀ to be an r×c block row, or diag row

A·Aᵀ·x = [a_1 a_2 … a_n] · [a_1ᵀ; a_2ᵀ; …; a_nᵀ] · x = Σ_(i=1..n) a_i · (a_iᵀ · x)

  where a_iᵀ · x is a dot product and a_i · (a_iᵀ · x) is an “axpy”

Page 41

Example: Combining Optimizations (1/2)

• Register blocking, symmetry, multiple (k) vectors
  – Three low-level tuning parameters: r, c, v

[Figure: Y += A · X, with A in r×c register blocks and X, Y processed v columns (of k) at a time]

Page 42

Example: Combining Optimizations (2/2)

• Register blocking, symmetry, and multiple vectors [Ben Lee @ UCB]
  – Symmetric, blocked, 1 vector
    • Up to 2.6x over nonsymmetric, blocked, 1 vector
  – Symmetric, blocked, k vectors
    • Up to 2.1x over nonsymmetric, blocked, k vectors
    • Up to 7.3x over nonsymmetric, nonblocked, 1 vector
  – Symmetric storage: up to 64.7% savings

Page 43

Why so much about SpMV? Contents of the “Sparse Motif”

• What is “sparse linear algebra”?
• Direct solvers for Ax=b, least squares
  – Sparse Gaussian elimination, QR for least squares
  – How to choose: crd.lbl.gov/~xiaoye/SuperLU/SparseDirectSurvey.pdf
• Iterative solvers for Ax=b, least squares, Ax=λx, SVD
  – Used when SpMV is the only affordable operation on A
    • Krylov Subspace Methods
  – How to choose
    • For Ax=b: www.netlib.org/templates/Templates.html
    • For Ax=λx: www.cs.ucdavis.edu/~bai/ET/contents.html
• What about Multigrid?
  – In the overlap of the sparse and (un)structured grid motifs – details later

Page 44

How to choose an iterative solver - example

All methods (GMRES, CGS, CG, …) depend on SpMV (or variations…). See www.netlib.org/templates/Templates.html for details.

Page 45

Motif/Dwarf: Common Computational Methods (Red Hot → Blue Cool)

Page 46

Potential Impact on Applications: Omega3P

• Application: accelerator cavity design [Ko]
• Relevant optimization techniques
  – Symmetric storage
  – Register blocking
  – Reordering, to create more dense blocks
    • Reverse Cuthill-McKee ordering to reduce bandwidth
      – Do breadth-first search, number nodes in reverse order visited
    • Traveling Salesman Problem-based ordering to create blocks
      – Nodes = columns of A
      – Weights(u, v) = no. of nonzeros u, v have in common
      – Tour = ordering of columns
      – Choose maximum weight tour
      – See [Pinar & Heath ’97]
• 2.1x speedup on Power 4

Page 47

Source: Accelerator Cavity Design Problem (Ko via Husbands)

Page 48

Post-RCM Reordering

Page 49

100x100 Submatrix Along Diagonal

Page 50

“Microscopic” Effect of RCM Reordering

Before: Green + Red; After: Green + Blue

Page 51

“Microscopic” Effect of Combined RCM+TSP Reordering

Before: Green + Red; After: Green + Blue

Page 52

(Omega3P)

Page 53

How do permutations affect algorithms?

• A = original matrix, A_P = A with permuted rows and columns
• Naïve approach: permute x, multiply y = A_P·x, permute y
• Faster way to solve Ax=b
  – Write A_P = PᵀAP, where P is a permutation matrix
  – Solve A_P·x_P = Pᵀb for x_P, using SpMV with A_P, then let x = P·x_P
  – Only need to permute vectors twice, not twice per iteration
• Faster way to solve Ax=λx
  – A and A_P have the same eigenvalues, no vectors to permute!
  – A_P·x_P = λ·x_P implies Ax = λx where x = P·x_P
• Where else do optimizations change higher level algorithms? More later…
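A sketch of the surrounding bookkeeping (the perm[] array and helper names are illustrative): permute the right-hand side once before the iteration, run every SpMV on the reordered matrix A_P, and un-permute the solution once at the end.

/* perm[i] gives the original index that lands in position i after reordering. */
void permute_gather(int n, const int *perm, const double *v_old, double *v_new)
{
    for (int i = 0; i < n; i++) v_new[i] = v_old[perm[i]];   /* v_new = P'·v_old */
}

void permute_scatter(int n, const int *perm, const double *v_new, double *v_old)
{
    for (int i = 0; i < n; i++) v_old[perm[i]] = v_new[i];   /* v_old = P·v_new */
}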

Page 54


Tuning SpMV on Multicore

Page 55

Multicore SMPs Used

AMD Opteron 2356 (Barcelona), Intel Xeon E5345 (Clovertown), IBM QS20 Cell Blade, Sun T2+ T5140 (Victoria Falls)

Source: Sam Williams

Page 56

Multicore SMPs Used (conventional cache-based memory hierarchy)

AMD Opteron 2356 (Barcelona), Intel Xeon E5345 (Clovertown), Sun T2+ T5140 (Victoria Falls), IBM QS20 Cell Blade

Source: Sam Williams

Page 57

Multicore SMPs Used (local store-based memory hierarchy)

IBM QS20 Cell Blade, Sun T2+ T5140 (Victoria Falls), AMD Opteron 2356 (Barcelona), Intel Xeon E5345 (Clovertown)

Source: Sam Williams

Page 58

Multicore SMPs Used (CMT = Chip-MultiThreading)

AMD Opteron 2356 (Barcelona), Intel Xeon E5345 (Clovertown), IBM QS20 Cell Blade, Sun T2+ T5140 (Victoria Falls)

Source: Sam Williams

Page 59

Multicore SMPs Used (threads)

AMD Opteron 2356 (Barcelona): 8 threads; Intel Xeon E5345 (Clovertown): 8 threads; IBM QS20 Cell Blade: 16 threads (*SPEs only); Sun T2+ T5140 (Victoria Falls): 128 threads

Source: Sam Williams

Page 60

Multicore SMPs Used (Non-Uniform Memory Access - NUMA)

AMD Opteron 2356 (Barcelona), Intel Xeon E5345 (Clovertown), IBM QS20 Cell Blade, Sun T2+ T5140 (Victoria Falls)

Source: Sam Williams

Page 61

Multicore SMPs Used (peak double precision flops)

AMD Opteron 2356 (Barcelona) and Intel Xeon E5345 (Clovertown): roughly 74-75 GFlop/s each; IBM QS20 Cell Blade: 29 GFlop/s (*SPEs only); Sun T2+ T5140 (Victoria Falls): 19 GFlop/s

Source: Sam Williams

Page 62

Multicore SMPs Used (total DRAM bandwidth)

AMD Opteron 2356 (Barcelona): 21 GB/s; Intel Xeon E5345 (Clovertown): 21 GB/s (read), 10 GB/s (write); IBM QS20 Cell Blade: 51 GB/s; Sun T2+ T5140 (Victoria Falls): 42 GB/s (read), 21 GB/s (write)

Source: Sam Williams

Page 63

Results from “Auto-tuning Sparse Matrix-Vector Multiplication (SpMV)”:

Samuel Williams, Leonid Oliker, Richard Vuduc, John Shalf, Katherine Yelick, James Demmel, "Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms", Supercomputing (SC), 2007.

Page 64

Test matrices

• Suite of 14 matrices
• All bigger than the caches of our SMPs
• We’ll also include a median performance number

Matrices: Dense, Protein, FEM/Spheres, FEM/Cantilever, Wind Tunnel, FEM/Harbor, QCD, FEM/Ship, Economics, Epidemiology, FEM/Accelerator, Circuit, webbase, LP
  – Dense: 2K x 2K dense matrix stored in sparse format
  – Well structured (sorted by nonzeros/row)
  – Poorly structured hodgepodge
  – Extreme aspect ratio (linear programming)

Source: Sam Williams

Page 65

SpMV Parallelization

• How do we parallelize a matrix-vector multiplication?
• By row blocks (see the partitioning sketch below)
• No inter-thread data dependencies, but random access to x

[Figure: matrix partitioned into contiguous row blocks, one per thread (thread 0 … thread 3)]

Source: Sam Williams
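A sketch of the row-block partitioning step from the slide above, balancing nonzeros rather than rows (SpMV work is proportional to nnz); the interface is illustrative.

/* Split rows 0..n-1 into nthreads contiguous blocks with roughly equal nnz.
 * ptr is the CSR row-pointer array; on return, thread t owns rows
 * row_start[t] .. row_start[t+1]-1. */
void partition_rows_by_nnz(int n, const int *ptr, int nthreads, int *row_start)
{
    long nnz = ptr[n];
    int t = 0;
    row_start[0] = 0;
    for (int i = 0; i < n && t < nthreads - 1; i++) {
        long target = (long)(t + 1) * nnz / nthreads;   /* cumulative-nnz cut point */
        if (ptr[i+1] >= target) row_start[++t] = i + 1;
    }
    while (t < nthreads - 1) row_start[++t] = n;        /* handle degenerate cases */
    row_start[nthreads] = n;
}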

Page 66

SpMV Performance (simple parallelization)

• Out-of-the-box SpMV performance on a suite of 14 matrices
• Simplest solution = parallelization by rows
• Scalability isn’t great
• Can we do better?

[Chart: naïve serial vs. naïve Pthreads performance per machine]

Source: Sam Williams

Page 67

Summary of Multicore Optimizations

• NUMA - Non-Uniform Memory Access
  – pin submatrices to memories close to the cores assigned to them
• Prefetch - values, indices, and/or vectors
  – use exhaustive search on prefetch distance
• Matrix Compression - not just register blocking (BCSR)
  – 32- or 16-bit indices, Block Coordinate format for submatrices
• Cache-blocking
  – 2D partition of matrix, so the needed parts of x, y fit in cache

Page 68

SpMV Performance (Matrix Compression)

• After maximizing memory bandwidth, the only hope is to minimize memory traffic.
• Compression: exploit register blocking, other formats, smaller indices
• Use a traffic minimization heuristic rather than search
• Benefit is clearly matrix-dependent.
• Register blocking enables efficient software prefetching (one per cache line)

Source: Sam Williams

Page 69

Auto-tuned SpMV Performance (cache and TLB blocking)

• Fully auto-tuned SpMV performance across the suite of matrices
• Why do some optimizations work better on some architectures?
  – matrices with naturally small working sets
  – architectures with giant caches

[Chart legend: Naïve, Naïve Pthreads, +NUMA/Affinity, +SW Prefetching, +Matrix Compression, +Cache/LS/TLB Blocking]

Source: Sam Williams

Page 70

Auto-tuned SpMV Performance (architecture specific optimizations)

• Fully auto-tuned SpMV performance across the suite of matrices
• Included SPE/local store optimized version
• Why do some optimizations work better on some architectures?

[Chart legend: Naïve, Naïve Pthreads, +NUMA/Affinity, +SW Prefetching, +Matrix Compression, +Cache/LS/TLB Blocking]

Source: Sam Williams

Page 71

Auto-tuned SpMV Performance (max speedup)

• Fully auto-tuned SpMV performance across the suite of matrices
• Included SPE/local store optimized version
• Why do some optimizations work better on some architectures?
• Maximum speedups shown: 2.7x, 4.0x, 2.9x, 35x

[Chart legend: Naïve, Naïve Pthreads, +NUMA/Affinity, +SW Prefetching, +Matrix Compression, +Cache/LS/TLB Blocking]

Source: Sam Williams

Page 72

Optimized Sparse Kernel Interface - pOSKI
bebop.cs.berkeley.edu/poski

• Provides sparse kernels automatically tuned for user’s matrix & machine
  – BLAS-style functionality: SpMV, Ax & Aᵀy
  – Hides complexity of run-time tuning
• Based on OSKI - bebop.cs.berkeley.edu/oski
  – Autotuner for sequential sparse matrix operations:
    • SpMV (Ax and Aᵀx), AᵀAx, solve sparse triangular systems, …
  – So far pOSKI only does multicore optimizations of SpMV
    • Class projects!
  – Up to 4.5x faster SpMV (Ax) on Intel Sandy Bridge E
• On-going work by the Berkeley Benchmarking and Optimization (BeBOP) group

Page 73

Optimizations in pOSKI, so far

• Fully automatic heuristics for
  – Sparse matrix-vector multiply (Ax, Aᵀx)
    • Register-level blocking, thread-level blocking
    • SIMD, software prefetching, software pipelining, loop unrolling
    • NUMA-aware allocations
• “Plug-in” extensibility
  – Very advanced users may write their own heuristics, create new data structures/code variants and dynamically add them to the system, using the embedded scripting language Lua
• Other optimizations under development
  – Cache-level blocking, reordering (RCM, TSP), variable block structure, index compression, symmetric storage, etc.

Page 74

How the pOSKI Tunes (Overview)

[Diagram - Library install-time (offline): 1. Build for target architecture; 2. Benchmark generated code variants on a sample dense matrix, producing benchmark data & selected code variants. Tuning parameters: (r,c) = register block size, (d) = prefetching distance, (imp) = SIMD implementation.
Application run-time: 1. Partition the user’s matrix into submatrices, one per thread; 2. Evaluate heuristic models (empirical & heuristic search, using benchmark data, history, user’s hints, and workload from program monitoring); 3. Select data structure & code for each submatrix. To user: matrix handle for kernel calls.]

Page 75

How the pOSKI Tunes (Overview)

• At library build/install-time
  – Generate code variants
    • Code generator (Python) generates code variants for various implementations
  – Collect benchmark data
    • Measures and records speed of possible sparse data structure and code variants on target architecture
  – Select best code variants & benchmark data
    • prefetching distance, SIMD implementation
  – Installation process uses standard, portable GNU AutoTools
• At run-time
  – Library “tunes” using heuristic models
    • Models analyze user’s matrix & benchmark data to choose optimized data structure and code
    • User may re-collect benchmark data with user’s sparse matrix (under development)
  – Non-trivial tuning cost: up to ~40 mat-vecs
    • Library limits the time it spends tuning based on estimated workload
      – provided by user or inferred by library
    • User may reduce cost by saving tuning results for application on future runs with same or similar matrix (under development)
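As a rough, purely illustrative calculation of when this tuning cost pays off: if tuning costs the equivalent of 40 untuned SpMVs and the tuned kernel runs 1.5x faster, each subsequent call saves one third of the untuned cost, so the overhead is recovered after about 40 / (1/3) = 120 calls; a workload of 500 SpMVs, as in the usage examples that follow, amortizes it comfortably.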

Page 76

How to Call pOSKI: Basic Usage

• May gradually migrate existing apps
  – Step 1: “Wrap” existing data structures
  – Step 2: Make BLAS-like kernel calls

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */

/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  my_matmult( ptr, ind, val, α, x, β, y );

Page 77

How to Call pOSKI: Basic Usage

• May gradually migrate existing apps
  – Step 1: “Wrap” existing data structures
  – Step 2: Make BLAS-like kernel calls

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */

/* Step 1: Create a default pOSKI thread object */
poski_threadarg_t *poski_thread = poski_InitThread();

/* Step 2: Create pOSKI wrappers around this data */
poski_mat_t A_tunable = poski_CreateMatCSR(ptr, ind, val, nrows, ncols,
    nnz, SHARE_INPUTMAT, poski_thread, NULL, …);
poski_vec_t x_view = poski_CreateVecView(x, ncols, UNIT_STRIDE, NULL);
poski_vec_t y_view = poski_CreateVecView(y, nrows, UNIT_STRIDE, NULL);

/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  my_matmult( ptr, ind, val, α, x, β, y );

Page 78

How to Call pOSKI: Basic Usage

• May gradually migrate existing apps
  – Step 1: “Wrap” existing data structures
  – Step 2: Make BLAS-like kernel calls

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */

/* Step 1: Create a default pOSKI thread object */
poski_threadarg_t *poski_thread = poski_InitThread();

/* Step 2: Create pOSKI wrappers around this data */
poski_mat_t A_tunable = poski_CreateMatCSR(ptr, ind, val, nrows, ncols,
    nnz, SHARE_INPUTMAT, poski_thread, NULL, …);
poski_vec_t x_view = poski_CreateVecView(x, ncols, UNIT_STRIDE, NULL);
poski_vec_t y_view = poski_CreateVecView(y, nrows, UNIT_STRIDE, NULL);

/* Step 3: Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  poski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);

Page 79

How to Call pOSKI: Tune with Explicit Hints

• User calls “tune” routine (optional)
  – May provide explicit tuning hints

poski_mat_t A_tunable = poski_CreateMatCSR( … );
/* … */

/* Tell pOSKI we will call SpMV 500 times (workload hint) */
poski_TuneHint_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view, 500);
/* Tell pOSKI we think the matrix has 8x8 blocks (structural hint) */
poski_TuneHint_Structure(A_tunable, HINT_SINGLE_BLOCKSIZE, 8, 8);

/* Ask pOSKI to tune */
poski_TuneMat(A_tunable);

for( i = 0; i < 500; i++ )
  poski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);

Page 80

How to Call pOSKI: Implicit Tuning

• Ask library to infer workload (optional)
  – Library profiles all kernel calls
  – May periodically re-tune

poski_mat_t A_tunable = poski_CreateMatCSR( … );
/* … */

for( i = 0; i < 500; i++ ) {
  poski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);
  poski_TuneMat(A_tunable); /* Ask pOSKI to tune */
}

Page 81

Performance on Intel Sandy Bridge E

[Chart: per-matrix speedup labels: 4.8x, 3.2x, 4.5x, 2.9x, 4.1x, 4.5x, 4.7x]

• Jaketown: i7-3960X @ 3.3 GHz
• #Cores: 6 (2 threads per core), L3: 15 MB
• pOSKI SpMV (Ax) with double-precision floating point
• MKL Sparse BLAS Level 2: mkl_dcsrmv()

Page 82

Is tuning SpMV all we can do?

• Iterative methods all depend on it
• But speedups are limited
  – Just 2 flops per nonzero
  – Communication costs dominate
• Can we beat this bottleneck?
• Need to look at the next level in the stack:
  – What do algorithms that use SpMV do?
  – Can we reorganize them to avoid communication?
• Only way significant speedups will be possible

Page 83

Tuning Higher Level Algorithms than SpMV

• We almost always do many SpMVs, not just one
  – “Krylov Subspace Methods” (KSMs) for Ax=b, Ax=λx
    • Conjugate Gradients, GMRES, Lanczos, …
  – Do a sequence of k SpMVs to get vectors [x_1, …, x_k]
  – Find best solution x as linear combination of [x_1, …, x_k]
• Main cost is k SpMVs
• Since communication usually dominates, can we do better?
• Goal: make communication cost independent of k
  – Parallel case: O(log P) messages, not O(k log P) - optimal
    • same bandwidth as before
  – Sequential case: O(1) messages and bandwidth, not O(k) - optimal
• Achievable when matrix partitionable with low surface-to-volume ratio

Page 84

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Example: A tridiagonal, n=32, k=3
• Works for any “well-partitioned” A
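For contrast, the straightforward, communication-bound way to form these vectors is k back-to-back SpMVs over the CSR arrays; a minimal self-contained sketch:

#include <string.h>

/* V holds k+1 vectors of length n, stored contiguously: V[0..n-1] = x and
 * V[j*n .. j*n+n-1] = A^j * x.  Each step re-reads all of A (ptr, ind, val)
 * from memory; this repeated traffic is what the communication-avoiding
 * variants reorganize away. */
void matrix_powers_naive(int n, const int *ptr, const int *ind,
                         const double *val, const double *x, double *V, int k)
{
    memcpy(V, x, (size_t)n * sizeof(double));
    for (int j = 1; j <= k; j++) {
        const double *v_prev = &V[(size_t)(j - 1) * n];
        double *v_next = &V[(size_t)j * n];
        for (int i = 0; i < n; i++) {
            double s = 0.0;
            for (int p = ptr[i]; p < ptr[i+1]; p++)
                s += val[p] * v_prev[ind[p]];
            v_next[i] = s;
        }
    }
}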

Page 85

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Example: A tridiagonal, n=32, k=3

Page 86

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Example: A tridiagonal, n=32, k=3

Page 87

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Example: A tridiagonal, n=32, k=3

Page 88

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Example: A tridiagonal, n=32, k=3

Page 89

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Example: A tridiagonal, n=32, k=3

Page 90

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Sequential Algorithm: Step 1
• Example: A tridiagonal, n=32, k=3

Page 91

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Sequential Algorithm: Steps 1-2
• Example: A tridiagonal, n=32, k=3

Page 92

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Sequential Algorithm: Steps 1-3
• Example: A tridiagonal, n=32, k=3

Page 93

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Sequential Algorithm: Steps 1-4
• Example: A tridiagonal, n=32, k=3

Page 94

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Parallel Algorithm: Proc 1, Proc 2, Proc 3, Proc 4
• Example: A tridiagonal, n=32, k=3

Page 95

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Parallel Algorithm: Proc 1, Proc 2, Proc 3, Proc 4
• Example: A tridiagonal, n=32, k=3
• Each processor communicates once with neighbors

Page 96

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Parallel Algorithm: Proc 1
• Example: A tridiagonal, n=32, k=3

Page 97

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Parallel Algorithm: Proc 2
• Example: A tridiagonal, n=32, k=3

Page 98

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Parallel Algorithm: Proc 3
• Example: A tridiagonal, n=32, k=3

Page 99

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Parallel Algorithm: Proc 4
• Example: A tridiagonal, n=32, k=3

Page 100

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

[Figure: vectors x, A·x, A²·x, A³·x over columns 1 2 3 4 … 32]

• Replace k iterations of y = A·x with [Ax, A²x, …, Aᵏx]
• Parallel Algorithm: Proc 1, Proc 2, Proc 3, Proc 4
• Example: A tridiagonal, n=32, k=3
• Each processor works on an (overlapping) trapezoid

Page 101

Communication Avoiding Kernels: The Matrix Powers Kernel [Ax, A²x, …, Aᵏx]

• Same idea works for general sparse matrices
  – Simple block-row partitioning → (hyper)graph partitioning
  – Top-to-bottom processing → Traveling Salesman Problem

Page 102

Speedups on Intel Clovertown (8 core)

Page 103

Performance Results

• Measured multicore (Clovertown) speedups up to 6.4x
• Measured/modeled sequential out-of-core speedup up to 3x
• Modeled parallel Petascale speedup up to 6.9x
• Modeled parallel Grid speedup up to 22x
• Sequential speedup due to bandwidth, works for many problem sizes
• Parallel speedup due to latency, works for smaller problems on many processors
• Multicore results used both techniques

Page 104

Avoiding Communication in Iterative Linear Algebra

• k steps of typical iterative solver for sparse Ax=b or Ax=λx
  – Does k SpMVs with starting vector
  – Finds “best” solution among all linear combinations of these k+1 vectors
  – Many such “Krylov Subspace Methods”
    • Conjugate Gradients, GMRES, Lanczos, Arnoldi, …
• Goal: minimize communication in Krylov Subspace Methods
  – Assume matrix “well-partitioned,” with modest surface-to-volume ratio
  – Parallel implementation
    • Conventional: O(k log p) messages, because of k calls to SpMV
    • New: O(log p) messages - optimal
  – Serial implementation
    • Conventional: O(k) moves of data from slow to fast memory
    • New: O(1) moves of data - optimal
• Lots of speed up possible (modeled and measured)
  – Price: some redundant computation
• Much prior work
  – See Mark Hoemmen’s PhD thesis, other papers at bebop.cs.berkeley.edu

Page 105

Minimizing Communication of GMRES to solve Ax=b

• GMRES: find x in span{b, Ab, …, Aᵏb} minimizing ||Ax-b||_2
• Cost of k steps of standard GMRES vs new GMRES

Standard GMRES:
  for i = 1 to k
    w = A · v(i-1)
    MGS(w, v(0), …, v(i-1))
    update v(i), H
  endfor
  solve LSQ problem with H

  Sequential: #words_moved = O(k·nnz) from SpMV + O(k²·n) from MGS
  Parallel: #messages = O(k) from SpMV + O(k² · log p) from MGS

Communication-avoiding GMRES:
  W = [ v, Av, A²v, …, Aᵏv ]
  [Q,R] = TSQR(W)   … “Tall Skinny QR”
  Build H from R, solve LSQ problem

  Sequential: #words_moved = O(nnz) from SpMV + O(k·n) from TSQR
  Parallel: #messages = O(1) from computing W + O(log p) from TSQR

• Oops - W from power method, precision lost!

Page 106

“Monomial” basis [Ax, …, Aᵏx] fails to converge

A different polynomial basis does converge: [p_1(A)x, …, p_k(A)x]

Page 107

Speed ups of GMRES on 8-core Intel Clovertown

Requires co-tuning kernels [MHDY09]

Page 108

CA-BiCGStab

Page 109

“New Algorithm Improves Performance and Accuracy on Extreme-Scale Computing Systems. On modern computer architectures, communication between processors takes longer than the performance of a floating point arithmetic operation by a given processor. ASCR researchers have developed a new method, derived from commonly used linear algebra methods, to minimize communications between processors and the memory hierarchy, by reformulating the communication patterns specified within the algorithm. This method has been implemented in the TRILINOS framework, a highly-regarded suite of software, which provides functionality for researchers around the world to solve large scale, complex multi-physics problems.”

FY 2010 Congressional Budget, Volume 4, FY2010 Accomplishments, Advanced Scientific Computing Research (ASCR), pages 65-67.

President Obama cites Communication-Avoiding Algorithms in the FY 2012 Department of Energy Budget Request to Congress:

CA-GMRES (Hoemmen, Mohiyuddin, Yelick, JD); “Tall-Skinny” QR (Grigori, Hoemmen, Langou, JD)

Page 110

Tuning space for Krylov Methods

• Many different algorithms (GMRES, BiCGStab, CG, Lanczos, …), polynomials, preconditioning
• Classifications of sparse operators for avoiding communication (nonzero entries vs. indices, each explicit (O(nnz)) or implicit (o(nnz))):
  – Explicit entries, explicit indices: CSR and variations
  – Explicit entries, implicit indices: vision, climate, AMR, …
  – Implicit entries, explicit indices: graph Laplacian
  – Implicit entries, implicit indices: stencils
• Explicit indices or nonzero entries cause most communication, along with vectors
  – Ex: With stencils (all implicit) all communication is for vectors
• Operations
  – [x, Ax, A²x, …, Aᵏx] or [x, p_1(A)x, p_2(A)x, …, p_k(A)x]
  – Number of columns in x
  – [x, Ax, A²x, …, Aᵏx] and [y, Aᵀy, (Aᵀ)²y, …, (Aᵀ)ᵏy], or [y, AᵀAy, (AᵀA)²y, …, (AᵀA)ᵏy]
  – Return all vectors or just the last one
• Co-tuning and/or interleaving
  – W = [x, Ax, A²x, …, Aᵏx] and {TSQR(W) or WᵀW or …}
  – Ditto, but throw away W

Page 111

Possible Class Projects

• Come to BeBOP meetings (W 12 – 1:30, 380 Soda)
• Experiment with SpMV on different architectures
  – Which optimizations are most effective?
• Try to speed up particular matrices of interest
  – Data mining, “bottom solver” from AMR
• Explore tuning space of the [x, Ax, …, Aᵏx] kernel
  – Different matrix representations (last slide)
  – New Krylov subspace methods, preconditioning
• Experiment with new frameworks (SPF, Halide)
• More details available

Page 112

Extra Slides

Page 113

Extensions

• Other Krylov methods
  – Arnoldi, CG, Lanczos, …
• Preconditioning
  – Solve MAx=Mb where the preconditioning matrix M is chosen to make the system “easier”
    • M approximates A⁻¹ somehow, but cheaply, to accelerate convergence
  – Cheap as long as contributions from “distant” parts of the system can be compressed
    • Sparsity
    • Low rank
  – No implementations yet (class projects!)

Page 114

Design Space for [x, Ax, …, Aᵏx] (1/3)

• Mathematical Operation
  – How many vectors to keep
    • All: Krylov Subspace Methods
    • Keep last vector Aᵏx only (Jacobi, Gauss-Seidel)
  – Improving conditioning of the basis
    • W = [x, p_1(A)x, p_2(A)x, …, p_k(A)x]
    • p_i(A) = degree-i polynomial chosen to reduce cond(W)
  – Preconditioning (Ay=b → MAy=Mb)
    • [x, Ax, MAx, AMAx, MAMAx, …, (MA)ᵏx]

Page 115

Design Space for [x, Ax, …, Aᵏx] (2/3)

• Representation of sparse A
  – Zero pattern may be explicit or implicit
  – Nonzero entries may be explicit or implicit
    • Implicit saves memory, communication

  Explicit pattern, explicit nonzeros: general sparse matrix
  Implicit pattern, explicit nonzeros: image segmentation
  Explicit pattern, implicit nonzeros: Laplacian(graph), Multigrid (AMR)
  Implicit pattern, implicit nonzeros: “stencil matrix”, e.g. tridiag(-1,2,-1) (sketch below)

• Representation of dense preconditioners M
  – Low rank off-diagonal blocks (semiseparable)
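For the tridiag(-1,2,-1) entry in the table above, both the pattern and the values are implicit, so applying A needs no stored matrix at all; a minimal sketch:

/* y = A*x for the n x n stencil matrix A = tridiag(-1, 2, -1):
 * nothing about A is stored, so the only memory traffic is x and y. */
void stencil_tridiag_apply(int n, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        double left  = (i > 0)     ? x[i - 1] : 0.0;
        double right = (i < n - 1) ? x[i + 1] : 0.0;
        y[i] = -left + 2.0 * x[i] - right;
    }
}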

Page 116

Design Space for [x, Ax, …, Aᵏx] (3/3)

• Parallel implementation
  – From simple indexing, with redundant flops ~ surface/volume ratio
  – To complicated indexing, with fewer redundant flops
• Sequential implementation
  – Depends on whether vectors fit in fast memory
• Reordering rows, columns of A
  – Important in parallel and sequential cases
  – Can be reduced to a pair of Traveling Salesman Problems
• Plus all the optimizations for one SpMV!

Page 117

Summary

• Communication-Avoiding Linear Algebra (CALA)
• Lots of related work
  – Some going back to 1960’s
  – Reports discuss this comprehensively, not here
• Our contributions
  – Several new algorithms, improvements on old ones
    • Preconditioning
  – Unifying parallel and sequential approaches to avoiding communication
  – Time for these algorithms has come, because of growing communication costs
  – Why avoid communication just for linear algebra motifs?

Page 118

Optimized Sparse Kernel Interface - OSKI

• Provides sparse kernels automatically tuned for user’s matrix & machine
  – BLAS-style functionality: SpMV, Ax & Aᵀy, TrSV
  – Hides complexity of run-time tuning
  – Includes new, faster locality-aware kernels: AᵀAx, Aᵏx
• Faster than standard implementations
  – Up to 4x faster matvec, 1.8x trisolve, 4x AᵀA·x
• For “advanced” users & solver library writers
  – Available as stand-alone library (OSKI 1.0.1h, 6/07)
  – Available as PETSc extension (OSKI-PETSc .1d, 3/06)
  – bebop.cs.berkeley.edu/oski
• Under development: pOSKI for multicore

Page 119

How the OSKI Tunes (Overview)

[Diagram - Library install-time (offline): 1. Build for target architecture; 2. Benchmark generated code variants, producing benchmark data. Application run-time: heuristic models; 1. Evaluate models (using benchmark data, the user’s matrix, history, and workload from program monitoring); 2. Select data structure & code. To user: matrix handle for kernel calls.

Extensibility: Advanced users may write & dynamically add “code variants” and “heuristic models” to the system.]

Page 120

How the OSKI Tunes (Overview)

• At library build/install-time
  – Pre-generate and compile code variants into dynamic libraries
  – Collect benchmark data
    • Measures and records speed of possible sparse data structure and code variants on target architecture
  – Installation process uses standard, portable GNU AutoTools
• At run-time
  – Library “tunes” using heuristic models
    • Models analyze user’s matrix & benchmark data to choose optimized data structure and code
  – Non-trivial tuning cost: up to ~40 mat-vecs
    • Library limits the time it spends tuning based on estimated workload
      – provided by user or inferred by library
    • User may reduce cost by saving tuning results for application on future runs with same or similar matrix

Page 121

Optimizations in OSKI, so far

• Fully automatic heuristics for
  – Sparse matrix-vector multiply
    • Register-level blocking
    • Register-level blocking + symmetry + multiple vectors
    • Cache-level blocking
  – Sparse triangular solve with register-level blocking and “switch-to-dense” optimization
  – Sparse AᵀA·x with register-level blocking
• User may select other optimizations manually
  – Diagonal storage optimizations, reordering, splitting; tiled matrix powers kernel (Aᵏ·x)
  – All available in dynamic libraries
  – Accessible via high-level embedded script language
• “Plug-in” extensibility
  – Very advanced users may write their own heuristics, create new data structures/code variants and dynamically add them to the system
• pOSKI under construction
  – Optimizations for multicore - more later

Page 122

How to Call OSKI: Basic Usage

• May gradually migrate existing apps
  – Step 1: “Wrap” existing data structures
  – Step 2: Make BLAS-like kernel calls

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */

/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  my_matmult( ptr, ind, val, α, x, β, y );

Page 123

How to Call OSKI: Basic Usage

• May gradually migrate existing apps
  – Step 1: “Wrap” existing data structures
  – Step 2: Make BLAS-like kernel calls

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */

/* Step 1: Create OSKI wrappers around this data */
oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows,
    num_cols, SHARE_INPUTMAT, …);
oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);

/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  my_matmult( ptr, ind, val, α, x, β, y );

Page 124

How to Call OSKI: Basic Usage

• May gradually migrate existing apps
  – Step 1: “Wrap” existing data structures
  – Step 2: Make BLAS-like kernel calls

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …; /* Let x and y be two dense vectors */

/* Step 1: Create OSKI wrappers around this data */
oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows,
    num_cols, SHARE_INPUTMAT, …);
oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);

/* Step 2: Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);

Page 125

How to Call OSKI: Tune with Explicit Hints

• User calls “tune” routine
  – May provide explicit tuning hints (OPTIONAL)

oski_matrix_t A_tunable = oski_CreateMatCSR( … );
/* … */

/* Tell OSKI we will call SpMV 500 times (workload hint) */
oski_SetHintMatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view, 500);
/* Tell OSKI we think the matrix has 8x8 blocks (structural hint) */
oski_SetHint(A_tunable, HINT_SINGLE_BLOCKSIZE, 8, 8);

oski_TuneMat(A_tunable); /* Ask OSKI to tune */

for( i = 0; i < 500; i++ )
  oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);

Page 126: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

How the User Calls OSKI: Implicit Tuning

• Ask library to infer workload– Library profiles all kernel calls– May periodically re-tune

oski_matrix_t A_tunable = oski_CreateMatCSR( … );

/* … */

for( i = 0; i < 500; i++ ) {

oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);
oski_TuneMat(A_tunable); /* Ask OSKI to tune */

}
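
A minimal sketch (not from the slides; RETUNE_PERIOD and the alpha/beta placeholders are illustrative) of how an application might amortize the profiling-based tuning by re-tuning only periodically, using the same OSKI calls shown above:

/* Illustrative only: re-tune every RETUNE_PERIOD calls rather than on every call. */
#define RETUNE_PERIOD 100
double alpha = 1.0, beta = 0.0;      /* y = alpha*A*x + beta*y */

for( i = 0; i < 500; i++ ) {
    oski_MatMult(A_tunable, OP_NORMAL, alpha, x_view, beta, y_view);
    if( (i + 1) % RETUNE_PERIOD == 0 )
        oski_TuneMat(A_tunable);     /* library uses its call profile to decide whether to re-tune */
}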

Page 127: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Quick-and-dirty Parallelism: OSKI-PETSc

• Extend PETSc’s distributed memory SpMV (MATMPIAIJ)

p0

p1

p2

p3

• PETSc– Each process stores

diag (all-local) and off-diag submatrices

• OSKI-PETSc:– Add OSKI wrappers– Each submatrix

tuned independently

Page 128: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

OSKI-PETSc Proof-of-Concept Results

• Matrix 1: Accelerator cavity design (R. Lee @ SLAC)– N ~ 1 M, ~40 M non-zeros– 2x2 dense block substructure– Symmetric

• Matrix 2: Linear programming (Italian Railways)– Short-and-fat: 4k x 1M, ~11M non-zeros– Highly unstructured– Big speedup from cache-blocking: no native PETSc

format

• Evaluation machine: Xeon cluster– Peak: 4.8 Gflop/s per node

Page 129: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Accelerator Cavity Matrix

Page 130: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

OSKI-PETSc Performance: Accel. Cavity

Page 131: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Linear Programming Matrix

Page 132: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

OSKI-PETSc Performance: LP Matrix

Page 133: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Tuning SpMV for Multicore: Architectural Review

Page 134: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Cost of memory access in Multicore

• When physically partitioned, cache or memory access is non uniform (latency and bandwidth to memory/cache addresses varies)– NUMA = Non-Uniform Memory Access– NUCA = Non-Uniform Cache Access

• UCA & UMA architecture:

[Diagram: four cores sharing a single cache and a single DRAM]

Source: Sam Williams

Page 135: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.


Page 136: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Cost of Memory Access in Multicore

• When physically partitioned, cache or memory access is non uniform (latency and bandwidth to memory/cache addresses varies)– NUMA = Non-Uniform Memory Access– NUCA = Non-Uniform Cache Access

• NUCA & UMA architecture:

[Diagram: four cores, each with a private cache, sharing one memory controller hub and one DRAM]

Source: Sam Williams

Page 137: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Cost of Memory Access in Multicore

• When physically partitioned, cache or memory access is non uniform (latency and bandwidth to memory/cache addresses varies)– NUMA = Non-Uniform Memory Access– NUCA = Non-Uniform Cache Access

• NUCA & NUMA architecture:

[Diagram: four cores, each with a private cache, with DRAM split into multiple memory domains]

Source: Sam Williams

Page 138: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Cost of Memory Access in Multicore

• Proper cache locality to maximize performance

[Diagram: same NUCA & NUMA system, with data placed so each core works out of its own cache]

Source: Sam Williams

Page 139: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Cost of Memory Access in Multicore

• Proper DRAM locality to maximize performance

[Diagram: same NUCA & NUMA system, with data placed so each core reads from its own DRAM]

Source: Sam Williams

Page 140: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Optimizing Communication Complexity of Sparse Solvers

• Need to modify high level algorithms to use new kernel

• Example: GMRES for Ax=b where A= 2D Laplacian– x lives on n-by-n mesh– Partitioned on p½ -by- p½ processor grid– A has “5 point stencil” (Laplacian)

• (Ax)(i,j) = linear_combination(x(i,j), x(i,j±1), x(i±1,j))– Ex: 18-by-18 mesh on 3-by-3 processor grid

Page 141: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Minimizing Communication• What is the cost = (#flops, #words, #mess) of s

steps of standard GMRES?

GMRES, ver. 1:
  for i = 1 to s
    w = A * v(i-1)
    MGS(w, v(0), …, v(i-1))
    update v(i), H
  endfor
  solve LSQ problem with H

(each processor owns an n/√p × n/√p piece of the n-by-n mesh)

• Cost(A * v) = s * ( 9n²/p , 4n/√p , 4 )• Cost(MGS = Modified Gram-Schmidt) = s²/2 * ( 4n²/p , log p , log p )• Total cost ~ Cost(A * v) + Cost(MGS)• Can we reduce the latency?
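
Summing the two cost triples (#flops, #words, #messages) gives the total quoted on the next slide; written out as a small worked step:

\mathrm{Cost}(\mathrm{GMRES,\ ver.\,1}) \;\approx\; \Bigl(\tfrac{9sn^2}{p}+\tfrac{2s^2n^2}{p},\;\; \tfrac{4sn}{\sqrt{p}}+\tfrac{s^2\log p}{2},\;\; 4s+\tfrac{s^2\log p}{2}\Bigr)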

Page 142: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Minimizing Communication• Cost(GMRES, ver.1) = Cost(A*v) + Cost(MGS)

• Cost(W) = ( ~ same, ~ same , 8 )• Latency cost independent of s – optimal

• Cost (MGS) unchanged• Can we reduce the latency more?

= ( 9sn²/p , 4sn/√p , 4s ) + ( 2s²n²/p , s²·log p/2 , s²·log p/2 )

• How much latency cost from A*v can you avoid? Almost all

GMRES, ver. 2:
  W = [ v, Av, A²v, … , Aˢv ]
  [Q,R] = MGS(W)
  Build H from R, solve LSQ problem

[Figure: matrix powers kernel, s = 3]

Page 143: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Minimizing Communication• Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)

= ( 9sn²/p , 4sn/√p , 8 ) + ( 2s²n²/p , s²·log p/2 , s²·log p/2 )

• How much latency cost from MGS can you avoid? Almost all

• Cost(TSQR) = ( ~ same, ~ same , log p )• Latency cost independent of s - optimal

GMRES, ver. 3:
  W = [ v, Av, A²v, … , Aˢv ]
  [Q,R] = TSQR(W) … "Tall Skinny QR" (See Lecture 11)
  Build H from R, solve LSQ problem

[Figure: TSQR reduction tree — W = [W1; W2; W3; W4] → {R1, R2, R3, R4} → {R12, R34} → R1234]

Page 144: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Minimizing Communication• Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)

= ( 9sn²/p , 4sn/√p , 8 ) + ( 2s²n²/p , s²·log p/2 , s²·log p/2 )

• How much latency cost from MGS can you avoid? Almost all

• Cost(TSQR) = ( ~ same, ~ same , log p )• Oops

GMRES, ver. 3:
  W = [ v, Av, A²v, … , Aˢv ]
  [Q,R] = TSQR(W) … "Tall Skinny QR" (See Lecture 11)
  Build H from R, solve LSQ problem

[Figure: TSQR reduction tree — W = [W1; W2; W3; W4] → {R1, R2, R3, R4} → {R12, R34} → R1234]

Page 145: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Minimizing Communication• Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)

= ( 9sn²/p , 4sn/√p , 8 ) + ( 2s²n²/p , s²·log p/2 , s²·log p/2 )

• How much latency cost from MGS can you avoid? Almost all

• Cost(TSQR) = ( ~ same, ~ same , log p )• Oops – W from power method, precision lost!

GMRES, ver. 3:
  W = [ v, Av, A²v, … , Aˢv ]
  [Q,R] = TSQR(W) … "Tall Skinny QR" (See Lecture 11)
  Build H from R, solve LSQ problem

[Figure: TSQR reduction tree — W = [W1; W2; W3; W4] → {R1, R2, R3, R4} → {R12, R34} → R1234]

Page 146: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.
Page 147: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Minimizing Communication• Cost(GMRES, ver. 3) = Cost(W) + Cost(TSQR)

= ( 9sn²/p , 4sn/√p , 8 ) + ( 2s²n²/p , s²·log p/2 , log p )

• Latency cost independent of s, just log p – optimal• Oops – W from power method, so precision lost – What to do?

• Use a different polynomial basis• Not monomial basis W = [v, Av, A²v, …], instead …• Newton basis WN = [v, (A – θ1·I)v , (A – θ2·I)(A – θ1·I)v, …] or

• Chebyshev basis WC = [v, T1(A)·v, T2(A)·v, …]

Page 148: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.
Page 149: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Tuning Higher Level Algorithms

• So far we have tuned a single sparse matrix kernel• y = AᵀA·x motivated by higher level algorithm (SVD)

• What can we do by extending tuning to a higher level?• Consider Krylov subspace methods for Ax=b, Ax = λx

• Conjugate Gradients (CG), GMRES, Lanczos, …• Inner loop does y=A*x, dot products, saxpys, scalar ops• Inner loop costs at least O(1) messages• k iterations cost at least O(k) messages

• Our goal: show how to do k iterations with O(1) messages• Possible payoff – make Krylov subspace methods much faster on machines with slow networks• Memory bandwidth improvements too (not discussed)• Obstacles: numerical stability, preconditioning, …

Page 150: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Krylov Subspace Methods for Solving Ax=b

• Compute a basis for a subspace V by doing y = A*x k times

• Find “best” solution in that Krylov subspace V

• Given starting vector x1, V spanned by x2 = A*x1, x3 = A*x2 , … , xk = A*xk-1

• GMRES finds an orthogonal basis of V by Gram-Schmidt, so it actually does a different set of SpMVs than in last bullet

Page 151: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Standard GMRES

r = b - A*x1,  β = length(r),  v1 = r / β           … length(r) = sqrt( Σi ri² )

for m = 1 to k do
   w = A*vm                                         … at least O(1) messages
   for i = 1 to m do                                … Gram-Schmidt
      him = dotproduct(vi , w)                      … at least O(1) messages, or log(p)
      w = w – him * vi
   end for
   hm+1,m = length(w)                               … at least O(1) messages, or log(p)
   vm+1 = w / hm+1,m
end for

find y minimizing length( Hk*y – β·e1 )             … small, local problem

new x = x1 + Vk*y                                   … Vk = [v1 , v2 , … , vk ]

O(k²), or O(k²·log p), messages altogether
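
A quick accounting behind that count (each dot product and each norm is a reduction; iteration m performs m dot products plus one norm):

\#\mathrm{messages} \;\gtrsim\; \underbrace{k}_{\text{SpMVs}} \;+\; \sum_{m=1}^{k}(m+1) \;=\; k+\frac{k(k+3)}{2} \;=\; O(k^2), \quad\text{or } O(k^2\log p)\text{ with tree reductions.}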

Page 152: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Computing [Ax, A²x, A³x, …, Aᵏx] for A tridiagonal

Different basis for same Krylov subspace. What can Proc 1 compute without communication?

[Figure: entries 1:30 of x, Ax, A²x, …, A⁸x, split between Proc 1 and Proc 2]

Page 153: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Computing [Ax, A²x, A³x, …, Aᵏx] for A tridiagonal

[Figure: entries 1:30 of x, Ax, A²x, …, A⁸x, split between Proc 1 and Proc 2]

Computing missing entries with 1 communication, redundant work

Page 154: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

[Figure: entries 1:30 of x, Ax, A²x, …, A⁸x, split between Proc 1 and Proc 2]

Saving half the redundant work

Example: Computing [Ax, A²x, A³x, …, Aᵏx] for A tridiagonal

Page 155: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Computing [Ax, A²x, A³x, …, Aᵏx] for Laplacian

A = 5-point Laplacian in 2D. Communicated points for k=3 shown.

Page 156: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Latency-Avoiding GMRES (1)

r = b - A*x1,  β = length(r),  v1 = r / β            … O(log p) messages

Wk+1 = [ v1 , A·v1 , A²·v1 , … , Aᵏ·v1 ]             … O(1) messages

[Q, R] = qr(Wk+1)                                    … QR decomposition, O(log p) messages

Hk = R(:, 2:k+1) * (R(1:k, 1:k))⁻¹                   … small, local problem

find y minimizing length( Hk*y – β·e1 )              … small, local problem

new x = x1 + Qk*y                                    … local problem

O(log p) messages altogether

Independent of k

Page 157: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Latency-Avoiding GMRES (2)

[Q, R] = qr(Wk+1) … QR decomposition, O(log p) messages

Easy, but not so stable way to do it:

X(myproc) = Wk+1(myproc)ᵀ * Wk+1(myproc)             … local computation

Y = sum_reduction( X(myproc) )                       … O(log p) messages
                                                     … Y = Wk+1ᵀ * Wk+1

R = (cholesky(Y))ᵀ                                   … small, local computation

Q(myproc) = Wk+1(myproc) * R⁻¹                       … local computation
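
Why this produces an orthonormal Q in exact arithmetic (a standard Cholesky-QR identity, not spelled out on the slide), and where the trouble comes from:

Y = W_{k+1}^{T}W_{k+1} = R^{T}R \;\Longrightarrow\; Q^{T}Q = R^{-T}\,(W_{k+1}^{T}W_{k+1})\,R^{-1} = R^{-T}R^{T}R\,R^{-1} = I

Forming Y = Wᵀ·W squares the condition number of W, which is why the slide calls this route "easy, but not so stable".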

Page 158: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Numerical example (1)

Diagonal matrix with n=1000, Aii from 1 down to 10⁻⁵

Instability as k grows, after many iterations

Page 159: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Numerical Example (2)Partial remedy: restarting periodically (every 120 iterations)

Other remedies: high precision, different basis than [ v , A·v , … , Aᵏ·v ]

Page 160: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Operation Counts for [Ax, A²x, A³x, …, Aᵏx] on p procs

Problem            Per-proc cost   Standard                                    Optimized
1D mesh            #messages       2k                                          2
(tridiagonal)      #words sent     2k                                          2k
                   #flops          5kn/p                                       5kn/p + 5k²
                   memory          (k+4)n/p                                    (k+4)n/p + 8k
3D mesh            #messages       26k                                         26
(27 pt stencil)    #words sent     6kn²p^(-2/3) + 12knp^(-1/3) + O(k)          6kn²p^(-2/3) + 12k²np^(-1/3) + O(k³)
                   #flops          53kn³/p                                     53kn³/p + O(k²n²p^(-2/3))
                   memory          (k+28)n³/p + 6n²p^(-2/3) + O(np^(-1/3))     (k+28)n³/p + 168kn²p^(-2/3) + O(k²np^(-1/3))

Page 161: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Summary and Future Work

• Dense– LAPACK– ScaLAPACK

• Communication primitives• Sparse

– Kernels, Stencils– Higher level algorithms

• All of the above on new architectures– Vector, SMPs, multicore, Cell, …

• High level support for tuning– Specification language– Integration into compilers

Page 162: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Extra Slides

Page 163: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

A Sparse Matrix You Encounter Every Day

Who am I?

I am aBig Repository

Of usefulAnd uselessFacts alike.

Who am I?

(Hint: Not your e-mail inbox.)

Page 164: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

What about the Google Matrix?

• Google approach– Approx. once a month: rank all pages using connectivity

structure• Find dominant eigenvector of a matrix

– At query-time: return list of pages ordered by rank• Matrix: A = α·G + (1-α)·(1/n)·u·uᵀ

– Markov model: Surfer follows a link with probability α, jumps to a random page with probability 1-α

– G is n x n connectivity matrix [n ≈ billions]• gij is non-zero if page i links to page j• Normalized so each column sums to 1• Very sparse: about 7—8 non-zeros per row (power law dist.)

– u is a vector of all 1 values– Steady-state probability xi of landing on page i is solution to x = A·x

• Approximate x by power method: x ≈ Aᵏ·x0

– In practice, k ≈ 25
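
A minimal C sketch (not from the slides; all names are illustrative) of one power-method step y = A·x with A = α·G + (1-α)·(1/n)·u·uᵀ, where G is kept in CSR and the rank-one teleport term is applied without ever forming it:

/* One step of x_{k+1} = A*x_k, with G an n x n sparse matrix in CSR
   (ptr, ind, val), columns normalized to sum to 1, and u = all-ones. */
void pagerank_step(int n, const int *ptr, const int *ind, const double *val,
                   double alpha, const double *x, double *y)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) s += x[i];        /* u^T x, since u is all ones */
    double teleport = (1.0 - alpha) * s / n;      /* same value added to every entry */

    for (int i = 0; i < n; i++) {
        double t = 0.0;
        for (int k = ptr[i]; k < ptr[i+1]; k++)   /* sparse part: (G*x)(i) */
            t += val[k] * x[ind[k]];
        y[i] = alpha * t + teleport;
    }
}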

Page 165: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Current Work

• Public software release• Impact on library designs: Sparse BLAS, Trilinos, PETSc,

…• Integration in large-scale applications

– DOE: Accelerator design; plasma physics– Geophysical simulation based on Block Lanczos (AᵀA·X; LBL)

• Systematic heuristics for data structure selection?• Evaluation of emerging architectures

– Revisiting vector micros

• Other sparse kernels– Matrix triple products, Ak*x

• Parallelism• Sparse benchmarks (with UTK) [Gahvari & Hoemmen]• Automatic tuning of MPI collective ops [Nishtala, et al.]

Page 166: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Summary of High-Level Themes

• “Kernel-centric” optimization– Vs. basic block, trace, path optimization, for instance– Aggressive use of domain-specific knowledge

• Performance bounds modeling– Evaluating software quality– Architectural characterizations and consequences

• Empirical search– Hybrid on-line/run-time models

• Statistical performance models– Exploit information from sampling, measuring

Page 167: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Related Work

• My bibliography: 337 entries so far• Sample area 1: Code generation

– Generative & generic programming– Sparse compilers– Domain-specific generators

• Sample area 2: Empirical search-based tuning– Kernel-centric

• linear algebra, signal processing, sorting, MPI, …

– Compiler-centric• profiling + FDO, iterative compilation, superoptimizers,

self-tuning compilers, continuous program optimization

Page 168: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Future Directions (A Bag of Flaky Ideas)

• Composable code generators and search spaces• New application domains

– PageRank: multilevel block algorithms for topic-sensitive search?

• New kernels: cryptokernels– rich mathematical structure germane to performance; lots

of hardware

• New tuning environments– Parallel, Grid, “whole systems”

• Statistical models of application performance– Statistical learning of concise parametric models from

traces for architectural evaluation– Compiler/automatic derivation of parametric models

Page 169: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Possible Future Work

• Different Architectures– New FP instruction sets (SSE2)– SMP / multicore platforms– Vector architectures

• Different Kernels• Higher Level Algorithms

– Parallel versions of kernels, with optimized communication– Block algorithms (e.g., Lanczos)– XBLAS (extra precision)

• Produce Benchmarks– Augment HPCC Benchmark

• Make it possible to combine optimizations easily for any kernel

• Related tuning activities (LAPACK & ScaLAPACK)

Page 170: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Review of Tuning by Illustration

(Extra Slides)

Page 171: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Splitting for Variable Blocks and Diagonals

• Decompose A = A1 + A2 + … At

– Detect “canonical” structures (sampling)– Split

– Tune each Ai

– Improve performance and save storage

• New data structures– Unaligned block CSR

• Relax alignment in rows & columns

– Row-segmented diagonals

Page 172: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Variable Block Row (Matrix #12)

2.1x over CSR1.8x over RB

Page 173: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Row-Segmented Diagonals

2x over CSR

Page 174: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Mixed Diagonal and Block Structure

Page 175: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Summary

• Automated block size selection– Empirical modeling and search– Register blocking for SpMV, triangular solve, AᵀA·x

• Not fully automated– Given a matrix, select splittings and transformations

• Lots of combinatorial problems– TSP reordering to create dense blocks (Pinar ’97;

Moon, et al. ’04)

Page 176: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Extra Slides

Page 177: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

A Sparse Matrix You Encounter Every Day

Who am I?

I am aBig Repository

Of usefulAnd uselessFacts alike.

Who am I?

(Hint: Not your e-mail inbox.)

Page 178: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Problem Context

• Sparse kernels abound– Models of buildings, cars, bridges, economies, …– Google PageRank algorithm

• Historical trends– Sparse matrix-vector multiply (SpMV): 10% of peak– 2x faster with “hand-tuning”– Tuning becoming more difficult over time– Promise of automatic tuning: PHiPAC/ATLAS, FFTW, …

• Challenges to high-performance– Not dense linear algebra!

• Complex data structures: indirect, irregular memory access

• Performance depends strongly on run-time inputs

Page 179: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Key Questions, Ideas, Conclusions

• How to tune basic sparse kernels automatically?– Empirical modeling and search

• Up to 4x speedups for SpMV• 1.8x for triangular solve• 4x for AᵀA·x; 2x for A²·x• 7x for multiple vectors

• What are the fundamental limits on performance?– Kernel-, machine-, and matrix-specific upper bounds

• Achieve 75% or more for SpMV, limiting low-level tuning• Consequences for architecture?

• General techniques for empirical search-based tuning?– Statistical models of performance

Page 180: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Road Map

• Sparse matrix-vector multiply (SpMV) in a nutshell• Historical trends and the need for search• Automatic tuning techniques• Upper bounds on performance• Statistical models of performance

Page 181: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Compressed Sparse Row (CSR) Storage

Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)·x(j)

for each row i
  for k = ptr[i] to ptr[i+1]-1 do
    y[i] = y[i] + val[k]*x[ind[k]]
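
The same kernel written out as a self-contained C routine (a sketch consistent with the pseudocode above; parameter names are illustrative):

/* y(i) <- y(i) + A(i,j)*x(j) for A in CSR format.
   ptr has m+1 entries; row i's non-zeros sit in val[ptr[i] .. ptr[i+1]-1]. */
void spmv_csr(int m, const int *ptr, const int *ind,
              const double *val, const double *x, double *y)
{
    for (int i = 0; i < m; i++) {
        double yi = y[i];
        for (int k = ptr[i]; k < ptr[i+1]; k++)
            yi += val[k] * x[ind[k]];
        y[i] = yi;
    }
}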

Page 182: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Road Map

• Sparse matrix-vector multiply (SpMV) in a nutshell• Historical trends and the need for search• Automatic tuning techniques• Upper bounds on performance• Statistical models of performance

Page 183: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Historical Trends in SpMV Performance

• The Data– Uniprocessor SpMV performance since 1987– “Untuned” and “Tuned” implementations– Cache-based superscalar micros; some vectors– LINPACK

Page 184: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

SpMV Historical Trends: Mflop/s

Page 185: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: The Difficulty of Tuning

• n = 21216• nnz = 1.5 M• kernel: SpMV

• Source: NASA structural analysis problem

Page 186: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Still More Surprises

• More complicated non-zero structure in general

Page 187: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Still More Surprises

• More complicated non-zero structure in general

• Example: 3x3 blocking– Logical grid of 3x3 cells

Page 188: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Historical Trends: Mixed News

• Observations– Good news: Moore’s law like behavior– Bad news: “Untuned” is 10% peak or less, worsening– Good news: “Tuned” roughly 2x better today, and

improving– Bad news: Tuning is complex

– (Not really news: SpMV is not LINPACK)

• Questions– Application: Automatic tuning?– Architect: What machines are good for SpMV?

Page 189: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Road Map

• Sparse matrix-vector multiply (SpMV) in a nutshell• Historical trends and the need for search• Automatic tuning techniques

– SpMV [SC’02; IJHPCA ’04b]– Sparse triangular solve (SpTS) [ICS/POHLL ’02]– ATA*x [ICCS/WoPLA ’03]

• Upper bounds on performance• Statistical models of performance

Page 190: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

SPARSITY: Framework for Tuning SpMV

• SPARSITY: Automatic tuning for SpMV [Im & Yelick ’99]– General approach

• Identify and generate implementation space• Search space using empirical models & experiments

– Prototype library and heuristic for choosing register block size

• Also: cache-level blocking, multiple vectors

• What’s new?– New block size selection heuristic

• Within 10% of optimal — replaces previous version

– Expanded implementation space• Variable block splitting, diagonals, combinations

– New kernels: sparse triangular solve, ATA*x, A*x

Page 191: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Automatic Register Block Size Selection

• Selecting the r x c block size– Off-line benchmark: characterize the machine

• Precompute Mflops(r,c) using dense matrix for each r x c• Once per machine/architecture

– Run-time “search”: characterize the matrix• Sample A to estimate Fill(r,c) for each r x c

– Run-time heuristic model• Choose r, c to maximize Mflops(r,c) / Fill(r,c)

• Run-time costs– Up to ~40 SpMVs (empirical worst case)

Page 192: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Accuracy of the Tuning Heuristics (1/4)

NOTE: “Fair” flops used (ops on explicit zeros not counted as “work”)

DGEMV

Page 193: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Accuracy of the Tuning Heuristics (2/4)DGEMV

Page 194: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Accuracy of the Tuning Heuristics (3/4)DGEMV

Page 195: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Accuracy of the Tuning Heuristics (4/4)DGEMV

Page 196: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Road Map

• Sparse matrix-vector multiply (SpMV) in a nutshell• Historical trends and the need for search• Automatic tuning techniques• Upper bounds on performance

– SC’02

• Statistical models of performance

Page 197: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Motivation for Upper Bounds Model

• Questions– Speedups are good, but what is the speed limit?

• Independent of instruction scheduling, selection

– What machines are “good” for SpMV?

Page 198: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Upper Bounds on Performance: Blocked SpMV

• P = (flops) / (time)– Flops = 2 * nnz(A)

• Lower bound on time: Two main assumptions– 1. Count memory ops only (streaming)– 2. Count only compulsory, capacity misses: ignore conflicts

• Account for line sizes• Account for matrix size and nnz

• Charge min access “latency” αi at Li cache & αmem at memory

– e.g., Saavedra-Barrera and PMaC MAPS benchmarks

Time ≥ Σi Hitsi·αi + Missesmem·αmem
     = (Loads – Misses1)·α1 + Σi≥2 (Missesi-1 – Missesi)·αi + Missesmem·αmem
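
A small C sketch (names and level count are illustrative, not library code) of evaluating this latency-based lower bound given per-level miss counts and access latencies:

/* T = sum_i Hits_i * alpha_i + Misses_last * alpha_mem,
   with Hits_1 = Loads - Misses_1 and Hits_i = Misses_{i-1} - Misses_i. */
double time_lower_bound(int nlevels, double loads,
                        const double *misses,   /* misses[i] at cache level i+1 */
                        const double *alpha,    /* alpha[i]: latency at cache level i+1 */
                        double alpha_mem)
{
    double t = 0.0, prev = loads;
    for (int i = 0; i < nlevels; i++) {
        double hits = prev - misses[i];   /* hits at cache level i+1 */
        t += hits * alpha[i];
        prev = misses[i];
    }
    return t + prev * alpha_mem;          /* remaining misses are charged to memory */
}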

Page 199: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Bounds on Itanium 2

Page 200: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Bounds on Itanium 2

Page 201: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Bounds on Itanium 2

Page 202: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Fraction of Upper Bound Across Platforms

Page 203: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Achieved Performance and Machine Balance

• Machine balance [Callahan ’88; McCalpin ’95]– Balance = Peak Flop Rate / Bandwidth (flops /

double)

• Ideal balance for mat-vec: 2 flops / double– For SpMV, even less

• SpMV ~ streaming– 1 / (avg load time to stream 1 array) ~ (bandwidth)– “Sustained” balance = peak flops / model bandwidth

Time = (Loads – Misses1)·α1 + Σi (Missesi-1 – Missesi)·αi + Missesmem·αmem   (model from the earlier bound slide)

Page 204: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.
Page 205: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Where Does the Time Go?

• Most time assigned to memory• Caches “disappear” when line sizes are equal

– Strictly increasing line sizes

Time = Σi Hitsi·αi + Hitsmem·αmem   (same latency model as before)

Page 206: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Execution Time Breakdown: Matrix 40

Page 207: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Speedups with Increasing Line Size

Page 208: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Summary: Performance Upper Bounds

• What is the best we can do for SpMV?– Limits to low-level tuning of blocked implementations– Refinements?

• What machines are good for SpMV?– Partial answer: balance characterization

• Architectural consequences?– Example: Strictly increasing line sizes

Page 209: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Road Map

• Sparse matrix-vector multiply (SpMV) in a nutshell• Historical trends and the need for search• Automatic tuning techniques• Upper bounds on performance• Tuning other sparse kernels• Statistical models of performance

– FDO ’00; IJHPCA ’04a

Page 210: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Statistical Models for Automatic Tuning

• Idea 1: Statistical criterion for stopping a search– A general search model

• Generate implementation• Measure performance• Repeat

– Stop when probability of being within ε of optimal falls below threshold

• Can estimate distribution on-line

• Idea 2: Statistical performance models– Problem: Choose 1 among m implementations at run-

time– Sample performance off-line, build statistical model

Page 211: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Select a Matmul Implementation

Page 212: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Support Vector Classification

Page 213: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Road Map

• Sparse matrix-vector multiply (SpMV) in a nutshell• Historical trends and the need for search• Automatic tuning techniques• Upper bounds on performance• Tuning other sparse kernels• Statistical models of performance• Summary and Future Work

Page 214: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Summary of High-Level Themes

• “Kernel-centric” optimization– Vs. basic block, trace, path optimization, for instance– Aggressive use of domain-specific knowledge

• Performance bounds modeling– Evaluating software quality– Architectural characterizations and consequences

• Empirical search– Hybrid on-line/run-time models

• Statistical performance models– Exploit information from sampling, measuring

Page 215: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Related Work

• My bibliography: 337 entries so far• Sample area 1: Code generation

– Generative & generic programming– Sparse compilers– Domain-specific generators

• Sample area 2: Empirical search-based tuning– Kernel-centric

• linear algebra, signal processing, sorting, MPI, …

– Compiler-centric• profiling + FDO, iterative compilation, superoptimizers,

self-tuning compilers, continuous program optimization

Page 216: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Future Directions (A Bag of Flaky Ideas)

• Composable code generators and search spaces• New application domains

– PageRank: multilevel block algorithms for topic-sensitive search?

• New kernels: cryptokernels– rich mathematical structure germane to performance; lots

of hardware

• New tuning environments– Parallel, Grid, “whole systems”

• Statistical models of application performance– Statistical learning of concise parametric models from

traces for architectural evaluation– Compiler/automatic derivation of parametric models

Page 217: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Acknowledgements

• Super-advisors: Jim and Kathy• Undergraduate R.A.s: Attila, Ben, Jen, Jin,

Michael, Rajesh, Shoaib, Sriram, Tuyet-Linh• See pages xvi—xvii of dissertation.

Page 218: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

TSP-based Reordering: Before

(Pinar ’97;Moon, et al ‘04)

Page 219: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

TSP-based Reordering: After

(Pinar ’97;Moon, et al ‘04)

Up to 2xspeedupsover CSR

Page 220: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: L2 Misses on Itanium 2

Misses measured using PAPI [Browne ’00]

Page 221: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Distribution of Blocked Non-Zeros

Page 222: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Register Profile: Itanium 2

[Figure: Itanium 2 register-blocking profile; annotated performance ranges from 190 Mflop/s to 1190 Mflop/s]

Page 223: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Register Profiles: Sun and Intel x86

Ultra 2i - 11%   Ultra 3 - 5%   Pentium III - 21%   Pentium III-M - 15%

[Figure: register-blocking profiles for the four machines; per-panel performance annotations range from 35 to 122 Mflop/s]

Page 224: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Register Profiles: IBM and Intel IA-64

Power3 - 17%   Power4 - 16%   Itanium 1 - 8%   Itanium 2 - 33%

[Figure: register-blocking profiles for the four machines; per-panel performance annotations range from 107 Mflop/s to 1.2 Gflop/s]

Page 225: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Accurate and Efficient Adaptive Fill Estimation

• Idea: Sample matrix– Fraction of matrix to sample: s ∈ [0,1]– Cost ~ O(s · nnz)– Control cost by controlling s

• Search at run-time: the constant matters!

• Control s automatically by computing statistical confidence intervals– Idea: Monitor variance

• Cost of tuning– Lower bound: convert matrix in 5 to 40 unblocked SpMVs– Heuristic: 1 to 11 SpMVs
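
A rough C sketch of the sampling idea (assumptions: CSR input, block rows chosen uniformly at random with probability s; the actual library also monitors variance to pick s automatically, which is omitted here):

#include <stdlib.h>

/* Estimate Fill(r,c) = (entries stored after r x c blocking) / nnz
   by scanning a random fraction s of the block rows. */
double estimate_fill(int m, int n, const int *ptr, const int *ind,
                     int r, int c, double s)
{
    int nbcols = (n + c - 1) / c;
    int *seen = calloc(nbcols, sizeof(int));       /* stamp: last block row touching this block column */
    double nnz_sampled = 0.0, blocks_sampled = 0.0;

    for (int I = 0; I < (m + r - 1) / r; I++) {
        if ((double)rand() / RAND_MAX > s) continue;   /* keep this block row with probability s */
        int stamp = I + 1;
        for (int i = I * r; i < (I + 1) * r && i < m; i++)
            for (int k = ptr[i]; k < ptr[i+1]; k++) {
                nnz_sampled += 1.0;
                int J = ind[k] / c;
                if (seen[J] != stamp) { seen[J] = stamp; blocks_sampled += 1.0; }
            }
    }
    free(seen);
    if (nnz_sampled == 0.0) return 1.0;
    return blocks_sampled * r * c / nnz_sampled;   /* stored entries per true non-zero */
}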

Page 226: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Sparse/Dense Partitioning for SpTS

• Partition L into sparse (L1,L2) and dense LD:

[ L1  0  ] [ x1 ]   [ b1 ]
[ L2  LD ] [ x2 ] = [ b2 ]

• Perform SpTS in three steps:

(1)  L1 · x1 = b1
(2)  b̂2 = b2 – L2 · x1
(3)  LD · x2 = b̂2

• Sparsity optimizations for (1)—(2); DTRSV for (3)• Tuning parameters: block size, size of dense triangle
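
A simplified sketch of steps (1)–(3) above (assumptions: L1 in CSR with the diagonal stored last in each row, LD dense and column-major; a tuned version would call DTRSV for step (3) and a register-blocked kernel for steps (1)–(2)):

/* (1) Sparse forward substitution: solve L1*x1 = b1 (CSR, diagonal last per row). */
void sparse_trsv_lower(int n1, const int *ptr, const int *ind, const double *val,
                       const double *b1, double *x1)
{
    for (int i = 0; i < n1; i++) {
        double t = b1[i];
        for (int k = ptr[i]; k < ptr[i+1] - 1; k++)
            t -= val[k] * x1[ind[k]];
        x1[i] = t / val[ptr[i+1] - 1];
    }
}

/* (2) bhat2 = b2 - L2*x1 is one rectangular CSR SpMV followed by a vector subtract. */

/* (3) Dense forward substitution on the trailing triangle LD (n2 x n2, column-major);
       this is the role DTRSV plays in the tuned implementation. */
void dense_trsv_lower(int n2, const double *LD, double *x2)   /* x2 holds bhat2 on entry */
{
    for (int j = 0; j < n2; j++) {
        x2[j] /= LD[j + j * n2];
        for (int i = j + 1; i < n2; i++)
            x2[i] -= LD[i + j * n2] * x2[j];
    }
}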

Page 227: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

SpTS Performance: Power3

Page 228: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.
Page 229: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Summary of SpTS and AAT*x Results

• SpTS — Similar to SpMV– 1.8x speedups; limited benefit from low-level tuning

• AAᵀx, AᵀAx– Cache interleaving only: up to 1.6x speedups– Reg + cache: up to 4x speedups

• 1.8x speedup over register only

– Similar heuristic; same accuracy (~ 10% optimal)– Further from upper bounds: 60—80%

• Opportunity for better low-level tuning a la PHiPAC/ATLAS

• Matrix triple products? Ak*x?– Preliminary work

Page 230: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Register Blocking: Speedup

Page 231: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Register Blocking: Performance

Page 232: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Register Blocking: Fraction of Peak

Page 233: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Confidence Interval Estimation

Page 234: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Costs of Tuning

Page 235: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Splitting + UBCSR: Pentium III

Page 236: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Splitting + UBCSR: Power4

Page 237: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Splitting+UBCSR Storage: Power4

Page 238: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.
Page 239: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Example: Variable Block Row (Matrix #13)

Page 240: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Dense Tuning is Hard, Too

• Even dense matrix multiply can be notoriously difficult to tune

Page 241: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Dense matrix multiply: surprising performance as register tile size varies.

Page 242: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.
Page 243: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Preliminary Results (Matrix Set 2): Itanium 2

[Figure: Itanium 2 performance by matrix category — Dense, FEM, FEM (var), Bio, Econ, Stat, LP, Web/IR]

Page 244: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Multiple Vector Performance

Page 245: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.
Page 246: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

What about the Google Matrix?

• Google approach– Approx. once a month: rank all pages using connectivity

structure• Find dominant eigenvector of a matrix

– At query-time: return list of pages ordered by rank• Matrix: A = α·G + (1-α)·(1/n)·u·uᵀ

– Markov model: Surfer follows a link with probability α, jumps to a random page with probability 1-α

– G is n x n connectivity matrix [n ≈ 3 billion]• gij is non-zero if page i links to page j• Normalized so each column sums to 1• Very sparse: about 7—8 non-zeros per row (power law dist.)

– u is a vector of all 1 values– Steady-state probability xi of landing on page i is solution to x = A·x

• Approximate x by power method: x ≈ Aᵏ·x0

– In practice, k ≈ 25

Page 247: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.
Page 248: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

MAPS Benchmark Example: Power4

Page 249: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

MAPS Benchmark Example: Itanium 2

Page 250: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Saavedra-Barrera Example: Ultra 2i

Page 251: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.
Page 252: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Summary of Results: Pentium III

Page 253: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Summary of Results: Pentium III (3/3)

Page 254: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Execution Time Breakdown (PAPI): Matrix 40

Page 255: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Preliminary Results (Matrix Set 1): Itanium 2

[Figure: Itanium 2 performance by matrix category — Dense, FEM, FEM (var), Assorted, LP]

Page 256: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Tuning Sparse Triangular Solve (SpTS)

• Compute x=L⁻¹·b where L sparse lower triangular, x & b dense

• L from sparse LU has rich dense substructure– Dense trailing triangle can account for 20—90% of

matrix non-zeros

• SpTS optimizations– Split into sparse trapezoid and dense trailing triangle– Use tuned dense BLAS (DTRSV) on dense triangle– Use Sparsity register blocking on sparse part

• Tuning parameters– Size of dense trailing triangle– Register block size

Page 257: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Sparse Kernels and Optimizations

• Kernels– Sparse matrix-vector multiply (SpMV): y=A*x– Sparse triangular solve (SpTS): x=T⁻¹*b– y=AAᵀ*x, y=AᵀA*x– Powers (y=Aᵏ*x), sparse triple-product (R*A*Rᵀ), …

• Optimization techniques (implementation space)– Register blocking– Cache blocking– Multiple dense vectors (x)– A has special structure (e.g., symmetric, banded, …)– Hybrid data structures (e.g., splitting, switch-to-

dense, …)– Matrix reordering

• How and when do we search?– Off-line: Benchmark implementations– Run-time: Estimate matrix properties, evaluate

performance models based on benchmark data

Page 258: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Cache Blocked SpMV on LSI Matrix: Ultra 2i

A: 10k x 255k, 3.7M non-zeros

Baseline: 16 Mflop/s

Best block size & performance: 16k x 64k, 28 Mflop/s

Page 259: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Cache Blocking on LSI Matrix: Pentium 4

A: 10k x 255k, 3.7M non-zeros

Baseline: 44 Mflop/s

Best block size & performance: 16k x 16k, 210 Mflop/s

Page 260: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Cache Blocked SpMV on LSI Matrix: Itanium

A: 10k x 255k, 3.7M non-zeros

Baseline: 25 Mflop/s

Best block size & performance: 16k x 32k, 72 Mflop/s

Page 261: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Cache Blocked SpMV on LSI Matrix: Itanium 2

A: 10k x 255k, 3.7M non-zeros

Baseline: 170 Mflop/s

Best block size & performance: 16k x 65k, 275 Mflop/s

Page 262: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Inter-Iteration Sparse Tiling (1/3)

• [Strout, et al., ‘01]• Let A be 6x6 tridiagonal• Consider y=A²x

– t=Ax, y=At

• Nodes: vector elements• Edges: matrix elements aij

[Figure: dependence graph with nodes x1–x5, t1–t5, y1–y5]

Page 263: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Inter-Iteration Sparse Tiling (2/3)

• [Strout, et al., ‘01]• Let A be 6x6 tridiagonal• Consider y=A²x

– t=Ax, y=At

• Nodes: vector elements• Edges: matrix elements aij

• Orange = everything needed to compute y1

– Reuse a11, a12

[Figure: dependence graph with nodes x1–x5, t1–t5, y1–y5]

Page 264: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Inter-Iteration Sparse Tiling (3/3)

• [Strout, et al., ‘01]• Let A be 6x6 tridiagonal• Consider y=A²x

– t=Ax, y=At

• Nodes: vector elements• Edges: matrix elements aij

• Orange = everything needed to compute y1

– Reuse a11, a12

• Grey = y2, y3

– Reuse a23, a33, a43

[Figure: dependence graph with nodes x1–x5, t1–t5, y1–y5]

Page 265: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Inter-Iteration Sparse Tiling: Issues

• Tile sizes (colored regions) grow with no. of iterations and increasing out-degree– G likely to have a few

nodes with high out-degree (e.g., Yahoo)

• Mathematical tricks to limit tile size?– Judicious dropping of

edges [Ng’01]

[Figure: dependence graph with nodes x1–x5, t1–t5, y1–y5]

Page 266: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Summary and Questions

• Need to understand matrix structure and machine– BeBOP: suite of techniques to deal with different sparse

structures and architectures• Google matrix problem

– Established techniques within an iteration– Ideas for inter-iteration optimizations– Mathematical structure of problem may help

• Questions– Structure of G?– What are the computational bottlenecks?– Enabling future computations?

• E.g., topic-sensitive PageRank multiple vector version [Haveliwala ’02]

– See www.cs.berkeley.edu/~richie/bebop/intel/google for more info, including more complete Itanium 2 results.

Page 267: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Exploiting Matrix Structure

• Symmetry (numerical or structural)– Reuse matrix entries– Can combine with register blocking, multiple vectors,

• Matrix splitting– Split the matrix, e.g., into r x c and 1 x 1– No fill overhead

• Large matrices with random structure– E.g., Latent Semantic Indexing (LSI) matrices– Technique: cache blocking

• Store matrix as 2^i x 2^j sparse submatrices• Effective when x vector is large• Currently, search to find fastest size

Page 268: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Symmetric SpMV Performance: Pentium 4

Page 269: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

SpMV with Split Matrices: Ultra 2i

Page 270: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Cache Blocking on Random Matrices: Itanium

Speedup on four banded random matrices.

Page 271: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Sparse Kernels and Optimizations

• Kernels– Sparse matrix-vector multiply (SpMV): y=A*x– Sparse triangular solve (SpTS): x=T⁻¹*b– y=AAᵀ*x, y=AᵀA*x– Powers (y=Aᵏ*x), sparse triple-product (R*A*Rᵀ), …

• Optimization techniques (implementation space)– Register blocking– Cache blocking– Multiple dense vectors (x)– A has special structure (e.g., symmetric, banded, …)– Hybrid data structures (e.g., splitting, switch-to-dense, …)– Matrix reordering

• How and when do we search?– Off-line: Benchmark implementations– Run-time: Estimate matrix properties, evaluate

performance models based on benchmark data

Page 272: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Register Blocked SpMV: Pentium III

Page 273: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Register Blocked SpMV: Ultra 2i

Page 274: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Register Blocked SpMV: Power3

Page 275: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Register Blocked SpMV: Itanium

Page 276: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Possible Optimization Techniques

• Within an iteration, i.e., computing (G+u·uᵀ)·x once– Cache block G·x

• On linear programming matrices and matrices with random structure (e.g., LSI), 1.5—4x speedups

• Best block size is matrix and machine dependent

– Reordering and/or splitting of G to separate dense structure (rows, columns, blocks)

• Between iterations, e.g., (G+u·uᵀ)²·x– (G+u·uᵀ)²·x = G²x + (Gu)·uᵀx + u·(uᵀG)x + u·(uᵀu)·uᵀx

• Compute Gu, uᵀG, uᵀu once for all iterations• G²x: Inter-iteration tiling to read G only once

Page 277: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Multiple Vector Performance: Itanium

Page 278: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Sparse Kernels and Optimizations

• Kernels– Sparse matrix-vector multiply (SpMV): y=A*x– Sparse triangular solve (SpTS): x=T⁻¹*b– y=AAᵀ*x, y=AᵀA*x– Powers (y=Aᵏ*x), sparse triple-product (R*A*Rᵀ), …

• Optimization techniques (implementation space)– Register blocking– Cache blocking– Multiple dense vectors (x)– A has special structure (e.g., symmetric, banded, …)– Hybrid data structures (e.g., splitting, switch-to-

dense, …)– Matrix reordering

• How and when do we search?– Off-line: Benchmark implementations– Run-time: Estimate matrix properties, evaluate

performance models based on benchmark data

Page 279: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

SpTS Performance: Itanium

(See POHLL ’02 workshop paper, at ICS ’02.)

Page 280: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Sparse Kernels and Optimizations

• Kernels– Sparse matrix-vector multiply (SpMV): y=A*x– Sparse triangular solve (SpTS): x=T⁻¹*b– y=AAᵀ*x, y=AᵀA*x– Powers (y=Aᵏ*x), sparse triple-product (R*A*Rᵀ), …

• Optimization techniques (implementation space)– Register blocking– Cache blocking– Multiple dense vectors (x)– A has special structure (e.g., symmetric, banded, …)– Hybrid data structures (e.g., splitting, switch-to-dense, …)– Matrix reordering

• How and when do we search?– Off-line: Benchmark implementations– Run-time: Estimate matrix properties, evaluate

performance models based on benchmark data

Page 281: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Optimizing AAᵀ·x

• Kernel: y=A·Aᵀ·x, where A is sparse, x & y dense– Arises in linear programming, computation of SVD– Conventional implementation: compute z=Aᵀ·x, y=A·z

• Elements of A can be reused:

y = A·Aᵀ·x = [ a1 … an ] · [ a1 … an ]ᵀ · x = Σk=1..n ak · (akᵀ·x)

• When ak represent blocks of columns, can apply register blocking.
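
A minimal sketch of the cache-friendly AAᵀ·x computation described above, with A stored by columns (CSC) so each column aₖ is streamed once and reused for both the dot product and the update (names are illustrative; the tuned version processes register blocks of columns):

/* y += A*(A^T*x) computed column-by-column: y += sum_k a_k * (a_k^T x).
   A is m x n in CSC: column k's non-zeros are val[cptr[k] .. cptr[k+1]-1]. */
void aat_x(int n, const int *cptr, const int *rind, const double *val,
           const double *x, double *y)
{
    for (int k = 0; k < n; k++) {
        double t = 0.0;
        for (int p = cptr[k]; p < cptr[k+1]; p++)     /* t = a_k^T x */
            t += val[p] * x[rind[p]];
        for (int p = cptr[k]; p < cptr[k+1]; p++)     /* y += t * a_k, while a_k is still in cache */
            y[rind[p]] += t * val[p];
    }
}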

Page 282: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Optimized AAT*x Performance: Pentium III

Page 283: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Current Directions

• Applying new optimizations– Other split data structures (variable block, diagonal,

…)– Matrix reordering to create block structure– Structural symmetry

• New kernels (triple product R·A·Rᵀ, powers Aᵏ, …)• Tuning parameter selection• Building an automatically tuned sparse matrix

library– Extending the Sparse BLAS– Leverage existing sparse compilers as code

generation infrastructure– More thoughts on this topic tomorrow

Page 284: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Related Work

• Automatic performance tuning systems– PHiPAC [Bilmes, et al., ’97], ATLAS [Whaley & Dongarra

’98]– FFTW [Frigo & Johnson ’98], SPIRAL [Pueschel, et al.,

’00], UHFFT [Mirkovic and Johnsson ’00]– MPI collective operations [Vadhiyar & Dongarra ’01]

• Code generation– FLAME [Gunnels & van de Geijn, ’01]– Sparse compilers: [Bik ’99], Bernoulli [Pingali, et al., ’97]– Generic programming: Blitz++ [Veldhuizen ’98], MTL

[Siek & Lumsdaine ’98], GMCL [Czarnecki, et al. ’98], …

• Sparse performance modeling– [Temam & Jalby ’92], [White & Saddayappan ’97],

[Navarro, et al., ’96], [Heras, et al., ’99], [Fraguela, et al., ’99], …

Page 285: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

More Related Work

• Compiler analysis, models
– CROPS [Carter, Ferrante, et al.]; Serial sparse tiling [Strout ’01]
– TUNE [Chatterjee, et al.]
– Iterative compilation [O’Boyle, et al., ’98]
– Broadway compiler [Guyer & Lin, ’99]
– [Brewer ’95], ADAPT [Voss ’00]

• Sparse BLAS interfaces
– BLAST Forum (Chapter 3)
– NIST Sparse BLAS [Remington & Pozo ’94]; SparseLib++
– SPARSKIT [Saad ’94]
– Parallel Sparse BLAS [Fillipone, et al. ’96]

Page 286: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Context: Creating High-Performance Libraries

• Application performance dominated by a few computational kernels

• Today: Kernels hand-tuned by vendor or user
• Performance tuning challenges

– Performance is a complicated function of kernel, architecture, compiler, and workload

– Tedious and time-consuming

• Successful automated approaches
– Dense linear algebra: ATLAS/PHiPAC
– Signal processing: FFTW/SPIRAL/UHFFT

Page 287: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Cache Blocked SpMV on LSI Matrix: Itanium

Page 288: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Sustainable Memory Bandwidth

Page 289: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Multiple Vector Performance: Pentium 4

Page 290: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Multiple Vector Performance: Itanium

Page 291: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Multiple Vector Performance: Pentium 4

Page 292: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Optimized A*A^T*x Performance: Ultra 2i

Page 293: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Optimized A*A^T*x Performance: Pentium 4

Page 294: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Tuning Pays Off—PHiPAC

Page 295: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Tuning pays off – ATLAS

Extends applicability of PHiPAC; incorporated in Matlab (along with the rest of LAPACK)

Page 296: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Register Tile Sizes (Dense Matrix Multiply)

333 MHz Sun Ultra 2i

2-D slice of 3-D space; implementations color-coded by performance in Mflop/s

16 registers, but 2-by-3 tile size fastest
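For intuition, here is a hand-written sketch of what one 2-by-3 register tile of C += A*B looks like; an actual PHiPAC/ATLAS kernel is generated and scheduled per architecture, so this is only illustrative.

/* One 2x3 register tile of C += A*B (row-major; lda/ldb/ldc are row strides).
 * The six partial sums live in registers across the K loop; a real tuned
 * kernel would add architecture-specific unrolling and scheduling. */
void mm_tile_2x3(int K, const double *A, int lda, const double *B, int ldb,
                 double *C, int ldc)
{
    double c00 = 0, c01 = 0, c02 = 0, c10 = 0, c11 = 0, c12 = 0;
    for (int k = 0; k < K; k++) {
        double a0 = A[0*lda + k], a1 = A[1*lda + k];
        double b0 = B[k*ldb + 0], b1 = B[k*ldb + 1], b2 = B[k*ldb + 2];
        c00 += a0*b0; c01 += a0*b1; c02 += a0*b2;
        c10 += a1*b0; c11 += a1*b1; c12 += a1*b2;
    }
    C[0*ldc+0] += c00; C[0*ldc+1] += c01; C[0*ldc+2] += c02;
    C[1*ldc+0] += c10; C[1*ldc+1] += c11; C[1*ldc+2] += c12;
}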

Page 297: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

High Precision GEMV (XBLAS)

Page 298: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

High Precision Algorithms (XBLAS)

• Double-double (high-precision word represented as a pair of doubles)
– Many variations on these algorithms; we currently use Bailey’s

• Exploiting extra-wide registers
– Suppose s(1), …, s(n) have f-bit fractions, and SUM has an F-bit fraction with F > f
– Consider the following algorithm for S = Σ_{i=1..n} s(i) (a C sketch appears at the end of this slide):
• Sort so that |s(1)| ≥ |s(2)| ≥ … ≥ |s(n)|
• SUM = 0; for i = 1 to n, SUM = SUM + s(i); end for; sum = SUM

– Theorem (D., Hida): suppose F < 2f (less than double the precision)
• If n ≤ 2^(F-f) + 1, then the error is ≤ 1.5 ulps
• If n = 2^(F-f) + 2, then the error is ≤ 2^(2f-F) ulps (can be ≥ 1)
• If n ≥ 2^(F-f) + 3, then the error can be arbitrary (S ≠ 0 but sum = 0)

– Examples
• s(i) double (f = 53), SUM double extended (F = 64)
– accurate if n ≤ 2^11 + 1 = 2049
• Dot product of single-precision x(i) and y(i)
– s(i) = x(i)*y(i) (f = 2*24 = 48), SUM double extended (F = 64)
– accurate if n ≤ 2^16 + 1 = 65537
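A sketch of the extra-wide-register summation above in C, assuming long double is the x86 80-bit extended format (F = 64); on platforms where long double is the same as double, it degenerates to ordinary summation.

#include <math.h>
#include <stdlib.h>

/* Compare by decreasing absolute value, for qsort. */
static int cmp_absdesc(const void *pa, const void *pb)
{
    double a = fabs(*(const double *)pa), b = fabs(*(const double *)pb);
    return (a < b) - (a > b);
}

/* Sum n doubles (f = 53) in a wider accumulator, largest magnitudes first.
 * If long double is 80-bit extended (F = 64), the theorem above gives an
 * error of at most ~1.5 ulps for n <= 2^(64-53) + 1 = 2049 terms. */
double sum_sorted_extended(double *s, size_t n)
{
    qsort(s, n, sizeof(double), cmp_absdesc);   /* |s[0]| >= |s[1]| >= ... */
    long double sum = 0.0L;
    for (size_t i = 0; i < n; i++)
        sum += s[i];
    return (double)sum;
}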

Page 299: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

More Extra Slides

Page 300: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Opteron (1.4GHz, 2.8GFlop peak)

Beat ATLAS DGEMV’s 365 Mflops

Nonsymmetric peak = 504 MFlops; Symmetric peak = 612 MFlops

Page 301: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Awards

• Best Paper, Intern. Conf. Parallel Processing, 2004
– “Performance models for evaluation and automatic performance tuning of symmetric sparse matrix-vector multiply”

• Best Student Paper, Intern. Conf. Supercomputing, Workshop on Performance Optimization via High-Level Languages and Libraries, 2003
– Best Student Presentation too, to Richard Vuduc
– “Automatic performance tuning and analysis of sparse triangular solve”

• Finalist, Best Student Paper, Supercomputing 2002
– To Richard Vuduc
– “Performance Optimization and Bounds for Sparse Matrix-vector Multiply”

• Best Presentation Prize, MICRO-33: 3rd ACM Workshop on Feedback-Directed Dynamic Optimization, 2000
– To Richard Vuduc
– “Statistical Modeling of Feedback Data in an Automatic Tuning System”

Page 302: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Accuracy of the Tuning Heuristics (4/4)

Page 303: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Can Match DGEMV Performance

Page 304: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Fraction of Upper Bound Across Platforms

Page 305: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Achieved Performance and Machine Balance

• Machine balance [Callahan ’88; McCalpin ’95]
– Balance = Peak Flop Rate / Bandwidth (flops/double)
– Lower is better (i.e., one can hope to reach a higher fraction of the peak flop rate)
– A worked example follows this list

• Ideal balance for mat-vec: 2 flops/double
– For SpMV, even less
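A worked example with purely hypothetical numbers (illustrative, not a measured machine):

\[
\text{Balance}
  = \frac{\text{peak flop rate}}{\text{bandwidth (doubles/s)}}
  = \frac{6\times 10^{9}\ \text{flops/s}}{(4\times 10^{9}\ \text{B/s})/(8\ \text{B/double})}
  = 12\ \text{flops/double}
\]

so a kernel that performs only about 2 flops per matrix double, like mat-vec, could sustain at most roughly 2/12 ≈ 17% of peak on such a machine.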

Page 306: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.
Page 307: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Where Does the Time Go?

• Most time assigned to memory
• Caches “disappear” when line sizes are equal

– Strictly increasing line sizes

Time = Σ_{i≥1} Hits_i · α_i + Hits_mem · α_mem
(Hits_i = hits at cache level i, α_i = access time of level i; α_mem = memory access time)

Page 308: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Execution Time Breakdown: Matrix 40

Page 309: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Execution Time Breakdown (PAPI): Matrix 40

Page 310: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Speedups with Increasing Line Size

Page 311: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Summary: Performance Upper Bounds

• What is the best we can do for SpMV?
– Limits to low-level tuning of blocked implementations
– Refinements?

• What machines are good for SpMV?
– Partial answer: balance characterization

• Architectural consequences?
– It helps to have strictly increasing line sizes

Page 312: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Evaluating algorithms and machines for SpMV

• Metrics
– Speedups
– Mflop/s (“fair” flops)
– Fraction of peak

• Questions
– Speedups are good, but what is “the best”?
• Independent of instruction scheduling, selection
• Can SpMV be further improved or not?

– What machines are “good” for SpMV?

Page 313: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Tuning Dense BLAS —PHiPAC

Page 314: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Tuning Dense BLAS– ATLAS

Extends applicability of PHiPAC; incorporated in Matlab (along with the rest of LAPACK)

Page 315: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.
Page 316: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Statistical Models for Automatic Tuning

• Idea 1: Statistical criterion for stopping a search
– A general search model
• Generate an implementation
• Measure its performance
• Repeat
– Stop when the probability of being within ε of optimal falls below a threshold (a simplified search sketch appears at the end of this slide)

• Can estimate distribution on-line

• Idea 2: Statistical performance models
– Problem: Choose 1 among m implementations at run-time
– Sample performance off-line, build statistical model
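A much-simplified proxy for Idea 1 (not the statistical criterion from the papers): random search that stops once samples close to the current best have become empirically rare, so a further random sample is unlikely to improve on it by more than eps. The function pointer measure() is a placeholder for "generate and time one implementation".

/* Random search with a crude early-stopping rule (illustrative only).
 * measure(t) returns the performance (higher is better) of one randomly
 * generated implementation; eps and alpha play roles loosely analogous to
 * the tolerance and probability threshold on the slide. */
double search_with_early_stop(double (*measure)(int), int max_trials,
                              double eps, double alpha)
{
    double best = 0.0;
    int near_best = 0;                    /* samples within eps of the best */
    for (int t = 1; t <= max_trials; t++) {
        double perf = measure(t);         /* generate + time one candidate  */
        if (perf > best) best = perf;
        if (perf >= best * (1.0 - eps)) near_best++;
        /* After a warm-up period, stop if coming close to the best has
         * become rare: a new sample is then unlikely to beat it by much. */
        if (t >= 30 && (double)near_best / t < alpha)
            break;
    }
    return best;
}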

Page 317: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

SpMV Historical Trends: Fraction of Peak

Page 318: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Motivation for Automatic Performance Tuning of SpMV

• Historical trends
– Sparse matrix-vector multiply (SpMV): 10% of peak or less

• Performance depends on machine, kernel, matrix
– Matrix known at run-time
– Best data structure + implementation can be surprising

• Our approach: empirical performance modeling and algorithm search

Page 319: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Accuracy of the Tuning Heuristics (3/4)

Page 320: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Accuracy of the Tuning Heuristics (3/4) (DGEMV shown for comparison)

Page 321: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Sparse CALA Summary

• Optimizing one SpMV at a time is too limiting
• Take k steps of a Krylov subspace method (a baseline sketch follows this list)
– GMRES, CG, Lanczos, Arnoldi
– Assume the matrix is “well-partitioned,” with modest surface-to-volume ratio
– Parallel implementation
• Conventional: O(k log p) messages
• CALA: O(log p) messages – optimal
– Serial implementation
• Conventional: O(k) moves of data from slow to fast memory
• CALA: O(1) moves of data – optimal
– Can incorporate some preconditioners
• Need to be able to “compress” interactions between distant i, j
• Hierarchical, semiseparable matrices, …
– Lots of speedup possible (measured and modeled)
• Price: some redundant computation
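For contrast, the conventional way to produce the vectors x, A*x, …, A^k*x is k back-to-back SpMVs, each of which streams the whole matrix from slow memory; the communication-avoiding matrix-powers kernel computes the same basis while reading A only O(1) times (not shown here). This sketch reuses spmv_csr from the earlier CSR sketch and is illustrative only.

/* Conventional Krylov basis: V[j] = A^j * x for j = 0..k, via k back-to-back
 * CSR SpMVs. V holds (k+1) vectors of length n, stored contiguously. */
void krylov_basis_conventional(size_t n, const size_t *rowptr,
                               const size_t *colind, const double *val,
                               const double *x, double *V, int k)
{
    for (size_t i = 0; i < n; i++) V[i] = x[i];            /* V[0] = x        */
    for (int j = 1; j <= k; j++)                            /* V[j] = A*V[j-1] */
        spmv_csr(n, rowptr, colind, val,
                 &V[(size_t)(j - 1) * n], &V[(size_t)j * n]);
}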

Page 322: CS267 – Lecture 15 Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication (SpMV) James Demmel demmel/cs267_Spr14.

Potential Impact on Applications: T3P

• Application: accelerator design [Ko]
• 80% of time spent in SpMV
• Relevant optimization techniques
– Symmetric storage
– Register blocking

• On single-processor Itanium 2
– 1.68x speedup
• 532 Mflops, or 15% of 3.6 GFlop peak
– 4.4x speedup with multiple (8) vectors
• 1380 Mflops, or 38% of peak