Apache SystemML Optimizer and Runtime techniques by Matthias Boehm

Transcript of "Apache SystemML Optimizer and Runtime techniques" by Matthias Boehm

S7/8: SystemML's Optimizer and Runtime

Matthias Boehm (1), Arvind C. Surve (2)
(1) IBM Research – Almaden, (2) IBM Spark Technology Center


Abstraction: The Good, the Bad and the Ugly


q = t(X) %*% (w * (X %*% v))

[adapted from Peter Alvaro: "I See What You Mean", Strange Loop, 2015]

The Good: Simple & Analysis-Centric, Data Independence, Platform Independence, Adaptivity

The Bad (hidden beneath the abstraction): (Missing) Size Information, Operator Selection, (Missing) Rewrites, Distributed Operations, Distributed Storage, (Implicit) Copy-on-Write, Data Skew, Load Imbalance, Latency, Complex Control Flow, Local / Remote Memory Budgets

The Ugly: Expectations ≠ Reality

→ Understanding of optimizer and runtime techniques underpinning declarative, large-scale ML
→ Efficiency & Performance

Outline
§ Common Framework
§ Optimizer-Centric Techniques
§ Runtime-Centric Techniques

ML Program Compilation

§ Script
§ Operator DAG
  – a.k.a. "graph", a.k.a. intermediate representation (IR)
§ Runtime Plan
  – Compiled runtime plans (e.g., instructions)
  – Interpreted plans

Example script:

while(...) {
  q = t(X) %*% (w * (X %*% v))
  ...
}

Operator DAG for the loop body (nodes are operations, edges are data dependencies incl. data flow properties): the leafs X, v, and w feed ba(+*) for X %*% v, b(*) for the element-wise multiply with w, and r(t) for the transpose of X, whose results meet in a second ba(+*) producing the root q.

Example runtime instruction:
SPARK mapmmchain X.MATRIX.DOUBLE w.MATRIX.DOUBLE v.MATRIX.DOUBLE _mVar4.MATRIX.DOUBLE XtwXv

DAG properties: [multiple] consumers of intermediates, [multiple] DAG roots (outputs), [multiple] DAG leafs (inputs), no cycles.
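To make the DAG structure concrete, here is a minimal sketch of such an operator DAG (plain Python; the Hop class and its fields are invented for illustration and are not SystemML's internal classes):

from dataclasses import dataclass, field

@dataclass
class Hop:
    """Hypothetical operator-DAG node: an operation plus its data dependencies."""
    op: str                                      # e.g., 'read', 'r(t)', 'b(*)', 'ba(+*)'
    inputs: list = field(default_factory=list)   # child Hops (data dependencies)

# q = t(X) %*% (w * (X %*% v))
X, w, v = Hop('read X'), Hop('read w'), Hop('read v')
Xv  = Hop('ba(+*)', [X, v])      # X %*% v
wXv = Hop('b(*)',   [w, Xv])     # w * (X %*% v)
tX  = Hop('r(t)',   [X])         # t(X)
q   = Hop('ba(+*)', [tX, wXv])   # DAG root (output)

# Note: X has multiple consumers (Xv and tX); the DAG is acyclic with leafs X, w, v.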


Distributed Matrix Representation

§ Collection of "Matrix Blocks" (and keys)
  – a.k.a. "tiles", a.k.a. "chunks"
  – Bag semantics (duplicates, unordered)
  – Logical (fixed-size) blocking: + join processing / independence, - sparsity skew
  – E.g., SystemML on Spark: JavaPairRDD<MatrixIndexes,MatrixBlock>
  – Blocks encoded independently (dense/sparse)
§ Partitioning
  – Logical partitioning (e.g., row-/column-wise)
  – Physical partitioning (e.g., hash / grid)

[Figure: logical blocking of a 3,400 x 2,700 matrix (w/ block size Bc = 1,000); physical blocking and partitioning]
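A minimal sketch of fixed-size logical blocking (plain Python, illustrative only; the block size Bc = 1,000 follows the figure): mapping a cell to its block key and enumerating the block grid of the 3,400 x 2,700 example matrix:

import math

BC = 1000  # block size (rows and columns)

def block_key(i, j, bc=BC):
    """Map a 0-based cell (i, j) to its (row-block, column-block) key."""
    return (i // bc, j // bc)

def block_grid(rows, cols, bc=BC):
    """Enumerate all block keys and their actual (possibly boundary) sizes."""
    for bi in range(math.ceil(rows / bc)):
        for bj in range(math.ceil(cols / bc)):
            yield (bi, bj), (min(bc, rows - bi * bc), min(bc, cols - bj * bc))

# A 3,400 x 2,700 matrix yields a 4 x 3 grid of blocks, with boundary
# blocks of 400 rows and/or 700 columns.
print(len(list(block_grid(3400, 2700))))  # 12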


Distributed Matrix Representation (2)

§ Matrix Block
  – Most operations defined here
  – Local matrix: single block
  – Different representations
§ Common Block Representations
  – Dense (linearized arrays)
  – MCSR (modified CSR)
  – CSR (compressed sparse rows), CSC
  – COO (coordinate matrix)
  – ...

Example 3x3 matrix:

  .7  0  .1
  .2 .4   0
   0 .3   0

Dense (row-major): values [.7, 0, .1, .2, .4, 0, 0, .3, 0]

CSR: values [.7, .1, .2, .4, .3], column indexes [0, 2, 0, 1, 1], row pointers [0, 2, 4, 5]

COO: values [.7, .1, .2, .4, .3], column indexes [0, 2, 0, 1, 1], row indexes [0, 0, 1, 1, 2]

MCSR: one sparse row array per row: row 0: (0, .7), (2, .1); row 1: (0, .2), (1, .4); row 2: (1, .3)
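As an illustration (plain Python, not SystemML's MatrixBlock implementation), the CSR encoding of the example matrix above with a row-access helper:

# CSR encoding of the 3x3 example matrix above.
vals   = [0.7, 0.1, 0.2, 0.4, 0.3]   # non-zero values, row-major
colidx = [0,   2,   0,   1,   1  ]   # column index per value
rowptr = [0, 2, 4, 5]                # rowptr[i]..rowptr[i+1] delimit row i

def csr_row(i):
    """Return the non-zeros of row i as (column, value) pairs."""
    return [(colidx[k], vals[k]) for k in range(rowptr[i], rowptr[i + 1])]

print(csr_row(1))  # [(0, 0.2), (1, 0.4)]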


Common Workload Characteristics

§ Common Operations
  – Matrix-vector X %*% v (e.g., LinregCG, Logreg, GLM, L2SVM, PCA)
  – Vector-matrix t(v) %*% X (e.g., LinregCG, LinregDS, Logreg, GLM, L2SVM)
  – MMChain t(X) %*% (w * (X %*% v)) (e.g., LinregCG, Logreg, GLM)
  – TSMM t(X) %*% X (e.g., LinregDS, PCA)
§ Data Characteristics
  – Tall and skinny matrices
  – Wide matrices often sparse
  – Non-uniform sparsity
  – Transformed data often w/ low column cardinality

Example: LinregCG (Conjugate Gradient)

X = read($1);  # n x m matrix
y = read($2);  # n x 1 vector
maxi = 50; lambda = 0.001;
intercept = $3;
...
r = -(t(X) %*% y);
norm_r2 = sum(r * r); p = -r;
w = matrix(0, ncol(X), 1); i = 0;
while(i < maxi & norm_r2 > norm_r2_trgt) {
  q = (t(X) %*% (X %*% p)) + lambda * p;
  alpha = norm_r2 / as.scalar(t(p) %*% q);
  w = w + alpha * p;
  old_norm_r2 = norm_r2;
  r = r + alpha * q;
  norm_r2 = sum(r * r);
  beta = norm_r2 / old_norm_r2;
  p = -r + beta * p; i = i + 1;
}
write(w, $4, format="text");


Excursus: Roofline Analysis for Matrix-Vector Multiply

§ Single node: 2x6 E5-2440 @ 2.4 GHz–2.9 GHz, DDR3 RAM @ 1.3 GHz (ECC)
  – Max mem bandwidth (local): 2 sockets x 3 channels x 8 B x 1.3 G transfers/s → 2 x 32 GB/s
  – Max mem bandwidth (QPI, full duplex) → 2 x 12.8 GB/s
  – Max floating point ops: 12 cores x 2*4 dFP units x 2.4 GHz → 2 x 115.2 GFlop/s
§ Roofline Analysis
  – Processor performance
  – Off-chip memory traffic

[S. Williams, A. Waterman, D. A. Patterson: Roofline: An Insightful Visual Performance Model for Multicore Architectures. Commun. ACM 52(4): 65-76 (2009)]

[Roofline plot: measured SystemML Mv, SystemML Mt(Mv), and SystemML MM (n=768) against the memory-bandwidth and peak-compute rooflines; a ~36x gap separates the matrix-vector kernels from the compute-bound MM (n=768)]

→ IO-bound matrix-vector multiplication
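A back-of-the-envelope roofline check (a sketch; the machine constants come from the bullets above) of why dense matrix-vector multiply is memory-bandwidth-bound:

# Roofline sanity check for dense matrix-vector multiply (double precision).
peak_flops = 2 * 115.2e9   # 2 sockets x 115.2 GFlop/s
peak_bw    = 2 * 32e9      # 2 sockets x 32 GB/s local memory bandwidth

# Mv over an n x m matrix: ~2*n*m flops, but ~8*n*m bytes read for X alone,
# so the operational intensity is at most 2/8 = 0.25 flops/byte.
intensity  = 2.0 / 8.0

attainable = min(peak_flops, intensity * peak_bw)   # roofline model
print(f"{attainable/1e9:.1f} GFlop/s attainable vs {peak_flops/1e9:.1f} "
      f"GFlop/s peak ({peak_flops/attainable:.0f}x gap)")  # ~16 vs 230.4 -> ~14x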


Outline
§ Common Framework
§ Optimizer-Centric Techniques
  – Basic DAG Compilation (incl. memory estimates)
  – Rewrites (DAG, program)
  – Operator Selection (incl. fused operators)
  – Dynamic Recompilation
  – Inter-Procedural Analysis (IPA)
  – (Hands-On Lab: Rewrites and Handling of Size Information)
  – Ongoing Research (resource optimization, SPOOF)
§ Runtime-Centric Techniques


Recap: SystemML's Compilation Chain

[Figure: compilation chain from script to runtime plan, with EXPLAIN hops, EXPLAIN runtime, STATS, and DEBUG hooks, and the IPA, ParFor Opt, Resource Opt, and GDF Opt components; various optimization decisions are taken at different compilation steps]

HOP: high-level operator
LOP: low-level operator

[Matthias Boehm et al.: SystemML's Optimizer: Plan Generation for Large-Scale Machine Learning Programs. IEEE Data Eng. Bull. 2014]


Basic HOP and LOP DAG Compilation

§ Example LinregDS

X = read($1);
y = read($2);
intercept = $3; lambda = 0.001;
...
if( intercept == 1 ) {
  ones = matrix(1, nrow(X), 1);
  X = append(X, ones);
}
I = matrix(1, ncol(X), 1);
A = t(X) %*% X + diag(I) * lambda;
b = t(X) %*% y;
beta = solve(A, b);
...
write(beta, $4);

Scenario: X: 10^8 x 10^3 (10^11 nnz), y: 10^8 x 1 (10^8 nnz)
Cluster config: client mem 4 GB, map/reduce mem 2 GB

HOP DAG (after rewrites), annotated with worst-case memory estimates: the reads of X (10^8 x 10^3, 10^11 nnz → 800 GB) and y (10^8 x 1, 10^8 nnz → 800 MB) feed r(t) (1.6 TB) and the two ba(+*) matrix multiplies (1.6 TB and 800 GB of inputs, but only 8 MB and 8 KB outputs); dg(rand) (10^3 x 1, 10^3 nnz → 8 KB) and r(diag) (172 KB) construct the regularizer, followed by b(+) (16 MB), b(solve), and write.

LOP DAG (after rewrites), with execution types per operator: r'(CP) and part(CP) (800 MB) prepare y, mapmm(MR) and tsmm(MR) compute the two large multiplies map-side with ak+(MR) aggregation in the reduce phase, and a final r'(CP) plus the remaining CP operators produce the small (16 KB) result.

→ Hybrid Runtime Plans:
• Size propagation over ML programs
• Worst-case sparsity / memory estimates
• Integrated CP / MR / Spark runtime
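A minimal sketch of such worst-case memory estimation (illustrative only; SystemML's actual estimator covers many more operators and sparse-format overheads, and the 16 bytes per non-zero is a rough assumption):

def mem_estimate(rows, cols, nnz=None):
    """Worst-case memory estimate of a matrix in bytes: dense unless a
    known sparsity makes a sparse representation clearly smaller."""
    dense = rows * cols * 8          # 8-byte doubles, linearized
    if nnz is None:
        return dense                 # unknown sparsity: assume dense
    sparse = nnz * 16                # rough CSR-like cost per non-zero
    return min(dense, sparse)

# LinregDS scenario: X is 10^8 x 10^3 with 10^11 nnz (i.e., fully dense).
print(mem_estimate(10**8, 10**3, 10**11) / 1e9, "GB")   # 800.0 GB
print(mem_estimate(10**8, 1, 10**8) / 1e6, "MB")        # 800.0 MB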


Static and Dynamic Rewrites

§ Types of Rewrites
  – Static: size-independent rewrites
  – Dynamic: size-dependent rewrites
§ Example Static Rewrites
  – Common Subexpression Elimination
  – Constant Folding
  – Static Algebraic Simplification Rewrites
  – Branch Removal
  – Right/Left Indexing Vectorization
  – For Loop Vectorization
  – Checkpoint Injection (caching)
  – Repartition Injection
§ Example Dynamic Rewrites
  – Matrix Multiplication Chain Optimization
  – Dynamic Algebraic Simplification Rewrites

Note the cascading rewrite effect (each rewrite enables other rewrites, IPA, and operator selection) and the high performance impact (direct/indirect); see the sketch below.
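A minimal sketch of two of these static rewrites and their cascading interaction (invented expression-tree encoding, not SystemML code): constant folding turns a literal-only predicate into a constant, which in turn enables branch removal:

# Expression trees as nested tuples: ('op', left, right) or literal values.
def fold(e):
    """Constant folding: evaluate subtrees whose children are all literals."""
    if not isinstance(e, tuple):
        return e
    op, l, r = e[0], fold(e[1]), fold(e[2])
    if isinstance(l, (int, float)) and isinstance(r, (int, float)):
        return {'+': l + r, '*': l * r, '==': l == r}[op]
    return (op, l, r)

def remove_branch(stmt):
    """Branch removal: if the folded predicate is a literal, inline one branch."""
    pred = fold(stmt['if'])
    if isinstance(pred, bool):
        return stmt['then'] if pred else stmt['else']
    return {'if': pred, 'then': stmt['then'], 'else': stmt['else']}

# Suppose intercept = 1 was already propagated as a constant:
prog = {'if': ('==', 1, 1), 'then': 'X = append(X, ones)', 'else': 'skip'}
print(remove_branch(prog))  # 'X = append(X, ones)' -- the dead branch is gone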


§ Static Simplification Rewrites (size-independent patterns)

Example Static Simplification Rewrites

Remove Unnecessary Operations:
  t(t(X)), X/1, X*1, X-0, -(-X) → X
  matrix(1,)/X → 1/X
  sum(t(X)) → sum(X)
  rand(,min=-1,max=1)*7 → rand(,min=-7,max=7)
  -rand(,min=-2,max=1) → rand(,min=-1,max=2)
  t(cbind(t(X),t(Y))) → rbind(X,Y)

Simplify Bushy Binary:
  X*(Y*(Z%*%v)) → (X*Y)*(Z%*%v)

Binary to Unary:
  X+X → 2*X
  X*X → X^2
  X-X*Y → X*(1-Y)
  X*(1-X) → sprop(X)
  1/(1+exp(-X)) → sigmoid(X)
  X*(X>0) → selp(X)
  (X-7)*(X!=0) → X -nz 7
  (X!=0)*log(X) → log_nz(X)
  aggregate(X,y,count) → aggregate(y,y,count)

Simplify Permutation Matrix Construction:
  outer(v,seq(1,N),"==") → rexpand(v,max=N,row)
  table(seq(1,nrow(v)),v,N) → rexpand(v,max=N,row)

Simplify Operation over Matrix Multiplication:
  trace(X%*%Y) → sum(X*t(Y))
  (X%*%Y)[7,3] → X[7,] %*% Y[,3]
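These rewrites preserve semantics, which can be spot-checked numerically; a small sketch (numpy as a stand-in for SystemML's operations) for two of the patterns above:

import numpy as np

rng = np.random.default_rng(7)
X, Y = rng.random((4, 5)), rng.random((5, 4))

# trace(X %*% Y) -> sum(X * t(Y)): avoids materializing the full product.
assert np.isclose(np.trace(X @ Y), np.sum(X * Y.T))

# (X %*% Y)[i, j] -> X[i, ] %*% Y[, j]: a single dot product instead of a
# full matrix multiply followed by indexing.
assert np.isclose((X @ Y)[2, 3], X[2, :] @ Y[:, 3])
print("rewrites verified")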


§ Dynamic Simplification Rewrites (size-dependent patterns)

Example Dynamic Simplification Rewrites

Remove / Simplify Unnecessary Indexing:
  X[a:b,c:d] = Y → X = Y                   iff dims(X)=dims(Y)
  X = Y[, 1] → X = Y                       iff dims(X)=dims(Y)
  X[,1]=Y; X[,2]=Z → X=cbind(Y,Z)          iff ncol(X)=2 (column-wise)

Fuse / Pushdown Operations:
  t(rand(10, 1)) → rand(1, 10)             iff nrow=1 or ncol=1
  sum(diag(X)) → trace(X)                  iff ncol(X)>1
  diag(X)*7 → diag(X*7)                    iff ncol(X)=1
  sum(X^2) → t(X)%*%X                      iff ncol(X)=1
  sum(X^2) → sumSq(X)                      iff ncol(X)>1

Remove Empty / Unnecessary Operations:
  X%*%Y → matrix(0,...)                    iff nnz(X)=0 | nnz(Y)=0
  X*Y → matrix(0,...), X+Y → X, X-Y → X    iff nnz(Y)=0
  round(X) → matrix(0), t(X) → matrix(0)   iff nnz(X)=0
  X*(Y%*%matrix(1,)) → X*Y                 iff ncol(Y)=1

Simplify Aggregates / Scalar Operations:
  rowSums(X) → sum(X)                      iff nrow(X)=1
  rowSums(X) → X                           iff ncol(X)=1
  rowSums(X*Y) → X%*%t(Y)                  iff nrow(Y)=1
  X*Y → X*as.scalar(Y)                     iff nrow(Y)=1 & ncol(Y)=1

Simplify Diag Matrix Multiplications:
  diag(X)%*%Y → Y*X                        iff ncol(X)=1 & ncol(Y)>1
  diag(X%*%Y) → rowSums(X*t(Y))            iff ncol(Y)>1


Matrix Multiplication Chain Optimization

§ Problem
  – Given a matrix multiplication chain (sequence) of n matrices M1, M2, ..., Mn
  – Matrix multiplication is associative
  – Find the optimal full parenthesization of the product M1 M2 ... Mn
§ Search Space Characteristics
  – Naive exhaustive search: Catalan numbers → Ω(4^n / n^(3/2))
  – Few distinct subproblems: any i and j w/ 1 ≤ i ≤ j ≤ n: Θ(n^2)
  – DP characteristics apply: (1) optimal substructure, (2) overlapping subproblems
  – Textbook dynamic programming algorithm: Θ(n^3) time, Θ(n^2) space
  – Best known algorithm: O(n log n)

[T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein: Introduction to Algorithms, Third Edition, The MIT Press, pages 370-377, 2009]
[T. C. Hu, M. T. Shing: Computation of Matrix Chain Products, Part II. SIAM J. Comput. 13(2): 228-251, 1984]

Example: for M1 (1k x 1k), M2 (1k x 1k), M3 (1k x 1), the order (M1 M2) M3 costs 2,002 MFLOPs, whereas M1 (M2 M3) costs only 4 MFLOPs.


Matrix Multiplication Chain Optimization (2)

Chain: M1 (10x7), M2 (7x5), M3 (5x1), M4 (1x3), M5 (3x9)

Cost matrix m (optimal #multiplications per subchain Mi..Mj):
  m[i,i] = 0 for all i
  m[1,2]=350  m[2,3]=35  m[3,4]=15  m[4,5]=27
  m[1,3]=105  m[2,4]=56  m[3,5]=72
  m[1,4]=135  m[2,5]=125
  m[1,5]=222

Example entry (w/ dimensions p1..p6 = 10, 7, 5, 1, 3, 9):
  m[1,3] = min( m[1,1] + m[2,3] + p1*p2*p4,
                m[1,2] + m[3,3] + p1*p3*p4 )
         = min( 0 + 35 + 10*7*1, 350 + 0 + 10*5*1 )
         = min( 105, 400 ) = 105


Matrix Multiplication Chain Optimization (3)

Chain: M1 (10x7), M2 (7x5), M3 (5x1), M4 (1x3), M5 (3x9)

Optimal split matrix s (split position k per subchain Mi..Mj):
  s[1,2]=1  s[2,3]=2  s[3,4]=3  s[4,5]=4
  s[1,3]=1  s[2,4]=3  s[3,5]=3
  s[1,4]=3  s[2,5]=3
  s[1,5]=3

Extracting the optimal parenthesization via getOpt(s,1,5), getOpt(s,1,3), getOpt(s,4,5):
  ( M1 M2 M3 M4 M5 )
  ( ( M1 M2 M3 ) ( M4 M5 ) )
  ( ( M1 ( M2 M3 ) ) ( M4 M5 ) )
→ ((M1 (M2 M3)) (M4 M5))
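A sketch of the textbook Θ(n^3) dynamic program (cf. Cormen et al.; plain Python, not SystemML's optimizer code) that reproduces the cost and split matrices above:

def mmchain_opt(p):
    """Matrix chain order DP. p[i-1] x p[i] are the dims of matrix Mi
    (0-based p list, i.e., p = [p1, ..., p6] from the slide).
    Returns cost matrix m and split matrix s (1-based indexing)."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):            # chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = float('inf')
            for k in range(i, j):        # split into M_i..M_k and M_{k+1}..M_j
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s

def get_opt(s, i, j):
    """Reconstruct the optimal parenthesization from the split matrix."""
    if i == j:
        return f"M{i}"
    return f"({get_opt(s, i, s[i][j])} {get_opt(s, s[i][j] + 1, j)})"

m, s = mmchain_opt([10, 7, 5, 1, 3, 9])  # M1..M5 as on this slide
print(m[1][5], get_opt(s, 1, 5))         # 222 ((M1 (M2 M3)) (M4 M5))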


Example Operator Selection: Matrix Multiplication

§ Hop-Lop Rewrites
  – Aggregation (w/o, singleblock/multiblock)
  – Partitioning (w/o, CP/MR, col/rowblock)
  – Empty block materialization in output
  – Transpose-MM rewrite: t(X) %*% y → t(t(y) %*% X)
  – CP degree of parallelism (multi-threaded mm)

Example: t(X) %*% y compiles (via the transpose-MM rewrite) into the LOP pipeline Transform(CP,') → Partition(CP,col) over y, MapMM(MR,left) against X, Group(MR) → Aggregate(MR,ak+) for the partial sums, and a final Transform(CP,').

Physical matrix multiplication operators per execution type:

  CP:
    MM         X %*% Y
    MMChain    t(X) %*% (w * (X %*% v))
    TSMM       t(X) %*% X
    PMM        rmr(diag(v)) %*% X

  MR / Spark (* Spark only):
    MapMM      X %*% Y
    MapMMChain t(X) %*% (w * (X %*% v))
    TSMM       t(X) %*% X
    ZipMM *    t(X) %*% Y
    PMM        rmr(diag(v)) %*% X
    CPMM       X %*% Y
    RMM        X %*% Y
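A minimal sketch of the kind of size-based decision involved (a hypothetical heuristic for illustration, not SystemML's actual selection logic): broadcast-based MapMM when one side fits in the memory budget, shuffle-based CPMM otherwise:

def choose_mm_operator(mem_x, mem_y, mem_out, budget):
    """Pick a physical matrix-multiply operator from worst-case memory
    estimates (bytes). Hypothetical heuristic for illustration."""
    if max(mem_x, mem_y, mem_out) <= budget:
        return "MM (CP)"    # everything fits locally
    if min(mem_x, mem_y) <= budget:
        return "MapMM"      # broadcast the small side, map-side multiply
    return "CPMM"           # both sides large: shuffle-based multiply

# Mlogreg-like setting: X ~800 GB, w ~800 MB, 2 GB per-task budget.
print(choose_mm_operator(800e9, 800e6, 8e6, 2e9))  # MapMM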


Example Fused Operators (1): MMChain

§ Matrix Multiplication Chains: q = t(X) %*% (w * (X %*% v))
  – Very common pattern
  – MV ops IO- / memory-bandwidth bound
  – Problem: data dependency forces two passes over X
→ Fused mmchain operator
  – Key observation: values of D are row-aligned wrt X
  – Single-pass operation (map-side in MR/Spark, cache-conscious in CP/GPU)

[Figure: Step 1: D = w * (X %*% v); Step 2a: q = t(X) %*% D, or equivalently Step 2b: t(q) = t(D) %*% X]

[Arash Ashari et al.: On optimizing machine learning workloads via kernel fusion. PPoPP 2015]
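A sketch of the single-pass idea (numpy-based, illustrative; SystemML's actual kernel is a cache-conscious, multi-threaded Java implementation): each row of X contributes to both the inner matrix-vector product and the outer aggregation, so X is streamed exactly once:

import numpy as np

def mmchain_fused(X, w, v):
    """Single-pass q = t(X) %*% (w * (X %*% v)): row i contributes
    (w[i] * dot(X[i], v)) * X[i] to q, so X is read exactly once."""
    q = np.zeros(X.shape[1])
    for i in range(X.shape[0]):      # one pass over X
        d_i = w[i] * (X[i] @ v)      # scalar of D, row-aligned wrt X
        q += d_i * X[i]
    return q

rng = np.random.default_rng(1)
X, w, v = rng.random((1000, 10)), rng.random(1000), rng.random(10)
assert np.allclose(mmchain_fused(X, w, v), X.T @ (w * (X @ v)))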


Example Fused Operators (2): WSLoss

§ Weighted Squared Loss: wsl = sum(W * (X - L %*% t(R))^2)
  – Common pattern for factorization algorithms
  – W and X usually very sparse (< 0.001)
  – Problem: the "outer" product L %*% t(R) creates three dense intermediates in the size of X
→ Fused wsloss operator
  – Key observations: sparse W* allows selective computation; the full aggregate significantly reduces memory requirements
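A sketch of the sparsity-exploiting computation (COO-style Python, illustrative only): only cells where W is non-zero are evaluated, so no dense intermediate of the size of X is materialized, and the full aggregate keeps just one scalar:

import numpy as np

def wsloss_fused(w_coo, X, L, R):
    """sum(W * (X - L %*% t(R))^2) evaluated over the non-zeros of W only.
    w_coo: iterable of (i, j, w_ij); L: n x k; R: m x k."""
    loss = 0.0
    for i, j, w_ij in w_coo:          # selective computation over nnz(W)
        e = X[i, j] - L[i] @ R[j]     # one cell of X - L %*% t(R)
        loss += w_ij * e * e          # full aggregate: a single scalar
    return loss

rng = np.random.default_rng(2)
n, m, k = 50, 40, 4
L, R, X = rng.random((n, k)), rng.random((m, k)), rng.random((n, m))
W = (rng.random((n, m)) < 0.05) * rng.random((n, m))     # sparse weights
w_coo = [(i, j, W[i, j]) for i, j in zip(*W.nonzero())]
assert np.isclose(wsloss_fused(w_coo, X, L, R),
                  np.sum(W * (X - L @ R.T) ** 2))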


Rewrites and Operator Selection in Action

§ Example: use case Mlogreg, X: 10^8 x 10^3, K=1 (2 classes), 2 GB mem
§ Applied Rewrites (recall the cascading rewrite effect)
  – Original DML snippet of the inner loop:
    Q = P[, 1:K] * (X %*% ssX_V);
    HV = t(X) %*% (Q - P[, 1:K] * (rowSums(Q) %*% matrix(1, rows=1, cols=K)));
  – After "remove unnecessary operations" ((1) matrix multiply, (2) unary aggregate):
    Q = P[, 1:K] * (X %*% ssX_V);
    HV = t(X) %*% (Q - P[, 1:K] * Q);
  – After "simplify distributive binary operation":
    Q = P[, 1:K] * (X %*% ssX_V);
    HV = t(X) %*% ((1 - P[, 1:K]) * Q);
  – After "simplify bushy binary operation":
    HV = t(X) %*% (((1 - P[, 1:K]) * P[, 1:K]) * (X %*% ssX_V));
  – After "fuse binary DAG to unary operation" (sample proportion):
    HV = t(X) %*% (sprop(P[, 1:K]) * (X %*% ssX_V));
§ Operator Selection
  – Exec type: MR, because the memory estimate is > 800 GB
  – MM type: MapMMChain, because the pattern is XtwXv and w = sprop(P[,1:K]) < 2 GB
  – CP partitioning of w into 32 MB chunks of rowblocks


Dynamic Recompilation – Motivation

§ Problem of unknown/changing sizes
  – Unknown or changing sizes and sparsity of intermediates (across loop iterations / conditional control flow)
  – These unknowns lead to very conservative fallback plans
§ Example ML Program Scenarios
  – Scripts w/ complex function call patterns
  – Scripts w/ UDFs
  – Data-dependent operators:
    Y = table( seq(1,nrow(X)), y );
    grad = t(X) %*% (P - Y);
  – Computed size expressions
  – Changing dimensions or sparsity
→ Dynamic recompilation techniques as a robust fallback strategy
  – Shares goals and challenges with adaptive query processing
  – However, ML domain-specific techniques and rewrites

Example: Stepwise LinregDS

while( continue ) {
  parfor( i in 1:n ) {
    if( fixed[1,i] == 0 ) {
      X = cbind(Xg, Xorig[,i]);
      AIC[1,i] = linregDS(X, y);
    }
  }
  # select & append best to Xg
}


Dynamic Recompilation – Compiler and Runtime

§ Optimizer Recompilation Decisions
  – Split HOP DAGs for recompilation: prevent unknowns but keep DAGs as large as possible; we split after reads w/ unknown sizes and after specific operators
  – Mark HOP DAGs for recompilation: MR due to unknown sizes / sparsity
§ Dynamic Recompilation at Runtime, on recompilation hooks (last-level program blocks, predicates, recompile-once functions, specific MR jobs); see the sketch below
  – Deep copy DAG (e.g., for non-reversible dynamic rewrites)
  – Update DAG statistics (based on exact symbol table metadata)
  – Dynamic rewrites (exact stats allow very aggressive rewrites)
  – Recompute memory estimates (w/ unconditional scope of a single DAG)
  – Generate runtime instructions (construct LOPs / instructions)

Example: for grad = t(X) %*% (P - Y), the initial plan carries unknowns (dimensions/sparsity of [1M x -1, -1]) and therefore compiles b(-) and ba(+*) to MR. At runtime, the symbol table holds exact metadata (X: 1M x 100, 99M nnz; P: 1M x 7, 7M nnz; Y: 1M x 7, 7M nnz); updating the DAG statistics propagates these sizes, and the operators recompile to CP.
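A minimal sketch of these runtime recompilation steps (invented plan and symbol-table structures, not SystemML's implementation):

import copy

def recompile(dag, symbol_table, budget=2e9):
    """Recompile one DAG at a runtime hook: deep-copy the plan, update
    statistics from exact metadata, recompute memory estimates, and
    re-select execution types."""
    dag = copy.deepcopy(dag)                      # keep the original plan intact
    for node in dag:
        meta = symbol_table.get(node["input"])
        if meta:                                  # exact dims/nnz now known
            node["rows"], node["cols"], node["nnz"] = meta
        known = node["rows"] > 0 and node["cols"] > 0
        node["mem"] = node["rows"] * node["cols"] * 8 if known else float("inf")
        node["exec"] = "CP" if node["mem"] <= budget else "MR"
    return dag

dag = [{"input": "P", "rows": -1, "cols": -1, "nnz": -1}]   # unknown -> MR
plan = recompile(dag, {"P": (1_000_000, 7, 7_000_000)})
print(plan[0]["exec"])   # CP: 1M x 7 x 8 B = 56 MB fits the 2 GB budget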


Inter-Procedural Analysis – Motivation

§ Challenges
  – Multiple function calls with different inputs
  – Conditional control flow
  – Complex function call graphs (incl. recursion)
§ Example (multiple calls w/ different inputs):

X = read($X1);        # 1M x 1k
X = foo(X);
if( $X2 != " " ) {
  X2 = read($X2);     # 1M x 2
  X2 = foo(X2);
  X = cbind(X, X2);
}
...

foo = function (Matrix[Double] A) return (Matrix[Double] B) {
  B = A - colSums(A);
  if( sum(B!=B) > 0 )
    print("NaNs encountered.");
}

→ Size propagation into foo() would be incorrect (it is called with both 1M x 1k and 1M x 2 inputs)!
→ Inter- and Intra-Procedural Analysis


Inter-Procedural Analysis (2)

§ Collect IPA Function Candidates (see the sketch below)
  – Functions called once
  – Functions called with consistent sizes (dims/nnz), e.g., foo called twice with 1M x 1k inputs: OK!
  – Unary size-preserving functions
§ Size Propagation (via dynamic recompilation)
  – Inter- and intra-procedural size propagation (in execution order)
  – Control-flow-aware propagation and reconciliation
§ Additional IPA Passes
  – Remove unused functions
  – Flag functions "recompile once", e.g., a function with an unknown-size loop body is recompiled once on entry w/ its actual input A:

    foo = function (Matrix[Double] A) return (Matrix[Double] C) {
      B = rand(nrow(A), 1);
      while(...)
        C = A / rowSums(A) * B;
    }

  – Remove constant binary operations, e.g.:

    A = matrix(1, nrow(X), ncol(X));
    while(...)
      ... * A
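A minimal sketch of the candidate collection (illustrative only, not SystemML's IPA code): a function qualifies for size propagation if all its call sites pass consistent dimensions, which subsumes functions called once:

from collections import defaultdict

def ipa_candidates(call_sites):
    """call_sites: list of (function_name, (rows, cols)) per call.
    Returns functions into which input sizes can be safely propagated."""
    sizes = defaultdict(set)
    for fname, dims in call_sites:
        sizes[fname].add(dims)
    # Called once, or called with consistent sizes across all call sites.
    return {f for f, s in sizes.items() if len(s) == 1}

calls = [("foo", (1_000_000, 1_000)), ("foo", (1_000_000, 2)),
         ("bar", (1_000_000, 1_000)), ("bar", (1_000_000, 1_000))]
print(ipa_candidates(calls))  # {'bar'}: foo sees inconsistent input sizes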


Hands-On Lab: Rewrites and Handling of Size Information

§ Exercise 1: Sum-Product Rewrite: sum(A %*% t(B))
  – a) What's happening for A := [900 x 1000], B := [700 x 1000]?
  – b) What's happening for A := [900 x 1], B := [700 x 1]?
§ Exercise 2: Matrix Multiplication Chains: A %*% B %*% C %*% D %*% E
  – What's happening as we change the dimensions of A, B, C, D, E?
    (start with the dimensions given on slide 17)
§ Exercise 3: Dynamic Recompilation
  – What's happening during compilation/runtime to gather size information?

    if( $1 == 1 ) {
      Y = rand(rows=nrow(X), cols=1, min=1, max=maxval);
      X = cbind(X, table(seq(1,nrow(Y)), Y));
    }
    print(sum(X));

Without size information there is (almost) no optimization.


Resource Optimizer Overview

§ Basic Ideas
  – Optimize ML program resource configurations via online what-if analysis
  – Generating and costing runtime plans
  – Program-aware grid enumeration, pruning, and re-optimization techniques

[B. Huang, M. Boehm, Y. Tian, B. Reinwald, S. Tatikonda, F. R. Reiss: Resource Elasticity for Large-Scale Machine Learning. SIGMOD 2015]


SPOOF: Sum-Product Optimization and Operator Fusion

§ Example Script: PNMF

X = read("./input/X");
k = 20; eps = 1e-15; max_iter = 10; iter = 1;
W = rand(rows=nrow(X), cols=k, min=0, max=0.025);
H = rand(rows=k, cols=ncol(X), min=0, max=0.025);
while( iter < max_iter ) {
  H = (H * (t(W) %*% (X / (W%*%H + eps)))) / t(colSums(W));
  W = (W * ((X / (W%*%H + eps)) %*% t(H))) / t(rowSums(H));
  obj = sum(W%*%H) - sum(X * log(W%*%H + eps));
  print("iter=" + iter + " obj=" + obj);
  iter = iter + 1;
}
write(W, "./output/W");
write(H, "./output/H");

→ 3x fused operators & 1x sum-product optimization → changed asymptotic behavior

[C. Liu, H. Yang, J. Fan, L. He, Y. Wang: Distributed Nonnegative Matrix Factorization for Web-Scale Dyadic Data Analysis on MapReduce. WWW 2010]
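The slide does not spell out which sum-product optimization applies; a plausible instance in this script is sum(W%*%H), which can be rewritten to avoid materializing the product at all. A numpy sketch of that rewrite:

import numpy as np

rng = np.random.default_rng(3)
W, H = rng.random((1000, 20)), rng.random((20, 800))

# Naive: materializes the 1000 x 800 product, O(n*m*k) flops.
naive = np.sum(W @ H)

# Sum-product rewrite: sum(W %*% H) = dot(colSums(W), rowSums(H)),
# O((n+m)*k) flops and no large intermediate.
opt = W.sum(axis=0) @ H.sum(axis=1)

assert np.isclose(naive, opt)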


SPOOF: Sum-Product Optimization and Operator Fusion (2)


[T. Elgamal, S. Luo, M. Boehm, A. V. Evfimievski, S. Tatikonda, B. Reinwald, P. Sen: SPOOF: Sum-Product Optimization and Operator Fusion for Large-Scale Machine Learning. In prep]


From SystemR to SystemML – A Comparison

§ Similarities
  – Declarative specification (fixed semantics): SQL vs DML
  – Simplification rewrites (Starburst QGM rewrites vs static/dynamic rewrites)
  – Operator selection (physical operators for join vs matrix multiply)
  – Operator reordering (join enumeration vs matrix multiplication chain opt)
  – Adaptive query processing (progressive reoptimization vs dynamic recompilation)
  – Physical layout (NSM/DSM/PAX page layouts vs dense/sparse block formats)
  – Buffer pool (pull-based page cache vs anti-caching of in-memory variables)
  – Advanced optimizations (source code gen, compression, GPUs, etc.)
  – Cost model / stats (est. time for IO/compute/latency; histograms vs dims/nnz)
§ Differences
  – Algebra (relational algebra vs linear algebra)
  – Programs (query trees vs DAGs, conditional control flow, often iterative)
  – Optimizations (algebra-specific semantics, rewrites, and constraints)
  – Scale (10s-100s vs 10s-10,000s of operators)
  – Data preparation (ETL vs feature engineering)
  – Physical design, transaction processing, multi-tenancy, etc.

SystemML is Open Source:
• Apache Incubator Project (11/2015)
• Website: http://systemml.apache.org/
• Source code: https://github.com/apache/incubator-systemml