MAGMA - University of Tennessee


Page 1: MAGMA - University of Tennessee

MATRIX ALGEBRA ON GPU AND MULTICORE ARCHITECTURES

The MAGMA project, led by the linear algebra research groups at the University of Tennessee, UC Berkeley, and UC Denver, aims to develop a linear algebra library similar to LAPACK but for heterogeneous/hybrid architectures, starting with current “Multicore+GPU” systems. This transition cannot be done automatically: in many cases, new algorithms that differ significantly from those for conventional architectures will be needed. Preliminary studies on a new class of “heterogeneity-aware” algorithms, built around reduced communication and high parallelism, confirm that this is the case.

HARDWARE TO SOFTWARE TRENDS

Hardware: hybrid/heterogeneous designs (Multicore + GPUs) bring an increase in parallelism, but also an increase in communication cost (vs computation), so new algorithms are needed.

Software: MAGMA's response:

• HETEROGENEITY-AWARE ALGORITHMS
• INNOVATIVE DATA STRUCTURES
• PRECONDITIONING TO REDUCE PIVOTING
• MIXED PRECISION ALGORITHMS

[Figure: extracting parallelism from the BLAS routines. The blocked LU factorization tasks DGETF2, DLASWP, DTRSM, and DGEMM form a DAG: define dependencies, extract parallelism, reduce communication.]
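The task pipeline in the figure is the classical right-looking blocked LU factorization. Below is a minimal CPU-side sketch of that structure using LAPACKE/CBLAS, as a reference point rather than MAGMA's hybrid code: the DGETF2 panel factorization and DLASWP row interchanges are the small, sequential tasks on the critical path, while DTRSM and, above all, DGEMM carry the bulk of the flops and the parallelism.

```c
/* Right-looking blocked LU (the DGETF2 / DLASWP / DTRSM / DGEMM task
 * pipeline from the figure). Column-major storage, block size nb.
 * A minimal sketch; error checks omitted. */
#include <cblas.h>
#include <lapacke.h>

void blocked_lu(int n, double *A, int lda, lapack_int *ipiv, int nb)
{
    for (int j = 0; j < n; j += nb) {
        int jb = (n - j < nb) ? n - j : nb;

        /* DGETF2: unblocked LU of the panel A(j:n-1, j:j+jb-1). */
        LAPACKE_dgetf2(LAPACK_COL_MAJOR, n - j, jb,
                       &A[j + (size_t)j * lda], lda, &ipiv[j]);
        for (int i = j; i < j + jb; i++)   /* make pivot indices global */
            ipiv[i] += j;

        /* DLASWP: apply the panel's row interchanges left and right. */
        LAPACKE_dlaswp(LAPACK_COL_MAJOR, j, A, lda,
                       j + 1, j + jb, ipiv, 1);
        if (j + jb < n) {
            LAPACKE_dlaswp(LAPACK_COL_MAJOR, n - j - jb,
                           &A[(size_t)(j + jb) * lda], lda,
                           j + 1, j + jb, ipiv, 1);

            /* DTRSM: U12 = inv(L11) * A12. */
            cblas_dtrsm(CblasColMajor, CblasLeft, CblasLower,
                        CblasNoTrans, CblasUnit, jb, n - j - jb, 1.0,
                        &A[j + (size_t)j * lda], lda,
                        &A[j + (size_t)(j + jb) * lda], lda);

            /* DGEMM: A22 -= L21 * U12 -- the bulk of the flops. */
            cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                        n - j - jb, n - j - jb, jb, -1.0,
                        &A[j + jb + (size_t)j * lda], lda,
                        &A[j + (size_t)(j + jb) * lda], lda, 1.0,
                        &A[j + jb + (size_t)(j + jb) * lda], lda);
        }
    }
}
```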

SPONSORED BY

University of Coimbra

University of California, Berkeley

DENSE LINEAR ALGEBRA (DLA) FOR GPUs

DLA algorithms, due to their high ratio of floating-point operations to data required, have traditionally achieved high performance on standard architectures. As a result, special-purpose architectures were not able to significantly accelerate them until recently. This has changed as CPUs moved to multi/manycore designs with an exponentially growing gap between processor speed and memory speed, while GPUs have consistently outpaced them in both performance and memory bandwidth. The first CUDA GPU results to significantly outperform CPUs on DLA appeared at the beginning of 2008 (illustrated below for the GEMM operation, and on the reverse page for the main factorizations and solvers).

PEAK GEMM ON CURRENT MULTICORES VS GPUs

[Bar chart, GFlop/s (0 to 400), single and double precision; double precision not available on the Quadro FX 5600. Systems: GeForce GTX 280 (240 @ 1.29 GHz), Quadro FX 5600 (120 @ 1.5 GHz), Intel Xeon Tigerton (4 x 4 @ 2.4 GHz), Intel Xeon Harpertown (2 x 4 @ 2.33 GHz).]
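For reference, peak GEMM rates like those above can be measured with a few lines of cuBLAS. Below is a minimal sketch using the cuBLAS v2 API and CUDA events (the v2 API postdates this poster; the matrix size and the absence of a warm-up run are illustrative choices):

```c
/* Time one SGEMM on the GPU and report GFlop/s = 2*n^3 / time.
 * Matrix contents are left uninitialized since only the rate is
 * measured. A minimal sketch; error checks omitted. */
#include <stdio.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void)
{
    const int n = 4096;
    const float alpha = 1.0f, beta = 0.0f;
    float *dA, *dB, *dC;
    cudaMalloc((void **)&dA, (size_t)n * n * sizeof(float));
    cudaMalloc((void **)&dB, (size_t)n * n * sizeof(float));
    cudaMalloc((void **)&dC, (size_t)n * n * sizeof(float));

    cublasHandle_t handle;
    cublasCreate(&handle);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    /* The first call includes startup cost; a warm-up run would
     * tighten the measurement. */
    cudaEventRecord(start, 0);
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms;
    cudaEventElapsedTime(&ms, start, stop);
    printf("SGEMM n=%d: %.1f GFlop/s\n",
           n, 2.0e-6 * n * (double)n * n / ms);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```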

HARDWARE TRENDS AND CURRENT WORK ON MAGMA SHOW:

1. GPU computing has reached the point where it significantly outperforms current multicores on DLA (in spite of DLA's traditionally high performance on x86 architectures).

2. Architecture trends have moved towards heterogeneous (GPU + CPU) designs of increased parallelism and communication costs, and software trends have to reflect that. MAGMA addresses this with innovative heterogeneity-aware algorithms and techniques for extracting parallelism and reducing communication.

3. There are significant differences between the new algorithms and those for conventional CPUs.

4. The new techniques in many cases present an opportunity for trade-off between speed and accuracy.

5. The need for DLA for hybrid systems will grow, motivating our future work directions as envisioned in the MAGMA project: towards a self-contained DLA library similar to LAPACK but for heterogeneous architectures.



Page 2: MAGMA - University of Tennessee

PERFORMANCE RESULTS [1]

GPU ALGORITHMS ≠ TRADITIONAL ALGORITHMS

[Chart: Main Factorizations, single precision. GFlop/s (0 to 400) vs matrix size x 1000 (1 to 14). Curves: QR, Cholesky, PP LU.]

[Chart: Solvers, double precision [2]. GFlop/s (0 to 300) vs matrix size x 1000 (1 to 7). Curves: Iterative Mixed Precision (GPU iterations), Iterative Mixed Precision (CPU iterations), Direct Solver.]

Multicores / Multi-GPUs

[Charts: Single Precision LU Factorization, GFlop/s (0 to 600) vs matrix size x 1000 (1 to 12), and Double Precision LU Factorization, GFlop/s (0 to 120) vs matrix size x 1000 (1 to 7). Curves: Single Core + Single GPU; 2 Cores + 2 GPUs [Partial Pivoting (PP) LU]; 8 Cores + 1 GPU [Preconditioned Limited Pivoting (LP) LU(NB) [3]]; 16 Cores / Tigerton [Pairwise Pivoting (PwP) LU from PLASMA]; 8 Cores / Harpertown [PwP LU from PLASMA].]

[1] The new techniques often gain speed at the price of reduced accuracy. Understanding this speed vs accuracy trade-off can lead to very efficient algorithms.

[2] Mixed precision solvers often achieve a 4x speedup compared to double precision solvers, but the speedup depends on the conditioning of the matrix. In these performance results we used three steps of iterative refinement (on symmetric positive definite matrices, using Cholesky); a sketch of the technique follows these notes.

[3] A limited amount of pivoting (within the block size NB or more) is justified by a specially designed unitary transformation: experiments with random matrices show that LP LU(NB+64), for example, is comparable in accuracy to PP LU, while LP LU(NB) loses only 1 to 2 digits of accuracy but gains up to 30% in speed over PP LU.
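To make note [2] concrete, here is a minimal sketch of mixed precision iterative refinement. It uses LU on a general system via LAPACKE/CBLAS rather than the Cholesky variant measured above, a fixed number of refinement steps instead of a convergence test, and illustrative names (mixed_precision_solve); it is not MAGMA's implementation.

```c
/* Factor in fast single precision, then recover double precision
 * accuracy with a few cheap refinement steps. Error handling and
 * convergence tests omitted. */
#include <stdlib.h>
#include <cblas.h>
#include <lapacke.h>

void mixed_precision_solve(int n, const double *A, const double *b,
                           double *x, int steps)
{
    float      *As   = malloc((size_t)n * n * sizeof(float));
    float      *rs   = malloc((size_t)n * sizeof(float));
    double     *r    = malloc((size_t)n * sizeof(double));
    lapack_int *ipiv = malloc((size_t)n * sizeof(lapack_int));

    /* Single precision copy and O(n^3) LU factorization. */
    for (size_t i = 0; i < (size_t)n * n; i++) As[i] = (float)A[i];
    LAPACKE_sgetrf(LAPACK_COL_MAJOR, n, n, As, n, ipiv);

    /* Initial solve in single precision. */
    for (int i = 0; i < n; i++) rs[i] = (float)b[i];
    LAPACKE_sgetrs(LAPACK_COL_MAJOR, 'N', n, 1, As, n, ipiv, rs, n);
    for (int i = 0; i < n; i++) x[i] = rs[i];

    /* Refinement: residual in double, correction solve in single. */
    for (int k = 0; k < steps; k++) {
        cblas_dcopy(n, b, 1, r, 1);                    /* r = b     */
        cblas_dgemv(CblasColMajor, CblasNoTrans, n, n, /* r -= A*x  */
                    -1.0, A, n, x, 1, 1.0, r, 1);
        for (int i = 0; i < n; i++) rs[i] = (float)r[i];
        LAPACKE_sgetrs(LAPACK_COL_MAJOR, 'N', n, 1, As, n, ipiv, rs, n);
        for (int i = 0; i < n; i++) x[i] += rs[i];     /* x += dx   */
    }
    free(As); free(rs); free(r); free(ipiv);
}
```

The O(n^3) factorization runs at single precision speed, while each refinement step costs only O(n^2), which is where the up-to-4x gain comes from.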

HARDWARE USED

GPU: GeForce GTX 280 (240 cores @ 1.30 GHz)
Tigerton: Intel Xeon (4 x 4 cores @ 2.4 GHz)
Host: Intel Xeon (2 x 4 cores @ 2.33 GHz)
Harpertown: Intel Xeon (2 x 4 cores @ 2.33 GHz)

EXTRACTING PARALLELISM

1. Splitting algorithms into tasks
• Represent algorithms as Directed Acyclic Graphs (DAGs), where the nodes are the sub-tasks and the edges the dependencies
• Heterogeneity-aware splitting

2. Scheduling task execution
• Crucial for performance; for example, scheduling tasks on the critical path 'as soon as possible' frees more parallelism

3. GPU triangular solvers through explicitly inverting the triangular matrix
• Significantly accelerates both TRSM (up to 3x) and TRSV (by an order of magnitude); see the sketch after this list
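A minimal CPU-side sketch of item 3's idea, using LAPACKE/CBLAS for brevity (MAGMA performs the inversion and the multiplies on the GPU; the function name solves_after_inversion is ours): once the triangular factor is inverted, every solve becomes a triangular matrix-vector multiply with no sequential dependence between rows, which is what maps well to the GPU. Solving via an explicit inverse can be less accurate than TRSV (cf. note [1]).

```c
/* Factor-once, solve-many: replace each TRSV with a TRMV against the
 * explicitly inverted triangular factor. A minimal sketch. */
#include <string.h>
#include <cblas.h>
#include <lapacke.h>

/* Solve L*x = b for a lower-triangular L (n x n, column-major). */
void solves_after_inversion(int n, double *L, int lda,
                            const double *b, double *x)
{
    /* One-time cost: L <- inv(L) in place (non-unit lower triangular). */
    LAPACKE_dtrtri(LAPACK_COL_MAJOR, 'L', 'N', n, L, lda);

    /* Every subsequent "solve" is x = inv(L) * b: a triangular
     * matrix-vector multiply, with no row-to-row dependence. */
    memcpy(x, b, (size_t)n * sizeof(double));
    cblas_dtrmv(CblasColMajor, CblasLower, CblasNoTrans, CblasNonUnit,
                n, L, lda, x, 1);
}
```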

A. HETEROGENEITY-AWARE ALGORITHMS
Algorithms for hybrid GPU + multicore computing should split the computation to fully exploit the power that each of the hybrid components offers:
1. 'Small' tasks of low parallelism are executed on the CPU (for example, tasks on the critical path)
2. Larger tasks of high parallelism are executed on the GPU
3. Proper scheduling should exploit asynchronicity between CPU and GPU
4. Blocking strategies
• Varying block sizes (as in QR)
• Two-level blocking (as in Cholesky)
5. Work partitioning specific to hybrid GPU + multicore (see the work-splitting figure and sketch below)

[Figure: algorithms as DAGs (small tasks/tiles for multicore) vs current hybrid CPU-GPU algorithms (small and large tasks); tasks on the critical path run on the CPU, large updates on the GPUs.]

[Figure: an LU factorization work splitting for a single GPU + 8-core CPU host. 1 core (or more, e.g., 1 per socket) performs the panel factorization; 1 GPU updates the trailing sub-matrix of the first N - 7nb columns; the remaining 7 cores update the last 7nb columns. The first N - 7nb columns reside on the GPU and are processed by 1 GPU + 1 core; the rest resides on, and is processed by, the remaining cores.]
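A much-simplified sketch of this hybrid split, assuming a single GPU and a single host core (the 7-core portion and MAGMA's look-ahead overlap are omitted, and names like hybrid_lu_sketch and hPanel are ours, not MAGMA's): the narrow panel, a small task of low parallelism on the critical path, goes to the CPU, while the large, highly parallel TRSM and GEMM updates stay on the GPU.

```c
/* Hybrid LU sketch: dA is an n x n column-major matrix resident on the
 * GPU (leading dimension n); hPanel is an n x nb host buffer. Row
 * interchanges are recorded in ipiv but, to keep the sketch short, not
 * applied to the trailing matrix. Error checks omitted. */
#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <lapacke.h>

void hybrid_lu_sketch(cublasHandle_t h, int n, int nb,
                      double *dA, double *hPanel, lapack_int *ipiv)
{
    const double one = 1.0, minus_one = -1.0;
    for (int j = 0; j < n; j += nb) {
        int jb = (n - j < nb) ? n - j : nb;
        int m  = n - j;

        /* Panel (m x jb) to the CPU: small task, low parallelism. */
        cublasGetMatrix(m, jb, sizeof(double),
                        &dA[j + (size_t)j * n], n, hPanel, m);
        LAPACKE_dgetf2(LAPACK_COL_MAJOR, m, jb, hPanel, m, &ipiv[j]);
        cublasSetMatrix(m, jb, sizeof(double),
                        hPanel, m, &dA[j + (size_t)j * n], n);

        int r = n - j - jb;       /* trailing sub-matrix width */
        if (r > 0) {
            /* Large, highly parallel tasks stay on the GPU. */
            cublasDtrsm(h, CUBLAS_SIDE_LEFT, CUBLAS_FILL_MODE_LOWER,
                        CUBLAS_OP_N, CUBLAS_DIAG_UNIT, jb, r, &one,
                        &dA[j + (size_t)j * n], n,
                        &dA[j + (size_t)(j + jb) * n], n);
            cublasDgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, r, r, jb,
                        &minus_one, &dA[j + jb + (size_t)j * n], n,
                        &dA[j + (size_t)(j + jb) * n], n, &one,
                        &dA[j + jb + (size_t)(j + jb) * n], n);
        }
    }
}
```

In the full scheme, the next panel's factorization is overlapped with the GPU's trailing update (look-ahead), so the CPU's critical-path work hides behind the GEMM.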

REDUCE COMMUNICATION

B. INNOVATIVE DATA STRUCTURES
Non-traditional data layouts may be beneficial in reducing communication costs. For example, to avoid severely penalized strided memory access during pivoting on the GPU, the matrix is laid out in GPU memory in row-major order, which often doubles the performance (see the sketch below).
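A minimal CUDA sketch of the layout argument (illustrative, not MAGMA's kernel): with the matrix stored row-major on the GPU, a pivot swap of two rows touches contiguous addresses, so neighboring threads generate coalesced loads and stores; in column-major storage the same swap would stride by the leading dimension and be severely penalized.

```c
#include <cuda_runtime.h>

/* Swap rows r1 and r2 of a row-major n x n matrix A. Adjacent threads
 * (consecutive j) hit adjacent addresses: coalesced loads and stores. */
__global__ void swap_rows_rowmajor(float *A, int n, int r1, int r2)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j < n) {
        float tmp       = A[r1 * n + j];
        A[r1 * n + j]   = A[r2 * n + j];
        A[r2 * n + j]   = tmp;
    }
}

/* Launch example:
 *   swap_rows_rowmajor<<<(n + 255) / 256, 256>>>(dA, n, i, piv);
 * In column-major storage the same swap reads A[j * lda + r1], a
 * stride-lda access pattern that wastes most of each memory transaction. */
```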


C. PRECONDITIONING FOR REDUCED PIVOTING

D. MIXED PRECISION ALGORITHMS

E. TILED ALGORITHMS

F. COMMUNICATION-OPTIMAL ALGORITHMS


MAGMA

INNOVATIVE COMPUTING LABORATORY
CENTER for INFORMATION TECHNOLOGY RESEARCH

http://icl.cs.utk.edu/magma/