
Page 1: Compilation for Heterogeneous Platforms

Compilation for Heterogeneous Platforms

Grid in a Box and on a Chip

Ken Kennedy, Rice University

http://www.cs.rice.edu/~ken/Presentations/Heterogeneous.pdf

Page 2: Compilation for Heterogeneous Platforms

Center for High Performance Software Research

Senior Researchers

Ken Kennedy, John Mellor-Crummey, Zoran Budimlic, Keith Cooper, Tim Harvey, Guohua Jin, Charles Koelbel

Plus: Rice is recruiting for an open HPC faculty slot

Page 3: Compilation for Heterogeneous Platforms

Background: Scheduling for Grids

Virtual Grid Application Development Software (VGrADS) Project

Page 4: Compilation for Heterogeneous Platforms


Scheduling Grid Workflows

• Vision
  - Automate scheduling of DAG workflows onto Grid resources
• Strategy (a minimal scheduling sketch follows below)
  - Application includes performance models for all workflow steps
    - Performance models automatically constructed
  - Off-line scheduler maps applications onto Grid resources in advance of execution
    - Uses performance models as surrogates for execution time
    - Models data movement costs
    - Uses heuristic schedulers (the problem is NP-complete)
• Results
  - Dramatic reduction in workflow time to completion (makespan)
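To make the strategy concrete, here is a minimal sketch of offline list scheduling for a small fork-join workflow on heterogeneous resources. It is illustrative only, not the VGrADS scheduler: the perf table and xfer() function are hypothetical stand-ins for the automatically constructed performance models and data-movement estimates.

#include <stdio.h>

#define NTASKS 4
#define NRES   3

/* Hypothetical predicted run time (minutes) of task t on resource r,
 * standing in for the automatically constructed performance models. */
static const double perf[NTASKS][NRES] = {
    {10, 14, 30}, {20, 25, 60}, {20, 25, 60}, {5, 7, 15},
};

/* Fork-join workflow 0 -> {1, 2} -> 3: dep[t][p] = 1 if p precedes t. */
static const int dep[NTASKS][NTASKS] = {
    {0,0,0,0}, {1,0,0,0}, {1,0,0,0}, {0,1,1,0},
};

/* Hypothetical data-movement cost between two resources (0 if the same). */
static double xfer(int r1, int r2) { return r1 == r2 ? 0.0 : 3.0; }

int main(void) {
    double free_at[NRES] = {0}, finish[NTASKS], makespan = 0;
    int place[NTASKS];
    for (int t = 0; t < NTASKS; t++) {            /* topological order */
        int best = -1; double best_f = 1e30;
        for (int r = 0; r < NRES; r++) {
            double ready = free_at[r];            /* resource available */
            for (int p = 0; p < NTASKS; p++)      /* wait for all inputs */
                if (dep[t][p] && finish[p] + xfer(place[p], r) > ready)
                    ready = finish[p] + xfer(place[p], r);
            double f = ready + perf[t][r];        /* predicted finish */
            if (f < best_f) { best_f = f; best = r; }
        }
        place[t] = best; finish[t] = best_f; free_at[best] = best_f;
        if (best_f > makespan) makespan = best_f;
    }
    for (int t = 0; t < NTASKS; t++)
        printf("task %d -> resource %d, finish %.1f\n", t, place[t], finish[t]);
    printf("makespan = %.1f min\n", makespan);
    return 0;
}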

Page 5: Compilation for Heterogeneous Platforms

Workflow Scheduling Results

[Chart: EMAN makespan in minutes (0-1200) under random vs. heuristic scheduling, with accurate, GHz-only, and no performance models]

• Value of performance models and heuristics for offline scheduling (application: EMAN)
  - "Scheduling Strategies for Mapping Application Workflows onto the Grid", HPDC'05

[Chart: simulated makespan of online vs. offline scheduling on a heterogeneous platform in the compute-intensive case, for compute factors CF = 1, 10, 100]

• Dramatic makespan reduction of offline scheduling over online scheduling (application: Montage)
  - "Resource Allocation Strategies for Workflows in Grids", CCGrid'05

Page 6: Compilation for Heterogeneous Platforms


Performance Model Construction

• Vision: automatic performance modeling
  - From the binary for a single resource plus execution profiles
  - Generate a distinct model for each target resource
• Research
  - Uniprocessor modeling
    - Can be extended to parallel MPI steps
  - Memory hierarchy behavior
  - Models for instruction mix
    - Application-specific models
  - Scheduling

Page 7: Compilation for Heterogeneous Platforms


Performance Prediction Overview

[Diagram: performance prediction workflow]

• Static analysis: a binary analyzer extracts the control flow graph, loop nesting structure, and basic-block instruction mix from the object code.
• Dynamic analysis: a binary instrumenter produces instrumented code; executing it yields basic-block counts, communication volume and frequency, and memory reuse distance.
• Post processing: a post-processing tool combines the static and dynamic results into an architecture-neutral model which, together with an architecture description, yields a performance prediction for the target architecture that feeds the scheduler.

Page 8: Compilation for Heterogeneous Platforms


Modeling Memory Reuse Distance
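The figure from this slide does not survive in the transcript. For background: the reuse distance of an access is the number of distinct memory locations touched since the previous access to the same location, which is what makes it predictive for every level of a cache hierarchy. The quadratic sketch below illustrates the metric on a toy trace; production tools compute it with asymptotically faster tree-based structures.

#include <stdio.h>

/* Reuse distance of access i in a trace: the number of distinct addresses
 * touched since the previous access to trace[i] (-1 for a cold access).
 * Naive cubic-time illustration; real tools are far more efficient. */
static long reuse_distance(const long *trace, int i) {
    int prev = -1;
    for (int j = i - 1; j >= 0; j--)
        if (trace[j] == trace[i]) { prev = j; break; }
    if (prev < 0) return -1;                  /* first touch: cold miss */
    long distinct = 0;
    for (int j = prev + 1; j < i; j++) {      /* count distinct addresses */
        int seen = 0;
        for (int k = prev + 1; k < j; k++)
            if (trace[k] == trace[j]) { seen = 1; break; }
        if (!seen) distinct++;
    }
    return distinct;
}

int main(void) {
    long trace[] = {1, 2, 3, 1, 2, 2};        /* toy address trace */
    int n = sizeof trace / sizeof trace[0];
    for (int i = 0; i < n; i++) {
        long d = reuse_distance(trace, i);
        if (d < 0) printf("access %d (addr %ld): cold\n", i, trace[i]);
        else       printf("access %d (addr %ld): distance %ld\n", i, trace[i], d);
    }
    return 0;
}

An access hits in a fully associative LRU cache of C blocks exactly when its reuse distance is less than C, which is why the histogram of distances predicts miss rates across cache sizes.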

Page 9: Compilation for Heterogeneous Platforms


Execution Behavior: NAS LU 2D 3.0

Itanium2 (256KB L2, 1.5MB L3)

[Chart: cycles per cell per time step (0-6000) versus mesh size (0-200), showing measured time, predicted time, and the scheduler latency, L2 miss, L3 miss, and TLB miss penalty components]

Page 10: Compilation for Heterogeneous Platforms


Observations

• Performance models focus on the uniprocessor memory hierarchy
  - Reuse distance is the key to all levels
  - Corrected for conflict misses
• Single-processor performance models for programmable scratch memories are easier
  - No conflict misses, no complex replacement policies
  - But the compiler challenges are harder
• Performance models for parallel applications are complex, but possible
  - Many successful hand constructions (e.g., Hoisie's work at LANL)
• With uniprocessor performance models, good workflow scheduling is achievable
  - Based on GrADS/VGrADS experience
• Performance models improve Grid utilization

Page 11: Compilation for Heterogeneous Platforms


Application to Heterogeneous Platforms

• Assume a processor with different computational components
  - CPU + GPU or FPGA
  - Cell: PPE + SPEs
  - Cray: commodity micro (AMD) + vector/multithreaded unit
  - Consider data movement costs (bandwidth and local memory size)
• Partition application loop nests into computations that have affinity to different units (a loop-distribution sketch follows below)
  - Example: vectorizable loops versus scalar computations
    - Transformations to distribute loops into the different types
    - Transformations (tiling and fusion) to increase local reuse
  - May need different partitionings due to data movement costs
• Schedule computations onto units, estimating run time including data movement
  - Scheduling will favor reuse from local memories
  - The best mapping may involve alternative partitionings
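As a concrete illustration of the loop-distribution bullet above, the sketch below (a hypothetical example, not taken from the talk) splits a fused loop into a vectorizable sweep with accelerator affinity and a recurrence with scalar-CPU affinity:

#include <stddef.h>

/* Original fused loop: the first statement is fully parallel, the second
 * carries a recurrence on s[] and must run sequentially. */
void fused(float *a, const float *b, float *s, size_t n) {
    for (size_t i = 1; i < n; i++) {
        a[i] = 2.0f * b[i];        /* independent: vector/GPU affinity  */
        s[i] = s[i-1] + a[i];      /* loop-carried: scalar CPU affinity */
    }
}

/* After loop distribution: each loop can be scheduled on the unit it has
 * affinity to. The price is an extra sweep over a[], a data-movement cost
 * the scheduler must weigh against the gain. */
void distributed(float *a, const float *b, float *s, size_t n) {
    for (size_t i = 1; i < n; i++)     /* map to vector unit / GPU */
        a[i] = 2.0f * b[i];
    for (size_t i = 1; i < n; i++)     /* keep on the scalar CPU   */
        s[i] = s[i-1] + a[i];
}

The distribution is legal because the dependence on a[i] flows forward: every a[i] is computed before the second loop uses it.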

Page 12: Compilation for Heterogeneous Platforms

Multicore and Cell

Compilers and Tools for Next-Generation Chip Architectures

Page 13: Compilation for Heterogeneous Platforms

Bandwidth Management

• Multicore raises computational power rapidly
  - Bandwidth onto the chip is unlikely to keep up
• Multicore systems will feature shared caches
  - Replaces false sharing with an increased probability of conflict misses
• Challenges for effective use of bandwidth
  - Enhancing reuse when multiple processors share the cache
  - Reorganizing data to increase the density of cache-block use
  - Reorganizing computation to ensure reuse of data by multiple cores
    - Inter-core pipelining
  - Managing conflict misses
    - With and without architectural help
• Without architectural help
  - Data reorganization within pages, plus synchronization, to minimize conflict misses
    - May require special memory-allocation run-time primitives

Page 14: Compilation for Heterogeneous Platforms


Conflict Misses

• Unfortunate fact: if a scientific calculation sweeps across strips of more than k arrays on a machine with k-way associativity, and the strips all overlap in one associativity group, then every access to that group is a miss

[Diagram: three array strips, labeled 1, 2, and 3, mapping to the same associativity group]

On each outer loop iteration, strip 1 evicts strip 2, which evicts strip 3, which evicts strip 1. In a 2-way associative cache, all of these accesses are misses!

This limits loop fusion, a profitable reuse strategy.

Page 15: Compilation for Heterogeneous Platforms


Controlling Conflicts: An Example

• Cache and page parameters
  - 256K cache, 4-way set associative, 32-byte blocks
    - 1024 associativity groups
  - 64K page
    - 2048 cache blocks
  - Each block in a page maps to a single, fixed associativity group
    - 2 different lines in a page map to the same associativity group
• In general (see the sketch below)
  - Let A = the number of associativity groups in the cache
  - Let P = the number of cache blocks in a page
  - If P ≥ A, then each block in a page maps to a single associativity group
    - No matter where the page is loaded
  - If P < A, then a block can map to A/P different associativity groups
    - Depending on where the page is loaded
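A minimal sketch of the general rule, taking A and P directly (the slide's quoted values appear in main) rather than deriving them from a particular cache geometry:

#include <stdio.h>

/* Given A associativity groups in the cache and P cache blocks per page,
 * report how the blocks of a page map onto groups. */
static void mapping(long A, long P) {
    if (P >= A)
        printf("A=%ld, P=%ld: each block maps to one fixed group; "
               "%ld lines of the page share each group\n", A, P, P / A);
    else
        printf("A=%ld, P=%ld: a block can map to %ld different groups, "
               "depending on where the page is loaded\n", A, P, A / P);
}

int main(void) {
    mapping(1024, 2048);   /* the slide's example: 2 lines per group   */
    mapping(4096, 2048);   /* larger cache: placement-dependent groups */
    return 0;
}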

Page 16: Compilation for Heterogeneous Platforms


Questions

• Can we do data allocation precisely within a page so that conflict misses are minimized for a given computation?
  - Extensive work on minimizing self-conflict misses
  - Little work on inter-array conflict minimization
  - No work, to my knowledge, on interprocessor conflict minimization
• Can we synchronize computations so that multiple cores do not interfere with one another?
  - Even reuse of blocks across processors
• Might it be possible to convince vendors to provide additional features to help control the cache, particularly conflict misses?
  - Allocation of part of the cache as a scratchpad
  - Dynamic modification of the cache mapping

Page 17: Compilation for Heterogeneous Platforms

Compiling for Cell

• 1 PPE (AltiVec) and 8 SPEs (vector), each SPE with a 256KB local store (LS)
• Peak performance at 3.2GHz: ~200 GFlops single precision, ~20 GFlops double precision
• Each SPE has its own address space and can access data only from its LS; data transfer is explicitly controlled through DMA
  - Each SPE can transfer 8 bytes/cycle in each direction
  - Total memory bandwidth: 25.6 GB/s = 8 bytes per cycle total
  - On-chip bandwidth: 200 GB/s

[Diagram: Cell block diagram showing the PPE with its L1/L2 cache and eight SPEs with local stores, all connected by the EIB, plus the memory interface controller (MIC) and I/O (BEI)]

Page 18: Compilation for Heterogeneous Platforms


Compiler Challenges

• Implications of bandwidth versus flops
  - If a computation needs one input word per flop, the SPEs must wait for data
    - Eight SPEs can each get 16 bytes (4 words) every 16 cycles
    - Peak under those conditions = 8 flops per 16 cycles = 1.6 GFlops
  - Reuse in local storage is essential to get closer to peak
• Compiler challenges
  - Find sufficient parallelism for the SPEs
    - Coarse and fine granularity
  - Effective data movement strategy
    - DMA latency hiding
    - Data reuse in local storage

Page 19: Compilation for Heterogeneous Platforms


Our Work on Short Vector Units

• AltiVec on PowerPC, SSE on x86
  - Short vector length (16 bytes), contiguous memory access for vectors, vector intrinsics available only in C/C++
  - Data alignment constraints

[Diagram: vector merge on AltiVec and SSE, combining two aligned registers (1 2 3 4 | 1 2 3 4) into the shifted vector (2 3 4 1) to realize an unaligned access]

Page 20: Compilation for Heterogeneous Platforms


Compilation Strategy for F90

• Procedure outlining of array statements and vectorizable loops
  - Rewrites the computation in C/C++ to use the intrinsics (see the sketch below)
• Loop alignment to adjust the relative data alignment between left- and right-hand sides
  - Complements software-pipelined vector accesses
• Data padding for data alignment
• Vectorized scalar replacement for data reuse in vector registers
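To illustrate the outlining step, here is a hedged sketch of the C code a compiler might emit for an array statement such as A(1:N) = B(1:N) + C(1:N) using SSE intrinsics. It assumes the operands have been brought to 16-byte alignment, which is exactly what the loop-alignment and padding transformations above are meant to establish; it is not the Rice compiler's actual output.

#include <xmmintrin.h>

/* Outlined array statement a(1:n) = b(1:n) + c(1:n), vectorized with SSE.
 * Assumes a, b, c are 16-byte aligned (established by alignment/padding). */
void outlined_add(float *a, const float *b, const float *c, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {              /* 4 floats per iteration */
        __m128 vb = _mm_load_ps(&b[i]);       /* aligned vector loads   */
        __m128 vc = _mm_load_ps(&c[i]);
        _mm_store_ps(&a[i], _mm_add_ps(vb, vc));
    }
    for (; i < n; i++)                        /* scalar epilogue (or peel) */
        a[i] = b[i] + c[i];
}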

Page 21: Compilation for Heterogeneous Platforms


Compiling for Computational Accelerators

• Employ integrated techniques for vectorization, padding, alignment, and scalar replacement to compile for short vector machines
• Extending the source-to-source vectorizer to support Cell

Example kernel:

    DO J = 2, N-1
       A(2:N-1, J) = (A(2:N-1, J-1) + A(2:N-1, J+1) + A(1:N-2, J) + A(3:N, J)) / 4
    ENDDO

Measured speedups on four kernels:
  - cLalign+VPAR over cOrig+V on Pentium IV (Intel compiler): 1.72, 2.34, 2.39, 2.19
  - cLalign+VPARS over Orig+V on PowerPC G5 (VAST): 1.81, 1.88, 1.85, 1.23

Y. Zhao and K. Kennedy, "Scalarization on Short Vector Machines," IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), Austin, Texas, March 2005.

Page 22: Compilation for Heterogeneous Platforms


Related Work on Data Movement

• Related work: software cache
  - Eichenberger et al., IBM compiler: cache-lookup code before each memory reference; 64KB, 4-way, 128-byte cache lines
• Related work: data management in generated code
  - Eichenberger et al.: loop blocking, prefetching
• Related work: array copying (sketched below, together with prefetching)
  - Copy regions of a big array into a contiguous area of a small array
    - Comparable to a DMA transfer
  - Lam et al.: reduce cache conflict misses in blocked loop nests
  - Temam et al.: cost-benefit models for array copying
  - Yi: general algorithm with heuristics for cost-benefit analysis
• Related work: software prefetching
  - Prefetch data into the cache before use
    - Comparable to multi-buffering for overlapping computation and communication
  - Callahan et al., Mowry et al.: prefetch analysis and placement
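To ground the array-copying and prefetching bullets, here is a small C sketch of both techniques. __builtin_prefetch is a GCC/Clang builtin; the tile width and prefetch distance are illustrative tuning parameters, not values from the cited papers.

#include <stddef.h>

#define PF_DIST 16   /* prefetch distance in elements (tuning parameter) */

/* Array copying: pack columns j .. j+bw-1 of an n x n row-major array into
 * a contiguous tile, so later sweeps touch a small, conflict-free footprint
 * (the software analogue of a DMA transfer into local store). */
void pack_tile(const double *a, size_t n, size_t j, size_t bw, double *tile) {
    for (size_t i = 0; i < n; i++)
        for (size_t k = 0; k < bw; k++)
            tile[i * bw + k] = a[i * n + j + k];
}

/* Software prefetching: request a[i + PF_DIST] while computing on a[i],
 * overlapping memory latency with computation. */
double prefetched_sum(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + PF_DIST < n)
            __builtin_prefetch(&a[i + PF_DIST], 0 /* read */, 1 /* low reuse */);
        s += a[i];
    }
    return s;
}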

Page 23: Compilation for Heterogeneous Platforms


Current Work

• Code generation: extending our compilation strategy for attached short vector units (SSE and AltiVec) to the Cell processor
  - Programming model (current)
    - Input: a sequential Fortran 90 program
    - Output: PPE code + SPE code
    - SPE code obtained by procedure outlining of parallelizable loops and array statements (see the sketch below)
  - Automatic parallelization and vectorization
    - Dependence analysis
    - Multiple SPEs: multiple threads with data/task partitioning
    - Each SPE: vectorization using intrinsics, with data alignment applied
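A language-level sketch of the outlining step: the parallelizable loop becomes a function that each SPE runs over its own index range. The thread launch and DMA plumbing are elided; spe_run_chunk() is a hypothetical wrapper, not a Cell SDK routine.

/* Outlined loop body: this is the code that would be compiled for the SPE
 * (and then vectorized with SPE intrinsics, with alignment applied). */
void outlined_body(float *a, const float *b, int lo, int hi) {
    for (int i = lo; i < hi; i++)
        a[i] = 2.0f * b[i];
}

/* Hypothetical launcher standing in for libspe thread creation plus the
 * DMA transfers that move each chunk into and out of local store. */
extern void spe_run_chunk(void (*body)(float *, const float *, int, int),
                          float *a, const float *b, int lo, int hi);

#define NSPE 8

/* PPE-side driver: split the iteration space [0, n) across the SPEs. */
void ppe_driver(float *a, const float *b, int n) {
    int chunk = (n + NSPE - 1) / NSPE;
    for (int s = 0; s < NSPE; s++) {
        int lo = s * chunk;
        int hi = (lo + chunk > n) ? n : (lo + chunk);
        if (lo < hi)
            spe_run_chunk(outlined_body, a, b, lo, hi);
    }
}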

Page 24: Compilation for Heterogeneous Platforms


DMA Size and Computation Per Data

• Code for the sequential version:

    for (i = 0; i < N-2; i++) {
        B[i+1] = 0.0;
        for (k = 0; k < ComputationPerData; k++)
            B[i+1] += (A[i] + A[i+2] + C[i+2]) * (0.35 / ComputationPerData);
    }

• Parallel version
  - Range [0 : N-3] split across the SPEs
  - Multi-buffering (sketched below)
• Experimental setup
  - 2.1GHz Cell blade at UTK
  - xlc compiler 1.0-2
  - N = 22000111, double-buffering
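A sketch of the double-buffering pattern used in this experiment: fetch chunk k+1 into one local-store buffer while computing on chunk k in the other, so DMA latency overlaps computation. dma_get() and dma_wait() are hypothetical stand-ins for the MFC DMA primitives, compute() stands for the inner k-loop above, and n is assumed to be a multiple of CHUNK.

#define CHUNK 1024                       /* floats per DMA transfer */

extern void dma_get(float *ls, const float *mem, int n, int tag);
extern void dma_wait(int tag);
extern void compute(float *buf, int n);  /* the per-chunk computation */

void stream(const float *a, int n) {     /* assumes n % CHUNK == 0 */
    static float buf[2][CHUNK];          /* two local-store buffers */
    int cur = 0;
    dma_get(buf[cur], a, CHUNK, cur);    /* prime the pipeline */
    for (int off = 0; off < n; off += CHUNK) {
        int nxt = cur ^ 1;
        if (off + CHUNK < n)             /* start fetching the next chunk */
            dma_get(buf[nxt], a + off + CHUNK, CHUNK, nxt);
        dma_wait(cur);                   /* wait for the current chunk */
        compute(buf[cur], CHUNK);        /* compute while next chunk moves */
        cur = nxt;
    }
}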

Page 25: Compilation for Heterogeneous Platforms


Speedup of 8 SPEs over 1 SPE

[Chart: speedup of 8 SPEs over 1 SPE as a function of DMA size and computation per data]

• DMA transfer sizes (floats): 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048
• Computation replication factors: 1, 2, 4, 8, 16, 32, 64, 128, 256

Page 26: Compilation for Heterogeneous Platforms


Future Work

• Compiling general programs for Cell
  - Data alignment for vector and DMA accesses
    - Vector: loop peeling, software-pipelined memory accesses, loop alignment, array padding
    - DMA: loop peeling, array padding
  - Data movement
    - Buffer management for multi-buffering
    - Data reuse: loop blocking, loop fusion
  - Parallelization and enabling transformations
    - Loop distribution, loop interchange, loop skewing, privatization
  - Heterogeneous parallelism
    - Partitioning programs between the PPE and the SPEs
    - Use of the PPE cache as scratch memory

Page 27: Compilation for Heterogeneous Platforms


Automatic Tuning

• For many applications, picking the right optimization parameters is difficult
  - Example: tiling in a cache hierarchy
• Pretuning of applications and libraries can correct for these deficiencies
  - ATLAS and PHiPAC experience
• General automatic tuning strategy (a search sketch follows below)
  - Generate a search space of feasible optimization parameters
  - Prune the search space based on compiler models
  - Use heuristic search to find good solutions in reasonable time
• Search-space pruning
  - Trade-off between tiling and loop fusion
    - Need to determine the "effective cache size"
  - Qasem and Kennedy, ICS'06
    - Construct a model for effective cache size based on a tolerance parameter
    - Tune by varying the tolerance
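A minimal sketch of the prune-then-search loop, assuming a hypothetical run_tiled() that executes and times the tiled kernel. The three-array footprint formula and the tolerance-scaled effective cache size are illustrative, not the Qasem-Kennedy model itself.

#include <stdio.h>

#define CACHE_BYTES (256 * 1024)

extern double run_tiled(int tile);   /* hypothetical: runs and times kernel */

/* Effective cache size: the fraction of capacity assumed usable before
 * conflict misses dominate, controlled by the tolerance parameter. */
int tune(double tolerance) {
    long eff = (long)(tolerance * CACHE_BYTES);
    int best = 0;
    double best_t = 1e30;
    for (int tile = 16; tile <= 512; tile *= 2) {
        long footprint = 3L * tile * tile * sizeof(double); /* e.g., 3 tiles */
        if (footprint > eff)
            continue;                    /* pruned by the compiler model */
        double t = run_tiled(tile);      /* heuristic search: time the rest */
        if (t < best_t) { best_t = t; best = tile; }
    }
    return best;                         /* best feasible tile size found */
}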

Page 28: Compilation for Heterogeneous Platforms


Summary

• Compilation for heterogeneous systems requires:
  - Compiler partitioning of computations among the different components
  - Performance estimation of the partitioned code on each component
  - Scheduling with data movement costs taken into account
  - Transformations that make effective use of bandwidth and local memory on each component
    - Loop fusion and tiling are critical
• Rice has a substantive track record in these areas
  - Automatic vectorization/parallelization
  - Restructuring for data reuse in registers and cache
    - Prefetching, scalar replacement, cache blocking, loop fusion
  - Construction of performance models for uniprocessor memory hierarchies
  - Scheduling onto heterogeneous Grid components with data movement costs
  - Generating code for short vector units and Cell