Page 1:

How to Design Convergent HPC, Deep Learning and Big Data Analytics Software Stacks for Exascale Systems?

Dhabaleswar K. (DK) Panda, The Ohio State University

E-mail: [email protected]
http://www.cse.ohio-state.edu/~panda

Keynote Talk at SCAsia (Mar ‘19)

Page 2:

High-End Computing (HEC): PetaFlop to ExaFlop

Expected to have an ExaFlop system in 2021-2022!

100 PFlops in 2017
143 PFlops in 2018
1 EFlops in 2021-2022?

Page 3:

Increasing Usage of HPC, Big Data and Deep Learning

• HPC (MPI, RDMA, Lustre, etc.)
• Big Data (Hadoop, Spark, HBase, Memcached, etc.)
• Deep Learning (Caffe, TensorFlow, BigDL, etc.)

Convergence of HPC, Big Data, and Deep Learning!

Increasing need to run these applications on the Cloud!!

Page 4:

Can We Run HPC, Big Data and Deep Learning Jobs on Existing HPC Infrastructure?

Physical Compute

Page 5:

Can We Run HPC, Big Data and Deep Learning Jobs on Existing HPC Infrastructure?

Page 6:

Can We Run HPC, Big Data and Deep Learning Jobs on Existing HPC Infrastructure?

Page 7:

Can We Run HPC, Big Data and Deep Learning Jobs on Existing HPC Infrastructure?

Spark Job

Hadoop Job
Deep Learning Job

Page 8:

Designing Communication Middleware for Multi-Petaflop and Exaflop Systems: Challenges

• Application Kernels/Applications
• Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Communication Library or Runtime for Programming Models: Point-to-point Communication, Collective Communication, Energy-Awareness, Synchronization and Locks, I/O and File Systems, Fault Tolerance
• Networking Technologies (InfiniBand, 40/100 GigE, Aries, and Omni-Path)
• Multi-/Many-core Architectures
• Accelerators (NVIDIA and FPGA)
• Middleware co-design opportunities and challenges across the various layers: Performance, Scalability, Fault-Resilience

Page 9:

Presentation Overview

• MVAPICH Project – MPI and PGAS (MVAPICH) Library with CUDA-Awareness
• HiDL Project – High-Performance Deep Learning
• HiBD Project – High-Performance Big Data Analytics Library
• Commercial Support from X-ScaleSolutions
• Conclusions and Q&A

Page 10:

Parallel Programming Models Overview

• Shared Memory Model: SHMEM, DSM
• Distributed Memory Model: MPI (Message Passing Interface) (see the minimal MPI sketch below)
• Partitioned Global Address Space (PGAS) Model: logical shared memory over per-process memories; OpenSHMEM, UPC, Chapel, X10, CAF, …

• Programming models provide abstract machine models

• Models can be mapped onto different types of systems
  – e.g., Distributed Shared Memory (DSM), MPI within a node, etc.

• PGAS models and hybrid MPI+PGAS models are gradually gaining importance
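The distributed-memory model above can be made concrete with a few lines of standard MPI. The following is a minimal sketch (plain MPI in C, not tied to any particular library): rank 0 owns its data privately and must send it explicitly before rank 1 can see it.

    /* Minimal sketch of the message-passing (distributed-memory) model.
       Compile with an MPI wrapper (e.g., mpicc) and run with 2 processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, data[4] = {1, 2, 3, 4};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Data lives only in rank 0's private memory until it is sent explicitly */
            MPI_Send(data, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(data, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Rank 1 received %d %d %d %d\n", data[0], data[1], data[2], data[3]);
        }

        MPI_Finalize();
        return 0;
    }

In a PGAS model such as OpenSHMEM or UPC, the same exchange would instead be expressed as a one-sided put/get into the logically shared address space.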

Page 11:

Overview of the MVAPICH2 Project

• High-Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)

– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.1), Started in 2001, First version available in 2002

– MVAPICH2-X (MPI + PGAS), Available since 2011

– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014

– Support for Virtualization (MVAPICH2-Virt), Available since 2015

– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015

– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015

– Used by more than 2,950 organizations in 86 countries

– More than 527,000 (> 0.5 million) downloads from the OSU site directly

– Empowering many TOP500 clusters (Nov ‘18 ranking)

• 3rd ranked 10,649,640-core cluster (Sunway TaihuLight) at NSC, Wuxi, China

• 14th, 556,104 cores (Oakforest-PACS) in Japan

• 17th, 367,024 cores (Stampede2) at TACC

• 27th, 241,108-core (Pleiades) at NASA and many others

– Available with software stacks of many vendors and Linux Distros (RedHat, SuSE, and OpenHPC)

– http://mvapich.cse.ohio-state.edu

• Empowering Top500 systems for over a decade
• Partner in the upcoming TACC Frontera system

Page 12:

MVAPICH2 Release Timeline and Downloads

[Chart: cumulative number of downloads from the OSU site, Sep 2004 to Nov 2018 (exceeding 500,000), annotated with release milestones from MV 0.9.4 and MV2 0.9.0 through MV2 1.9, MV2-GDR 2.0b, MV2-MIC 2.0, MV2-Virt 2.2, MV2-X 2.3rc1, MV2-GDR 2.3, MV2 2.3.1, and OSU INAM 0.9.4]

Page 13:

Architecture of MVAPICH2 Software Family

• High Performance Parallel Programming Models: Message Passing Interface (MPI); PGAS (UPC, OpenSHMEM, CAF, UPC++); Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
• High Performance and Scalable Communication Runtime with diverse APIs and mechanisms: Point-to-point Primitives, Collectives Algorithms, Energy-Awareness, Remote Memory Access, I/O and File Systems, Fault Tolerance, Virtualization, Active Messages, Job Startup, Introspection & Analysis
• Support for Modern Networking Technology (InfiniBand, iWARP, RoCE, Omni-Path) and Modern Multi-/Many-core Architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU)
  – Transport Protocols and Modern Features: RC, XRC, UD, DC, UMR, ODP, SR-IOV, Multi-Rail
  – Transport Mechanisms: Shared Memory, CMA, XPMEM, IVSHMEM
  – Modern Features: MCDRAM*, NVLink, CAPI* (* upcoming)

Page 14:

MVAPICH2 Software Family

Requirements → Library
• MPI with IB, iWARP, Omni-Path, and RoCE → MVAPICH2
• Advanced MPI features/support, OSU INAM, PGAS and MPI+PGAS with IB, Omni-Path, and RoCE → MVAPICH2-X
• MPI with IB, RoCE & GPU and support for Deep Learning → MVAPICH2-GDR
• HPC Cloud with MPI & IB → MVAPICH2-Virt
• Energy-aware MPI with IB, iWARP and RoCE → MVAPICH2-EA
• MPI Energy Monitoring Tool → OEMT
• InfiniBand Network Analysis and Monitoring → OSU INAM
• Microbenchmarks for Measuring MPI and PGAS Performance → OMB

Page 15:

Convergent Software Stacks for HPC, Big Data and Deep Learning

• HPC (MPI, RDMA, Lustre, etc.)
• Big Data (Hadoop, Spark, HBase, Memcached, etc.)
• Deep Learning (Caffe, TensorFlow, BigDL, etc.)

Page 16:

One-way Latency and Bandwidth: MPI over IB with MVAPICH2

[Charts: Small-message latency and unidirectional bandwidth for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-5-EDR, and Omni-Path; small-message latencies range from 0.98 to 1.19 us, and peak unidirectional bandwidths range from 3,373 to 12,590 MBytes/sec]

TrueScale-QDR: 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, IB switch
ConnectX-3-FDR: 2.8 GHz deca-core (IvyBridge) Intel, PCIe Gen3, IB switch
ConnectIB-Dual FDR: 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, IB switch
ConnectX-5-EDR: 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, IB switch
Omni-Path: 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, Omni-Path switch

Page 17:

MPI_Allreduce on KNL + Omni-Path (10,240 Processes)

[Charts: OSU Micro-Benchmark MPI_Allreduce latency at 64 PPN for message sizes 4 bytes to 4 KB and 8 KB to 256 KB, comparing MVAPICH2, MVAPICH2-OPT, and IMPI]

• For MPI_Allreduce with 32 KB messages, MVAPICH2-OPT reduces the latency by 2.4X

M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, SuperComputing '17.

Available in MVAPICH2-X 2.3b
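For readers unfamiliar with how such latency numbers are obtained, the sketch below shows the general shape of an MPI_Allreduce latency measurement, in the spirit of the OSU Micro-Benchmarks but not their actual source; COUNT and ITERS are illustrative values.

    /* Rough sketch of an MPI_Allreduce latency measurement (illustrative only).
       Reports the average time per call over many iterations. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define COUNT 8192   /* 8192 floats = 32 KB, the message size highlighted above */
    #define ITERS 1000

    int main(int argc, char **argv)
    {
        int rank;
        float *sbuf = malloc(COUNT * sizeof(float));
        float *rbuf = malloc(COUNT * sizeof(float));

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        for (int i = 0; i < COUNT; i++) sbuf[i] = 1.0f;

        MPI_Barrier(MPI_COMM_WORLD);            /* start all ranks together */
        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++)
            MPI_Allreduce(sbuf, rbuf, COUNT, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("Average MPI_Allreduce latency: %.2f us\n", (t1 - t0) * 1e6 / ITERS);

        free(sbuf); free(rbuf);
        MPI_Finalize();
        return 0;
    }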

Page 18:

Optimized CMA-based Collectives for Large Messages

[Charts: Performance of MPI_Gather on KNL nodes (64 PPN) at 2 nodes/128 processes, 4 nodes/256 processes, and 8 nodes/512 processes, comparing MVAPICH2-2.3a, Intel MPI 2017, OpenMPI 2.1.0, and the tuned CMA design; annotated improvements of ~2.5x, ~3.2x, ~4x, and up to ~17x for the tuned CMA design]

• Significant improvement over existing implementations for Scatter/Gather with 1 MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPOWER)
• New two-level algorithms for better scalability
• Improved performance for other collectives (Bcast, Allgather, and Alltoall)

S. Chakraborty, H. Subramoni, and D. K. Panda, Contention-Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster '17, Best Paper Finalist

Available in MVAPICH2-X 2.3b
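The CMA in the slide title is Linux Cross Memory Attach, exposed through the process_vm_readv()/process_vm_writev() system calls, which let one process copy data directly out of a peer's address space in a single kernel-assisted copy. The helper below is only an illustration of that primitive, not MVAPICH2 code; the remote PID and address would in practice be exchanged out of band (e.g., over shared memory).

    /* Illustration of the Cross Memory Attach (CMA) primitive behind kernel-assisted
       collectives: a single-copy read from a peer process, no shared-memory bounce buffer. */
    #define _GNU_SOURCE
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <stdio.h>

    ssize_t cma_read(pid_t remote_pid, void *remote_addr, void *local_buf, size_t len)
    {
        struct iovec local  = { .iov_base = local_buf,   .iov_len = len };
        struct iovec remote = { .iov_base = remote_addr, .iov_len = len };

        /* One syscall, one copy: the kernel moves len bytes from the remote process. */
        ssize_t n = process_vm_readv(remote_pid, &local, 1, &remote, 1, 0);
        if (n < 0)
            perror("process_vm_readv");
        return n;
    }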

Page 19:

Shared Address Space (XPMEM)-based Collectives Design

[Charts: OSU_Reduce and OSU_Allreduce latency on Broadwell (256 processes) for 16 KB to 4 MB messages, comparing MVAPICH2-2.3b, IMPI-2017v1.132, and MVAPICH2-X-2.3rc1]

• "Shared Address Space"-based true zero-copy reduction collective designs in MVAPICH2
• Offloaded computation/communication to peer ranks in the reduction collective operation
• Up to 4X improvement for 4 MB Reduce and up to 1.8X improvement for 4 MB Allreduce

J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018.

Available in MVAPICH2-X 2.3rc1

Page 20:

Intra-node Point-to-Point Performance on OpenPOWER

[Charts: Intra-socket small-message latency, large-message latency, bandwidth, and bi-directional bandwidth, comparing MVAPICH2-2.3.1 and SpectrumMPI-2019.02.07; MVAPICH2 intra-socket small-message latency is 0.22 us]

Platform: Two nodes of OpenPOWER (POWER9, ppc64le) CPUs using a Mellanox EDR (MT4121) HCA

Page 21:

Intra-node Point-to-Point Performance on ARM Cortex-A72

[Charts: Small-message latency, large-message latency, bandwidth, and bi-directional bandwidth with MVAPICH2-2.3; small-message latency is 0.27 microseconds for 1-byte messages]

Platform: ARM Cortex-A72 (aarch64) dual-socket CPU with 64 cores (32 cores per socket)

Page 22:

SPEC MPI 2007 Benchmarks: Broadwell + InfiniBand

[Chart: Execution time of MILC, Leslie3D, POP2, LAMMPS, WRF2, and LU with Intel MPI 18.1.163 vs. MVAPICH2-X; per-application differences range from -12% to 31%]

• MVAPICH2-X outperforms Intel MPI by up to 31%

Configuration: 448 processes on 16 Intel E5-2680v4 (Broadwell) nodes with 28 PPN, interconnected with 100 Gbps Mellanox MT4115 EDR ConnectX-4 HCAs

Page 23:

Application Scalability on Skylake and KNL (Stampede2)

[Charts: MVAPICH2 execution time vs. number of processes for MiniFE (1300x1300x1300, ~910 GB), NEURON (YuEtAl2012), and Cloverleaf (bm64, MPI+OpenMP with NUM_OMP_THREADS=2) on Skylake (48 PPN) and KNL (64/68 PPN)]

Runtime parameters: MV2_SMPI_LENGTH_QUEUE=524288 PSM2_MQ_RNDV_SHM_THRESH=128K PSM2_MQ_RNDV_HFI_THRESH=128K

Courtesy: Mahidhar Tatineni @SDSC, Dong Ju (DJ) Choi @SDSC, and Samuel Khuvis @OSC. Testbed: TACC Stampede2 using MVAPICH2-2.3b

Page 24:

GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU

At Sender:   MPI_Send(s_devbuf, size, …);
At Receiver: MPI_Recv(r_devbuf, size, …);
(GPU data movement is handled inside MVAPICH2)

• Standard MPI interfaces used for unified data movement
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from the GPU with RDMA transfers

High Performance and High Productivity
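Expanding the fragment on the slide into a complete but illustrative program: with a CUDA-aware MPI library such as MVAPICH2-GDR, the device pointer returned by cudaMalloc can be passed straight to MPI_Send/MPI_Recv and the library moves the GPU data internally. The 1 MB buffer size is an arbitrary example.

    /* Sketch of CUDA-aware MPI usage: device buffers passed directly to MPI calls. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        const int size = 1 << 20;     /* 1 MB, illustrative */
        int rank;
        void *devbuf;                 /* plays the role of s_devbuf / r_devbuf above */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaMalloc(&devbuf, size);    /* buffer lives in GPU memory */

        if (rank == 0)
            MPI_Send(devbuf, size, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(devbuf, size, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        cudaFree(devbuf);
        MPI_Finalize();
        return 0;
    }

Without CUDA-awareness, the application would have to copy the data to a host buffer, send the host buffer, and copy back on the receiver; the point of the design above is that this staging and pipelining now happens inside MVAPICH2.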

Page 25:

Optimized MVAPICH2-GDR Design

[Charts: GPU-GPU inter-node latency, bandwidth, and bi-directional bandwidth, comparing MV2 (no GDR) with MV2-GDR 2.3; 1.85 us small-message latency (11X better), with roughly 9x-10x higher bandwidth and bi-directional bandwidth]

Platform: MVAPICH2-GDR 2.3, Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox ConnectX-4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPUDirect RDMA

Page 26:

Application-Level Evaluation (HOOMD-blue)

• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HOOMD-blue version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384

[Charts: Average time steps per second (TPS) for 64K-particle and 256K-particle runs on 4 to 32 processes, MV2 vs. MV2+GDR; roughly 2X improvement with GDR]

Page 27:

Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland

[Charts: Normalized execution time on the CSCS GPU cluster (16-96 GPUs) and the Wilkes GPU cluster (4-32 GPUs) for Default, Callback-based, and Event-based designs]

• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)

C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee , H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS’16

On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and Cosmo Application

Cosmo model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/

Page 28:

MVAPICH2-GDR: Performance on OpenPOWER (NVLink + Volta)

[Charts: Intra-node latency (small and large messages, intra-socket vs. inter-socket), intra-node bandwidth, inter-node latency (small and large messages), and inter-node bandwidth]

Intra-node Bandwidth: 63.7 GB/sec (NVLink); Intra-node Latency: 5.77 us (without GDRCopy)
Inter-node Bandwidth: 23.7 GB/sec (2-port EDR); Inter-node Latency: 5.77 us (without GDRCopy)

Available since MVAPICH2-GDR 2.3a

Platform: OpenPOWER (ppc64le) nodes equipped with a dual-socket CPU, 4 Volta V100-SXM2 GPUs, and a 2-port EDR InfiniBand interconnect

Page 29:

Optimized All-Reduce with XPMEM on OpenPOWER

[Charts: MPI_Allreduce latency for 16 KB to 2 MB messages with MVAPICH2-GDR-Next, SpectrumMPI-10.1.0, and OpenMPI-3.0.0 at (Nodes=1, PPN=20) and (Nodes=2, PPN=20)]

• Optimized MPI All-Reduce design in MVAPICH2
  – Up to 2X performance improvement over Spectrum MPI and 4X over OpenMPI for intra-node runs

Optimized runtime parameters: MV2_CPU_BINDING_POLICY=hybrid MV2_HYBRID_BINDING_POLICY=bunch

Page 30:

Presentation Overview

• MVAPICH Project – MPI and PGAS (MVAPICH) Library with CUDA-Awareness
• HiDL Project – High-Performance Deep Learning
• HiBD Project – High-Performance Big Data Analytics Library
• Commercial Support from X-ScaleSolutions
• Conclusions and Q&A

Page 31:

Deep Learning: New Challenges for MPI Runtimes

• Deep Learning frameworks are a different game altogether
  – Unusually large message sizes (on the order of megabytes)
  – Most communication based on GPU buffers
• Existing state-of-the-art
  – cuDNN, cuBLAS, NCCL --> scale-up performance
  – NCCL2, CUDA-Aware MPI --> scale-out performance
    • For small and medium message sizes only!
• Proposed: Can we co-design the MPI runtime (MVAPICH2-GDR) and the DL framework (Caffe) to achieve both?
  – Efficient overlap of computation and communication
  – Efficient large-message communication (reductions)
  – What application co-designs are needed to exploit communication-runtime co-designs?

[Diagram: scale-up vs. scale-out performance of cuDNN, MKL-DNN, NCCL, NCCL2, gRPC, Hadoop, and MPI, with the desired design achieving both]

A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters, 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17)

Page 32:

Convergent Software Stacks for HPC, Big Data and Deep Learning

• HPC (MPI, RDMA, Lustre, etc.)
• Big Data (Hadoop, Spark, HBase, Memcached, etc.)
• Deep Learning (Caffe, TensorFlow, BigDL, etc.)

Page 33:

Exploiting CUDA-Aware MPI for TensorFlow (Horovod)

• Efficient Allreduce is crucial for Horovod's overall training performance
  – Both MPI and NCCL designs are available
• We have evaluated Horovod extensively and compared across a wide range of designs using gRPC and gRPC extensions
• MVAPICH2-GDR achieved up to 90% scaling efficiency for ResNet-50 training on 64 Pascal GPUs

Awan et al., "Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation", (to be presented) CCGrid '19. https://arxiv.org/abs/1810.11112
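At the MPI level, the dominant operation Horovod issues per training step is an allreduce over the gradient tensors. The fragment below is a hypothetical illustration of that call on a GPU-resident buffer via CUDA-aware MPI, not Horovod's internal code; d_grads and num_grads are made-up names.

    /* Hypothetical gradient allreduce as issued during data-parallel training:
       sum the gradient buffer across all ranks (averaging omitted for brevity).
       With a CUDA-aware MPI such as MVAPICH2-GDR, d_grads may be a device pointer. */
    #include <mpi.h>

    void allreduce_gradients(float *d_grads, int num_grads)
    {
        MPI_Allreduce(MPI_IN_PLACE, d_grads, num_grads,
                      MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    }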

Page 34:

MVAPICH2-GDR: Allreduce Comparison with Baidu and OpenMPI

• 16 GPUs (4 nodes): MVAPICH2-GDR vs. Baidu-Allreduce and OpenMPI 3.0

[Charts: Allreduce latency across message sizes from 4 bytes to 512 MB; MVAPICH2-GDR is ~30X better than OpenMPI and ~2X better than Baidu at small sizes, and ~4X-10X better at large sizes, where OpenMPI is ~5X slower than Baidu]

*Available since MVAPICH2-GDR 2.3a

Page 35:

MVAPICH2-GDR vs. NCCL2 – Allreduce Operation (DGX-2)

• Optimized designs in the upcoming MVAPICH2-GDR offer better or comparable performance for most cases
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 1 DGX-2 node (16 Volta GPUs)

[Charts: Allreduce latency for MVAPICH2-GDR-Next vs. NCCL-2.3; ~2.5X better for small messages (8 bytes to 128 KB) and ~1.7X better for large messages]

Platform: NVIDIA DGX-2 system (16 Volta GPUs connected with NVSwitch), CUDA 9.2

Page 36:

MVAPICH2-GDR vs. NCCL2 – ResNet-50 Training

• ResNet-50 training using the TensorFlow benchmark on 1 DGX-2 node (8 Volta GPUs)

[Charts: Images per second and scaling efficiency on 1, 2, 4, and 8 GPUs for NCCL-2.3 vs. MVAPICH2-GDR-Next; MVAPICH2-GDR delivers up to 7.5% higher throughput]

Platform: NVIDIA DGX-2 system (16 Volta GPUs connected with NVSwitch), CUDA 9.2

Scaling Efficiency = (Actual throughput / Ideal throughput at scale) × 100%
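As a purely hypothetical example of the formula (the numbers are illustrative, not measured): if a single GPU sustains 350 images per second, the ideal throughput on 8 GPUs is 8 × 350 = 2,800 images per second; measuring 2,520 images per second would then give 2,520 / 2,800 × 100% = 90% scaling efficiency.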

Page 37:

RDMA-TensorFlow Distribution

• High-performance design of TensorFlow over RDMA-enabled interconnects
  – High-performance RDMA-enhanced design with native InfiniBand support at the verbs level for gRPC and TensorFlow
  – RDMA-based data communication
  – Adaptive communication protocols
  – Dynamic message chunking and accumulation
  – Support for RDMA device selection
  – Easily configurable for different protocols (native InfiniBand and IPoIB)
• Current release: 0.9.1
  – Based on Google TensorFlow 1.3.0
  – Tested with
    • Mellanox InfiniBand adapters (e.g., EDR)
    • NVIDIA GPGPU K80
    • CUDA 8.0 and cuDNN 5.0
  – http://hidl.cse.ohio-state.edu
• OpenPOWER support will be coming soon

Page 38:

Performance Benefit for RDMA-TensorFlow (Inception3)

[Charts: Images per second vs. batch size (16, 32, 64) on 4 nodes (8 GPUs), 8 nodes (16 GPUs), and 12 nodes (24 GPUs), comparing gRPC (IPoIB-100Gbps), Verbs (RDMA-100Gbps), MPI (RDMA-100Gbps), and AR-gRPC (RDMA-100Gbps)]

• TensorFlow Inception3 performance evaluation on an IB EDR cluster
  – Up to 20% performance speedup over default gRPC (IPoIB) for 8 GPUs
  – Up to 34% performance speedup over default gRPC (IPoIB) for 16 GPUs
  – Up to 37% performance speedup over default gRPC (IPoIB) for 24 GPUs

Page 39:

OSU-Caffe: Scalable Deep Learning

• Caffe: a flexible and layered Deep Learning framework
• Benefits and weaknesses
  – Multi-GPU training within a single node
  – Performance degradation for GPUs across different sockets
  – Limited scale-out
• OSU-Caffe: MPI-based parallel training
  – Enables scale-up (within a node) and scale-out (across multi-GPU nodes)
  – Scale-out on 64 GPUs for training the CIFAR-10 network on the CIFAR-10 dataset
  – Scale-out on 128 GPUs for training the GoogLeNet network on the ImageNet dataset

[Chart: GoogLeNet (ImageNet) training time on 8 to 128 GPUs for Caffe, OSU-Caffe (1024), and OSU-Caffe (2048); the region beyond Caffe's limit is annotated as an invalid use case]

OSU-Caffe publicly available from http://hidl.cse.ohio-state.edu/

Page 40:

Using HiDL Packages for Deep Learning on Existing HPC Infrastructure


Page 41:

Presentation Overview

• MVAPICH Project – MPI and PGAS (MVAPICH) Library with CUDA-Awareness
• HiDL Project – High-Performance Deep Learning
• HiBD Project – High-Performance Big Data Analytics Library
• Commercial Support from X-ScaleSolutions
• Conclusions and Q&A

Page 42:

Data Management and Processing on Modern Datacenters

• Substantial impact on designing and utilizing data management and processing systems in multiple tiers
  – Front-end data accessing and serving (online): Memcached + DB (e.g., MySQL), HBase
  – Back-end data analytics (offline): HDFS, MapReduce, Spark

Page 43:

Convergent Software Stacks for HPC, Big Data and Deep Learning

• HPC (MPI, RDMA, Lustre, etc.)
• Big Data (Hadoop, Spark, HBase, Memcached, etc.)
• Deep Learning (Caffe, TensorFlow, BigDL, etc.)

Page 44:

The High-Performance Big Data (HiBD) Project

• RDMA for Apache Spark
• RDMA for Apache Hadoop 3.x (RDMA-Hadoop-3.x)
• RDMA for Apache Hadoop 2.x (RDMA-Hadoop-2.x)
  – Plugins for Apache, Hortonworks (HDP), and Cloudera (CDH) Hadoop distributions
• RDMA for Apache Kafka
• RDMA for Apache HBase
• RDMA for Memcached (RDMA-Memcached)
• RDMA for Apache Hadoop 1.x (RDMA-Hadoop)
• OSU HiBD-Benchmarks (OHB)
  – HDFS, Memcached, HBase, and Spark micro-benchmarks
• http://hibd.cse.ohio-state.edu
• Users base: 300 organizations from 35 countries
• More than 29,300 downloads from the project site

Available for InfiniBand and RoCE; also runs on Ethernet
Available for x86 and OpenPOWER
Support for Singularity and Docker

Page 45:

HiBD Release Timeline and Downloads

[Chart: cumulative number of HiBD downloads over time (approaching 30,000), annotated with release milestones including RDMA-Hadoop 1.x 0.9.0 through 0.9.9, RDMA-Hadoop 2.x 0.9.1 through 1.3.0, RDMA-Memcached 0.9.1 through 0.9.6, RDMA-Spark 0.9.1 through 0.9.5, RDMA-HBase 0.9.1, and OHB 0.7.1/0.9.3]

Page 46:

Different Modes of RDMA for Apache Hadoop 2.x

• HHH: Heterogeneous storage devices with hybrid replication schemes are supported in this mode of operation for better fault tolerance as well as performance. This mode is enabled by default in the package.
• HHH-M: A high-performance in-memory setup that can be utilized to perform all I/O operations in memory and obtain as much performance benefit as possible.
• HHH-L: With parallel file systems integrated, HHH-L mode can take advantage of the Lustre file system available in the cluster.
• HHH-L-BB: This mode deploys a Memcached-based burst-buffer system to reduce the bandwidth bottleneck of shared file system access. The burst buffer design is hosted by Memcached servers, each of which has a local SSD.
• MapReduce over Lustre, with/without local disks: Besides HDFS-based solutions, this package also provides support to run MapReduce jobs on top of Lustre alone, in two modes: with local disks and without local disks.
• Running with Slurm and PBS: Supports deploying RDMA for Apache Hadoop 2.x with Slurm and PBS in the different running modes (HHH, HHH-M, HHH-L, and MapReduce over Lustre).

Page 47:

Performance Numbers of RDMA for Apache Hadoop 2.x – RandomWriter & TeraGen on OSU-RI2 (EDR)

[Charts: Execution time vs. data size for RandomWriter (80-160 GB) and TeraGen (80-240 GB), IPoIB (EDR) vs. OSU-IB (EDR)]

Cluster with 8 nodes and a total of 64 maps

• RandomWriter: execution time reduced by 3x over IPoIB for 80-160 GB data sizes
• TeraGen: execution time reduced by 4x over IPoIB for 80-240 GB data sizes

Page 48:

Using HiBD Packages for Big Data Processing on Existing HPC Infrastructure

Page 49:

Performance Evaluation of RDMA-Spark on SDSC Comet – HiBench PageRank

• InfiniBand FDR, SSD, 32/64 worker nodes, 768/1536 cores (768/1536M, 768/1536R)
• RDMA vs. IPoIB with 768/1536 concurrent tasks, single SSD per node
  – 32 nodes/768 cores: total time reduced by 37% over IPoIB (56 Gbps)
  – 64 nodes/1536 cores: total time reduced by 43% over IPoIB (56 Gbps)

[Charts: PageRank total time for Huge, BigData, and Gigantic data sizes, IPoIB vs. RDMA, on 32 worker nodes (768 cores) and 64 worker nodes (1536 cores)]

Page 50:

Using HiBD Packages for Big Data Processing on Existing HPC Infrastructure

Page 51:

Presentation Overview

• MVAPICH Project – MPI and PGAS (MVAPICH) Library with CUDA-Awareness
• HiDL Project – High-Performance Deep Learning
• HiBD Project – High-Performance Big Data Analytics Library
• Commercial Support from X-ScaleSolutions
• Conclusions and Q&A

Page 52:

Commercial Support for MVAPICH2, HiBD, and HiDL Libraries

• Supported through X-ScaleSolutions (http://x-scalesolutions.com)
• Benefits:
  – Help and guidance with installation of the library
  – Platform-specific optimizations and tuning
  – Timely support for operational issues encountered with the library
  – Web portal interface to submit issues and track their progress
  – Advanced debugging techniques
  – Application-specific optimizations and tuning
  – Obtaining guidelines on best practices
  – Periodic information on major fixes and updates
  – Information on major releases
  – Help with upgrading to the latest release
  – Flexible Service Level Agreements
• Support provided to Lawrence Livermore National Laboratory (LLNL) for the last two years

Page 53:

Presentation Overview

• MVAPICH Project – MPI and PGAS (MVAPICH) Library with CUDA-Awareness
• HiDL Project – High-Performance Deep Learning
• HiBD Project – High-Performance Big Data Analytics Library
• Commercial Support from X-ScaleSolutions
• Conclusions and Q&A

Page 54:

Concluding Remarks

• Upcoming Exascale systems need to be designed with a holistic view of HPC, Big Data, Deep Learning, and Cloud
• Presented an overview of designing convergent software stacks for HPC, Big Data, and Deep Learning
• The presented solutions enable the HPC, Big Data, and Deep Learning communities to take advantage of current and next-generation systems

Page 55:

Funding Acknowledgments

Funding support by [sponsor logos]
Equipment support by [vendor logos]

Page 56:

Personnel Acknowledgments

Current Students (Graduate)

– A. Awan (Ph.D.)

– M. Bayatpour (Ph.D.)

– S. Chakraborthy (Ph.D.)

– C.-H. Chu (Ph.D.)
– S. Guganani (Ph.D.)

Past Students
– A. Augustine (M.S.)

– P. Balaji (Ph.D.)

– R. Biswas (M.S.)

– S. Bhagvat (M.S.)

– A. Bhat (M.S.)

– D. Buntinas (Ph.D.)

– L. Chai (Ph.D.)

– B. Chandrasekharan (M.S.)

– N. Dandapanthula (M.S.)

– V. Dhanraj (M.S.)

– T. Gangadharappa (M.S.)

– K. Gopalakrishnan (M.S.)

– R. Rajachandrasekar (Ph.D.)

– G. Santhanaraman (Ph.D.)

– A. Singh (Ph.D.)

– J. Sridhar (M.S.)

– S. Sur (Ph.D.)

– H. Subramoni (Ph.D.)

– K. Vaidyanathan (Ph.D.)

– A. Vishnu (Ph.D.)

– J. Wu (Ph.D.)

– W. Yu (Ph.D.)

– J. Zhang (Ph.D.)

Past Research Scientist
– K. Hamidouche

– S. Sur

Past Post-Docs
– D. Banerjee

– X. Besseron

– H.-W. Jin

– W. Huang (Ph.D.)

– W. Jiang (M.S.)

– J. Jose (Ph.D.)

– S. Kini (M.S.)

– M. Koop (Ph.D.)

– K. Kulkarni (M.S.)

– R. Kumar (M.S.)

– S. Krishnamoorthy (M.S.)

– K. Kandalla (Ph.D.)

– M. Li (Ph.D.)

– P. Lai (M.S.)

– J. Liu (Ph.D.)

– M. Luo (Ph.D.)

– A. Mamidala (Ph.D.)

– G. Marsh (M.S.)

– V. Meshram (M.S.)

– A. Moody (M.S.)

– S. Naravula (Ph.D.)

– R. Noronha (Ph.D.)

– X. Ouyang (Ph.D.)

– S. Pai (M.S.)

– S. Potluri (Ph.D.)

– J. Hashmi (Ph.D.)

– A. Jain (Ph.D.)
– K. S. Khorassani (Ph.D.)

– P. Kousha (Ph.D.)

– D. Shankar (Ph.D.)

– J. Lin

– M. Luo

– E. Mancini

Current Research Asst. Professor
– X. Lu

Past Programmers
– D. Bureddy

– J. Perkins

Current Research Specialist
– J. Smith

– S. Marcarelli

– J. Vienne

– H. Wang

Current Post-doc
– A. Ruhela

– K. Manian

Current Students (Undergraduate)
– V. Gangal (B.S.)

– M. Haupt (B.S.)

– N. Sarkauskas (B.S.)

– A. Yeretzian (B.S.)

Past Research Specialist
– M. Arnold

Current Research Scientist
– H. Subramoni

Page 57:

Multiple Positions Available in My Group

• Looking for bright and enthusiastic personnel to join as
  – PhD Students
  – Post-Doctoral Researchers
  – MPI Programmer/Software Engineer
  – Hadoop/Big Data Programmer/Software Engineer
  – Deep Learning and Cloud Programmer/Software Engineer
• If interested, please send an e-mail to [email protected]

Page 58:

Thank You!

Network-Based Computing Laboratory
http://nowlab.cse.ohio-state.edu/
[email protected]

The High-Performance MPI/PGAS Project
http://mvapich.cse.ohio-state.edu/

The High-Performance Deep Learning Project
http://hidl.cse.ohio-state.edu/

The High-Performance Big Data Project
http://hibd.cse.ohio-state.edu/