The MVAPICH2 Project: Latest Developments and Plans...


Transcript of The MVAPICH2 Project: Latest Developments and Plans...

Page 1:

The MVAPICH2 Project: Latest Developments and Plans Towards

Exascale Computing

Hari Subramoni

The Ohio State University

E-mail: [email protected]

http://www.cse.ohio-state.edu/~subramon

Presentation at OSU Booth (SC ‘19)

Page 2:

Drivers of Modern HPC Cluster Architectures

• Multi-core/many-core technologies

• Remote Direct Memory Access (RDMA)-enabled networking (InfiniBand and RoCE)

• Solid State Drives (SSDs), Non-Volatile Random-Access Memory (NVRAM), NVMe-SSD

• Accelerators (NVIDIA GPGPUs and Intel Xeon Phi)

• Available on HPC Clouds, e.g., Amazon EC2, NSF Chameleon, Microsoft Azure, etc.

[Figure: accelerators/coprocessors (high compute density, high performance/watt, >1 TFlop DP on a chip), high-performance interconnects such as InfiniBand (<1 usec latency, 200 Gbps bandwidth), multi-core processors, and SSD/NVMe-SSD/NVRAM, as deployed in systems such as the K Computer, Sunway TaihuLight, Summit, and Sierra]

Page 3:

Parallel Programming Models Overview

[Figure: three abstract machine models, each with processes P1-P3:
– Shared Memory Model (SHMEM, DSM): all processes share a single memory
– Distributed Memory Model (MPI, the Message Passing Interface): each process has its own private memory
– Partitioned Global Address Space (PGAS) Model (Global Arrays, UPC, Chapel, X10, CAF, …): private memories combined with a logical shared memory]

• Programming models provide abstract machine models
• Models can be mapped onto different types of systems, e.g. Distributed Shared Memory (DSM), MPI within a node, etc.
• PGAS models and hybrid MPI+PGAS models are gradually gaining importance; a minimal message-passing example follows
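To make the distributed-memory model concrete, here is a minimal sketch of an MPI point-to-point exchange (plain MPI-1 calls available in any MPI library, including MVAPICH2):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            value = 42;
            /* explicit message from the sender's private memory ... */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* ... into the receiver's private memory */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }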

Page 4:

Designing Communication Libraries for Multi-Petaflop and Exaflop Systems: Challenges

[Figure: co-design stack.
– Application kernels/applications
– Programming models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
– Middleware: communication library or runtime for programming models, covering point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, and fault tolerance
– Hardware: networking technologies (InfiniBand, 40/100 GigE, Aries, and Omni-Path), multi/many-core architectures, and accelerators (GPU and FPGA)
Co-design opportunities and challenges across these layers: performance, scalability, and fault-resilience]

Page 5:

MPI+X Programming Model: Broad Challenges at Exascale

• Scalability for million to billion processors
  – Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
  – Scalable job start-up
• Scalable collective communication
  – Offload
  – Non-blocking
  – Topology-aware
• Balancing intra-node and inter-node communication for next-generation nodes (128-1,024 cores)
  – Multiple end-points per node
• Support for efficient multi-threading
• Integrated support for GPGPUs and FPGAs
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + UPC++, MPI + OpenSHMEM, CAF, …)
• Virtualization
• Energy-awareness

Page 6:

Overview of the MVAPICH2 Project

• High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
  – MVAPICH (MPI-1) and MVAPICH2 (MPI-2.2 and MPI-3.1); started in 2001, first version available in 2002
  – MVAPICH2-X (MPI + PGAS), available since 2011
  – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), available since 2014
  – Support for virtualization (MVAPICH2-Virt), available since 2015
  – Support for energy-awareness (MVAPICH2-EA), available since 2015
  – Support for InfiniBand network analysis and monitoring (OSU INAM), available since 2015
  – Used by more than 3,050 organizations in 89 countries
  – More than 614,000 (>0.6 million) downloads from the OSU site directly
  – Empowering many TOP500 clusters (Jun ‘19 ranking):
    • 3rd: 10,649,600-core Sunway TaihuLight at the National Supercomputing Center in Wuxi, China
    • 5th: 448,448-core Frontera at TACC
    • 8th: 391,680-core ABCI in Japan
    • 15th: 570,020-core Nurion in South Korea, and many others
  – Available with the software stacks of many vendors and Linux distros (RedHat, SuSE, and OpenHPC)
  – http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade
• Partner in the TACC Frontera system

Page 7:

MVAPICH2 Release Timeline and Downloads

[Figure: cumulative downloads from the OSU site (0-600,000) over time, annotated with releases: MV 0.9.4, MV2 0.9.0, MV2 0.9.8, MV2 1.0, MV 1.0, MV2 1.0.3, MV 1.1, MV2 1.4, MV2 1.5, MV2 1.6, MV2 1.7, MV2 1.8, MV2 1.9, MV2-GDR 2.0b, MV2-MIC 2.0, MV2-GDR 2.3.2, MV2-X 2.3rc2, MV2-Virt 2.2, MV2 2.3.2, OSU INAM 0.9.3, MV2-Azure 2.3.2, MV2-AWS 2.3]

Page 8:

Architecture of MVAPICH2 Software Family

[Figure: layered architecture.
– High-performance parallel programming models: Message Passing Interface (MPI); PGAS (UPC, OpenSHMEM, CAF, UPC++); hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
– High-performance and scalable communication runtime with diverse APIs and mechanisms: point-to-point primitives, collectives algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, and introspection & analysis
– Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path, Elastic Fabric Adapter) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU)
– Transport protocols: RC, SRD, UD, DC; transport mechanisms: shared memory, CMA, IVSHMEM, XPMEM
– Modern features: UMR, ODP, SR-IOV, multi-rail, Optane*, NVLink, CAPI* (* upcoming)]

Page 9:

MVAPICH2 Software Family

Requirements → Library
• MPI with IB, iWARP, Omni-Path, and RoCE → MVAPICH2
• Advanced MPI features/support, OSU INAM, PGAS and MPI+PGAS with IB, Omni-Path, and RoCE → MVAPICH2-X
• MPI with IB, RoCE & GPU, and support for deep learning → MVAPICH2-GDR
• HPC cloud with MPI & IB → MVAPICH2-Virt
• Energy-aware MPI with IB, iWARP and RoCE → MVAPICH2-EA
• MPI energy monitoring tool → OEMT
• InfiniBand network analysis and monitoring → OSU INAM
• Microbenchmarks for measuring MPI and PGAS performance → OMB

Page 10:

MVAPICH2 Distributions

• MVAPICH2
  – Basic MPI support for IB, Omni-Path, iWARP and RoCE
• MVAPICH2-X
  – Advanced MPI features and support for INAM
  – MPI, PGAS and hybrid MPI+PGAS support for IB
• MVAPICH2-Virt
  – Optimized for HPC clouds with IB and SR-IOV virtualization
  – Support for OpenStack, Docker, and Singularity
• OSU Micro-Benchmarks (OMB)
  – MPI (including CUDA-aware MPI), OpenSHMEM and UPC
• OSU INAM
  – InfiniBand Network Analysis and Monitoring Tool

Page 11:

Startup Performance on TACC Frontera

• MPI_Init takes 3.9 seconds on 57,344 processes on 1,024 nodes
• MPI_Init takes 195 seconds on 229,376 processes on 4,096 nodes with Intel MPI 2019, while MVAPICH2 takes 31 seconds
• All numbers reported with 56 processes per node
• New designs available in MVAPICH2-2.3.2

[Figure: MPI_Init time (milliseconds) on Frontera for MVAPICH2 vs. Intel MPI 2019 from 56 to 57,344 processes; annotated 4.5s and 3.9s at the largest scale]
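A measurement like the one above has to time MPI_Init itself, so it needs a clock that is valid before MPI is initialized; a minimal sketch (the numbers above come from runs at much larger scale on Frontera):

    #include <mpi.h>
    #include <stdio.h>
    #include <time.h>

    int main(int argc, char **argv) {
        struct timespec t0, t1;
        /* MPI_Wtime is not portable before MPI_Init, so use a monotonic OS clock */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        MPI_Init(&argc, &argv);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        if (rank == 0) printf("MPI_Init took %.3f s\n", secs);
        MPI_Finalize();
        return 0;
    }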

Page 12:

One-way Latency: MPI over IB with MVAPICH2

[Figure: two panels of one-way latency vs. message size for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-4-EDR, Omni-Path, and ConnectX-6-HDR: small-message latency (0-1.8 us; annotated values 1.11, 1.19, 1.01, 1.15, 1.04, and 1.1 us) and large-message latency (0-120 us)]

Platforms: TrueScale-QDR, ConnectIB-Dual FDR, ConnectX-4-EDR, and ConnectX-6-HDR on 3.1 GHz deca-core (Haswell) Intel with PCI Gen3 and an IB switch; ConnectX-3-FDR on 2.8 GHz deca-core (IvyBridge) Intel with PCI Gen3 and an IB switch; Omni-Path on 3.1 GHz deca-core (Haswell) Intel with PCI Gen3 and an Omni-Path switch.
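These curves follow the standard ping-pong measurement used by osu_latency; a simplified sketch of that loop (not the actual OMB source, which adds warm-up iterations and per-size loops):

    #include <mpi.h>
    #include <stdio.h>

    #define ITERS 1000

    int main(int argc, char **argv) {
        char buf[8] = {0};
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)  /* one-way latency = half the average round-trip time */
            printf("%zu bytes: %.2f us\n", sizeof buf, (t1 - t0) * 1e6 / (2 * ITERS));
        MPI_Finalize();
        return 0;
    }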

Page 13:

Bandwidth: MPI over IB with MVAPICH2

[Figure: two panels of bandwidth vs. message size for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-4-EDR, Omni-Path, and ConnectX-6-HDR: unidirectional bandwidth (peaks of 3,373; 6,356; 12,083; 12,366; 12,590; and 24,532 MBytes/sec, with ConnectX-6-HDR at the top) and bidirectional bandwidth (peaks of 6,228; 12,161; 21,227; 21,983; 24,136; and 48,027 MBytes/sec)]

Platforms: same as the latency results above (Haswell/IvyBridge Intel hosts, PCI Gen3, IB or Omni-Path switch).

Page 14:

Intra-node Point-to-Point Performance on OpenPOWER

[Figure: four panels of MVAPICH2-X 2.3.2 intra-socket results: small-message latency (4 bytes-2 KB; 0.25 us for the smallest messages), large-message latency (4 KB-2 MB), bandwidth, and bi-directional bandwidth (both in the tens of GB/s)]

Platform: two OpenPOWER (Power9 ppc64le) nodes using a Mellanox EDR (MT4121) HCA.

Page 15:

Point-to-point: Latency & Bandwidth (Inter-socket) on ARM

[Figure: six panels comparing MVAPICH2-X with OpenMPI+UCX: latency for small (0-64 bytes), medium (128 bytes-16 KB), and large (32 KB-4 MB) messages, and bandwidth for small (1-128 bytes), medium (256 bytes-32 KB), and large (64 KB-4 MB) messages; MVAPICH2-X is up to 3.5x, 8.3x, and 5x better]

Page 16:

MPI_Allreduce on KNL + Omni-Path (10,240 Processes)

[Figure: OSU micro-benchmark MPI_Allreduce latency (us) at 64 PPN for MVAPICH2, MVAPICH2-OPT, and IMPI, in two panels: 4 bytes-4 KB and 8 KB-256 KB message sizes]

• For MPI_Allreduce latency with 32 KB messages, MVAPICH2-OPT reduces the latency by 2.4x

M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, SuperComputing '17.

Available since MVAPICH2-X 2.3b

Page 17:

Evaluation of SHArP-based Non-blocking Allreduce

MPI_Iallreduce benchmark, 1 PPN*, 8 nodes (*PPN: processes per node)

[Figure: two panels comparing MVAPICH2 and MVAPICH2-SHArP for 4-128 byte messages: pure communication latency (us, lower is better; SHArP up to 2.3x better) and communication-computation overlap (%, higher is better)]

• Complete offload of the Allreduce collective operation to the switch enables much higher overlap of communication and computation

Available since MVAPICH2 2.3a
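The overlap being measured follows the usual non-blocking collective pattern: start the reduction, compute, then wait. A minimal sketch (enabling the SHArP offload is a runtime option in MVAPICH2, e.g. MV2_ENABLE_SHARP=1 per the user guide):

    #include <mpi.h>

    /* stand-in for computation that does not touch the reduction buffers */
    static void do_independent_work(void) {
        volatile double x = 0.0;
        for (int i = 0; i < 1000000; i++) x += i * 1e-9;
    }

    int main(int argc, char **argv) {
        double local[128], global[128];
        MPI_Request req;
        MPI_Init(&argc, &argv);
        for (int i = 0; i < 128; i++) local[i] = i;
        /* start the reduction; with switch offload (SHArP) the network
           progresses it while the host computes */
        MPI_Iallreduce(local, global, 128, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);
        do_independent_work();
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Finalize();
        return 0;
    }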

Page 18:

Performance Engineering Applications using MVAPICH2 and TAU

● Enhance existing support for MPI_T in MVAPICH2 to expose a richer set of performance and control variables
● Get and display MPI performance variables (PVARs) made available by the runtime in TAU
● Control the runtime's behavior via MPI control variables (CVARs)
● Introduced support for new MPI_T-based CVARs in MVAPICH2:
  ○ MPIR_CVAR_MAX_INLINE_MSG_SZ, MPIR_CVAR_VBUF_POOL_SIZE, MPIR_CVAR_VBUF_SECONDARY_POOL_SIZE
● TAU enhanced with support for setting MPI_T CVARs in a non-interactive mode for uninstrumented applications

[Screenshots: VBUF usage without and with CVAR-based tuning, as displayed by ParaProf]
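Outside of TAU, the same CVARs can be inspected programmatically through the standard MPI_T tools interface; a minimal sketch that looks up one of the variables above by name and reads its value (whether a given CVAR exists depends on the MVAPICH2 build):

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv) {
        int provided, ncvar;
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_Init(&argc, &argv);
        MPI_T_cvar_get_num(&ncvar);
        for (int i = 0; i < ncvar; i++) {
            char name[256], desc[256];
            int namelen = sizeof name, desclen = sizeof desc;
            int verbosity, bind, scope;
            MPI_Datatype dtype; MPI_T_enum enumtype;
            MPI_T_cvar_get_info(i, name, &namelen, &verbosity, &dtype,
                                &enumtype, desc, &desclen, &bind, &scope);
            if (strcmp(name, "MPIR_CVAR_VBUF_POOL_SIZE") == 0) {
                MPI_T_cvar_handle handle; int count, value;
                MPI_T_cvar_handle_alloc(i, NULL, &handle, &count);
                MPI_T_cvar_read(handle, &value);  /* assumes an int-typed CVAR */
                printf("MPIR_CVAR_VBUF_POOL_SIZE = %d\n", value);
                MPI_T_cvar_handle_free(&handle);
            }
        }
        MPI_Finalize();
        MPI_T_finalize();
        return 0;
    }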

Page 19:

Dynamic and Adaptive Tag Matching

Challenge: tag matching is a significant overhead for receivers. Existing solutions are static, do not adapt dynamically to the communication pattern, and do not consider memory overhead.

Solution: a new tag-matching design that dynamically adapts to communication patterns, uses different strategies for different ranks, and bases its decisions on the number of request objects that must be traversed before hitting the required one.

Results: better performance than other state-of-the-art tag-matching schemes, with minimum memory consumption.

[Figure: normalized total tag matching time and normalized memory overhead per process at 512 processes, relative to the default scheme (lower is better)]

Adaptive and Dynamic Design for MPI Tag Matching; M. Bayatpour, H. Subramoni, S. Chakraborty, and D. K. Panda; IEEE Cluster 2016. [Best Paper Nominee]

Will be available in future MVAPICH2 releases

Page 20:

Dynamic and Adaptive MPI Point-to-point Communication Protocols

Desired eager threshold for an example communication pattern (processes 0-3 on node 1 communicating with processes 4-7 on node 2):

Process Pair | Eager Threshold (KB)
0 – 4 | 32
1 – 5 | 64
2 – 6 | 128
3 – 7 | 32

Eager threshold chosen by each design: Default uses 16 KB for every pair; Manually Tuned uses 128 KB for every pair; Dynamic + Adaptive picks 32 KB, 64 KB, 128 KB, and 32 KB per pair, matching the desired values.

Design trade-offs:
• Default: poor overlap, low memory requirement (low performance, high productivity)
• Manually Tuned: good overlap, high memory requirement (high performance, low productivity)
• Dynamic + Adaptive: good overlap, optimal memory requirement (high performance, high productivity)

[Figure: execution time (wall clock, seconds) and relative memory consumption of Amber at 128-1K processes for Default, Threshold=17K, Threshold=64K, Threshold=128K, and Dynamic Threshold]

H. Subramoni, S. Chakraborty, D. K. Panda, Designing Dynamic & Adaptive MPI Point-to-Point Communication Protocols for Efficient Overlap of Computation & Communication, ISC'17 - Best Paper
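The threshold matters because sends at or below it complete eagerly without the receiver's participation, while larger messages go through a rendezvous handshake. A minimal sketch of the overlap pattern these designs optimize; MV2_IBA_EAGER_THRESHOLD is MVAPICH2's documented knob for forcing the threshold by hand, and the value shown is only illustrative:

    /* run e.g.:  MV2_IBA_EAGER_THRESHOLD=131072 mpirun_rsh -np 2 ./a.out */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        size_t n = 64 * 1024;            /* near a typical eager/rendezvous boundary */
        char *buf = malloc(n);
        MPI_Request req;
        if (rank == 0) {
            MPI_Isend(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req);
            /* independent computation overlaps here if the message goes
               eagerly (or if asynchronous progress is available) */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Irecv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        }
        free(buf);
        MPI_Finalize();
        return 0;
    }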

Page 21:

MVAPICH2 Distributions

• MVAPICH2
  – Basic MPI support for IB, Omni-Path, iWARP and RoCE
• MVAPICH2-X
  – Advanced MPI features and support for INAM
  – MPI, PGAS and hybrid MPI+PGAS support for IB
• MVAPICH2-Virt
  – Optimized for HPC clouds with IB and SR-IOV virtualization
  – Support for OpenStack, Docker, and Singularity
• OSU Micro-Benchmarks (OMB)
  – MPI (including CUDA-aware MPI), OpenSHMEM and UPC
• OSU INAM
  – InfiniBand Network Analysis and Monitoring Tool

Page 22:

MVAPICH2-X for Hybrid MPI + PGAS Applications

• Current model: separate runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI
  – Possible deadlock if both runtimes are not progressed
  – Consumes more network resources
• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF
  – Available since 2012 (starting with MVAPICH2-X 1.9)
  – http://mvapich.cse.ohio-state.edu

[Figure: unified runtime architecture. High-performance parallel programming models (MPI, PGAS (UPC, OpenSHMEM, CAF, UPC++), and hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)) on top of a high-performance and scalable unified communication runtime with diverse APIs and mechanisms: optimized point-to-point primitives, collectives algorithms (blocking and non-blocking), remote memory access, fault tolerance, active messages, scalable job startup, and introspection & analysis with OSU INAM; support for modern networking technologies (InfiniBand, iWARP, RoCE, Omni-Path, …), modern multi-/many-core architectures (Intel Xeon, GPGPUs, OpenPOWER, ARM, …), and efficient intra-node communication (POSIX SHMEM, CMA, LiMIC, XPMEM, …)]

Page 23:

Optimized CMA-based Collectives for Large Messages

[Figure: MPI_Gather latency on KNL nodes (64 PPN) at 2 nodes/128 procs, 4 nodes/256 procs, and 8 nodes/512 procs, comparing MVAPICH2-2.3a and Intel MPI 2017; MVAPICH2 is ~2.5x, ~3.2x, and ~4x better, and up to ~17x better at some sizes]

• Significant improvement over the existing implementation for Scatter/Gather with 1 MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPOWER)
• New two-level algorithms for better scalability
• Improved performance for other collectives (Bcast, Allgather, and Alltoall)

S. Chakraborty, H. Subramoni, and D. K. Panda, Contention Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster '17, Best Paper Finalist

Available since MVAPICH2-X 2.3b

Page 24:

Shared Address Space (XPMEM)-based Collectives Design

[Figure: OSU_Allreduce and OSU_Reduce latency (us) on Broadwell at 256 processes for 16 KB-4 MB messages, comparing MVAPICH2-2.3b and IMPI-2017v1.132; up to 1.8x better for Allreduce and 4x better for Reduce]

• "Shared address space"-based true zero-copy reduction collective designs in MVAPICH2
• Offloaded computation/communication to peer ranks in the reduction collective operation
• Up to 4x improvement for 4 MB Reduce and up to 1.8x improvement for 4 MB Allreduce

J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018.

Available in MVAPICH2-X 2.3rc1

Page 25:

Minimizing Memory Footprint by Direct Connect (DC) Transport

• Constant connection cost (one QP for any peer)
• Full feature set (RDMA, atomics, etc.)
• Separate objects for send (DC Initiator) and receive (DC Target)
  – DC Target identified by "DCT Number"
  – Messages routed with (DCT Number, LID)
  – Requires the same "DC Key" to enable communication
• Available since MVAPICH2-X 2.2a

[Figures: processes P0-P7 on nodes 0-3 connected through the IB network; normalized execution time of NAMD (Apoa1, large data set) at 160-620 processes for RC, DC-Pool, UD, and XRC; connection memory (KB) for Alltoall at 80-640 processes, where RC grows from 1,022 to 4,797 KB while DC-Pool stays near 10 KB]

H. Subramoni, K. Hamidouche, A. Venkatesh, S. Chakraborty and D. K. Panda, Designing MPI Library with Dynamic Connected Transport (DCT) of InfiniBand: Early Experiences. IEEE International Supercomputing Conference (ISC '14)

Page 26:

User-mode Memory Registration (UMR)

• Introduced by Mellanox to support direct local and remote noncontiguous memory access
• Avoids packing at the sender and unpacking at the receiver
• Available since MVAPICH2-X 2.2b

[Figure: small/medium-message latency and large-message (2M-16M bytes) latency with UMR-based datatype support]

Platform: Connect-IB (54 Gbps), 2.8 GHz dual ten-core (IvyBridge) Intel, PCI Gen3, Mellanox IB FDR switch.

M. Li, H. Subramoni, K. Hamidouche, X. Lu and D. K. Panda, "High Performance MPI Datatype Support with User-mode Memory Registration: Challenges, Designs and Benefits", CLUSTER 2015

Page 27:

On-Demand Paging (ODP)

• Applications no longer need to pin down the underlying physical pages
• Memory regions (MRs) are NEVER pinned by the OS
  – Paged in by the HCA when needed
  – Paged out by the OS when reclaimed
• ODP can be divided into two classes
  – Explicit ODP: applications still register memory buffers for communication, but this operation is used to define access control for I/O rather than to pin down the pages
  – Implicit ODP: applications are provided with a special memory key that represents their complete address space and do not need to register any virtual address range
• Advantages
  – Simplifies programming
  – Unlimited MR sizes
  – Physical memory optimization

[Figure: execution time (s, log scale) at 64 processes for AWP-ODC-256, AWP-ODC-512, BT, CG, LU, MG, SP, LJ, EAM, Boltzmann, and Lennard, comparing pin-down and ODP]

M. Li, K. Hamidouche, X. Lu, H. Subramoni, J. Zhang, and D. K. Panda, "Designing MPI Library with On-Demand Paging (ODP) of InfiniBand: Challenges and Benefits", SC 2016. Available since MVAPICH2-X 2.3b
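At the verbs level, explicit ODP is requested with a registration access flag; a minimal sketch (error handling elided; the device must advertise ODP support):

    #include <infiniband/verbs.h>
    #include <stdlib.h>

    /* register a buffer as an on-demand-paging MR: no pages are pinned here;
       the HCA faults them in on first access */
    struct ibv_mr *reg_odp_buffer(struct ibv_pd *pd, size_t len) {
        void *buf = malloc(len);
        return ibv_reg_mr(pd, buf, len,
                          IBV_ACCESS_LOCAL_WRITE |
                          IBV_ACCESS_REMOTE_READ |
                          IBV_ACCESS_REMOTE_WRITE |
                          IBV_ACCESS_ON_DEMAND);
    }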

Page 28:

Impact of Zero-Copy MPI Message Passing using HW Tag Matching (Point-to-point)

[Figure: osu_latency with MVAPICH2 vs. MVAPICH2+HW-TM: eager path (0-8 KB messages) and rendezvous path (32 KB-4 MB messages)]

Removal of intermediate buffering/copies can lead to up to 35% improvement in the latency of medium messages on TACC Frontera.

Page 29:

Benefits of the New Asynchronous Progress Design: Broadwell + InfiniBand

[Figure: P3DFFT time per loop in seconds (28 PPN, 56-448 processes, lower is better) and HPL normalized GFLOPS (28 PPN, 224-896 processes, higher is better) for MVAPICH2-X Async, MVAPICH2-X Default, and Intel MPI 18.1.163; HPL normalized performance of 106, 119, and 109 for Async vs. 100 for Default; memory consumption = 69%]

• Up to 44% performance improvement in the P3DFFT application with 448 processes
• Up to 19% and 9% performance improvement in the HPL application with 448 and 896 processes

A. Ruhela, H. Subramoni, S. Chakraborty, M. Bayatpour, P. Kousha, and D. K. Panda, Efficient Asynchronous Communication Progress for MPI without Dedicated Resources, EuroMPI 2018

Available in MVAPICH2-X 2.3rc1

Page 30:

Evaluation of Applications on Frontera (Cascade Lake + HDR100)

[Figure: MIMD Lattice Computation (MILC) CG time at 13,824-69,984 processes (PPN=54) and WRF2 time per step at 7,168-28,672 processes (PPN=56)]

The performance of the MILC and WRF2 applications scales well with increasing system size.

Page 31:

Hybrid (MPI+PGAS) Programming

• Application sub-kernels can be re-written in MPI/PGAS based on their communication characteristics (see the sketch below)
• Benefits:
  – Best of the distributed memory computing model
  – Best of the shared memory computing model

[Figure: an HPC application composed of kernels 1..N, where kernels 1 and 3 use MPI while kernels 2 and N are re-written in PGAS]
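With the unified MVAPICH2-X runtime described earlier, MPI and OpenSHMEM calls can coexist in one executable. A minimal hedged sketch of the pattern (initialization order and linking details follow the MVAPICH2-X user guide; shown here with one MPI-style collective kernel and one one-sided PGAS kernel):

    #include <mpi.h>
    #include <shmem.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        shmem_init();                     /* unified runtime: both models coexist */
        int me = shmem_my_pe(), npes = shmem_n_pes();

        /* "MPI kernel": two-sided collective over all ranks */
        long mine = me, sum = 0;
        MPI_Allreduce(&mine, &sum, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

        /* "PGAS kernel": one-sided put into the next PE's symmetric heap */
        long *slot = shmem_malloc(sizeof *slot);
        shmem_long_put(slot, &sum, 1, (me + 1) % npes);
        shmem_barrier_all();
        if (me == 0) printf("sum=%ld, slot=%ld\n", sum, *slot);

        shmem_free(slot);
        shmem_finalize();
        MPI_Finalize();
        return 0;
    }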

Page 32:

Application-Level Performance with Graph500 and Sort

[Figure: Graph500 execution time (s) at 4K-16K processes for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM), with the hybrid design 7.6x and 13x better at 8K and 16K processes; Sort execution time (seconds) for MPI vs. Hybrid from 500 GB at 512 processes to 4 TB at 4K processes, with the hybrid design 51% better]

• Performance of the hybrid (MPI+OpenSHMEM) Graph500 design
  – 8,192 processes: 2.4x improvement over MPI-CSR, 7.6x improvement over MPI-Simple
  – 16,384 processes: 1.5x improvement over MPI-CSR, 13x improvement over MPI-Simple
• Performance of the hybrid (MPI+OpenSHMEM) Sort application
  – 4,096 processes, 4 TB input size: MPI 2,408 sec (0.16 TB/min) vs. hybrid 1,172 sec (0.36 TB/min), a 51% improvement over the MPI design

J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC '13), June 2013
J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012
J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013

Page 33:

MVAPICH2 Distributions

• MVAPICH2
  – Basic MPI support for IB, Omni-Path, iWARP and RoCE
• MVAPICH2-X
  – Advanced MPI features and support for INAM
  – MPI, PGAS and hybrid MPI+PGAS support for IB
• MVAPICH2-Virt
  – Optimized for HPC clouds with IB and SR-IOV virtualization
  – Support for OpenStack, Docker, and Singularity
• OSU Micro-Benchmarks (OMB)
  – MPI (including CUDA-aware MPI), OpenSHMEM and UPC
• OSU INAM
  – InfiniBand Network Analysis and Monitoring Tool

Page 34:

Can HPC and Virtualization be Combined?

• Virtualization has many benefits
  – Fault-tolerance
  – Job migration
  – Compaction
• It has not been very popular in HPC due to the overhead associated with virtualization
• New SR-IOV (Single Root I/O Virtualization) support available with Mellanox InfiniBand adapters changes the field
• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.2 supports:
  – OpenStack, Docker, and Singularity

J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters? EuroPar '14
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC '14
J. Zhang, X. Lu, M. Arnold and D. K. Panda, MVAPICH2 over OpenStack with SR-IOV: an Efficient Approach to Build HPC Clouds, CCGrid '15

Page 35:

Application-Level Performance on Chameleon

[Figure: SPEC MPI2007 execution time (s) for milc, leslie3d, pop2, GAPgeofem, zeusmp2, and lu, and Graph500 execution time (ms) at problem sizes (scale, edgefactor) from (22,20) to (26,16), comparing MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native]

• 32 VMs, 6 cores/VM
• Compared to native, 2-5% overhead for Graph500 with 128 procs
• Compared to native, 1-9.5% overhead for SPEC MPI2007 with 128 procs

Page 36:

Performance of Radix on Microsoft Azure

[Figure: total execution time (seconds, lower is better) of Radix with MVAPICH2-X on HC instances (up to 3x faster) and on HB instances at 60 (1x60), 120 (2x60), and 240 (4x60) processes, where MVAPICH2-X is 38% faster than HPCx]

Page 37:

Amazon Elastic Fabric Adapter (EFA)

• Enhanced version of the Elastic Network Adapter (ENA)
• Allows OS bypass, up to 100 Gbps bandwidth
• New QP type: Scalable Reliable Datagram (SRD)
• Network-aware multi-path routing for low tail latency
• Guaranteed delivery, no ordering guarantee
• Exposed through verbs and libfabric interfaces

Feature | UD | SRD
Send/Recv | ✔ | ✔
Send w/ Immediate | ✖ | ✖
RDMA Read/Write/Atomic | ✖ | ✖
Scatter Gather Lists | ✔ | ✔
Shared Receive Queue | ✖ | ✖
Reliable Delivery | ✖ | ✔
Ordering | ✖ | ✖
Inline Sends | ✖ | ✖
Global Routing Header | ✔ | ✖
MTU Size | 4KB | 8KB

[Figure: verbs-level latency (us) and unidirectional message rate (million msg/sec) for UD vs. SRD; annotated values of 16.95 and 15.69 us, and 2.02 and 1.77 million msg/sec]
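Since EFA is exposed through libfabric, the quickest sanity check for the provider is fi_getinfo; a minimal sketch (assumes libfabric >= 1.7 built with the efa provider):

    #include <rdma/fabric.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct fi_info *hints = fi_allocinfo(), *info, *p;
        hints->fabric_attr->prov_name = strdup("efa");  /* ask for the EFA provider */
        if (fi_getinfo(FI_VERSION(1, 7), NULL, NULL, 0, hints, &info) == 0) {
            for (p = info; p; p = p->next)   /* list the endpoint types EFA offers */
                printf("provider=%s ep_type=%d\n",
                       p->fabric_attr->prov_name, p->ep_attr->type);
            fi_freeinfo(info);
        }
        fi_freeinfo(hints);
        return 0;
    }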

Page 38:

MPI-level Performance with SRD

[Figure: osu-style latency (us) at 8 nodes, 36 PPN for MV2X-UD, MV2X-SRD, and OpenMPI on Gatherv, Allreduce (4 bytes-4 KB), and Scatterv, plus miniGhost and CloverLeaf execution times (seconds); MVAPICH2-X is up to 10% and 27.5% better on the application benchmarks]

Instance type: c5n.18xlarge; CPU: Intel Xeon Platinum 8124M @ 3.00 GHz; MVAPICH2 version: MVAPICH2-X 2.3rc2 + SRD support; OpenMPI version: Open MPI v3.1.3 with libfabric 1.7

S. Chakraborty, S. Xu, H. Subramoni, and D. K. Panda, Designing Scalable and High-performance MPI Libraries on Amazon Elastic Fabric Adapter, to be presented at the 26th Symposium on High Performance Interconnects (HOTI '19)

Page 39:

MVAPICH2 Distributions

• MVAPICH2
  – Basic MPI support for IB, Omni-Path, iWARP and RoCE
• MVAPICH2-X
  – Advanced MPI features and support for INAM
  – MPI, PGAS and hybrid MPI+PGAS support for IB
• MVAPICH2-Virt
  – Optimized for HPC clouds with IB and SR-IOV virtualization
  – Support for OpenStack, Docker, and Singularity
• OSU Micro-Benchmarks (OMB)
  – MPI (including CUDA-aware MPI), OpenSHMEM and UPC
• OSU INAM
  – InfiniBand Network Analysis and Monitoring Tool

Page 40:

OSU Microbenchmarks

• Available since 2004
• Suite of microbenchmarks to study the communication performance of various programming models
• Benchmarks available for the following programming models
  – Message Passing Interface (MPI)
  – Partitioned Global Address Space (PGAS)
    • Unified Parallel C (UPC)
    • Unified Parallel C++ (UPC++)
    • OpenSHMEM
• Benchmarks available for multiple accelerator-based architectures
  – Compute Unified Device Architecture (CUDA)
  – OpenACC Application Program Interface
• Part of various national resource procurement suites such as the NERSC-8 / Trinity benchmarks
• Please visit the following link for more information
  – http://mvapich.cse.ohio-state.edu/benchmarks/

Page 41:

MVAPICH2 Distributions

• MVAPICH2
  – Basic MPI support for IB, Omni-Path, iWARP and RoCE
• MVAPICH2-X
  – Advanced MPI features and support for INAM
  – MPI, PGAS and hybrid MPI+PGAS support for IB
• MVAPICH2-Virt
  – Optimized for HPC clouds with IB and SR-IOV virtualization
  – Support for OpenStack, Docker, and Singularity
• OSU Micro-Benchmarks (OMB)
  – MPI (including CUDA-aware MPI), OpenSHMEM and UPC
• OSU INAM
  – InfiniBand Network Analysis and Monitoring Tool

Page 42:

Overview of OSU INAM

• A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the MPI runtime
  – http://mvapich.cse.ohio-state.edu/tools/osu-inam/
• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes
• Capability to analyze and profile node-level, job-level and process-level activities for MPI communication
  – Point-to-point, collectives and RMA
• Ability to filter data based on the type of counters using a "drop down" list
• Remotely monitor various metrics of MPI processes at user-specified granularity
• "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X
• Visualize the data transfer happening in a "live" or "historical" fashion for the entire network, a job, or a set of nodes
• OSU INAM 0.9.4 released on 11/10/2018
  – Enhanced performance for fabric discovery using optimized OpenMP-based multi-threaded designs
  – Ability to gather InfiniBand performance counters at sub-second granularity for very large (>2,000 node) clusters
  – Redesigned database layout to reduce database size
  – Enhanced fault tolerance for database operations
    • Thanks to Trey Dockendorf @ OSC for the feedback
  – OpenMP-based multi-threaded designs to handle database purge, read, and insert operations simultaneously
  – Improved database purging time by using bulk deletes
  – Tuned database timeouts to handle very long database operations
  – Improved debugging support by introducing several debugging levels

Page 43:

OSU INAM Features

• Show the network topology of large clusters
• Visualize traffic patterns on different links
• Quickly identify congested links and links in an error state
• See the history unfold: play back historical state of the network

[Screenshots: clustered view of Comet@SDSC (1,879 nodes, 212 switches, 4,377 network links); finding routes between nodes]

Page 44:

OSU INAM Features (Cont.)

• Job-level view (shown for a 5-node job)
  – Show different network metrics (load, error, etc.) for any live job
  – Play back historical data for completed jobs to identify bottlenecks
• Node-level view: details per process or per node
  – CPU utilization for each rank/node
  – Bytes sent/received for MPI operations (pt-to-pt, collective, RMA)
  – Network metrics (e.g. XmitDiscard, RcvError) per rank/node
• Estimated process-level link utilization view
  – Classify data flowing over a network link at different granularity (job level and process level) in conjunction with MVAPICH2-X 2.2rc1

Page 45:

Applications-Level Tuning: Compilation of Best Practices

• The MPI runtime has many parameters
• Tuning a set of parameters can help you extract higher performance
• Compiled a list of such contributions through the MVAPICH website
  – http://mvapich.cse.ohio-state.edu/best_practices/
• Initial list of applications
  – Amber
  – HoomDBlue
  – HPCG
  – Lulesh
  – MILC
  – Neuron
  – SMG2000
  – Cloverleaf
  – SPEC (LAMMPS, POP2, TERA_TF, WRF2)
• Soliciting additional contributions; send your results to mvapich-help at cse.ohio-state.edu
• We will link these results with credits to you

Page 46:

Commercial Support for MVAPICH2 Libraries

• Supported through X-ScaleSolutions (http://x-scalesolutions.com)
• Benefits:
  – Help and guidance with installation of the library
  – Platform-specific optimizations and tuning
  – Timely support for operational issues encountered with the library
  – Web portal interface to submit issues and track their progress
  – Advanced debugging techniques
  – Application-specific optimizations and tuning
  – Obtaining guidelines on best practices
  – Periodic information on major fixes and updates
  – Information on major releases
  – Help with upgrading to the latest release
  – Flexible service level agreements
• Support provided to Lawrence Livermore National Laboratory (LLNL) during the last two years

Page 47:

MVAPICH2 – Plans for Exascale

• Performance and memory scalability toward 1M-10M cores
• Hybrid programming (MPI + OpenSHMEM, MPI + UPC, MPI + CAF, …)
  – MPI + Task*
• Enhanced optimization for GPUs and FPGAs*
• Taking advantage of advanced features of Mellanox InfiniBand
  – Tag matching*
  – Adapter memory*
• Enhanced communication schemes for upcoming architectures
  – NVLINK*
  – CAPI*
• Extended topology-aware collectives
• Extended energy-aware designs and virtualization support
• Extended support for the MPI Tools Interface (as in MPI 3.0)
• Extended FT support
• Support for features marked * will be available in future MVAPICH2 releases

Page 48:

Join us for Multiple Events at SC ‘19

• Presentations at the OSU and X-Scale booth (#2094)
  – Members of the MVAPICH, HiBD and HiDL projects
  – External speakers
• Presentations in the SC main program (tutorials, workshops, BoFs, posters, and the doctoral showcase)
• Presentations at many other booths (Mellanox, Intel, Microsoft, and AWS) and satellite events
• Complete details available at http://mvapich.cse.ohio-state.edu/conference/752/talks/

Page 49:

Thank You!

[email protected]

Network-Based Computing Laboratory
http://nowlab.cse.ohio-state.edu/

The High-Performance MPI/PGAS Project
http://mvapich.cse.ohio-state.edu/

The High-Performance Deep Learning Project
http://hidl.cse.ohio-state.edu/

The High-Performance Big Data Project
http://hibd.cse.ohio-state.edu/