InfiniBand Strengthens Leadership as the Interconnect Of Choice


Page 1: InfiniBand Strengthens Leadership as the Interconnect Of Choice

TOP500 Supercomputers, Nov 2014

InfiniBand Strengthens Leadership as the Interconnect Of Choice By Providing Best Return on Investment

Page 2: InfiniBand Strengthens Leadership as the Interconnect Of Choice


TOP500 Performance Trends

Explosive high-performance computing market growth

Clusters continue to dominate with 86% of the TOP500 list

[Chart: TOP500 performance trends – trend lines showing 79% CAGR and 39% CAGR]
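For reference, the compound annual growth rate (CAGR) figures in the chart follow the standard definition over an n-year window, where V_start and V_end are the aggregate values at the start and end of the window:

\[ \mathrm{CAGR} = \left( \frac{V_{\mathrm{end}}}{V_{\mathrm{start}}} \right)^{1/n} - 1 \]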

Mellanox InfiniBand solutions provide the highest systems utilization in the TOP500

for both high-performance computing and clouds

Page 3: InfiniBand Strengthens Leadership as the Interconnect Of Choice


InfiniBand is the de-facto Interconnect solution for High-Performance Computing

• Positioned to continue growing and to expand into Cloud and Web 2.0

InfiniBand is the most used interconnect on the TOP500 list, connecting 225 systems

• Increasing 8.7% year-over-year, from 207 systems in Nov’13 to 225 in Nov’14 (see the quick check at the end of these highlights)

FDR InfiniBand is the most used technology on the TOP500 list, connecting 141 systems

• Grew 1.8X year-over-year from 80 systems in Nov’13 to 141 in Nov’14

InfiniBand enables the most efficient system on the list with 99.8% efficiency – a record!

InfiniBand enables 24 out of the 25 most efficient systems (and the top 17 most efficient systems)

InfiniBand is the most used interconnect for Petascale-performance systems with 24 systems
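A quick arithmetic check of the year-over-year growth figures cited above:

\[ \frac{225}{207} - 1 \approx 0.087 = 8.7\%, \qquad \frac{141}{80} \approx 1.76 \approx 1.8\times \]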

TOP500 Major Highlights

Page 4: InfiniBand Strengthens Leadership as the Interconnect Of Choice


InfiniBand is the most used interconnect technology for high-performance computing
• InfiniBand accelerates 225 systems, 45% of the list

FDR InfiniBand connects the fastest InfiniBand systems
• TACC (#7), Government (#10), NASA (#11), Eni (#12), LRZ (#14)

InfiniBand connects the most powerful clusters
• 24 of the Petascale-performance systems

The most used interconnect solution in the TOP100, TOP200, TOP300, TOP400
• Connects 48% (48 systems) of the TOP100, while Ethernet connects only 2% (2 systems)
• Connects 51.5% (103 systems) of the TOP200, while Ethernet connects only 15.5% (31 systems)
• Connects 50.7% (152 systems) of the TOP300, while Ethernet connects only 24.3% (73 systems)
• Connects 49% (196 systems) of the TOP400, while Ethernet connects only 29.8% (119 systems)
• Connects 45% (225 systems) of the TOP500, while Ethernet connects 37.4% (187 systems)

InfiniBand is the interconnect of choice for accelerator-based systems
• 80% of the accelerator-based systems are connected with InfiniBand

Diverse set of applications
• High-end HPC, commercial HPC, Cloud and enterprise data center

InfiniBand in the TOP500

Page 5: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Mellanox FDR InfiniBand is the fastest interconnect solution on the TOP500

• More than 12 GB/s throughput, less than 0.7 usec latency (see the measurement sketch after this list)

• Being used in 141 systems on the TOP500 list – 1.8X increase from the Nov’13 list

• Connects the fastest InfiniBand-based supercomputers

- TACC (#7), Government (#10), NASA (#11), Eni (#12), LRZ (#14)
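Interconnect latency and throughput figures of this kind are normally quoted from MPI-level ping-pong microbenchmarks (osu_latency and similar tools). Below is a minimal sketch of such a measurement in plain C with MPI; the iteration count and message size are illustrative assumptions, and the numbers it reports depend on the adapter, switch and MPI library, so treat it as a sketch rather than the exact benchmark behind the figures above.

/* Minimal MPI ping-pong latency sketch (in the spirit of osu_latency).
 * Run with exactly two ranks, placed on two different nodes of the fabric. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    char msg = 0;

    MPI_Barrier(MPI_COMM_WORLD);           /* start both ranks together */
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)                          /* round trip / 2 = one-way latency */
        printf("average one-way latency: %.2f usec\n",
               (t1 - t0) * 1e6 / (2.0 * iters));

    MPI_Finalize();
    return 0;
}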

Mellanox InfiniBand is the most efficient interconnect technology on the list

• Enables the highest system utilization on the TOP500 – 99.8% system efficiency

• Enables the top 17 highest-utilized systems, and 24 of the top 25, on the TOP500 list

Mellanox InfiniBand is the only Petascale-proven, standard interconnect solution

• Connects 24 out of the 54 Petaflop-performance systems on the list

• Connects 1.5X the number of Cray-based systems in the TOP100, and 5X in the TOP500

Mellanox’s end-to-end scalable solutions accelerate GPU-based systems

• GPUDirect RDMA technology enables faster communications and higher performance
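For context on the GPUDirect RDMA point above: applications typically exercise it through a CUDA-aware MPI library, which accepts GPU device pointers directly so the adapter can read and write GPU memory without staging through host buffers. The sketch below is a minimal, hypothetical example (buffer size, ranks and build line are illustrative assumptions); it requires an MPI build with CUDA support and, for the zero-copy path, GPUDirect RDMA enabled on the node.

/* Minimal sketch: passing a GPU (device) buffer directly to MPI.
 * Assumes a CUDA-aware MPI; with GPUDirect RDMA enabled the HCA reads the
 * device buffer directly, otherwise the library stages through host memory.
 * Build (example): mpicc gdr_sketch.c -o gdr_sketch -lcudart */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const size_t n = 1 << 20;                       /* 1 Mi floats */
    float *gpu_buf = NULL;
    cudaMalloc((void **)&gpu_buf, n * sizeof(float));

    if (rank == 0) {
        cudaMemset(gpu_buf, 0, n * sizeof(float));
        /* The device pointer goes straight into MPI_Send. */
        MPI_Send(gpu_buf, (int)n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(gpu_buf, (int)n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %zu floats into device memory\n", n);
    }

    cudaFree(gpu_buf);
    MPI_Finalize();
    return 0;
}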

Mellanox in the TOP500

Page 6: InfiniBand Strengthens Leadership as the Interconnect Of Choice


InfiniBand is the de-facto interconnect solution for performance demanding applications

TOP500 Interconnect Trends

Page 7: InfiniBand Strengthens Leadership as the Interconnect Of Choice


TOP500 Petascale-Performance Systems

Mellanox InfiniBand is the interconnect of choice for Petascale computing
• 24 systems overall, of which 19 use FDR InfiniBand

Page 8: InfiniBand Strengthens Leadership as the Interconnect Of Choice


TOP100: Interconnect Trends

InfiniBand is the most used interconnect solution in the TOP100

The natural choice for world-leading supercomputers: performance, efficiency, scalability

Page 9: InfiniBand Strengthens Leadership as the Interconnect Of Choice


TOP500 InfiniBand Accelerated Systems

Number of Mellanox FDR InfiniBand systems grew 1.8X from Nov’13 to Nov’14

• Accelerates 63% of the InfiniBand-based systems (141 systems out of 225)

Page 10: InfiniBand Strengthens Leadership as the Interconnect Of Choice


InfiniBand is the most used interconnect of the TOP100, 200, 300, 400 supercomputers

Due to superior performance, scalability, efficiency and return-on-investment

InfiniBand versus Ethernet – TOP100, 200, 300, 400, 500

Page 11: InfiniBand Strengthens Leadership as the Interconnect Of Choice


TOP500 Interconnect Placement

InfiniBand is the high-performance interconnect of choice
• Connects the most powerful clusters and provides the highest system utilization

InfiniBand is the best price/performance interconnect for HPC systems

Page 12: InfiniBand Strengthens Leadership as the Interconnect Of Choice


InfiniBand’s Unsurpassed System Efficiency

TOP500 systems listed according to their efficiency

InfiniBand is the key element responsible for the highest system efficiency; on average 30% higher than 10GbE

Mellanox delivers efficiencies of up to 99.8% with InfiniBand

Average Efficiency

• InfiniBand: 87%

• Cray: 79%

• 10GbE: 67%

• GigE: 40%
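For reference, the efficiency quoted throughout is the standard TOP500 ratio of measured LINPACK performance to theoretical peak performance:

\[ \text{Efficiency} = \frac{R_{\mathrm{max}}}{R_{\mathrm{peak}}} \]

so the 99.8% record means the measured result is within 0.2% of the machine's theoretical peak.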

Page 13: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Mellanox InfiniBand connects the most efficient system on the list

• 24 of the 25 most efficient systems

Enabling a record system efficiency of 99.8%, only 0.2% less than the theoretical limit!

Enabling The Most Efficient Systems

Page 14: InfiniBand Strengthens Leadership as the Interconnect Of Choice


TOP500 Interconnect Comparison

InfiniBand systems account for 2.6X the performance of Ethernet systems

The only scalable HPC interconnect solution

Page 15: InfiniBand Strengthens Leadership as the Interconnect Of Choice


InfiniBand-connected systems’ performance demonstrates the highest growth rate

InfiniBand is responsible for 2.6X the performance of Ethernet systems on the TOP500 list

TOP500 Performance Trend

Page 16: InfiniBand Strengthens Leadership as the Interconnect Of Choice


InfiniBand Performance Trends

InfiniBand-connected CPUs grew 28% from Nov‘13 to Nov‘14

InfiniBand-based system performance grew 44% from Nov‘13 to Nov‘14

Mellanox InfiniBand is the most efficient and scalable interconnect solution

Driving factors: performance, efficiency, scalability, and ever-growing core counts

Page 18: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Proud to Accelerate Future DOE Leadership Systems (“CORAL”)

“Summit” and “Sierra” Systems

5X – 10X Higher Application Performance versus Current Systems

Mellanox EDR 100Gb/s InfiniBand, IBM POWER CPUs, NVIDIA Tesla GPUs

Mellanox EDR 100G Solutions Selected by the DOE for 2017 Leadership Systems

Deliver Superior Performance and Scalability over Current / Future Competition

Page 19: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Interconnect Solutions Leadership – Software and Hardware

ICs, Adapter Cards, Switches/Gateways, Cables/Modules, Metro/WAN

Comprehensive End-to-End InfiniBand and Ethernet Hardware Products

Comprehensive End-to-End Interconnect Software Products

Page 20: InfiniBand Strengthens Leadership as the Interconnect Of Choice


The Future is Here

Copper (Passive, Active), Optical Cables (VCSEL), Silicon Photonics

36 EDR (100Gb/s) Ports, <90ns Latency

Throughput of 7.2Tb/s

Enter the Era of 100Gb/s

100Gb/s Adapter, 0.7us latency

150 million messages per second

(10 / 25 / 40 / 50 / 56 / 100Gb/s)

Page 21: InfiniBand Strengthens Leadership as the Interconnect Of Choice


The Advantage of 100G Copper Cables (over Fiber)

5,000-node cluster: CAPEX saving of $2.1M, power saving of 12.5 kW

1,000-node cluster: CAPEX saving of $420K, power saving of 2.5 kW

Lower Cost, Lower Power Consumption, Higher Reliability
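Both cluster sizes imply the same per-node saving, a useful consistency check on the figures above:

\[ \frac{\$2.1\mathrm{M}}{5000\ \text{nodes}} = \frac{\$420\mathrm{K}}{1000\ \text{nodes}} = \$420\ \text{per node}, \qquad \frac{12.5\ \mathrm{kW}}{5000} = \frac{2.5\ \mathrm{kW}}{1000} = 2.5\ \mathrm{W}\ \text{per node} \]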

SC’14 Demonstration: 4m, 6m and 8m Copper Cables

Page 22: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Enter the World of Scalable Performance – 100Gb/s Switch


7th Generation InfiniBand Switch

36 EDR (100Gb/s) Ports, <90ns Latency

Throughput of 7.2 Tb/s
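The quoted switch throughput is consistent with the port count when both directions of each full-duplex EDR port are counted:

\[ 36\ \text{ports} \times 100\ \mathrm{Gb/s} \times 2\ \text{directions} = 7.2\ \mathrm{Tb/s} \]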

InfiniBand Router

Adaptive Routing

Boundless Performance

Page 23: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Enter the World of Scalable Performance – 100Gb/s Adapter

Connect. Accelerate. Outperform

ConnectX-4: Highest Performance Adapter in the Market

InfiniBand: SDR / DDR / QDR / FDR / EDR

Ethernet: 10 / 25 / 40 / 50 / 56 / 100GbE

100Gb/s, <0.7us latency

150 million messages per second

OpenPOWER CAPI technology

CORE-Direct technology

GPUDirect RDMA

Dynamically Connected Transport (DCT)

Ethernet offloads (HDS, RSS, TSS, LRO, LSOv2)

Page 24: InfiniBand Strengthens Leadership as the Interconnect Of Choice


End-to-End Interconnect Solutions for All Platforms

Highest Performance and Scalability for

X86, Power, GPU, ARM and FPGA-based Compute and Storage Platforms

Smart Interconnect to Unleash The Power of All Compute Architectures

x86, OpenPOWER, GPU, ARM, FPGA

Page 25: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Mellanox Delivers Highest Application Performance

World Record Performance with Mellanox Connect-IB and HPC-X

Achieve Higher Performance with 67% less compute infrastructure versus Cray


Page 26: InfiniBand Strengthens Leadership as the Interconnect Of Choice


HPC Clouds – Performance Demands Mellanox Solutions

San Diego Supercomputing Center “Comet” System (2015) to Leverage Mellanox Solutions and Technology to Build HPC Cloud

Page 27: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Technology Roadmap – One-Generation Lead over the Competition

[Roadmap graphic: 2000–2020 timeline from Terascale through Petascale to Exascale, showing Mellanox interconnect generations of 20Gb/s, 40Gb/s, 56Gb/s, 100Gb/s and 200Gb/s (2015 and beyond); milestones include the Virginia Tech (Apple) system, #3 on the TOP500 in 2003, and the Mellanox Connected “Roadrunner” system at #1.]

Page 28: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Mellanox Interconnect Advantages

Mellanox solutions provide proven, scalable and high-performance end-to-end connectivity

Standards-based (InfiniBand, Ethernet), supported by a large ecosystem

Flexible, supporting all compute architectures: x86, Power, ARM, GPU, FPGA, etc.

Backward and future compatible

Proven, most used solution for Petascale systems and overall TOP500

Paving The Road to Exascale Computing

Page 29: InfiniBand Strengthens Leadership as the Interconnect Of Choice


“Stampede” system

6,000+ nodes (Dell), 462,462 cores, Intel Xeon Phi co-processors

5.2 Petaflops

Mellanox end-to-end FDR InfiniBand

Texas Advanced Computing Center/Univ. of Texas - #7

Petaflop Mellanox Connected

Page 30: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Pleiades system

SGI Altix ICE

20K InfiniBand nodes

3.4 sustained Petaflop performance

Mellanox end-to-end FDR and QDR InfiniBand

Supports a variety of scientific and engineering projects
• Coupled atmosphere-ocean models

• Future space vehicle design

• Large-scale dark matter halos and galaxy evolution

NASA Ames Research Center - #11

Petaflop Mellanox Connected

Page 32: InfiniBand Strengthens Leadership as the Interconnect Of Choice


IBM iDataPlex and Intel Sandy Bridge

147,456 cores

Mellanox end-to-end FDR InfiniBand solutions

2.9 sustained Petaflop performance

The fastest supercomputer in Europe

91% efficiency

Leibniz Rechenzentrum SuperMUC - #14

Petaflop Mellanox Connected

Page 33: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Tokyo Institute of Technology - #15

TSUBAME 2.0, first Petaflop system in Japan

2.8 PF performance

HP ProLiant SL390s G7, 1,400 servers

Mellanox 40Gb/s InfiniBand

Petaflop Mellanox Connected

Page 36: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Occigen system

1.6 sustained Petaflop performance

Bull bullx DLC

Mellanox end-to-end FDR InfiniBand

Grand Equipement National de Calcul Intensif - Centre Informatique National de l'Enseignement Supérieur (GENCI-CINES) - #26

Petaflop Mellanox Connected

Page 38: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Bull Bullx B510, Intel Sandy Bridge

77,184 cores

Mellanox end-to-end FDR InfiniBand solutions

1.36 sustained Petaflop performance

CEA/TGCC-GENCI - #33

Petaflop Mellanox Connected

Page 39: InfiniBand Strengthens Leadership as the Interconnect Of Choice


IBM iDataPlex DX360M4

Mellanox end-to-end FDR InfiniBand solutions

1.3 sustained Petaflop performance

Max-Planck-Gesellschaft MPI/IPP - #34

Petaflop Mellanox Connected

Page 40: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Dawning TC3600 Blade Supercomputer

5,200 nodes, 120,640 cores, NVIDIA GPUs

Mellanox end-to-end 40Gb/s InfiniBand solutions

• ConnectX-2 and IS5000 switches

1.27 sustained Petaflop performance

The first Petaflop system in China

National Supercomputing Centre in Shenzhen (NSCS) - #35

Petaflop Mellanox Connected

Page 41: InfiniBand Strengthens Leadership as the Interconnect Of Choice


“Yellowstone” system

72,288 processor cores, 4,518 nodes (IBM)

Mellanox end-to-end FDR InfiniBand, full fat tree, single plane (see the topology sizing note below)

NCAR (National Center for Atmospheric Research) - #36

Petaflop Mellanox Connected
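A note on the “full fat tree” topology mentioned above: a common sizing rule for nonblocking fat trees built from k-port switches is k²/2 end ports with two switch levels and k³/4 with three, so with 36-port InfiniBand switch silicon:

\[ \frac{36^2}{2} = 648\ \text{nodes (2 levels)}, \qquad \frac{36^3}{4} = 11{,}664\ \text{nodes (3 levels)} \]

which is why a 4,518-node system needs more than two tiers of 36-port switching.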

Page 42: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Bull Bullx B510, Intel Sandy Bridge

70,560 cores

1.24 sustained Petaflop performance

Mellanox end-to-end InfiniBand solutions

International Fusion Energy Research Centre (IFERC), EU(F4E) - Japan Broader Approach collaboration - #38

Petaflop Mellanox Connected

Page 43: InfiniBand Strengthens Leadership as the Interconnect Of Choice


The “Cartesius” system - the Dutch supercomputer

Bull Bullx DLC B710/B720 system

Mellanox end-to-end InfiniBand solutions

1.05 sustained Petaflop performance

SURFsara - #45

Petaflop Mellanox Connected

Page 44: InfiniBand Strengthens Leadership as the Interconnect Of Choice


Commissariat a l'Energie Atomique (CEA) - #47

Tera 100, first Petaflop system in Europe - 1.05 PF performance

4,300 Bull S Series servers

140,000 Intel® Xeon® 7500 processing cores

300TB of central memory, 20PB of storage

Mellanox 40Gb/s InfiniBand

Petaflop Mellanox Connected

Page 46: InfiniBand Strengthens Leadership as the Interconnect Of Choice


“Conte” system

HP Cluster Platform SL250s Gen8

Intel Xeon E5-2670 8C 2.6GHz

Intel Xeon Phi 5110P accelerator

Total of 77,520 cores

Mellanox FDR 56Gb/s InfiniBand end-to-end

Mellanox Connect-IB InfiniBand adapters

Mellanox MetroX long-haul InfiniBand solution

980 Tflops performance

Purdue University - #53

Page 47: InfiniBand Strengthens Leadership as the Interconnect Of Choice


“MareNostrum 3” system

1.1 Petaflops peak performance

~50K cores, 91% efficiency

Mellanox FDR InfiniBand

Barcelona Supercomputing Center - #57

Page 48: InfiniBand Strengthens Leadership as the Interconnect Of Choice

Thank You