Mellanox Announces HDR 200 Gb/s InfiniBand Solutions


ConnectX-6 Adapters, Quantum Switches and LinkX Cables

November 2016

Interconnect Your Future with 200Gb/s HDR InfiniBand


Exponential Data Growth – The Need for Intelligent and Faster Interconnect

CPU-Centric: must wait for the data, creating performance bottlenecks

Data-Centric: analyze data as it moves, enabling performance and scale

Higher Data Speeds and In-Network Computing Enable Performance and Scale


Introducing 200G HDR InfiniBand Technology and Solutions

World’s First 200G Adapter

World’s First 200G Switch


Quantum and ConnectX-6 Smart Interconnect Solutions

Quantum switch:
• In-Network Computing (aggregation, reduction)
• Flexible topologies (Fat-Tree, Torus, Dragonfly, etc.)
• Advanced adaptive routing
• 16Tb/s switch capacity, extremely low latency of 90ns, 390M messages/second per port
• 40 ports of 200G HDR InfiniBand or 80 ports of 100G HDR100 InfiniBand
• Modular switch: 800 ports of 200G, 1,600 ports of 100G

ConnectX-6 adapter:
• In-Network Computing (collectives, matching)
• In-Network Memory
• Storage (NVMe), security and network offloads
• PCIe Gen3 and Gen4, integrated PCIe switch and Multi-Host
• Advanced adaptive routing
• 200Gb/s throughput (InfiniBand, Ethernet), 0.6usec end-to-end latency, 200M messages per second


Highest-Performance 200Gb/s Interconnect Solutions

• LinkX transceivers, active optical and copper cables (10 / 25 / 40 / 50 / 56 / 100 / 200Gb/s), VCSELs, silicon photonics and copper
• Quantum switch: 40 HDR (200Gb/s) InfiniBand ports or 80 HDR100 InfiniBand ports, 16Tb/s throughput, <90ns latency
• ConnectX-6 adapter: 200Gb/s, 0.6us latency, 200 million messages per second (10 / 25 / 40 / 50 / 56 / 100 / 200Gb/s)
• MPI, SHMEM/PGAS and UPC communication libraries for commercial and open-source applications, leveraging the hardware accelerations (a minimal PGAS sketch follows below)
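As a minimal illustration of the communication model these libraries expose (a generic OpenSHMEM sketch, not Mellanox-specific code; any adapter-level offload is applied by the library underneath):

/* Minimal OpenSHMEM sketch: each PE writes one long into its right
 * neighbor's symmetric buffer with a one-sided put, the kind of
 * operation an RDMA-capable adapter completes without involving the
 * remote CPU.  Build with an OpenSHMEM compiler wrapper (e.g. oshcc). */
#include <shmem.h>
#include <stdio.h>

int main(void)
{
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Symmetric allocation: the same remotely accessible buffer on every PE. */
    long *dest = shmem_malloc(sizeof(long));
    *dest = -1;
    shmem_barrier_all();

    long value = me;                        /* payload: my PE number      */
    int  right = (me + 1) % npes;           /* target: my right neighbor  */
    shmem_long_put(dest, &value, 1, right); /* one-sided remote write     */

    shmem_barrier_all();                    /* ensure all puts have landed */
    printf("PE %d received %ld from PE %d\n", me, *dest, (me - 1 + npes) % npes);

    shmem_free(dest);
    shmem_finalize();
    return 0;
}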


Interconnect Technology: The Need for Speed and Intelligence

[Chart: interconnect speed generations (40G QDR, 56G FDR, 100G EDR, 200G HDR, 400G NDR) plotted against system size (100 to 1,000,000 nodes), with representative applications at each scale: LS-DYNA (FEA), OpenFOAM (CFD), Human Genome, the Large Hadron Collider (CERN), Brain Mapping, Weather, Homeland Security and Cosmological Simulations.]


World’s First 200G Adapter

ConnectX-6


ConnectX-6 – Highest-Performance, Most Scalable Adapter

Highest Performance
• 200Gb/s InfiniBand and Ethernet adapter, 0.6usec end-to-end latency
• 200 million messages/second, 2X higher versus competition
• 32 lanes of PCIe Gen3, 16 lanes of PCIe Gen4, integrated PCIe switch (see the bandwidth check below)

Highest Scalability
• In-Network Computing enables data computation everywhere
• In-Network Memory enables distributed, application-accessible data
• NVMe over Fabrics acceleration engines, data encryption engines, enhanced adaptive routing

Lowest TCO
• Multi-Host Technology supporting up to 8 different hosts over a single adapter
• Flexible topologies (including Host Chaining), IP-over-InfiniBand gateway for a single-wire solution
• Open and standard technology; large OEM, ISV and support ecosystem

Generation Ahead Leadership, Data Center Competitive Advantage
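A quick bandwidth check (standard PCIe rates, not figures from this announcement) shows why either PCIe option sustains the 200Gb/s line rate: a Gen3 lane runs at 8 GT/s with 128b/130b encoding, about 7.9 Gb/s of usable bandwidth, so 32 lanes provide roughly 32 × 7.9 ≈ 252 Gb/s; a Gen4 lane doubles that to about 15.8 Gb/s, so 16 lanes again provide roughly 252 Gb/s, leaving headroom above a single 200Gb/s HDR port.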


ConnectX-6 HDR 200G Architecture

[Block diagram: host interfaces (x86, OpenPOWER, GPU, ARM, FPGA) attached through PCIe Gen4, an integrated PCIe switch and Multi-Host Technology; on-adapter In-Network Computing, In-Network Memory, RDMA transport, eSwitch and routing; network side InfiniBand / Ethernet at 10, 20, 40, 50, 56, 100 and 200G.]

Breakthrough Performance & Total Cost of Ownership!


ConnectX-6 In-Network Computing and Acceleration Engines

• RDMA: most efficient data movement for compute and storage platforms; 200G with <1% CPU utilization
• Collectives: CORE-Direct technology for unlimited platform scale; accelerates MPI and SHMEM/PGAS communication performance
• Tag Matching: MPI tag-matching and rendezvous protocol offloads; accelerates MPI application performance (see the MPI sketch after this list)
• Network Transport: all communications managed and operated by the network hardware, with adaptive routing and congestion management; maximizes CPU availability for applications and increases network efficiency
• Storage: NVMe over Fabrics offloads, T10-DIF and erasure coding offloads; efficient end-to-end data protection, background checkpointing (burst buffer) and more, increasing system performance and CPU availability
• Security: data encryption/decryption (IEEE XTS standard) and key management, Federal Information Processing Standards (FIPS) compliant; enhances data security options and enables protection between users sharing the same resources (different keys)
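As a concrete sketch of the traffic the tag-matching and rendezvous offloads target (plain MPI code; whether the matching actually runs in the adapter depends on the adapter and the MPI library configuration):

/* MPI sketch: a pre-posted, tagged receive for a large message.
 * Matching the incoming message to this receive and driving the
 * rendezvous protocol for the bulk transfer are the steps a
 * tag-matching offload can move from the CPU into the adapter.
 * Run with at least two ranks, e.g. mpirun -np 2 ./a.out          */
#include <mpi.h>
#include <stdlib.h>

#define COUNT (1 << 22)   /* 4M doubles (~32 MB): large enough for rendezvous */
#define TAG   42

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = malloc(COUNT * sizeof(double));

    if (rank == 0) {
        for (int i = 0; i < COUNT; i++) buf[i] = (double)i;
        MPI_Send(buf, COUNT, MPI_DOUBLE, 1, TAG, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Request req;
        /* Pre-post the receive so the data can be matched and placed
         * directly into buf while the CPU does other work.           */
        MPI_Irecv(buf, COUNT, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD, &req);
        /* ... overlap computation here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}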


Multi-Host Technology – Changing Data Center Economics

InfiniBand / Ethernet at 10, 20, 40, 50, 56, 100 and 200G

At least 50% savings on data center infrastructure; multiple OEM designs to be announced


World’s First 200G Switch

Quantum


Quantum – Highest-Performance, Most Scalable Switch

Highest Performance
• 200Gb/s per port, 16Tb/s switch capacity, 1.7X higher versus competition
• 90ns port-to-port switch latency, 20% better versus competition
• 390 million messages/second per port, 2X higher versus competition

Highest Scalability
• 128,000 nodes possible in a 3-level Fat-Tree fabric at 100Gb/s, 4.6X higher versus competition (see the arithmetic check below)
• In-Network Computing enables data computation everywhere
• Flexible topologies (Fat-Tree, Torus, Dragonfly+ and more)

Lowest TCO
• 80 ports of 100G HDR100 or 40 ports of 200G HDR, making Quantum the most efficient switch
• Requires 50% fewer switches and cables versus competition for the same network throughput
• Open and standard technology; large OEM, ISV and support ecosystem

Generation Ahead Leadership, Data Center Competitive Advantage
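Two of the headline numbers above follow directly from the port count (a back-of-the-envelope check, not additional data from the announcement): 40 ports × 200 Gb/s × 2 directions = 16 Tb/s of switch capacity, and a full 3-level fat tree built from radix-80 switches (Quantum in its 80-port HDR100 mode) reaches k^3/4 = 80^3/4 = 128,000 end nodes.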


Quantum – Most Scalable Switch for Lowest TCO

400-node 100G InfiniBand platform: 15 switches
384-node 100G Omni-Path platform: 24 switches

40% fewer switches and 50% fewer cables with InfiniBand (one way to arrive at these counts is sketched below)
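A sketch of the switch counts, assuming a two-level fat tree (the leaf/spine split is not spelled out on the slide): running Quantum as an 80-port 100G (HDR100) switch, 10 leaf switches with 40 hosts and 40 uplinks each serve 400 nodes, and the 400 uplinks need 400 / 80 = 5 spine switches, 15 switches in total; a 48-port 100G switch gives 24 hosts and 24 uplinks per leaf, so 384 nodes need 16 leaves plus 384 / 48 = 8 spines, 24 switches in total. The 50% cable figure is consistent with HDR splitter cables carrying two HDR100 host links per physical cable.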



Quantum – Most Scalable Switch for Lowest TCO

1536-node 100G InfiniBand platform: 768 cables, 24KW power
1536-node 100G Omni-Path platform: 3,072 cables, 49KW power

66% less real estate, 75% fewer cables and 50% power savings with InfiniBand


Quantum HDR 200G In-Network Computing – SHArP Technology

Delivering up to 10X performance improvement for HPC / machine learning communications.

SHArP 2.0 expands SHArP technology to all message sizes, meeting the needs of machine learning applications.

SHArP 2.0 enables Quantum to manage and execute HPC / machine learning operations in the network.
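For context, the operation SHArP accelerates is the collective reduction at the heart of both HPC solvers and distributed training; a minimal standard-MPI sketch (the in-network aggregation itself is transparent to the application when SHArP is enabled in the MPI library):

/* MPI sketch: a global element-wise sum over a gradient-sized buffer.
 * With SHArP enabled in the MPI library, the reduction can be executed
 * inside the switch fabric rather than staged through the host CPUs.  */
#include <mpi.h>
#include <stdlib.h>

#define N 1024   /* elements per rank */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float *local  = malloc(N * sizeof(float));
    float *global = malloc(N * sizeof(float));
    for (int i = 0; i < N; i++) local[i] = (float)rank;   /* stand-in data */

    /* Element-wise sum across all ranks; every rank receives the result. */
    MPI_Allreduce(local, global, N, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    free(local);
    free(global);
    MPI_Finalize();
    return 0;
}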


End-to-End 200G HDR InfiniBand


InfiniBand Delivers Higher Performance Over Competition

• Automotive simulations: 28% higher
• Material modeling: 48% higher
• DNA modeling: 42% higher
• Weather simulations: 24% higher
• Computational fluid dynamics: 17% higher

Mellanox Delivers the Highest Data Center Return on Investment


InfiniBand: The Smart Choice for HPC Platforms and Applications

“We chose a co-design approach supporting in the best possible manner our key applications. The only interconnect that really could deliver that was Mellanox’s InfiniBand.”

“One of the big reasons we use InfiniBand and not an alternative is that we’ve got backwards compatibility with our existing solutions.”

“InfiniBand is the most advanced interconnect technology in the world, with dramatic communication overhead reduction that fully unleashes cluster performance.”

“InfiniBand is the best that is required for our applications. It enhances and unlocks the potential of the system.”


“In HPC, the processor should be going 100% of the time on a science question, not on a communications question. This is why the offload capability of Mellanox’s network is critical.”

“We have users that move tens of terabytes of data and this needs to happen very, very rapidly. InfiniBand is the way to do it.”



Mellanox Chosen to Connect Future #1 HPC Systems

“Summit” System “Sierra” System

Proud to Pave the Path to Exascale

Thank You