FLEXIBLE I/O FOR THE DYNAMIC DATA CENTER
Mellanox 10 and 40 Gigabit Ethernet Converged Network Adapters

Mellanox ConnectX®-2 EN Ethernet Network Interface Cards (NICs) deliver high-bandwidth, industry-leading 10 and 40 Gigabit Ethernet connectivity with stateless offloads for converged fabrics in Enterprise Data Center, High-Performance Computing, and Embedded environments. Clustered databases, web infrastructure, and IP video servers are just a few example applications that will achieve significant throughput and latency improvements, resulting in faster access, real-time response, and an increased number of users per server. ConnectX-2 EN improves network performance by increasing available bandwidth to the CPU and providing enhanced performance, especially in virtualized server environments.

Optimal Price/Performance
ConnectX-2 EN 10 and 40 Gigabit Ethernet removes the I/O bottlenecks that limit application performance in mainstream servers. Servers supporting PCI Express 2.0 with 5GT/s can fully utilize both 10Gb/s ports, while the 40GigE port will fully saturate the host bus, delivering the I/O bandwidth required by high-end servers. Hardware-based stateless offload engines handle the TCP/UDP/IP segmentation, reassembly, and checksum calculations that would otherwise burden the host processor. These offload technologies are fully compatible with Microsoft RSS and NetDMA. Total cost of ownership is optimized by maintaining an end-to-end Ethernet network on existing operating systems and applications.
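
The arithmetic behind that claim is worth making explicit. The sketch below is a back-of-the-envelope check, not figures from this datasheet: PCIe 2.0 uses 8b/10b line coding, leaving 80% of the raw signaling rate for payload, which is why two 10Gb/s ports fit within an x8 link while one 40Gb/s port saturates it.

```python
# Rough PCIe 2.0 x8 payload bandwidth, per direction. The datasheet's
# "40+40Gb/s" figure is the raw signaling rate; 8b/10b coding leaves
# 80% of that for payload, before protocol overhead.
GT_PER_LANE = 5.0          # PCIe 2.0: 5 gigatransfers/s per lane
CODING_EFFICIENCY = 0.8    # 8b/10b line coding: 8 payload bits per 10 on the wire
LANES = 8

effective_gbps = GT_PER_LANE * CODING_EFFICIENCY * LANES
print(f"~{effective_gbps:.0f} Gb/s payload per direction")  # ~32 Gb/s

# Two 10GigE ports need 20 Gb/s per direction, which fits comfortably;
# a single 40GigE port demands 40 Gb/s, more than the link can carry,
# so the 40GigE port fully saturates the host bus.
print("dual 10GigE fits:", 2 * 10 <= effective_gbps)  # True
print("40GigE saturates the bus:", 40 > effective_gbps)  # True
```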

Mellanox provides 10 and 40 Gigabit Ethernet adapters suitable for all network environments. The dual port SFP+ adapter supports 10GBASE-SR, -LR, and direct attached copper cables, providing the flexibility to connect over short, medium, and long distances. Dual port 10GBASE-T adapters provide easy connections up to 100m over familiar UTP wiring. The dual port 10GBASE-CX4 and single port 40GBASE-CR4 adapters, with their powered connectors, can use active copper and fiber cables as well as passive copper.

Converged Ethernet
ConnectX-2 EN delivers the features needed for a converged network with support for Data Center Bridging (DCB). T11-compliant Fibre Channel frame encapsulation and hardware offloads simplify FCoE deployment, while Low Latency Ethernet running over DCB fabrics enables efficient RDMA transactions. By maintaining link-level interoperability with existing Ethernet networks, IT managers can leverage existing data center fabric management solutions.

I/O Virtualization
ConnectX-2 EN support for hardware-based I/O virtualization provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. ConnectX-2 EN gives data center managers better server utilization and LAN/SAN unification while reducing cost, power, and cable complexity.

Quality of Service
Resource allocation per application or per VM is provided and protected by the advanced QoS supported by ConnectX-2 EN. Service levels for multiple traffic types can be based on IETF DiffServ or IEEE 802.1p/Q, along with the DCB enhancements, allowing system administrators to prioritize traffic by application, virtual machine, or protocol. This combination of QoS and prioritization provides fine-grained control of traffic, ensuring that applications run smoothly in today's complex environments.
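
Because the QoS operates on standard DiffServ and 802.1p/Q markings, applications mark their traffic in the usual way and the adapter enforces the service levels. The following is a minimal, generic Linux sketch, not a Mellanox-specific API; the destination address, port, DSCP codepoint, and priority value are all illustrative.

```python
import socket

# Hypothetical destination; 192.0.2.0/24 is a documentation address range.
DEST = ("192.0.2.10", 5001)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# IETF DiffServ: mark packets with DSCP Expedited Forwarding (46).
# The DSCP occupies the upper 6 bits of the IP TOS byte, hence the shift.
DSCP_EF = 46
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# IEEE 802.1p: on Linux, SO_PRIORITY sets the packet priority that egress
# queueing and VLAN-tagging layers can map to an 802.1p priority (0-7).
SO_PRIORITY = getattr(socket, "SO_PRIORITY", 12)  # 12 on Linux
s.setsockopt(socket.SOL_SOCKET, SO_PRIORITY, 5)

s.sendto(b"latency-sensitive payload", DEST)
```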

Software Support
ConnectX-2 EN is supported by a full suite of software drivers for Microsoft Windows, Linux distributions, VMware, and Citrix XenServer. ConnectX-2 EN supports stateless offload and is fully interoperable with standard TCP/UDP/IP stacks. It supports various management interfaces and ships with a rich set of configuration and management tools across operating systems.
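
On Linux, for example, these stateless offloads surface through the standard ethtool interface rather than proprietary tooling. A minimal sketch, assuming a hypothetical interface name eth2 bound to the adapter and ethtool installed:

```python
import subprocess

IFACE = "eth2"  # hypothetical interface name for the ConnectX-2 EN port

# Show which offload features the driver currently advertises
# (TCP segmentation offload, rx/tx checksumming, scatter-gather, ...).
subprocess.run(["ethtool", "-k", IFACE], check=True)

# Offloads can be toggled per feature; here, enable TCP segmentation
# offload and rx/tx checksum offload.
subprocess.run(
    ["ethtool", "-K", IFACE, "tso", "on", "rx", "on", "tx", "on"],
    check=True,
)

# Driver name and version, useful when matching against the supported
# OS/driver matrix below.
subprocess.run(["ethtool", "-i", IFACE], check=True)
```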

ETHERNET
– IEEE Std 802.3ae 10 Gigabit Ethernet
– IEEE Std 802.3ak 10GBASE-CX4
– IEEE Std 802.3an 10GBASE-T
– IEEE Draft P802.3ba/D2.0 40GBASE-CR4
– IEEE Std 802.3ad Link Aggregation and Failover
– IEEE Std 802.3x Pause
– IEEE 802.1Q, .1p VLAN tags and priority
– IEEE P802.1au D2.0 Congestion Notification
– IEEE P802.1az D0.2 Enhanced Transmission Selection
– IEEE P802.1bb D1.0 Priority-based Flow Control
– Multicast
– Jumbo frame support (10KB; see the MTU sketch below)
– 128 MAC/VLAN addresses per port
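
Taking advantage of the jumbo-frame support above amounts to raising the interface MTU, as in this minimal sketch using standard iproute2 commands (the interface name is hypothetical, and switches and peers along the path must accept the larger MTU too):

```python
import subprocess

IFACE = "eth2"  # hypothetical interface name

# Raise the MTU for jumbo frames. 9000 is the common jumbo value; the
# adapter itself accepts frames up to 10KB per the feature summary above.
subprocess.run(["ip", "link", "set", "dev", IFACE, "mtu", "9000"], check=True)

# Confirm the new MTU took effect.
subprocess.run(["ip", "link", "show", "dev", IFACE], check=True)
```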

TCP/UDP/IP STATELESS OFFLOAD
– TCP/UDP/IP checksum offload
– TCP Large Send (< 64KB) or Giant Send (64KB-16MB) Offload for segmentation
– Receive Side Scaling (RSS) up to 32 queues (illustrated in the sketch below)
– Line rate packet filtering
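
RSS spreads receive processing across cores by pinning each flow to one queue. The hardware computes a Toeplitz hash over the packet's 5-tuple with a driver-programmed key; the sketch below substitutes an ordinary hash purely to illustrate the flow-to-queue property, so the exact queue numbers are illustrative.

```python
import hashlib

NUM_QUEUES = 8  # the adapter supports up to 32 RSS queues

def rss_queue(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    # Hash the flow's addressing fields (protocol omitted for brevity) and
    # reduce to a queue index. Every packet of a flow hashes identically,
    # so the flow always lands on the same queue and hence the same CPU
    # core, while distinct flows spread across queues.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_QUEUES

print(rss_queue("10.0.0.1", "10.0.0.2", 40000, 80))  # stable for this flow
print(rss_queue("10.0.0.3", "10.0.0.2", 40001, 80))  # likely a different queue
```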

ADDITIONAL CPU OFFLOADS
– RDMA, Send/Receive (LLE)
– Traffic steering across multiple cores
– Intelligent interrupt coalescence
– Compliant with Microsoft RSS and NetDMA

HARDWARE-BASED I/O VIRTUALIZATION
– Single Root IOV (SR-IOV; see the sketch below)
– Address translation and protection
– Multiple queues per virtual machine
– VMware NetQueue support
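
On a Linux host, Virtual Functions are typically instantiated through the kernel's standard SR-IOV sysfs interface, after which each VF can be passed through to a VM. A minimal sketch, assuming a hypothetical interface name and VF count, root privileges, and SR-IOV enabled in the platform and driver:

```python
from pathlib import Path

IFACE = "eth2"   # hypothetical Physical Function interface
NUM_VFS = 4      # hypothetical VF count

# The kernel exposes SR-IOV control per PCI device; writing a count here
# asks the driver to create that many Virtual Functions. (If VFs already
# exist, the count must be reset to 0 before choosing a new value.)
numvfs = Path(f"/sys/class/net/{IFACE}/device/sriov_numvfs")
numvfs.write_text(str(NUM_VFS))

print("VFs now:", numvfs.read_text().strip())
```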

STORAGE SUPPORT
– Fibre Channel over Ethernet
– T11-compliant frame format

CPU
– AMD X86, X86_64
– Intel X86, EM64T, IA-32, IA-64
– SPARC
– PowerPC, MIPS, and Cell

PCI EXPRESS INTERFACE
– PCIe Base 2.0 compliant, 1.1 compatible
– 5.0GT/s link rate x8 (20+20Gb/s or 40+40Gb/s bidirectional bandwidth)
– Fits x8 or x16 slots
– Support for MSI/MSI-X mechanisms

OPERATING SYSTEMS/DISTRIBUTIONS
– Novell SuSE Linux Enterprise Server (SLES), Red Hat Enterprise Linux (RHEL), and other Linux distributions
– Microsoft Windows Server 2003/2008
– VMware ESX 3.5
– Citrix XenServer 4.1

MANAGEMENT
– MIB, MIB-II, MIB-II Extensions, RMON, RMON 2
– Configuration and diagnostic tools

SAFETY
– USA/Canada: cTUVus UL
– EU: IEC60950
– Germany: TUV/GS
– International: CB Scheme

EMC (EMISSIONS)
– USA: FCC, Class A
– Canada: ICES, Class A
– EU: EN55022, Class A
– EU: EN55024, Class A
– EU: EN61000-3-2, Class A
– EU: EN61000-3-3, Class A
– Japan: VCCI, Class A
– Taiwan: BSMI, Class A

ENVIRONMENTAL
– EU: IEC 60068-2-64: Random Vibration
– EU: IEC 60068-2-29: Shocks, Type I / II
– EU: IEC 60068-2-32: Fall Test

OPERATING CONDITIONS
– Operating temperature: 0 to 55°C
– Air flow: 200LFM @ 55°C
– Requires 3.3V, 12V supplies




VALUE PROPOSITIONS
– As customers deploy VMware and implement server virtualization, the need for bandwidth and availability increases significantly. ConnectX-2 EN supports all networking features available in VMware ESX Server 3.5, including NetQueue for boosting I/O performance.
– Mellanox dual-port ConnectX-2 EN adapters support PCIe 2.0 x8 (5 GT/s), enabling high availability and high performance.
– One of the significant challenges in a large data center is managing the number of cables connected to each server. Mellanox ConnectX-2 dramatically reduces the number of cables per server by providing multi-protocol support that leverages a single fabric for multiple applications.

BENEFITS
■ Industry-leading throughput and latency performance
■ I/O consolidation through support for TCP/IP, FC over Ethernet, and RDMA over Ethernet transport protocols on a single adapter
■ Improved productivity and efficiency by delivering VM scaling and superior server utilization
■ Industry-standard SR-IO Virtualization (SR-IOV) support, delivering VM protection and granular I/O service levels to applications
■ High availability and high performance for data center networking
■ Software compatible with standard TCP/UDP/IP and iSCSI stacks
■ High silicon integration and a no-external-memory design provide low power, low cost, and high reliability

TARGET APPLICATIONS
■ Data center virtualization using VMware® ESX Server or Citrix XenServer™
■ Enterprise data center applications
■ I/O consolidation (a single unified wire for networking, storage, and clustering)
■ Storage consolidation using FCoE
■ Future-proofing Web 2.0 data centers and cloud computing
■ Video streaming
■ Accelerating back-up and restore operations


Ports            2 x 10GigE      2 x 10GigE                 1 x 40GigE                  2 x 10GigE
ASIC             ConnectX-2 EN   ConnectX-2 EN              ConnectX-2 EN               ConnectX-2 EN
Connector        CX4             SFP+                       QSFP                        RJ45
Cabling Type*    CX4 Copper      Direct Attached Copper;    CR4 Copper;                 CAT5E up to 55m;
                                 SR and LR Fiber Optic      Optical Assembled Cables    CAT6 up to 55m / CAT6A up to 100m
Host Bus         PCIe 2.0        PCIe 2.0                   PCIe 2.0                    PCIe 2.0
Speed            5.0GT/s         5.0GT/s                    5.0GT/s                     5.0GT/s
Lanes            x8              x8                         x8                          x8
Features         Stateless Offload, FCoE Offload, Priority Flow Control, SR-IOV (all four adapters)
OS Support       RHEL, SLES, Win2003/2008, FreeBSD, VMware ESX3.5 (all four adapters)
RoHS             Yes             Yes                        Yes                         Yes
Ordering Part    MNEH29B-XTR     MNPH29C-XTR                MNQH19-XTR                  MNTH29C-XTR

*Please visit Mellanox's web site for more cable information, best usage practices, and availability.


CONNECTIVITY
– Interoperable with 10 Gigabit Ethernet switches and routers
– MNEH29B: 20m+ of copper CX4 cable, with powered connectors supporting active copper or fiber cables
– MNPH29B:
  - 100m (OM-2) or 300m (OM-3) of multimode fiber cable, duplex LC connector from SFP+ optics module
  - 10km single mode fiber cable, duplex LC connector from SFP+ optics module
  - 7m+ direct attached copper cable through SFP+ connector
– MNTH29B:
  - 100m of Cat6a and Cat7 UTP
  - 55m of Cat5e and Cat6
– MNQH19: 7m+ of copper QSFP cable, with powered connectors supporting active copper or fiber cables




Mellanox continues its leadership in providing high-performance networking technologies by delivering superior utilization and scaling in ConnectX®-2 EN 10 and 40 Gigabit Ethernet Adapters, enabling data centers to do more with less.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400  Fax: 408-970-3403
www.mellanox.com

© Copyright 2009. Mellanox Technologies. All rights reserved.
Mellanox, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are registered trademarks of Mellanox Technologies, Ltd. BridgeX, FabricIT, PhyX, and Virtual Protocol Interconnect are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

6387 Rev 3
