8/13/2019 EXTREME Lippis AA Report Spring 2013
http://slidepdf.com/reader/full/extreme-lippis-aa-report-spring-2013 1/36
Lippis Open Industry Active-Active Cloud Network Fabric Test
for
Two-Tier Ethernet Network Architecture
April, 2013
© Lippis Enterprises, Inc. 2013
A Report on the:
Extreme Networks Open Fabric
Evaluation conducted at Ixia's iSimCity Lab on Ixia test equipment. www.lippisreport.com
Acknowledgements

We thank the following people for their help and support in making possible this first industry evaluation of active-active Ethernet Fabrics:

Victor Alston, CEO of Ixia, for his continuing support of these industry initiatives throughout their execution since 2009.

Leviton, for the use of fiber optic cables equipped with optical SFP+ connectors to link Ixia test equipment to 10GbE switches under test.

Siemon, for the use of copper and fiber optic cables equipped with QSFP+ connectors to link Ixia test equipment to 40GbE switches under test.

Michael Githens, Lab Program Manager at Ixia, for his technical competence, extra effort and dedication to fairness as he executed test week and worked with participating vendors to answer their many questions.

Henry He, Technical Product Manager at Ixia, for his insights into active-active networking protocols and multiple test design interactions with the vendor community.

Jim Smith, VP of Marketing at Ixia, for his support and contribution to creating a successful industry event.

Mike Elias, Photography and Videography at Ixia, for video podcast editing.

All participating vendors and their technical plus marketing teams, for their support of not only the test event and multiple test configuration file iterations but also of each other, as many provided helping hands on many levels to competitors.

Bill Nicholson for his graphic artist skills that make this report look amazing.

Jeannette Tibbetts for her editing that makes this report read as smoothly as a test report can.
License Rights

© 2013 Lippis Enterprises, Inc. All rights reserved. This report is being sold solely to the purchaser set forth below. The report, including the written text, graphics, data, images, illustrations, marks, logos, sound or video clips, photographs and/or other works (singly or collectively, the "Content"), may be used only by such purchaser for informational purposes, and such purchaser may not (and may not authorize a third party to) copy, transmit, reproduce, cite, publicly display, host, post, perform, distribute, alter, transmit or create derivative works of any Content or any portion of or excerpts from the Content in any fashion (either externally or internally to other individuals within a corporate structure) unless specifically authorized in writing by Lippis Enterprises. The purchaser agrees to maintain all copyright, trademark and other notices on the Content. The Report and all of the Content are protected by U.S. and/or international copyright laws and conventions, and belong to Lippis Enterprises, its licensors or third parties. No right, title or interest in any Content is transferred to the purchaser.

Purchaser Name:
Note that Ixia's statistics do not currently support the combination of multicast traffic running over ports within a LAG. Therefore, packet loss for this scenario was not accurately calculated and is not valid.
Table of Contents

Executive Summary
Market Background
Active-Active Fabric Test Methodology
Ethernet Fabrics Evaluated
Vendor Tested:
Extreme Networks Open Fabric
Active-Active Vendor Cross-Vendor Analysis
Cross-Vendor Analysis
Fabric Test
Server-ToR Reliability Test
Cloud Simulation Test
Ethernet Fabric Industry Recommendations
Terms of Use
About Nick Lippis
Executive Summary

To assist IT business leaders with the design and procurement of their private or public data center cloud fabrics, the Lippis Report and Ixia have conducted an open industry evaluation of Active-Active Ethernet Fabrics consisting of 10GbE (Gigabit Ethernet) and 40GbE data center switches. In this report, IT architects are provided the first comparative Ethernet Fabric performance and reliability information to assist them in purchase decisions and product differentiation.

The Lippis test report, based on independent validation at Ixia's iSimCity laboratory, communicates credibility, competence, openness and trust to potential buyers of Ethernet Fabrics based upon active-active protocols, such as TRILL, or Transparent Interconnection of Lots of Links, and SPBM, or Shortest Path Bridging MAC mode, and configured with 10GbE and 40GbE data center switching equipment. Most suppliers utilized MLAG, or Multi-System Link Aggregation, or some version of it, to create a two-tier fabric without an IS-IS (Intermediate System to Intermediate System) protocol between switches. The Lippis/Ixia tests are open to all suppliers and are fair, thanks to well-vetted custom Ethernet Fabric test scripts that are repeatable. The Lippis/Ixia Active-Active Ethernet Fabric Test was free for vendors to participate in and open to all industry suppliers of 10GbE, 40GbE and 100GbE switching equipment, both modular and fixed configurations.

This report communicates test results from testing that took place during the late autumn and early winter of 2012/2013 in Ixia's modern test lab, iSimCity, located in Santa Clara, CA. Ixia supplied all test equipment needed to conduct the tests, while Leviton provided optical SFP+ connectors and optical cabling. Siemon provided copper and optical cables equipped with QSFP+ connectors for 40GbE connections. Each Ethernet Fabric supplier was allocated lab time to run the test with the assistance of an Ixia engineer. Each switch vendor configured its equipment while Ixia engineers ran the test and logged the resulting data.

The tests conducted were an industry-first set of Ethernet Fabric test scripts that were vetted over a six-month period with participating vendors, Ixia and Lippis Enterprises. We call this test suite the Lippis Fabric Benchmark. It consisted of single- and dual-homed fabric configurations with three traffic types: multicast, many-to-many or unicast mesh, and unicast from multicast returns. The Lippis Fabric Benchmark test suite measured Ethernet Fabric latency in both packet size iterations and Lippis Cloud Simulation modes. Reliability, or packet loss and packet loss duration, was also measured at various points in the fabric. The new Lippis Cloud Simulation measured latency of the fabric as traffic load increased from 50% to 100%, consisting of north-to-south plus east-to-west traffic flows.
Ethernet Fabrics evaluated were:

Arista Software-Defined Cloud Network (SDCN), consisting of its 7050S-64 10/40G Data Center Switch ToR with 7508 Core Switches

Avaya Virtual Enterprise Network Architecture, or VENA, Fabric Connect, consisting of its VSP 7000

Brocade Virtual Cluster Switching, or VCS, consisting of its VDX 6720 ToR and VDX 8770 Core Switch

Extreme Networks Open Fabric, consisting of its Summit® X670V ToR and BlackDiamond X8 Core Switch
The following lists our top ten findings:

1. New Fabric Latency Metrics: The industry is familiar with switch latency metrics measured via cut-through or store-and-forward. The industry is not familiar with fabric latency. It's anticipated that the fabric latency metrics reported here will take some time for the industry to digest. The more fabrics that are tested, the greater the utility of this metric.

2. Consistent Fabric Performance: We found, as expected, that fabric latency for each vendor was consistent, meaning that as packet sizes and loads increased, so did required processing and thus latency. Also, we found that non-blocking and fully meshed configurations offered zero fabric packet loss, providing consistency of operations.

3. No Dual-Homing Performance Penalty: In fully meshed, non-blocking fabric designs, we found no material fabric latency difference as servers were dual homed to different Top-of-Racks (ToRs). Fabric latency measurements in dual-homed configurations were the same as in single-homed configurations (as expected), even though increased reliability, or availability, was introduced to the design via dual-homing server ports to two ToRs plus adding MLAGs between ToRs. This was true in both Arista's and Extreme's test results.

4. CLI-Less Provisioning: Brocade's VCS, which is TRILL based, offered several unique attributes, such as adding a switch to its Ethernet Fabric, plus bandwidth between switches, without CLI (Command Line Interface) provisioning. Fabric configuration was surprisingly simple, thanks to its ISL, or Inter-Switch Link, Trunking.

5. Different Fabric Approaches: In this Lippis/Ixia Active-Active Ethernet Fabric Test, different approaches to fabrics were encountered. Avaya tested its Distributed ToR, or dToR, as part of its Fabric Connect offering, which stacks ToRs horizontally and can offer advantages for smaller data centers with dominant east-west flows. Extreme Networks' Open Fabric utilizes its high-performance and port-dense BlackDiamond X8 switch, which enables a fabric to be built with just a few devices.

6. ECMP n-way Scales: MLAG at Layer 2 and Equal Cost Multi-Path, or ECMP, at Layer 3 are dominant approaches to increasing bandwidth between switches. Arista demonstrated that an ECMP-based fabric scales to 32 with bandwidth consistency among links; that is, bandwidth is evenly distributed between the 32 10GbE links.

7. Balanced Hashing: We found that vendors utilized slightly different hashing algorithms, yet we found no difference in hash results. That is, we found evenly distributed traffic load between links within a LAG (Link Aggregation) during different traffic load scenarios.
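The even distribution described in this finding follows from flow-tuple hashing. The following is an illustrative Python sketch, not any vendor's actual algorithm: a hash of the packet's flow tuple selects one member link, so a given flow always stays on one link while many distinct flows spread roughly evenly across the LAG.

```python
# Sketch of flow-based LAG hashing: hash the flow 4-tuple, take the result
# modulo the LAG size, and use that as the member-link index. CRC32 stands
# in for whatever hash a real switch ASIC uses.
import zlib
from collections import Counter

def select_lag_member(src_ip, dst_ip, src_port, dst_port, lag_size):
    """Deterministically map a flow 4-tuple onto one of lag_size links."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % lag_size

# Distribute 10,000 synthetic flows across a 4-link LAG and count how many
# land on each member link; a balanced hash puts roughly 2,500 on each.
counts = Counter(
    select_lag_member("10.0.0.1", "10.0.1.1", sport, dport, 4)
    for sport in range(49152, 49252)   # 100 source ports
    for dport in range(80, 180)        # 100 destination ports
)
```

Because the mapping is deterministic, packets of one flow never reorder across links; balance emerges only statistically across many flows, which matches what the test observed under varied traffic loads.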
8. vCenter Integration: All fabric vendors offer vCenter integration so that virtualization and network operations teams can view each other's administrative domains to address vMotion within L2 confines.

9. MLAG the Path to Two-Tier Data Center Networks: MLAG provides a path to both two-tier data center networking plus TRILL and/or SPB (Shortest Path Bridging) in the future. MLAG takes traditional link aggregation and extends it by allowing one device to essentially dual home into two different devices, thus adding limited multipath capability to traditional LAG.

10. More TRILL and SPB: It's disappointing that there are not more vendors prepared to publicly test TRILL and SPB implementations, as Avaya and Brocade demonstrated their ease of deployment and multipathing value in this Lippis/Ixia Active-Active Ethernet Fabric Test.
Market Background

Data center network design has been undergoing rapid change in the few short years since VMware launched VM (Virtual Machine) Virtual Center in 2003. Server virtualization enabled not only compute efficiency but also a new IT delivery model, through private and public cloud computing, and allowed new business models to emerge. Fundamental to modern data center networking is that traffic patterns have shifted from once-dominant north-south, or client-to-server, traffic to a combination of north-south plus east-west, or server-to-server and server-to-storage, traffic. In many public and private cloud facilities, east-west traffic dominates flows. There are many drivers contributing to this change in addition to server virtualization, such as increased compute density scale, hyperlinked servers, mobile computing, cloud economics, etc. This simple fact of traffic pattern change has given rise to the need for fewer network switches or tiers, lower latency, higher performance, higher reliability and lower power consumption in the design of data center networks.

In addition to traffic shifts and changes, service providers and enterprise IT organizations have been under increasing pressure to reduce network operational cost and enable self-service so that customers and business units may provision IT as needed. At the February 13, 2013, Open Networking User Group in Boston at Fidelity Investments, hosted by the Lippis Report, Fidelity showed the beginning of exponential growth in VM creation/deletion by business unit managers since August 2012. Reduced OpEx and self-service are driving a fundamental need for networking to be included in application, VM, storage, compute and workload auto provisioning.

To address these industry realities, various non-profit foundations have formed, including the Open Compute Project Foundation, The OpenStack Foundation, The Open Networking Foundation, etc. These foundations seek to open up IT markets to lower acquisition cost, or CapEx, and inject innovation, especially around auto provisioning to lower OpEx. While the foundations are developing open source software and standards, the vendor community has been innovating through traditional mechanisms, including product development and standards organizations, such as the IETF, IEEE and others.

Data center networks have been ramping up to build private and public cloud infrastructure with 10/40GbE, and soon 100GbE, data center switches, with 400GbE on the horizon. At the center of next-generation data center/cloud networking design are active-active protocols that increase application performance, thanks to lower latency and increased reliability via a dual-homed, fully meshed, non-blocking network fabric plus CLI-less bandwidth provisioning.

To deliver a two-tier network, spanning tree protocol (STP) is eliminated and replaced with active-active multipath links between servers and ToR switches, between ToR and Core switches, and between Core switches. The industry is offering multiple active-active protocol options, such as Brocade's VCS Fabric, Cisco's FabricPath, Juniper's QFabric, TRILL, SPBM and LAG Protocol. MLAG and ECMP are design approaches to limited active-active multipathing; they are widely used and central to many vendors' STP-alternative strategies, but they lack CLI-less provisioning.

Ethernet fabrics are promoted as the optimal platform to address a range of data center design requirements, including converged storage/networking, network virtualization, Open Networking, including Software-Defined Networking or SDN, and simply keeping up with ever-increasing application and traffic load.

In this industry-first Lippis/Ixia Active-Active Ethernet Fabric Test, we provide IT architects with comparative active-active protocol performance information to assist them in purchase decisions and product differentiation. New data center Ethernet fabric design requires automated configuration to support VM moves across L3 boundaries, low latency, high performance and resiliency under north-south plus east-west flows, and a minimum number of network tiers.

The goal of the evaluation is to provide the industry with comparative performance and reliability test data across all active-active protocols. Both modular switching (Core plus End-of-Row, or EoR) products and fixed ToR configuration switches were welcome.
Lippis Active-Active Fabric Test Methodology

There are two active-active test configurations, single- and dual-homed, used in the Lippis/Ixia Active-Active Ethernet Fabric Test. These configurations consisted of two or four ToRs and two Core switches to test Ethernet fabric latency, throughput and reliability. For those vendors that offer TRILL, SPBM or FabricPath, Core switches were not needed, as Ixia's IxNetwork simulated these active-active protocols such that latency, throughput and reliability could be measured. Most companies ran both simulated and non-simulated test runs for IS-IS-based active-active and MLAG, respectively. The single- and dual-homed configurations and traffic profiles are detailed below. These configurations were used for those vendors wishing to test MLAG and/or TRILL, SPBM and FabricPath without the use of Ixia's simulation.
Single-Homed Configuration: In the single-homed configuration, two ToR and two Core switches made up an Ethernet fabric. Thirty-two 10GbE links connected Ixia test equipment to the two ToRs and were divided into eight four-port LAGs. Each ToR connected to the two Core switches with 16 10GbE or four 40GbE links. Therefore, the load placed on this Ethernet fabric was 32 10GbE ports, or 320Gbs. A mix of unicast, multicast and mesh, or any-to-any, flows representing the Brownian motion typical in modern data center networks was placed upon this fabric while latency and throughput were measured from ingress to egress; that is, from ToR to Core to ToR, representing fabric latency and throughput.
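The arithmetic behind the 320Gbs figure and the non-blocking design can be checked in a few lines. This is a sketch of the numbers stated above, assuming the 4x40GbE uplink option per ToR:

```python
# Sanity check of the single-homed topology: each ToR faces 16 x 10GbE of
# server-side load and 4 x 40GbE (= 160Gbs) toward the Cores, so server
# capacity equals uplink capacity and the fabric is non-blocking.
def oversubscription(server_gbps, uplink_gbps):
    """Ratio of server-facing to core-facing capacity; 1.0 = non-blocking."""
    return server_gbps / uplink_gbps

per_tor_server_gbps = 16 * 10   # 16 server-facing 10GbE ports per ToR
per_tor_uplink_gbps = 4 * 40    # 4 x 40GbE uplinks per ToR

ratio = oversubscription(per_tor_server_gbps, per_tor_uplink_gbps)
total_fabric_load_gbps = 2 * per_tor_server_gbps   # 32 x 10GbE = 320Gbs
```

A ratio of 1.0 is what allows the zero-packet-loss results reported for non-blocking, fully meshed configurations; any ratio above 1.0 would make loss possible under full load.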
Dual-Homed Configuration: In the dual-homed configuration, four ToRs and two Core switches created the Ethernet fabric. Each Ixia port, acting as a virtual server, was dual homed to separate ToR switches, which is a best practice in high-availability data centers and cloud computing facilities. The load placed on this Ethernet fabric was the same 32 10GbE, or 320Gbs, as in the single-homed configuration, with a mix of unicast, multicast and mesh, or any-to-any, flows placed upon the fabric. Each ToR was configured with eight 10GbE server ports. Eight 10GbE or two 40GbE lagged ports connected the ToRs. Finally, the ToRs were connected to Core switches via eight-port 10GbE or dual-port 40GbE MLAGs. Latency and throughput were measured from ingress to egress; that is, from ToR to Core to ToR, representing fabric rather than device latency and throughput.
[Figure: Single-Homed Topology. Two ToRs connect to two Cores over 4x40G links; Ixia 10GbE test ports are assigned to LAGs L1-L8.]

[Figure: Dual-Homed Topology. Four ToRs connect to two Cores over 4x40G links; each Ixia port is dual homed to two ToRs.]
Test Traffic Profiles: The logical network was an iMix of unicast traffic in a many-to-many, or mesh, configuration; multicast traffic; and unicast return for multicast peers, where LAGs were segmented into traffic types. LAGs 1, 2, 3 and 4 were used for unicast traffic. LAGs 5 and 6 were multicast sources distributing to multicast groups in LAGs 7 and 8. LAGs 7 and 8 were unicast returns for multicast peers within LAGs 5 and 6.
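The LAG segmentation above can be captured as a simple lookup table. This is a sketch for reasoning about the profile, not test-tool configuration:

```python
# LAG-to-traffic-type segmentation as described in the traffic profile:
# LAGs 1-4 carry many-to-many unicast, LAGs 5-6 source multicast toward
# groups in LAGs 7-8, and LAGs 7-8 also send unicast returns back to 5-6.
LAG_ROLES = {
    1: "unicast many-to-many",
    2: "unicast many-to-many",
    3: "unicast many-to-many",
    4: "unicast many-to-many",
    5: "multicast source",
    6: "multicast source",
    7: "multicast group / unicast return",
    8: "multicast group / unicast return",
}

def lags_with_role(keyword):
    """Return the LAG numbers whose role mentions the given keyword."""
    return sorted(lag for lag, role in LAG_ROLES.items() if keyword in role)
```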
Lippis Cloud Performance Test

In addition to testing the fabric with unicast, multicast and many-to-many traffic flows at varying packet sizes, the Lippis Cloud Performance Test iMix was used to generate traffic and measure system latency and throughput from ingress to egress. To understand the performance of the Ethernet fabric under load, six iterations of the Lippis Cloud Performance Test at traffic loads of 50%, 60%, 70%, 80%, 90% and 100% were performed, measuring latency and throughput on the ToR switch. The ToR was connected to Ixia test gear via 28 10GbE links.

The Lippis Cloud Performance Test iMix consisted of east-west database, iSCSI (Internet Small Computer System Interface) and Microsoft Exchange traffic, plus north-south HTTP (Hypertext Transfer Protocol) and YouTube traffic.
Each traffic type is explained below:
East-West database traffic was set up as a request/response. A single 64-byte request was sent out, and three different-sized responses were returned (64, 1518 and 9216 bytes). A total of eight ports were used for east-west traffic. Four ports were set as east and four ports were set as west. These eight ports were not used in any other part of the test. The transmit rate was a total of 70% of line rate in each direction. The response traffic was further broken down with weights of 1/2/1 for 64/1518/9216-byte frames for the three response sizes. That is, the weight specifies what proportion of the rate set per direction will be applied to the corresponding Tx ports from the traffic profile.
East-West iSCSI traffic was set up as a request/response with four east and four west ports used in each direction. Each direction was sending at 70% of line rate. The request was 64 bytes and the response was 9216 bytes.

East-West Microsoft Exchange traffic was set up on two east and two west ports. The request and response were both 1518 bytes and set at 70% of line rate.
The following summarizes the east-west flows:

Database: 4 East (requestors) to 4 West (responders)

iSCSI: 4 East (requestors) to 4 West (responders)

MS Exchange: 2 East (requestors) to 2 West (responders)

Database/iSCSI/MS Exchange Weights: 1/2/1, i.e., 25%/50%/25% of the rate set per direction and applicable on selected ports. East rate: 70% = West rate: 70%.
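The 1/2/1 weighting works out as follows. This is a sketch of the stated arithmetic, not Ixia configuration:

```python
# How 1/2/1 weights translate into per-class rates: normalize the weights,
# then apply each share to the rate set per direction.
def weighted_rates(weights, direction_rate_pct):
    total = sum(weights)
    return [direction_rate_pct * w / total for w in weights]

# Database/iSCSI/MS Exchange share of each direction: 1/2/1 -> 25%/50%/25%.
class_shares_pct = weighted_rates([1, 2, 1], 100.0)

# Database response sizes 64/1518/9216 bytes, weighted 1/2/1 of the 70%
# per-direction rate -> 17.5% / 35% / 17.5% of line rate respectively.
db_response_rates_pct = weighted_rates([1, 2, 1], 70.0)
```

The same normalization applies wherever the profile states a weight ratio: the weights only fix proportions, and the per-direction rate fixes the absolute load.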
North-South HTTP traffic was set up on four north and four south ports. The request was 83 bytes and the response was 305 bytes. The rate on these ports was 46.667% of line rate in each direction.

North-South YouTube traffic used the same four north and south ports as the HTTP traffic. The request was 500 bytes at a line rate of 23.333%. There were three responses totaling 23.333% in a 5/2/1 percentage breakdown of 1518, 512 and 64 bytes.
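The 5/2/1 response breakdown translates into per-size rates like this (a sketch of the stated arithmetic):

```python
# Split the YouTube responses' combined 23.333% line rate across the three
# response sizes in the 5/2/1 proportion given above.
def split_rate(total_pct, weights):
    total_weight = sum(weights)
    return [total_pct * w / total_weight for w in weights]

response_sizes = [1518, 512, 64]   # bytes
rates = dict(zip(response_sizes, split_rate(23.333, [5, 2, 1])))
# Roughly 14.58% of line rate at 1518 bytes, 5.83% at 512 and 2.92% at 64.
```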
[Figure: Traffic Profiles. Three diagrams show LAGs 1-8 across Switch 1 and Switch 2 for unicast (many-to-many), multicast, and unicast (for multicast peers) flows.]
Reliability

Fabric reliability was tested in three critical areas: 1) between Ixia test gear and the ToR, 2) between ToR and Core and 3) with the loss of an entire Core. The system under test (SUT) was configured in the "single-homed" test design. Only the Ixia-to-ToR reliability test was required; all other reliability tests were optional.

Server to ToR Reliability Test: A stream of unicast many-to-many flows at 128-byte packet size was sent to the network fabric. While the fabric was processing this load, a 10GbE link was disconnected in LAGs 3, 4, 7 and 8, with packet loss and packet loss duration being measured and reported. Note that packet loss duration can vary, as link failure detection is based on a polled cycle. Repeated tests may show results in the nanosecond range or slightly higher numbers in the millisecond range. The poll interval for link detection is not configurable.
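The relationship between a measured loss count and the reported packet loss duration can be sketched as follows. This is an illustrative calculation, assuming the tester derives duration from frames lost at a known transmit rate, with 128-byte frames at 10GbE line rate as in this test:

```python
# Packet loss duration is typically frames lost divided by the transmit
# rate in frames per second. On the wire, each Ethernet frame also carries
# an 8-byte preamble and a 12-byte inter-frame gap beyond its frame size.
def frames_per_second(line_rate_bps, frame_bytes):
    wire_bytes = frame_bytes + 8 + 12    # frame + preamble + inter-frame gap
    return line_rate_bps / (wire_bytes * 8)

def loss_duration_ms(lost_frames, tx_fps):
    """Outage duration implied by a loss count at a given transmit rate."""
    return lost_frames / tx_fps * 1000.0

fps = frames_per_second(10e9, 128)   # ~8.45 million frames/s at 10GbE
```

At that frame rate, a loss of a few thousand frames corresponds to an outage on the order of a millisecond, which is why the polled link-failure detection described above shows up as millisecond-range (or, when the poll catches the failure immediately, nanosecond-range) loss durations.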
ToR to Core Reliability Test: There were two parts to this Lippis/Ixia Reliability Test. First, a link connecting ToR switches to Core switches was pulled while unicast many-to-many traffic flows were being processed, with the resulting packet loss plus packet loss duration recorded by Ixia test equipment. Then the link was restored, and the resulting packet loss plus packet loss duration was recorded by Ixia test equipment. A stream of unicast many-to-many flows at 128-byte packet size was sent to the Ethernet fabric. While the fabric was processing this load, a link was disconnected, and packet loss plus packet loss duration was measured. When the link was restored, the fabric reconfigured itself while packet loss and packet loss duration were measured. Again, note that packet loss duration can vary, as link failure detection is based on a polled cycle. Repeated tests may show results in the nanosecond range or slightly higher numbers in the millisecond range. The poll interval for link detection is not configurable.
Core Switch Shut Down Reliability Test: As above, packet loss and packet loss duration were measured when the fabric was forced to reconfigure due to a link being shut down and restored while 128-byte packets of many-to-many unicast traffic flowed through the fabric. This Reliability Test measured the result of an entire Core switch being shut down and restored. Again, note that packet loss duration can vary, as link failure detection is based on a polled cycle. Repeated tests may show results in the nanosecond range or slightly higher numbers in the millisecond range. The poll interval for link detection is not configurable.
Active-Active Simulation Mode Test

For those wishing to test their fabric with TRILL, SPBM and FabricPath, the Lippis/Ixia Active-Active Test offered a simulated core option. The simulated core option eliminated the need for Core switches in the configuration, requiring only ToRs for the single- and dual-homed configurations. IxNetwork IS-IS simulated a fully meshed, non-blocking core. Two and then four ToR switches connected to the simulated core. Note that an equal number of server- and core-facing links were required to achieve non-blocking operation. ToR switches were connected in an "n-way" active-active configuration between ToR and Ixia test equipment, with Ixia gear configured for the specific fabric protocol. N is a maximum of 32. The DUT was connected to Ixia test equipment with
enough ports to drive traffic equal to the "n-way" active-active links. The active-active links were connected to Ixia test equipment in core simulation mode, where throughput, packet loss and latency for unicast from multicast returns, multicast and unicast many-to-many traffic were measured. The same LAG and traffic profiles as detailed above were applied to the simulation mode configuration while latency and throughput were measured for TRILL, SPBM and/or FabricPath.
Optional Fabric Tests

The following were optional tests, which were designed to demonstrate areas where fabric engineering investment and differentiation have been made. The acquisition of the 2013 Active-Active Fabric Test report distribution license was required to participate in these optional tests; please contact [email protected] to request a copy.

Most of these optional tests, if not all, were short 10-minute video podcast demonstrations, as they were focused upon operational cost reduction via either ease of use or automation of network configuration.
VM Fabric Join/Remove and Live Migration Demonstration: One of the most pressing issues for IT operations is for an Ethernet fabric to support VMs. By support, we mean the level of difficulty for a VM to join or be removed from the fabric. In addition to live VM joins and removes, the ability of the fabric to support the live migration of VMs across L3 boundaries without the need for network re-configuration is a fundamental requirement. Therefore, the objective for this test was to observe and measure how the network fabric responded during VM join/remove plus live migration. In this optional test, the vendor demonstrated the process by which a VM joins and is removed from the fabric. In addition, a VM was migrated live while we observed needed, if any, network fabric configuration changes. This demonstration was captured on video and edited into a (maximum) 10-minute video podcast. A link to the video podcast is included in the final test report. Vendors may use SPB, TRILL, FabricPath, VXLAN (Virtual Extensible Local Area Network) over ECMP, etc. Two servers, each with 30 VMs, were available for this test. Vendors were responsible for NIC (Network Interface Controller) cards plus other equipment necessary to perform this test.
East-West Traffic Flow Performance Test: Networking ToR switches so that east-west traffic flow does not need to traverse a Core switch is being proposed by various vendors as part of their network fabric. As such, an optional test ran RFC 2544 across three interconnected ToR switches with bidirectional L2/L3 traffic ingressing at ToR switch 1 and egressing at ToR 3. Throughput, latency and packet loss were measured, with horizontal ToR latency compared to traditional ToR-Core-ToR.
Video feature: Click to view a discussion on the
Lippis Report Test Methodology
[Figure: Virtual entities within Ixia's IxNetwork simulate a fully meshed, non-blocking core; DUT edge switches connect hosts/servers to the simulated core over active-active links.]
Extreme Networks Open Fabric
Extreme Networks X670V and BD-X8 Active-Active Test Configuration
Hardware Configuration/Ports Test Scripts Software Version
Devices under test
Summit X670V-48x with VIM4-40G4X — http://extremenetworks.com/products/summit-x670.aspx — EXOS 15.3
BD-X8 with 1 BDXA-40G24X, 2 BDX-MM1, 3 BDXA-FM20T — http://extremenetworks.com/products/blackdiamond-x.aspx — EXOS 15.3
Single Homed 2x X670V ToR 32 10GbE/16 Each ToR + Each ToR 4-40GbE to Core EXOS 15.3
2x BD X8 Cores 8-40GbE connect ToRs EXOS 15.3
4-40GbE MLAG between Cores EXOS 15.3
Dual Homed 4x X670V ToR 32 10GbE/8 Each ToR + Each ToR 2-40GbE to Core EXOS 15.3
2x BD X8 Core 2-40GbE MLAG between ToRs in each compute zone EXOS 15.3
4-40GbE MLAG between Cores EXOS 15.3
Test Equipment Ixia XG12 High Performance Chassis Single Home Active-Active IxOS 6.30 EA SP2
Dual Home Active-Active IxNetwork 6.10 EA
Cloud Performance Test IxNetwork 7.0 EA &
IxOS 6.40 EA
Ixia Line Cards Xcellon Flex AP10G16S (16 port 10G module)
Xcellon Flex Combo 10/40GE AP (16 port 10G / 4 port 40G)
Xcellon Flex 4x40GEQSFP+ (4 port 40G)
www.ixiacom.com
Cabling 10GbE Optical SFP+ connectors. Laser-optimized duplex LC-LC 50-micron MM fiber, 850nm SFP+ transceivers
www.leviton.com
Siemon QSFP+ Passive Copper Cable 40 GbE 3 meter copper QSFP-FA-010
http://www.siemon.com/sis/store/cca_qsfp+passive-copper-assemblies.asp
Siemon Moray Low Power Active Optical Cable Assemblies Single Mode, QSFP+ 40GbE optical cable QSFP30-03 7 meters
http://www.siemon.com/sis/store/cca_qsfp+passive-copper-assemblies.asp
For the Lippis/Ixia Active-Active Fabric Test, Extreme
Networks built a fault tolerant, two-tier Ethernet data
center Fabric with its Open Fabric solution, consisting of
10/40GbE switches, including the Summit® X670V-48X ToR
switches and high-performance 20 terabit capacity Black-
Diamond® X8 10/40GbE Core switches.

The combination of these two products in the design of a
data center Fabric is unique on multiple levels. First, the
BlackDiamond® X8 Core switch is the fastest, highest capac-
ity, lowest latency Core switch Lippis/Ixia has tested to date,
with the highest density, wire-speed support of 10GbE or
40GbE ports and just 5W of power per 10GbE port.

As a point of differentiation, the high port density of the
BlackDiamond® X8 Core switch enables a data center Fabric
to be built with just a few devices, and as its latency is on
the order of a microsecond, this design makes it the highest
performer in the field.

Video feature: Click to view Lippis/Mendis
Extreme Open Fabric Video Podcast
The combination of the Summit® X670V ToR and BlackDia-
mond® X8 Core forms the basis of Extreme's Open Fabric
architecture utilized in virtualized multi-tenant data cen-
ters, Internet exchanges, data centers with SDN, converged
I/O data centers and cloud network designs.

The basic configuration tested consisted of Summit® X670V-
48X ToR switches connecting Ixia test gear as server-facing
devices and BlackDiamond® X8 switches in the Core. The
Summit® X670V-48X ToR connected to Ixia test gear at
16 10GbE links via four 10GbE LAGs. Each Summit®
X670V-48X ToR switch connected to BlackDiamond® X8s
via 40GbE in a full mesh configuration via its VIM4-40G4X
module. The BlackDiamond® X8s were connected via four
40GbE trunks leveraging MLAG.

The logical network was a mix of unicast traffic in a many-
to-many or mesh configuration, multicast traffic and unicast
return for multicast peers, where the LAGs were segmented
into traffic types. For example, LAGs 1, 2, 3 and 4 were used
for unicast traffic. LAGs 5 and 6 were multicast sources
distributing to multicast groups in LAGs 7 and 8. LAGs 7
and 8 were unicast returns for multicast peers within LAGs
5 and 6.
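The LAG segmentation described above can be restated as a small lookup structure (an illustrative data model, not vendor configuration):

```python
# Compact restatement of the LAG traffic segmentation used in the test.
# LAGs 1-4 carry meshed unicast; LAGs 5-6 source multicast to groups on
# LAGs 7-8; LAGs 7-8 also send unicast returns back to LAGs 5-6.

lag_roles = {
    "unicast_many_to_many": [1, 2, 3, 4],
    "multicast_sources":    [5, 6],
    "multicast_groups":     [7, 8],
    "unicast_return":       [7, 8],
}

def role_of(lag):
    """Return every traffic role a given LAG participates in."""
    return [role for role, lags in lag_roles.items() if lag in lags]

print(role_of(7))  # prints: ['multicast_groups', 'unicast_return']
```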
[Figure: Traffic Profiles — LAGs 1-8 on Switch 1 and Switch 2 for unicast (many to many), multicast, and unicast return (for multicast peers).]
We tested this data center Fabric in both single-homed and
dual-homed configurations and measured its overall system
latency and throughput. We also tested for reliability, which
is paramount, since Extreme Networks' data center Fabric
architecture gives the option to place much of the process-
ing on a few high-density BlackDiamond® X8 Core switches
instead of dozens of switches. Another configuration uti-
lized in high-density server data centers or cloud comput-
ing facilities, taking advantage of the BlackDiamond® X8's
port density, is to connect servers directly into a network of
BlackDiamond® X8 Core switches.

Extreme Networks demonstrated its Fabric with MLAG as
the active-active protocol and thus eliminated the slower,
more rudimentary STP and created a highly efficient
two-tier network design. In addition to current support of
MLAG for L2 and L3, as well as ECMP for L3-based active-
active topologies, Extreme Networks has also committed to
support TRILL and other active-active protocols within its
ExtremeXOS OS in future releases.
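For orientation, an EXOS MLAG configuration follows this general shape — a minimal sketch only, assuming two Core peers joined by an inter-switch connection (ISC) VLAN; the port numbers, peer name and addresses below are hypothetical, not the tested configuration:

```
# Hypothetical ISC between the two Core switches (run on Core 1; mirror on Core 2)
create vlan isc
configure vlan isc add ports 1:1 tagged
configure vlan isc ipaddress 10.0.0.1/30
# Define the MLAG peer and bind a downstream-facing port to MLAG id 10
create mlag peer core2
configure mlag peer core2 ipaddress 10.0.0.2
enable mlag port 2:1 peer core2 id 10
```

The downstream switch or server simply runs a standard LAG across one port to each Core peer; MLAG makes the pair appear as a single LACP partner.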
Single Homed

For the Single-Homed Server Lippis/Ixia Test, Extreme con-
figured two Summit® X670Vs and two BlackDiamond® X8s
for its Ethernet Fabric.

Thirty-two 10GbE links connected Ixia test equipment
to two Summit® X670Vs, which were divided into eight
four-port LAGs. Each Summit® X670V connected to two
BlackDiamond® X8s with four 40GbE links. Therefore, the
load placed on this Ethernet Fabric was 32 10GbE ports, or
320Gbs, with a mix of unicast, multicast and mesh or any-
to-any flows to represent the Brownian motion typical in
modern data center networks.

Latency and throughput were measured from ingress to
egress, from Summit® X670Vs to BlackDiamond® X8s to Sum-
mit® X670Vs, representing Fabric latency and throughput.
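The load figures above can be sanity-checked with simple arithmetic (illustrative only):

```python
# Single-homed load check: 32 x 10GbE of offered traffic, with each ToR
# carrying 16 x 10GbE server-facing ports against 4 x 40GbE uplinks.

def gbps(ports, speed_gbps):
    """Aggregate capacity of a group of ports in Gbps."""
    return ports * speed_gbps

offered = gbps(32, 10)        # Ixia-facing load across both ToRs
per_tor_down = gbps(16, 10)   # server-facing capacity per ToR
per_tor_up = gbps(4, 40)      # uplink capacity per ToR

oversubscription = per_tor_down / per_tor_up

print(offered, oversubscription)  # prints: 320 1.0
```

A ratio of 1.0 means the ToR uplinks match the server-facing capacity, so the fabric is non-blocking for this load.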
[Figure: Single-Homed Topology — Ixia 10GbE ports in LAGs L1-L8 into two Summit X670 ToRs; 40G uplinks and 4x40G MLAG to two BD X8 Cores.]
Unicast Results

Extreme Networks' Open Fabric system latency for unicast
traffic varied from a low of 2.2 microseconds to a high of 28
microseconds. As expected, latency increased with packet
size, where 9216 size packets experienced the largest system
delay. Zero packet loss was observed across all packet sizes of
unicast traffic.
Extreme 2-X670V ToR & 2-BD X8 Core, Single Homed Test
Unicast Return From Multicast Flows (min, max, avg latency, ns)

Packet size (bytes)    128    256    512   1522    9216
Max Latency           2900   3320   4220   7780   28240
Avg Latency           2445   2591   2923   4366   14117
Min Latency           2280   2360   2580   3460    9620
Multicast Results

The Extreme Open Fabric latency measured for multicast
traffic varied from a low of 2.4 microseconds to a high of 38
microseconds. As expected, latency increased with packet
size, where 9216 size packets experienced the largest system
delay. Note that currently Ixia's statistics do not support the
combination of multicast traffic running over ports within a
LAG. Therefore, packet loss for this scenario was not accu-
rately calculated and is, therefore, not valid.
Extreme 2-X670V ToR & 2-BD X8 Core, Single Homed Test
Multicast Traffic (min, max, avg latency, ns)

Packet size (bytes)    128    256    512   1522    9216
Max Latency           3360   4140   5500  11340   38180
Avg Latency           2691   3053   3517   5700   22382
Min Latency           2400   2480   2700   3580    9740
Many-to-Many Results

The Extreme Open Fabric system latency for many-to-many
unicast traffic in a mesh configuration varied from a low of
830 nanoseconds to a high of 45 microseconds. As expected,
latency increased with packet size, where 9216 size packets
experienced the largest system delay. Zero packet loss was
observed across all packet sizes of unicast traffic.
Extreme 2-X670V ToR & 2-BD X8 Core, Single Homed Test
Many to Many Full Mesh Flows (min, max, avg latency, ns)

Packet size (bytes)    128    256    512   1522    9216
Max Latency           3220   3820   5140  10460   45420
Avg Latency           2065   2297   2887   4789   19115
Min Latency            820    860   1000   1240    1240
The table below illustrates the average system latency across
packet sizes 128 to 9216 for unicast, multicast and many-to-
many traffic flows through the Extreme Networks single-
homed configuration. All traffic flows performed in a tight
range between 2 and 5.8 microseconds except for the large
packet size of 9216, where latency increased by an approxi-
mate factor of five. This is mostly due to the fact that the
9216 packet size was six times larger than the previous 1522
size and required substantially more time to pass through
the Fabric.
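The size effect above is dominated by serialization delay, which grows linearly with frame size; a quick check at 10GbE:

```python
# Serialization delay: the time to clock one frame onto the wire.
# At 10 Gbps, bytes * 8 bits / 10 Gbps conveniently yields nanoseconds.

def serialization_ns(frame_bytes, link_gbps=10):
    """Time (ns) to serialize one frame onto a link of the given speed."""
    return frame_bytes * 8 / link_gbps

d1522 = serialization_ns(1522)   # ~1218 ns
d9216 = serialization_ns(9216)   # ~7373 ns

# The 9216-byte frame occupies the wire ~6x longer, and every buffered
# hop through the fabric pays a cost proportional to that.
print(round(d9216 / d1522, 2))  # prints: 6.06
```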
Extreme 2-X670V ToR & 2-BD X8 Core, Single Homed Test
Unicast Return From Multicast Flows, Multicast Traffic, Many-to-Many Full Mesh Flows (avg latency, ns)

Packet size (bytes)    128    256    512   1522    9216
Unicast               2445   2591   2923   4366   14117
Multicast             2691   3053   3517   5700   22382
Many-to-Many          2065   2297   2887   4789   19115
Dual Homed

For the Dual-Homed Server Lippis/Ixia Test, Extreme
configured four Summit® X670Vs and two BlackDiamond®
X8s for its Ethernet Fabric. Each Ixia port, acting as a vir-
tual server, was dual homed to separate Summit® X670Vs,
which is a best practice in high availability data centers and
cloud computing facilities. The load placed on this Ether-
net Fabric was the same 32 10GbE, or 320Gbs, with a mix
of unicast, multicast and mesh or any-to-any flows. Each
Summit® X670V was configured with eight 10GbE server
ports. Then two 40GbE lagged ports connected the X670Vs.
Finally, the top X670Vs were connected to BlackDiamond®
X8 Core switches via dual 40GbE port MLAGs. Latency
and throughput were measured from ingress to egress,
from Summit® X670Vs to BlackDiamond® X8s to Summit®
X670Vs, representing Fabric latency and throughput rather
than device.
[Figure: Dual-Homed Topology — each Ixia port dual homed to two Summit X670 ToRs; 40G MLAG between ToRs; 4x40G uplinks to BD X8 Cores.]
Unicast Results

The Extreme Open Fabric system latency for unicast traf-
fic varied from a low of 2.2 microseconds to a high of 28
microseconds. The result was the same as the single-homed
configuration (as expected), even though increased reli-
ability or availability was introduced to the design via
dual-homing server ports to two Summit® X670V ToRs plus
MLAG between ToRs. In line with expectations, latency
increased with packet size, where 9216 size packets experi-
enced the largest system delay. Zero packet loss was ob-
served across all packet sizes of unicast traffic.
Extreme 2-X670V ToR & 2-BD X8 Core, Dual Homed Test
Unicast Return From Multicast Flows (min, max, avg latency, ns)

Packet size (bytes)    128    256    512   1522    9216
Max Latency           2960   3320   4280   6720   28220
Avg Latency           2438   2578   2923   4256   14094
Min Latency           2280   2360   2580   3460    9600
Multicast Results

The Extreme Open Fabric system latency for multicast traffic
varied from a low of 2.4 microseconds to a high of 38 mi-
croseconds. The results were the same as the single-homed
configuration, even though increased reliability or availability
was introduced to the design via dual-homing server ports
to two Summit® X670V ToRs plus MLAG between ToRs. As
expected, latency increased with packet size, where 9216 size
packets experienced the largest system delay. Again, 75%
packet loss was expected and observed across all packet sizes
of multicast traffic. Note that currently Ixia's statistics do not
support the combination of multicast traffic running over
ports within a LAG. Therefore, packet loss for this scenario
was not accurately calculated and is, therefore, not valid.
Extreme 2-X670V ToR & 2-BD X8 Core, Dual Homed Test
Multicast Traffic (min, max, avg latency, ns)

Packet size (bytes)    128    256    512   1522    9216
Max Latency           3440   4140   5540  11260   38300
Avg Latency           2697   3042   3565   5818   22533
Min Latency           2420   2480   2720   3600    9740
Many-to-Many Results

The Extreme Open Fabric system latency for many-to-many
unicast traffic in a mesh configuration varied from a low of
830 nanoseconds to a high of 45 microseconds. Again, the
result was the same as the single-homed configuration, even
though increased reliability or availability was introduced
to the design via dual-homing server ports to two Sum-
mit® X670V ToRs plus MLAG between ToRs. As expected,
latency increased with packet size, where 9216 size packets
experienced the largest system delay. Zero packet loss was
observed across all packet sizes of unicast traffic.
Extreme 2-X670V ToR & 2-BD X8 Core, Dual Homed Test
Many to Many Full Mesh Flows (min, max, avg latency, ns)

Packet size (bytes)    128    256    512   1522    9216
Max Latency           3160   3720   5220   9360   45440
Avg Latency           2045   2223   2730   4882   19121
Min Latency            820    860   1000   1220    1220
The table below illustrates the average system latency across
packet sizes 128 to 9216 for unicast, multicast and many-to-
many traffic flows through the Extreme Networks dual-
homed configuration. All traffic flows performed in a tight
range between 2 and 5.8 microseconds except for the large
packet size of 9216, where latency increased by an approxi-
mate factor of five. This was mostly due to the fact that the
9216 packet size was six times larger than the previous 1522
size and required substantially more time to pass through
the Fabric.
Extreme 2-X670V ToR & 2-BD X8 Core, Dual Homed Test
Unicast Return From Multicast Flows, Multicast Traffic, Many-to-Many Full Mesh Flows (avg latency, ns)

Packet size (bytes)    128    256    512   1522    9216
Unicast               2438   2578   2923   4256   14094
Multicast             2697   3042   3565   5818   22533
Many-to-Many          2045   2223   2730   4882   19121
Cloud Performance Test

In addition to testing the Extreme Open Fabric with uni-
cast, multicast and many-to-many traffic flows at varying
packet sizes, the Lippis Cloud Performance Test iMix was
also used to generate traffic and measure system latency
and throughput from ingress to egress. To understand the
performance of Extreme's Open Fabric system under load,
we ran six iterations of the Lippis Cloud Performance Test
at traffic loads of 50%, 60%, 70%, 80%, 90% and 100%, mea-
suring latency and throughput on the X670V ToR switch.
See the methodology section for a full explanation of this
test. The X670V ToR was connected to Ixia test gear via 28
10GbE links.

Lippis Cloud Performance

The Extreme X670V performed flawlessly over the six Lip-
pis Cloud Performance iterations. Not a single packet was
dropped as the mix of east-west and north-south traffic
increased in load from 50% to 100% of link capacity. The
average latency was stubbornly consistent as aggregate
traffic load was increased. Microsoft Exchange, YouTube
and database traffic were the longest to process, with 100
nanoseconds more latency than HTTP and iSCSI flows. The
difference in latency measurements between 50% and 100%
of load across protocols was 181ns, 146ns, 87ns, 93ns and
85ns, respectively, for HTTP, YouTube, iSCSI, Database and
Microsoft Exchange traffic. This was a very tight range with
impressive results, as it signifies the ability of the Fabric to
deliver consistent performance under varying load.
Reliability

Extreme Networks went above and beyond what was
required in the Lippis/Ixia Active-Active Fabric Test to
demonstrate and test the reliability of a data center Fabric
built with its Summit® X670V and BlackDiamond® X8.
Within Extreme's two-tier switch architecture, reliability
was tested in three critical areas: 1) between Ixia test gear
and the Summit® X670V, 2) between Summit® X670V and
BlackDiamond® X8 and 3) with the loss of an entire Black-
Diamond® X8. The Summit® X670V and BlackDiamond® X8
were configured in the "single-homed" test design.
[Figure: Summit X670V ToR IxCloud Performance Test, 28 ports of 10GbE — avg latency (ns) for NS_HTTP, YouTube, iSCSI, DB and MS Exchange at 50%, 60%, 70%, 80%, 90% and 100% load.]
Server to Summit® X670V Reliability Test

A stream of unicast many-to-many flows at 128-byte size
packets was sent to the Extreme Open Fabric. While the
Fabric was processing this load, a 10GbE link was discon-
nected in LAGs 3, 4, 7 and 8. For many-to-many unicast
traffic, 0.139% packet loss was observed over 87 millisec-
onds packet loss duration.
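The two metrics reported here — percent loss and loss duration — are related through the frame rate of the affected flows; a sketch of that accounting, with the rates below as illustrative assumptions rather than test parameters:

```python
# During reconvergence, frames on the failed link black-hole. Given the
# offered frame rate on the affected flows, frames lost and loss duration
# are two views of the same quantity.

def lost_frames(rate_pps, outage_s):
    """Frames dropped while traffic black-holes for outage_s seconds."""
    return rate_pps * outage_s

def loss_duration_s(frames_lost, rate_pps):
    """Invert: derive loss duration from frames lost and offered rate."""
    return frames_lost / rate_pps

# 10GbE line rate at 128-byte frames (128B frame + 20B preamble/IFG)
rate = 10e9 / ((128 + 20) * 8)        # ~8.45 million frames/s per port

dropped = lost_frames(rate, 0.087)    # a hypothetical 87 ms outage
print(round(loss_duration_s(dropped, rate), 3))  # prints: 0.087
```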
[Figure: Single-Homed Topology with 10GbE link pulls (X) in LAGs 3, 4, 7 and 8.]
Summit® X670V to BlackDiamond® X8 Reliability Test

There were two parts to the Lippis/Ixia Reliability Test.
First, a QSFP+ 40GbE optical cable that connected Sum-
mit® X670V switches to BlackDiamond® X8 switches was
pulled while unicast many-to-many traffic flows were being
processed, and the resulting packet loss plus packet loss dura-
tion was recorded by Ixia test equipment. Then the 40GbE
link was restored, and the resulting packet loss plus packet
loss duration was recorded by Ixia test equipment. There are
four 40GbE links that connected a Summit® X670V to two
BlackDiamond® X8s.
[Figure: Single-Homed Topology with a 40GbE ToR-to-Core link pull (X).]
Once again, a stream of unicast many-to-many flows at
128-byte size packets was sent to the Extreme Open Fabric.
While the Fabric was processing this load, a 40GbE link was
disconnected between Summit® X670V and BlackDiamond®
X8. For many-to-many unicast traffic, 0.003% packet loss was
observed over 3 milliseconds packet loss duration. When the
40GbE link was restored, the Fabric reconfigured itself in 56
milliseconds and lost 0.026% of packets in that time.
BlackDiamond® X8 Shut Down Reliability Test

There were two parts to this Reliability Test of losing an en-
tire Core switch. First, the BlackDiamond® X8 Core switch
was shut down while unicast many-to-many traffic flows
were being processed, and the resulting packet loss plus
packet loss duration was recorded by Ixia test equipment.
[Figure: Extreme 2-X670V ToR & 2-BD X8 Core, Single Homed Configuration Reliability Test, based upon 128-byte size packets — packet loss (%) and packet loss duration (ms) for: MLAG link from ToR to Core, cable pull and link restore; MLAG Core, entire BD X8 goes down, all 4-40GbE links, switch shutdown and restoration events.]
Then the BlackDiamond® X8 Core switch was restored,
and the resulting packet loss plus packet loss duration was
recorded by Ixia test equipment.

For many-to-many unicast traffic, 0.01% packet loss was
observed over 4.3 milliseconds packet loss duration. When
the BlackDiamond® X8 was restored, the Fabric reconfig-
ured itself in 42 milliseconds and lost 0.021% of packets in
that time.
[Figure: Single-Homed Topology with one BD X8 Core switch shut down (X).]
Discussion

The Extreme Networks Open Fabric data center network, built with its Summit®
X670V ToR switches and BlackDiamond® X8 Core switches, has undergone the
most comprehensive public test for data center network Fabrics, and has achieved
outstanding results in each of the key aspects of networking. While the Open
Fabric architecture supports multiple high availability network designs and con-
figurations, during the Lippis/Ixia Active-Active Ethernet Fabric Test, a two-tier
network was implemented for both single homed and dual homed topologies.

We found that its system latency was very consistent under different packet sizes,
payloads and traffic types. It performed to expectations; that is, large-size packet
streams consume more time to pass through the switches, resulting in greater
latency. There was no packet loss in either single- or dual-homed configurations
while it was supplied 320Gbs of unicast, multicast and many-to-many traffic
types to process. In addition to processing a mix of different traffic types, Ex-
treme Networks Open Fabric performed just as outstandingly during the Lippis
Cloud Performance Test, processing a mix of HTTP, YouTube, iSCSI, Database
and Microsoft Exchange traffic that increased in load from 50% to 100% of ca-
pacity. Here, too, its latency was consistently low, with zero packet loss and 100%
throughput achieved.

Extreme used MLAG to implement its active-active protocol. Extreme's MLAG
implementation performed flawlessly, proving that a two-tier data center net-
work architecture built with its Summit® X670V ToR and BlackDiamond® X8
Core switches will scale with performance.

Regarding scale, Extreme Networks tested two-tier topologies with Open Fabric
in these tests, demonstrating versatility and robustness in the single and dual
homed topologies. Note that due to the capacity of the BlackDiamond X8 core
switches, these designs can scale up to support large network implementations
with small oversubscription ratios at the ToR level. Similarly, if needed, large
topologies can be deployed without oversubscription by deploying dual Black-
Diamond X8 core switches in a single tier architecture, yet supporting active-
active connectivity from servers. This design can scale up to 750+ 10GbE or 190+
40GbE attached active-active servers. These designs make Extreme Networks
stand out in their ability to deploy highly scalable designs for active-active net-
works.

While Extreme Networks' solutions, as tested, focused on Layer 2 active-active
architectures, its Open Fabric hardware and software also support Layer
3 forwarding or routing and redundancy protocols. As such, Equal Cost Multi
Path (ECMP) based designs are commonly deployed with these products for
active-active Layer 3 connectivity.
In this video podcast, Extreme
demonstrated the ease with which VMs are
moved between data centers when its Open
Fabric was connected via VPLS, or Virtual
Private LAN Service.

Video feature: Click to view
VM Migration Video Podcast
One of the big and very positive surprises of Extreme's set of active-active test
results concerned its reliability results. Links between Ixia gear and Summit®
X670V ToR switches were disconnected and re-established to measure packet
loss and packet loss duration. In addition, an MLAG link between Summit®
X670V ToR switches and BlackDiamond® X8 Core switches was disconnected
and re-established to measure packet loss and packet loss duration. Finally, an
entire BlackDiamond® X8 Core switch was shut down and powered back up
to measure packet loss and packet loss duration. These tests proved that the
Extreme Networks Open Fabric architecture is resilient and reliable, as packet
losses in the range of 0.003% to 0.266% were observed for different traffic types and
points of disconnection. Packet loss duration was measured in the hundreds of
milliseconds, well under the threshold of TCP (Transmission Control Protocol)
timeouts, so that users would not be disconnected from their applications during
a network disruption.
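The TCP-timeout argument can be made concrete: RFC 6298 floors the retransmission timeout (RTO) at 1 second, well above any loss duration measured in these tests. A sketch of the RTO rule, with the RTT inputs below as illustrative assumptions:

```python
# RFC 6298: RTO = SRTT + max(G, K * RTTVAR), clamped to a 1 second minimum.
# Even aggressive data-center RTT estimates leave the 1 s floor dominant.

def rto_seconds(srtt, rttvar, g=0.0, k=4, min_rto=1.0):
    """Retransmission timeout per RFC 6298 (all times in seconds)."""
    return max(min_rto, srtt + max(g, k * rttvar))

# A generous data-center RTT estimate (1 ms SRTT, 0.5 ms RTTVAR):
print(rto_seconds(srtt=0.001, rttvar=0.0005))  # prints: 1.0

# The worst loss duration reported in this section (87 ms) sits far below it:
assert 0.087 < rto_seconds(0.001, 0.0005)
```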
Data center networking is moving in multiple directions of efficiency. Converged
I/O hopes to reduce cabling and storage switch cost by combining both storage
and datagram traffic over one Ethernet Fabric. The Open Networking standards
approach to networking looks to reduce operational spend by centralizing net-
work control, where northbound APIs abstract network services so that applica-
tions and data center orchestration systems can automate network configuration.
As these trends develop and grow, a stable, high performing data center network
infrastructure that scales becomes ever more important. Extreme Networks Open
Fabric demonstrated high performance and reliability under various loads and
conditions during this Lippis/Ixia Active-Active Fabric Test. Its Open Fabric road-
map includes 100GbE, OpenFlow and OpenStack, a set of northbound APIs, a
new unified forwarding table to support L2/L3 and flows, application tie-ins and
much more. We find that Extreme Networks Open Fabric is an excellent choice
to consider for modern data center networking requirements.
Active-Active Fabric Cross-Vendor Analysis

To deliver the industry's first test suite of Ethernet fabrics, Ixia, Lippis Enterprises
and all vendors provided engineering resources to assure that the configura-
tion files are suitable for MLAG, TRILL and SPB configurations. In addition,
the results are repeatable, a fundamental principle in testing. This test was more
challenging thanks to the device under test being a fabric or "system" versus a
single product. The participating vendors are:

Arista Software-Defined Cloud Network
Avaya Virtual Enterprise Network Architecture Fabric Connect
Brocade Virtual Cluster Switching
Extreme Networks Open Fabric

Brocade was the first company to be tested. The test suite evolved after its first
test; thus we do not include Brocade's single and dual homed results in the cross-
vendor section due to a different traffic mix utilized for Brocade's VCS. Further,
each vendor was offered optional testing opportunities, which some accepted and
some declined. For the cross-vendor analysis, we report on only required aspects
of the Lippis/Ixia Active-Active Ethernet Fabric Test.
These fabric configurations represent the state-of-the-art in two-tier networking.
Pricing per fabric varies from a low of $75K to a high of $150K. ToR and Core
switch port density impact pricing, as does 10GbE vs 40GbE. Price points on a
10GbE per port basis range from a low of $351 to a high of $670. 40GbE ToR switch
price per port ranges from $625 to $2,250. In the Core, 10GbE price per port is
as low as $1,200, while 40GbE ports are as low as $6,000 per port.
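The per-port figures above are simple quotients of fabric price and port count; the example numbers below are hypothetical round values, not vendor quotes:

```python
# Per-port pricing is fabric price divided by usable port count.

def price_per_port(fabric_price_usd, ports):
    """Effective cost of each port in a fabric purchase."""
    return fabric_price_usd / ports

# e.g. a hypothetical $75K fabric exposing 214 x 10GbE ports lands near
# the low end of the reported $351-$670 per-port range:
print(round(price_per_port(75_000, 214)))  # prints: 350
```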
We compared each of the above firms' fabrics in terms of their ability to forward
packets: quickly (i.e., latency), without loss of their throughput at full line rate for
three types of traffic: unicast mesh in a many-to-many configuration, multicast
and unicast returns from multicast peers. We also compare Server-ToR reliability and
how each ToR performs during the Lippis Cloud simulation test.
[Figure: Fabric Test, Many-to-Many Full Mesh Unicast — Average Fabric Latency (ns) at 128-, 256-, 512-, 1522- and 9,216-byte* packet sizes, single homed and dual homed, for Arista SDCN (2-7050S-64 ToR & 2-7508 Core), Avaya VENA Fabric Connect (4-VSP 7000s ToR), Brocade VCS (2-VDX6720 ToR & 2-VDX8770 Core) and Extreme Networks Open Fabric (2-X670V ToR & 2-BD X8 Core). All switches performed at zero frame loss.]
1 Software-Defined Cloud Network. 2 Virtual Enterprise Network Architecture. 3 Virtual Cluster Switching. 4 Brocade's test was based on a slightly different traffic profile and thus is not included here.
* The latency measurement was unexpectedly high, and it is highly probable that it was the result of buffering and not a true measure of fabric latency.

Fabric Test, Many-to-Many Full Mesh Unicast — Average Fabric Latency (ns)

                          Single Homed                       Dual Homed
Frame size     Arista     Avaya     Extreme      Arista     Avaya     Extreme
(bytes)        SDCN1      VENA2     Open Fabric  SDCN1      VENA2     Open Fabric
128              6993       2741      2065         6446       2356      2045
256             10189       3447      2297         9167       2356      2223
512             16375       4813      2887        14375       2428      2730
1,522           39780      10491      4789        34007       2724      4882
9,216*         143946      53222     19115       142800       6258     19121

Arista SDCN1: 2-7050S-64 ToR & 2-7508 Core. Avaya VENA2 Fabric Connect: 4-VSP 7000s ToR. Extreme Networks Open Fabric: 2-X670V ToR & 2-BD X8 Core. Brocade VCS3,4 (2-VDX6720 ToR & 2-VDX8770 Core): not included, as its test was based on a slightly different traffic profile.
Jumbo frame (9,216-byte) traffic requires significantly more time to pass through the fabric, thanks to serialization; therefore, it was plotted on a separate graphic so that smaller packet-size traffic could be more easily viewed.
Extreme Networks Open Fabric delivered the lowest latency for single-homed fully meshed unicast traffic of packet sizes 128-1522, followed by Avaya Fabric Connect and Arista's SDCN. Avaya's Fabric Connect delivered the lowest latency for dual-homed fully meshed unicast traffic of packet sizes 128-1522; the Extreme Networks Open Fabric and Avaya Fabric Connect dual-homed results were nearly identical. Note that Avaya's Fabric Connect was configured with ToR switches only, while Arista and Extreme provided ToR and Core switches; therefore, there is significantly more network capacity in those configurations.

Note: Avaya VSP 7000s were configured in store-and-forward mode while all others were in cut-through, which may impact direct comparison of the data.
Both Arista and Extreme dual-homed results are the same as single-homed, as expected. Avaya's dual-homed result is lower than its single-homed and is due to increased bandwidth between the ToRs. Further, Core switches do take longer to process packets, thanks to their higher port densities and inter-module system fabrics. Avaya's lack of a Core switch provided it an advantage in this test.
We don't believe that the IxNetwork latency measurement of a fabric in cut-through (CT) or store-and-forward (SF) mode is material. The SF RFC 1242 latency measurement method is the time interval starting when the last bit of the input frame reaches the input port and ending when the first bit of the output frame is seen on the output port (LIFO), while the CT RFC 1242 latency measurement method is the time interval starting when the end of the first bit of the input frame reaches the input port and ending when the start of the first bit of the output frame is seen on the output port (FIFO). The measurement difference between CT and SF in a fabric under test is the size of one packet from the starting point; in essence, it is the serialization delay of one packet. CT vs. SF device latency measurements, by contrast, are material. Given the above, and the fact that Avaya's VSP 7000s were configured in SF while all other switches were configured for CT, we cannot rule out an anomaly that may impact Avaya's fabric latency measurement during this industry test.
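The one-packet offset between SF and CT measurements described above is easy to quantify. The sketch below is our own illustration, not part of the Ixia test suite; it computes the serialization delay at 10GbE line rate for the frame sizes used in this test:

```python
def serialization_delay_ns(frame_bytes: int, line_rate_bps: float = 10e9) -> float:
    """Time to clock one frame onto the wire, in nanoseconds."""
    return frame_bytes * 8 / line_rate_bps * 1e9

# The SF measurement (LIFO) trails the CT measurement (FIFO) by roughly
# one frame's serialization delay at the measured port.
for size in (128, 256, 512, 1522, 9216):
    print(f"{size:5d} bytes -> {serialization_delay_ns(size):8.1f} ns")
```

At 10GbE this offset ranges from about 102 ns for 128-byte frames to about 7.4 µs for jumbo frames, which is why the distinction matters more for large packets.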
Fabric Test, Multicast Traffic - Average Fabric Latency (ns)
All switches performed at zero frame loss.

Tested configurations: Arista SDCN1 (2-7050S-64 ToR & 2-7508 Core); Avaya VENA2 Fabric Connect (4-VSP 7000s ToR); Brocade VCS3,4 (2-VDX6720 ToR & 2-VDX8770 Core); Extreme Networks Open Fabric (2-X670V ToR & 2-BD X8 Core).

Frame size          Single Homed                   Dual Homed
(bytes)         Arista    Avaya   Extreme     Arista    Avaya   Extreme
128               7693     3439      2691       8104     2892      2697
256               7792     4312      3053      11329     2884      3042
512              10302     6082      3517      16931     3036      3565
1,522            19983    13594      5700      38407     3951      5818
9,216*          415230    53670     22382     402035    15362     22533

1 Software-Defined Cloud Network. 2 Virtual Enterprise Network Architecture. 3 Virtual Cluster Switching. 4 Brocade's test was based on a slightly different traffic profile and thus is not included here.
* The 9,216-byte latency measurement was unexpectedly high; it is highly probable that it was the result of buffering and not a true measure of fabric latency.
Extreme Networks Open Fabric delivered the lowest latency for single-homed multicast traffic of packet sizes 128-1522, followed by Avaya Fabric Connect and Arista's SDCN. Notice that for all vendors there is little difference in latency between 128 and 512 bytes, which provides consistent performance for applications running in that range. As the packet size increased by a factor of two, so did latency in most cases.

Note: Avaya VSP 7000s were configured in store-and-forward mode while all others were in cut-through, which may impact direct comparison of the data.
Avaya's Fabric Connect delivered the lowest latency for dual-homed multicast traffic of packet sizes 256-9216. The Extreme Networks Open Fabric and Avaya Fabric Connect dual-homed results for multicast traffic of packet sizes 128-1522 were nearly identical. Note that Avaya's Fabric Connect was configured with ToR switches while Arista and Extreme provided ToR and Core switches; therefore, there is significantly more network capacity with these configurations.

Both Arista and Extreme dual-homed results are the same as single-homed, as expected. Avaya's dual-homed result is lower than its single-homed and is due to increased bandwidth between the ToRs. Further, Core switches do take longer to process packets, thanks to their higher port densities and inter-module system fabrics. Avaya's lack of a Core switch provided it an advantage in this test.
Arista's multicast latency in both single- and dual-homed tests is anomalously high. The reason is that the Arista 7500 has a unique architecture with a credit-based fabric scheduler. This design allows for fairness across flows throughout the system and for efficient utilization of the fabric bandwidth. Unicast traffic is buffered on ingress using virtual output queues (VOQs); there are over 100,000 VOQs in the system, divided into eight classes of traffic. By design, multicast traffic cannot be buffered on ingress, as multiple ports could be members of a multicast group, and the packets must be transmitted to all destination ports without dependencies on each other.

The Arista 7500 provides a very large amount of multicast bandwidth by replicating on the ingress silicon, at the fabric and also at the egress. This three-stage replication tree allows the platform to deliver wire-speed multicast to all 384 ports simultaneously. If there is congestion on the egress port, multicast packets destined to that port are buffered at egress. Other ports are not affected by this congestion, and head-of-line blocking is avoided. When there are multiple multicast sources and 9K packets are used, traffic is burstier, and it is possible to overflow the egress buffers. Such a burst could result in dropping a small percentage of the overall traffic, thus increasing measured latency. This separation of buffering (ingress for unicast traffic, egress for multicast traffic) allows the 7500 to perform well under real-world scenarios with mixed unicast and multicast traffic patterns. To obtain an accurate fabric latency measurement, Arista recommends that multicast traffic be run at a no-drop rate across all nodes.
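The egress-buffering behavior described above can be illustrated with a toy fluid model. The buffer size, drain rate and burst pattern below are hypothetical, chosen only to show how simultaneous jumbo-frame multicast bursts can overflow a fixed egress buffer while a paced arrival pattern does not; real 7500 buffer sizes and scheduling differ:

```python
def simulate_egress(buffer_bytes: int, drain_bps: float, bursts: list) -> int:
    """Toy fluid model of one egress port. Each burst is (arrival_time_s,
    size_bytes). The buffer drains continuously at drain_bps; returns the
    total bytes dropped when arrivals overflow the buffer."""
    occupancy = 0.0
    dropped = 0
    last_t = 0.0
    for t, size in sorted(bursts):
        # Drain the buffer for the time elapsed since the last arrival.
        occupancy = max(0.0, occupancy - (t - last_t) * drain_bps / 8)
        last_t = t
        if occupancy + size > buffer_bytes:
            dropped += int(occupancy + size - buffer_bytes)
            occupancy = buffer_bytes
        else:
            occupancy += size
    return dropped

# 64 simultaneous 9,216-byte frames into a buffer that holds 32 of them:
# half the bytes are dropped.
burst = simulate_egress(32 * 9216, 10e9, [(0.0, 9216)] * 64)
# The same 64 frames paced 10 microseconds apart drain between arrivals:
paced = simulate_egress(32 * 9216, 10e9, [(i * 1e-5, 9216) for i in range(64)])
```

The point of the sketch is the one in the text: it is the burstiness of many simultaneous 9K sources, not the aggregate rate, that overflows an egress buffer.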
Fabric Test, Unicast Return From Multicast Flows - Average Fabric Latency (ns)
All switches performed at zero frame loss.

Tested configurations: Arista SDCN1 (2-7050S-64 ToR & 2-7508 Core); Avaya VENA2,3 Fabric Connect (4-VSP 7000s ToR); Brocade VCS4,5 (2-VDX6720 ToR & 2-VDX8770 Core); Extreme Networks Open Fabric (2-X670V ToR & 2-BD X8 Core).

Frame size          Single Homed                   Dual Homed
(bytes)         Arista    Avaya   Extreme     Arista    Avaya   Extreme
128               5437     2775      2445       5346     2510      2438
256               6183     3181      2591       6062     2561      2578
512               7358     4055      2923       7111     2675      2923
1,522            11059     7883      4366      10292     3041      4256
9,216            35693    39197     14117      35661     5803     14094

1 Software-Defined Cloud Network. 2 Virtual Enterprise Network Architecture. 3 Avaya's multicast traffic was simulated by forwarding. 4 Virtual Cluster Switching. 5 Brocade did not support multicast at test time.
Extreme Networks Open Fabric delivered the overall lowest latency for single- and dual-homed unicast return from multicast flows. Avaya Fabric Connect offered lower latency at packet sizes 1522-9216 for dual-homed unicast return from multicast flows; however, it delivered the largest latency for the single-homed jumbo packet size of 9216, which is anomalous.

Note: Avaya VSP 7000s were configured in store-and-forward mode while all others were in cut-through, which may impact direct comparison of the data.
Notice that for all vendors there is little difference in latency between 128 and 512 bytes, on the order of hundreds of nanoseconds, which provides consistent performance for applications running in that range. As the packet size increased by a factor of two, so did latency in most cases.

Arista's SDCN and Extreme Networks' Open Fabric performed consistently within single- and dual-homed tests, with each set of data being nearly identical, which is expected and desired. Note that Avaya's Fabric Connect was configured with ToR switches while Arista and Extreme provided ToR and Core switches; therefore, there is significantly more network capacity with these configurations.

Avaya's dual-homed result is lower than its single-homed and is due to increased bandwidth between the ToRs. Further, Core switches do take longer to process packets, thanks to their higher port densities and inter-module system fabrics. Avaya's lack of a Core switch provided it an advantage in this test.
Server-ToR Reliability Test
128-byte packets in a many-to-many full-mesh flow through the fabric; one 10GbE cable in LAGs 3, 4, 7 and 8 between Ixia and ToR is pulled.

Company             Fabric Name            Fabric Products                   Packet Loss %   Packet Loss Duration (ms)
Arista              SDCN1                  2-7050S-64 ToR & 2-7508 Core      0.118%          70.179
Avaya               VENA2 Fabric Connect   4-VSP 7000s ToR                   0.283%          278.827
Brocade4            VCS3                   2-VDX6720 ToR & 2-VDX8770 Core    n/a             n/a
Extreme Networks    Open Fabric            2-X670V ToR & 2-BD X8 Core        0.139%          87.018

1 Software-Defined Cloud Network. 2 Virtual Enterprise Network Architecture. 3 Virtual Cluster Switching. 4 Brocade did not test for reliability.
Arista's SDCN delivered the lowest packet loss and shortest packet loss duration in the Server-ToR reliability test, followed by Extreme Networks' Open Fabric and then by Avaya's Fabric Connect. The difference between Arista's and Extreme's results for this reliability test is 17 milliseconds of packet loss duration and 0.021% packet loss, a narrow difference. Avaya's Server-ToR packet loss duration is approximately four times that of Arista.
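Packet loss percentage and packet loss duration are two views of the same lost-frame count. The conversion below is our own sketch and assumes frames are offered at full 10GbE wire rate for 128-byte frames, including the 8-byte preamble and 12-byte inter-frame gap; the report does not restate the per-port offered load here, so treat the absolute numbers as illustrative:

```python
def wire_rate_fps(frame_bytes: int, line_rate_bps: float = 10e9) -> float:
    """Max frames/s on one link: each frame carries 20B of preamble + IFG overhead."""
    return line_rate_bps / ((frame_bytes + 20) * 8)

def loss_duration_ms(lost_frames: int, offered_fps: float) -> float:
    """Outage duration implied by a lost-frame count at a given offered rate."""
    return lost_frames / offered_fps * 1e3

# One 10GbE port carries about 8.45 million 128-byte frames per second, so
# roughly 70 ms of loss on one port corresponds to roughly 593,000 lost frames.
rate = wire_rate_fps(128)
```

This is why a fraction-of-a-percent loss figure can still represent a few hundred milliseconds of disruption at these frame rates.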
Cloud Simulation, ToR Switches at 50% Aggregate Traffic Load
Zero packet loss; latency measured in ns. 28 x 10GbE configuration between Ixia and ToR switch, tested while in single-homed configuration.

Company             Fabric Name            Fabric Products   NS HTTP   NS YouTube   EW iSCSI   EW DB   EW MS Exchange
Arista              SDCN1                  7050S-64 ToR         1121         1124        987     1036            1116
Avaya4              VENA2 Fabric Connect   VSP 7000s ToR         n/a          n/a        n/a      n/a             n/a
Brocade             VCS3                   VDX6720 ToR          1929         1859       1815     1857             597
Extreme Networks    Open Fabric            X670V ToR            1149         1196       1108     1206            1292

1 Software-Defined Cloud Network. 2 Virtual Enterprise Network Architecture. 3 Virtual Cluster Switching. 4 Avaya did not test for cloud simulation.
Cloud Simulation, ToR Switches at 100% Aggregate Traffic Load
Zero packet loss; latency measured in ns. 28 x 10GbE configuration between Ixia and ToR switch, tested while in single-homed configuration.

Company             Fabric Name            Fabric Products   NS HTTP   NS YouTube   EW iSCSI   EW DB   EW MS Exchange
Arista              SDCN1                  7050S-64 ToR         1187         1184       1033     1083            1156
Avaya4              VENA2 Fabric Connect   VSP 7000s ToR         n/a          n/a        n/a      n/a             n/a
Brocade             VCS3                   VDX6720 ToR          4740         3793      10590     9065             602
Extreme Networks    Open Fabric            X670V ToR            1330         1342       1195     1300            1376

1 Software-Defined Cloud Network. 2 Virtual Enterprise Network Architecture. 3 Virtual Cluster Switching. 4 Avaya did not test for cloud simulation.
Arista Networks' 7050S-64 delivered the lowest latency measurement for the Lippis Cloud simulation test at 50% and 100% load, followed by Extreme Networks' X670V and Brocade's VCS VDX 6720. Both the Arista Networks 7050S-64 and Extreme Networks X670V delivered nearly consistent performance under 50% and 100% load, with variation of a few hundred nanoseconds per protocol, meaning that there is plenty of internal processing and bandwidth capacity to support this traffic load. The Brocade VCS VDX 6720 was slightly more variable. All products delivered 100% throughput, meaning that not a single packet was dropped as load varied.
Ethernet Fabric Industry Recommendations

The following provides a set of recommendations to IT business leaders and network architects for their consideration as they seek to design and build their private/public data center cloud network fabric. Most of the recommendations are based upon our observations and analysis of the test data. For a few recommendations, we extrapolate from this baseline of test data to incorporate key trends and how these Ethernet Fabrics may be used for corporate advantage.
Consider Full Mesh Non-Blocking: Most of the fabric configurations were fully meshed and non-blocking, which provided a highly reliable and stable infrastructure. This architecture scales, thanks to active-active protocols, and enables two-tier design, which lowers equipment cost plus latency. In addition to being highly reliable, it also enables dual-homed server support at no performance cost.
Consider Two-Tier Network Fabric: To reduce equipment cost, support a smaller number of network devices and increase application performance, it is recommended to implement a two-tier leaf-spine Ethernet Fabric. This Lippis/Ixia Active-Active Ethernet Fabric Test demonstrated that two-tier networks are not only ready for prime-time deployment, they are the preferred architecture.
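A quick way to check whether a two-tier leaf-spine design is non-blocking is to compare each leaf's server-facing and spine-facing bandwidth. The port counts below are hypothetical examples for illustration, not the configurations tested:

```python
def oversubscription(downlinks: int, down_gbps: float,
                     uplinks: int, up_gbps: float) -> float:
    """Leaf oversubscription: server-facing bandwidth / spine-facing bandwidth.
    A ratio of 1.0 or less means the leaf is non-blocking toward the spine."""
    return (downlinks * down_gbps) / (uplinks * up_gbps)

# 48 x 10GbE down with 4 x 40GbE up is 3:1 oversubscribed;
# 32 x 10GbE down with 8 x 40GbE up is fully non-blocking.
ratio_a = oversubscription(48, 10, 4, 40)
ratio_b = oversubscription(32, 10, 8, 40)
```

Whether a given oversubscription ratio is acceptable depends on the east-west traffic profile; the fabrics in this test were sized to be non-blocking.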
MLAG and ECMP Proven and Scale, But: It was proven that a two-tier network can scale, thanks to ECMP with up to 32-way links. In addition, ECMP offers multipathing at scale. What is missing from ECMP is auto-provisioning of links between switches; in short, ECMP requires manual configuration.
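ECMP's multipathing works by hashing each flow's header fields to pick one of the equal-cost links, so all packets of a flow stay in order on one path. The sketch below is a generic illustration of that idea; the hash function is our own choice, not any vendor's actual algorithm:

```python
import zlib

def ecmp_path(src_ip: str, dst_ip: str, proto: int,
              sport: int, dport: int, n_paths: int) -> int:
    """Pick one of n equal-cost paths by hashing the 5-tuple.
    Deterministic per flow, so a flow never reorders across paths."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return zlib.crc32(key) % n_paths

# The same flow always maps to the same path out of, say, 32:
p1 = ecmp_path("10.0.0.1", "10.0.1.9", 6, 33000, 80, 32)
p2 = ecmp_path("10.0.0.1", "10.0.1.9", 6, 33000, 80, 32)
```

Because load balancing is per flow rather than per packet, a few elephant flows can still load one link unevenly, which is one reason fabric vendors differ in their hash inputs.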
Consider Utilizing TRILL and/or SPB: Over time, most vendors will support TRILL and SPB in addition to MLAG and ECMP. Both TRILL and SPB offer unique auto-provisioning features that simplify network design. It is recommended that network architects experiment with both active-active protocols to best understand their utility within your data center network environment.
Strong Underlay for a Dynamic Overlay: The combination of a fully meshed, non-blocking two-tier network built with standard active-active protocols constructs a strong underlay to support a highly dynamic overlay. With the velocity of change in highly virtualized data centers ushering in virtualized networks, or overlays, a stable and scalable underlay is the best solution to support the rapid build-up of tunneled traffic running through Ethernet Fabrics. This huge demand in overlay traffic is yet another good reason to consider a two-tier active-active Ethernet Fabric for data center and cloud networking.
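One practical underlay consequence of overlay tunneling is encapsulation overhead. VXLAN, the most common overlay encapsulation, adds 50 bytes to each IPv4 frame (outer Ethernet, outer IPv4, UDP and VXLAN headers), which the underlay MTU must absorb; this short sketch shows the arithmetic:

```python
# VXLAN IPv4 overhead: outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN (8).
VXLAN_OVERHEAD = 14 + 20 + 8 + 8

def underlay_mtu_needed(tenant_mtu: int) -> int:
    """Minimum underlay MTU to carry an overlay frame without fragmentation."""
    return tenant_mtu + VXLAN_OVERHEAD

# A standard 1500-byte tenant MTU needs at least a 1550-byte underlay MTU;
# a 9000-byte jumbo tenant MTU needs at least 9050 bytes.
```

This is one reason a jumbo-frame-capable underlay, like the fabrics tested here at 9,216 bytes, matters for overlay deployments.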
Be Open to Different Fabric Architectures: Not all data centers support 10,000 or 100,000 servers and require enormous scale. There are different approaches to building Ethernet Fabrics that are focused on converged I/O, simplicity of deployment, auto-provisioning, keeping east-west traffic at the ToR tier, etc. Many vendors offering Ethernet Fabrics offer product strategies to scale up as requirements demand.
Get Ready for Open Networking: In this Lippis/Ixia Test, we focused on the active-active protocols for all the reasons previously mentioned. When considering an Ethernet Fabric, it is important to focus on open networking: the integration of the network operating system with OpenStack, how ToRs and Cores support various SDN controllers, and whether the switches support OpenFlow or have a road map for its support. There are three types of networks in data centers today: L2/L3, network virtualization overlays that tunnel traffic through L2/L3, and, soon, OpenFlow flows. Consider those vendors that support all types of networking, as this is a fast-moving target. Auto-provisioning of networking with compute and storage is increasingly important; therefore, look for networking vendors that support network configuration via SDN controllers plus virtualization and cloud orchestration systems.
Terms of Use

This document is provided to help you understand whether a given product, technology or service merits additional investigation for your particular needs. Any decision to purchase a product must be based on your own assessment of suitability based on your needs. The document should never be used as a substitute for advice from a qualified IT or business professional. This evaluation was focused on illustrating specific features and/or performance of the product(s) and was conducted under controlled, laboratory conditions. Certain tests may have been tailored to reflect performance under ideal conditions; performance may vary under real-world conditions. Users should run tests based on their own real-world scenarios to validate performance for their own networks.
Reasonable efforts were made to ensure the accuracy of the data contained herein, but errors and/or oversights can occur. The test/audit documented herein may also rely on various test tools, the accuracy of which is beyond our control. Furthermore, the document relies on certain representations by the vendors that are beyond our control to verify. Among these is that the software/hardware tested is production or production track and is, or will be, available in equivalent or better form to commercial customers. Accordingly, this document is provided "as is," and Lippis Enterprises, Inc. (Lippis), gives no warranty, representation or undertaking, whether express or implied, and accepts no legal responsibility, whether direct or indirect, for the accuracy, completeness, usefulness or suitability of any information contained herein.
By reviewing this document, you agree that your use of any information contained herein is at your own risk, and you accept all risks and responsibility for losses, damages, costs and other consequences resulting directly or indirectly from any information or material available on it. Lippis is not responsible for, and you agree to hold Lippis and its related affiliates harmless from, any loss, harm, injury or damage resulting from or arising out of your use of or reliance on any of the information provided herein.
Lippis makes no claim as to whether any product or company described herein is suitable for investment. You should obtain your own independent professional advice, whether legal, accounting or otherwise, before proceeding with any investment or project related to any information, products or companies described herein. When foreign translations exist, the English document is considered authoritative. To assure accuracy, only use documents downloaded directly from www.lippisreport.com.
No part of any document may be reproduced, in whole or in part, without the specific written permission of Lippis. All trademarks used in the document are owned by their respective owners. You agree not to use any trademark in or as the whole or part of your own trademarks in connection with any activities, products or services which are not ours, or in a manner which may be confusing, misleading or deceptive, or in a manner that disparages us or our information, projects or developments.
About Nick Lippis

Nicholas J. Lippis III is a world-renowned authority on advanced IP networks, communications and their benefits to business objectives. He is the publisher of the Lippis Report, a resource for network and IT business decision makers to which over 35,000 executive IT business leaders subscribe. Its Lippis Report podcasts have been downloaded over 200,000 times; iTunes reports that listeners also download the Wall Street Journal's Money Matters, Business Week's Climbing the Ladder, The Economist and The Harvard Business Review's IdeaCast. He is also the co-founder and conference chair of the Open Networking User Group, which sponsors a bi-annual meeting of over 200 IT business leaders of large enterprises. Mr. Lippis is currently working with clients to design their private and public virtualized data center cloud computing network architectures with open networking technologies to reap maximum business value and outcome.
He has advised numerous Global 2000 firms on network architecture, design, implementation, vendor selection and budgeting, with clients including Barclays Bank, Eastman Kodak Company, Federal Deposit Insurance Corporation (FDIC), Hughes Aerospace, Liberty Mutual, Schering-Plough, Camp Dresser McKee, the state of Alaska, Microsoft, Kaiser Permanente, Sprint, Worldcom, Cisco Systems, Hewlett-Packard, IBM, Avaya and many others. He works exclusively with CIOs and their direct reports. Mr. Lippis possesses a unique perspective of market forces and trends occurring within the computer networking industry derived from his experience with both supply- and demand-side clients.
Mr. Lippis received the prestigious Boston University College of Engineering Alumni award for advancing the profession. He has been named one of the top 40 most powerful and influential people in the networking industry by Network World. TechTarget, an industry online publication, has named him a network design guru, while Network Computing Magazine has called him a star IT guru.

Mr. Lippis founded Strategic Networks Consulting, Inc., a well-respected and influential computer networking industry consulting concern, which was purchased by Softbank/Ziff-Davis in 1996. He is a frequent keynote speaker at industry events and is widely quoted in the business and industry press. He serves on the Dean of Boston University's College of Engineering Board of Advisors as well as many start-up venture firms' advisory boards. He delivered the commencement speech to Boston University College of Engineering graduates in 2007. Mr. Lippis received his Bachelor of Science in Electrical Engineering and his Master of Science in Systems Engineering from Boston University. His Master's thesis work included selected technical courses and advisors from Massachusetts Institute of Technology on optical communications and computing.