
Transcript of Layer 2 High Performance Networking Challenges and Breakthroughs: Lessons From GLIF 2012 and SC12 Demonstrations

Page 1:

Layer 2 High Performance Networking Challenges and Breakthroughs: Lessons From GLIF 2012 and SC12 Demonstrations

Marc Lyonnais, Ciena
Joe Mambretti, International Center for Advanced Internet Research, Northwestern University
Linda Winkler, Argonne National Laboratory
Special Collaboration: Rod Wilson, Ciena

TIP 2013, January 14, 2013

Page 2:

Introduction

This presentation describes experiments and demonstrations staged during the Global Lambda Grid Workshop (GLIF 2012) and the SC12 Supercomputing Conference.

These activities resulted from a collaboration among Ciena, iCAIR at Northwestern, Argonne National Laboratory, the StarLight Consortium, the Metropolitan Research and Education Network (MREN), and multiple other research partners.

The experiments and demonstrations utilized software and hardware prototypes based on closely integrated Ethernet and photonic transport technologies.

The demonstrations presented techniques designed to scale SDN and to reduce its network complexity at 100G rates. These techniques are still in early stages of deployment.

Page 3:

In This Presentation…

We will look into:
What drives bandwidth at and beyond 100G?
What complexities are observed at this early stage of building next generation Optical Open Exchanges?
What should be done to scale networks and Open Exchanges?
What are the next steps in this process?

Page 4:

What Drives Bandwidth At and Beyond 100G? – Data Intensive Science

Page 5:

Sloan Digital Sky Survey (www.sdss.org)
Globus Alliance (www.globus.org)
LIGO (www.ligo.org)
ALMA: Atacama Large Millimeter Array (www.alma.nrao.edu)
CAMERA metagenomics (camera.calit2.net)
Comprehensive Large-Array Stewardship System (www.class.noaa.gov)
DØ (DZero) (www-d0.fnal.gov)
ISS: International Space Station (www.nasa.gov/station)
IVOA: International Virtual Observatory (www.ivoa.net)
BIRN: Biomedical Informatics Research Network (www.nbirn.net)
GEON: Geosciences Network (www.geongrid.org)
ANDRILL: Antarctic Geological Drilling (www.andrill.org)
GLEON: Global Lake Ecological Observatory Network (www.gleon.org)
Pacific Rim Applications and Grid Middleware Assembly (www.pragma-grid.net)
CineGrid (www.cinegrid.org)
Carbon Tracker (www.esrl.noaa.gov/gmd/ccgg/carbontracker)
XSEDE (www.xsede.org)
LHCONE (www.lhcone.net)
WLCG (lcg.web.cern.ch/LCG/public/)
OOI-CI (ci.oceanobservatories.org)
OSG (www.opensciencegrid.org)
SKA (www.skatelescope.org)
NG Digital Sky Survey
ATLAS

Compilation By Maxine Brown

StarLight Supports All Major Data Intensive Science Projects

Page 6:

Global Lambda Integrated Facility: A World Wide Facility Supporting Multiple Networks and Testbeds

Visualization courtesy of Bob Patterson, NCSA; data compilation by Maxine Brown, UIC. www.glif.is

Page 7:

GLIF 2012, Hosted By EVL and iCAIR and Supported By StarLight: 14 Major Demonstrations, Including 4K Uncompressed Streaming Transport

[Network diagram: Ciena MEN LAB 10 in Ottawa – StarLight in Chicago – EVL Quad-HD Lab, 710NLSD, UIC]
• 40G connectivity from Ciena Ottawa to the StarLight gigapop in Chicago: 40G through 1100 km
• 2 x 10G DWDM link to the EVL Lab

Page 8:

High Performance Digital Media Network (HPDMnet): Dynamically Provisioned Inter-Domain International Service for High Performance Digital Media and Other Data Intensive Applications

The HPDMnet Consortium is working with the GLIF community on developing and implementing an architecture for new types of advanced network services.
The Consortium is designing and developing new dynamically provisioned L1/L2 capabilities for large scale HPDM services, which can be used for any data intensive application, including world-wide L1/L2 services for high performance, high quality, large volume streaming digital media.
Key attributes: capacity, flexibility, quality, reliability, “green”.
The Consortium is developing services specifically for implementation at GLIF GOLEs and related facilities.

Page 9:

Page 10:

An Advanced International Distributed Programmable Environment for Experimental Network Research: “Slice Around the World” Demonstration

A Demonstration and Presentation By the Consortium for International Advanced Network Research

Leads for Participating Organizations: Ilia Baldine, Andy Bavier, Scott Campbell, Jeff Chase, Jim Chen, Cees de Laat, Dongkyun Kim, Te-Lung Liu, Luis Fernandez Lopez, Mon-Yen Lou, Joe Mambretti, Rick McGeer, Paul Muller, Aki Nakao, Max Ott, Ronald van der Pol, Martin Reed, Rob Ricci, Ruslan Smeliansky, Marcos Rogerio Salvador, Myung-Ki Shin, Michael Stanton, Jungling Yu

Page 11:

Overview

This International Research Consortium is creating a large scale distributed environment (platform) for basic network science research, experiments, and demonstrations.
The initiative is designing, implementing, and demonstrating an international, highly distributed environment (at global scale) that can be used for advanced next generation communications and networking.
The environment is much more than “a network”: it also includes programmable clouds, sensors, instruments, specialized devices, and other resources.
This environment is the world’s most extensive international OpenFlow research network.
The US National Science Foundation’s Global Environment for Network Innovations (GENI) is a major partner in this initiative.

Page 12:

Page 13:

Page 14:

Page 15:

Page 16:

Other SC12 Demonstrations

At SC12, the 100 Gbps Consortium demonstrated multiple architectural approaches, tools, and methods related to using 100 Gbps channels across multiple LAN and WAN environments.
Devices from multiple vendors were used, integrating available components and emerging beta components not yet commercialized.
The WAN environment included:
A 100 Gbps channel established on the Sidera network between the NASA Goddard Space Flight Center in Greenbelt, Maryland, the Mid-Atlantic Crossroads (MAX) on the east coast, and the StarLight International/National Communications Exchange in Chicago (the MREN StarWave facility).
A 100 Gbps channel between StarLight, on ESnet, and the SC12 conference center in Salt Lake City, interconnecting to a 100 Gbps ring among booths on the SC12 show floor.

Page 17:

100 Gbps Services for Petascale Science Project Partners

NASA Goddard Space Flight Center
International Center for Advanced Internet Research (iCAIR) at Northwestern University
Mid-Atlantic Crossroads (MAX)
Argonne National Laboratory
StarLight International/National Communications Exchange Facility
Sidera Networks
Ciena
The Metropolitan Research and Education Network (MREN)
Energy Sciences Network (ESnet)
Laboratory for Advanced Computing
The Open Cloud Consortium (OCC)

Page 18:

Page 19:

SC12 Version of Playing with Light Using Video Sources and Sinks

[Diagram: sink-source video endpoints interconnected through photonic elements (CCMD 12, SMD, 8x2 WSS)]
A Software Defined Network that could reflect the application interconnection at Layer 1.

Page 20:

Early Observations

High bandwidth requirements and deployments beyond 10G are increasing rapidly to 100G.
100G capacity provides major opportunities for new services (i.e., more than mere aggregation; increased L1/L2 deployments vs L3).
New optical technology is emerging: e.g., using phase modulation instead of amplitude modulation when exceeding 10G.
This capacity is required for new segmentation techniques, e.g., Software Defined Networking (SDN), which allows for much more flexibility, customization, new standards, and new techniques (e.g., OpenFlow).
Demonstrations for GLIF 2012 and SC12 transiting the StarLight exchange facility performed as expected, i.e., very well.
Last minute component provisioning posed some problems (LR4 vs LR10 issues).
Testing paths with a 100GE test set saved a lot of time.
Optimizations, e.g., L1 vs L2 scalability, are still unclear.
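The segmentation flexibility SDN offers can be illustrated with a toy match-action flow table. This is a minimal sketch in the spirit of OpenFlow, not the protocol itself; the port and VLAN numbers are invented for the example.

```python
class FlowTable:
    """Toy match-action table in the spirit of OpenFlow: each entry matches
    on (ingress port, VLAN ID) and maps to an egress port."""

    def __init__(self):
        self.entries = {}  # (in_port, vlan_id) -> out_port

    def add_flow(self, in_port, vlan_id, out_port):
        self.entries[(in_port, vlan_id)] = out_port

    def forward(self, in_port, vlan_id):
        # Table miss: return None (drop). A real OpenFlow switch would
        # punt the packet to its controller instead.
        return self.entries.get((in_port, vlan_id))


table = FlowTable()
table.add_flow(in_port=1, vlan_id=100, out_port=7)  # one tenant's segment
table.add_flow(in_port=1, vlan_id=200, out_port=8)  # another tenant, same ingress
```

Two VLANs arriving on the same ingress port are steered to different egress ports independently: the per-flow customization referred to above, expressed as data.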

Page 21:

What Issues Need To Be Resolved In Order to Scale the Network and Open Exchanges?

Page 22:

Layer 1 vs. Layer 2 as an Interconnection Option For an Open Light Exchange

Unresolved to date: when are Layer 2 interconnections preferable for an Open Light Exchange compared to Layer 1 interconnections? Both approaches have positive attributes. Does one preclude the other?
Interconnecting many point to point sites raises this question:
Should/can a simple OTN Layer 1 cross connection do the trick?
Or should Layer 2, which is VLAN sensitive, be used at the intermediate interconnection points, e.g., at StarLight?
Layer 1 greatly reduces the complexity of interconnections, e.g., fewer steps.
Layer 2 provides statistical multiplexing; but at 100G, is this needed? Yes, but primarily at the endpoints.
Should a hybrid fabric be implemented that introduces the flexibility of both in the same network element?
The choice is highly dependent on application requirements.
More investigation of these issues is required, especially for scalability.
These are the questions we are looking at for SC13.
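The statistical multiplexing trade-off above can be made concrete with a toy on/off traffic model. This is an illustrative sketch only; the source count, peak rate, and duty cycle are invented for the example, not measurements from the demonstrations.

```python
import random


def stat_mux_gain(n_sources, peak_gbps, duty_cycle, samples=10_000, seed=7):
    """Estimate the statistical-multiplexing gain of a shared Layer 2 trunk:
    the sum of per-source peak rates divided by the observed aggregate peak,
    for independent on/off (bursty) sources."""
    rng = random.Random(seed)
    aggregate_peak = 0.0
    for _ in range(samples):
        # Each source is "on" (sending at peak rate) with probability duty_cycle.
        aggregate = sum(peak_gbps for _ in range(n_sources)
                        if rng.random() < duty_cycle)
        aggregate_peak = max(aggregate_peak, aggregate)
    return (n_sources * peak_gbps) / aggregate_peak


# Ten bursty 10G sources at 30% duty cycle sharing one trunk: the trunk can be
# provisioned below the 100G sum of peaks.
gain = stat_mux_gain(n_sources=10, peak_gbps=10.0, duty_cycle=0.3)
```

A gain above 1 means the shared trunk needs less capacity than the sum of the peaks. For a single point to point 100G circuit crossing the core there is nothing to multiplex, which is the intuition behind "primarily at the endpoints": aggregation happens where many bursty flows meet.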

Page 23:

What will be the submarine interconnect? OTN or Ethernet?

Visualization courtesy of Bob Patterson, NCSA; data compilation by Maxine Brown, UIC. www.glif.is

Page 24:

Why OTN

OTN, by definition, protects the synchronization domain of the payload.
In the initial interconnection of high bandwidth links, we need to reflect on how a slower clock rate will gradually influence the buffer outflow rate. In other words, if you fill the buffer more slowly than you are emptying it, you will run into buffer underrun.
The idea? Same buffer rate in and out.
The Ethernet frame fits inside OTN; in fact, this is what is used in transponders to transport Ethernet.
By using a Layer 1 protocol to transport or switch bandwidth, you may be able to simplify your network's point to point connections by introducing some EPL circuits.
By using Ethernet switching at each end and OTN throughout the path, the interconnects are far simpler to manage.
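The clock-rate argument above can be sketched numerically. The rates here are hypothetical (a 100 Gbps drain against a fill clock running 100 ppm slow); the point is only that any sustained mismatch eventually walks a fixed buffer to one of its limits.

```python
def buffer_trace(fill_gbps, drain_gbps, buffer_bits, start_bits, steps, dt_s=1e-6):
    """Track buffer occupancy (in bits) when fill and drain clocks differ.
    Occupancy is clamped to [0, buffer_bits]: hitting 0 is an underrun,
    hitting buffer_bits is an overrun."""
    level = float(start_bits)
    trace = []
    for _ in range(steps):
        level += (fill_gbps - drain_gbps) * 1e9 * dt_s  # net bits per tick
        level = max(0.0, min(level, float(buffer_bits)))
        trace.append(level)
    return trace


# A fill clock 100 ppm slower than the drain: the half-full buffer
# loses ~10 bits per microsecond and drains to empty (underrun).
trace = buffer_trace(fill_gbps=99.99, drain_gbps=100.0,
                     buffer_bits=1_000_000, start_bits=500_000, steps=100_000)
```

With matched clocks ("same buffer rate in and out") the occupancy stays put indefinitely, which is exactly the property OTN's synchronization domain preserves for the payload.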

Page 25:

Conclusion

Page 26:

What We Talked About

Bandwidth at and beyond 100G is required, and some solutions are emerging.
However, complexities can be observed at this early stage of building next generation optical open exchanges based on 100G services.
Scaling L1/L2 services at open exchanges is a key issue.
The next steps in this process:
Implementing production 100G services.
Preparing for more complex 100G demonstrations as proof points for next generation 100G services at future events, including SC13.

Page 27:

Thank you