Evolution of OSCARS


Description

On-demand Secure Circuits and Advance Reservation System (OSCARS) has evolved tremendously since its conception as a DOE-funded ESnet project in 2004. Since then, it has grown from a research project into a collaborative open-source software project with production deployments in several R&E networks, including ESnet and Internet2. In the latest release, version 0.6, the software was redesigned to flexibly accommodate both research and production needs. It is currently being used by several research projects to study path computation algorithms and demonstrate multi-layer circuit management. Most recently, OSCARS 0.6 was leveraged to support production-level bandwidth management in the ESnet ANI 100G prototype network, SCinet at SC11 in Seattle, and the Internet2 DYNES project. This presentation will highlight the evolution of OSCARS, activities surrounding OSCARS v0.6 and lessons learned, and share with the community the roadmap for future development that will be discussed within the open-source collaboration.

Transcript of Evolution of OSCARS

Page 1: Evolution of OSCARS

Evolution of OSCARS

Chin Guok, Network Engineer

ESnet Network Engineering Group

Winter 2012 Internet2 Joint Techs

Baton Rouge, LA

Jan 23, 2012

Page 2: Evolution of OSCARS

Lawrence Berkeley National Laboratory U.S. Department of Energy | Office of Science

Outline

What was the motivation for OSCARS

History of OSCARS

What’s new with OSCARS (v0.6)

Where is OSCARS (v0.6)

Who’s using OSCARS

OSCARS supporting research

What have we learned

What’s next for OSCARS

OSCARS in a nutshell


Page 3: Evolution of OSCARS


What was the motivation for OSCARS

•  A 2002 DOE Office of Science High-Performance Network Planning Workshop identified bandwidth-on-demand as the most important new network service, supporting:

   •  Massive data transfers for collaborative analysis of experiment data
   •  Real-time data analysis for remote instruments
   •  Control channels for remote instruments
   •  Deadline scheduling for data transfers
   •  “Smooth” interconnection for complex Grid workflows (e.g. LHC)

•  Analysis of ESnet traffic in early 2004 indicated that the top 100 host-to-host flows accounted for 50% of the utilized bandwidth


Page 4: Evolution of OSCARS


History of OSCARS

•  OSCARS v0.6 RC1 released (Dec 2011)
•  OSCARS v0.6 field tested in SCinet (SC11) and the ESnet ANI 100G prototype network (Nov 2011)
•  OSCARS interops with OGF NSI protocol v1 using an adapter at the NSI Plugfest at GLIF Rio; participants include OpenDRAC (SURFnet), OpenNSA (NORDUnet), OSCARS (ESnet), G-lambda (AIST), G-lambda (KDDI Labs), AutoBAHN (GÉANT project), and dynamicKL (KISTI) (Sep 2011)
•  OSCARS v0.6 SDK released, allowing researchers to build and test PCEs within a flexible path computation framework (Jan 2011)


•  Funded as a DOE project (Aug 2004)

•  First production use of an OSCARS circuit to reroute LHC Service Challenge traffic due to a transatlantic fiber cut (Apr 2005)
•  Collaboration with Internet2 BRUW project (Feb 2005)

•  First dynamic Layer-3 interdomain VC between ESnet (OSCARS) and Internet2 (BRUW) (Apr 2006)
•  Formation of the DICE (DANTE, Internet2, CANARIE, ESnet) Control Plane WG (Mar 2006)
•  Collaboration with GÉANT AMPS project (Mar 2006)

•  Successful Layer-2 reservation between ESnet (OSCARS), GÉANT2 (AutoBAHN), and Nortel (DRAC) (Nov 2007)
•  First dynamic Layer-2 interdomain VC between ESnet (OSCARS) and Internet2 (HOPI) (Oct 2007)
•  First ESnet Layer-2 VC configured by OSCARS (Aug 2007)
•  Adoption of OGF NMWG topology schema in consensus with DICE Control Plane WG (May 2007)

•  Successful control plane interop between OSCARS and G-lambda using GLIF GNI-API GUSI (GLIF Unified Service Interface) (Dec 2008)
•  DICE Inter-Domain Controller (IDC) Protocol v1.0 specification completed (May 2008)

•  First use of OSCARS by SCinet (SC09) to manage bandwidth challenges (Nov 2009)
•  Successful control and data plane interop between OSCARS, G-lambda, and Harmony using GLIF GNI-API Fenius (Nov 2009)
•  Draft architecture design for OSCARS v0.6 (Jan 2009)

•  DICE IDCP v1.1 released to support brokered notification (Feb 2010)


Page 5: Evolution of OSCARS


What’s new with OSCARS (v0.6)

•  Up until OSCARS v0.5, the code was tailored specifically to production deployment requirements

   •  Monolithic structure with distinct functional modules

•  In OSCARS v0.6, the entire code base was re-factored to focus on enabling both research and production customization

   •  Distinct functions are now individual processes with distinct web-services interfaces

   •  Flexible PCE framework architecture allows “modular” PCEs to be configured into the path computation workflow

   •  Extensible PSS module allows for multi-layer, multi-technology, multi-point circuit provisioning

   •  The protocol used to make requests to OSCARS (the IDC protocol) was modified to include an “optional constraints field” to allow testing of augmented (research) features without disrupting the production service model


Page 6: Evolution of OSCARS


Modularization in OSCARS v0.6

* Current focus of research projects

OSCARS Inter-Domain Controller (IDC) modules, each a distinct data and control plane function:

•  IDC API: Manages External WS Communications
•  Coordinator: Workflow Coordinator
•  Web Browser User Interface
•  AuthN: Authentication
•  AuthZ*: Authorization, Costing
•  Notification Broker: Manage Subscriptions, Forward Notifications
•  Resource Manager: Manage Reservations, Auditing
•  PCE*: Constrained Path Computations
•  Topology Bridge: Topology Information Management
•  Path Setup*: Network Element Interface
•  Lookup: Lookup service

The IDC interfaces with Users, User Apps, Other IDCs, and Local Network Resources.

Page 7: Evolution of OSCARS


Flexible PCE Framework

Supports arbitrary execution of distinct PCEs

•  Example graph of PCE modules (Constraints = Network Element Topology Data), executed by the PCE Runtime:

   •  Policy PCE1 takes the User Constraints and emits User + PCE1 Constraints
   •  Latency PCE2 emits User + PCE(1+2) Constraints
   •  CSPF PCE3 and B/W PCE4 each take User + PCE(1+2) Constraints and emit User + PCE(1+2+3) and User + PCE(1+2+4) Constraints, respectively
   •  Dijkstra PCE5 and A* PCE6 each take User + PCE(1+2+4) Constraints and emit User + PCE(1+2+4+5) and User + PCE(1+2+4+6) Constraints, respectively
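The chained-constraint flow described above can be sketched as follows. This is an illustrative sketch only, not the actual OSCARS v0.6 PCE API: the class names, the dictionary-based topology model, and the pruning rules are all invented for the example; the real framework passes NMWG-style topology constraints between PCE processes.

```python
# Sketch of a chained-constraint PCE workflow: each PCE module receives a
# constraint set (here, a pruned topology) and returns a further-constrained
# copy for the next module, mirroring User + PCE1 + PCE2 + ... accumulation.

from dataclasses import dataclass

@dataclass
class Constraints:
    links: dict  # link name -> available bandwidth (Gbps); hypothetical model

class PolicyPCE:
    """Drops links the requester is not authorized to use (made-up rule)."""
    def compute(self, c):
        return Constraints({l: bw for l, bw in c.links.items()
                            if not l.startswith("restricted")})

class BandwidthPCE:
    """Keeps only links with enough headroom for the reservation."""
    def __init__(self, required_gbps):
        self.required = required_gbps
    def compute(self, c):
        return Constraints({l: bw for l, bw in c.links.items()
                            if bw >= self.required})

def run_chain(pces, constraints):
    # The runtime executes modules in sequence; each refines the
    # constraint set produced by its predecessors.
    for pce in pces:
        constraints = pce.compute(constraints)
    return constraints

topology = Constraints({"a-b": 100, "b-c": 5, "restricted-x": 100})
result = run_chain([PolicyPCE(), BandwidthPCE(10)], topology)
print(sorted(result.links))  # only the authorized, high-bandwidth link remains
```

Branching (e.g. CSPF vs. B/W above) would run two such chains on copies of the same intermediate constraint set; the sketch shows only a single path through the graph.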

Page 8: Evolution of OSCARS


Extensible PSS Module

Multi-technology PSS:

•  Operation Decomposer: path analysis and notification
•  Recomposer
•  Technology-specific PSS modules: EoMPLS PSS, DRAGON PSS, OpenFlow PSS
•  PSS Framework:
   •  Modular device management (address resolution)
   •  Modular management connection method
   •  Workflow Engine
   •  Configuration Engine: modular configuration generation (per device model & service)
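The decompose/dispatch/recompose pattern above can be sketched as follows. This is not the actual OSCARS PSS code; the class names, the (technology, segment) path representation, and the returned status dictionary are assumptions made for illustration.

```python
# Sketch of a multi-technology PSS: a path is decomposed into per-segment
# setup operations, each dispatched to the PSS module for that technology,
# and the per-segment results are recomposed into a single outcome.

class EoMPLSPSS:
    def setup(self, segment):
        return f"EoMPLS pseudowire configured on {segment}"

class OpenFlowPSS:
    def setup(self, segment):
        return f"OpenFlow rules installed on {segment}"

class MultiTechnologyPSS:
    def __init__(self, modules):
        self.modules = modules  # technology name -> PSS module

    def provision(self, path):
        # Decompose: path analysis yields (technology, segment) pairs.
        results = [self.modules[tech].setup(seg) for tech, seg in path]
        # Recompose: aggregate per-segment outcomes into one status report.
        return {"segments": results, "status": "ok"}

pss = MultiTechnologyPSS({"eompls": EoMPLSPSS(), "openflow": OpenFlowPSS()})
outcome = pss.provision([("eompls", "esnet-core"), ("openflow", "campus-edge")])
print(outcome["status"], len(outcome["segments"]))
```

Keeping each technology behind the same `setup` interface is what makes the module extensible: adding a new device family means adding one module, not touching the decomposer.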

Page 9: Evolution of OSCARS


Protocol Extension to IDCP

•  “optionalConstraint” added to support research features without constant need to change the base protocol
•  Enhancements prototyped in “optionalConstraint” will migrate to the base protocol once they have been baked

<xsd:complexType name="resCreateContent">
  <xsd:sequence>
    <xsd:element name="messageProperties" type="authP:messagePropertiesType" maxOccurs="1" minOccurs="0"/>
    <xsd:element name="globalReservationId" type="xsd:string" maxOccurs="1" minOccurs="0"/>
    <xsd:element name="description" type="xsd:string"/>
    <xsd:element name="userRequestConstraint" type="tns:userRequestConstraintType" maxOccurs="1" minOccurs="1"/>
    <xsd:element name="reservedConstraint" type="tns:reservedConstraintType" maxOccurs="1" minOccurs="0"/>
    <xsd:element name="optionalConstraint" type="tns:optionalConstraintType" maxOccurs="unbounded" minOccurs="0"/>
  </xsd:sequence>
</xsd:complexType>

<xsd:complexType name="optionalConstraintType">
  <xsd:sequence>
    <xsd:element name="value" type="tns:optionalConstraintValue"/>
  </xsd:sequence>
  <xsd:attribute name="category" type="xsd:string" use="required"/>
</xsd:complexType>

<xsd:complexType name="optionalConstraintValue">
  <xsd:sequence>
    <xsd:any maxOccurs="unbounded" namespace="##other" processContents="lax"/>
  </xsd:sequence>
</xsd:complexType>
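As a rough illustration of an instance that fits the optionalConstraintType schema, the following sketch builds one with Python's standard library. The category string, namespace URI, and payload element are invented for the example; the schema only requires a "category" attribute and a <value> element wrapping arbitrary content from another namespace (processContents="lax").

```python
# Build a hypothetical optionalConstraint element: category attribute,
# a <value> child, and a namespaced research payload inside it.

import xml.etree.ElementTree as ET

RESEARCH_NS = "http://example.net/research-constraints"  # made-up namespace

def make_optional_constraint(category, payload_tag, payload_text):
    oc = ET.Element("optionalConstraint", attrib={"category": category})
    value = ET.SubElement(oc, "value")
    # Any element from another namespace is allowed inside <value>.
    child = ET.SubElement(value, f"{{{RESEARCH_NS}}}{payload_tag}")
    child.text = payload_text
    return oc

elem = make_optional_constraint("research-pce", "hopLimit", "5")
print(ET.tostring(elem, encoding="unicode"))
```

Because the schema declares the payload lax and unbounded, a research feature can ship arbitrarily structured hints this way while production servers simply ignore categories they do not recognize.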


Page 10: Evolution of OSCARS


Where is OSCARS (v0.6)

OSCARS v0.6 is gaining adoption and seeing production deployments

•  Field tested at SC11
   •  Deployed by SCinet to manage bandwidth for demos on the show floor
   •  Modified (PSS) by Internet2 to manage OpenFlow switches
   •  Modified (Coordinator and PSS) by ESnet to broker bandwidth and coordinate workflow
•  Currently deployed in the ESnet 100G Prototype Network
   •  Modified (PSS) to support ALU devices and “multi-point” circuits
•  Adopted by Internet2 for NDDI and DYNES
   •  IU GRNOC has modified OSCARS v0.6 (PSS and PCE) to support NDDI OS3E

OSCARS v0.6 RC1 is now available (open-source BSD license)

•  https://code.google.com/p/oscars-idc/wiki/OSCARS_06_VM_Installation


Page 11: Evolution of OSCARS


Who’s using OSCARS

•  Currently deployed in 21 networks, including wide-area backbones, regional networks, exchange points, local-area networks, and testbeds
•  Under consideration for an additional 26 greenfield deployments in 2012

[Map: deployments using OSCARS v0.6 are shown above the dotted line]


Page 12: Evolution of OSCARS


OSCARS Supporting Research

Changes made in OSCARS v0.6 have been extremely useful in supporting research projects, e.g.:

•  Advance Resource Computation for Hybrid Service and Topology Networks (ARCHSTONE)

•  http://archstone.east.isi.edu

•  Coordinated Multi-layer Multi-domain Optical Network Framework for Large-scale Science Applications (COMMON)

•  http://www.cis.umassd.edu/~vvokkarane/common/

•  End Site Control Plane System (ESCPS)

   •  https://plone3.fnal.gov/P0/ESCPS

•  Network-Aware Data Movement Advisor (NADMA)

   •  http://www2.cs.siu.edu/~mengxia/NADMA/

•  Virtual Network On-Demand (VNOD)

   •  https://wiki.bnl.gov/CSC/index.php/VNOD


Page 13: Evolution of OSCARS


What have we learned

•  Solve a real problem
   •  Understanding the practical problem made it easy to judge if OSCARS met the original design requirements
•  Be practical
   •  It was necessary to “punt” on or simplify specific designs in OSCARS in order to scope the problem and solve the core issues
•  Balance optimization and customizability
   •  As OSCARS started gaining adoption in various networks, it became evident that the ability to customize OSCARS was more important than optimization. This was the driver for OSCARS v0.6.
•  Engage the right community
   •  For OSCARS to be successful in an end-to-end, multi-domain environment, it was necessary to engage communities (e.g. DICE, GLIF) with similar goals and requirements

•  OSCARS has been and is a collaborative effort!


Page 14: Evolution of OSCARS


What’s next for OSCARS

•  Short-Term
   •  Evaluate contributions from collaborative research projects (e.g. ARCHSTONE, COMMON), and certify code for production deployment
   •  Support standard protocols (e.g. OGF NSI) natively in OSCARS
   •  Evaluate and develop solutions driven by production requirements (e.g. multi-point overlay networks, Nagios plug-ins, packaging)
•  Long-Term
   •  Explore co-scheduling and a composable services framework to advance “intelligent networking”
   •  Develop strategies for OSCARS to mature in a larger community base (e.g. FreeBSD)


Page 15: Evolution of OSCARS



OSCARS in a nutshell

For more info: https://code.google.com/p/oscars-idc/


•  OSCARS is an open-source collaborative project

•  OSCARS is gaining critical mass in deployment across networks

•  OSCARS is supporting exciting new research in the area of intelligent networking

•  OSCARS supports network resource management and “MOMS” (Movement Of Massive Science)

Page 16: Evolution of OSCARS


Questions?


[email protected] | Chin Guok