DWDM-RAM: DARPA-Sponsored Research for Data Intensive Service-on-Demand Advanced Optical Networks
DWDM-RAM demonstration sponsored by Nortel Networks and iCAIR/Northwestern University
Dates & Times: Monday Oct 6 at 4pm & 6pm; Tuesday Oct 7 at 12 Noon, 2pm & 4pm; Wednesday Oct 8 at 10am & 12 Noon
SLAC Agenda
• DWDM-RAM Overview
  – The Problem
  – Our Architecture & Approach
• Discussion with HEP Community
Problem: More Data Than Network
Application-level network scheduling
  – Application must see dedicated bandwidth as a managed resource
  – Advance scheduling of network from application
  – Optimization is important
Rescheduling with under-constrained requests
  – Data transfers require a service model
Scheduled network and host data services combined
  – Co-reservation of storage, data, and network
  – Requires scheduling
Optical Networks Change the Current Pyramid
(George Stix, Scientific American, January 2001)
DWDM: a fundamental imbalance between computation and communication
Radical mismatch: L1 – L3
• Radical mismatch between the optical transmission world and the electrical forwarding/routing world.
• Currently, a single strand of optical fiber can transmit more bandwidth than the entire Internet core.
• Current L3 architecture can't effectively transmit PetaBytes or 100s of TeraBytes.
• Current L1-L0 limitations: manual allocation, takes 6-12 months, static.
  – Static means: not dynamic, no end-point connection, no service architecture, no glue layers, no application underlay routing.
Growth of Data-Intensive Applications
• IP data transfer: 1.5 TB (10^12 bytes) in 1.5 KB packets
  – Routing decisions: ~1 billion times (10^9)
  – Over every hop
• Web, Telnet, email – small files
• Fundamental limitations with data-intensive applications – multi-TeraBytes or PetaBytes of data
  – Moving 10 KB and 10 GB (or 10 TB) are different (x10^6, x10^9)
  – 1 Mb/s and 10 Gb/s are different (x10^4)
Challenge: Emerging data-intensive applications require:
• Extremely high performance, long-term data flows
• Scalability for data volume and global reach
• Adjustability to unpredictable traffic behavior
• Integration with multiple Grid resources
Response: DWDM-RAM, an architecture for data-intensive Grids enabled by next-generation dynamic optical networks, incorporating new methods for lightpath provisioning
Architecture
[Architecture diagram: applications (ftp, GridFTP, SABUL, FAST, etc.) and supporting services (replication, disk, accounting, authentication, etc.) sit above the DTS, DMS, NRM, and other services; OGSI is provided for all application-layer interfaces; below, the NRM drives ODIN/OMNInet λ's and other DWDM networks.]
Data Management Services
• OGSA/OGSI compliant
• Capable of receiving and understanding application requests
• Has complete knowledge of network resources
• Transmits signals to intelligent middleware
• Understands communications from Grid infrastructure
• Adjusts to changing requirements
• Understands edge resources
• On-demand or scheduled processing
• Supports various models for scheduling, priority setting, event synchronization
Intelligent Middleware for Adaptive Optical Networking
• OGSA/OGSI compliant
• Integrated with Globus
• Receives requests from data services
• Knowledgeable about Grid resources
• Has complete understanding of dynamic lightpath provisioning
• Communicates to the optical network services layer
• Can be integrated with GRAM for co-management
• Architecture is flexible and extensible
Dynamic Lightpath Provisioning Services
Optical Dynamic Intelligent Networking (ODIN)
• OGSA/OGSI compliant
• Receives requests from middleware services
• Knowledgeable about optical network resources
• Provides dynamic lightpath provisioning
• Communicates to the optical network protocol layer
• Precise wavelength control
• Intradomain as well as interdomain
• Contains mechanisms for extending lightpaths through E-Paths (electronic paths)
DWDM-RAM Service Control Architecture
[Diagram: a Grid service request enters the Data Grid Service Plane, where service control at each data center coordinates the transfer; the Network Service Plane passes network service requests over the optical control network to the OMNInet control plane (ODIN, UNI-N, connection control); on the data transmission plane, data-center storage is linked through L2 switches and L3 routers over the provisioned data path.]
Design for Scheduling
• Network and data transfers are scheduled
  – The Data Management schedule coordinates network, retrieval, and sourcing services (using their schedulers)
  – Network Management has its own schedule
• Variety of request models (see the sketch after this list)
  – Fixed: at a specific time, for a specific duration
  – Under-constrained: e.g. ASAP, or within a window
• Auto-rescheduling for optimization
  – Facilitated by under-constrained requests
  – Data Management reschedules for its own requests or at the request of Network Management
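To make the two request models concrete, here is a minimal Java sketch of a transfer request carrying a time window; the class and field names are illustrative assumptions, not part of the DWDM-RAM interfaces.

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical request record illustrating the two request models above;
// the class and field names are assumptions, not DWDM-RAM interfaces.
public final class TransferRequest {
    final Instant windowStart;   // earliest acceptable start
    final Instant windowEnd;     // latest acceptable completion
    final Duration duration;     // how long the lightpath is needed

    private TransferRequest(Instant windowStart, Instant windowEnd, Duration duration) {
        this.windowStart = windowStart;
        this.windowEnd = windowEnd;
        this.duration = duration;
    }

    // Fixed request: start at exactly 'start' for 'duration'.
    static TransferRequest fixed(Instant start, Duration duration) {
        return new TransferRequest(start, start.plus(duration), duration);
    }

    // Under-constrained request: any 'duration'-long slot inside the window,
    // leaving the scheduler free to place (and later re-place) it.
    static TransferRequest within(Instant windowStart, Instant windowEnd, Duration duration) {
        return new TransferRequest(windowStart, windowEnd, duration);
    }
}
```

An under-constrained request of this kind is what allows the auto-rescheduling described above: the scheduler may move the transfer anywhere inside the window without renegotiating with the application.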
New Concepts
• Many-to-Many vs. Few-to-Few
• Apps optimized to waste bandwidth
• Network as a Grid service
• Network as a scheduled service
• New transport concept
• New control plane
• Cloud bypass
Summary
Next-generation optical networking provides significant new capabilities for Grid applications and services, especially for high-performance, data-intensive processes.
The DWDM-RAM architecture provides a framework for exploiting these new capabilities.
These conclusions are not only conceptual – they are being proven and demonstrated on OMNInet, a wide-area metro advanced photonic testbed.
End of SLAC mini-overview, 3 March 2004
Key Terms
• DTS – Data Transfer Service: effects transfers
• NRM – Network Resource Management: interface to multiple physical/logical network types; consolidation, topology discovery, path allocation, scheduler, etc.
• DMS – Data Management Service: topology discovery, route creation, path allocation; scheduler/optimizer
• Other Services: replication, disk, accounting, authentication, security, etc.
Possible Extensions
• Authentication/security in multi-domain environments
• Replication for optimization
  – May help refine current Grid file system models
  – May use existing replica location services
• Priority models
  – Rule-based referees
• Allow local and policy-based management
• Add domain-specific constraints
Extending Grid Services
• OGSI interfaces
  – Web Service implemented using SOAP and JAX-RPC
  – Non-OGSI clients also supported
• GARA and GRAM extensions
  – Network scheduling is a new dimension
  – Under-constrained (conditional) requests
  – Elective rescheduling/renegotiation
• Scheduled data resource reservation service (“Provide 2 TB storage between 14:00 and 18:00 tomorrow”)
DWDM-RAM: an architecture designed to meet the networking challenges of extremely large-scale Grid applications. Traditional network infrastructure cannot meet these demands, especially the requirements for intensive data flows.
DWDM-RAM Components Include:
• Data management services
• Intelligent middleware
• Dynamic lightpath provisioning
• State-of-the-art photonic technologies
• Wide-area photonic testbed implementation
Current Implementation
[Diagram: the client application performs ftp transfers through the DTS and DMS; OGSI is provided for the network allocation interfaces; the NRM drives ODIN on OMNInet to provision λ's.]
NRM OGSA Compliance
• OGSI interface
• GridService PortType with two application-oriented methods (a minimal Java binding is sketched below):
  – allocatePath(fromHost, toHost, ...)
  – deallocatePath(allocationID)
• Usable by a variety of Grid applications
• Java-oriented SOAP implementation using the Globus Toolkit 3.0
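As a rough illustration of the two methods listed above, the following is a minimal sketch of how such a portType might look as a plain Java interface; the parameter and return types (and the interface name) are assumptions rather than the actual GT3-generated stubs.

```java
// Hypothetical Java view of the two NRM portType operations named above.
// Parameter and return types (and the interface name) are assumptions,
// not the actual GT3-generated stubs.
public interface NetworkResourcePortType extends java.rmi.Remote {

    // Request a lightpath between two end hosts; the returned allocation ID
    // is later passed to deallocatePath().
    String allocatePath(String fromHost, String toHost) throws java.rmi.RemoteException;

    // Release the lightpath identified by allocationID.
    void deallocatePath(String allocationID) throws java.rmi.RemoteException;
}
```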
NRM Web Services Compliance
• Accessible as a Web Service for non-OGSI callers (a stub-less SOAP call is sketched below)
• Fits the Web Service model:
  – Single-location, always-on service
  – Atomic, message-oriented transactions
  – State preserved where necessary at the application level
• No OGSI extensions, such as service data and service factories
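Since the slides name SOAP and JAX-RPC, a non-OGSI caller could reach allocatePath through the JAX-RPC dynamic invocation interface roughly as below; the service QName, endpoint URL, and parameter names are placeholders, not the actual deployment values.

```java
import javax.xml.namespace.QName;
import javax.xml.rpc.Call;
import javax.xml.rpc.ParameterMode;
import javax.xml.rpc.Service;
import javax.xml.rpc.ServiceFactory;

public class NonOgsiNrmClient {
    public static void main(String[] args) throws Exception {
        QName xsdString = new QName("http://www.w3.org/2001/XMLSchema", "string");

        // Dynamic (stub-less) JAX-RPC invocation; the service QName, endpoint
        // address, and operation namespace are placeholders, not real values.
        Service service = ServiceFactory.newInstance()
                .createService(new QName("urn:nrm", "NetworkResourceManager"));
        Call call = service.createCall();
        call.setTargetEndpointAddress("http://nrm.example.org/services/nrm");
        call.setOperationName(new QName("urn:nrm", "allocatePath"));
        call.addParameter("fromHost", xsdString, ParameterMode.IN);
        call.addParameter("toHost", xsdString, ParameterMode.IN);
        call.setReturnType(xsdString);

        // The returned allocation ID would later be passed to deallocatePath.
        String allocationId = (String) call.invoke(
                new Object[] { "data-source.example.org", "data-sink.example.org" });
        System.out.println("Allocated path: " + allocationId);
    }
}
```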
Data Management Service
• Uses standard ftp (Jakarta Commons FTP client)
• Implemented in Java
• Uses OGSI calls to request network resources
• Currently uses Java RMI for other remote interfaces
• Uses the NRM to allocate lambdas (transfer flow sketched below)
• Designed for future scheduling
[Diagram: the client application asks the DMS to move a file; the DMS calls the NRM to allocate a λ between the data source and the data receiver, then drives an FTP client/server transfer over that path.]
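A minimal sketch of that flow, assuming the hypothetical NetworkResourcePortType from the NRM sketch above and illustrative host names and file paths; only the Jakarta Commons FTPClient calls come from a real library.

```java
import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class DmsTransferSketch {
    // 'nrm' is a client stub for the hypothetical NetworkResourcePortType
    // sketched earlier; host names, credentials, and file names are
    // illustrative only.
    static void transfer(NetworkResourcePortType nrm) throws Exception {
        // 1. Ask the NRM to allocate a lightpath between source and sink.
        String allocationId = nrm.allocatePath("data-source.example.org",
                                               "data-sink.example.org");
        FTPClient ftp = new FTPClient();
        try {
            // 2. Move the file over the provisioned lambda with plain FTP.
            ftp.connect("data-source.example.org");
            ftp.login("anonymous", "dwdm-ram@example.org");
            ftp.setFileType(FTP.BINARY_FILE_TYPE);
            try (OutputStream out = new FileOutputStream("dataset.bin")) {
                ftp.retrieveFile("/data/dataset.bin", out);
            }
        } finally {
            if (ftp.isConnected()) {
                ftp.disconnect();
            }
            // 3. Release the lightpath once the transfer completes.
            nrm.deallocatePath(allocationId);
        }
    }
}
```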
Network Resource Manager
• Presents application-oriented OGSI / Web Services interfaces for network resource (lightpath) allocation
• Hides network details from applications
• Implemented in Java
[Diagram: a using application (the DMS) and a scheduling/optimizing application reach the Network Resource Manager through an end-to-end-oriented allocation interface; through a segment-oriented topology and allocation interface the NRM drives the OMNInet network manager (ODIN) and OMNInet data interpreter, with further network-specific network managers and data interpreters planned (shown in blue in the original figure).]
Lightpath Services
Enabling high-performance support for data-intensive services with on-demand lightpaths created by dynamic lambda provisioning, supported by advanced photonic technologies.
• OGSA/OGSI compliant service
• Optical service layer: Optical Dynamic Intelligent Network (ODIN) services
• Incorporates specialized signaling
• Utilizes provisioning tool: IETF GMPLS
• New photonic protocols

Optical Dynamic Intelligent Networking Services: an architecture specifically designed to support large-scale, data-intensive, extremely high-performance, long-term flows.
• OGSA/OGSI compliant service
• Dynamic lambda provisioning based on DWDM, beyond traditional static DWDM provisioning
• Scales to Gbps and Terabit data flows with flexible, fine-grained control
• Lightpaths: multiple integrated linked lambdas, including one-to-many and many-to-one, intradomain/interdomain
ODIN Terms
• ODIN Server – server software that accepts and fulfills requests (e.g., allocates and manages routes and paths)
• Resource – a host or other hardware that provides a service over the optical network; OGSA/OGSI compliant
• Resource Server – server software running on a Resource that provides the service
• Resource Config. Server – server software that receives route configuration data from the ODIN Server
• Client – a host connecting to a Resource through the optical network; in this demonstration, Grid clusters
• Network Resource – a dynamically allocated network resource; in this demonstration, lightpaths
Lightpath Provisioning Processes
• Specialized signaling: request characterization, resource characterization, optimization, performance, and survival/protection/restoration characterization
• Basic processes are directed at lightpath management: create, delete, change, swap, reserve
• Related processes: discover, reserve, bundle, reallocate, etc.
• IETF GMPLS as the wavelength implementation tool; utilizes new photonic network protocols
Core Processes
• O-UNI and specialized interfaces, e.g., APIs, CLIs
• Wavelength distribution protocol
• Auto-discovery of optical resources
• Self-inventorying
• Constraint-based routing
• Options for path protection and restoration
• Options for optical service definitions
Addressing and Identification
• Options for interface addressing
• Options for VPN IDs
• Port, channel, and sub-channel IDs
• Routing algorithm based on differentiated services
• Options for bi-directional optical lightpaths and optical lightpath groups
• Optical VPNs
OMNInet Testbed
• A four-node, multi-site optical metro testbed network in Chicago – the first 10GE service trial!
• A testbed for all-optical switching and advanced high-speed services
• OMNInet testbed partners: SBC, Nortel, iCAIR at Northwestern, EVL, CANARIE, ANL
[Diagram: OMNInet core nodes at Northwestern U, StarLight, UIC, and CA*net3 (Chicago), each with an optical switching platform, a Passport 8600, and an application cluster, interconnected by 4x10GE and 8x1GE links in a closed loop; an OPTera Metro 5200 sits at StarLight.]
NWUEN fiber span lengths:
Link   km     mi
1*     35.3   22.0
2      10.3    6.4
3*     12.4    7.7
4       7.2    4.5
5      24.1   15.0
6      24.1   15.0
7*     24.9   15.5
8       6.7    4.2
9       5.3    3.3
[Detailed OMNInet topology diagram: photonic nodes at Lake Shore, S. Federal, W. Taylor, and Sheridan, each with an Optera 5200 10 Gb/s TSPR and a PP8600, interconnected over NWUEN fiber spans 1-9 (fiber in use and not in use indicated) and Optera Metro 5200 OFAs; Grid clusters attach over 10/100/GigE and 10 GE; 1310 nm 10 GbE WAN PHY interfaces on the core links, with 10GE LAN PHY added Dec 03; EVL/UIC, LAC/UIC, and TECH/NU-E OM5200s connect over campus fiber (16 and 4 strands); initial configuration: 10 lambdas (all GigE); StarLight interconnects with other research networks and CA*net 4.]
OMNInet
• 8x8x8 scalable photonic switch
• Trunk side: 10 G WDM
• OFA on all trunks
Overlay Management Network (Current)
• Uses ATM PVC with 2 Mb/s CIR from the existing network (MREN + OC-12)
• Hub-and-spoke network from 710 Lakeshore Dr.
[Diagram: OMNInet control-plane overlay network. A local management station at the Northwestern Leverone Hall data com center connects over 10/100 BT and an MREN OC-12 ring (MREN = Metropolitan Research and Education Network) to the photonic switch, BayStack 350, OPTera 5200 OLA, and Passport 8600 at each site (Evanston, 710 Lakeshore Dr., 600/700 S. Federal, Univ. of Illinois); an OC-3 through the SBC NAP (part of StarTap) and a 10GE WAN/LAN PHY connect to OMNInet; one port connects to the management network ATM switch.]
• The implementation is lambdas (unprotected).
• Installed shelf capacity and common equipment permit expansion of up to 16 lambdas through deployment of additional OCLD and OCI modules.
• A fully expanded OM5200 system is capable of supporting 64 lambdas (unprotected) over the same 4-fiber span.
OMNInet Optical Grid Clusters
[Diagram: DWDM between the cluster site and the OMNInet core node at the iCAIR sites at Northwestern in Evanston; the iCAIR clusters for the Telecom 2003 demo at the Northwestern Technological Institute connect through OM5200s and a PP8600 over up to 16xGE (SMF LX), with ~20 m of dedicated DWDM fiber and a ~1 km, 4-fiber span.]
Physical Layer Optical Monitoring and Adjustment
[Diagram: optical taps and photodetectors feed A/D converters and DSP algorithms and measurement blocks (relative fiber power, relative power, tone code, power measurement, connection verification, path ID correlation, fault isolation, LOS detection, FLIP rapid detect); control loops for gain control/leveling, transient compensation, power correction, and AWG temperature control (heater, setpoint) drive D/A outputs to the VOA, switch, and OFA; PPS control middleware, switch control, a photonics database, management and OSC routing (100FX PHY/MAC, splitter, OSC circuit), and drivers/data translation tie the photonic hardware to the control plane.]
Application-level measurements: 10 GB file transfer
• Path allocation: 48.7 secs
• Data transfer setup time: 0.141 secs
• FTP transfer time: 464.624 secs
• Effective transfer rate: 156 Mbits/sec
• Path tear-down time: 11.3 secs
• File size: 10 GB
Path Allocation Overhead as a % of the Total Transfer Time
• Knee point shows the file size for which overhead is insignificant (the relation behind it is sketched after the charts)
[Charts: setup time as a fraction of total transfer time (0-100%) vs. file size (MBytes, log scale):
 – Setup time = 2 sec, bandwidth = 100 Mbps (0.1 MB to 10 GB); marked point: 1 GB
 – Setup time = 2 sec, bandwidth = 300 Mbps (0.1 MB to 10 GB); marked point: 5 GB
 – Setup time = 48 sec, bandwidth = 920 Mbps (100 MB to 10 TB); marked point: 500 GB]
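The knee behavior above follows from a simple relation, assuming the transfer runs at the full provisioned bandwidth: setup fraction = T_setup / (T_setup + S/B), where T_setup is the path setup time, S the file size, and B the bandwidth. For example, with T_setup = 48 sec and B = 920 Mbps, a 500 GB file (about 4,000,000 Mbit) transfers in roughly 4,350 sec, so setup is only about 1% of the total time.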
Packet Switched vs. Lambda Network: Setup Time Tradeoffs (optical path setup time = 2 sec)
[Chart: data transferred (MB, 0-250) vs. time (s, 0-7), comparing packet switched (300 Mbps) with lambda switched at 500 Mbps, 750 Mbps, 1 Gbps, and 10 Gbps.]
Packet Switched vs. Lambda Network: Setup Time Tradeoffs (optical path setup time = 48 sec)
[Chart: data transferred (MB, 0-5000) vs. time (s, 0-120), comparing packet switched (300 Mbps) with lambda switched at 500 Mbps, 750 Mbps, 1 Gbps, and 10 Gbps; a small crossover calculation is sketched below.]
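The tradeoff behind both charts can be reproduced with a few lines of Java; the rates and setup time below are the illustrative values from the plots, not measured results, and the class is a sketch rather than DWDM-RAM code.

```java
// Sketch of the tradeoff shown in the two charts above; the rates and setup
// time are the illustrative values from the plots, not measured results.
public class CrossoverSketch {
    // Megabytes moved by time t (seconds) on a packet-switched path (no setup).
    static double packetSwitched(double t, double rateMbps) {
        return rateMbps * t / 8.0;                       // Mb/s -> MB/s
    }

    // Megabytes moved by time t on a lambda path that spends 'setup' seconds
    // provisioning before any data flows.
    static double lambdaSwitched(double t, double rateMbps, double setup) {
        return t <= setup ? 0.0 : rateMbps * (t - setup) / 8.0;
    }

    public static void main(String[] args) {
        double setup = 48.0;   // optical path setup time (s)
        // Crossover of packet at 300 Mb/s vs. lambda at 1 Gb/s:
        // 300*t = 1000*(t - 48)  =>  t = 48*1000/700 ≈ 68.6 s (≈ 2.6 GB moved).
        for (double t = 0.0; t <= 120.0; t += 10.0) {
            System.out.printf("t=%5.1f s  packet=%8.1f MB  lambda=%8.1f MB%n",
                    t, packetSwitched(t, 300.0), lambdaSwitched(t, 1000.0, setup));
        }
    }
}
```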
File Transfer Times
[Chart: file size (Gb, up to 10 Gb) vs. transfer time (sec, 0-1000), comparing DWDM-RAM (over OMNInet) with FTP (over the Internet).]
File Transfer Times
[Chart: transfer times for 100 Gb, 200 Gb, 300 Gb, and 400 Gb transfers (y-axis labeled msec, 0-35); max bandwidth 900+ Mb/s.]
Optical-level measurements
• Time to set up an individual X-connect: secs
• UNI-N processing time for a request: secs
• Time taken by the routing card to send a command to the control card: secs
• Time taken by the routing card to forward the request to the next hop in the control plane: secs
• Time taken by the control card to drive the switch card: secs
• End-to-end lightpath setup: secs
Enhanced Optical Dynamic Intelligent Network Services
• Additional OGSA/OGSI development
• Enhanced signaling
• Enhanced integration with optical component addressing methods
• Extension of capabilities for receiving information from L1 process monitors
• Enhanced capabilities for establishing optical VPNs
• New adaptive response processes for dynamic conditions
• Explicit segment specification
Enhanced Middleware Services
• Enhanced integration with the data services layer
• Enhanced understanding of L3-L7 requirements
• Awareness of high-performance L3/L4 protocols
• Enhanced integration with edge resources
• Middleware process performance monitoring and analysis
• New capabilities for scheduling
• Security
Expanded Data Management Service
• New methods for scheduling
• New methods of priority setting
• Enhanced awareness of network resources
• Techniques for forecasting demand and preparing responses
• Replication services
• Integration with metadata processes
• Integration with adaptive storage services
• Enhanced policy mechanisms
Photonic Testbed - OMNInet
• Implementation of RSVP methods
• Experiments with parallel wavelengths
• Experiments with new types of flow aggregation
• Experiments with multiple 10 Gbps parallel flows
• Enhancement of control plane mechanisms
• Additional experiments with interdomain integration
• Enhanced integration with clusters and storage devices
Additional Topics
• Enhanced security methods
• Optimization heuristics
• Integration with data derivation methods
• Extended path protection
• Restoration algorithms
• Failure prediction and fault protection
• Performance metrics, analysis, and reporting
• Enhanced integration of optical network information flows with L1 process monitoring