Creating a Global Lambda GRID: International Advanced Networking and StarLight
Presented by Joe Mambretti, Director, International Center for Advanced Internet Research (www.icair.org), and Director, Metropolitan Research and Education Network (www.mren.org)
Based on StarLight presentation slides by Tom DeFanti, PI, STAR TAP; Director, EVL, University of Illinois at Chicago
APAN Conference, Phuket, Thailand
January 24, 2002
Introduction to iCAIR:
• Creation and early implementation of advanced networking technologies: the Next Generation Internet, all-optical networks, terascale networks
• Advanced applications, middleware and metasystems, large-scale infrastructure, NG optical networks and testbeds, public policy studies and forums related to NG networks
Accelerating leading-edge innovation and enhanced global communications through advanced Internet technologies, in partnership with the global community
Tom DeFanti, Maxine Brown
Principal Investigators, STAR TAP
Linda Winkler, Bill Nickless, Alan Verlo, Andy Schmidt
STAR TAP Engineering
Joe Mambretti, Tim Ward
StarLight Facilities, et al
Who is StarLight?
StarLight is jointly managed and engineered by:
• Electronic Visualization Laboratory (EVL), University of Illinois at Chicago: Tom DeFanti, Maxine Brown, Andy Schmidt, Jason Leigh, Cliff Nelson, and Alan Verlo
• International Center for Advanced Internet Research (iCAIR), Northwestern University: Joe Mambretti, David Carr, and Tim Ward
• Mathematics and Computer Science Division (MCS), Argonne National Laboratory: Linda Winkler and Bill Nickless; Rick Stevens and Charlie Catlett
• In partnership with Bill St. Arnaud, Kees Neggers, Olivier Martin, and others
What is StarLight?
[Photos: 710 N. Lake Shore Drive, Chicago; Abbott Hall, Northwestern University; Chicago view from 710]
StarLight is an experimental optical infrastructure and proving ground for network services optimized for high-performance applications. StarLight leverages $32M (FY2002-3) in experimental networks (I-WIRE, TeraGrid, OMNInet, SURFnet, CA*net4, DataTAG).
Where is StarLight?
• Located in Northwestern’s Downtown Campus: 710 N. Lake Shore Drive
[Map: StarLight's location relative to carrier POPs and the Chicago NAP]
StarLight Infrastructure
• StarLight is a large, research-friendly co-location facility with space, power, and fiber being made available to university and national/international network collaborators as a point of presence in Chicago
• StarLight is a production GigE and trial 10GigE switch/router facility for high-performance access to participating networks
StarLight is Operational
Equipment at StarLight
• StarLight equipment installed:
– Cisco 6509 with GigE; IPv6 router
– Juniper M10 (GigE and OC-12 interfaces)
– Cisco LS1010 with OC-12 interfaces
– Data mining cluster with GigE NICs
– Visualization/video server cluster (on order)
• SURFnet's Cisco 12000 GSR
• Multiple vendor plans for 10GigE, DWDM, and optical switch/routing in the future
Carriers at StarLight: SBC-Ameritech, Qwest, AT&T, Global Crossing, Teleglobe…
StarLight Connections
• STAR TAP (NAP) connection with two OC-12c ATM circuits
• The Netherlands (SURFnet) has two OC-12c POS circuits from Amsterdam and a 2.5 Gbps OC-48 to StarLight this month
• Abilene will soon connect via two GigE circuits
• Canada (CA*net3/4) is connected via GigE, soon 10GigE
• I-WIRE, a State-of-Illinois-funded dark-fiber multi-10GigE DWDM effort involving Illinois research institutions, is being built; 36 strands to the Qwest Chicago PoP are in
• NSF TeraGrid (DTF): a 4x10GigE network being engineered by PACI and Qwest
• NORDUnet is now sharing StarLight's OC-12 ATM connection
• TransPAC/APAN is bringing in an OC-12, later an OC-48
• CERN's OC-48 is in the advanced funding stages (OC-n line rates are computed in the sketch below)
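A minimal sketch (not from the original slides) of the SONET line rates quoted throughout this talk. Each OC-n level is n times the OC-1 base rate of 51.84 Mb/s, which is why OC-48 is glossed as "2.5 Gbps" above:

    # SONET OC-n line rates; OC-1 is 51.84 Mb/s.
    OC1_MBPS = 51.84

    def oc_rate_gbps(n: int) -> float:
        """Line rate of SONET level OC-n in Gb/s."""
        return n * OC1_MBPS / 1000.0

    for n in (3, 12, 48, 192, 768):
        print(f"OC-{n:<3} = {oc_rate_gbps(n):6.2f} Gb/s")
    # OC-12  =   0.62 Gb/s  (the STAR TAP ATM circuits)
    # OC-48  =   2.49 Gb/s  (SURFnet, CERN)
    # OC-192 =   9.95 Gb/s  (a TeraGrid WAN PHY)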
Evolving StarLightOptical Network Connections
Vancouver
Seattle
Portland
San Francisco
Los Angeles
San Diego(SDSC)
NCSA
Chicago NYC
SURFnet, CERN
CA*net4
Asia-Pacific
Asia-Pacific
AMPATH
PSC
Atlanta
IU
U Wisconsin
CA*net4
Source: Maxine Brown 12/2001
StarLight Services and Locations

Service                                    AADS NAP             StarLight                Qwest
                                           225 W. Randolph St.  710 N. Lake Shore Dr.    455 N. Cityfront Plaza
IPv4 and IPv6 STAR TAP Transit (AS 10764)  Int'l R&E Networks   Int'l R&E Networks       FedNets/NGIX
ATM PVC Mesh to Other Participants         Yes                  -                        -
GigE 802.1q Policy-Free VLANs              -                    Yes                      FedNets/NGIX
Co-Location Space, Power                   -                    Yes                      Qwest Customers
Fiber Patches                              -                    $T&M Install,            $ Install,
                                                                $0 Monthly               $ Monthly
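As a hedged aside (not part of the original slides), the "policy-free VLANs" above rely on IEEE 802.1q tagging, in which a 4-byte tag carrying a 12-bit VLAN ID lets one GigE port multiplex traffic for many independent participants. A minimal sketch of how that tag is laid out, using a hypothetical VLAN number:

    import struct

    TPID = 0x8100  # EtherType value marking an 802.1q-tagged frame

    def vlan_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
        """Pack the 4-byte 802.1q tag: TPID plus Tag Control Information."""
        assert 0 <= vlan_id < 4096, "VLAN ID is a 12-bit field"
        tci = (priority << 13) | (dei << 12) | vlan_id
        return struct.pack("!HH", TPID, tci)

    print(vlan_tag(710).hex())  # '810002c6' -- hypothetical VLAN 710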
TeraNodes in Action:Interactive Visual Tera Mining
(Visual Data Mining of Tera Byte Data Sets)
[Diagram: data mining servers (TND-DSTP) in Chicago, Amsterdam, and at NWU, NCSA, ANL, etc., feeding parallel data mining correlation (TNC) and parallel visualization (TNV) of Tera Maps and Tera Snaps]
– The problem is to touch a terabyte of data interactively and to visualize it
– At 100 Mb/s: ~24 hours to access 1 terabyte of data
– At 500 Mb/s: ~4.8 hours using a single PC
– At 10 Gb/s: ~14.4 minutes using a 20-node PC cluster (the arithmetic is checked in the sketch below)
– Need to parallelize data access and rendering
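A quick check of the figures above (a sketch, assuming 1 TB = 8e12 bits and full link utilization; the slide's slightly larger numbers presumably allow for protocol overhead):

    TERABYTE_BITS = 8e12  # 1 TB expressed in bits

    def hours_to_move(rate_bps: float) -> float:
        """Hours to transfer one terabyte at a given line rate."""
        return TERABYTE_BITS / rate_bps / 3600.0

    for label, rate in [("100 Mb/s", 1e8), ("500 Mb/s", 5e8), ("10 Gb/s", 1e10)]:
        print(f"{label:>9}: {hours_to_move(rate):5.2f} hours")
    # 100 Mb/s: 22.22 hours           (slide: ~24 hours)
    # 500 Mb/s:  4.44 hours           (slide: ~4.8 hours)
    #  10 Gb/s:  0.22 hours = 13 min  (slide: ~14.4 minutes)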
Prototyping The Global Lambda Grid in Chicago: A Photonic-Switched Experimental Network of Light Paths
[Diagram: a multi-leveled architecture in which a control plane manages dynamically allocated lightpaths between application clusters over switch fabrics, with physical monitoring feeding back into the control plane]
Metros As International Nexus Points
Prototype Global Lambda Grid
[Diagram: prototype global lambda grid linking StarLight (CERN/DataTAG), NetherLight in Amsterdam, Optical Metro Europe, CA*net4, Asia-Pacific metros (Tokyo?/APAN), Miami to South America(?), CalTech, NCSA, SDSC, ANL, I-WIRE, and the TeraGrid. Computer clusters (each node = 1 GE; tens, hundreds, or thousands of nodes) attach via Nx GE links to CSW/ASW switches and cluster OFAs, which feed multiwavelength fiber through multiwavelength optical amplifiers. DWDM links carry multiple wavelengths per fiber, with LAN PHY interfaces (e.g., 15xx nm 10GE serial). Optical monitors track wavelength precision, and a power spectral density processor compares source and measured PSD; multiple optical impairment issues, including accumulations, must be managed.]
[Diagram: layered control architecture. Client devices attach through client controllers; optical-layer controllers peer across the optical layer control plane via UNI, I-UNI, and CI interfaces. The client layer traffic plane rides over the optical layer's switched traffic plane.]
OMNInet Technology Trial: January 2002
• A four-site network in Chicago -- the first 10GE service trial!
• A test bed for all-optical switching and advanced high-speed services
• Partners: SBC, Nortel, iCAIR at Northwestern, EVL, CANARIE, ANL
[Diagram: four sites (Northwestern U, StarLight, UIC, and CA*net3--Chicago), each with an application cluster, a Passport 8600, and an optical switching platform, interconnected by 2x10GE trunks with 8x1GE access links; an OPTera Metro 5200 at StarLight]
StarLight On Ramps: Proposed Development Phase I -- Gigabit Ethernet NICs to 10 Gigabit Ethernet MAN
TNDs = Datamining Clusters
TNVs = Visualization Clusters (gigapixel/sec)
TNCs = TeraGrid On-Ramps
[Diagram: TNV3, TNC3, and TND3 clusters at LAC, UIC, and EVL attach to Cisco 6509s with 10GigE, 2x 10GigE, and 10x 10GigE uplinks, reaching a 6513 at StarLight over DWDM]
StarLight On Ramps: Proposed Development Phase II -- 10 Gigabit Ethernet to 2x80 Gb MAN
• TND: Upgrade NICs in TND clusters to (8)x 10GigE
• TNV: Upgrade NICs in TNV3 clusters to (8)x 10GigE
• O-E-O or O-O-O optical switch at StarLight(?)
• DWDM: (2) 40Gb(?) and (8) 10Gb
[Diagram: LAC, UIC, and EVL clusters (TNV3, TNC3, TND3) on Cisco 6509s with 10GigE, 8x 10GigE, and 10x 10GigE uplinks; 2x40GigE over UIC fiber reaches StarLight's 6513 and an O-E-O or O-O-O switch via DWDM]
NSF’s Distributed Terascale Facility (DTF)
TeraGrid Interconnect Objectives
• Traditional: interconnect sites/clusters using a WAN
– WAN bandwidth balances cost and utilization; the objective is to keep utilization high to justify the high cost of WAN bandwidth
• TeraGrid: build a wide area "machine room" network
– TeraGrid WAN objective: handle peak machine-to-machine traffic
– Partnering with Qwest to begin with 40 Gb/s and grow to ≥80 Gb/s within 2 years
• Long-term TeraGrid objective
– Build a Petaflops-capable distributed system, requiring Petabytes of storage and a Terabit/second network
– The current objective is to step toward this goal
– A Terabit/second network will require many lambdas operating at minimum OC-768, and its architecture is not yet clear (see the sketch below)
Source: Rick Stevens 12/2001
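A back-of-the-envelope sketch (not from the original slides) of why a Terabit/second network implies many lambdas, using the OC-n rates computed earlier:

    import math

    def lambdas_needed(target_gbps: float, per_lambda_gbps: float) -> int:
        """Parallel wavelengths needed to reach a target aggregate rate."""
        return math.ceil(target_gbps / per_lambda_gbps)

    for name, rate in [("OC-192", 9.95), ("OC-768", 39.81)]:
        print(f"1 Tb/s at {name}: {lambdas_needed(1000, rate)} lambdas")
    # 1 Tb/s at OC-192: 101 lambdas
    # 1 Tb/s at OC-768: 26 lambdas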
Trends in Cyberinfrastructure
• Advent of regional dark fiber infrastructure
– Community owned and managed (via 20-year IRUs)
– Typically supported by state or local resources
• Lambda services (IRUs) are viable replacements for bandwidth service contracts
– Need to be structured with built-in capability escalation (BRI)
– Need strong operating capability to exploit this
• Regional groups are moving faster (much faster!) than national network providers and agencies
– A viable path to putting bandwidth on a Moore's law curve
– A source of new ideas for national infrastructure architecture
Source: Rick Stevens 12/2001
13.6 TF Linux TeraGrid

Site      Nodes       Compute   Memory    Disk
NCSA      500 nodes   8 TF      4 TB      240 TB
SDSC      256 nodes   4.1 TF    2 TB      225 TB
Caltech   32 nodes    0.5 TF    0.4 TB    86 TB
Argonne   64 nodes    1 TF      0.25 TB   25 TB

[Diagram: the four DTF sites interconnected by 10 GbE through routers or switch/routers (a Juniper M160 hub; Juniper M40s and an Extreme Black Diamond at the sites), with OC-48, OC-12, and OC-3 links to external networks (ESnet, HSCC, MREN/Abilene, Starlight, vBNS, CalREN, NTON). Within each site, Myrinet Clos spines, 32x 1GbE, 32-64x Myrinet, and 8-32x FibreChannel links interconnect the clusters, Fibre Channel switches, and HPSS/UniTree archives. Equipment includes 16-32 quad-processor McKinley servers per hub (64p/128p @ 4GF, 8-12 GB memory/server), IA-32 nodes, the 574p IA-32 Chiba City cluster and a 128p Origin at ANL, a 1500p Origin at NCSA, the 1176p IBM SP Blue Horizon, a Sun E10K, and 1024p IA-32 / 320p IA-64 clusters at SDSC, a 256p HP X-Class, 128p HP V2500, 92p IA-32, and Sun Starcat at Caltech, and HR display and VR facilities.]

Source: Rick Stevens 12/2001
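A quick sanity check (added here, not in the slides) that the per-site figures in the table sum to the headline numbers:

    # (teraflops, disk TB) per site, from the table above
    sites = {
        "NCSA": (8.0, 240),
        "SDSC": (4.1, 225),
        "Caltech": (0.5, 86),
        "Argonne": (1.0, 25),
    }
    total_tf = sum(tf for tf, _ in sites.values())
    total_disk = sum(disk for _, disk in sites.values())
    print(f"{total_tf} TF, {total_disk} TB disk")  # 13.6 TF, 576 TB disk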
TeraGrid Network Architecture
• Cluster interconnect using a multi-stage switch/router tree with multiple 10 GbE external links
• Separation of cluster aggregation and site border routers is necessary for operational reasons
• Phase 1: four routers or switch/routers (capacities sketched below)
– Each with three OC-192 or 10 GbE WAN PHY
– MPLS to allow for >10 Gb/s between any two sites
• Phase 2: add core routers or switch/routers
– Each with ten OC-192 or 10 GbE WAN PHY
– Expandable with additional 10 Gb/s interfaces
Source: Rick Stevens 12/2001
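A rough capacity sketch for the two phases (an illustration, assuming each OC-192 or 10 GbE WAN PHY carries about 10 Gb/s of payload):

    GBPS_PER_LINK = 10  # one OC-192 / 10 GbE WAN PHY

    def uplink_gbps(links_per_router: int) -> int:
        """Aggregate WAN capacity of one router with N external links."""
        return links_per_router * GBPS_PER_LINK

    print("Phase 1, per site router:", uplink_gbps(3), "Gb/s")   # 30 Gb/s
    print("Phase 2, per core router:", uplink_gbps(10), "Gb/s")  # 100 Gb/s
    # MPLS lets a single site-to-site flow be spread across several of
    # these links, which is how ">10 Gb/s between any two sites" is met.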
Option 1: Full Mesh with MPLS
[Diagram: the Caltech, SDSC, NCSA, and ANL clusters connect through cluster aggregation switch/routers and site border routers (or switch/routers) to carrier fiber collocation facilities: One Wilshire in Los Angeles, the Qwest San Diego POP, and, in Chicago, 455 N. Cityfront Plaza (Qwest fiber collocation facility) and 710 N. Lakeshore (Starlight), 1 mi apart. Spans: Los Angeles to Chicago, 2200 mi; site tails of 140, 115, 25, and 20 mi. Links are OC-192 / 10 GbE over DWDM (Ciena CoreStream; some DWDM TBD), with IP routers and other site resources attached. Propagation delays for these spans are estimated in the sketch below.]
Source: Rick Stevens 12/2001
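A hedged estimate (not in the slides) of one-way propagation delay on the fiber spans above, assuming light in fiber travels at roughly two-thirds of c, i.e. about 200 km/ms:

    KM_PER_MILE = 1.609
    FIBER_KM_PER_MS = 200.0  # approximate speed of light in fiber

    def one_way_ms(miles: float) -> float:
        """One-way propagation delay over a fiber span of the given length."""
        return miles * KM_PER_MILE / FIBER_KM_PER_MS

    for route, miles in [("LA to Chicago", 2200), ("Chicago metro hop", 20)]:
        print(f"{route}: {one_way_ms(miles):.2f} ms one way")
    # LA to Chicago: 17.70 ms one way
    # Chicago metro hop: 0.16 ms one way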
Expansion Capability: "StarLights"
[Diagram: the same four-site DWDM mesh as Option 1 (One Wilshire, the Qwest San Diego POP, 455 N. Cityfront Plaza, and 710 N. Lakeshore StarLight, with the same span lengths), extended with regional fiber aggregation points at the collocation facilities that admit additional sites and networks; border devices may be IP routers (packets) or lambda routers (circuits).]
Source: Rick Stevens 12/2001
Illinois' I-WIRE Logical and Transport Topology
• Leverages regional/community fiber and experimental interconnects
[Diagram: fiber segments (2 to 18 strands each) linking UIUC/NCSA, Starlight (NU-Chicago), Argonne, UChicago, IIT, UIC, the Illinois Century Network (James R. Thompson Ctr, City Hall, State of IL Bldg), Level(3) at 111 N. Canal, McLeodUSA at 151/155 N. Michigan (Doral Plaza), Qwest at 455 N. Cityfront, and UC Gleacher at 450 N. Cityfront]
Next steps:
– Fiber to FermiLab and other sites
– Additional fiber to ANL and UIC
– DWDM terminals at Level(3) and McLeodUSA locations
– Experiments with OC-768, optical switching/routing
Source: Rick Stevens 12/2001
MREN: An Advanced Network for Advanced Applications
• Designed in 1993; Initial Production in 1994, Managed at L2 & L3
• Created by Consortium of Research Organizations -- over 20
• Partner to STAR TAP/StarLight, I-WIRE, NGI and R&E Net Initiatives, Grid and Globus Initiatives etc.
• Model for Next Generation Internets
• Developed World’s First GigaPOP
• Next – the “Optical MREN”
• Soon - Optical ‘TeraPOP’ Services
[Map: GigaPoPs and TeraPoPs (OIX). GigaPoP data from Internet2; map by Rick Stevens, Charlie Catlett]
Pacific Lightrail TeraGrid Interconnect
[Map: Pacific Light Rail draft of 12/4/01, showing critical-mass sites: the top 10 research universities, the next 15 research universities, centers and labs, and international 10gig and key hubs]
Source: Ron Johnson 12/2001
CA*net 4 Physical Architecture
[Map: CA*net 4 spanning Vancouver, Calgary, Regina, Winnipeg, Toronto, Ottawa, Montreal, Fredericton, Charlottetown, Halifax, and St. John's, with cross-border connections at Seattle, Chicago, New York, Los Angeles, and Miami, and to Europe. Features: dedicated wavelength or SONET channel, OBGP switches, an optional Layer 3 aggregation service, and a large-channel WDM system.]
By Bill St. Arnaud (Provider of Excellence in Advanced Networking)
NSF ANIR
• NSF will emphasize support for domestic and international collaborations involving resource-intensive applications and leading-edge optical wavelength telecommunication technologies
• But, NSF will not abandon needed international collaboration services (e.g., STAR TAP)
StarLight Thanks
• StarLight planning, research, collaborations, and outreach efforts at the University of Illinois at Chicago are made possible, in part, by funding from:
– National Science Foundation (NSF) awards ANI-9980480, ANI-9730202, EIA-9802090, EIA-9871058, and EIA-0115809
– NSF Partnerships for Advanced Computational Infrastructure (PACI) cooperative agreement ACI-9619019 to the National Computational Science Alliance
– State of Illinois I-WIRE Program, and UIC cost sharing
– Northwestern University for providing space, engineering, and management
• Argonne National Laboratory for StarLight and I-WIRE network engineering and planning leadership
• NSF/ANIR, Bill St. Arnaud of CANARIE, Kees Neggers of SURFnet, and Olivier Martin and Harvey Newman of CERN for global networking leadership
• NSF/ACIR and NCSA/SDSC for DTF/TeraGrid opportunities
• UCAID/Abilene for Internet2 and their ITN
• CA*net3/4 and CENIC/Pacific Light Wave for planned North America and West Coast transit
iGrid 2002
September 2002, Amsterdam, The Netherlands
www.startap.net/igrid2002
University of Illinois at Chicago and Indiana University, in collaboration with The GigaPort Project and SURFnet5 of The Netherlands
Grid-Intensive Application Control of Lambda-Switched Networks
Maxine Brown
STAR TAP/StarLight co-Principal Investigator; Associate Director, Electronic Visualization Laboratory
A showcase of applications that are "early adopters" of very high bandwidth national and international networks
Coming…
Further Information
www.startap.net/starlight
www.evl.uic.edu
www.icair.org
www.mren.org
www.canarie.ca
www.anl.gov
www.surfnet.nl
www.globalgridforum.org
www.globus.org
www.ietf.org
www.ngi.gov
The Grid: Blueprint for a New Computing Infrastructure, edited by Foster & Kesselman