Network infrastructure at FR-CCIN2P3
Guillaume Cessieux – CCIN2P3 network team
Guillaume.Cessieux@cc.in2p3.fr
On behalf of the CCIN2P3 network team
LHCOPN meeting, Vancouver, 2009-09-01
FR-CCIN2P3
• Since 1986
• Now 74 persons
• ~5300 cores
• 10 PB of disk
• 30 PB of tape
• Computing room ~730 m², 1.7 MW
RENATER-4 → RENATER-5: Dark fibre galore
[Map: RENATER-4 → RENATER-5 backbone, ~7500 km of dark fibre. Legend: dark fibres; 2.5G leased lines (Kehl, Cadarache); 1G leased line (GE). Sites labelled include Genève (CERN), Kehl, Cadarache, Tours, Le Mans, Angers.]
• (D)WDM based
• Previously:
  – Alcatel 1.6k series
  – Cisco 6500 & 12400
• Upgraded to:
  – Ciena CN4200
  – Cisco 7600 & CRS-1
• Hosted by CCIN2P3:
  – Direct foot into RENATER's backbone
  – No last-mile or MAN issues
PoP RENATER-5 Lyon
Ending two 10G LHCOPN links
[Diagram, layer 3 view: CERN-IN2P3-LHCOPN-001 and GRIDKA-IN2P3-LHCOPN-001 both terminate at Lyon (~100 km segment shown); CERN-GRIDKA-LHCOPN-001 is a candidate for L1 redundancy.]
WAN connectivity related to T0/T1s
[Diagram: WAN/LAN view. Dedicated 10G LHCOPN links towards Geneva (CERN) and Karlsruhe (GRIDKA); generic IP connectivity via RENATER towards GÉANT2, Chicago and the Internet (beware: not for LHC). French Tier-2s, the LHCOPN Tier-1 links and the Tier-2 edge attach to the backbone. Also shown: MDM appliances and dedicated data servers for LCG, with 2x1G, 1G and 10G links.]
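As a compact way to read the diagram above, here is a minimal sketch in Python. The two LHCOPN circuit names come from these slides; the structure, the "generic-ip" entry and its capacity are illustrative assumptions only:

```python
# Illustrative summary of the WAN links on the diagram above.
# LHCOPN circuit names are from the slides; the "generic-ip" entry
# and the dict layout are assumptions made for readability.
wan_links = {
    "CERN-IN2P3-LHCOPN-001":   {"to": "Geneva (CERN)",      "gbps": 10, "purpose": "LHCOPN (T0-T1)"},
    "GRIDKA-IN2P3-LHCOPN-001": {"to": "Karlsruhe (GRIDKA)", "gbps": 10, "purpose": "LHCOPN (T1-T1)"},
    "generic-ip":              {"to": "RENATER / GEANT2 / Internet", "gbps": 10, "purpose": "generic IP - not for LHC"},
}

for name, link in wan_links.items():
    print(f"{name}: {link['gbps']}G to {link['to']} [{link['purpose']}]")
```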
LAN: Just fully upgraded!
[Diagram: before → after rewiring of the computing, SATA storage and FC+tape storage areas]
• Now a "top of rack" design, really easing mass handling of devices
  – Enables buying pre-wired racks directly
    • Just plug power and fibre: 2 connections!
Current LAN for data analysis
• Computing:
  – 36 computing racks, 34 to 42 servers per rack
  – 1G per server; 1 switch per rack (36 access switches), 48x1G per switch
  – 1x10G uplink per access switch
  – 3 distribution switches, linked to the backbone with 4x10G
• Storage:
  – Data SATA: 816 servers in 34 racks, 2x1G per server, 24 servers per switch
  – Data FC: 27 servers, 10G per server
  – Tape: 10 servers, 10G per server
  – 34 access switches with trunked 2x10G uplinks
  – 2 distribution switches, linked to the backbone with 4x10G
• Backbone: 40G
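The figures above imply the following worst-case oversubscription ratios. A quick sanity check in Python, assuming racks are spread evenly across the distribution switches (not stated on the slide):

```python
# Worst-case oversubscription ratios from the figures above.
# Assumption (not on the slide): racks are spread evenly across
# the distribution switches.

def oversub(ingress_gbps: float, uplink_gbps: float) -> float:
    """Ratio of aggregate server-facing capacity to uplink capacity."""
    return ingress_gbps / uplink_gbps

# Computing access: 48 x 1G server ports behind a single 10G uplink.
print(oversub(48 * 1, 10))            # 4.8

# Computing distribution: 36 x 10G uplinks over 3 switches, 4 x 10G each to backbone.
print(oversub(36 / 3 * 10, 4 * 10))   # 3.0

# Storage access: 24 servers x 2 x 1G behind a trunked 2 x 10G uplink.
print(oversub(24 * 2 * 1, 2 * 10))    # 2.4
```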
Main network devices and configurations used
• Backbone & edge: 6509 / 6513 (x5)
  – 24x10G (12 blocking) + 96x1G + 336x1G blocking (1G per 8 ports)
  – 48x10G (24 blocking) + 96x1G
  – 64x10G (32 blocking)
• Distribution: 4900 (x5), 16x10G
• Access: 4948 (x70), 48x1G + 2x10G
• > 13 km of copper cable & > 3 km of 10G fibre
Tremendous flows
[Traffic graphs: GRIDKA-IN2P3-LHCOPN-001 and CERN-IN2P3-LHCOPN-001]
• LHCOPN links not so used yet
• But still regular peaks at 30G on the LAN backbone
Other details
• LAN:
  – Big devices preferred to a meshed bunch of small ones
  – We avoid too much device diversity
    • Eases management & spares
  – No spanning tree: trunking is enough
    • Redundancy only at the service level, when required
  – Routing only in the backbone (EIGRP)
    • 1 VLAN per rack
• No internal firewalling:
  – ACLs on border routers are sufficient (see the sketch below)
    • Only on incoming traffic and per interface
    • Preserves router CPU
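As an illustration of the inbound-only, per-interface filtering policy described above, here is a minimal sketch. The prefixes, interface names and IOS-style syntax are purely hypothetical, not CCIN2P3's actual configuration; only the policy shape (filter once, on ingress at the border) mirrors the slide:

```python
# Hypothetical illustration of inbound-only, per-interface border ACLs.
# Prefixes and interface names are made up for the example.
BORDER_INTERFACES = {
    "TenGigabitEthernet1/1": "ACL-FROM-LHCOPN",
    "TenGigabitEthernet1/2": "ACL-FROM-GENERIC-IP",
}

def render_acl(name: str, allowed_prefixes: list[str]) -> list[str]:
    """Render an IOS-style extended ACL permitting the given source prefixes."""
    lines = [f"ip access-list extended {name}"]
    lines += [f" permit ip {p} any" for p in allowed_prefixes]
    lines.append(" deny ip any any")
    return lines

# RFC 5737 documentation prefix, wildcard-mask notation.
config = render_acl("ACL-FROM-LHCOPN", ["192.0.2.0 0.0.0.255"])
for intf, acl in BORDER_INTERFACES.items():
    config += [f"interface {intf}", f" ip access-group {acl} in"]  # inbound only
print("\n".join(config))
```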
Monitoring
• Home-made flavour of netflow
  – EXTRA: External Traffic Analyzer
    • http://lpsc.in2p3.fr/extra/
    • But some scalability issues around 10G...
• Cricket & Cacti + home-made tools
  – ping & TCP tests + rendering
• Several views publicly shared
  – http://netstat.in2p3.fr/
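The slides don't show the home-made ping & TCP test tooling itself; a minimal sketch of the idea, measuring TCP connect time to a service, might look like this (host and port are placeholders, not actual CCIN2P3 endpoints):

```python
# Minimal sketch of a home-made TCP reachability/latency probe,
# in the spirit of the ping & TCP tests mentioned above.
import socket
import time

def tcp_connect_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Return the TCP connect time in milliseconds, or raise OSError."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.monotonic() - start) * 1000.0

if __name__ == "__main__":
    # Placeholder target; a real deployment would loop over monitored services.
    print(f"{tcp_connect_ms('example.org', 80):.1f} ms")
```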
Ongoing (1/3)
• WAN – RENATER:
  – Upcoming transparent L1 redundancy, Ciena based
  – 40G & 100G testbed
    • The short FR-CCIN2P3 – CH-CERN path is a good candidate
Ongoing (2/3)
• LAN:
  – Improving servers' connectivity
    • 1G → 2x1G → 10G per server
    • Starting with the most demanding storage servers
  – 100G LAN backbone (links → Nx40G, Nx100G)
    • Investigating Nexus-based solutions
      – 7018: 576x10G (worst case ~144 at wirespeed)
      – From a flat to a star design
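A quick check of the Nexus 7018 figure quoted above; the 4:1 ratio is derived purely from the slide's two numbers, not from published fabric specifications:

```python
# Nexus 7018 figure from the slide: 576 x 10G ports, of which
# ~144 can run at wirespeed in the worst case.
total_ports = 576
wirespeed_ports = 144
print(total_ports / wirespeed_ports)  # 4.0 -> worst case ~4:1 oversubscription
```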
Ongoing (3/3)
• A new computer room!
[Diagram: site plan showing the existing building and the new Building 2, on 2 floors]
• 850 m² on two floors:
  – 1 floor for cooling, UPS, etc.
  – 1 floor for computing devices
• Target: 3 MW
• Expected beginning of 2011 (starting at 1 MW)
Conclusion
• WAN:
  – Excellent LHCOPN connectivity provided by RENATER
  – Demand from T2s may be the next working area
• LAN:
  – Linking abilities recently tripled
  – Next step will be the core backbone upgrade