Transcript of Global Networks and Data Centers Briefing
May 27, 2011
Global Networks and Data Centers Briefing
Target State (~FY13)
Netherlands – Metro-based DC Pair (Tier-III)
Single-Instance Order Management (OM/AR) for Cisco International
AsiaPAC TBD – Single DC (Tier-III) + land
Continental hub for SaaS, Unified Communications and software development
[World-map diagram. Legend: 1A = 1 x Type-A DC (Tier-III, ~Uptime Tier-III); 2A sc = 2 x Type-A DCs at synchronous-capable distance; B = Type-B DC (Tier-II, ~Uptime Tier-II); E = Early Adopter DC. A 40 ms RTT link is annotated on the map.]
Distributed standalone DCs (Tier-II)
Latency-sensitive software development at lower availability
Richardson & Allen (TX) – Metro-based DC Pair (Tier-III) + Carrollton Co-Lo
Global hub for business applications; continental hub for SaaS and communications
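The pairing distances above matter because synchronous replication is only practical when round-trip latency is very small. As a rough illustration, the Python sketch below estimates propagation-only RTT from fiber path length; the ~5 µs/km one-way figure and the example distances are assumptions for illustration, not values from the briefing (only the 40 ms RTT annotation is).

```python
# Rough fiber round-trip-time estimate: light travels ~200 km per ms in fiber,
# i.e. about 5 microseconds of one-way delay per km of path (assumption).
US_PER_KM_ONE_WAY = 5.0

def fiber_rtt_ms(path_km: float) -> float:
    """Return the propagation-only round-trip time in milliseconds."""
    return 2 * path_km * US_PER_KM_ONE_WAY / 1000.0

# Illustrative distances (not from the briefing):
for label, km in [("metro pair (~25 km)", 25),
                  ("regional pair (~400 km)", 400),
                  ("cross-continent (~4000 km)", 4000)]:
    print(f"{label}: ~{fiber_rtt_ms(km):.2f} ms RTT")
# A metro pair stays well under 1 ms RTT, which is what makes synchronous
# replication practical; a ~40 ms RTT link suits asynchronous copies only.
```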
Global Disaster Recovery Strategy
Short to Mid-Term: Leverage Current Assets
Long-Term: DR + SaaS growth
[Diagram: short/mid-term, the BC/DR plan leverages current assets – a Type-B (Tier-II) site paired with 1 x Type-A (Tier-III); long-term, DR plus SaaS growth moves to 2 x Type-A sites at synchronous-capable distance.]
Mountain View, CA – Early Adopter DC
Production new product/service introduction, driven by technology leadership, business strategy, and the Cisco Green initiative.
[Adoption-curve diagram: acceleration, automation, virtualization, and next-gen DC networking and management move Cisco up the adoption curve toward new opportunities and accelerated adoption.]
• Shared (leveraged) Infrastructure
• Aligned Architecture – application, business, infrastructure
• Reduced Time to Capability, TCO
• Virtualized, Resilient, Green Data Center
• Accelerated, Video Enabled Global Network
• Cisco on Cisco Global Data Center
Current state: “traditional DR” (San Jose + RTP)
Target state: Metro-based Virtual DC (Texas/Richardson Metro + RTP/DR)
[Diagram: traditional DR pairs the SJC production site with RTP (DR); the metro-based virtual DC pairs DC1 (RCDN9) with DC2 in the Texas metro, with RTP retained for DR.]
• RPO = Recovery Point Objective: the maximum amount of data loss the business can tolerate (expressed as time)
• RTO = Recovery Time Objective: the maximum downtime the business can tolerate (time)
• ADL = Acceptable Data Loss (like RPO, but only for incidents limited to a single facility)
• ART = Acceptable Recovery Time (like RTO, but only for incidents limited to a single facility)
• RPO = (same as traditional)
• RTO = (same as traditional)
NEW: the metro pair adds operational continuity (ADL/ART) on top of traditional disaster recovery (RPO/RTO).
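A minimal sketch of how these four objectives could be recorded and checked against an observed incident; the class and the threshold values are illustrative assumptions, not figures from the briefing.

```python
from dataclasses import dataclass

@dataclass
class RecoveryObjectives:
    rpo_min: float   # max tolerable data loss (minutes), any disaster
    rto_min: float   # max tolerable downtime (minutes), any disaster
    adl_min: float   # like RPO, but for single-facility incidents
    art_min: float   # like RTO, but for single-facility incidents

    def meets(self, data_loss_min: float, downtime_min: float,
              single_facility: bool) -> bool:
        """Check an observed (or simulated) incident against the objectives."""
        if single_facility:
            return data_loss_min <= self.adl_min and downtime_min <= self.art_min
        return data_loss_min <= self.rpo_min and downtime_min <= self.rto_min

# Illustrative values only -- not from the briefing.
objectives = RecoveryObjectives(rpo_min=15, rto_min=240, adl_min=0, art_min=5)
# A single-facility outage absorbed by the metro pair: no data loss, 2 min failover.
print(objectives.meets(data_loss_min=0, downtime_min=2, single_facility=True))      # True
# A regional disaster handled by the remote DR site: 10 min loss, 3 h recovery.
print(objectives.meets(data_loss_min=10, downtime_min=180, single_facility=False))  # True
```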
Richardson Data Center
Richardson Campus Site Map
• GDCP Strategy: Transformation, Growth, Resiliency
• Enablers: Intelligent Network Infrastructure, New Technology, New Services, Footprint for Future
[Richardson campus site map – tour stations: Station #1 RCDN9 lobby; Station #2 electrical control room; Station #3 generator & fuel tank yard; Station #4 TST (thermal storage tank); Station #5 build room; Station #6 Data Hall 1; Station #7 gallery. Also shown: cooling towers, chiller room, Data Halls 2–4, CD1 and CD2, EF1/EF2 (located 3rd floor), UPS-A distribution, UPS-B distribution.]
[Facility plan: electrical control room; generator & fuel tank yard (fuel tanks: 48,000 gal); generators with paralleling switchgear A and B; meter/switchgear; main switchboards A and B; cooling towers (1,000-ton capacity each) with cooling tower pumps; TST; chiller room (chillers: 1,000-ton capacity each); build room; Data Halls 1–4.]
[2nd-floor central utility plant (CUP): switchgear A1/A2 and B1/B2; UPS A1/A2 and B1/B2 with battery rooms; IT storage room; IT/facility control room and office space.]
[Electrical one-line: Utility-A and Utility-B, backed by generators, feed MSB-A and MSB-B, which are joined by a tie breaker; each side runs MSB → UPS → RPP to the critical load (servers). Each side is held to roughly 40% of maximum load so that either side can carry the full critical load if the other is lost.]
[Photos: “TD New” node, Nexus 7010, UCS for MVDC, ONS DWDM gear, Nexus 7018.]
• ONS gear
  • Supports DWDM for MVDC
  • Enables resiliency for data replication
  • Located in DH1 and DH2 for NPRD MVDC
• Nexus 7018
  • Provides scalability
  • Supports new UCS server bays
  • Supports new Teradata
  • Supports MVDC
• UCS clusters
  • NPROD
  • MVDC NPRD
  • UCS PROD
  • One server bay = 2 clusters, or 80 blades
• TD New node
  • Supports Cisco Data Warehouse
  • Installed in DH1
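As a quick check on the sizing above (one server bay = 2 clusters or 80 blades), the sketch below works through the arithmetic; the 8-blades-per-chassis figure is the usual UCS 5108 chassis capacity and is assumed here, since the slide does not state it.

```python
# UCS capacity arithmetic (sketch).
BLADES_PER_CHASSIS = 8        # UCS 5108 chassis holds 8 half-width blades (assumption)
CHASSIS_PER_CLUSTER = 5       # initial design: 5 chassis per cluster (from the rack-design slides)
CLUSTERS_PER_SERVER_BAY = 2   # from this slide: one server bay = 2 clusters

blades_per_cluster = BLADES_PER_CHASSIS * CHASSIS_PER_CLUSTER   # 40
blades_per_bay = blades_per_cluster * CLUSTERS_PER_SERVER_BAY   # 80
print(f"Blades per cluster:    {blades_per_cluster}")
print(f"Blades per server bay: {blades_per_bay}")  # matches the stated 80 blades
```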
• Physical to Virtual
• Increased Density
• Improved Power Utilization
• Lower unit cost
• Transformation to Cloud (Unified Management)
A tremendous step forward in technology:
• New processor architecture
• New platform architecture
• New memory subsystem
• New I/O subsystem
• New options with SSDs
An even bigger step forward for IT capability: performance, energy efficiency, and virtualization.
Typical memory vs. Cisco UCS memory (Xeon 5500)
• Typical system: either 12 DIMMs @ 1066 MHz (max 96 GB; 192 GB @ 800 MHz with quad-rank DIMMs), or 18 DIMMs @ 800 MHz (max 144 GB) at lower performance.
• Intel Xeon 5500 Series with UCS: 48 DIMMs @ 1066 MHz, max 384 GB per blade at full performance.
• Benefit: 2x capacity, lower costs, standard DIMMs, CPUs, and OS.
[Diagram: the Xeon 5500 can address only a fixed number of DIMMs; with UCS memory expansion, each DIMM the CPU looks for is made up of 4 standard DIMMs.]
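A short sketch of the capacity arithmetic behind these figures; the 8 GB DIMM size is inferred from the stated maxima rather than given on the slide.

```python
# Memory-capacity comparison (sketch). DIMM size inferred from the stated maxima.
DIMM_GB = 8

typical_slots = 12   # typical Xeon 5500 system at 1066 MHz
ucs_slots = 48       # UCS extended-memory blade at 1066 MHz

typical_max = typical_slots * DIMM_GB   # 96 GB
ucs_max = ucs_slots * DIMM_GB           # 384 GB

print(f"Typical system: {typical_max} GB")   # 96 GB
print(f"UCS blade:      {ucs_max} GB")       # 384 GB
print(f"Capacity ratio: {ucs_max / typical_max:.0f}x")  # 4x the 12-slot config; the slide's
# stated "2x" benefit compares against the 18-slot / 192 GB quad-rank config at reduced speed.
```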
UCS uses significantly fewer cables (March 2009)
[Photos: a traditional blade server, similarly configured to… Cisco Unified Computing.]
40% savings on cable infrastructure.
Notes: assumes a pre-UCS average V2P ratio of 10 to 1 and a post-UCS average ratio of 20 to 1 due to the memory expansion technology. Unified Fabric efficiency gains result from power optimization; UCS efficiency gains result from additional power benefits of UCS.
Cisco IT's Case Study – Summary (10,000 sq ft, 1 MW; March 2009)

Category                               Traditional     Unified Fabric    UCS
DC efficiency                          100%            130-150%          170-200%
Cabling                                $2.7 million    $1.6 million      $1.6 million
Physical server count per footprint    720             930-1,080         1,200-1,400
VM count                               7,200           9,300-10,800      12,000-28,000

Callouts: density efficiencies of 12,000 to 28,000 VMs in the same size DC; ~40% savings from cabling.
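The VM counts in the table follow from the physical server counts and the V2P ratios given in the notes above (10:1 pre-UCS, 20:1 post-UCS); a quick sketch of that arithmetic:

```python
# VM count = physical servers x virtual-to-physical (V2P) ratio, per the slide's notes.
def vm_range(servers, v2p):
    lo, hi = servers
    return lo * v2p, hi * v2p

print(vm_range((720, 720), 10))     # traditional:     (7200, 7200)
print(vm_range((930, 1080), 10))    # unified fabric:  (9300, 10800)
print(vm_range((1200, 1400), 20))   # UCS at 20:1:     (24000, 28000)
# The table's UCS lower bound (12,000) appears to correspond to 1,200 servers still
# at ~10:1, so the published 12,000-28,000 range brackets both ratios.
```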
Component management interfaces: Cisco (320 servers in 40 chassis across 6 racks) vs. HP (320 servers in 20 chassis across 6 racks)

Interface type                                        Cisco              HP
No. of chassis manager interfaces (OA)                0                  20
No. of chassis interconnect interfaces (VC Manager)   0                  5
No. of external switch interfaces*                    1 (UCS Manager)    4 (3rd-party ToR switch)
Total management interfaces                           1                  29

*External switches are required for both Cisco and HP in order for any blade server to communicate with any other blade server.
Embedded management vs. disjointed management
[Diagram: the UCS (“California”) Fabric Manager architecture vs. existing HP management architectures built from separate tools – VM manager, iLO licenses, Rapid Deployment, Virtual Connect Enterprise, Insight Dynamics, Onboard Administrator/CMM, Insight Recovery, Insight Control.]
• UCS: embedded device management across servers, network, storage access, and embedded software; management abstracted to the fabric interconnects; chassis have no state – they just hold blades; all blades share the same management domain.
• HP: multiple management solutions; uncoordinated, complex control-plane integration.
[Diagram: a multi-chassis access layer – server blades, adapters, and chassis modules attach to a unified fabric that carries both FC and Ethernet.]
• VIRTUALIZE, VIRTUALIZE, VIRTUALIZE
• ESX Farms built using UCS clusters (2 Viffs per cluster)
• Farms built using 192GB as a standard
• Provisioning virtual hosts via SRT/CITEIS
• Migrating SAN to NAS for ESX farms (evaluation)
• Bare Metal automation - next
• Storage Provisioning/Virtualization
UCS 3-Rack Design
• Initial design built out with UCS 6120's, with space in place for an easy upgrade to UCS 6140's.
• Cisco 3845 for OOB management of the fabric interconnects.
• Initial design supports 5 chassis per cluster, increasing to 10 chassis with 6140's.
• Each chassis provides 80 Gb/s of throughput, configured with 3 m and 5 m twin-ax; fiber is available for longer runs (legacy DC).
• Anticipated power consumption per chassis: 2.5 kW.
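Using only the figures on this slide (5 chassis per cluster initially, 80 Gb/s and ~2.5 kW per chassis), a short sketch of the per-cluster totals before and after the 6140 upgrade:

```python
# Per-cluster totals from the design figures on this slide (sketch).
CHASSIS_THROUGHPUT_GBPS = 80   # per chassis, over 3 m / 5 m twin-ax uplinks
CHASSIS_POWER_KW = 2.5         # anticipated power per chassis

for fabric_interconnect, chassis in [("UCS 6120 (initial)", 5), ("UCS 6140 (upgrade)", 10)]:
    print(f"{fabric_interconnect}: {chassis} chassis -> "
          f"{chassis * CHASSIS_THROUGHPUT_GBPS} Gb/s aggregate, "
          f"~{chassis * CHASSIS_POWER_KW:.1f} kW anticipated")
# UCS 6120 (initial): 5 chassis -> 400 Gb/s aggregate, ~12.5 kW anticipated
# UCS 6140 (upgrade): 10 chassis -> 800 Gb/s aggregate, ~25.0 kW anticipated
```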
UCS Single-Rack Design
• Initial design built out with UCS 6120's, with space in place for an easy upgrade to UCS 6140's.
• Cisco 3845 for OOB management of the fabric interconnects.
• Initial design supports 5 chassis per cluster, increasing to 10 chassis with 6140's.
• Anticipated power consumption per UCS: 6 kW to 10 kW; per chassis: 2.5 kW.
Nexus 70xx
• Provides scalability
  • Nexus 7010 = 256 10G ports, compared to Catalyst 6509 = 112 10G ports
  • Nexus 7018 = 512 10G ports, compared to Catalyst 6513 = 112 10G ports
• GREEN
  • Catalyst 6500 16-port 10G line card: 28.6 W per port
  • Nexus 7000 32-port 10G line card: 23.4 W per port
• Supports UCS server bays (PROD/NPRD/DMZ)
• Supports Data Warehouse
• Supports application migrations
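The scalability and power claims above reduce to simple per-chassis arithmetic; the sketch below uses only the port counts and watts-per-port figures quoted on the slide.

```python
# Per-slide figures: 10G port counts per chassis and watts per 10G port per line card.
platforms = {
    "Catalyst 6509": {"ports_10g": 112, "w_per_port": 28.6},
    "Nexus 7010":    {"ports_10g": 256, "w_per_port": 23.4},
    "Nexus 7018":    {"ports_10g": 512, "w_per_port": 23.4},
}

for name, p in platforms.items():
    total_kw = p["ports_10g"] * p["w_per_port"] / 1000
    print(f"{name}: {p['ports_10g']} x 10G ports, ~{total_kw:.1f} kW at the quoted W/port")
# Per port, the Nexus line-card figure is ~18% lower (23.4 W vs 28.6 W),
# on top of a 2-4x higher 10G port density per chassis.
```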
We have migrated 462 apps to the UCS platform to date.
By the end of Q2 FY11, the number will be around 487.
Key applications:
Cisco Channels Release Management
Cisco Network Academy
Cisco Tele-presence
Cisco Internal Directory
Cisco Internal Learning (e-Learning)
Key Advantages
• Simplified management: UCS & Linux OS management is simpler & more automated
• ~75% reduction in overall power consumed
• >$600K reduction in hardware/OS cost
• Increased CPU cores & memory using less power
• Performance increase as we adopt the latest Intel technology
• Cisco on Cisco: UCS servers capable of hosting previously Unix-only high-performance applications

Customer X pre-UCS:
• HP Superdome 32 shared for prod & test
  • Prod nPar: 20 CPU cores, 48 GB RAM
  • Test nPar: 2 CPU cores, 8 GB RAM
Customer X on UCS:
• Prod: 1 B440 M1 server (32 CPU cores, 256 GB RAM)
• Test: 1 B200 M1 blade (8 CPU cores, 96 GB RAM)
Cisco Texas DC2
James Cribari
[Construction photos: July 2009 and July 2010.]
TXDC2 was handed over by WPR on Dec. 1, 2010.
• Cisco multi-tenant, high density data center
• Total area – 162,000 Sq Ft
• Data floor – 27,000 Sq Ft (spread over two halls). No raised floor
• LED lighting in Data Hall; 100kW photovoltaic system
• 2N Systems: Primary Utilities, Generators, UPS; N+2: Chillers
• Building hardened to wind speed requirement of 175 mph
• Goal: Gold LEED Certified Data Center
• Geothermal wells
• Active-Active DC (Two 10 Gb dark fiber connections to RCDN-9 DC)
Category              Legacy Design                            Allen Design
Inlet temperature     68-72F (20-22C)                          78F (25.5C)
Air separation        None                                     Hot/cold air separation based on chimneys
Cooling               CHW system with CRAH units               Airside economizer (free cooling)
Density               1.5-4.5 kW/rack                          Average 10.5 kW/rack, up to 18 kW/rack
Power distribution    480V back of house, 208/120V to rack     4160V back of house, 415/240V to rack
UPS systems           Static                                   Dynamic (flywheel)
Certification         n/a                                      LEED
PUE                   Target 2.0                               1.34 annual (1.28-1.41)
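The PUE row uses the standard definition (total facility power divided by IT equipment power). A minimal sketch, assuming the 5 MW IT load quoted on the following slide; the facility-power numbers are chosen to reproduce the stated targets and are illustrative, not measured values.

```python
# PUE = total facility power / IT equipment power (standard definition).
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

IT_LOAD_KW = 5000  # 5 MW IT load (from the facility overview)

# Illustrative facility-power figures chosen to match the stated targets:
print(f"Legacy target PUE 2.0 -> {pue(10000, IT_LOAD_KW):.2f}")   # 2.00
print(f"Allen annual PUE      -> {pue(6700, IT_LOAD_KW):.2f}")    # 1.34
print(f"Best/worst months     -> {pue(6400, IT_LOAD_KW):.2f} / {pue(7050, IT_LOAD_KW):.2f}")  # 1.28 / 1.41
```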
• Stand-alone DC facility
• Separated from RCDN9 DC by around 20 miles
• 5 MW IT load (with additional 5MW possible in future)
• 200 watt / sf density yielding approximately 25K sf of data center floor
• Tier 3 per the Uptime Institute
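The ~25K sq ft figure follows directly from the load and density quoted above; a one-line check:

```python
# Data-hall area implied by the quoted IT load and design density.
it_load_w = 5_000_000     # 5 MW IT load
density_w_per_sqft = 200  # 200 W per square foot

print(it_load_w / density_w_per_sqft)  # 25000.0 sq ft of data center floor
```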
• 5.5 MW of critical load capacity (potential expansion to 11 MW in future – requiring additional buildout)
• 2N electrical service
• 2N UPS – dynamic/flywheel system
• 2N Generators (approx 96 hr run time)
• 4160V Distribution System
• 230V cabinet power distribution
• All IT equipment to be dual corded
• Power delivered via overhead busway system with in-rack smart CDUs (power strips)
• Hot air discharge containment via chimneys for all cabinets
• IT equipment operational target of 78 F inlet supply air temperature
• AHU capacity is based on an average 25 degree delta T across the IT equipment
• Overhead air delivery
• Airside economizer system
• N+2 chillers and cooling towers, N+1 pumps
• 15 minute thermal storage tank
• 48 hrs Domestic water storage
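Sizing AHU capacity to a 25-degree delta-T fixes how much air must move per kW of IT load, via the standard sensible-heat relation Q(BTU/hr) ≈ 1.08 x CFM x ΔT(°F). The sketch below applies it to the 10.5 kW average rack density from the design-comparison table; the formula is standard HVAC practice, not something stated in the briefing.

```python
# Airflow required to remove a given IT load at a fixed air-temperature rise.
# Sensible-heat relation for air near sea level: Q[BTU/hr] = 1.08 * CFM * dT[F].
BTU_PER_HR_PER_WATT = 3.412

def required_cfm(load_watts: float, delta_t_f: float) -> float:
    return load_watts * BTU_PER_HR_PER_WATT / (1.08 * delta_t_f)

RACK_KW = 10.5   # average design density from the comparison table
DELTA_T = 25.0   # design delta-T across the IT equipment

print(f"~{required_cfm(RACK_KW * 1000, DELTA_T):.0f} CFM per average rack")  # ~1327 CFM
# A wider delta-T (enabled by hot-air containment chimneys) moves less air per kW,
# which is part of why the containment design improves efficiency.
```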
• 2 diverse data ductbanks
• No raised access floor
• No suspended lay-in ceiling – exposed structure and equipment
• Overhead delivery of power and network
• Building hardened to 175 mph
• Vestibule Entry, no security turnstiles
• Layout will be designed to include a Customer Tour route and viewing panels into the Data Center & Building System rooms
• Target PUE of 1.3 or lower
• Office space with approximately 20 workstations and areas to support customer visits
• LEED NC v2.2 – target is Gold certification and current indications are that we will achieve this
• Utilize airside economizer systems
• Utilize elevated IT equipment inlet temperatures (78F)
• Implement Cisco EnergyWise to provide efficient facility energy management
• Utilize higher voltage power distribution
• Utilize dynamic UPS systems
• Solar cells on roof space to reduce utilities consumption
• Ground source heat pump systems for office area HVAC
Overhead Delivery / Partial Containment
First Floor Scheme