Datacentre Optimization for Cloud
Modular Design for Achieving Energy Efficiency
● Wesley Lim ● Data Center Efficiency Practice ● Sun Microsystems
2
Agenda
● Today's Datacentre Challenges
● Best Practices to Future-proof the Datacentre
● Sun's POD Principle
● Sun Datacentre Efficiency Consulting Services
3
Today's Datacentre Challenges
4
[Chart: Demand for users, services and access colliding with power, costs, space and heat – data center densities rising from 40 W/sq ft (2003) to 120 W/sq ft (2005) to 800 W/sq ft (next generation)]
Ever-changing IT demands
• Costs, demand and capacity are colliding...
> Innovation in technology and business demands more compute capacity
> Power and cooling costs are surging; capacity is insufficient
> Limits to existing floor space and new real estate
5
Rack Densities Increasing
2000: 14 x 3U servers – 28 processors, 2 kW heat load
2008: 48 blade servers – 768 processor cores, 28.5 kW heat load
27x more processors, >10x more heat
Reality: more compute per watt, more watts per rack
Old racks now fit into a single blade
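A quick arithmetic check of the ratios quoted above, computed directly from the slide's figures (treating 2008 cores against 2000 processors, as the slide does), sketched in Python:

print(768 / 28)    # ~27.4 -> the "27x more processors" figure
print(28.5 / 2.0)  # ~14.3 -> ">10x more heat" per rack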
6
Reality: Heterogeneous Data Centers
> Industry average is between 4-6 kW per cabinet
> 20 kW "skyscrapers" will be integrated
> Must deal with both small buildings and skyscrapers
7
The Nature of Data Centers
• Space, power, cooling and connectivity envelope
> Power and cooling are most important
> Don't deploy maximum capacity on day one, but still build for the future
• Think watts per rack, not per square foot
> Handle heterogeneity at the start
> Forget about raised floors for cooling
• Rate of change
> Be prepared to be more flexible
• Connectivity requirements
> Ethernet, Fibre Channel, InfiniBand
8
Best Practices to Future-Proof Your Data Center
Strategies to improve your Datacentre efficiency
9
Strategies for a New / Greenfield Facility
• Design with key considerations:
> Set your business goals
> Determine the cooling architecture – heat load per rack
> Review the power distribution approach
> Determine your cabling strategy
10
Tier-IV Data Center Power Flow
[Diagram: 2(N+1) power and cooling topology – the grid and generators feed HV transformers, ATS and UPSs (with external batteries) onto redundant A and B buses and PDUs serving the IT loads; chillers, pumps, a thermal store, CDUs, CRAHs and in-row coolers sit on redundant water loops A and B; the raised floor, support space, real estate, and fire, security and lighting systems round out the facility]
11
Tier-IV Data Center Power Losses
[Chart: Where the power goes in a 2(N+1) Tier-IV facility – IT systems 33%, chillers 33%, CRACs 13%, UPSs and transformers 12%, PDUs 5%, miscellaneous 2%, generator and switching 1%]
Right-sizing = matching the Tier of your data center facility to the business requirement
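Reading that breakdown, only about a third of the facility's total draw actually reaches the IT systems. A small sanity check of the slide's rounded figures in Python:

losses_pct = {
    "IT systems": 33, "Chillers": 33, "CRACs": 13, "UPSs & transformers": 12,
    "PDUs": 5, "Misc": 2, "Generator & switching": 1,
}
print(sum(losses_pct.values()))                             # 99 (slide values are rounded)
print(losses_pct["IT systems"] / sum(losses_pct.values()))  # ~0.33 of total power does useful IT work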
12
Cooling System Issues
• Effective to < 6 kW per rack
• Trend to increase raised floor height
• Hot spots need more open floor tiles
> Reduces cooling for other racks
> Mixing causes air delivery temperatures to be reduced
• 60 percent of cooling does no work
> Random intermixing of hot and cold air
Traditional design: computer room air conditioners (CRACs) or air handlers (CRAHs) on the data center perimeter, cooling through a raised floor
13
The Limits of Raised Floors
• High-density raised floor deployments make it difficult to maintain even and predictable server temperatures
> Heat from one server can impact the reliability of another!
14
Limits to Raised Floors
• Floor load considerations
> IT loads are increasingly heavy – racks are beginning to weigh more than 1 metric ton each – and designing raised floors will soon require calculation of point loads and rolling loads (but would a facility provider know these at the design phase?)
• Increasing infrastructure costs
> Increasing heat densities require much higher – and more expensive – raised floor heights
• Decreasing energy efficiency
> Raised floors can be very efficient at low heat densities but become much less efficient as air velocities and sub-floor pressures increase
• Increased design costs
> High-density raised floors require much more careful design (e.g. CFD modeling)
15
Limits to Raised Floors
• Fire suppression
● Fire suppression generally focuses on isolating smaller zones and releasing a clean agent to extinguish a fire within that zone.
● With a raised floor, you instantly double the number of zones you must monitor and deploy fire suppression systems into.
• Cleanliness
● Unless it was installed yesterday, all sorts of dirt, dust and debris will accumulate and lurk beneath every raised floor in actual production.
● Pollutants and contaminants in the air lead to a higher risk of failure.
16
Cooling System Solution
Containing the hot aisle and adding closely coupled cooling puts cooling capacity where it is needed
• The POD is self-contained from a cooling perspective
● It removes its own heat, matching the load
● Room air conditioning to meet habitation requirements
• The POD eliminates random intermixing of air
● Data center inlet temperatures can be raised safely
● The POD handles hot spots
● Modular, plug-in units can be added and moved to support heterogeneous, rapidly changing environments
17
Deploy Closely Coupled Cooling
Understanding the differences between room-oriented, row-oriented and rack-oriented cooling
• Targeted, right-sized cooling where it is required
• Efficient and optimized air flow
18
In-Row Cooling with Hot Aisle Containment
19
Overhead Cooling
20
Power Distribution Issues
• Consumes valuable floor space
• Imposes cooling load
• Cables impede airflow
• Changes
> Require time and expense
> Expose all connected systems to risk and downtime
> Are difficult to make, and cables are often abandoned in place
• Cable home runs waste copper
Traditional design: PDUs on the floor with whips going to racks, or breaker panels with whips
21
Electrical Busway Solution
• Requires no floor space or cooling
> Transformers moved outside the data center
• Snap-in cans with short whips
> Non-disruptive
> Reduced copper consumption
> No in-place abandonment
> Significant time reduction – from months to minutes
• Supports multiple Tier levels
> Use multiple busways
Modular overhead, hot-pluggable busway with conductors to handle multiple voltages and phases
22
Cabling Issues
• Difficult to change
> Cable trays are static
> Interconnect mechanisms change more frequently
• Huge amounts of cable per rack
> A rack of 1U or blade servers can have >300 cables
• Wastes copper, increases weight
• Increasing density makes the problem worse
Traditional design: home-run cabling from each rack to centralized intermediate distribution frames (IDFs)
23
Cabling Solution
• Easy to change
• Easy to scale
• Cable lengths are short
• Relatively small number of uplinks to aggregation-layer switching
• Localizing switching simplifies design and creates building blocks – rooms inside of rooms
• Can cut cabling costs by up to 75%
Patch panels with expansion capacity at each rack position; move IDFs into PODs to make them more self-sufficient
24
Sun's POD Principle
Building Block for the Future
25
Traditional Data Center Designs vs. Next-Generation Data Center
Traditional datacentre designs:
• Lower efficiency, high PUE, high OPEX
• As business and technology change, adapting the DC to future needs may require modifications
Next-generation datacentre designs:
• Based on Sun's POD Principle
• Higher efficiency (lower PUE), lower OPEX
• Modular, scalable, flexible
• Future-proofed
26
Modular POD Components
Physical Design
● Influenced by cooling, brick-and-mortar and/or container
Cooling – Closely Coupled
● In-row or overhead, hot-aisle containment, and passive
Power Distribution
● On-tap, overhead or under-floor busway
Cabling
● Localized switching recommended in each POD
27
Sun's POD Principle
• Unit of scale for a data center, ~20 racks – a building block
• Self-contained, independent of the room
• Efficiency achieved by putting resources where they are needed
> Bringing cooling closer to the heat source
• Flexibility from modular, snap-in systems that scale POD components up and down
A small, self-contained group of racks that optimizes power, cooling and cabling efficiencies around a common hot or cold aisle
28
KPIs & Strategies for a Service Provider
29
KPI: Power Usage Efficiency
• The power efficiency of a data center is described by the relationship between the power used by the IT equipment and the total power used by the facility
> Typically expressed as a number or as a percentage
> e.g. a PUE of 2.0 = a DCE of 50%*

Power Utilization Efficiency (PUE) = (IT Equipment Power + Infrastructure Power) / IT Equipment Power

Data Center Efficiency (DCE) = IT Equipment Power / (IT Equipment Power + Infrastructure Power) x 100%

* PUE and DCE are Green Grid terms, which differ from Uptime Institute terms, etc.
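A minimal sketch of the two metrics as defined above, in Python (the 500 kW figures are illustrative only, chosen to reproduce the slide's PUE 2.0 / DCE 50% example):

def pue(it_power_kw, infrastructure_power_kw):
    # Power Utilization Efficiency: total facility power divided by IT power
    return (it_power_kw + infrastructure_power_kw) / it_power_kw

def dce(it_power_kw, infrastructure_power_kw):
    # Data Center Efficiency: IT power as a percentage of total facility power
    return 100.0 * it_power_kw / (it_power_kw + infrastructure_power_kw)

# A facility drawing 500 kW for IT and another 500 kW for everything else:
print(pue(500, 500))  # 2.0
print(dce(500, 500))  # 50.0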
30
KPI: Data Center Efficiency
• Space Utilization Efficiency – how much of the space you have built is usable/chargeable

Space Utilization Efficiency = Chargeable floor space / Total facility floor space

• Operating Leverage
> Higher operating leverage means greater fixed-cost commitments that must be met even when utilization / chargeable volume declines
> The objective is to lower fixed costs – i.e. unavoidable running costs

Operating Leverage = Fixed Costs / Total Costs
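A small illustrative calculation of both ratios in Python (the floor-space and cost figures are hypothetical):

def space_utilization_efficiency(chargeable_sqft, total_sqft):
    # Share of the built facility that is usable/chargeable floor space
    return chargeable_sqft / total_sqft

def operating_leverage(fixed_costs, total_costs):
    # Share of total cost that must be met even when chargeable volume declines
    return fixed_costs / total_costs

# Hypothetical: 6,000 of 10,000 sq ft is chargeable; $7M of a $10M annual cost base is fixed
print(space_utilization_efficiency(6_000, 10_000))  # 0.6
print(operating_leverage(7_000_000, 10_000_000))    # 0.7 -> a high fixed-cost commitment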
31
KPI: Data Center Efficiency
• Marginal Cost and Marginal Revenue
> Marginal Cost (MC) is the change in total cost associated with a unit change in quantity.
> Marginal Revenue (MR) is the rate of change in total revenue with respect to quantity sold.
> Marginal analysis aids decision making and can be applied to financial decisions. To maximize profits, set MR = MC.

Marginal Cost = Change in Total Cost per kW of new IT load hosted = dTC / dQ (kW)

Marginal Revenue = Change in Total Revenue per kW of new IT load hosted = dTR / dQ (kW)
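In practice MC and MR can be approximated as finite differences per kW of new IT load hosted. The figures below are hypothetical and only illustrate the MR = MC decision rule:

def marginal_per_kw(total_before, total_after, load_before_kw, load_after_kw):
    # Approximate d(Total)/dQ as a finite difference per kW of new IT load
    return (total_after - total_before) / (load_after_kw - load_before_kw)

# Hypothetical: hosting 50 kW of new load adds $60,000/yr of cost and $75,000/yr of revenue
mc = marginal_per_kw(1_000_000, 1_060_000, 500, 550)  # 1200.0 $/kW/yr
mr = marginal_per_kw(1_200_000, 1_275_000, 500, 550)  # 1500.0 $/kW/yr
print(mc, mr)  # MR > MC, so the extra load adds profit; keep expanding until MR = MC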
32
Top 5 Efficiency Strategies for a Facility Service Provider / SSO
1) Improve cooling efficiency
2) Improve power distribution
3) Match infrastructure to SLAs by multi-tier/zone
4) Growing with ease – applying modularity
5) Cost control – metering & charging
33
1. Improve Cooling Efficiency
• Right-size the cooling system
> Measure and monitor cooling capacity vs. IT load
> Implement capacity management for the cooling system
• Optimize chiller performance
> Use free cooling (where available), airside or waterside economizers, variable frequency drives, thermal buffers
• Optimize air flow within the room
> Ensure the room is well sealed, use hot/cold aisles, seal cut-outs, optimize perforated tile layout, increase raised floor height, use blanking panels, high-airflow doors and hot-air containment
• Adopt the new ASHRAE guidelines
34
New ASHRAE Guidelines
35
2. Improve Power Distribution
• Right-size the power delivery system
> Monitor power usage throughout the system, e.g. UPS, PDU, power strip
> Implement capacity planning for the power delivery system
> Consider charge-backs for power consumption
• Deliver higher-voltage circuits to the rack
> E.g. 3-phase, 208 V today – perhaps higher in the future
> Minimizes power losses and infrastructure provisioning (fewer cables, conduits, breakers, etc.)
• DC power is not recommended at this point
> Higher costs, insufficient standards, fewer products, fewer trained people, etc.
> Energy savings are not substantially better than high-voltage power delivery
36
3. Match Infrastructure to SLAs by Multi-Tier/Zone
• Customer segmentation – infrastructure capacity should be right-sized for your customer's IT load, service levels and risk tolerance
> e.g. capacity (kW), availability (%), scheduled downtime (min/yr), single failure vs. multi-failure
• Infrastructure must be over-provisioned for sites with high Tier ratings, which has a big impact on power efficiency
> e.g. a Tier-IV site at 30% utilization is only 33% efficient*
• New architectures are evolving to address this
> e.g. multi-tier, modified Tier-IV
> Zoning the facility into different tiers caters to customer segments with differing needs, e.g. a high-density zone with localized/closely coupled cooling and a general/low-density zone with room-oriented cooling
• These architectures require less infrastructure and energy, but are more complex to design, build and maintain
37
4. Growing with Ease – Applying Modularity
• Minimize CAPEX investment by enabling growth through modular design and allowing charge-back to your customers
> Power distribution
> Cooling systems
> Cabling structure
5. Cost Control – Metering & Charging
• Unable to anticipate what IT load each new customer will bring in?
• Meter actual usage at the rack / suite level
> Charge back for actual power utilization
38
Sun Datacentre Efficiency Consulting Services
39
Sun Datacentre Efficiency Portfolio
*All projects are turn-key solutions in which Sun oversees the entire project management of the solution, including vendor and partner management, the deployment schedule, leveraging proven best practices, quality assurance and delivering business results to budget.

Datacenter Strategy
• IT business strategy consulting
> Eco policy / social responsibility
> Legislative, DR
> Availability / agility / TTM
• Open Work consulting
• Long-term planning
• Refresh or replace
• Technology options
• Sourcing strategies

Datacenter Design & Build
• Site selection & site acquisition
• Design services
> Lifecycle planning
> Building Information Modeling
• Build, commission, move, populate

Datacenter Modernization
• Application modernization
• Infrastructure modernization

Datacenter Optimization
• Eco consulting & assessments
• Consolidation services
• Migration services
• Virtualization services

Datacenter Operations
• Managed & remote operations
• Virtualization best practices
40
DCE Cloud Services – Evaluate your readiness
● Business
> Key business drivers & KPIs
> Economic environment
> Competitive landscape
> Capex and Opex targets
> Asset management & disposal
> Compliance, security, privacy
> Governance, decision making
● Organization / Culture
> Structure & logistics
> Culture orientation & emphasis
> Decision-making topology
> Security & privacy policies, training
> Compliance, governance, training
> Information sharing & communication
> Informal norms & policies
● Technology
> 24-month technology roadmap
> Current "cloud-like" initiatives / POCs
> Billing, metering, SLAs
> Deployment & support process
> Management, tracking, reporting
> Compliance, security, privacy
> Resources & training
● Operations
> IT infrastructure support
> Datacenter/facility plans: consolidation, migration; build, co-locate, other
> Deployment & support process
> Management, tracking, reporting
> Resources & training
41
Data Centre Facility Design (Greenfield)
• Core Concepts
> Holistic perspective – balancing availability vs. efficiency
> Scalable, repeatable, modular architecture
> Modular right-sizing of power and cooling
> Simplified, flexible cabling and plumbing
> Facilitates growth
> Vendor independent
> Lifecycle planning – flexible, scaling cost with use
> Building Information Modelling (BIM)
Entering a new age of the engineered datacentre
42
Data Center design
Example of BIM model for one engagement
43
Data Center design
Example of site model for one engagement
44
Data Center design
Example of data center layout for one engagement
45
Data Center design
Example of data center layout for one engagement
46
The POD Architecture
A group of racks or benches with a common hot or cold aisle, used as a building block to simplify data center design for power, cooling & cabling
Vendor independent, slab or raised floor, flexible, scalable, high density
48
Additional Thoughts
49
Best Practices = Competitive Weapon
• Align Facilities, IT & Engineering
> Partnering nets significant short-term & long-term savings
> http://www.sun.com/aboutsun/environment/docs/aligning_business_organizations.pdf
• Hardware replacement
> Apply new hardware solutions and extend the life of your DC
> http://www.sun.com/aboutsun/environment/docs/creating_energy_efficient_dchw_consolidation.pdf
• Simplify datacentre design with the POD concept
> Power: modular, scalable, smart
> http://www.sun.com/aboutsun/environment/docs/powering_energy_efficientdc.pdf
> Cooling: adaptable, scalable, smart
> http://www.sun.com/aboutsun/environment/docs/cooling_energy_effiicientdc.pdf
> Cabling: distributed vs. centralized
> http://www.sun.com/aboutsun/environment/docs/connecting_energy_efficientdc.pdf
> Measurement: visibility gives you the power to control
> http://www.sun.com/aboutsun/environment/docs/accurately_measure_dcpower.pdf
• Video: http://www.sun.com/aboutsun/environment/media/datacenter_tour.xml
50
Sun Blueprint
• Released June 10, 2008
• Download: http://sun.com/blueprints
• 1st of 9 chapters to be released over the next 12 months
Wesley Lim
[email protected]
Thank You
52
Other Successful Sites Globally
Camberley (UK)
Prague (CZ)
Bangalore (India)
Trondheim (Norway)
●Consolidation and relocation of EMEA mission critical datacenter●80% space reduction – 2,200 ft2 (204 m2) down to 450 ft2 (42 m2)●3,600 ft2 (334 m2) total build-out●50% utility reduction●Base cooling with Liebert XD (first install of XDO in EMEA)
●Datacenter supporting the growing engineering site●2,600 ft2 (242 m2) of modular datacenter●Liebert XD, base cooling under 10%, first install of XD in EMEA●Highly efficient, expandable datacenter
●50%+ reduction on equipment footprint●17% power reduction●154% Compute capacity increase●3,000 ft2 (280 m2) Datacenter●16 Datacenters down to 1●Innovative design in the region - PCQuest award for best IT implementation in 2007
●Consolidation of four R&D labs into new datacenter●Database stress testing●High density – 10kW/rack, expandable to 16kW/rack●1,190 ft2 (110 m2) datacenter●Second highest density in our portfolio
Reality: Modular, scalable, fully redundant datacenter supporting long-term growth
53
Data Center Space Constraints
•The single biggest reason cited for running out of capacity is insufficient space (39% of respondents)
54
Data Center Heat Densities
55
Why Do We Care About Heat?
• Hot servers are less reliable than cool ones
> For every increase of 7°C above 21°C, long-term electronics reliability falls by 50%
• To maintain correct server temperatures, ASHRAE recommends server inlet temperatures remain between 20°C and 25°C
• The temperature of a server is directly related to the power it uses (kW) and the air it draws (CFM)
ASHRAE: American Society of Heating, Refrigerating and Air-Conditioning Engineers
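One way to read the 7°C rule of thumb above is as a relative reliability factor that halves for every 7°C of inlet temperature above 21°C. This is just an interpretation of the slide's claim, not an ASHRAE formula:

def relative_reliability(inlet_temp_c, baseline_c=21.0):
    # Relative long-term electronics reliability, halving per 7 degC above the baseline
    if inlet_temp_c <= baseline_c:
        return 1.0
    return 0.5 ** ((inlet_temp_c - baseline_c) / 7.0)

print(relative_reliability(21))  # 1.0
print(relative_reliability(28))  # 0.5  (one 7 degC step above the baseline)
print(relative_reliability(35))  # 0.25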
56
Raised Floor Cooling
• "The hardware manufacturer shall design the equipment ... for a temperature change from intake to exhaust (delta T) of not less than 15°F nor more than 20°F."* – Uptime Institute
CFM: cubic feet per minute, a measure of air volume flow rate, often used in measuring airflow from cooling diffusers
57
Cooling Capacity per Tile
• 4 kW of IT load requires about 600 CFM of airflow – pretty much the maximum for a single tile!
> 3.5 kW is 1 ton of AC per tile!
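Those per-tile numbers follow from the common sensible-heat approximation q [BTU/hr] ≈ 1.08 x CFM x delta-T (°F). The sketch below assumes the 20°F server delta-T quoted on the previous slide; the 1.08 factor applies to air at standard conditions:

BTU_PER_HR_PER_KW = 3412.14

def required_cfm(it_load_kw, delta_t_f):
    # Airflow needed to remove a sensible heat load at a given intake-to-exhaust delta-T
    return it_load_kw * BTU_PER_HR_PER_KW / (1.08 * delta_t_f)

print(required_cfm(4.0, 20.0))     # ~632 CFM -> roughly the ~600 CFM per-tile maximum
print(12_000 / BTU_PER_HR_PER_KW)  # ~3.52 kW -> "3.5 kW is 1 ton of AC" (12,000 BTU/hr)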
58
In-Row Cooling Requires Less Power – Example Using Overhead Cooling Units
[Chart: Annual power consumption, expressed as kW of power needed to cool 1 kW of sensible heat, broken down into fan, pump (XD), pump (CW) and chiller components – traditional chilled-water CRACs versus Liebert XD/XDV overhead rack-based cooling at 100% and 50% capacity; the XD/XDV units draw roughly 27% less power at 100% capacity and 32% less at 50% capacity]