Critical architecture choices and the dramatic impact of the physical layer on Data Centre performance
Alberto Zucchinali, RCDD, EMEA DC Solutions & Services Manager
The Siemon Company
Your infrastructure: a delicate balance
• Civil works
• Data
• Electrical
• Cooling
The importance of the Physical layer
• Would you run something like this... on something like this?
• Is your infrastructure reliable?
• Is your infrastructure a bottleneck?
Why should we care about the physical layer?
• Big impact on:
  – Cooling/power: the cost of powering and cooling servers is currently one and a half times the cost of the server hardware (*)
  – Management
  – Documentation
  – Troubleshooting
(*) Uptime Institute
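The one-and-a-half-times figure quoted above translates directly into a rough lifetime-cost sketch; the hardware price below is a hypothetical example value, not a figure from the slides.

```python
# Rough lifetime-cost sketch for the Uptime Institute figure above:
# powering and cooling a server costs ~1.5x the server hardware.
# The hardware price is a hypothetical example value.
server_hw_cost = 5000.0          # example purchase price, EUR
power_cooling_ratio = 1.5        # ratio quoted on the slide

power_cooling_cost = server_hw_cost * power_cooling_ratio
lifetime_total = server_hw_cost + power_cooling_cost
print(f"hardware {server_hw_cost:.0f} EUR, "
      f"power+cooling {power_cooling_cost:.0f} EUR, "
      f"total {lifetime_total:.0f} EUR")
```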
Why should we care about the physical layer?
• Planning for the future
  – Move, Add & Change (MAC) work is more expensive than project-related channels
  – MAC quality may or may not be as good
  – The standards tell us to cable accommodating growth over the life of the system (cabling supports 2-3 generations of electronics).
Facts and figures
• Cooling and electrical costs represent up to 44% of a data centre's total cost of ownership
• Information technology (IT) power usage is today over 2% of all electrical power consumption and contributes over 2% of carbon emissions, the same as the aircraft industry
• 50% of power for data centres and computing sites is used to drive cooling, 29% for servers and 5% for networking.
Source: Uptime Institute
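The 50/29/5 split quoted above becomes concrete when applied to a facility's total draw; the 1000 kW total below is an assumed example, and the remainder is lumped as "other" (distribution losses, lighting, etc.).

```python
# Apply the Uptime Institute split quoted above (50% cooling,
# 29% servers, 5% networking) to a facility's total power draw.
# The 1000 kW total is an assumed example value, not from the slides.
total_kw = 1000.0

split = {"cooling": 0.50, "servers": 0.29, "networking": 0.05}
split["other"] = round(1.0 - sum(split.values()), 2)  # remainder: 16%

breakdown_kw = {load: share * total_kw for load, share in split.items()}
for load, kw in breakdown_kw.items():
    print(f"{load:>10}: {kw:6.1f} kW")
```

Half the bill goes to moving heat, which is why the cooling measures on the next slides matter more than any single hardware choice.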
How to improve Cooling / Energy savings
• Put hot equipment at the bottom of the rack where air is cooler
  – the failure rate of equipment in the top 1/3 of the rack is 3x that of equipment in the lower 2/3 (*)
• Use blanking panels
• Leave no spaces between racks
• Block cable penetrations with brush guards
• Place hot equipment around your data centre to distribute cooling and power load
• Place hottest equipment closer to chillers
• Do not allow mixing of hot and cold air
• Move cables away from active equipment!
• Determine what really needs to be redundant!
• Turn off servers that don't need to be on (Wake-on-LAN)
• Decommissioning!
(*) Uptime Institute
What do the standards recommend? ISO/IEC 24764, EN 50173-5, TIA-942
• "..channels shall be run accommodating growth so these areas do not have to be revisited"
• All equipment must be connected via a structured system; top of rack is an exception
• Category 6A / Class EA minimum (UTP or F/UTP), Category 7 and 7A / Class F and FA
• OM3 / single-mode fibre minimum for fibre optics.
10G options
• Fibre
  – 10GBASE-Ex, 10GBASE-Lx, 10GBASE-Sx
  – 10GBASE-LX4, 10GBASE-LRM
• Copper
  – 10GBASE-T
  – 10GBASE-CX4, 10G SFP+
Fibre / copper...?
• The 40 nm 10GBASE-T PHY is there (<4 W/port)
• FC-BaseT, FCoE/FCoCEE...
• 40G and 100G finalized (IEEE 802.3ba, 2010)
• EEE (IEEE 802.3az, 2010): new technology to address power
• PoE (Power over Ethernet) Plus
• Autonegotiation for copper.
Power backoff (PBO) with 10GBASE-T
Source: Solarflare Communications
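As a hedged illustration of what power backoff means at the rack level, the sketch below compares full-reach and short-reach per-port power. Both wattage values are assumptions for illustration only (the slides state just that a 40 nm PHY draws under 4 W/port); they are not Solarflare data.

```python
# Illustrative sketch of 10GBASE-T power backoff (PBO): the PHY can
# lower transmit power on short channels ("short-reach mode").
# Both wattages are ASSUMED example values, not vendor measurements.
FULL_REACH_W = 4.0     # assumed per-port power at full 100 m reach
SHORT_REACH_W = 2.5    # assumed per-port power in short-reach mode

def port_power_w(channel_m: float, short_reach_limit_m: float = 30.0) -> float:
    """Assumed per-port power for a given channel length in metres."""
    return SHORT_REACH_W if channel_m <= short_reach_limit_m else FULL_REACH_W

# Saving on a 48-port switch if every channel is 15 m instead of 100 m:
ports = 48
saving_w = ports * (port_power_w(100.0) - port_power_w(15.0))
print(f"assumed saving: {saving_w:.0f} W across {ports} ports")
```

Short in-row channels are what make this mode usable, which is one reason cabling topology and energy cost are linked.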
Why should I not use Cat 6?
• Standards recognize category 6A as the minimum grade of cabling for new data centres
• No 10GBASE-T application support assurance over short runs of category 6
  – alien crosstalk (ANEXT) depends on cable density
• IEEE 802.3 and ATM are not investigating the development of new Ethernet or other data transmission solutions for deployment over Cat 6
• Cat 6 cannot support a full 100 metre 10GBASE-T channel, limiting design flexibility
• Cat 6 cannot support power-saving short-reach mode (aka data centre mode)
• Cables with reduced-diameter conductors cannot dissipate heat as well as category 6A or higher systems.
Why should copper be shielded?
• Performance headroom ensures solid 10 Gb/s application support
• 100 times less susceptible to interference than UTP
• Eliminates alien crosstalk, so field testing is never required
• Overcomes performance-robbing heat generated by PoE and PoE Plus.
(Data provided courtesy of NEXANS/Berk-Tek)
How to connect your devices? Any to all
• Requires extra patch panels and slightly more cable
• Virtually eliminates moves, adds and changes to the fixed physical infrastructure
• Can co-exist with top-of-rack switching
• Structured cabling can be used for KVM and management
• Changes are patch cords/jumpers only
• Allows pathways to be properly designed.
Patch panels in switch cabinets?
• Easier:
  – cable tracing
  – blade changing when required
  – wire management
• Colder air at the bottom of the cabinet is better for equipment
• May be required due to power/cooling constraints
• Alternative: harness cables (male-to-female).
In-row patching (EoR / MoR)
[Diagram: storage and server cabinets connected over copper and fibre to mid/end-of-row patching fields]
In-row patching
• Patching fields are located mid/end of row
• In this scenario patch cords are kept to minimum length, and generally switch blades (or switches) are dedicated to a certain cabinet
• May lead to resource waste for ports purchased and not used
• Acceptable for smaller installations running 10/100/1000 connections
• Cabling quality can make the difference with minimum cable lengths and 10GBASE-T
• In some instances, field-terminated or non-standard-length patch cords are used to provide the desired aesthetic appeal. JUST SAY NO!
• The number of cabinets that can be placed in a row is dictated by available switch ports and patching space
• The number of switches required may increase if the row becomes full before all switch ports are used
• Allows for more efficient chassis switches.
Top of Rack (Point to point)
[Diagram: a ToR switch in each cabinet connected point-to-point over copper and fibre to its servers]
Top of Rack (Point to point)
• NOT recommended outside of specific applications
• Oversubscription of ports is likely
• Landlocked in new server deployment (max distance of a few metres for copper)
• High risk of creating spaghetti
• Daily management distributed over all DC cabinets
• MAC work is the most expensive work in a data centre
• Will likely need a structured system anyway (KVM, management, new technology)
• Unused ports still draw power!
Conclusion
• The right choice of a future-proof physical layer solution will have a big impact on your Data Centre's costs
• Copper (shielded) 10G and 40/100G fibre solutions will be increasingly present in any modern data centre
• A careful design of cabling and cabinets will positively influence the flexibility, modularity and scalability of your data centre.
THANKS FOR YOUR KIND ATTENTION!!