
DATACENTER DESIGN & INFRASTRUCTURE LAYOUT

OF

PATNI COMPUTER SYSTEMS LTD.

PROJECT BY


Name of the Learner: Mr.

Registration No:

Program Name: PGDBA

Address:

Contact No:

PART – I

Summary

1. Introduction
2. Objectives and Scope
3. Limitations
4. Theoretical Perspective
5. Methodology and Procedure
6. Analysis of Data
7. Findings, Inferences and Recommendations
8. Conclusions


PART – II

Overview of the Organization

1. An Overview of the Organization
2. Patni Computer Systems Ltd.
3. Operational Statistics

PART – III

Project Overview

1. Patni Data Center Design
2. Patni Data Center Infrastructure Layout
3. Bibliography


NO OBJECTION CERTIFICATE

This is to certify that Mr. (SCDL Regn: 200621368) has been an employee of this organization for the past two years.

We have no objection to his carrying out the project work titled “Datacenter Design & Infrastructure Layout” in our organization and submitting the same to the Director, SCDL, in partial fulfillment of the Post Graduate Diploma in Business Administration (PGDBA) program.

We wish him all the success.

Place: Noida, UP    Signature of the Competent Authority of the Organization

Date: 15th Dec, 2009


DECLARATION BY THE LEARNER

This is to certify that I have carried out this project work myself in partial fulfillment of the Post

Graduate Diploma in Business Administration (PGDBA) program of SCDL.

The work is original, has not been copied from anywhere else and has not been submitted to

any other University/Institute for an award of any Degree/Diploma.

Place: Faridabad    Signature:    Name:

Date: 15th Dec, 2009 SCDL Regn:


CERTIFICATE OF SUPERVISOR (GUIDE)

Certified that the work incorporated in this project report “Datacenter Design &

Infrastructure Layout” submitted by Mr. (SCDL Regn:) is his original work and completed

under my supervision. Material obtained from other sources has been duly acknowledged in

the Project Report.

Place: Noida, UP Signature of Supervisor

Date: 15th Dec, 2009 Designation: Email:

1. Introduction

1.1 A data center is home to the computational power, storage, and applications necessary to support an

enterprise business. The data center infrastructure is central to the IT architecture, from which all


content is sourced or passes through. Proper planning of the data center infrastructure design is critical,

and performance, resiliency, and scalability need to be carefully considered.

Another important aspect of the data center design is flexibility in quickly deploying and supporting

new services. Designing a flexible architecture that has the ability to support new applications in a

short time frame can result in a significant competitive advantage. Such a design requires solid initial

planning and thoughtful consideration in the areas of port density, access layer uplink bandwidth, true

server capacity, and oversubscription, to name just a few.

The data center network design is based on a proven layered approach, which has been tested and

improved over the past several years in some of the largest data center implementations in the world.

The layered approach is the basic foundation of the data center design that seeks to improve

scalability, performance, flexibility, resiliency, and maintenance. Figure 1-1 shows the basic layered

design.

A Data Center Architecture includes the layout of the boundaries of the room (or rooms) and the

layout of IT equipment within the room. Most users do not understand how critical the floor layout is

to the performance of a data center, or they only understand its importance after a poor layout has

compromised the deployment. The floor plan either determines or strongly affects the following

characteristics of a data center:

• The number of rack locations that are possible in the room

• The achievable power density

• The complexity of the power and cooling distribution systems

• The predictability of temperature distribution in the room

• The electrical power consumption of the data center


There are five core values that are the foundation of a data center design philosophy: simplicity,

flexibility, scalability, modularity, and sanity. The last one might give you pause, but if you’ve had

previous experience in designing data centers, it makes perfect sense.

Design decisions should always be made with consideration to these values.

1.2 Keep the Design as Simple as Possible

A simple data center design is easier to understand and manage. A basic design makes it simple to do

the best work and more difficult to do sloppy work. For example, if you label everything—network

ports, power outlets, cables, circuit breakers, their location on the floor—there is no guesswork

involved. When people set up a machine, they gain the advantage of knowing ahead of time where the


machine goes and where everything on that machine should be plugged in. It is also simpler to verify

that the work was done correctly. Since the locations of all of the connections to the machine are pre-

labeled and documented, it is simple to record the information for later use, should the machine

develop a problem.

Figure 1.2 Simple, Clean, Modular Data Center Equipment Room

1.3 Design for Flexibility

Nobody knows where technology will be in five years, but it is a good guess that there will be some

major changes. Making sure that the design is flexible and easily upgradable is critical to a successful

long-term design.

Part of flexibility is making the design cost-effective. Every design decision has an impact on the

budget. Designing a cost effective data center is greatly dependent on the mission of the center. One

company might be planning a data center for mission critical applications, another for testing large-

scale configurations that will go into a mission critical data center. For the first company, full backup

generators to drive the entire electrical load of the data center might be a cost-effective solution. For

the second company, a UPS with a 20-minute battery life might be sufficient. Why the difference? If

the data center in the first case goes down, it could cost the company two million dollars a minute.

Spending five million on full backup generators would be worth the expense to offset the cost of

downtime. In the second case, the cost of down time might be $10,000 an hour. It would take 500

hours of unplanned downtime to recoup the initial cost of five million dollars of backup generators.
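The break-even arithmetic above is easy to check. The short sketch below uses only the illustrative figures from this example (not measured data) to show how quickly each kind of facility recoups a five-million-dollar generator investment:

```python
# Break-even check for the two downtime scenarios described above.
# All figures are the report's illustrative numbers, not measured data.

generator_cost = 5_000_000                 # USD, full backup generators

mission_critical_cost_per_min = 2_000_000  # USD per minute of downtime
test_lab_cost_per_hour = 10_000            # USD per hour of downtime

breakeven_minutes = generator_cost / mission_critical_cost_per_min   # 2.5 minutes
breakeven_hours = generator_cost / test_lab_cost_per_hour            # 500 hours

print(f"Mission-critical site: generators pay for themselves after "
      f"{breakeven_minutes:.1f} minutes of avoided downtime")
print(f"Test lab: generators pay for themselves only after "
      f"{breakeven_hours:.0f} hours of avoided downtime")
```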


1.4 Design for Scalability

The design should work equally well for a 2,000, 20,000, or 2,000,000 square foot data center. Where

a variety of equipment is concerned, the use of watts per square foot to design a data center does not

scale because the needs of individual machines are not taken into consideration. This report describes

the use of rack location units (RLUs) to design for equipment needs. This system is scalable and can

be reverse engineered.
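As a rough illustration of the RLU idea (the field names and example values below are hypothetical, not taken from the report), each rack location can be described by what its contents actually need, and the room totals become the sum of those profiles rather than a blanket watts-per-square-foot figure:

```python
# A minimal sketch of rack location units (RLUs): describe each rack location by its
# actual requirements and aggregate, instead of using watts per square foot.
# Field names and example values are hypothetical.
from dataclasses import dataclass

@dataclass
class RackLocationUnit:
    power_kw: float      # electrical load drawn at this rack location
    cooling_kw: float    # heat to be removed (roughly equal to power_kw)
    weight_kg: float     # contribution to floor loading
    network_ports: int   # cabling / switch-port demand

def room_totals(rlus):
    """Aggregate the demands the room infrastructure must supply."""
    return {
        "power_kw": sum(r.power_kw for r in rlus),
        "cooling_kw": sum(r.cooling_kw for r in rlus),
        "weight_kg": sum(r.weight_kg for r in rlus),
        "network_ports": sum(r.network_ports for r in rlus),
    }

# Example: 20 standard racks plus 4 high-density racks
layout = [RackLocationUnit(4, 4, 900, 48)] * 20 + [RackLocationUnit(12, 12, 1100, 96)] * 4
print(room_totals(layout))
```

Because the totals are simply sums of per-rack profiles, the same approach scales from a small room to a very large one, and it can be reversed: dividing the available power or cooling by the per-RLU demand gives the number of rack locations a given infrastructure can support.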

1.5 Use a Modular Design

Data centers are highly complex things, and complex things can quickly become unmanageable.

Modular design allows you to create highly complex systems from smaller, more manageable building

blocks. These smaller units are more easily defined and can be more easily replicated. They can also

be defined by even smaller units, and you can take this to whatever level of granularity necessary to

manage the design process. The use of this type of hierarchy has been present in design since

antiquity.

1.6 Keep Your Sanity

Designing and building a data center can be very stressful. There are many things that can, and will, go

wrong. Keep your sense of humor. Find ways to enjoy what you’re doing. Using the other four values

to evaluate design decisions should make the process easier as they give form, order, and ways to

measure the value and sense of the design decisions you’re making. Primarily, they help to eliminate

as many unknowns as possible, and eliminating the unknowns will make the process much

less stressful.

2. Objectives & Scope

2.1 Objectives:

• To ensure that your data center is built to meet your business requirements
• To meet known data center standards and best practices
• To meet the principal goals of data center design, which are flexibility and scalability, covering site location, building selection, floor layout, electrical system design, mechanical design, and modularity

2.1.1 Provide all designs as mentioned in the scope of services

Many factors go into the design of a successful data center.


Careful and proper planning during the design phase will ensure a successful implementation resulting

in a reliable and scalable data center, which will serve the needs of the business for many years.

Remember the times when you had to keep your computer in a cool, dust-free room? Your personal

computer may not need such a cozy environment anymore, but your precious servers do. The

datacenter is what houses your organization’s servers.

The benefits of a datacenter are many. But a datacenter may not be necessary for every organization.

Two things dictate the requirement for a datacenter: the number of servers and network devices, and your ability to manage the datacenter. If you find monitoring and managing a datacenter overwhelming, you

need not forsake the datacenter. Instead, opt for a hosted one, where your datacenter is kept within the

premises of a vendor who also takes care of its management and security.

Ideally, large businesses and the mid-size ones that are growing rapidly should go for datacenters. For

small businesses, single servers or cluster servers can provide all that they need.

Now, how does a datacenter help your business?

• It offers high availability. A well-managed datacenter ensures that business never suffers because of one failure somewhere.

• It is highly scalable. The datacenter offers support as the business needs change.

• It offers business continuity. Unexpected problems and server failures don’t deter the functioning of your business in any way.

2.1.2 One of the biggest criteria of owning a datacenter is your ability to manage it. Datacenter

management, however, does not necessarily depend on you. You can hire professionals to manage it

for you. In fact, you don’t even need to keep the datacenter in your premises; it can be kept within the

premises of the vendor.

IT operations are a crucial aspect of most organizational operations. One of the main concerns is

business continuity; companies rely on their information systems to run their operations. If a system

becomes unavailable, company operations may be impaired or stopped completely. It is necessary to

provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption.

Information security is also a concern, and for this reason a data center has to offer a secure

environment which minimizes the chances of a security breach. A data center must therefore keep

high standards for assuring the integrity and functionality of its hosted computer environment. This is

accomplished through redundancy of both fiber optic cables and power, which includes emergency

backup power generation.


2.1.3 When designing a large enterprise cluster network, it is critical to consider specific objectives.

No two clusters are exactly alike; each has its own specific requirements and must be examined from

an application perspective to determine the particular design requirements. Take into account the

following technical considerations:

• Latency—In the network transport, latency can adversely affect the overall cluster

performance. Using switching platforms that employ a low-latency switching architecture

helps to ensure optimal performance. The main source of latency is the protocol stack and NIC

hardware implementation used on the server. Driver optimization and CPU offload techniques,

such as TCP Offload Engine (TOE) and Remote Direct Memory Access (RDMA), can help

decrease latency and reduce processing overhead on the server.

Latency might not always be a critical factor in the cluster design. For example, some clusters

might require high bandwidth between servers because of a large amount of bulk file transfer,

but might not rely heavily on server-to-server Inter-Process Communication (IPC) messaging,

which can be impacted by high latency.

• Mesh/Partial mesh connectivity—Server cluster designs usually require a mesh or partial

mesh fabric to permit communication between all nodes in the cluster. This mesh fabric is used

to share state, data, and other information between master-to-compute and compute-to-

compute servers in the cluster. Mesh or partial mesh connectivity is also application-

dependent.

• High throughput—The ability to send a large file in a specific amount of time can be critical

to cluster operation and performance. Server clusters typically require a minimum amount of

available non-blocking bandwidth, which translates into a low oversubscription model between

the access and core layers.

• Oversubscription ratio—The oversubscription ratio must be examined at multiple aggregation points in the design, including the line card to switch fabric bandwidth and the switch fabric input to uplink bandwidth (a simple worked example follows this list).

• Jumbo frame support—Although jumbo frames might not be used in the initial

implementation of a server cluster, it is a very important feature that is necessary for additional

flexibility or for possible future requirements. The TCP/IP packet construction places


additional overhead on the server CPU. The use of jumbo frames can reduce the number of

packets, thereby reducing this overhead.

• Port density—Server clusters might need to scale to tens of thousands of ports. As such, they

require platforms with a high level of packet switching performance, a large amount of switch

fabric bandwidth, and a high level of port density.

• High-availability—All data center designs are judged by their ability to provide continuous

operations for the network services they support.  Data center availability is affected by both

planned (scheduled maintenance) and unplanned (failures) events.  To maximize availability,

the impact from each of these must be minimized and/or eliminated.


All data centers must be maintained on a regular basis.  In most data center designs,

scheduled maintenance is a planned event requiring network downtime.  For this reason,

general maintenance is often forgone, leaving long-term availability to chance.  In robust data

center designs, concurrently maintainable systems are implemented to avoid interruption to

normal data center operations.

To mitigate unplanned outages, both redundancy and fault-tolerance must be incorporated into

the data center design.  High-availability is accomplished by providing redundancy for all major and minor systems, thereby eliminating single points of failure.  Additionally, the data

center design must offer predictable uptime by incorporating fault-tolerance against hard

failures.  (A hard failure is a failure in which the component must be replaced to return to an

operational steady state.)

A data center achieves high-availability by implementing a fully redundant, fault-tolerant, and

concurrently maintainable IT and support infrastructure architecture in which all possible hard

failures are predictable and deterministic.
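As referenced in the oversubscription bullet above, the ratio at any aggregation point is simply the total server-facing bandwidth divided by the uplink bandwidth. The port counts and speeds below are hypothetical, chosen only to show the arithmetic:

```python
# Oversubscription ratio at one aggregation point (hypothetical port counts and speeds).
access_ports = 48          # server-facing ports on an access switch
access_gbps_per_port = 10
uplink_ports = 4           # uplinks toward the aggregation/core layer
uplink_gbps_per_port = 40

ratio = (access_ports * access_gbps_per_port) / (uplink_ports * uplink_gbps_per_port)
print(f"Oversubscription: {ratio:.1f}:1")   # 3.0:1 for these numbers; a cluster that
                                            # needs near non-blocking bandwidth would
                                            # push this toward 1:1
```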


2.2 Scope

An important distinction to make at this point is what really constitutes the elements of a data center.

When we talk about the data center, we are talking about the site, the Command Center (if one is to

be added), the raised floor (if one is to be added), the network infrastructure (switches, routers,

terminal servers, and support equipment providing the core logical infrastructure), the environmental

controls, and power. Though a data center contains servers and storage system components

(usually contained in racks), these devices are contents of the data center, not part of the data

center. They are transient contents just as DVDs might be considered the transient contents of a DVD

player. The data center is more of a permanent fixture, while the servers and storage systems are

movable, adaptable, interchangeable elements. However, just as the DVD is of no value without the

player and the player is of no value without the DVD, a data center without equipment is an expensive

empty room, and servers with no connection are just expensive paperweights. The design of the data

center must include all of the elements. The essential elements are called the criteria.

Most often, it is the project scope that determines the data center design. The scope must be

determined based on the company's data center needs (the desired or required capacities of the system

and network infrastructure), as well as the amount of money available. The scope of the project could

be anything from constructing a separate building in another state with offices and all the necessary

utilities, to simply a few server and storage devices added to an existing data center. In either case,

those creating the project specifications should be working closely with those responsible for the

budget.

3. Limitations

The primary components necessary to build a data center include rack space (real estate), electrical

power and cooling capacity. At any given time, one of these components is likely to be a primary

capacity limitation, and over the past few years, the most likely suspect is power. The obvious

requirement is power for the servers and network equipment, but sometimes less obvious is the power

required to run the air handling and cooling systems. Unless you have built your data center right next

to a power station, and have a very long contract in place for guaranteed supply of power at a nice low

rate per kilowatt-hour, you are likely seeing your data center costs rise dramatically as the cost of

electricity increases.


Power costs have become the largest item in the OpEx budgets of data-centre owners and, for the first time, now exceed the IT hardware cost over its average service life of 3-4 years. This has resulted in the pursuit (both real and spun) of higher-efficiency, sustainable, ‘green’ designs. The majority of the losses in a data centre occur in roughly the following proportions:

Component                        Losses %
Chip set                            34%
SMPS losses                          9%
Server fans                          6%
Room fans                            4%
Pumps                                2%
Compressors                         18%
Condenser fans                       4%
Humidification                       1%
Plant-room cooling                   1%
Ancillary power                      7%
Security, controls & comms           1%
UPS & distribution losses            6%
Transmission losses                  7%
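One way to read the table is to split the components into the useful IT load and the facility overhead around it. The grouping below is our assumption (chip set, SMPS, and server fans counted as the IT load); under that assumption the table implies that only about half of the incoming power reaches the IT equipment:

```python
# Splitting the tabulated losses into IT load vs. facility overhead.
# The grouping into "IT" components is an assumption, not stated in the report.
losses = {
    "Chip set": 34, "SMPS losses": 9, "Server fans": 6,
    "Room fans": 4, "Pumps": 2, "Compressors": 18, "Condenser fans": 4,
    "Humidification": 1, "Plant-room cooling": 1, "Ancillary power": 7,
    "Security, controls & comms": 1, "UPS & distribution losses": 6,
    "Transmission losses": 7,
}
it_components = {"Chip set", "SMPS losses", "Server fans"}

it_share = sum(v for k, v in losses.items() if k in it_components)   # 49%
facility_share = sum(losses.values()) - it_share                     # 51%
print(f"IT load: {it_share}%  facility overhead: {facility_share}%")
print(f"Implied PUE: {sum(losses.values()) / it_share:.2f}")         # roughly 2.0
```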

Many IT professionals are finding themselves in the awkward position of begging their data center

hosting providers for increased electrical capacity, or they are facing the unpalatable option of

relocating their data center facilities.

While the industry is very aware of the issue of equipment power consumption, and the major

manufacturers are already designing in power reduction and conservation features to their latest

products, we are far from turning the corner on the demand for more data center electrical power. The

demand for computing capacity continues to rise, which implies a demand for “more, better and

faster” systems. This results directly in higher demands for rack space, power and cooling in

your data center, and related increased costs.

The increasing costs are receiving a lot of executive management attention, especially given current

economic conditions. Server rationalization is the buzz word of the day. Do we need to buy a new

server, or can we re-use one we already have? Are we using the servers we have to their full

capability? A typical data center has very high storage utilization (as no one voluntarily throws old

data away), and server utilization is low (as different functions within a company don’t want to share).

Server virtualization has become one of the hottest topics in IT, as it has become a means of

ensuring higher utilization of servers, lowering data center costs and providing flexibility for

shuffling systems’ workload. But there are limits to what can be virtualized, and of course, the

overall utilization of a server pool has an upper bound, as well. But there are other components needed

to build a data center, including: network equipment and communication circuits, power distribution


equipment (e.g., distribution panels, cable and outlets), power backup equipment (including

generator(s) and uninterruptible power supplies (UPS)), cable trays (for both network and power

cables), and fire suppression systems.

And, yes, believe it or not, sometimes these other components can become the constraining factor in

data center capacity. “Cable trays can be a limiting factor?” you ask. Yes, just two years ago, we ran

into a situation where an older data center couldn’t add capacity to a specific cage because the weight

of the cable already in the tray was at design load limits. We couldn’t risk the tray ripping out of the

ceiling by laying in more cable, and we couldn’t shut the network and power down long enough to pull

out all of the old cables and put new ones back in without excessive downtime to the business. A very

expensive migration to a new cage in the data center became the only feasible option. Though it may

seem hard to believe, there are many IT professionals who have never seen the inside of a data center.

Over the years, having a glass-walled computer room as a showcase in your corporate headquarters

became problematic for a number of reasons including security, high-priced real estate for servers that

could be better accommodated elsewhere, as well as loss of the glass wall space for other purposes

(e.g., communications punch-down blocks, electrical distribution panels). Besides, the number of

blinking lights in the data center has steadily decreased over the years. And you don’t see white lab-

coated workers pushing buttons and handling tapes. So, what’s there to see? (Not much, especially in a

“lights out” facility.)

But the interesting part is: when corporate executives and managers do occasionally visit a data center

facility, they still expect to see nice clean rows of equipment, full racks of blinking lights and servers

happily computing away. Instead, we now see large amounts of unused floor space and partially filled

racks. As servers have become more powerful per cubic inch of space occupied, power and cooling

capacity have become increasingly scarce, not the rack space for the equipment. The decreasing server

footprint relative to the higher energy per cubic inch requirement is often referred to as a “power-

density” problem.

You should ask your data center manager for a tour. Standing behind a full rack of 40+ servers

consuming 200 watts (or more) of power each is an amazing experience, likened to having someone

turn six 1200-watt hair dryers directly toward you, running full blast.

While the heat load behind a server rack has its shock effect, the more interesting cognitive dissonance

for many executives is seeing the empty racks and floor space. When called upon to explain, my


simple example has been this: if the power capacity (power cap) in your cage (or computer room) is limited to 100 kW, it doesn’t matter whether you have 10 full racks that consume 10 kW each, or 20 partially filled racks that only consume 5 kW each. If you have only one supercomputer that consumes all 100 kW sitting in the middle of the room taking up only 20 square feet and there is 500 square feet of unused space all around it, it may look very odd, but you’re still out of power.
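A quick sketch of the same point: once a cage has a fixed power cap, the cap, not the floor space, sets the limit, however the load is arranged.

```python
# The power cap, not the floor space, is the binding constraint in this example.
power_cap_kw = 100

scenarios = {
    "10 full racks at 10 kW each":        10 * 10,
    "20 half-filled racks at 5 kW each":  20 * 5,
    "1 supercomputer drawing 100 kW":     100,
}
for name, load_kw in scenarios.items():
    status = "at the cap" if load_kw >= power_cap_kw else "headroom remaining"
    print(f"{name}: {load_kw} kW ({status})")
```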

It would be wonderful if the data center design would “come out even,” with exactly the right

amount of full racks, power and cooling to look like the space is being well-utilized, but that is not

a common occurrence these days. Even if you can optimize one cage, it’s extremely difficult to

optimize across the entire data center floor.

Building out a new data center space requires careful planning and engineering of all the

components. And even then, changes over time in server utilization, increases (and decreases) in

the business’ needs, virtualization, and server technology all conspire to generate re-work of

your carefully thought-out and well-balanced design.

4. Perspectives

IT operations are a crucial aspect of most organizational operations. One of the main concerns is

business continuity; companies rely on their information systems to run their operations. If a system

becomes unavailable, company operations may be impaired or stopped completely. It is necessary to

provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption.

Information security is also a concern, and for this reason a data center has to offer a secure

environment which minimizes the chances of a security breach. A data center must therefore keep high

standards for assuring the integrity and functionality of its hosted computer environment. This is

accomplished through redundancy of both fiber optic cables and power, which includes emergency

backup power generation.

Information stewardship is one key perspective. Information stewardship calls for holistic data

management in the enterprise: defining and enforcing policy to guide the acquisition, management,

and storage lifecycle of data, and the protection of data from theft, leak, or disaster. Our research

shows that enterprises that manage these intertwined issues as a set are more successful dealing with

them than those that treat them as disjoint.

The IT executives we are speaking with in our current research on security and information protection

frequently cite the rising importance of a second key perspective: risk management.


In the past, we mainly heard about risk management in two specific contexts: disaster planning and

security. In disaster planning, risk assessment (where risk equals the cost to the business of major IT

outages times the likelihood of the natural or manmade disasters that would lead to those outages)

dictates how much the enterprise should spend on back-up IT infrastructure and services. In security,

IT would often focus on specific threats and specific defensive technologies, and use risk assessment

mainly to help decide where to spend money on security tools, or how to dedicate IT security staff

time.

Now many of the people we speak with tell us they use risk as a lens through which they view all their

systems, processes, and staffing. Risk is not subordinate, as a component in the calculations of

security and business continuity planners; instead, security and business continuance have

become facets of risk management.

5. Methodology and Procedure of Work

5.1 Sizing the Data Center

Nothing has a greater influence on a Data Center's cost, lifespan, and flexibility than its size—even the

Data Center's capability to impress clients. Determining the size of your particular Data Center is a

challenging and essential task that must be done correctly if the room is to be productive and cost-

effective for your business. Determining size is challenging because several variables contribute to

how large or small your server environment must be, including:

• How many people the Data Center supports

• The number and types of servers and other equipment the Data Center hosts

• The size that non-server areas should be, depending upon how the room's infrastructure is deployed

Determining Data Center size is essential because a Data Center that is too small won't adequately

meet your company's server needs, consequently inhibiting productivity and requiring more to be

spent on upgrading or expansion and thereby putting the space and services within at risk. A room that

is too big wastes money, both on initial construction and ongoing operational expenses.

Many users do not appreciate these effects during data center planning, and do not establish the floor

layout early enough. As a result, many data centers unnecessarily provide suboptimal performance.

The sections below explain how floor plans affect these characteristics and prescribe an effective method for developing a floor layout specification.


5.2 Role of the Floor Plan in the System Planning Sequence

Floor plans must be considered and developed at the appropriate point in the data center design

process. Considering floor plans during the detailed design phase is typical, but simply too late in the

process. Floor plans should instead be considered to be part of the preliminary specification and

determined BEFORE detailed design begins.

It is not necessary for a floor layout to identify the exact location of specific IT devices. Effective floor plans only need to consider the location of equipment racks or other cabinets, and the target power densities. These preliminary floor layouts do not require knowledge of specific IT

equipment.

For most users it is futile to attempt to specify particular IT equipment locations in advance – in fact,

racks may ultimately house equipment that is not even available on the market at the time the data

center is designed.

Figure 5.2 – The floor plan is a key input in the system planning sequence


The reasons that floor plans must be considered early, as part of the preliminary specification, and not

left until the later detailed design include:

• Density is best specified at the row level, so rows must be identified before a density

specification can be created.

• Phasing plans are best specified using rows or groups of rows, so rows must be identified

before an effective phasing plan can be created.


• The floor grid for a raised floor and the ceiling grid for a suspended ceiling should be aligned

to the rack enclosures, so rows must be identified before those grids can be located.

• Criticality or availability can (optionally) be specified differently for different zones of the

data center – rows must be identified before a multi-tier criticality plan can be created.

Density and phasing plans are a key part of any data center project specification, and both require a

row layout. Detailed design can only commence after density, phasing, and criticality have been

specified.

Therefore, a floor plan must be established early in the specification phase of a project, after

SYSTEM CONCEPT but well before DETAILED DESIGN (see Figure 5.2).

5.3 Floor Planning Concepts

A data center floor plan has two components: the structural layout of the empty room and the

equipment layout of what will go in the room. Note that for many projects the room is pre-existing

and the only option is to lay out the equipment within the room. A key rule of data center design is that

there is a potentially huge advantage in efficiency and density capacity if planners can lay out the

room boundaries at the outset. Wherever possible, an attempt should be made to influence the

structural room layout using the principles established.

5.3.1 Structural layout

The room layout includes the location of walls, doors, support columns, windows, viewing windows,

and key utility connections. If the room has a raised floor, the height of the raised floor and the

location of access ramps or lifts are also part of the structural layout. If the room has a raised floor or a

suspended ceiling, the index points for the floor or ceiling grid are critical design variables, and must

also be included in the structural layout. Room measurements will be described in units of tiles, where

a tile width is equal to 2 feet (600 mm) or one standard rack enclosure width.

5.3.2 Equipment layout

The equipment layout shows the footprint of IT equipment and the footprint of power and cooling

equipment. IT equipment can usually be defined as rack locations without regard for the specific

devices in the cabinets, but other equipment such as tape libraries or large enterprise servers may have

form factors that are different from typical racks and must be called out explicitly. In addition, IT

equipment in a layout must be characterized by its airflow path. In the case of typical IT racks, the


airflow is front-to-back, but some devices have other airflow patterns such as front-to-top. Power

and cooling equipment must also be accounted for in equipment layouts, but many new power and

cooling devices are either rack mountable or designed to integrate into rows of racks, which simplifies

the layout.

5.4 The Effects of Floor Plans on Data Center Performance

Several important data center characteristics are affected by floor plans. To understand effective floor

layout methods, it is important to understand the consequences.

5.4.1 Number of rack locations

The floor layout can have a dramatic effect on the number of rack locations that are possible in the

room. Although, on average, the number of IT rack locations possible can be estimated by dividing the

room area by 28 sq ft / rack (2.6 sq meters / rack), the actual number of racks for a particular data

center can vary greatly from this typical value.
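The rule of thumb quoted above is easy to apply as a first-pass estimate. A short sketch (treating 28 sq ft per rack as a gross average allowance, not a guarantee):

```python
# First-pass rack-count estimate from gross room area, using the ~28 sq ft / rack
# (2.6 sq m / rack) rule of thumb quoted above. The real count depends on the layout.
def estimated_rack_locations(room_area_sqft, sqft_per_rack=28.0):
    return int(room_area_sqft // sqft_per_rack)

for area in (2_000, 20_000):
    print(f"{area:>6} sq ft -> about {estimated_rack_locations(area)} rack locations")
```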

The basic principle of floor planning is to maximize the number of rack locations possible. Small

variations in the location of walls, existing IT devices, air conditioners, and power distribution units

can have a surprisingly large impact on the number of possible rack locations. This effect is magnified

when high power densities are required. For this reason, a careful and systematic approach to floor

planning is essential.

5.4.2 Achievable power density

The floor plan can have a major impact on the achievable power density. With certain cooling

architectures, a poor layout can decrease the permissible power for a given rack by over 50%.

This is a huge performance compromise in a modern data center, where new technologies have power

densities that are already stressing the capabilities of data center design. In many data centers, users

may want to establish zones of different power density. These density zones will be defined by the

equipment layout. The floor plan is therefore a critical tool to describe and specify density for data

centers.

5.4.3 Complexity of distribution systems

The floor plan can have a dramatic effect on the complexity of the power and cooling distribution systems. In general, longer rows, and rows arranged in the patterns described here, simplify power and cooling


distribution problems, reduce their costs, and increase their reliability.

5.4.4 Cooling performance

In addition to impacting the density capability of a data center, the floor plan can also significantly

affect the ability to predict density capability. It is a best practice to know in advance what density

capability is available at a given rack location and not to simply deploy equipment and “hope for the

best,” as is a common current practice. An effective floor plan in combination with row-oriented

cooling technologies allows simple and reliable prediction of cooling capacity. Design tools such

as APC InfraStruXure Designer can automate the process during the design cycle, and when layouts

follow standard methods, off-the-shelf operating software such as APC InfraStruXure Manager can

allow users to monitor power and cooling capacities in real time.

5.4.5 Electrical Efficiency

Most users are surprised to learn that the electrical power consumption of a data center is greatly

affected by the equipment layout. This is because the layout has a large impact on the effectiveness

of the cooling distribution system. This is especially true for traditional perimeter cooling

techniques. For a given IT load, the equipment layout can reduce the electrical power consumption of

the data center significantly by affecting the efficiency of the air conditioning system.

• The layout affects the return temperature to the CRAC units, with a poor layout yielding a

lower return air temperature. A lower return temperature reduces the efficiency of the

CRAC units.

• The layout affects the required air delivery temperature of the CRAC units, with a poor

layout requiring a colder supply for the same IT load. A lower CRAC supply temperature

reduces the efficiency of the CRAC units and causes them to dehumidify the air, which in turn

increases the need for energy-consuming humidification.

• The layout affects the amount of CRAC airflow that must be used in “mixing” the data center

air to equalize the temperature throughout the room. A poor layout requires additional mixing

fan power, which decreases efficiency and may require additional CRAC units, which draw

even more electrical power.

A conservative estimate is that billions of kilowatt hours of electricity have been wasted due to

poor floor plans in data centers. This loss is almost completely avoidable.


5.5 Basic Principles of Equipment Layout

The existence of the rack as the primary building block for equipment layouts permits a standardized

floor planning approach. The basic principles are summarized as follows:

• Control the airflow using a hot-aisle/cold-aisle rack layout.

• Provide access ways that are safe and convenient.

• Align the floor or ceiling tile systems with the equipment.

• Minimize isolated IT devices and maximize row lengths.

• Plan the complete equipment layout in advance, even if future plans are not defined.

Once these principles are understood, an effective floor planning method becomes clear.

5.5.1 Control of airflow using hot-aisle/cold-aisle rack layout

The use of the hot-aisle/cold-aisle rack layout method is well known and the principles are described

in other documents, such as ASHRAE TC9.9 Mission Critical Facilities, “Thermal Guidelines for

Data Processing Environments” 2004, and a white paper from the Uptime Institute titled

“Alternating Cold and Hot Aisles Provides More Reliable Cooling for Server Farms.” The basic

principle is to maximize the separation between IT equipment exhaust air and intake air by

establishing cold aisles where only equipment intakes are present and establishing hot aisles where

only equipment hot exhaust air is present. The goal is to reduce the amount of hot exhaust air that is

drawn into the equipment air intakes. The basic hot-aisle/cold aisle concept is shown in Figure 5.5.1.

Figure 5.5.1 – Basic hot-aisle/cold-aisle data center equipment layout plan


In the above figure, the rows represent the IT equipment enclosures (racks). The racks are arranged

such that the adjacent rows face back to back, forming the hot aisles.

The benefits of the hot-aisle/cold-aisle arrangement become dramatic as the power density increases.

When compared to random arrangements or arrangements where racks are all lined up in the same

direction, the hot-aisle/cold-aisle approach allows for a power density increase up to 100% or more,

without hot spots, if the appropriate arrangement of CRAC units is used. Because all cooling

architectures (except for fully enclosed rack-based cooling) benefit dramatically from

hot-aisle/cold-aisle layout, this method is a principal design strategy for any floor layout.

5.5.2 Align the floor and/or ceiling tiles with the equipment

In many data centers the floor and ceiling tile systems are used as part of the air distribution

system. In a raised floor data center, it is essential that the floor grid align with the racks. If the racks and

the floor grid do not align, airflow can be significantly compromised. It is also beneficial to align any

ceiling tile grid with the floor grid. This means the floor grid should not be designed or installed

until after the equipment layout is established, and the grid should be aligned or indexed to the

equipment layout according to the row layout options.

Unfortunately, specifiers and designers often miss this simple and no-cost optimization opportunity.

The result is that either (1) the grid is misaligned with the racks, with a corresponding reduction in


efficiency and density capability, or (2) the racks are aligned to the grid but a suboptimal layout

results, limiting the number of racks that can be accommodated.

5.5.3 Pitch – the measurement of row spacing

The row length in a hot-aisle/cold aisle layout is adjustable in increments of rack width, which

provides significant flexibility. However, the spacing between aisles has much less flexibility and is a

controlling constraint in the equipment layout. The measurement of row-to-row spacing is called pitch,

the same term that is used to describe the repeating center-to-center spacing of such things as screw

threads, sound waves, or studs in a wall. The pitch of a data center row layout is the distance from one

mid-cold-aisle to the next mid-cold-aisle (Figure 5.5.3).

Figure 5.5.3 – Pitch of a row layout
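As a concrete illustration of pitch, the mid-cold-aisle to mid-cold-aisle distance is the cold aisle plus two rack depths plus the hot aisle. The aisle widths and rack depth below are typical values assumed for illustration; they are not specified in the report.

```python
# Pitch = cold aisle + two rack depths + hot aisle (mid-cold-aisle to mid-cold-aisle).
# The dimensions below are assumed typical values, expressed in feet.
TILE_FT = 2.0          # one standard floor tile / rack width

cold_aisle_ft = 4.0    # two full tiles, so perforated tiles sit in front of the racks
rack_depth_ft = 3.5    # depth of one rack row (one row on each side of the hot aisle)
hot_aisle_ft = 3.0     # rear access and cabling aisle

pitch_ft = cold_aisle_ft + 2 * rack_depth_ft + hot_aisle_ft
print(f"Pitch: {pitch_ft} ft = {pitch_ft / TILE_FT:.0f} tiles")   # 14 ft = 7 tiles
```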


5.5.4 Minimize isolated IT devices and maximize row lengths

The control of airflow by separating hot and cold air, as described above, is compromised at

the end of a row, where hot exhaust air can wrap around the side of the end rack and be drawn back into the IT equipment air intakes.

Therefore, the theoretical ideal design of a data center is to have no row ends – i.e. rows of infinite

length. Conversely, the worst case situation would be rows of one-rack length – i.e., isolated single

racks. In addition, the ability to effectively implement redundancy is improved with longer rows. The

goal of row layout is to maximize row length consistent with the goals of providing safe and

convenient access ways. In general, a layout that provides longer row lengths is preferred, and a row

layout that generates short rows of 1-3 racks should be avoided.

5.5.5 Special considerations for wide racks

Standard-width racks (2 ft or 600 mm) conveniently align with the width of raised-floor tiles. When

underfloor cables must be distributed to such a rack, a hole is typically created in the tile directly

below the rack to run the cables; if that particular rack is then re-located or removed, the tile is simply

replaced with a new one.

Wide racks that do not align with the standard raised floor tile width create a new challenge,

because a rack may occupy two or even three tiles. If such a rack is removed, no longer can the tile

simply be replaced with a new one, since the tile is partially underneath the neighboring rack as well.

These issues can be avoided altogether by overhead power and data cable distribution.

5.5.6 Plan the complete floor layout in advance

The first phase of equipment deployment often constrains later deployments. For this reason it is

essential to plan the complete floor layout in advance.

5.5.7 Minimize isolated IT devices and maximize row lengths

When row lengths are three racks or less, the effectiveness of the cooling distribution is impacted.

Short rows of racks mean more opportunity for mixing of hot and cold air streams. For this reason,

when rooms have one dimension that is less than 15-20 feet it will be more effective in terms of

cooling to have one long row rather than several very short rows.


5.5.8 Standardized room dimensions

There are preferred room dimensions for data centers, based on the pitch chosen. Given an area or

room that is rectangular in shape, free of the constraints imposed by support columns (described

earlier), the preferred length and width are established as follows:

• One dimension of the room should be a multiple of the hot-aisle/cold-aisle pitch, plus a

peripheral access-way spacing of approximately 2-4 tiles

• The other dimension of the room is flexible and will impact the length of the rows of racks

When one of the dimensions of the room is not optimal, the performance of the room can be

dramatically reduced, particularly if the room is smaller. The most obvious problem is that the number

of equipment racks may be lower than expected because some space cannot be used. The second, and

less obvious, problem is that when the ideal layout cannot be achieved, the power density and

electrical efficiency of the system is reduced.

To understand the effect of room dimension on the number of racks, consider a room with a fixed

length of 28 feet and a variable width. In such a room, the length of a row would be 10 racks, allowing

for 2 tiles (4 feet) at each row-end for access clearance. The number of racks that could fit in this room

will vary as a function of the width of the room as shown in Figure 5.5.8.

Figure 5.5.8 shows that the number of installable racks jumps at certain dimensions as new rows fit

into the room. Furthermore, the chart shows that certain numbers of racks are preferred because the

even row number permits a complete additional hot-aisle/cold-aisle pair to be installed. The preferred

width dimensions are indicated by the arrows, for the pitch (the most compact pitch A in this case) and

perimeter clearances (2 tiles) defined.

Figure 5.5.8 – Impact of room dimension on number of rows
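The staircase effect shown in Figure 5.5.8 can be reproduced with a small calculation. The pitch and clearances below are assumptions chosen for illustration (a 14 ft hot-aisle/cold-aisle pitch and a 2-tile clearance at the room perimeter), with the fixed 28 ft room length giving rows of 10 racks:

```python
# Rack count vs. room width for a fixed 28 ft room length (rows of 10 racks).
# Pitch and perimeter clearance are assumed illustrative values.
RACKS_PER_ROW = 10
PITCH_FT = 14.0               # one cold-aisle / hot-aisle row pair
PERIMETER_CLEARANCE_FT = 4.0  # 2 tiles at each side of the room

def installable_racks(room_width_ft):
    usable_ft = room_width_ft - 2 * PERIMETER_CLEARANCE_FT
    row_pairs = max(0, int(usable_ft // PITCH_FT))   # complete hot/cold row pairs
    return row_pairs * 2 * RACKS_PER_ROW

for width in (22, 28, 36, 44, 50):
    print(f"{width} ft wide -> {installable_racks(width)} racks")
# The count jumps only when the width grows enough to admit another complete row pair.
```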


5.5.9 Location of support columns in room boundary layout

The location of support columns in the room can dramatically affect the equipment layout, as

previously illustrated. Therefore, when an option exists to locate room boundaries, the following

guidelines apply:

• For smaller rooms, arrange the room boundaries, if possible, so that no support columns are

in the equipment area.

• Rooms should be rectangular, where possible. Unusual shapes, niches, and angles often

cannot be effectively utilized and/or create a reduction in power density or electrical

efficiency.

• For situations where columns are unavoidable but boundaries are flexible, the floor plan

should be laid out as if no columns existed, based on the standardized dimensions of the

room, and the pitch(es) required. Columns should then be located directly over any one

particular rack location, preferably at a row end.

• For very large rooms, the location of the walls in relation to the columns is typically

inflexible.


When a column is located directly over a particular rack location, as the third bullet above suggests, it

is important to block off any openings between the column(s) and the neighboring racks. If these gaps

are not blocked with a filler panel, mixing of hot and cold air streams can occur and cooling

performance can be compromised.

5.5.10 Phased deployments

When a phased deployment is planned, there are two strategies that can be beneficial. These are:

• Creating area partitions

• Advance layout of future rows

When a future phase has a very large uncertainty, area partitions or walls that subdivide the data center

into two or more rooms can be used. The benefits are:

• Ability to re-purpose areas in the future

• Ability to perform radical infrastructure modifications in one area without interfering with the

operation of another area

• Ability to defer the installation of basic infrastructure (such as piping or wiring) to a future

date

The advent of modular row-oriented power and cooling architectures has reduced the need to provide

radical infrastructure modifications during new deployments, and has greatly reduced the cost and

uncertainty associated with installing base wiring and plumbing infrastructure. Therefore, the

compelling need to partition data centers has been dramatically reduced. Nevertheless, retaining

options such as future re-purposing of area is valuable for some users. The key to successful

partitioning is to understand that partitions should NEVER be placed arbitrarily without first

performing an equipment layout scenario analysis. This is because the floor layout can be seriously

compromised by a poor choice of a partition position.

During the setting of partitions or walls within a data center room, the same principles should be

applied as those used when establishing the overall perimeter room boundaries. The standard spacing

of rows must be considered. Failure to do this can result in problems (See Figure 5.5.10).

Note that the location of the wall in the bottom scenario has caused row 5 of the equipment layout to

be lost, representing 10 racks out of the 80 rack layout, or 12% of the total – a significant loss of rack

footprint space. Although the wall was only offset by a small amount, this loss occurs because the

wall-to-wall spacing does not permit appropriate access ways if row 5 is included. Furthermore, the


access way between row 6 and the wall has become a hot aisle. This reduces the confining effect of the

hot-aisle/cold-aisle design and will result in a reduced power capacity for row 6. Furthermore, because

the primary access path between row 6 and the wall is now a hot aisle, this creates an uncomfortable

zone for personnel. These factors taken together demonstrate how serious a small change in a wall

location can be when partitioning a data center.

Figure 5.5.10 – Impact of partitioning placement on number of rack locations

5.6 Floor Planning Sequence

Using the rack as the basic building block for floor layout, and the row-pair “pitch” as the spacing

template, a standardized floor layout approach is achievable. Starting with a floor plan diagram for the

room, the basic principles are summarized as follows:


5.6.1 Identify and locate the room constraints

First, identify and locate all physical room constraints:

• Columns – verify the exact as-built dimensions

• Doorways

• Existing fixed equipment – breaker panels, pipe connections, fire suppression equipment,

cooling equipment

5.6.2 Establish key room-level options

Next, identify what additional equipment will be placed in the room, and the options available for

delivering/installing that equipment within existing room constraints:

• Identify additional equipment (besides the IT equipment or in-row power and cooling

equipment) that will be placed in the room, including any additional cooling equipment, fire

suppression equipment, power equipment, or user workstations.

• If the room uses a raised floor, determine the length(s) of the access ramp(s) and identify all

possible options for locating the ramps. It is critical at this stage to know if the facility will have

a raised floor. Many new high-density data centers do not use a raised floor, so a raised floor

should not be automatically assumed. Sometimes it is even appropriate to remove a raised floor

from an existing site for new deployments.

5.6.3 Establish the primary IT equipment layout axis

Every room has two primary layout axes, or directions that the rows can be oriented. The axis

selection is one of the most critical decisions in a data center plan and has a large impact on

performance and economy. When using a hot-aisle/cold-aisle row pair arrangement in the pitch

determined necessary or preferred, test the two primary axis orientation layouts to establish if either

has an obvious advantage. When performing the test layouts, ensure that:

• Columns are not located in main access ways

• (If no raised floor) Rows are aligned to the ceiling grid so that the cold aisles contain

complete tiles

• There is sufficient clearance at row-ends and between rows and walls

• There is sufficient clearance/access around any fixed equipment in the room

• Access ramps, if required, are present and have been optimally located

• Any open areas or areas for another purpose face a cold aisle, not a hot aisle


• Locations have been found for any additional equipment identified in the room level options

above

• Rows that are separated by an access way should not reverse the direction they face

• All rows align with the same axis (i.e., all rows are parallel, with no perpendicular rows)

• The entire room is laid out in the floor plan, even if no immediate plans are in place to deploy

some sections of the room

To determine the preferred layout, the following factors should be considered:

• Which axis most effectively keeps support columns out of the main access ways?

• Which axis allows for the most racks?

• Which axis works best with the preferred hot-aisle/cold-aisle pitch?

• Which axis has hot-aisle/cold-aisle row pairs and does not end up with an odd number of

rows?

• Which axis has the fewest short rows or isolated racks?

• Which layout provides the desired aesthetic layout of the data center for purposes of viewing

or tours, if that is a consideration?

Different users may weigh the above criteria differently. It is common for users to choose a layout axis

that meets aesthetic considerations without concern for the data center performance, and later regret

their choice. The preferred method is to test both axes during planning and carefully decide the axis

selection based on an understanding of the consequences.

5.6.4 Lock the row boundaries

The process of selecting the primary layout axis typically establishes the row locations accurately.

With the row locations established, it is critical to establish and validate the row boundaries. This

includes setting the row end boundaries, and verifying the boundaries between fronts and/or backs of

rows with respect to other equipment, columns, or walls.

Access must be provided between row-ends and other obstructions using the following guidelines:

• For plain walls, a minimum of 2 tiles is an acceptable spacing for a row-end; larger data

centers often prefer 3 tiles to provide better accessibility.

• For some layouts, it may be desired to end a row at a wall. However, this creates a dead-end

alleyway which may limit the length of the row based on code requirements.


• For long rows of over 10 racks, local regulations may require that breaks be placed in rows to

allow personnel to pass through. This may also be of practical concern for technicians who

need access to both sides of a rack without having to walk a long distance.

• The spacing between the row front (cold aisle) or the row back (hot aisle) and other

equipment must be carefully checked to ensure that access ways are sufficient, and that any

access required to those other devices for service or by regulation is sufficient and meets code.

• It must be verified that any other equipment that has been located as part of the floor plan is

not constrained by piping, conduits, or access restrictions.

The above restrictions and boundaries must be marked on the room layout before the axis selection

and row layout are confirmed.

For small data centers – i.e., up to 2 rows of racks – this floor planning process can occur as a paper

study. As the room size grows, computer-aided tools that ensure consistent scale become necessary in

order to accurately plan the floor layout. Ideally, the row layout and boundary areas should also be

marked out using colored masking tape in the actual facility. This step is quite feasible for many

smaller fit-out designs and for retrofits, and often identifies surprise constraints that were not realized

during the conceptual plans.

5.6.5 Specify row/cabinet density

Once the row boundaries and the orientation of the row axis have been established, the

enclosure/cabinet layout can be performed. This begins with the partitioning of rows by buildout

phase. For each phase, multiple zones or areas may exist, each with a unique density requirement.

5.6.6 Identify index points (for new room)

If the data center has a pre-existing raised floor, then the actual location of the floor grid relative to the

wall is pre-established and will have been comprehended in an earlier process step. However, for new

rooms, the raised floor grid location is controlled by the floor layout. An index point for the raised

floor grid should be established in the plan, and clearly and permanently marked in the room. It is

absolutely essential that the contractor installing the raised floor align the grid to the index point

during installation. If this is not done, it may not be possible to shift the layout later to align with

the grid due to the boundary constraints. In a raised floor design, this can result in a massive

loss of power density capability and a dramatic reduction of energy efficiency. This is a

completely avoidable and terrible error that is commonly made during data center installations. Data

centers that use a hard floor rather than a raised floor do not have this concern.

If the data center uses a suspended ceiling for lighting and/or air return, aligning the index point to the

ceiling grid is also highly recommended, but less critical than aligning with the floor grid.

5.6.7 Specify the floor layout

The final step in the floor planning process is to specify the floor layout for subsequent design and

installation phases of the data center project. The specification is documented as a detailed floor layout

diagram, which includes all necessary room and obstruction measurements, all rack locations

identified, all unusable areas marked, and non-rack-based IT equipment that requires power and

cooling noted. Ideally, this specification diagram is created in a computer-aided tool such as APC’s

InfraStruXure Designer, which subsequently allows for the complete design of the data center’s

physical infrastructure, detailed to the rack level.

5.7 Common Errors in Equipment Layout

Many users attempt rudimentary floor layout planning, yet still have downstream problems. Here are

some of the most common problems observed in the industry:

5.7.1 Failure to plan the entire layout in advance

Most data centers begin to deploy equipment without a complete equipment deployment plan. As the

deployment expands, severe constraints on the layout may emerge, including:

• Equipment groups grow toward each other and end up facing hot-to-cold instead of hot-to-hot,

with resultant hot spots and loss of power density capability

• Equipment deployments grow toward a wall, and it is subsequently determined that the last

row will not fit – but would have fit if the layout had been planned appropriately

• The rows have a certain axis orientation, but it is later determined that much more equipment

could have been deployed if the rows had been oriented 90º the other way – and it is too late

to change it

• Equipment deployments grow toward a support column, and it is subsequently determined

that the column lands in an access way, limiting equipment deployment – but much more

equipment could have been placed if the layout had been planned in advance

• Equipment deployments drift off the standard floor tile spacing and later high-density

deployments are trapped, not having full tiles in the cold aisles, with a resultant loss of power

density capability

Most existing data centers have one or more of the problems listed above, with the attendant

losses of performance. In typical data centers routinely observed, the loss of available rack locations

due to these problems is on the order of 10-20% of total rack locations, and the loss of power density

capability is commonly 20% or more. These unnecessary losses in performance represent substantial

financial losses to data center operators, but can be avoided by simple planning.

5.8 Data Center Multi-Tier Design Overview

The multi-tier model is the most common model used in the enterprise today. This design consists

primarily of web, application, and database server tiers running on various platforms including

blade servers, one rack unit (1RU) servers, and mainframes.

5.8.1 Why Use the Three-Tier Data Center Design?

Why not connect servers directly to a distribution layer and avoid installing an access layer?

The three-tier approach, consisting of the access, aggregation, and core layers, permits flexibility in the

following areas:

•Layer 2 domain sizing— When there is a requirement to extend a VLAN from one switch to

another, the domain size is determined at the distribution layer. If the access layer is absent, the

Layer 2 domain must be configured across the core for extension to occur. Extending Layer 2

through a core causes path blocking by spanning tree and has the risk of uncontrollable broadcast

issues related to extending Layer 2 domains, and therefore should be avoided.

•Service modules—An aggregation plus access layer solution enables services to be shared across

the entire access layer of switches. This lowers TCO and lowers complexity by reducing the

number of components to configure and manage. Consider future service capabilities that include

Application-Oriented Networking (AON), ACE, and others.

•Mix of access layer models—The three-tier approach permits a mix of both Layer 2 and Layer 3

access models with 1RU and modular platforms, permitting a more flexible solution and allowing

application environments to be optimally positioned.

•NIC teaming and HA clustering support—Supporting NIC teaming with switch fault tolerance

and high availability clustering requires Layer 2 adjacency between NIC cards, resulting in Layer 2

VLAN extension between switches. This would also require extending the Layer 2 domain through

the core, which is not recommended.

5.8.2 Why Deploy a Services Switch?

When would I deploy a services switch instead of just putting service modules in the

aggregation switch?

Incorporating a services switch into the data center design is desirable for the following reasons:

•Large Aggregation Layer—If services are deployed in the aggregation layer, as this layer

scales, it may become burdensome to continue to deploy services in every aggregation switch. The

service switch allows for services to be consolidated and applied to all the aggregation layer

switches without the need to physically deploy service cards across the entire aggregation layer.

Another benefit is that it allows the aggregation layer to scale to much larger port densities since

slots used by service modules are now able to be deployed with LAN interfaces.

•Mix of service modules and appliances—Data center operators may have numerous service

modules and appliances to deploy in the data center. By using the service chassis model you can

deploy all of the services in a central fashion, allowing the entire data center to use the services

instead of having to deploy multiple appliances and modules across the facility.

•Operational or process simplification—Using the services switch design allows for the core,

aggregation, and access layers to be more tightly controlled from a process change perspective.

Security, load balancing, and other services can be configured in a central fashion and then applied

across the data center without the need to provide numerous access points for the people operating

those actual services.

•Support for network virtualization—As the network outside of the data center becomes more

virtualized it may be advantageous to have the services chassis become the point where things like

VRF-aware services are applied without impacting the overall traffic patterns in the data center.

5.8.3 Determining Maximum Servers

What is the maximum number of servers that should be on an access layer switch? What is the

maximum number of servers per aggregation module?

The answer is usually based on considering a combination of oversubscription, failure domain sizing,

and port density. No two data centers are alike when these aspects are combined. The right answer for

a particular data center design can be determined by examining the following areas:

•Oversubscription—Applications require varying oversubscription levels. For example, the web

servers in a multi-tier design can be optimized at a 15:1 ratio, application servers at 6:1, and

database servers at 4:1. An oversubscription ratio model helps to determine the maximum number

of servers that should be placed on a particular access switch and whether the uplink should be

Gigabit EtherChannel or 10GE. It is important for the customer to determine what the

oversubscription ratio should be for each application environment; a simple sizing sketch follows

this list. The following are some of the many variables that must be considered when determining oversubscription:

–NIC—Interface speed, bus interface (PCI, PCI-X, PCI-E)

–Server platform—Single or dual processors, offload engines

–Application characteristics—Traffic flows, inter-process communications

–Usage characteristics—Number of clients, transaction rate, load balancing

•Failure domain sizing—This is a business decision and should be determined regardless of the

level of resiliency that is designed into the network. This value is not determined based on

MTBF/MTTR values and is not meant to be a reflection of the robustness of a particular solution.

No network design should be considered immune to failure because there are many uncontrollable

circumstances to consider, including human error and natural events. The following areas of failure

domain sizing should be considered:

–Maximum number of servers per Layer 2 broadcast domain

–Maximum number of servers per access switch (if single-homed)

–Maximum number of servers per aggregation module

–Maximum number of access switches per aggregation module

•Port density—The aggregation layer has a finite number of 10GigE ports that can be supported,

which limits the quantity of access switches that can be supported. When a Catalyst 6500 modular

access layer is used, thousands of servers can be supported on a single aggregation module pair. In

contrast, if a 1RU Catalyst 4948 is used at the access layer, the number of servers supported is less.

Cisco recommends leaving space in the aggregation layer for growth or changes in design.

The data center, unlike other network areas, should be designed to have flexibility in terms of

emerging services such as firewalls, SSL offload, server load balancing, AON, and future

possibilities. These services will most likely require slots in the aggregation layer, which would

limit the amount of 10GigE port density available.
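
As a rough illustration of the oversubscription arithmetic above, the Python sketch below estimates how many gigabit-attached servers a single access switch could carry for the example ratios cited in this section. The dual 10GE uplink and 1 Gbps server NICs are assumed values, not a vendor recommendation.

    # Rough sizing sketch (assumed uplink and NIC speeds, not a Cisco formula):
    # maximum servers per access switch = uplink bandwidth * allowed oversubscription
    # divided by the server NIC speed.

    def max_servers(uplink_gbps, nic_gbps, oversubscription):
        return int(uplink_gbps * oversubscription / nic_gbps)

    uplink_gbps = 2 * 10  # assumed dual 10GE uplinks in an EtherChannel
    ratios = {"web": 15, "application": 6, "database": 4}  # ratios cited in the text

    for tier, ratio in ratios.items():
        print(tier, "tier:", max_servers(uplink_gbps, 1, ratio),
              "GE-attached servers per access switch at", str(ratio) + ":1")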

5.8.4 Determining Maximum Number of VLANs

What is the maximum number of VLANs that can be supported in an aggregation module?

•Spanning tree processing—When a Layer 2 looped access topology is used, which is the most

common, the amount of spanning tree processing at the aggregation layer needs to be considered.

There are specific watermarks related to the maximum number of system-wide active logical

instances and virtual port instances per line card that, if reached, can adversely affect convergence

and system stability. These values are mostly influenced by the total number of access layer

uplinks and the total number of VLANs. If a data center-wide VLAN approach is used (no manual

pruning on links), the watermark maximum values can be reached fairly quickly.

•Default Gateway Redundancy Protocol— The quantity of HSRP instances configured at the

aggregation layer is usually equal to the number of VLANs. As Layer 2 adjacency requirements

continue to gain importance in data center design, proper consideration for the maximum HSRP

instances combined with other CPU-driven features (such as GRE, SNMP, and others) have to be

considered. Lab testing has shown that up to 500 HSRP instances can be supported in an

aggregation module, but other CPU-driven features must also be taken into account.
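
The watermark concern above can be approximated with simple arithmetic. The sketch below is a back-of-the-envelope estimate only; the switch and VLAN counts are assumed, and the actual platform limits must be taken from the vendor documentation.

    # Back-of-the-envelope estimate (assumed inputs, compare against vendor limits):
    # with no manual VLAN pruning, every VLAN allowed on every access-layer uplink
    # trunk counts as an active spanning tree logical port at the aggregation switch,
    # and each VLAN typically carries one HSRP group.

    def aggregation_layer_load(access_switches, vlans_per_uplink):
        stp_logical_ports = access_switches * vlans_per_uplink
        hsrp_instances = vlans_per_uplink
        return stp_logical_ports, hsrp_instances

    stp, hsrp = aggregation_layer_load(access_switches=40, vlans_per_uplink=120)
    print("Approximate active STP logical ports:", stp)   # 4800 in this assumed case
    print("Approximate HSRP instances:", hsrp)            # the text cites ~500 as a tested ceiling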

5.8.5 Importance of Team Planning

Considering the roles of different personnel in an IT organization shows that there is a growing need

for team planning with data center design efforts. The following topics demonstrate some of the

challenges that the various groups in an IT organization have related to supporting a “business ready”

data center environment:

•System administrators usually do not consider physical server placement or cabling to be an

issue in providing application solutions. When the need arises for one server to be connected into

the same VLAN as other servers, it is usually expected to simply happen without thought or

concern about possible implications. The system administrators are faced with the challenge of

being business-ready and must be able to deploy new applications or scale existing ones in a timely

fashion.

•Network/Security administrators have traditionally complied with these requests by extending

the VLAN across the Layer 2 looped topology and supporting the server deployment request. This

is the flexibility of having a Layer 2 looped access layer topology, but is becoming more of a

challenge now than it was in the past. The Layer 2 domain diameters are getting larger, and now

the network administrator is concerned with maintaining spanning tree virtual/logical port counts,

manageability, and the failure exposure that exists with a large Layer 2 broadcast domain. Network

designers are faced with imposing restrictions on server geography in an effort to maintain

spanning tree processing, as well as changing design methods to include consideration for Layer 2

domain sizing and maximum failure domain sizing.

•Facilities administrators are very busy trying to keep all this new dense hardware from literally

burning up. They also see the additional cabling as very difficult if not impossible to install and

support with current design methods. The blocked air passages from the cable bulk can create

serious cooling issues, and they are trying to find ways to route cool air into hot areas. This is

driving the facilities administrators to look for solutions to keep cables minimized, such as when

using 1RU switches. They are also looking at ways to locate equipment so that it can be cooled

properly.

These are all distinct but related issues that are growing in the enterprise data center and are creating

the need for a more integrated team planning approach. If communication takes place at the start,

many of the issues are addressed, expectations are set, and the requirements are understood across all

groups.

6 Analysis

Data center architectures are evolving to meet the demands and complexities imposed by increasing

business requirements to stay competitive and agile. Industry trends such as data center

consolidation, server virtualization, advancements in processor technologies, increasing storage

demands, rising data rates, and the desire to implement "green" initiatives are placing stress on

current data center designs. However, as this document discusses, crucial innovations are emerging

to address most of these concerns with an attractive return on investment (ROI). Future data center

architectures will incorporate increasing adoption of 10 Gigabit Ethernet, new technologies such as

Fibre Channel over Ethernet (FCoE), and increased interaction among virtualized environments.

Data Centers are specialized environments that safeguard your company's most valuable equipment

and intellectual property. A well-planned and effectively managed Data Center supports these

operations and increases your company's productivity by providing reliable network availability and

faster processing. In many ways your Data Center is the brain of your company. Your business' ability

to perceive the world (data connectivity), communicate (e-mail), remember information (data storage),

and have new ideas (research and development) all rely upon it functioning properly.

As businesses seek to transform their IT departments from support organizations into sources of

productivity and revenue, it is more important than ever to design and manage these specialized

environments correctly. A well-built Data Center does not just accommodate future growth and

innovation; it acts as a catalyst for them. Companies that know their Data Center is robust, flexible,

and productive can roll out new products, move forward with their business objectives, and react to

changing business needs, all without concern over whether their server environment is capable of

supporting new technologies, high-end servers, or greater connectivity requirements.

7. Findings, Inferences and Recommendations

Running out of data ports is perhaps the most common infrastructure shortcoming that occurs in server

environments, especially in those whose rows contain infrastructure tailored to support a specific

model of server. When it comes time to host different equipment, those cabinet locations must be

retrofitted with different infrastructure. Fortunately, lack of connectivity is one of the easier issues to

address. It can be remediated in one of two ways.

One option is added structured cabling. As long as the installer is careful to work around existing

servers and their connections, the upgrade can usually be completed without any downtime. This

cabling needs to terminate somewhere in the Data Center, however, either at a network substation or a

main networking row. These added ports might require more space than existing networking cabinets

can provide, so be aware that more floor space might need to be allocated for them, which in turn

reduces what is available to host servers.

A second option, particularly when the need is for copper connections, is to install either of two

networking devices—a console server or console switch—at the cabinet where more ports are needed.

These networking devices can send multiple streams of information over one signal, a process known

as multiplexing. For example, if you have several servers installed in a cabinet, instead of running a

dozen patch cords from those devices to the Data Center's structured cabling under the floor, you run

those patch cords to a console server or console switch and—thanks to multiplexing—you then run

just one patch cord from that device to the structured cabling. Installing these networking devices can

significantly expand the capacity of your Data Center's existing structured cabling.

This second approach is best used when:

There is limited internal space within the Data Center's network cabinets. Installing these

networking devices in key server cabinets can reduce the space needed for additional patching

fields.

Infrastructure within the Data Center plenum is chaotic, and installing more structured cabling

may either restrict airflow or pose a downtime risk. Increasing ports by way of these

networking devices involves only a fraction of the structured cabling that would otherwise be

required.

The need for additional ports is temporary. Networking devices can be removed and reused

more easily than structured cabling.

As your Data Center's cabinets fill up with servers, you might discover that its cooling infrastructure isn't up

to the task of keeping the space cool. The overall ambient temperature of the server environment might

become too warm, or else hot spots might develop in areas where servers are tightly packed or a large

device emits a high amount of exhaust.

Presumably, you have already used the Data Center's floor tiles to good advantage, placing perforated

floor tiles so that cooling is directed at known hot spots and sealing unwanted openings to maintain air

pressure. There are many techniques for improving cooling in a server environment. Here are five:

Relocate air handler temperature sensors— These devices are generally located at the

cooling unit itself, where temperatures are often lower. Placing the sensors deeper within the

room gives the air handlers more accurate readings about the Data Center's ambient

temperature and can cause them to provide cooling for longer periods.

Install ducted returns— These draw away more of the Data Center's heated air, channeling it

into each handler's normal cooling cycle, and can reduce temperatures in sections of the room

by a few degrees.

Distribute servers— If tightly packed servers are causing hot spots, spreading such equipment

out is a sure way to prevent them. This solution obviously has limited value if your Data

Center also has space constraints. However, even if you don't have the option of only partially

filling server cabinets in your Data Center, at least try to locate devices strategically so that the

highest heat-producers aren't clustered together. It is easier to deal with several warmer areas in

a server environment than one that is very hot.

Install self-cooling cabinets—Reinstall the Data Center's most prodigious heat-generating

servers into cabinets that are cooled by fans or chilled liquid. This eliminates hot spots at their

source and might lower the room's overall ambient temperature.

Install additional air handlers— Finally, if all else fails you might need to put in another air

handler to increase how much cold air is being pumped into the Data Center. Use the same

approach that you would when designing the room's cooling infrastructure from scratch—try to

place the handler perpendicular to server rows and create a buffer area around it so that short

cycling does not occur.

When performing any work on your Data Center's cooling system that might require air handlers to be

shut down, have multiple portable fans and spot coolers at the ready. Temperatures can rise quickly

when air handlers are turned off. You might need to prop open the Data Center doors and use fans to

blow hot air out of the room. For major cooling system work, try to schedule the work during

colder periods, such as at night, during the winter months, or both.

7.1 Paradigm Shifts

It is also possible that your Data Center has ample physical room and infrastructure available and yet

still begins having problems hosting incoming equipment. This occurs when servers, networking

devices, or other machines arrive that the server environment wasn't designed to accommodate.

Maybe server manufacturers alter their designs, making machines that need more physical space or

electrical power. Perhaps your company decides to pursue a different business goal, requiring

equipment that your Data Center never had to host in years past. It could also be that technology

changes, requiring new cabling media to accommodate it. Whatever the cause, this can be the hardest

of shortcomings to deal with in a server environment, because you might not be able to overcome it by

simply adding a few circuit panels or running more structured cabling. The physical layout of Data

Center rows might need to be changed, including the physical relocation of both servers and

infrastructure components—all while the server environment remains online.

If you are fortunate—more accurately, if you anticipated the need for future change—you designed

your Data Center infrastructure to be easily upgradeable. Flexible electrical conduits and lightly

bundled structured cabling, with additional slack provided, can enable you to reconfigure your under-

floor infrastructure quickly and concentrate power and data connectivity where it is needed.

Infrastructure components that enable you to use different media, such as multimedia boxes that can

accommodate multiple connector types and electrical whips pre-wired to terminate in several types of

receptacles, also make it easier to change elements of a server environment.

If your Data Center possesses these types of infrastructure components, retrofitting the room might be

as simple as having structured cabling and electrical conduits reterminated. More likely, however, you

are going to have to make more dramatic and intrusive changes to the server environment. This might

include rearranging server rows, removing and rerunning structured cabling, and either adding or

relocating power distribution units or air handlers.

Here are several tips to follow when making significant infrastructure changes to your existing Data

Center:

Upgrade the room in phases, say a couple of server rows at a time, rather than trying to

overhaul it all at once. Retrofitting a live Data Center is like performing surgery on a conscious

patient. Breaking the task down into segments reduces the effect of downtime and makes it less

likely for something to go wrong on a large scale. If work can be completed over a long period

of time, you might even be able to coordinate infrastructure changes with server lifecycles, that

is, as servers are decommissioned and replaced with newer models.

When retrofitting the Data Center dictates physically moving equipment, take advantage of

devices that have dual power supplies. Strategically shifting power plugs from an old power

receptacle to a new one can enable you to migrate a server to a new part of the Data Center, or

onto a different power source that won't be shut down, and enable you to avoid downtime.

If work in the Data Center might produce debris or airborne particles, shut down the fire

suppression system to avoid setting it off accidentally.

If new servers are installed in the Data Center while it is undergoing a retrofit, place them

according to how the room is going to be designed ultimately. Arrange them to be part of the

new layout, not another piece of equipment that must be relocated later.

An example of a recent paradigm shift for Data Centers is the emergence of 1U servers. At the start of

this century, server manufacturers began producing low profile servers that were high performing and

relatively inexpensive compared to earlier generations of devices.

IT departments in many companies opted to replace older, larger systems with dozens of 1U servers.

They are particularly popular for computing applications that can pool the power of multiple servers.

For these applications, a cluster of 1U devices provides both flexibility and redundancy: flexibility,

because exactly how many servers are dedicated to a task can be altered as needed, and redundancy

because even if a server fails, there are several others that continue processing. Low-profile servers are

also desirable for companies that lease Data Center space. Most hosting facilities charge based upon

how much floor space a client occupies, so using smaller servers can reduce those costs.

Despite their merits, 1U servers are difficult to host in many Data Centers. When clustered together,

they are heavy, draw large amounts of power, produce a lot of heat, and require a high amount of data

connections in a very small space—not what most pre-existing server environments were designed and

built to accommodate.

It isn't easy monitoring a distributed environment. Unchecked server growth fills data center space

rapidly. Heterogeneous platforms and operating systems make provisioning and inventorying a long

and laborious process. Power supply limitations can prevent cooling upgrades or curtail the addition of

servers. It is difficult to know what IT resources are available, making it extremely difficult to plan

effectively for future needs.

Capacity planners, therefore, have two key questions to answer: How do we use the existing

environment? And what is our total installed capacity? Most of the time there isn't an easy

response. The picture is muddled by the fact that the capacity planner is often being forced to compare

apples to oranges.

The rising energy costs of running a data center are gaining more and more attention as they are

already in the range of $3.3 billion annually, according to IDC.  As a result, the Environmental

Protection Agency (EPA) and Department of Energy are now creating standard ratings for

energy efficiency benchmarks, forcing companies to be more conscious of their energy use and

environmental impact.  Many companies, however, are wary of such new regulations and standards,

as they think it’ll mean incurring new costs to meet them. 

This growing number of servers and data centers naturally causes an increase in power and energy

consumption, quickly escalating the amount of resources needed and raising environmental concerns

even more.  Thirty percent responded that they could find an additional 20 percent capacity.  By

having the proper tools in place, companies can keep their data center lean, mean, and green. 

Sometimes the driving force behind the retrofit of a Data Center is that the room has become so

disorganized and cluttered that it is problematic to install new equipment. Tangled patch cords,

unlabeled structured cabling and electrical whips, and poor cooling distribution can all make a server

environment vulnerable to downtime every time a new device is installed. Before embarking upon a

major construction project to add infrastructure or expand an overworked server environment, see if

any of the following can remediate the problems and make the space more usable:

Use the right length patch cords for the job— System administrators sometimes plug in servers

networking devices using whatever patch cords happen to be at hand rather than locating the correct

lengths of cable that are needed to make connections.

This results in 15 feet of cable being used to go 4 feet (4 meters of cable used to go 1 meter).

Excess cable length is left to dangle from the device or patch panel it is plugged into, perhaps

coiled with a tie wrap or perhaps not. Over time, this creates a spider web of cable that blocks

access to servers, pre-installed cabling ports, and electrical receptacles. This tangled web not

only reduces the usability of Data Center infrastructure, but it also presents a snagging hazard that

can cause accidental downtime. Replace overly long patch cords and power cables with those

that are the correct length. Also, remove cords and cables that aren't plugged in to functioning

servers and are simply leftovers from decommissioned equipment. Do this within cabinets and

under the raised floor.

Many servers come standard with power cables that are 6 feet (1.8 meters) long. This is useful for

reaching a power receptacle under a raised floor or in a ceiling-mounted raceway, but it is much longer

than necessary if you are installing the device into a server cabinet with its own power strips. You can

tie wrap the excess length, but even that still dangles somewhere within the server cabinet.

To reduce this problem, stock power cables that are the same type provided with the servers, but just

2 feet (61 centimeters) long. The shorter cables are much less likely to become tangled or get snagged

when someone is installing or removing equipment into a cabinet.

Add wire management— Even when Data Center users run the right length of patch cord,

you end up with hanging cables if adequate wire management is not provided. Install new or

larger cable management at cabinet locations where cable glut interferes with access to

infrastructure or equipment.

Together with using the correct cable lengths, installing wire management can free up access

to existing data ports and electrical receptacles. These steps also make troubleshooting easier,

improve airflow around servers, and improve the overall appearance of the Data Center.

Make sure that people are correctly using infrastructure— You can spend hundreds of

thousands of dollars on structured cabling, but it is worthless if Data Center users string patch

cords between server cabinets rather than use the infrastructure that is installed. Failing to

properly use the infrastructure in a server environment leads to disorganization, makes

troubleshooting difficult, and can create situations that are hazardous to both equipment and

Data Center users.

For example, imagine that someone installs a server into a cabinet within the Data Center, and

they don't understand that each cabinet location is provided with dedicated circuits from two

different power sources. Wanting redundant power for their server's dual power supplies, they

plug one power cable in to the power strip of their own server cabinet and then string the other

power cable to the strip in an adjacent cabinet. This adds unnecessary electrical draw onto the

adjacent power strip, making it more susceptible to tripping a circuit. It also ties the two

adjacent server cabinets together. If a time ever comes to relocate either cabinet, and whoever

is moving it is unaware that a power cord has been strung between them, an accident might

occur that could harm someone or damage a server.

Install power strips with known electrical ratings— Do you know how much equipment

you can install into your server cabinets before you overload their power strips? If not, you are

either underutilizing server cabinet space or risking downtime every time you plug in new

equipment. Swap out any mystery power strips with ones whose amp ratings are known. If

possible, have the power strip rated for the same amperage as the circuit it is plugged into. This

enables you to maximize existing electrical circuits and server cabinet space. (A short loading calculation follows this list of recommendations.)

Redeploy floor tiles— If cooling is a problem and your Data Center has a raised floor, check

that floor tiles are deployed to best advantage. Close off unnecessary openings in the floor

surface and open perforated tiles closest to highly populated cabinets. Close or relocate floor

panels that are especially close to air handlers and might be causing short cycling. Make sure

under-floor infrastructure isn't blocking air coming from the room's air handlers. If it is, try to

shift the location of structured cabling and electrical conduits to enable better circulation.
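
The loading calculation promised under the power strip recommendation above is sketched below in Python. The breaker rating, voltage, per-server draw, and the 80% continuous-load derating are assumed, illustrative values; the actual limits should come from the electrical design and local code.

    # Illustrative only (assumed ratings and loads): estimate how many servers a rack
    # power strip can feed when the branch circuit is loaded to no more than 80% of
    # its breaker rating, a common continuous-load practice.

    def servers_per_strip(breaker_amps, volts, watts_per_server, derating=0.8):
        usable_watts = breaker_amps * volts * derating
        return int(usable_watts // watts_per_server), usable_watts

    count, budget = servers_per_strip(breaker_amps=30, volts=208, watts_per_server=350)
    print("Usable budget:", round(budget), "W ->", count, "servers per strip")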

7.2 Below are the top ten Data Center Design Guidelines:

The following are the top ten guidelines, selected from a great many others, many of which

are described throughout this report.

1. Plan ahead. You never want to hear “Oops!” in your data center.

2. Keep it simple. Simple designs are easier to support, administer, and use. Set

things up so that when a problem occurs, you can fix it quickly.

3. Be flexible. Technology changes. Upgrades happen.

4. Think modular. Look for modularity as you design. This will help keep things

simple and flexible.

5. Use RLUs (rack location units), not square feet. Move away from the concept of using square footage

of area to determine capacity. Use RLUs to define capacity and make the data

center scalable.

6. Worry about weight. Servers and storage equipment for data centers are getting

denser and heavier every day. Make sure the load rating for all supporting

structures, particularly for raised floors and ramps, is adequate for current and

future loads.

7. Use aluminum tiles in the raised floor system. Cast aluminum tiles are strong

and will handle increasing weight load requirements better than tiles made of

other materials. Even the perforated and grated aluminum tiles maintain their

strength and allow the passage of cold air to the machines.

8. Label everything. Particularly cabling! It is easy to let this one slip when it seems

as if “there are better things to do.” The time lost in labeling is time gained when

you don’t have to pull up the raised floor system to trace the end of a single cable.

And you will have to trace bad cables!

9. Keep things covered, or bundled, and out of sight. If it can’t be seen, it can’t be

messed with.

10. Hope for the best, plan for the worst. That way, you’re never surprised.

When row lengths are three racks or fewer, the effectiveness of the cooling distribution is reduced.

Short rows of racks mean more opportunity for mixing of hot and cold air streams. For this reason,

when rooms have one dimension that is less than 15-20 feet it will be more effective in terms of

cooling to have one long row rather than several very short rows.

Use of best-practices air management, such as strict hot aisle/cold aisle configuration,

can double the computer server cooling capacity of a data center.

Combined with an airside economizer, air management can reduce data center cooling

costs by over 60%.

Removing hot air immediately as it exits the equipment allows for higher capacity and

much higher efficiency than mixing the hot exhaust air with the cooling air being drawn

into the equipment. Equipment environmental temperature specifications refer

primarily to the air being drawn in to cool the system.

A higher difference between the return air and supply air temperatures increases the

maximum load density possible in the space and can help reduce the size of the cooling

equipment required, particularly when lower-cost mass produced package air handling

units are used.
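
The relationship between the return-to-supply temperature difference and the airflow needed for a given load can be illustrated with the standard sensible-heat relation for air. The per-rack load below is an assumed value, used only to show the arithmetic.

    # Illustrative sketch using the standard sensible-heat relation for air
    # (Q[BTU/hr] = 1.08 x CFM x delta-T in deg F, and 1 W = 3.412 BTU/hr).
    # The 5 kW per-rack load is an assumed value, not a PATNI figure.

    def required_airflow_cfm(heat_load_watts, delta_t_f):
        return 3.412 * heat_load_watts / (1.08 * delta_t_f)

    rack_watts = 5000
    for delta_t in (10, 15, 20, 25):  # return minus supply temperature, deg F
        cfm = required_airflow_cfm(rack_watts, delta_t)
        print("delta-T", delta_t, "F ->", round(cfm), "CFM per 5 kW rack")

Doubling the delta-T roughly halves the airflow each rack needs, which is why separating hot and cold air streams raises the usable capacity of the same cooling plant.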

Poor airflow management will reduce both the efficiency and capacity of computer

room cooling equipment. Examples of common problems that can decrease a Computer

Room Air Conditioner (CRAC) unit’s usable capacity by 50% or more are: leaking

floor tiles/cable openings, poorly placed overhead supplies, underfloor plenum

obstructions, and inappropriately oriented rack exhausts.

Specify and utilize high efficiency power supplies in Information Technology (IT)

computing equipment. High efficiency supplies are commercially available and will

pay for themselves in very short timeframes when the total cost of ownership is

evaluated.

For a modern, heavily loaded installation with 100 racks, use of high efficiency power

supplies alone could save $270,000-$570,000 per year and decrease the square-

footage required for the IT equipment by allowing more servers to be packed into a

single rack footprint before encountering heat dissipation limits.

Cooling load and redundant power requirements related to IT equipment can be

reduced by 10-20% or more, allowing more computing equipment density without

additional support equipment (UPSs, cooling, generators, etc.).

In new construction, downsizing of the mechanical cooling equipment and/or electrical

supply can significantly reduce first cost and lower the mechanical and electrical

footprint.

When ordering servers, specify power supplies that meet at least the minimum efficiency

recommendations of the SSI Initiative (SSI members include Dell, Intel, and IBM).

When appropriate, limit power supply oversizing to ensure higher, and therefore more

efficient, load factors.

Select the most efficient UPS system that meets the data center’s needs. Among double

conversion systems (the most commonly used data center system), UPS efficiency

ranges from 86% to 95%. Simply selecting a 5% higher efficiency model of UPS can

save over $38,000 per year in a 15,000 square foot data center, with no discernible

impact on the data center’s operation beyond the energy savings. In addition,

mechanical cooling energy use and equipment cost can be reduced.
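
The scale of the UPS efficiency saving quoted above can be reproduced with a simple calculation. The IT load and electricity price below are assumed values chosen only to show the arithmetic; actual savings depend on the site load profile and tariff, and the avoided cooling load would add further savings.

    # Illustrative arithmetic (assumed 750 kW IT load and $0.10/kWh tariff):
    # annual energy cost of carrying a constant IT load through a double
    # conversion UPS at two different efficiencies.

    def annual_ups_input_cost(it_load_kw, ups_efficiency, price_per_kwh):
        return (it_load_kw / ups_efficiency) * 8760 * price_per_kwh  # 8760 h/year

    cost_90 = annual_ups_input_cost(750, 0.90, 0.10)
    cost_95 = annual_ups_input_cost(750, 0.95, 0.10)
    print("90% efficient UPS:", round(cost_90), "USD/year")
    print("95% efficient UPS:", round(cost_95), "USD/year")
    print("Saving from the 5-point gain:", round(cost_90 - cost_95), "USD/year")

Under these assumptions the difference comes to roughly $38,000 per year, consistent with the figure cited above.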

For battery-based UPS systems, use a design approach that keeps the UPS load factor

as high as possible. This usually requires using multiple smaller units. Redundancy in

particular requires design attention; operating a single large UPS in parallel with a

100% capacity identical redundant UPS unit (n+1 design redundancy) results in very

low load factor operation, at best no more than 50% at full design build out.

Evaluate the need for power conditioning. Line-interactive systems often provide enough

power conditioning for servers, and some traditional double conversion UPS systems

(which offer the highest degree of power conditioning) have the ability to operate in the

more efficient line conditioning mode, usually advertised as ‘economy’ or ‘eco’ mode.

7.3 Below are recommendations to build a Tier 3 data center:

• At least two access providers should serve this data center with the provider’s cabling from their central offices or POPs (Points of Presence) separated by at least 66 feet along their route.

• Mantraps at all entrances to the computer room should prevent more than one person from entering on a single credential.

• A signal reference grid (SRG) and lightning protection system should be provided.

• If the HVAC system’s air conditioning units are served by a waterside heat rejection system, the components of these systems are to be sized to maintain design conditions, with one electrical switchboard removed from service. The piping system or systems are dual path.

• Two independent sets of pipes are to be used for data centers using chilled water.

• Sufficient capacity and distribution should be available to simultaneously carry the load on one path while performing maintenance or testing on the other path. Errors during operation or other unplanned activities may still cause disruption.

Data center design should follow industry standards for best practices. Industry guidance is available

in the form of ANSI/TIA-942, the Telecommunications Industry Association's Telecommunications Infrastructure

Standard for Data Centers, which lists requirements and provides recommendations for data center design

and construction. TIA-942 helps consultants and end-users design an infrastructure that will last years

without forklift renovations. The standard also gives information on cooling, power, room sizing and

other information useful in data center design.

Below are recommendations given by Cisco, a world leader in networking:

● Realize the full potential of your data center investment by improving your network

performance, availability, security, and QoS.

● Adopt and combine technologies to create a data center network that continuously evolves to

sustain a competitive business.

● Proactively address potential data center issues before they affect operations.

● Achieve and maintain a comprehensive, end-to-end data center optimization solution.

● Make sure that the data center plays a strategic role in your efforts to protect, optimize, and

grow your business.

● Achieve operational excellence by providing informal training that prepares your staff to

knowledgeably manage data center technologies.

8 Conclusion

In many ways your Data Center is the brain of your company. Your business' ability to perceive the

world (data connectivity), communicate (e-mail), remember information (data storage), and have new

ideas (research and development) all rely upon it functioning properly.

A well-built Data Center does not just accommodate future growth and innovation; it acts as a catalyst

for them. Companies that know their Data Center is robust, flexible, and productive can roll out new

products, move forward with their business objectives, and react to changing business needs, all

without concern over whether their server environment is capable of supporting new technologies,

high-end servers, or greater connectivity requirements.

It is safe to assume that routers, switches, servers, and data storage devices will advance and change in

the coming years. They will feature more of something than they do now, and it will be your Data

Center's job to support it. Maybe they will get bigger and heavier, requiring more power and floor

space. Maybe they will get smaller, requiring more data connections and cooling as they are packed

tighter into the Data Center. They might even incorporate different technology than today's machines,

requiring alternate infrastructure. The better your server environment responds to change, the more

valuable and cost-effective it is for your business. New equipment can be deployed more quickly and easily,

with minimal cost or disruption to the business.

Data Centers are not static, so their infrastructure should not be either. Design for flexibility. Build

infrastructure systems using components that are easily changed or moved. This means installation of

patch panels that can house an array of connector types and pre-wiring electrical conduits so they can

accommodate various electrical plugs by simply swapping their receptacle. It also means avoiding

items that inhibit infrastructure mobility. Deploy fixed cable trays sparingly, and stay away from

proprietary solutions that handcuff you to a single brand or product.

Make the Data Center a consistent environment. This provides stability for the servers and networking

equipment it houses, and increases its usability. The room's modularity provides a good foundation for

this, because once a user understands how infrastructure is configured at one cabinet location, he or

she will understand it for the entire room. Build on this by implementing uniform labeling practices,

consistent supplies, and standard procedures for the room. If your company has multiple server

environments, design them with a similar look and feel. Even if one Data Center requires

infrastructure absolutely different from another, use identical signage, color-coding, and supplies to

make them consistent. Standardization makes troubleshooting easier and ensures quality control.

Above all, your Data Center has to be reliable. Its overarching reason for existence is safeguarding

your company's most critical equipment and applications. Regardless of what catastrophes happen

outside—inclement weather, utility failures, natural disasters, or something else unforeseen—you want

your Data Center up and running so your business continues to operate.

To ensure this, your Data Center infrastructure must have depth: standby power supplies to take over

when commercial electricity fails, and redundant network stations to handle the communication needs

if a networking device malfunctions, for example. Primary systems are not the only ones susceptible to

failure, so your Data Center's backup devices might need backups of their own.

Additionally, the infrastructure must be configured so there is no Achilles Heel, no single component

or feature that makes it vulnerable. It does little good to have multiple standby power systems if they

are all wired through a single circuit, or to have redundant data connections if their cable runs all enter

the building at one location. In both examples, a malfunction at a single point can bring the entire Data

Center offline.

As IT organizations look for ways to increase the cost-effectiveness and agility of their data centers,

they need to understand the current state of their architecture and determine which changes can best

help them achieve their business and IT goals.

Ensure more efficient use of data center facilities.

Gain operational efficiencies and cost savings through standardization and asset consolidation.

Increase asset utilization to increase flexibility and reduce costs.

Reduce energy costs.

Extend the working life of capital assets.

Optimize use of space, power, and cooling infrastructure.

Avoid or defer construction of new facilities.

Reduce business impact of localized and large footprint disaster events.

Improve productivity through enhanced application and data availability.

Meet corporate and regulatory compliance needs.

Improve data security and compliance.

Extend desktop hardware lifecycles.

Extend business continuity and disaster recovery to enterprise desktops.

Data Center solutions enable IT organizations to meet business continuance and corporate

compliance objectives, and provide benefits that include:

Reducing business impact of localized and large footprint disaster events.

Improving productivity through enhanced application and data availability.

Meeting corporate and regulatory compliance needs and improving data security.

9. An overview of the organization

PATNI COMPUTER SYSTEMS LTD. is a provider of Information Technology services and

business solutions. The company employs over 13,600 people, and has 23 international offices across

the globe, as well as offshore development centers in 8 cities in India. PATNI's clients include more

than 200 Fortune 1000 companies. PATNI has registered revenues of US$ 719 million for the year

2008.

PATNI COMPUTER SYSTEMS LTD. was incorporated as Patni Computer Systems Private

Limited on February 10, 1978 under the Companies Act, 1956. By virtue of Section 43A of

the Companies Act, the Company became a deemed public company in 1988, and it became a public limited company on September 18, 2003.

In 2004, PATNI came out with an initial public offering (IPO) of 18,724,000 equity shares at a price

of Rs 230 per share (face value Rs 2 each). In the same year, PATNI acquired Fremont,

California based Cymbal Corporation for US$78 million. The Cymbal acquisition allowed PATNI

to enter the $60 billion IT services market in the telecom vertical, which was previously not part of

PATNI's business landscape. The acquisition also allowed PATNI to grow its non-GE

business, and it added a development center in Hyderabad, India.

In December 2005, PATNI COMPUTER SYSTEMS LTD. listed its ADRs on the New York Stock

Exchange (NYSE) under the ticker PTI.

10. PATNI COMPUTER SYSTEMS LTD.

PATNI COMPUTER SYSTEMS LTD. (BSE: PATNI COMPUT, NSE: PATNI, NYSE: PTI) is one of

the leading global providers of Information Technology services and business solutions. Over 13,600

professionals service clients across diverse industries, from 27 sales offices across the Americas,

Europe and Asia-Pacific, and 22 Global Delivery Centers in strategic locations across the world. We

have serviced more than 400 FORTUNE 1000 companies over more than two decades.

Our vision is to achieve global IT services leadership in providing value-added high quality IT

solutions to our clients in selected horizontal and vertical segments, by combining technology skills,

domain expertise, process focus and a commitment to long-term client relationships.

PATNI delivers high quality, reliable and cost-effective IT services to customers globally. We provide

world-class technology services by constantly exploring and implementing innovative solutions that

drive long-term value to our customers.

As industry leaders, we introduced offshore development centers, pioneered "follow the sun"

development and support frameworks, ensuring compressed delivery timeframes.

Today, our solutions provide strategic advantage to several of the world's most admired organizations.

We have long-standing and vibrant partnerships with over 300 companies across the globe.

The most successful companies in the world have two common characteristics: they focus on core

competencies, and flawlessly execute on a consistent basis, in every area of business. For all other

companies, these elusive qualities are a highly prized goal. To achieve these goals in an environment

with increasing costs and competition, many global organizations are turning to Customer Interaction

Services (CIS) & Business Process Outsourcing (BPO) for streamlining their operations and

increasing profitability.

PATNI’s suite of CIS & BPO services is a natural extension of our IT service offerings. Our CIS &

BPO services are built on a foundation of process and domain expertise, and are enabled by innovative

technologies. As in our other practices, PATNI’s CIS & BPO services are managed to meet high

quality standards. Clients rely on them to improve the bottom line and focus more effectively on their

core business, while maintaining a high quality of service.

PATNI provides customized global sourcing solutions to a diverse group of clients who rely on us for

vertical-specific processes, as well as shared corporate services. In addition to a broad range of

horizontal services including IT Helpdesk, Finance & Accounting, HR Services, Enterprise-wide

Service Desk and Product Support, we also provide a comprehensive suite of CIS & BPO services

for the Insurance, Financial Services, Telecom, and Life Sciences vertical markets. For 401(k) Plan

administrators within the Insurance space, we also offer offshore services covering the entire Benefits

Administration Lifecycle.

PATNI offers clarity and expedience in delivering CIS & BPO solutions. We give clients a clear

roadmap that is designed to enhance productivity and reduce costs through process assessment,

process standardization and process re-engineering. To ensure the highest levels of information quality

and integrity, we have adhered to the BS7799 standard to establish a robust Information Security

Management System (ISMS) that integrates process, people and technology to assure the

confidentiality, integrity and availability of information. A global infrastructure helps us implement

CIS & BPO services that utilize the right combination of resources from offshore, onshore or onsite.

PATNI offers these CIS & BPO services through its state of the art BPO centers in India and across

the globe.

In the event of a business disaster, PATNI has you covered: we have a comprehensive disaster recovery and business continuity model in place to handle any disruption.

10.1 Key Industry verticals

Financial Services

Manufacturing

Telecom

Product Engineering

Life Sciences

Independent Software Vendors (ISV)

Retail

Media & Entertainment

Energy & Utilities

Logistics & Transportation

10.2 Key Services

In addition to these, PATNI serves its customers through the following Horizontal Business Units across various industries:

Application Development

Application Management

IT Consulting

Embedded Software

Infrastructure Management

Enterprise Application Services

Customer Intelligence Services & Business Process Outsourcing (CIS & BPO)

BI & DW

Enterprise integration

Verification & Validation

Process Consulting

Customer Interaction Services & Business Process Outsourcing

Engineering Services

IT Governance

Business Process Management

Customized Learning Solutions

User Experience Management

10.3 Global Locations

1. Americas

Brazil

Canada

Mexico

U.S.A. 

2. Europe, Middle East & Africa (EMEA)

Czech Republic

Finland

Germany

South Africa

Sweden

The Netherlands

UAE

United Kingdom 

3. Asia Pacific (APAC)

Australia

India

Japan

Singapore

10.4 Our Vision

“To be a trusted partner, powered by passionate minds, creating innovative options to excel.”

10.4.1 Our Values

Passion

Zeal to exceed expectations by challenging the status quo.

Honesty & Integrity

Driving all actions based on openness, trust and ethical conduct.

Operational Excellence

Striving for excellence with an operating discipline, benchmarked with the best globally.

Innovation

Creating innovative solutions for success.

Accountability

Taking ownership and delivering on commitments.

Customer Centricity

Ensuring customer delight. Always.

11. Patni Data Center Characteristics

A data center is designed for computers, not people. As a result, the PATNI COMPUTER SYSTEMS LTD. data center typically has no windows and minimal circulation of fresh air. The PATNI data center comprises about 3500 sq. feet dedicated to housing servers, storage devices, and network equipment.

The data center room is filled with rows of IT equipment racks that contain servers, storage devices, and network equipment. The data center also includes power delivery systems that provide backup power, regulate voltage, and make the necessary alternating current/direct current (AC/DC) conversions. Before reaching

the IT equipment rack, electricity is first supplied to an uninterruptible power supply (UPS) unit. The

UPS acts as a battery backup to prevent the IT equipment from experiencing power disruptions, which

could cause serious business disruption or data loss. In the UPS the electricity is converted from AC to

DC to charge the batteries. Power from the batteries is then reconverted from DC to AC before leaving

the UPS. Power leaving the UPS enters a power distribution unit (PDU), which sends power directly

to the IT equipment in the racks. Electricity consumed in this power delivery chain accounts for a

substantial portion of overall building load.
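
As a rough way to see how much of the incoming power actually reaches the IT components, the chain described here (UPS conversion, PDU distribution, and the server PSU and voltage regulators described next) can be modelled as a series of stage efficiencies. The sketch below is illustrative only; the efficiency figures are assumptions, not measured values for the PATNI facility.

    # Illustrative sketch of the data center power delivery chain.
    # Efficiency values are assumptions for illustration, not measured figures.

    def power_reaching_it_load(input_kw, efficiencies):
        """Multiply the incoming power by each stage efficiency in turn."""
        power = input_kw
        for stage, eff in efficiencies:
            power *= eff
        return power

    stages = [
        ("UPS (AC-DC-AC double conversion)", 0.92),  # assumed
        ("PDU / distribution wiring",        0.97),  # assumed
        ("Server PSU (AC to low-voltage DC)", 0.85),  # assumed
        ("Voltage regulators (DC-DC)",       0.90),  # assumed
    ]

    incoming_kw = 100.0
    delivered_kw = power_reaching_it_load(incoming_kw, stages)
    print(f"Of {incoming_kw:.0f} kW entering the chain, roughly "
          f"{delivered_kw:.1f} kW reaches the IT components; the rest is lost as heat.")

The losses themselves reappear as heat that the cooling plant must remove, which is why power delivery and cooling are treated together in this report.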

Electricity entering servers is converted from AC to low-voltage DC power in the server power

supply unit (PSU). The low-voltage DC power is used by the server’s internal components, such

as the central processing unit (CPU), memory, disk drives, chipset, and fans. The DC voltage serving

the CPU is adjusted by load specific voltage regulators (VRs) before reaching the CPU. Electricity is

also routed to storage devices and network equipment, which facilitate the storage and transmission

of data.

The continuous operation of IT equipment and power delivery systems generates a significant amount

of heat that must be removed from the data center for the equipment to operate properly. Cooling in

the data center is often provided by computer room air conditioning (CRAC) units, where the entire air

handling unit (AHU) is situated on the data center floor. The AHU contains fans, filters, and cooling

coils and is responsible for conditioning and distributing air throughout the data center. In most cases,

air enters the top of the CRAC unit and is conditioned as air passes across coils containing chilled

water pumped from a chiller located outside of the data center room. The conditioned air is then

supplied to the IT equipment (primarily servers) through a raised floor plenum. Cold air passes

through perforated floor tiles, and fans within the servers then pull air through the servers. The

warmed air stratifies toward the ceiling and eventually makes its way back to the CRAC unit intake.
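
To get a feel for the volume of air a CRAC unit must move, the sensible-heat relationship for air can be applied to the IT load and the temperature rise across the equipment. The figures below (a 20 kW zone and a 20 degF rise) are assumed for illustration only; they are not PATNI's measured values.

    # Rough airflow estimate for a CRAC-cooled zone using the sensible heat rule:
    # CFM ~= (load in BTU/hr) / (1.08 * delta_T_F). All inputs here are assumed values.

    WATTS_TO_BTU_HR = 3.412

    def required_airflow_cfm(it_load_kw, delta_t_f):
        """Approximate airflow (cubic feet per minute) to remove a sensible heat load."""
        btu_per_hr = it_load_kw * 1000 * WATTS_TO_BTU_HR
        return btu_per_hr / (1.08 * delta_t_f)

    zone_load_kw = 20.0     # assumed IT load handled by one CRAC unit
    temp_rise_f = 20.0      # assumed supply-to-return temperature difference

    print(f"A {zone_load_kw:.0f} kW zone with a {temp_rise_f:.0f} degF rise needs "
          f"about {required_airflow_cfm(zone_load_kw, temp_rise_f):,.0f} CFM of supply air.")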

Most air circulation in the data center is internal to the data center zone. The majority of data centers are

designed so that only a small amount of outside air enters. Some data centers provide no ductwork for

outside air to directly enter the data center area. Instead, outside air is only provided by infiltration

from adjacent zones, such as office space. Other data centers admit a relatively small percentage of

outside air to keep the zone positively pressurized.

The data center uses a significant amount of energy to supply three key components: IT equipment, cooling, and power delivery. These energy needs can be better understood by examining the electric power needed for typical data center equipment and the energy required to remove heat from the data center.

Table 11-1. Component Peak Power Consumption for a Typical Server

Component          Peak Power (Watts)
CPU                80
Memory             36
Disks              12
Peripheral slots   50
Motherboard        25
Fan                10
PSU losses         38
Total              251
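
Table 11-1 can be checked and scaled up with a few lines of code. The sketch below reuses the component figures from the table; the 20-servers-per-rack count used to extrapolate a rack load is an assumption for illustration, not the actual PATNI rack density.

    # Component peak power figures taken from Table 11-1 (watts).
    server_components = {
        "CPU": 80, "Memory": 36, "Disks": 12, "Peripheral slots": 50,
        "Motherboard": 25, "Fan": 10, "PSU losses": 38,
    }

    server_peak_w = sum(server_components.values())
    print(f"Peak power per server: {server_peak_w} W")   # matches the 251 W total in Table 11-1

    servers_per_rack = 20          # assumed rack density, for illustration only
    rack_peak_kw = server_peak_w * servers_per_rack / 1000
    print(f"A rack of {servers_per_rack} such servers would draw about {rack_peak_kw:.1f} kW at peak.")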

11.1 Data Centre Design & Layout

The main items for consideration in data centre design are broken down into four main groups:

Structural

Supporting Environment

Cabling Infrastructure

Security and Monitoring

Other sub-items that play a vital role in data centre design cannot be ignored either:

Air conditioning

UPS

Generators

Gas Fire Suppression

Air sampling systems

Raised Floors

Infrastructure Cabling

Environmental Monitoring

Security products

Racks etc.

11.2 Data Centre Design Structural

The structure and the location of the building are important considerations in data centre design. Recent standards cover both the structural element of the room or building and its geographic location, and address many different aspects of the premises.

11.2.1 These include:

Location; both geographical and physical within the building, e.g. within a flood plain (building) or below water level (building or room within a building)

Construction methods; materials used in the construction of the building, e.g. timber framed or metal and concrete

Access for deliveries

Proximity to hazards; airports, oil refineries, train lines, motorways, power stations, chemical

works, embassies, military locations etc

You have to install all the elements necessary to ensure a working and standards-compliant environment, including:

Fire rating of walls

Floor loading

Floor void

Floor grid configuration

Ceiling Height

Door sizes; from the loading bay to the data centre

Steps or ramps

Lifts

All of these items, and more, must be taken into consideration at the design stage to ensure the room

meets all necessary requirements.

11.3 Data Centre Design Supporting Environment

Any data centre, computer room or server room is only as good as the supporting services which

maintain the environment within the specifications laid down by the equipment manufacturers. This

environment must be maintained 24/7 or the equipment within the room could fail with potentially

catastrophic consequences for the business which it supports. It is therefore essential that the data

centre is designed to ensure that redundancy and resiliency of equipment is catered for in the

initial plan and not added as an afterthought, which would compromise space and other services.

You have to consider the following elements for your data centre to ensure a working and standards-compliant environment:

Electrical supply

Lighting

UPS

Generator

Air Conditioning

Humidity Control

Fire Suppression

There can be many variations to each of these services and the key to a successful implementation is

that all services are co-ordinated to co-exist in what can sometimes be very confined spaces.

The appointment of a properly qualified project manager can greatly improve the overall build time

and smooth implementation of any data centre project. Data Centre Standards Ltd has been involved in

the building and refurbishment of many Data Centers in the UK and Europe and can provide essential

first-hand experience and guidance. In line with these guidelines, PATNI has placed precision temperature monitoring and control devices in the data center.

All data center devices are fed by two UPS units dedicated solely to the data center to provide high uptime; both are rated at 160 kVA. Two different power supply sources are provided in the data center for redundancy. Three central generators (1x3500 kVA and 2x1500 kVA) are installed in the facility to cater to the power requirement when the direct utility supply is unavailable.

For the data center and other hub rooms, separate split AC units have been provisioned in addition to the central AC duct feeding all locations, purely for redundancy. Separate fire-fighting devices are placed in all hub rooms and the data center, in addition to the smoke/fire detection and prevention system.
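
A quick sanity check of the UPS sizing described above can be sketched as follows. The 160 kVA rating comes from the text; the power factor and the assumed IT load are illustrative assumptions, and in a redundant arrangement the load is checked against a single unit.

    # Sketch: check an assumed IT load against the 2 x 160 kVA UPS units mentioned above.
    # Power factor and load figures are assumptions for illustration.

    def ups_headroom(load_kw, ups_kva, power_factor=0.9):
        """Return remaining capacity (kW) of one UPS unit for a given load."""
        usable_kw = ups_kva * power_factor
        return usable_kw - load_kw

    ups_rating_kva = 160          # from the text: 2 x 160 kVA dedicated to the data center
    assumed_it_load_kw = 110      # assumed load, for illustration only

    headroom = ups_headroom(assumed_it_load_kw, ups_rating_kva)
    print(f"One 160 kVA UPS at 0.9 PF supports ~{ups_rating_kva * 0.9:.0f} kW; "
          f"with a {assumed_it_load_kw} kW load the headroom is {headroom:.0f} kW, "
          f"so either unit alone could carry the load if its twin fails.")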

11.4 Raised Floor

PATNI uses a raised-floor design inside the data center. A raised floor is an option with very practical benefits: it provides flexibility in electrical and network cabling, and in air conditioning.

A raised floor is not the only solution. Power and network poles can be located on the floor and air

conditioning can be delivered through ducts in the ceiling. Building a data center without a raised floor

can address certain requirements in ISP/CoLo locations. Wire fencing can be installed to create cages

that you can rent out. No raised floor allows these cages to go floor to ceiling and prohibits people

from crawling beneath the raised floor to gain unauthorized access to cages rented by other businesses.

Another problem this eliminates in an ISP/CoLo situation is the loss of cooling to one cage because a

cage closer to the HVAC unit has too many open tiles that are decreasing subfloor pressure. However,

some ISP/CoLo locations have built facilities with raised floor environments, because the benefits of a

raised floor have outweighed the potential problems listed above.

Drawbacks to the no-raised-floor system are the very inefficient cooling that cannot easily be rerouted

to other areas, as well as the problems associated with exposed power and network cabling. A raised

floor is a more versatile solution.

The PATNI data center uses a raised-floor system in which supply air comes from tiles in the cold aisle. A hot

aisle/cold aisle configuration is created when the equipment racks and the cooling system’s air supply

and return are designed to prevent mixing of the hot rack exhaust air and the cool supply air drawn

into the racks. All equipment is installed into the racks to achieve a front-to-back airflow pattern that

draws conditioned air in from cold aisles, located in front of the equipment. The rows of racks are

placed back-to-back, and holes through the rack (vacant equipment slots) are blocked off on the intake

side to create barriers that reduce recirculation, as shown in the graphic below in figure 11.4.

With proper isolation, the temperature of the hot aisle no longer impacts the temperature of the racks

or the reliable operation of the data center; the hot aisle becomes a heat exhaust. The HVAC system is

configured to supply cold air exclusively to the cold aisles and pull return air only from the hot aisles.

Figure 11.4

The hot rack exhaust air is not mixed with cooling supply air and therefore can be directly returned to

the air handler through various collection schemes, returning air at a higher temperature, often 85°F or

higher. The higher return temperature extends economization hours significantly and/or allows for a

control algorithm that reduces supply air volume, saving fan power. In addition to energy savings,

higher equipment power densities are also better supported by this configuration.

Underfloor distribution systems should have supply tiles in front of the racks. Open tiles may be

provided underneath the racks, serving air directly into the equipment. However, it is unlikely that

supply into the bottom of a rack alone will adequately cool equipment at the top of the rack without

careful rack design.
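
Whether bottom-of-rack supply is adequate comes down to how much cold air the tiles in front of a rack can deliver relative to the rack's heat load. The sketch below estimates the number of perforated tiles needed per rack; the per-tile airflow and rack loads are assumptions for illustration, not measurements from the PATNI floor.

    # Estimate perforated tiles needed per rack.
    # Assumptions for illustration: ~500 CFM per perforated tile at typical underfloor
    # pressure, and the same sensible-heat rule of thumb used earlier.
    import math

    def tiles_per_rack(rack_kw, delta_t_f=20.0, cfm_per_tile=500.0):
        required_cfm = rack_kw * 1000 * 3.412 / (1.08 * delta_t_f)
        return math.ceil(required_cfm / cfm_per_tile)

    for rack_kw in (3, 5, 8):     # assumed rack loads in kW
        print(f"{rack_kw} kW rack -> about {tiles_per_rack(rack_kw)} perforated tile(s) in the cold aisle")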

For proper ventilation, PATNI procured specially designed racks so that the air management system can duct cooling air directly to the intake side of the rack and draw hot air from the exhaust side, without diffusing it through the data center room space at all.

11.5 Data Centre Design Cabling Infrastructure

A data centre, computer room or server room by its very nature will house an abundance of

interconnecting cables. The cabling industry is now moving at such a pace that it is important to

select a cabling infrastructure that will cope with your day one requirements and any new

technologies which may be used within the data centre in the coming years.

PATNI COMPUTER SYSTEMS LTD. uses CAT6 UTP copper cabling (solid-core cables) in office buildings, the Data Center, and other installations to provide connectivity. Copper is a reliable medium for transmitting information over shorter distances; its performance is only guaranteed up to 109.4 yards

(100 meters) between devices. Solid-core cables provide better performance and are less susceptible to interference, making them the preferred choice for use in a server environment.

For connections longer than 100 meters, PATNI uses fiber cabling, which can handle connections

over a much greater distance than copper cabling, 50 miles (80.5 kilometers) or more in some

configurations. Because light is used to transmit the signal, the upper limits of how far a signal can

travel along a fiber cable is related not only to the properties of the cable but also to the capabilities

and relative location of transmitters.

Besides distance, fiber cabling has several other advantages over copper:

Fiber provides faster connection speeds.

Fiber isn't prone to electrical interference or vibration.

Fiber is thinner and lighter, so more cabling can fit into the same size bundle or limited

spaces.

Signal loss over distance is less along optical fiber than copper wire.

PATNI is using multimode fiber to provide connectivity over moderate distances, such as in most Data

Center environments or among rooms within a single building. A light-emitting diode (LED) is its

standard light source. The term multimode refers to the several rays of light that proceed down the

fiber. PATNI uses fiber connectivity even for distances of less than 100 meters where higher data-rate transmission is required, such as switch-to-switch (backbone) connectivity.
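
The copper-versus-fiber choice described above comes down to a distance check plus a preference for fiber on backbone links. Below is a minimal sketch of that decision logic, assuming only the 100-metre copper limit stated in the text; it is illustrative, not PATNI's actual provisioning rule.

    # Minimal sketch of the cabling choice described above: CAT6 UTP copper up to 100 m,
    # fiber beyond that, and fiber regardless of distance for backbone (switch-to-switch) links.

    COPPER_LIMIT_M = 100  # guaranteed CAT6 UTP reach from the text

    def choose_medium(distance_m: float, backbone: bool = False) -> str:
        if backbone:
            return "multimode fiber (backbone / high data rate)"
        if distance_m <= COPPER_LIMIT_M:
            return "CAT6 UTP copper"
        return "fiber"

    print(choose_medium(45))                    # CAT6 UTP copper
    print(choose_medium(45, backbone=True))     # multimode fiber (backbone / high data rate)
    print(choose_medium(350))                   # fiber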

11.6 Aisles and Other Necessary Open Space

Aisle space should allow for unobstructed passage and for the replacement of racks

within a row without colliding with other racks. The optimal space would allow for

the turn radius required to roll the racks in and out of the row. Also, rows should not

be continuous. Unbroken rows make passage from aisle to aisle, or from the front of

a rack to the back, very time consuming. Such clear passage is particularly important

in emergency situations. The general rule of thumb for free floor space is between 40

and 50 percent of the square footage.
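
Applying the 40 to 50 percent rule of thumb to the roughly 3500 sq. ft. room described in section 11 gives a quick estimate of how much of the floor should remain open; the sketch below is purely arithmetic on those two figures from the text.

    # Free floor space estimate using the 40-50 percent rule of thumb from the text,
    # applied to the ~3500 sq. ft. data center mentioned in section 11.

    total_sq_ft = 3500
    low, high = 0.40, 0.50

    print(f"Free aisle/open space target: {total_sq_ft * low:.0f} - {total_sq_ft * high:.0f} sq. ft.")
    print(f"Space left for racks and equipment: {total_sq_ft * (1 - high):.0f} - {total_sq_ft * (1 - low):.0f} sq. ft.")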

Figure 11.6 Proper Aisle Space and Non-Continuous Rows

How aisle space is designed also depends upon air flow requirements and RLUs. When designing the

center, remember that the rows of equipment should run parallel to the air handlers with little or no

obstructions to the air flow. This allows for cold air to move to the machines that need it, and the

unobstructed return of heated air back to the air conditioners. Figure 11.6 shows the aisle layout adopted at PATNI.

Be sure to consider adequate aisle space in the initial planning stages. In a walls-within-walls construction, where the data center is sectioned off within a building, aisle space can get tight,

particularly around the perimeter.

11.7 Network Redundancy

Just as PATNI provides electrical redundancy to servers through electrical conduits running from more than one power distribution unit, network redundancy is provided by structured cabling running from more than one networking device. Whereas electrical conduits are hardwired into the PDUs, structured cabling is standalone infrastructure into which any networking device can be plugged. Each cable essentially provides its own path, and you only need additional networking devices to make them redundant.

As long as you provide abundant structured cabling throughout the Data Center, you can increase redundancy as much as you want by simply installing more networking devices at the network row

and network substations. If you want to provide a minimum level of redundancy over the entire Data

Center, install a second set of networking devices in the network row and patch to key components at

the network substations. If you want to provide an even greater level of redundancy, double the

networking devices at each network substation as done by PATNI COMPUTER SYSTEMS LTD.

Providing this redundancy may or may not require additional cabling infrastructure. It depends upon

how many network connections a given server requires, and how many servers are patched into a Data

Center's network devices. Most servers require a minimum of two connections, one for a primary

Ethernet connection and another for either a console connection or a secondary Ethernet connection.
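
The cabling implication of this can be estimated directly from the connection counts given above. The sketch below tallies structured cabling runs; the server count is an assumption for illustration, while the two-connection minimum comes from the text.

    # Sketch: count structured cabling runs for a group of servers.
    # Server count is an assumption for illustration; the text notes most servers need at
    # least two connections (primary Ethernet plus a console or secondary Ethernet link).

    def cabling_runs(servers: int, connections_per_server: int = 2) -> int:
        """Structured cabling runs needed for a group of servers."""
        return servers * connections_per_server

    assumed_servers = 60    # assumed count, for illustration only
    print(f"{assumed_servers} servers at 2 connections each need "
          f"{cabling_runs(assumed_servers)} structured cabling runs to the network row or substation.")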

11.8 Data Centre Design Security and Monitoring

Security and monitoring are two key factors to ensure that a data centre, computer room or server

room will run undisturbed. Good security will ensure that access is controlled and in so doing,

interference of important and sensitive equipment by untrained and unauthorized personnel is

prevented. Monitoring of the environment can also help to prevent incidents which could otherwise

disrupt or destroy equipment within the data centre.

There are various types of security and monitoring equipment available, much of it designed

specifically for the data centre environment. PATNI has installed programmable access controllers on each door within the facility, with appropriate access rights granted to its employees. These access controllers are centrally managed by the access control team. For Data center access, PATNI has implemented two levels of security: one at card level (employee identity) and the other at PIN-code level, as shown in the figure below, and Data center access is granted only to data center team members. In addition, PATNI monitors its campus facility with digital cameras placed throughout the facility. All points of access are controlled by checkpoints and coded card readers. Figure 11.8 below shows one of the access controller types installed at PATNI.

Figure 11.8 Cipher Lock at Restricted Access Doorways

11.9 Data Center Network Architecture and Engineering

The Data Center network architecture is a key component of the Service Oriented Infrastructure. How

the network infrastructure is designed and implemented plays a key role in what level of service

availability and survivability the IT resources can offer. In many cases, the network is grown

organically with little consideration for future growth or physical/logical separation requirements. As

the application services infrastructure expands, it becomes more of a challenge to maintain and

purposefully plan for performance and availability of the network. Additionally, it is paramount to

understand not only the data center WAN and LAN infrastructure, but also the remote site WAN

infrastructure in order to match expected application performance and availability characteristics on an

end-to-end basis. PATNI follows a standard, proven approach that includes the following activities:

identification of touch points within the current environment, creation of configuration standards,

development of test plans (network and end user), development of integration and migration plans and

process for turnover to the operational environment. As-built documentation and knowledge transfer

are key deliverables. Design areas of focus are LAN Architectures (IP Routing Architecture, Layer 2

Switching), WAN Architectures (MPLS VPN, IPSEC VPN, and traditional VPN), WLAN

Architectures, Optical Architectures (DWDM, SONET), Content Delivery Architectures, IPv6 and

QoS Architectures.

PATNI uses most of the technologies and topologies mentioned above to run the Datacenter. Details are given in the sections below.

PATNI COMPUTER SYSTEMS LTD.’s new facility at Noida SEZ caters to corporate functions and staff for different processes. This new facility has connectivity to PATNI’s Center-I facility, in addition to two Data centers in the US (Billerica and San Jose) and one in the UK.

The other Data Centers are connected to SEZ over an MPLS backbone from a Service Provider. The MPLS SP has placed dual CPEs at SEZ and at the other Data Centers, and the CPEs have dual local loops to different PEs for redundancy. Center-I is connected to SEZ via Ethernet links: PATNI has procured 3x10 Mbps links between Center-I and SEZ. These links provide L-3 connectivity between the Core switches at both locations. This solution enables PATNI to route voice and data between the SEZ and Noida Center-I facilities.

PATNI COMPUTER SYSTEMS LTD. has also procured links from ISPs to provide internet access to its users at SEZ, with load balancing and redundancy across multiple ISP links. PATNI has procured AS number space. PATNI has also set up VPN capability for its remote users and telecommuters.

PATNI has designed a Tier 3 data center that is “concurrently maintainable” because the redundant

components are not on a single distribution path (as in Tier 2). The data center can have infrastructure

activity going on without disrupting the computer hardware operation in any way. This data center is

manned 24 hours a day—for maintenance, planned activities, repair and replacement of

components, addition or removal of capacity components, testing, etc.

PATNI data center is designed to be upgraded to Tier 4 when the business case justifies the cost of the

upgrade (additional protection). This datacenter has sufficient capacity and distribution available to

simultaneously carry the load on one path while performing maintenance or testing on the other path.

12. Network Topology

Figure 12.0 presents voice and data network architecture at PATNI:

Figure 12.0

12.1 Solution resiliency

12.1.1 LAN resiliency

For the sake of understanding and explanation, the PATNI SEZ network setup has been divided into two

areas that are:

Internet / Public Area, for accessing the public network

Intranet / Corporate Area, which connects to PATNI COMPUTER SYSTEMS LTD.’s international Data centres in the US & UK.

In the first phase, deployment has been completed in full on the first floor; the second and ground floors each have one Catalyst 2960 48-port switch, as depicted in the LAN diagram in figure 12.2 below:

12.2 Intranet Area network layout

Figure 12.2

12.3 Intranet Area

The new facility has three floors; however, only the first floor is covered in the first phase of project implementation.

PATNI COMPUTER SYSTEMS LTD. has procured two Catalyst 6509 switches with Sup-720 engines for its LAN campus switching network. The campus network is deployed as an industry-standard network. The core switches in the PATNI campus network provide core and distribution level access, whereas the edge switches, which include Catalyst 3750 and Catalyst 2960 switches, serve access layer connectivity.

There are three hub rooms on each floor, which serve local area connectivity for that floor. The server room is separate from the hub rooms and accommodates all core LAN / WAN / Voice equipment. Both core switches are installed in the server room with redundant power supplies; however, there is no Sup-level redundancy in either Catalyst 6509 chassis.

For better throughput, the Voice and Data networks are divided into logically separate VLANs and subnets. Layer-3 routing between the VLANs is done at the core switches.

There is an EtherChannel of 2 TenGig ports on the Sup engines between the core switches. The EtherChannel avoids downtime due to a single link failure and also provides high-speed data transfer rates.

All access switches have dual uplinks to both core switches. The redundant uplink is blocked via STP; if any uplink or Core switch goes down, the redundant uplink changes from blocking to forwarding state and all traffic starts routing via this link.

Core Switch-1 (ODD) is acting as root for all the odd VLANs (1,3,5,7…) & secondary root for

all even VLANs (2,4,6,8…) and Core Switch-2 (EVEN) is acting as root for all the even

VLANs & secondary root for all odd VLANs.

A Cisco 3750-PoE switch is configured for IP Phone and agent PC connectivity. However, if any access switch fails, all IP Phones and agent PCs connected to that switch will become unreachable, causing loss of production. To mitigate this, the PATNI tech team keeps one pre-configured PoE 3750 switch and one 2960 edge switch ready to replace a faulty unit so that production downtime can be minimized.

Both core switches have been procured with a single Sup engine and dual power supplies. Any problem in the Supervisor engine can cause a service outage, and all devices connected to the switch may become unreachable. Recalculation of STP may also be triggered in the network if the Supervisor engine on either Core switch fails.

PATNI COMPUTER SYSTEMS LTD. has various application servers deployed in the server room. To isolate the server farm, all servers are connected to high port-density access switches dedicated to the server farm, and those server farm switches are connected to the core switches. The server farm is also logically separated on a different VLAN.

12.4 Internet Area network layout

Figure 12.4 shows Internet area layout.

Figure 12.4

12.5 Internet Area

For internet access PATNI has procured bandwidth from VSNL; the second ISP is Verizon. These two links are connected to Cisco 3825 routers, which serve as the first level of defence towards the internet. These routers also have a hardened ACL configuration to secure the corporate network. The two routers are connected to Cisco 3750 Layer-3 switches, termed the outside switches of the network. For traffic load balancing, PATNI has procured AS number space from its service providers; multi-homing and load balancing are achieved via BGP configuration on the outside switches.

In the internet area there is an internet visitors’ area from which visitors are allowed to access the internet. For this purpose PATNI has procured a 4400 series WLC and 1200 series lightweight wireless access points. The WLC (Wireless LAN Controller) is directly connected to the outside switches and registers the APs in the visitor area to provide access to the network.

PATNI COMPUTER SYSTEMS LTD. has installed a Video Conferencing unit, which establishes connections via the public network. The VC (Life Science) is used for video conferencing within PATNI and for client meetings.

In the internet area, one pair of firewalls is installed in active/standby fashion; this serves as perimeter security for the corporate network. Various site-to-site VPNs with clients and between PATNI offices have been configured and established on this perimeter security firewall. PATNI has various application servers, such as Web, mail, proxy, FTP and client-application servers, which are deployed in the DMZ of the perimeter security firewall.

The default gateway for the perimeter security firewall is the Layer-3 switches (outside switches). The inside leg of the perimeter firewall is connected to the corporate network firewall’s outside interface, which also serves internet access to corporate network users, so for internal users there are two layers of security. PATNI has configured the firewalls to allow only client-specific and business-required applications; hence restrictions on the firewall are port, source and destination based. There is no direct internet access for internal users; access is only through the proxy, where PATNI has deployed content-based filtering with the help of WEBSENSE.

12.6 Backbone resiliency in intranet area

12.6.1 PHASE-I

PATNI COMPUTER SYSTEMS LTD. has procured 3x10 Mbps Ethernet link connectivity between Center-I and the SEZ facility. These links were the primary path for connectivity to the US & UK data centres in the first phase of deployment of the SEZ facility at Noida, but the US & UK data centres are now accessed directly through the MPLS cloud. These Ethernet links are configured as active as well as redundant to each other.

To achieve this objective, the core switching at both sites comprises Cisco Catalyst 6509 switches; for SEZ, PATNI has procured 2 nos. of Catalyst 6509 switches. Physical connectivity is as per the network diagram below in figure 12.6.2.

12.6.2 SEZ to Center-I connectivity network layout

Figure 12.6.2

2 nos. of links are connected to the Catalyst 6509 “A” switch and the remaining 1 no. is connected to the Catalyst 6509 “B” switch. All three links are connected in the Layer 3 domain in three different subnets.

There is 1 no. of Layer-3 connection between the Catalyst 6509 switches, in addition to the EtherChannel, at both facilities.

As shown, all four switches are in a single EIGRP domain.

For all voice subnets, Core switch “A” at the SEZ facility is the primary path and Core switch “B” is the secondary path. Policy-based routing with a high priority set ensures that no load balancing happens between the links for voice traffic and that voice traffic traverses a single path, with redundancy through the other available path from either Core switch, which ensures better voice quality.

Data traffic routes as per the available path learned through EIGRP dynamic routing.

12.6.3 PHASE-II

PATNI COMPUTER SYSTEMS LTD. has procured two MPLS links from two different ISPs for its SEZ facility’s connectivity to the US & UK Data centres. For local-end redundancy, PATNI has procured local loop connectivity from two different service providers. MPLS CPE routers are placed at the SEZ facility only.

The primary link from each service provider is an Ethernet handoff and the secondary link is serial.

In the corporate network, a pair of ASA 5520 firewalls is configured to secure the corporate network from the MPLS cloud.

PATNI has MPLS connectivity to its US & UK Data centres over the Service Provider network. Any outage in the service provider MPLS network is handled and managed by the SP.

There are two MPLS CPEs placed at the SEZ facility. Both CPE routers are connected to a Cisco 3750 Layer-3 switch.

All corporate data and voice traffic flows between the data centres and the SEZ facility through the MPLS cloud.

12.7 LAN Switching

12.7.1 Design Requirements and Assumptions

The number of LAN connections to be deployed in the first phase at the first floor is planned per hub room. VLAN domains are defined on the basis of data and voice connections.

The only production site for this LAN design is SEZ.

In the first phase, 8 nos. of Catalyst 2960-48 switches have been installed in each hub room on the first floor, along with 1 no. of Catalyst 3750-48 PoE switch per hub room.

The ground floor and second floor each have 1 no. of Catalyst 2960 48-port switch.

1 no. of Catalyst 2960 8-port switch is installed in each hub room, and 1 no. has been installed on the ground floor. These are used for direct public network access within the premises.

In the first phase of installation, 100 IP Phone connections are deployed on the first floor.

All access layer switches have dual-leg fibre LC-to-LC connectivity to the core switches for redundancy.

Each core switch has a single SUP-720-3B module, so there is no Supervisor module redundancy on the Core switches. However, dual power supplies are available on both switches.

12.7.2 Technical Prerequisites

Telnet passwords - PATNI has a Cisco TACACS server in place, integrated with AD, through which all network devices are authenticated.

As mentioned in the sections above, traffic load balancing or load sharing is configured for the internet connection with the help of the ISPs.

PATNI has installed a Syslog server for capturing logs from all network devices. PATNI has installed DC, DHCP, DNS, WSUS, File server, FTP and Proxy/Websense servers in the network to provide end-user authentication for network access, IP address assignment, name resolution, patch updates, file management, file transfer and internet access management with content filtering, respectively.

All switches are in a single VTP domain for ease of configuration within the switching domain. The VTP domain is password protected for security.

12.7.3 VLAN ID and Subnet Info

VLANs are assigned as per process requirements: every process or department is placed in a separate VLAN. Inter-VLAN routing is configured on the Core switches.

12.7.4 Rack layout

Following is the Device Rack layout.

U No.   Equipment installed (distributed across Rack # 1 - Rack # 4)
42-36   (vacant)
35      Public Switch Primary (3750-24TS-E)
34      DMZ Primary Switch (3750G-24TS-S1U)
33      Public Switch Secondary (3750-24TS-E)
32      External Firewall Primary (ASA 5520); DMZ Secondary Switch (3750G-24TS-S1U)
31      External Firewall Secondary (ASA 5520)
30      Wireless External (WLC4402-12-K9); Internal Firewall Secondary (ASA 5520)
29      Secure Access Control (ACS-113)
28      Client Switch (WS-C3750-24TS-S); Internal Firewall Primary (ASA 5520); Packet Shaper (3500); Wireless Internal (WLC4402-50-K9)
27      Client Switch (WS-C3750-24TS-S); MPLS Switch Primary (3750-24TS-E)
26      (vacant)
25      Client Bharti MPLS Router (2800); MPLS Switch Secondary (3750-24TS-E); Core Switch - ODD (WS-6509-E Series); Core Switch - EVEN (WS-6509-E Series)
24      Client Reliance MPLS Router (2800)
23      Verizon MPLS Primary Router (Cisco-2800)
22      Verizon MPLS Secondary Router (Cisco-2800)
21      VSNL MPLS Secondary Router (Cisco-2800)
20-19   (vacant)
18      VSNL MPLS Primary Router (Cisco-2800)
17      (vacant)
16      VSNL Public Router (Cisco-3825)
15      VG Router (Cisco-3845-MB)
14      Verizon Public Router (Cisco-3825)
13-1    (vacant)

Figure 12.7.4

PATNI has to arrange the rack space as per the following ‘U’ space requirements of the devices:

Equipment                  U-Space Required
Catalyst 3750 Switch       1U
Catalyst 6509-E            13U
Cisco 3825 router          2U
Cisco WLC                  1U
Cisco ASA 5520 Firewall    1U

Table 12-7-4

12.7.5 POWER CONSIDERATION

The minimum number of power points required for the Cisco equipment is listed as follows:

Equipment                  Power Points (5/15A) Required
Catalyst 3750 Switch       1
Catalyst 6509-E            2
Cisco 3825 router          2
Cisco WLC                  2
Cisco ASA 5520             1

Table 12-7-5

PATNI has provisioned sufficient power and power points in the Data center to feed every device. To achieve this, PATNI has set up 2x160 kVA UPS units dedicated to the Data center in a redundant fashion.

Proper earthing (1 V - 3 V) has been done for the whole facility.
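
Table 12-7-5 can be turned into a simple tally of 5/15 A power points for the equipment row. The per-device requirements below come from the table; the device quantities are assumptions for illustration and should be read off the actual rack layout in figure 12.7.4.

    # Power point tally based on Table 12-7-5. Device quantities are assumed for
    # illustration; per-device requirements are taken from the table.

    power_points_per_device = {
        "Catalyst 3750 Switch": 1,
        "Catalyst 6509-E": 2,
        "Cisco 3825 router": 2,
        "Cisco WLC": 2,
        "Cisco ASA 5520": 1,
    }

    assumed_quantities = {
        "Catalyst 3750 Switch": 6,
        "Catalyst 6509-E": 2,
        "Cisco 3825 router": 2,
        "Cisco WLC": 2,
        "Cisco ASA 5520": 4,
    }

    total = sum(power_points_per_device[d] * q for d, q in assumed_quantities.items())
    print(f"Total 5/15A power points required for this equipment list: {total}")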

12.8 Spanning Tree Design

Spanning-tree is required in the LAN campus network infrastructure to ensure a loop-free forwarding

topology in the presence of redundant Layer 2 paths. Spanning tree should never be disabled even if the

topology is designed to be loop free.

In the PATNI network, the Rapid-PVST spanning tree algorithm is deployed for fast convergence. Root switches are segregated between the VLANs for traffic load sharing.

12.8.1 Trunking

802.1Q (dot1q) trunks are configured between the core switches and between the core and access switches on the uplink interfaces. A trunk is a point-to-point link that carries the traffic of multiple VLANs and also allows VLANs to be extended across an entire network.

802.1Q has been chosen because it is an industry-standard trunking encapsulation, whereas ISL, being Cisco proprietary, has some limitations; for example, it is not supported on the following switching modules as of this date:

WS-X6502-10GE
WS-X6548-GE-TX, WS-X6548V-GE-TX, WS-X6548-GE-45AF
WS-X6148-GE-TX, WS-X6148V-GE-TX, WS-X6148-GE-45AF

12.8.2 EtherChannel

An EtherChannel bundles individual Ethernet links into a single logical link that provides the

aggregate bandwidth of up to eight physical links.
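
For the core interconnect described in section 12.3 (two TenGig ports bundled between the core switches), the aggregate bandwidth and the effect of a member-link failure can be shown in a few lines. The per-link speed is taken from the text and the eight-link ceiling is the limit stated above; the sketch is illustrative only.

    # EtherChannel aggregate bandwidth sketch. The 2 x 10 Gbps bundle reflects the core
    # interconnect described in section 12.3; eight links is the maximum stated above.

    MAX_MEMBER_LINKS = 8

    def aggregate_gbps(member_links: int, link_speed_gbps: float = 10.0) -> float:
        if not 1 <= member_links <= MAX_MEMBER_LINKS:
            raise ValueError("an EtherChannel bundles 1 to 8 physical links")
        return member_links * link_speed_gbps

    print(f"Core-to-core bundle: {aggregate_gbps(2):.0f} Gbps")
    print(f"After one member link fails, the channel stays up at {aggregate_gbps(1):.0f} Gbps")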

12.8.3 Spanning Tree Protocol

The IEEE 802.1w standard was developed to take 802.1D’s principal concepts and make the resulting

convergence much faster. This is also known as the Rapid Spanning Tree Protocol (RSTP). RSTP

defines how switches must interact with each other to keep the network topology loop free, in a very

efficient manner. Like 802.1D, RSTP’s basic functionality can be applied as a single or multiple

instances.

Core Switch-1 acts as root for all Odd VLANs & secondary root for all Even VLANs and Core

Switch-2 acts as root for all Even VLANs & secondary root for all Odd VLANs.
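
The odd/even split between the two core switches can be expressed as a simple rule over the VLAN IDs. The sketch below generates a per-VLAN root assignment; the bridge priority values shown (24576 for the primary root, 28672 for the secondary) are the values commonly associated with Cisco's root primary/secondary macros and are included here as an assumption for illustration, not as PATNI's documented configuration.

    # Sketch of the odd/even root-bridge split described above. The bridge priority values
    # (24576 primary, 28672 secondary) are assumed typical values, shown for illustration.

    def rpvst_roles(vlan_ids):
        plan = {}
        for vlan in vlan_ids:
            if vlan % 2 == 1:   # odd VLAN -> Core Switch-1 (ODD) is root
                plan[vlan] = {"Core-1": 24576, "Core-2": 28672}
            else:               # even VLAN -> Core Switch-2 (EVEN) is root
                plan[vlan] = {"Core-1": 28672, "Core-2": 24576}
        return plan

    for vlan, prio in rpvst_roles([10, 11, 20, 21]).items():
        root = "Core-1" if prio["Core-1"] < prio["Core-2"] else "Core-2"
        print(f"VLAN {vlan}: root bridge {root} (priorities {prio})")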

12.8.4 HSRP

HSRP is configured to provide redundancy for the L3 path. An important point to note is that the L3 path for a particular VLAN’s traffic converges with the L2 STP path, so that even in the case of a device failure, L2 and L3 convergence occur together on the same device. A single HSRP group is used between both Core switches.

The Core-1 switch is L3 active for all Odd VLANs and standby for all Even VLANs. Likewise, the Core-2 switch is L3 active for all Even VLANs and standby for all Odd VLANs.

12.8.5 Helper Address

Helper addresses are configured on the switches so that end-user machines can obtain IP addresses from the DHCP server that sits in the Server VLAN. Each L3 VLAN interface except the server VLAN is configured with a helper address.
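
The helper-address rule above (every L3 VLAN interface except the server VLAN) is easy to express as a small mapping. The VLAN IDs and DHCP server address in the sketch are hypothetical placeholders, not PATNI's production values.

    # Sketch of the helper-address rule: every L3 VLAN interface except the server VLAN
    # relays DHCP to the server sitting in the server VLAN. Addresses and VLAN IDs below
    # are hypothetical placeholders.

    def helper_address_plan(vlan_ids, server_vlan, dhcp_server_ip):
        return {
            vlan: (dhcp_server_ip if vlan != server_vlan else None)
            for vlan in vlan_ids
        }

    plan = helper_address_plan(vlan_ids=[10, 20, 30, 100], server_vlan=100,
                               dhcp_server_ip="10.1.100.10")
    for vlan, helper in plan.items():
        print(f"VLAN {vlan}: " + (f"ip helper-address {helper}" if helper else "no helper (server VLAN)"))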

12.8.6 Wireless

PATNI COMPUTER SYSTEMS LTD. has procured two 4400 series WLC controllers and 20 APs for its wireless network. One WLC controller has been installed in the public area; it has direct connectivity to the public network and provides wireless access to visitors at PATNI.

The second WLC controller has been installed in the corporate area and is directly connected to PATNI’s private network; it provides wireless access to PATNI users. As of now, 8 internal and 8 external APs are installed for access to the internet as well as the corporate network. In this scenario, the internal APs are associated with the corporate area WLC and have a non-broadcast SSID, while the 8 external APs are associated with the internet area WLC and have a broadcast SSID with shared-key access configured.

The corporate SSID is associated with PATNI’s existing Active Directory domain controller for user-level authentication, while the external SSID uses pre-shared key authentication.

Bibliography

“Grow A Greener Data Center”, by Douglas Alger

“Data Center Projects Plan”, by Neil Rasmussen & Wendy Torell

“Enterprise Data Center Design and Methodology”, by Rob Snevely

“High performance Data Centers”, A Design Guidelines
