July 15 2006


Technology, Business, Leadership

Transcript of July 15 2006

Page 1: July 15 2006


Page 2: July 15 2006

From the Editor

The Worm Turns
Business executives are becoming more accountable for IT results.

The last cover story of CIO (A New Game Plan) looked at how IT strategists are evolving into business strategists. It explored how CIOs are beginning to delve into business metrics, and not just IT measures, to determine the success of IT projects. As our advisory board sees it, IT executives are getting more accountable for business results. An equally interesting trend is that business executives are becoming more accountable for IT results.

Take the case of ICICI Bank, which takes a different approach to IT projects. Its MD & CEO, K.V. Kamath, tells us in this issue that in his organization, business units ‘own’ technology and implement it.

A major benefit of this, according to him, has been the dramatic reduction in the number of failed projects as well as a reduction in implementation time. “Since the user departments are conscious of their own overheads, it leads to further savings,” he adds.

He goes on to say that the CIO’s role in all this is to cut down “technological anarchy”, since various user groups can end up creating systems that don’t communicate with one another. “Here, a CIO is somebody who brings together and keeps in harmony the various user groups in the context of their technological needs,” he observes.

A CIO I was speaking to the other day was appreciative of the banker’s remarks, though he observed that if various departments in a multi-business unit scenario did indeed own their technology, then the CIO would have an extremely difficult time attempting to bring all of them onto a common platform — since business unit heads would have no obligation to listen to him, as their loyalties would lie more with their own units, for which they are responsible and against whose performance they are measured.

I find this a scary development. Fascinating for sure, but still scary. Corporate executives scale managerial heights because the expertise they bring along is vital to an organization’s growth. Whether this means that IT leaders talk business or business leaders tilt toward IT is hardly a moot point from an enterprise’s point of view.

Either way, if CIOs don’t step up their efforts to be a part of corporate decision-making, they may find that if they’re still around, it won’t be to provide their company with weapons to take on competition. It will be to take orders and put out technological fires.

If CIOs don’t step up efforts to be a part of decision making, they may find themselves out of a meaningful role.

Vijay Ramachandran, Editor [email protected]


Page 3: July 15 2006

Cover: Imaging by Binesh Sreedharan; Photo by Srivatsa Shandilya

Content

26 | Sajid Ahmed, global head-infrastructure, Syntel, has a network philosophy that is rooted in the manufacturing sector: too much inventory is always too bad.

Executive Expectations | VIEW FROM THE TOP | 36
K.V. Kamath, MD & CEO, ICICI Bank, has an unusual approach to IT implementations: he ensures that user groups take complete responsibility. CIOs, he says, should focus on harmonizing these groups. Interview by Gunjan Trivedi

Executive Coach | IT'S GOOD NEWS | 23
CIOs need to help their staffers understand that if they can hold on during the tough times, the payoff is just around the corner. Column by Susan Cramm

Peer to Peer | RED LIGHT, GREEN LIGHT | 20
How do you align your IT department with the company's business goals? One CIO used project management discipline and the Traffic Light Report. Column by Dr. Catherine Aczel Boivie

Energy | POWERING DOWN | 40
Electricity-hungry equipment, combined with rising energy prices, is devouring data center budgets. Here's what you can do to get costs under control. Feature by Susannah Patton

Network Infrastructure | COVER STORY | A CABLE FABLE | 26
The 10G Ethernet standard might have had varied applications for IT services company Syntel and plant nutrients manufacturer Nagarjuna Fertilizers. But both believe it is the way to future-proof the enterprise. Feature by Gunjan Trivedi


Page 4: July 15 2006

Content (cont.)

Govern | INNOVATION ON TRACK | 46
Transacting with 10 lakh passengers and making Rs 1 crore a day, few e-governance applications can top the success of the Passenger Reservation System. Now, 20 years since its launch, the Indian Railways is steaming ahead to better the system and improve customer experience. Feature by Rahul Neel Mani

THE ELECTRIC REFORMS | 50
Using IT-powered solutions, Bharat Lal Meena, MD, Karnataka Power Transmission Corporation and chairman of the state's electric supply companies, has managed to track and control transmission and distribution losses — and brought the consumer into the loop too. Interview by Balaji Narasimhan


Departments

Trendlines | 15
Internet Solutions | Indian SMBs to Spend Big
Staffing | Not More but Better Engineers
Networks | FIFA Network Tackles Tough Challenges
Technology | Next World Cup's IT Test
Security | Robots Patrol Berlin Football Stadium
Innovation | Great Play on Small Screens
Book Review | Why Technology Fails
Management Report | User-Friendly IT Governance
Telephony | Five Steps to VoIP Success

Essential Technology | 54
Data Center Intelligence | The Shrinking Server | By Christopher Lindquist
Security | The Endpoint of Endpoint Security | By Scott Berinato

From the Editor | 3
The Worm Turns | Business executives are becoming more accountable for IT results. | By Vijay Ramachandran

Inbox | 14

NOW ONLINE

For more opinions, features, analyses and updates, log on to our companion website and discover content designed to help you and your organization deploy IT strategically. go to www.cio.in


Page 5: July 15 2006

Management
President: N. Bringi Dev
COO: Louis D'Mello

Editorial
Editor: Vijay Ramachandran
Bureau Head-North: Rahul Neel Mani
Assistant Editors: Ravi Menon; Harichandan Arakali
Special Correspondent: Balaji Narasimhan
Senior Correspondent: Gunjan Trivedi
Chief Copy Editor: Kunal N. Talgeri
Copy Editor: Sunil Shah

www.cio.in
Editorial Director-Online: R. Giridhar

Design & Production
Creative Director: Jayan K Narayanan
Designers: Binesh Sreedharan, Vikas Kapoor, Anil V.K., Jinan K. Vijayan, Unnikrishnan A.V., Sasi Bhaskar, Vishwanath Vanjire, Sani Mani, MM Shanith, Anil T, PC Anoop
Photography: Srivatsa Shandilya
Production: T.K. Karunakaran, T.K. Jayadeep

Marketing and Sales
General Manager, Sales: Naveen Chand Singh
Brand Manager: Alok Anand
Marketing: Siddharth Singh
Bangalore: Mahantesh Godi, Santosh Malleswara, Ashish Kumar
Delhi: Nitin Walia; Aveek Bhose
Mumbai: Rupesh Sreedharan, Nagesh Pai, Swatantra Tiwari
Japan: Tomoko Fujikawa
USA: Larry Arthur; Jo Ben-Atar
Singapore: Michael Mullaney
UK: Shane Hannam


All rights reserved. No part of this publication may be reproduced by any means without prior written permission from the publisher. Address requests for customized reprints to IDG Media Private Limited, 10th Floor, Vayudooth Chambers, 15–16, Mahatma Gandhi Road, Bangalore 560 001, India. IDG Media Private Limited is an IDG (International Data Group) company.

Printed and Published by N Bringi Dev on behalf of IDG Media Private Limited, 10th Floor, Vayudooth Chambers, 15–16, Mahatma Gandhi Road, Bangalore 560 001, India. Editor: Vijay Ramachandran. Printed at Rajhans Enterprises, No. 134, 4th Main Road, Industrial Town, Rajajinagar, Bangalore 560 044, India

Marketing & Sales

Bangalore
Mahantesh Godi
Tel: +91 9342578822
[email protected]
IDG Media Pvt. Ltd.
7th Floor, Vayudooth Chambers
15-16, Mahatma Gandhi Road
Bangalore 560 001

Delhi
Nitin Walia
Tel: +91 9811772466
[email protected]
IDG Media Pvt. Ltd.
1202, Chirinjeev Towers
43, Nehru Place
New Delhi 110 019

Mumbai
Swatantra Tiwari
Tel: +91 9819804659
[email protected]
IDG Media Pvt. Ltd.
208, 2nd Floor, “Madhava”
Bandra-Kurla Complex
Bandra (E)
Mumbai 400 051

Japan
Tomoko Fujikawa
Tel: +81 3 5800 4851
[email protected]

USA
Larry Arthur
Tel: +1 415 243 4141
[email protected]

Singapore
Michael Mullaney
Tel: +65 6345 8383
[email protected]

UK
Shane Hannam
Tel: +44 1784 210210
[email protected]

Advertiser Index

Avaya 4, 5

AMD 32,33

Canon IBC 59

HP 13

Interface 25

IBM BC 60

Krone 29

Microsoft Cover Gatefold

Mercury 9

Netmagic 21

Raritan IFC 2

R&M 11

RIM 57

SAS 19

Wipro 6,7


Page 6: July 15 2006

Reader Feedback

Too Much Data
From all the issues of CIO I have gone through, I noticed that yours is the very first magazine with the perspective of a CIO who is more like a CEO and not merely a technology manager. It’s an approach that is quite different from any other publication in the market. You rightly show a CIO as someone who strategizes not only for IT but for business as well. CIOs, I feel, are increasingly getting closer to business and are locating avenues for generating business-oriented productivity as well as profitable values for their organization.

Interestingly, I have noticed that the moment a CIO dons a CEO’s hat, he realizes that he is actually providing an abundance of information — maybe too much. As a CIO, he does not realize the business value of information he is making available in abundance and continues to generate report after report. CIOs should be more business-oriented and should always look out for value while making information available — especially to customers.

A CIO should realize that in making information available to customers, there is an investment of the organization’s assets and resources. And, therefore, returns are key. This realization can only come about when a CIO gets past his familiar role of an information provider and steps into the shoes of a CEO. And here, CIO, as a magazine, is helping.

Speaking of information, I would like to confess that I am inundated with information coming from every medium, be it print or online. It’s hitting me hard. The volume of information coming in from different media day-in and day-out makes it difficult to keep track of what I’m reading or where it’s from. In fact, it's come to the point where I've had to get into the habit of indexing and filing relevant information to address this problem.
S.R. Mallela

CTO, AFL

Customer Gauge
I read the CIO View From The Top interview of Habil Khorakiwala, chairman of Wockhardt (Growth Tonic, 1 February 2006), with keen interest. Being a player in the healthcare sector here in Sri Lanka, I was interested in knowing more about the Infinity Prescription Information System, the IT initiative that Khorakiwala talked about, which enables Wockhardt to gauge a customer’s experience and views — and the vendor for this system.
Kasturi Wilson
Director-Shared Services
Hemas Holdings, Sri Lanka

“Infinity is Wockhardt’s indigenously developed sales force automation application that front-ends into the portal www.wockinfinity.com. This application facilitates online tracking of customer response and sales activities. I suggest that for more information you get in touch with the Infinity Cell at [email protected].” — Editor

Management Bestsellers
I would like to compliment the editorial team and staff of CIO for the quality, content and coverage of various issues in the magazine. The selection of topics is admirable; the articles are focused on issues faced by CIOs on a daily basis.

After a long stint on the non-IT side of business, doing business development, strategic planning and finance, I am currently a CIO. I find the articles in CIO particularly relevant because they deal with concepts from a management perspective and not just from a technology perspective. Some of my colleagues, who are functional heads, find that the articles make for interesting reading.

I would like to suggest you also feature a page or a corner on management tips or excerpts from a current bestselling management book. I think it will be a worthwhile addition.
Vikas Gadre
CIO, Rallis India

What Do You Think?

We welcome your feedback on our articles, apart from your thoughts and suggestions. Write in to [email protected]. Letters may be edited for length or clarity.


“CIOs should realize that making data available to customers means an investment from the organization. So returns are key.”


Page 7: July 15 2006

Trendlines | New * Hot * Unexpected

NET SOLUTIONS | Indian SMBs to Spend Big
Indian small and medium businesses (SMB) will invest up to Rs 5,400 crore on beefing up their Internet infrastructure and solutions this year, says AMI Partners, a research firm headquartered in New York. This will account for almost a sixth of the total IT spending in the country this year, according to the firm.

“We defined the businesses as small if their staff strength ranged between one and 99, and as medium if their employees numbered between 100 and 499,” says Avimanyu Datta, an AMI analyst. “The biggest spenders among small businesses are in the retail sector, and the manufacturing sector for medium-sized businesses,” he added.

SMB spending on Internet access is forecasted to amount to nearly Rs 3,735 crore this year, a 24 percent jump from last year. Almost all medium businesses and more than half of PC-using small businesses already have access to the Internet. However, nearly half of the small businesses use dial-up access and, therefore, an untapped market still exists in the broadband Internet arena, Datta says.

The market is also untapped in the way the Internet is used in the country. “We use a proprietary ‘Global model’ in our surveys. It showed that Indian SMBs lag behind those in South Korea, for instance, in exploiting the Net for business growth,” he says. “In India, it’s still considered an economical channel for communication rather than a strategic growth driver.”

Website and e-commerce penetration remains comparatively low among small businesses. Small businesses operate on very low margins as their customers are usually medium businesses, not large ones. As contact with end consumers increases, website and e-commerce penetration is likely to correspondingly increase alongside the margins, he says.
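For readers who want to see where those figures lead, here is a back-of-envelope sketch in Python. Only the Rs 5,400 crore figure, the roughly one-sixth share of total IT spending, and the Rs 3,735 crore Internet-access spend with its 24 percent year-on-year jump come from the AMI Partners numbers quoted above; the derived totals are rough arithmetic, not AMI's own estimates.

```python
# Rough arithmetic implied by the AMI Partners figures quoted in the article.
smb_internet_spend_crore = 5_400          # SMB spend on Internet infrastructure and solutions
share_of_total_it_spend = 1 / 6           # "almost a sixth" of total Indian IT spending
implied_total_it_spend = smb_internet_spend_crore / share_of_total_it_spend

internet_access_spend_crore = 3_735       # forecast SMB spend on Internet access
yoy_growth = 0.24                         # "a 24 percent jump from last year"
implied_last_year_spend = internet_access_spend_crore / (1 + yoy_growth)

print(f"Implied total Indian IT spend: about Rs {implied_total_it_spend:,.0f} crore")      # ~32,400
print(f"Implied last year's access spend: about Rs {implied_last_year_spend:,.0f} crore")  # ~3,012
```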

— By Harichandan Arakali

STAFFING | Not More but Better Engineers
Conventional wisdom says the US must produce more engineers or risk losing its lead in innovation to India and China, which graduate far more engineers each year than the US does. But that's not the problem, according to Forrester Research: the US simply needs better ones.

“The race to develop more engineers evokes the Cold War arms race, and it’s an approach that won’t work in today’s global economy,” says Navi Radjou, a vice president with Forrester. “The US should not be looking at China and India and saying they are the new Japan and Russia. These countries are trading partners.”

Instead, to remain competitive, the US must breed a new type of engineer who is as business-savvy and multi-culturally minded as he is technically trained, says Radjou.

Creating better engineers involves retraining current employees and revamping university engineering curricula to reflect interdisciplinary thinking. But even kindergarten teachers can prepare tiny innovators for engineering by encouraging collaboration and promoting multi-cultural education.

Nevertheless, argues Martin Jischke, president of Purdue University, numbers have power. Jischke, who is also an adviser to President Bush, supports interdisciplinary education but insists, “A nation that lacks a critical mass of scientists and engineers will not lead the world in the decades ahead.”

— By Lauren Capotosto


Illustration by Binesh Sreedharan


Page 8: July 15 2006

World Cup Trendlines

NETWORKS | FIFA Network Tackles Tough Challenge
By John Blau

The World Cup was not only the planet's largest sporting event, it was also home to what many experts call the world's biggest communications network built for a single event.

Over 15 terabytes of data, the equivalent of more than 100 million books, traveled across a converged voice and data communications network that linked stadiums, control centers, management offices, hotels, railroad stations and numerous other outlets.

With over 3 billion fans following the games — the most viewed World Cup ever — "it wasn't a good time to make a mistake," said Peter Meyer, head of IT at FIFA.

By the end of the games, over 200,000 people, including 15,000 sports reporters, connected to the network, built and managed by Avaya. The company installed an all-IP network which, for the first time in the history of the games, included voice as an integrated and not a dedicated service, according to Douglas Gardner, MD of the Avaya FIFA World Cup program. As part of its VoIP (voice over internet protocol) service, the vendor provided a centralized, server-based directory service, as well as client software that allowed authorized users to make phone calls from their notebook computers.

Toshiba equipped FIFA organizers with more than 3,000 notebook computers for the event. "There were a lot of people moving around at the games," said Toshiba spokesman Manuel Linnig. "We configured the notebooks for quick, easy access to all the LANs and wireless LANs within the FIFA network, and installed several security features, including fingerprint readers."

Although WLAN technology was widely deployed, FIFA required all systems to be linked by cable as well.

TECHNOLOGY | Next World Cup's IT Test
Germany put on a good World Cup tournament — so good that some experts wonder whether they'll be able to pull off a similar feat in South Africa four years from now. For sure, Germany raised the bar for technical precision at a World Cup event.

More than 3 million fans, who attended 64 games, had tickets embedded with an RFID (radio frequency identification) chip that contained identification information.

Police, fire and emergency squads at every stadium used TETRA (tap-proof digital terrestrial trunked radio) phones. The handsets were also equipped with a GPS (global positioning system) transceiver to locate and direct emergency personnel.

The National Information and Cooperation Center (NICC), located inside the German Interior Ministry, was manned around the clock by security experts from around 20 government agencies, including Europol and Interpol. Each of them operated their own communications network and, in some cases, established special units to monitor activities during the games. The Federal Office of Criminal Investigation, for instance, had a unit watching out for possible terrorist attacks.

In addition, each stadium was equipped with 23 HDTV cameras and connected via dual fiber optic links to a super high-speed backbone capable of transporting data at speeds up to 480Gbps.

Web services will also play a big role in South Africa. "A big difference between the German World Cup and the previous tournaments was our extensive use of Web services," said Meyer. "And the big difference between the German games and the South African games will be our efforts to have everything based on Web services. The Internet is going to be bigger than it already is in our operations."


Illustration by Sasi Bhaskar


Page 9: July 15 2006

World Cup Trendlines

SECURITY | Robots Patrol Berlin Football Stadium
Robots had a heyday in Germany. While one group of robots completed a World Cup in Bremen, another was diligently patrolling Berlin's Olympic Stadium, one of 12 World Cup venues.

Germany won the RoboCup 2006 championship. The RoboCup’s goal has been to develop a team of autonomous humanoid robots that can win against a human World Cup champion team by 2050. With RoboCup over, robot enthusiasts shifted their attention to another group of robots busy protecting the historical Berlin stadium.

Eleven robots patrolled the stadium every night. The robots, built and operated by Robowatch Technologies GmbH in Berlin, were on contract to provide security services at stadiums in Berlin, Frankfurt and Leipzig.

One group of robots was programmed for outdoor surveillance. With the help of Global Positioning System technology, they patrolled the outer area of the Berlin stadium. The robots communicated with the control center via 3G mobile technology, using 3G cards, which connected to a dedicated base station in the stadium. All data was encrypted.

The robots were equipped with video cameras, radar sensors, temperature gauges and infrared scanners. Camera heads on the robots can turn in all directions and can be controlled remotely.

"If a robot registered a change, like a hole in a fence, it sent an alert to the control center," said Stengl. "Radar sensors can also detect human bodies through most walls."

The outdoor robots can also be equipped with technology to detect alpha, beta and gamma rays, as well as biological weapons. The robot software system is based on the open source Linux operating system.


INNOVATION | Great Play on Small Screens
Among the list of technology firsts at the World Cup was the use of a system that could format action to fit on mobile phone displays.

For the first time at the global sports event, near-live video clips for mobile phones were being produced with a technology known as ‘pan and scan’, initially developed to adapt screen films to the smaller TV format, according to Brian Elliott, head of international broadcast operations at Host Broadcast Services (HBS).

It works like this: production editors select footage and use pan-and-scan technology to zoom in and produce a clip that is exciting and relevant for small screens. Adding to the experience is picture quality. Because the originating feed in the HDTV (high-definition television) digital format ensures that every part of the 16:9 formatted picture is of high quality, any selections of that picture are guaranteed to be equally clear.

The edited clip, typically four minutes in length, is encoded and stored on a central file server, which licensees, such as mobile phone operators, can transport to their home countries either via dedicated data connections or as a secure FTP download from the Internet. Operators can stream the clips to customers over their mobile networks.

T-Mobile International struck a deal to stream games live to customers with phones using high-speed connections. In addition to streaming, several service providers offered broadcast mobile TV services, which send signals from TV stations directly to mobile phones equipped with special antennas.

A report from the market research arm of Informa Plc projects up to Rs 1,350 crore in revenue from fans watching streamed or broadcast coverage of the World Cup games on their phones.
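To make the workflow concrete, here is a minimal, illustrative sketch of the clip pipeline in Python. The step order and the roughly four-minute clip length come from the description above; the type, function and field names are hypothetical, not HBS's actual system.

```python
# Illustrative sketch of the pan-and-scan clip workflow described above.
from dataclasses import dataclass

@dataclass
class MobileClip:
    match: str
    duration_minutes: float      # "typically four minutes in length"
    source_format: str           # produced from the 16:9 HDTV feed
    delivery: str = "pending"

def produce_clip(match: str) -> MobileClip:
    # Editors select footage, then pan-and-scan zooms into the action so it
    # stays legible on a small screen; the result is encoded as a short clip.
    return MobileClip(match=match, duration_minutes=4.0, source_format="HDTV 16:9")

def distribute(clip: MobileClip, via: str = "secure FTP") -> MobileClip:
    # The encoded clip sits on a central file server; licensees pull it over a
    # dedicated data connection or secure FTP, then stream it to subscribers.
    clip.delivery = via
    return clip

print(distribute(produce_clip("Final, Berlin")))
```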


Imaging by Binesh Sreedharan
Imaging by Anil T

Page 10: July 15 2006

Trendlines

BOOK REVIEW | Why Technology Fails
A skeptical view of new technology.

Most business books have one big idea that the author draws out for 200 pages. Usually, the idea can be explained in a review, and the book itself, colored with vague examples, is best to skim.

Pip Coburn’s The Change Function: Why Some Technologies Take Off and Others Crash and Burn is different. Yes, he has a big idea — that you can predict which technologies will succeed or fail by applying a simple formula — and this idea is covered in the first 10 pages. (And yes, you’ll understand the idea by the end of this review.) But what sets this book apart from so many others is that the rest of it is worth reading. Coburn fills his book with detailed case studies, gleaned from his years as managing director of the technology group at UBS Investment Research, that illustrate his formula. And thanks to the detail, the cases actually teach you something.

The core of Coburn’s formula is that a new technology should be widely adopted only if it meets two criteria. First, it has to address a problem and, second, that problem has to be more painful than the perceived pain of adopting the new technology. “We need to balance our wonderment at technology’s role in creating nirvana with a skepticism about business models,” writes Coburn, now head of his own investment company, Coburn Ventures. The picture-phone, for example, which AT&T pushed from the 1960s to the 1980s, failed because the need to see the person you’re talking to isn’t a big enough problem to justify buying and learning how to use an expensive new phone system.

That’s a lesson that should hit home for CIOs. Applying Coburn’s insight, CIOs should force IT projects through a gauntlet of questions, asking not only whether a particular technology will solve a problem but also whether that problem is one that users are desperate enough to do something about.
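As a sketch of how that gauntlet might look, the adoption test described in the review reduces to a single comparison. The scoring scale and threshold below are illustrative assumptions, not Coburn's published formula.

```python
# Minimal sketch of the adoption test described above: a technology is predicted
# to take off only when the pain of the problem it addresses outweighs the
# perceived pain of adopting it. The numeric scores are illustrative.

def likely_to_be_adopted(problem_pain: float, perceived_adoption_pain: float) -> bool:
    """Return True if the change function favors adoption."""
    return problem_pain > perceived_adoption_pain

# The picture-phone example from the review: seeing the caller was a minor
# problem, while buying and learning an expensive new phone system carried
# high perceived pain, so adoption fails.
print(likely_to_be_adopted(problem_pain=2.0, perceived_adoption_pain=8.0))  # False
```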

— By Ben Worthen

The Change Function: Why Some Technologies Take Off and Others Crash and Burn | By Pip Coburn | Portfolio, 2006, Rs 1,347.50

MANAGEMENT REPORT | User-Friendly IT Governance
Cobit 4.0 offers CIOs more details than its predecessors on how to measure whether IT processes are delivering what the business needs.

A new version of Control Objectives for Information and related Technology (Cobit) is better organized than its predecessors, and provides clearer links between IT processes and business goals, says Craig Symons, an analyst at Forrester Research. The improvements make this IT governance tool something that CIOs should seriously consider using, he adds.

Issued by the IT Governance Institute (ITGI), Cobit is a set of guidelines that IT organizations can use to employ management best practices, measure IT processes, and align IT with business processes. IT departments can use this tool to measure their value to business, as well as comply with regulations such as Sarbanes-Oxley.

Yet, less than half of the CIOs in the financial services industry, where Cobit is most popular, are even aware of the guidelines, according to ITGI’s own assessment. The reason: since it was created in 1996, Cobit has expanded to cover so many control objectives and management guidelines that it’s difficult to make sense of them. A Cobit primer issued by the Sandia National Laboratories in June 2005 lamented: “Of the possible objectives, on which do you spend the effort, and which do you ignore?”

Answering that question has become much easier, Symons says, thanks to Cobit 4.0. The authors have done away with Cobit’s multiple volumes, integrating the information about all 34 high-level control processes, 239 detailed control objectives and related management guidelines into one volume. What’s more, the material is organized by how one approaches projects: first, plan and organize, next, acquire and implement, then deliver and support, and finally, monitor and evaluate.

In addition, Symons says, Cobit 4.0 offers more details on how to measure whether IT processes are delivering what the business needs. For example, under the heading ‘defining a strategic plan’ (one of the 34 high-level processes), Cobit outlines how to do that: engage executives on alignment with business goals and develop a proactive process to quantify business requirements. Cobit 4.0 is available at www.itgi.org.

— By Allan Holmes


Page 11: July 15 2006

TELEPHONY | Five Steps to VoIP Success
Sage Research recently announced the winners of a contest recognizing organizations that have successfully rolled out voice over IP (VoIP) systems. Here is advice from the top practitioners.

1. Do research. Talk with CIOs who have rolled out VoIP about their experience before you call in vendors.

2. Set clear expectations. Explain to users what the new system will — and won’t — be able to handle. “Nothing causes a problem like planning a simple install and discovering that the upper management was expecting all the bells and whistles,” says one IT manager.

3. Know your network. Gather all documentation for your company’s network infrastructure, so that whoever designs the new system has all the specifications of the current IP network. Winners cited network stress tests and bandwidth tests as important planning tools — and noted the importance of upgrading network equipment before rolling out VoIP to guard against failures (a rough capacity check of this kind is sketched after this list).

4. Outsource development. Or not. Companies that hired outsiders to design and implement a VoIP system usually lacked the internal expertise to do it themselves. Conversely, organizations that kept design and deployment in-house claimed that it’s now easier to support and maintain their systems because their staff knows the current infrastructure better.

5. Train users before the rollout. Give employees time to become familiar with their new phone’s functionality before they actually need to rely on it for everyday work.
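Here is the kind of rough capacity check step 3 alludes to, written as a short Python sketch. It is not from the Sage Research winners; the 100 kbps-per-call figure is a common planning rule of thumb for G.711 voice with packet overhead, and the safety margin is an assumption; real numbers should come from your own stress and bandwidth tests.

```python
# Rough pre-rollout check: can a link absorb the planned VoIP load?
def voip_headroom_ok(concurrent_calls: int,
                     link_capacity_mbps: float,
                     existing_peak_utilization: float,
                     kbps_per_call: float = 100.0,   # assumed G.711-with-overhead rule of thumb
                     safety_margin: float = 0.25) -> bool:
    """Return True if the voice load fits in the link's remaining headroom."""
    voice_load_mbps = concurrent_calls * kbps_per_call / 1000.0
    free_capacity_mbps = link_capacity_mbps * (1.0 - existing_peak_utilization)
    return voice_load_mbps <= free_capacity_mbps * (1.0 - safety_margin)

# Example: 80 concurrent calls on a 100 Mbps uplink already 60 percent utilized at peak.
print(voip_headroom_ok(80, 100.0, 0.60))  # True: 8 Mbps of voice vs ~30 Mbps of usable headroom
```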

— By Thomas Wailgum


Page 12: July 15 2006

Peer to Peer | Dr. Catherine Aczel Boivie

Red Light, Green Light
How one CIO used project management discipline and the Traffic Light Report to align her IT department with her company's business goals.

When I joined Pacific Blue Cross in 2003 as vice president of IT, the CEO and I agreed on two foundational principles: one, technology has no value by itself, and two, technology management must switch its focus from operational to business enabler.

These principles may seem self-evident, but the truth is that when there's a flurry of projects, all of them important to some aspect of the business, technology management can all too easily get swept away in putting out fires. This seemed to be what was happening at Pacific Blue Cross when I arrived. With nearly 2 million members covered, Pacific Blue Cross is the market leader in providing health-care and dental coverage to residents of British Columbia. Our subsidiary, Blue Cross Life, also offers life insurance and disability income protection.

While I understood my mission — turning the IT department into an enabler of business — the journey has been far from straightforward. It’s been a long road with many bends and even a few dead-ends. Even so, there’s no doubt we’re making progress. How did we do it?

Project Management to the Rescue
First and foremost, we began to align every project to Pacific Blue Cross's Balanced Scorecard. The Scorecard shows and measures the organization's performance from six perspectives: quality, quantity, infrastructure, clients, people and community-related goals. Every project is now justified in terms of how it supports the goals described in the Scorecard. That keeps the company's goals clearly in sight for all and shows how technology relates to and enables the business.

Illustration by MM Shanith

Page 13: July 15 2006

After assembling a list of all the projects we were working on, I introduced the project management office (PMO) function. This office oversees all projects of more than one month's effort — from the business case to a post-implementation review. We fashioned this as a corporate-wide PMO, because all projects require disciplined management and almost all projects at Pacific Blue Cross have a technology component.

The business welcomed the PMO, since it gave it an overall view of all projects (in the planning, execution or close-out stages) as well as monthly updates on their status.

To ensure the success of this new process, the PMO (a manager, three project managers and two-to-five contract project managers, depending on the project mix) conducted three half-day workshops for personnel who would be managing or sponsoring projects. These sessions helped to obtain buy-in. But as theory is nothing without practice, the workshops were followed by individual coaching sessions for the project managers. Finally, we put all the project management–related processes online so that everyone has a shared knowledge base.

The PMO regularly reports project status to IT management, the executive committee and the board of directors through the aptly named Traffic Light Report. This report lists each project along with a short description, its schedule, the stage it is at and a status comment. Next to the project is a red, yellow or green symbol that quickly identifies whether the project is on time, on budget and on scope. The report is also posted on our intranet so that all employees can follow a particular project’s progress. The Traffic Light Report has become a critical tool for demonstrating technology’s value to the business.
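For readers who want to see the mechanics, here is an illustrative model of such a report in Python. The field names and the rule that one miss turns a project yellow while two or more turn it red are my reading of the column, not Pacific Blue Cross's actual report format.

```python
# Illustrative model of a Traffic Light Report entry as described in the column.
from dataclasses import dataclass

@dataclass
class ProjectStatus:
    name: str
    stage: str            # planning, execution or close-out
    on_time: bool
    on_budget: bool
    on_scope: bool
    comment: str = ""

    def light(self) -> str:
        """Roll time, budget and scope up into a single traffic-light colour."""
        misses = [self.on_time, self.on_budget, self.on_scope].count(False)
        if misses == 0:
            return "green"
        return "yellow" if misses == 1 else "red"

report = [
    ProjectStatus("Member portal usage data", "execution",
                  on_time=True, on_budget=True, on_scope=False,
                  comment="Scope under review"),
]
for project in report:
    print(f"{project.light():6} {project.name} ({project.stage}) {project.comment}")
```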

Gate 1, Gate 2, Gate 3...
In my second year at PBC, I introduced IT governance — in the form of a gating process. Why wait until the second year? Because past experience has taught me that too much change, introduced too quickly, does more harm than good. Circuit overload may cause pushback!

Now, before a project can even get to the doorstep of the approval process, it has to be sponsored by a vice president. Only then is it ready for gate 1 — or what we call the ‘thumbs up/down’ gate. Here, the executive sponsor presents the idea to the executive committee, and the members give it a thumbs up or thumbs down. If the project is approved, the proposal continues to the next level (gate 2).

For gate 2, the sponsor presents a detailed cost-benefit analysis, because no matter how wonderful the idea, if the cost is too high for the projected benefits, that's it. If it passes gate 2 and is more than $500,000, a gate 3 or detailed business case is prepared. Gate 4 is only for projects that have to be reviewed at the executive level because they cost more than $1 million or are very complex.


Page 14: July 15 2006

Gate 5 is the post-implementation review.

Last year, for example, our senior VP of client development presented a gate 1 concept of adding dental and extended health usage information to our member portal. The executive committee gave his idea the thumbs up. The PMO then helped him to develop a gate 2 high-level business case, which was also approved. As a result, our members can now obtain information online about their coverage usage, thus reducing the number of calls to our call center.

Through our ‘gating governance,’ everyone can see how projects are prioritized and approved. We’re able to plan and measure benefits of projects and assess how they enable our business initiatives.
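Read literally, the gating rules above can be sketched as a simple decision function. The thresholds and the assumption that every project ends with a gate 5 review are my reading of the column; the actual policy is Pacific Blue Cross's.

```python
# Illustrative sketch of the gating process as described in the column.
def gates_for(cost_usd: float, very_complex: bool = False) -> list:
    gates = ["Gate 1: executive thumbs up/down",
             "Gate 2: cost-benefit analysis"]
    if cost_usd > 500_000:
        gates.append("Gate 3: detailed business case")
    if cost_usd > 1_000_000 or very_complex:
        gates.append("Gate 4: executive-level review")
    gates.append("Gate 5: post-implementation review")
    return gates

print(gates_for(750_000))     # gates 1, 2, 3 and 5
print(gates_for(2_000_000))   # all five gates
```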

Even though projects of more than a month’s effort are now overseen by the PMO, projects of smaller effort also need to be kept aligned with business needs. To do that, we established the change review board. Headed by a manager from the business area, this board reviews all change requests, assigning priorities based on how the change will enable the business. Three years ago, IT was swamped with more than 700 change requests and there was not much hope we’d get to all of them. So we asked all owners of change requests to re-submit any requests that were over a year old. With the change review board prioritizing the requests, we’re now able to see which are the most pressing, which ones overlap and which will be superseded by some that are more encompassing.

In three years we’ve come a long way, and these new initiatives would not have succeeded without the active involvement of senior management and the IT team’s hard work. But we still face a number of challenges. For instance, we still need to do a better job of conducting regular postmortems of larger projects to gather lessons learned. And even when we do gather lessons learned on an ad hoc basis, we still aren’t disseminating them to the appropriate personnel, so they can learn from previous experiences. Some of the new technology we are implementing — portals, document management and knowledge management — should help with this.

The road to Rome wasn’t built in a day. But there’s one thing we’re confident of: the framework we’ve put in place not only ensures that IT is already more focused on business, but that focus is advancing our business goals. In short, IT is enabling all the employees of Pacific Blue Cross to serve our members better. CIO

Dr. Catherine Aczel Boivie is senior VP of IT at Pacific Blue Cross. She is also the founding chair of the CIO Association of Canada. Send feedback on this column to [email protected]

Web Exclusive

Features

The Web Lights Up Your World | Internet users say the Web has improved their efficiency at the workplace, and beyond.

Recruitment Rethink for IT | Are companies investing enough in business training of recruits in IT teams?

Whipping IT Data Into Shape | Enterprises are tackling the ugly problem of reconciling widely-distributed data, driven in part by the move to service-oriented architecture.

Read more of such web exclusive features at www.cio.in/features

Columns

Hitting to All Fields | How do you move into a new industry without taking a backward step?

Radical Reform | The IT promise: sustainable competitive advantage.

Read more of such web exclusive columns at www.cio.in/columns

Resources

Complete Data Protection Strategy | Building a robust data protection strategy is now a business requirement.

IT Consolidation Drivers and Benefits | Organizations are finding themselves in a position where consolidation does not necessarily

Download more web exclusive whitepapers from www.cio.in/resource


Page 15: July 15 2006

Executive Coach | Susan Cramm

IT’s Good News
CIOs need to help their staffers understand that if they can hold on during the tough times, the payoff is just around the corner.

One of the fundamental jobs of a leader is to paint an exciting, positive view of the future that connects to the emotional concerns of his or her staff. This task is particularly critical for CIOs now, as the stress on their departments intensifies with the business’ hunger for IT services appearing to be bottomless even as it continues to stipulate that IT control its costs. Adding to the demands on the CIO’s staff is the growing technical sophistication of their internal business partners, intensified competition from external service providers and the increasing trend toward the commoditization of IT processes, jobs and software.

Mark Walton, former CNN chief White House correspondent, writes in his book, Generating Buy-In: Mastering the Language of Leadership, that “stories are the language of our mind”. The stories that have the greatest impact — on our thinking, our emotions and, ultimately, our actions — are stories “that project a positive future.” The leader’s challenge is to “connect the dots between the future you want and the future your audience wants” by:

being clear about what you want your audience to do

describing the positive future your audience should see

illustrating how that future will fulfill their needs, wants and goals

asking for commitment and taking the first steps toward bringing about the future you want.

In June's column (The Worst Job in IT), I challenged readers to begin crafting a story about how IT will exceed the expectations of the enterprise while ensuring the success and satisfaction of the IT staff. I truly believe that IT is entering a new stage of maturity where it will be easier for IT professionals to do their jobs without the fear, overload and confusion that exists today.

Illustration by Sasi Bhaskar

Page 16: July 15 2006

IT has always been a difficult profession. At first, business partners were completely dependent on IT and there was, in truth, very little IT could deliver due to limitations in technology and IT’s necessary focus on delivering foundational transaction systems. Then, as PCs and client/server computing became prevalent, IT’s frustrated business partners tried to address their own needs through the use of ‘end user tools’ without involving IT. So, IT found itself either fighting for control of systems (and people) that had become enterprise critical or being held responsible for poorly performing user projects and systems.

Then, as the promise of the Internet and fears of Y2K generated unprecedented demand, the IT budget and organization ballooned. Not coincidentally, systems such as ERP were implemented that either were not ready for prime-time or ended up overwhelming the organization’s capabilities and finances. As a result, IT’s reputation in the organization suffered, and it was forced to retreat to try to figure out how to satisfy business’ demands by finding efficiencies within core operating costs. But even during this retreat, the importance of managing IT as an enterprise asset and capability became obvious to every layer of the organization. Ultimately, this gave birth to healthy forms of interdependence, i.e., governance and roles, that mirrored practices found in other, more mature areas of the business.

Evolving Into Innovation Experts
In the future, the interdependency of IT and business will mean that IT is no longer responsible for the delivery of IT while the business sits on the sideline waiting to judge the outcome. In this future, IT will be accountable for ensuring that IT is done well. I call this enabling IT, which requires creating relationships, roles, processes and an infrastructure that helps the business satisfy its day-to-day needs without involving IT. This means that IT:

facilitates appropriate decision making to protect the interests of the enterprise

defines data, business services, architectural guidelines and technology standards

develops enabling infrastructure, tools, processes and support resources

educates and coaches users and provides resources so that the business can manage projects and change, determine necessary functionality, and develop and deploy systems

provides development and operational resources and services on demand — both staff and technology — in conjunction with external suppliers.

In the future, IT professionals will become innovation experts by combining technology savvy, business knowledge, management discipline and the ability to play well with others. Those with a heavier technical orientation will focus on defining architecture, and designing and developing infrastructure and enabling platforms. Professionals with a strong business orientation will focus on collaborating with the business on strategy, governance, and project and service delivery. Individuals with stronger management discipline will specialize in overseeing technology development, service and operational delivery, resource management and risk management.

The Story You Tell
The enabling IT model will address concerns about job security, the hierarchy of technical skills and the resources squeeze. It's undeniably true that commodity IT jobs will be outsourced. But IT jobs and roles that touch on innovation will not be outsourced because they demand the ability to comprehend the connections among technology, data and business processes, and the ability to understand the connections between people and how work gets done within the organization. In this brave new world, IT's influence in a company will increase. Paradoxically, by giving up control over technology delivery, IT will gain authority as it can no longer be blamed for being a bottleneck to technical innovation. With the business doing (and paying for) daily development effort, much of the variable demand will be external to the IT team, allowing IT to plan its work and ensure adequate funding.

All this means that there will be an incredible demand for innovation experts, and it will only be satisfied if the talented professionals currently in place adopt lifelong learning as their mantra. Learning can occur on the job and, in some cases, in the classroom, but now more than ever IT professionals need to take a hard look in the mirror, assess their skill sets, and then reach out for the learning opportunities that will expand their capabilities. The projected slowdown in labor growth will play to the advantage of those professionals who possess innovation-expert-type skills. In the future, these people will be able to write their own tickets, personally, professionally and financially.

The future of IT in your organization depends on your ability to communicate this story to your staff and have this model embraced by your company. If your organization doesn’t understand already that today’s pain is for tomorrow’s gain, get busy writing your story. It will ensure that IT’s potential will finally be realized in the real sense. CIO

Susan Cramm is founder and president of Valuedance, an executive coaching firm. Send feedback on this column to [email protected]


Page 17: July 15 2006


Page 18: July 15 2006

Reader ROI:

What it takes to be an early adopter of technology

How to deal with legacy

Why business expectations need to be tempered

Sajid Ahmed, global head-infrastructure, Syntel, has designed a 10G network at the company's Pune facility.


Page 19: July 15 2006

Cover Story | Network Infrastructure

A Cable Fable
At first glance, 10G Ethernet would seem to be driven by bandwidth requirements. But a closer look reveals the need for network infrastructure to be designed for the future — especially for global enterprises and those rooted in R&D.
By Gunjan Trivedi

“How much bandwidth is really enough?” It’s the conundrum that has haunted IT managers ever since networking guru Robert Metcalfe dreamed up Ethernet — a ‘fast’ way of connecting hundreds of terminals across a whole building to minicomputers, while bringing in print and file-sharing capabilities.

The original Ethernet LAN sent 2.94 Megabits per second (roughly this paragraph) of data over thick coaxial cable at a distance of one kilometer. That’s evolved over decades with the present age Ethernet transferring data at a blazing 100 Gigabits per second.


Imaging by Binesh Sreedharan | Photo by Srivatsa Shandilya

Page 20: July 15 2006

Sounds like magic, doesn’t it? But ask CIOs investing in upcoming technologies to future-proof their enterprise, and they'll say it’s anything but enchanting. They have to grapple with issues of obsolescence, commercially viable technology, costing, and convincing management of the need to go with the cutting-edge stuff rather than the mature solution approach.

Invigorate the Backbone
Sajid Ahmed faced a similar dilemma eight months ago when he began to design a network for IT services company Syntel's Global Development Center in Pune. The management's directive to Ahmed, the global head-infrastructure of Syntel, was straightforward: build a state-of-the-art facility that spans 40-odd acres and can house around 8,000 engineers. The campus also had to double up as a customer-centric and an employee-friendly facility.

Ahmed began the arduous task by taking a top-down approach. First, anticipate the streams of applications that the campus will deploy. Then, examine the business case for the networking technology slated to come in. And, when convinced, rein it in.

Cometh the hour, Ahmed was crystal clear about Syntel’s requirements and what it wanted to offer. It was the process of designing the switching and cabling infrastructure of the campus that turned out to be time-intensive as he came to confront a critical question: Do I choose the established technology — or risk adopting the latest?

“We are not a research-based or a media company. We are primarily a provider of custom outsourcing solutions for IT application development. For us, our data warehousing activities come closest to choking the bandwidth in our organization,” he says. Still, bandwidth wasn’t Ahmed’s only parameter at that point. Though 100Mbps would have sufficed for the traffic coming in from the access layers, he was sure he didn’t want to lay a network that was running anything less than 1Gb Ethernet at the aggregation or distribution layers. “The reason was simple: future proofing,” he says.

A debate ensued, and Syntel finally standardized its campus backbone interconnecting seven buildings with the latest — fiber-modular 10Gb Ethernet (10GbE).

The 10GbE standard surfaced a couple of years ago. If adopted, it was bound to disrupt existing infrastructure and force enterprises to rip-and-replace. Initially, copper cabling proved to be a deterrent to 10GbE, and vendors could only offer the technology on expensive fiber. This kept the standard out of the reach of many businesses. Of late though, the 10GbE volcano has begun to smolder. Vendors are now fast coming out with increasingly affordable products to support the technology that has begun to stabilize.

For an enterprise CIO, the big question surrounding 10GbE now is whether 10GbE is really stable. Loads of man-hours spent tweaking switches, network cards, drivers and the like, only to get the most out of a 10-Gig pipe, simply isn't good enough.

"I would rather have the latest technology for my organization than sit on something old."
— Raj Katari, head-IT, Nagarjuna Fertilizers

Photo by Suresh

Page 21: July 15 2006

CIOs want to know if the solution works. Do they get 10GbE performance simply by powering up? After that, they need to worry about easy integration into an existing architecture and day-to-day management tools — and they certainly have budget concerns. But most of all, they want to know what's in it for them besides just a really fat pipe. What's 10GbE really good for that straight Gigabit Ethernet doesn't already give them?

For Syntel, it was the scaling-up options 10GbE provided, apart from superior network performance. As of today, its campus has seven buildings, three of which are up and working. Each building has eight wings that cater to 120 concurrent users, and each wing, in turn, has two 10Gbps uplinks to the core layer. With Ethernet running on full duplex, each uplink port has 20Gbps of bandwidth. In effect, the 120 users in each wing have access to a whopping wire-speed of 40Gbps.
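The uplink arithmetic is easy to check. The figures below (two 10Gbps full-duplex uplinks per wing, 120 concurrent users) come from the article; the per-user share is a derived illustration, not a number Syntel quotes.

```python
# Back-of-envelope check of the wing uplink capacity quoted above.
UPLINKS_PER_WING = 2
LINK_RATE_GBPS = 10          # 10GbE
FULL_DUPLEX_FACTOR = 2       # send and receive counted together, as the article does
USERS_PER_WING = 120

wing_capacity_gbps = UPLINKS_PER_WING * LINK_RATE_GBPS * FULL_DUPLEX_FACTOR
per_user_mbps = wing_capacity_gbps * 1000 / USERS_PER_WING

print(f"Aggregate wing uplink capacity: {wing_capacity_gbps} Gbps")    # 40 Gbps
print(f"Nominal share per concurrent user: {per_user_mbps:.0f} Mbps")  # ~333 Mbps
```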

What Lies Beneath

Considering the distance between buildings in the campus, optic fiber was the natural choice. While designing the core backbone, Ahmed did think of copper cabling. "But we realized that there was a huge distance restriction (copper cannot go more than 55 meters at one go), and our backbone is slated to run for several kilometers within the facility. So, our vote for optic fiber connectivity was clear," he recalls.
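
The decision reduces to a distance rule of thumb. A small, hypothetical sketch of that rule follows, using the 55-meter copper limit Ahmed cites; real designs would also weigh cost, connector types and patching.

```python
# Illustrative sketch of the distance rule of thumb described above.
# The 55-meter limit for 10Gbps over the copper of the day is the figure
# Ahmed cites; everything else here is an assumption for illustration.

COPPER_10G_LIMIT_M = 55  # per the quote above

def pick_medium(run_length_m: float) -> str:
    """Return a cabling suggestion for a given run length."""
    if run_length_m <= COPPER_10G_LIMIT_M:
        return "copper (Cat6-class UTP)"
    return "optic fiber (single mode for campus distances)"

for run in (30, 55, 800, 7000):  # e.g. datacenter row, wing, campus, plant backbone
    print(f"{run:>5} m -> {pick_medium(run)}")
```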

Further, Syntel opted to deploy single mode fiber (SMF) cables instead of the cheaper multi mode fiber (MMF). With Ethernet networking evolving so fast, when it comes to local area networking it isn't just the cabling but even the inter-networking electronics that need to comply with upcoming bandwidth standards. "The last thing we want is to realize in the near future that we need to scale up to a faster bandwidth standard. Re-laying the cables would be suicidal," says Ahmed. The stakes were high, and it perhaps explains why Ahmed devoted more time to switching and cabling than to any other aspect of the network infrastructure.

Being an IT services company, Syntel is not a Tier-1 bandwidth aggregation point and doesn't need to deploy end-to-end 10Gb Ethernet, especially to the desktops. "Though 10GbE cards are now available, a single card is as expensive as a workstation. So, having 10GbE cards on the workstations did not make a viable business case for us. We were still confident that 100Mbps at the desktops will continue to rule the roost for at least two years," says Ahmed.

For the core backbone, Ahmed was keen to have a single, consistent network design. Interestingly though, he opted for a combination of SMF and Cat6 UTP (Unshielded Twisted Pair) in the 10,000-sq ft datacenter. He anticipates Syntel getting into R&D-based projects in the foreseeable future. When that happens, the organization would need to scale up to 10Gbps over copper across the datacenter floor.

"This is why we have stuck to Cat6 despite a cost difference of 18 to 20 percent between Cat5 and Cat6 cable. Within the floor, everything is Cat6. But between the core switches or between the core and distribution layer of networking, it is all 10GbE," he explains.

Ahmed's philosophy of core design is inspired by a home truth in manufacturing circles: too much inventory is always too bad. From an inventory standpoint, he wants to keep the campus design systematic and uncomplicated, and not have his team face the blues managing multiple platforms. "Since we are not a core R&D company, we did not want to invest in technologies that are still vague. There are inherent design and implementation challenges with the 10GbE-over-copper standard and newer UTP cabling technologies that theoretically enable enterprises to run 10Gbps over 300 meters," he says.

Even while choosing between Cat6 and Cat6A, Ahmed decided to stick to the established standard to avoid both the technological challenges and a cost premium of at least 30 percent. "Cable engineering is fundamental to design, and we have to be vigilant to the design issues. Even Cat6 cabling becomes wacky when deployed at angles of more than 45 degrees," he points out.

Resist GbE: Gartner Analyst

Globally, enterprise IT and network professionals will toss away more than Rs 45,000 crore on Gigabit Ethernet LAN gear over the next two years that would be better spent on technologies to support increasingly distributed workforces, says Gartner vice president Mark Fabbi.

"The majority of network designers continue to be caught in traditional design practices," says Fabbi. "They continue to spend money on bigger and faster core networking technologies at important locations that don't actually serve the user population."

Corporate applications, even videoconferencing and VoIP, do not require more than a few hundred kilobits per second of bandwidth, he points out. "Astute network managers will focus on the upper layers of the stack, and look to security, data control, application optimization and mobility services as key features that will benefit the organization far more than installing Gb Ethernet for all desktops."

From a cost standpoint, the Gigabit option is complex. Application usage, the form factor of the products and the medium of the wiring all contribute to the cost of the technology, according to analysts and users. Averaging out the entire industry, the cost of a Gigabit port was 80 percent to 300 percent of the price of a Fast Ethernet port in 2005.

Still, Gigabit modular ports outsold Fast Ethernet modular ports by 50 percent in 2005. "It's no coincidence that large businesses have adopted modular Gigabit in chassis switches," says Seamus Crehan, an analyst with the Dell'Oro Group. "Generally, large networks tend to have chassis all the way out to the wiring closets, and enterprises future-proof more and have a greater need for bandwidth."

By Phil Hochmuth and Jim Duffy


Through the past eight months, Ahmed has been keen to limit overall utilization of the network to 45 percent to enable a choke-free backbone. In this context, a 'bandwidth-hungry' component of the management's brief to him has posed a challenge: an IP-ready facility to enable concurrent streams of data and voice over the network. What Ahmed's team has done is identify all the applications to be run at the campus and gauge the bandwidth requirements when the systems are fully loaded. "Since we are not sure where certain apps or technologies are going to be housed at the campus, we have opted to go in for open and interoperable networking standards to accommodate future technologies," he says.
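
The 45 percent ceiling amounts to a simple headroom check. The sketch below assumes hypothetical application loads (the names and numbers are invented); only the 45 percent target and the 10Gbps uplink speed come from the article.

```python
# Minimal sketch of the 45 percent utilization ceiling described above.
# The application names and per-app loads are hypothetical; only the
# 45 percent target and the 10Gbps uplink speed come from the article.

UPLINK_GBPS = 10.0
TARGET_UTILIZATION = 0.45

# Hypothetical fully-loaded demand per application class, in Gbps.
estimated_load_gbps = {
    "data warehousing": 1.8,
    "voice over IP": 0.3,
    "file and mail services": 0.9,
    "development traffic": 1.2,
}

total = sum(estimated_load_gbps.values())
ceiling = UPLINK_GBPS * TARGET_UTILIZATION
print(f"Projected load: {total:.1f} Gbps, ceiling: {ceiling:.1f} Gbps")
print("Within budget" if total <= ceiling else "Over budget: add uplinks or trim apps")
```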

Globally, network professionals agree that, given the small premium for upgrading to Gigabit in specific cases, the purchase is worth it even if the bandwidth isn't being used. For instance, at the First American National Bank of Texas, a regional bank with 30 locations in North Texas, almost half the switch ports deployed are 10/100/1000 Megabits per second (Mbps), and almost 60 percent of the Dell desktops have Gigabit Ethernet network interface cards built in.

“I wouldn’t consider it overengineering the network,” says Kurt Paige, network administrator for the bank. “I consider it staying on top of the technology. If we’re going to buy a piece of equipment and we can get a 10/100/1000 Mbps port for only a little more, we’ll go with the newer switch, even if the speed may not be used.”

At Syntel, Ahmed has kept the design quite modular to accommodate changes without disturbing the core. "Even if Cat6 evolves tomorrow to effectively run 10Gbps, we can easily upgrade the cables at the datacenter. The backbone on single mode fiber does not even have to be touched," he explains.

Testing The Waters

Nagarjuna Fertilizers, a leading manufacturer and supplier of plant nutrients in India, took the 10GbE plunge at around the same time as Syntel, but for a different reason. Unlike Syntel, whose primary objective was to apply 10GbE to future-proof its facility, Nagarjuna Fertilizers has an ingrained culture of furthering its R&D capabilities. With 10GbE, it sought to get a technological head start.

Belonging to a three-decade-old industrial group, Nagarjuna Fertilizers has evolved phenomenally in several areas except IT infrastructure. The 10GbE standard would provide the organization a platform to leap to future technologies, believed Raj Katari, its head-IT.

Running on a seven-year-old networking infrastructure that offered 10/100 Mbps connectivity, Nagarjuna Fertilizers is now getting into overdrive. The organization has decided to progressively revamp its infrastructure, either replacing the network in a phased manner wherever possible or bringing up a new structure from scratch.

"Being in the fertilizer industry, the organization does have quite a domestic and bureaucratic way of doing business. Nevertheless, as far as IT and MIS are concerned, the company has now got fairly aggressive," says Katari.

How Fiber Stole Copper's Groove

Companies in India believe optic fiber cable (OFC) is vastly superior to copper as a cabling link to run 10 Gigabit Ethernet. "OFC has better flexibility and capability to be used over larger distances, and so it is the preferred technology," says Chandra Kopparapu, VP (Asia Pacific) at Foundry Networks.

Dileep Kumar, product manager-enterprise, ADC Krone, points out that single mode fiber and OM3 multimode fiber have been deployed for 10G backbone connections in India for the last three to four years. "Single mode is preferred for 10G campus backbone (building to building) connectivity and OM3 multimode is preferred for vertical backbone (floor to floor) connectivity," he explains.

And fiber is the way forward, believes Varghese M. Thomas, spokesperson of Cisco Systems. "10G Ethernet is used to connect distribution switches to the core in campus environments and in the core of metro Ethernet rollouts. Hence, most of the deployment is on OFC," he says.

One reason why copper isn't preferred as much as fiber might be the rising price of the metal. According to the London Metal Exchange, copper prices have tripled over the past four years, and risen more than 59 percent between January and May 2006. Internationally, this has caused several companies to contemplate OFC with renewed vigor. Kumar, however, feels that the price increase has a minimal impact on overall IT spending. Kopparapu has another point to make: copper has a relatively limited range.

So, is this the end of the road for copper in tandem with 10G Ethernet? No, he says. Once new technology comes in that makes it viable between 50 and 100 meters, it could become more attractive. Thomas adds: "The deployments of 10G Ethernet over copper are for connecting high-end servers to the network, to avoid the bandwidth choke when multiple clients try to access a server and the clients are on Gigabit connections."

By Balaji Narasimhan


This is evident in the company's IT budget for 2005-06, which increased by a dizzying 500 percent over the previous year to boost the revamp of its networking infrastructure. "Even this year, our budget saw an almost 80 percent increase, enabling us to scale up the infrastructure seamlessly as per our revamp roadmap," he says.

Most of its operational locations are at the company's Hyderabad corporate office and the manufacturing plant in Kakinada, Andhra Pradesh. With several buildings spread out across a vast distance, copper cabling would not have been enough, whereas optic fiber easily met the distance requirements. With fiber coming in, Nagarjuna Fertilizers decided to roll out 10GbE and scale up its backbone to run data traffic at 10Gbps.

But, the fertilizer manufacturer has a problem. With many buildings still harboring 10/100Mbps Ethernet, which can't be replaced now, the organization has to support both Ethernet standards. The switches that interconnect the buildings are dual port switches supporting both fiber optics and 10/100 Mbps Ethernet. Says Katari: "So, between the two buildings, the switches interconnect through fiber on 10Gbps, and, within the building, the Ethernet is on 10/100 Mbps Cat5e. Some buildings are also interconnecting using 1GbE. As per the latest network design, for any new location we add or any existing network we restructure, we are directly going in for 10GbE both as inter- and intra-building backbone."

In Hyderabad, Nagarjuna Fertilizers connects six buildings with an extended LAN running up to 2 km of fiber. At its Kakinada plant, it runs up to 7 km of fiber. There are a couple of locations where the organization has decided to extend 10Gbps all the way from the backbone to the desktops. At these locations, Nagarjuna Fertilizers has taken a step many would fear to take, and interconnected the workstations with 10GbE over copper.

The question arises: does its data traffic require 10GbE technology? Is there a real business case for it? Says Katari, "We went by the logic that the latest technology would be available more economically, and be easier to maintain, than one that is not so latest. Our experience has been that there is no more than a 10 to 15 percent price difference between the two."

Another reason why Nagarjuna Fertilizers was so keen on 10GbE was that the two locations that are completely on 10GbE are R&D centers. Though workstations on the operations side of the organization connect to the workstations at the R&D center on Fast Ethernet (100Mbps), in effect, the workstations within the R&D center talk to each other at 10Gbps. "We are known to be early adopters of technology and, in some cases, we are almost like test-beds for vendors. Our R&D is one of the unique sites for its implementation in India. We are running a little less than a kilometer of 10GbE over copper at the center, with a backup of wireless connectivity," says Katari.

The R&D content emanating from the centers comprises a lot of streaming information and data. The centers generate loads of content on a regular basis, which is further collated with more information from different sites across the world. Typically, this would constitute a lot of content-rich graphics and supporting data.


"We have a business case where we would be generating 1.5 to 2 terabytes of data every year. While a part of the relevant applications are already implemented and in use, much more is yet in the planning stage. We are required to put in the infrastructure to support these future requirements as well. The cost difference of the new infrastructure here had no more than 15 percent of variance from its predecessor. This proved to be a business case supporting our deployment of 10GbE," explains Katari.

In terms of operations, the organization would not really have needed to transfer so much data. But Katari didn't want to get stuck with a standard that would soon phase itself out and deter the enterprise from advancing to newer standards. "Today, my computing power requirement may not be more than what is offered by a 386 or 486 processor machine. But are these standards available today? No. The market is continuously pushing you out, and the ability to maintain is also getting pushed up further. So, I would rather have the latest than sit on something old," says Katari.

However, opinions on the extent of future-proofing are divided, especially in the US, where Gigabit Ethernet deployments and their utilization are relatively common. At the North Bronx Health Network in New York, LAN ports range from 10Mbps to 10Gbps. Extreme backbone switches link with 10G Ethernet, while some users have Gigabit links to view digital radiology images. But the majority of users still connect at 10/100Mbps, says Adorian Ignat, director of IT. "We have some 10/100/1000 Mbps ports to desktops but not very many right now," Ignat says. "If I don't need that bandwidth there's no sense in putting it in right now."

Currently, Nagarjuna Fertilizers has about 150 users who interact with its SAP ERP. Over the next six months, Katari plans to increase its usage to about 600 users. With a skeletal IT team, it is important for the enterprise to have end-to-end control over everything possible.

While studying the business case, Nagarjuna Fertilizers' management looks at the cost component and the timing of the technology being recommended. Do you really need the technology there? Can you do it sooner or later? Is it really going to help to achieve what the organization wants to achieve? These are the standard questions. "If the budget is not there, I might get into the rigor to prove the difference between 10/100 and 10Gbps. But at this time, I would get into the timing and reasoning aspects — the business decisions more than the technical ones. The technical decisions are left to the divisional head," says Katari. "However, the management will come back to discuss the business differentiations or returns that were expected after a stipulated time period of the implementation. As of now, the business returns are yet to come."

As far as the technical requirements are concerned, the Hyderabad-headquartered fertilizer manufacturer conducts testing. "We have got a very good response, compared to what we used to get in the past. Experiences are very much technical in nature and not anywhere connected to business. I would not start tracking the returns of the implementation unless I start seeing business results. Technically, we are happy. Business expectations are yet to be realized," he says. "It is a technology that is bound to come out. It might take a little longer to talk in right volumes and commercial diving mechanisms." With the content growing phenomenally and applications increasingly demanding bandwidth, companies won't have any choice but to go for 10GbE, Katari believes. CIO

With inputs from Phil Hochmuth and Jim Duffy.
Senior Correspondent Gunjan Trivedi can be reached at [email protected]

A Clinical Test for 10G

When Vimta Labs announced its plans in May 2005 to deploy 10G Ethernet at its life sciences facility, it might have raised a few eyebrows. Few setups, let alone a pharma-oriented company, would have hazarded a deployment that was so new in India.

One year on, the Rs 55-crore contract research and testing services provider has gone the whole hog. The first phase of its 10G Ethernet-over-copper installation is nearly through, at a time when most Ethernet buyers are still opting for fiber.

The solution fitted in with the company's "broad policy of protecting itself from obsolescence", says chairman and MD S.P. Vasireddi. "It suited our need for scalable architecture. Right now, it delivers 1G. But, with a 10-fold increase capability, it is an option that won't get obsolete very soon," he believes.

But how apt is this standard for a company like Vimta? Its storage requirements are immense, owing to applications such as Laboratory Information Management Systems. This entails transfer, management and storage of high-resolution graphics and visual data. At each stage of pre-clinical work, for instance, the findings generated would include large slides and 3D data. Vimta Labs seeks to store all this for its research and to regularly track the results. In doing so, it has also consciously begun to comply with regulatory bodies like the US FDA.

Post the 10G installation, Vimta doesn't need to worry about bandwidth issues. Consider this: it now has 62 terabytes of data storage that is still scalable, not to mention the high-speed connectivity between its labs. "Bandwidth is critical because we have to constantly transfer large files. 10G Ethernet ensures smoother movement of data," says Vasireddi.

Another area where 10G is proving more than useful at Vimta Labs is in supporting its Learning Management System. These systems demand architecture that can store high-resolution audio-visual footage of training programmes, while also enabling convergent technologies for interactive training sessions and navigation through the content.

By Kunal N. Talgeri


Handing IT to the Users

K.V. Kamath, MD & CEO, ICICI Bank, believes that the CIO must harmonize the organization's user groups in the context of their technological needs.

BY GUNJAN TRIVEDI

View from the Top is a series of interviews with CEOs and other C-level executives about the role of IT in their companies and what they expect from their CIOs.

At ICICI Bank, it is the users, not the IT team, that own, underwrite and monitor IT implementations. Each user group is responsible for the success or failure of the applications it needs. It's an approach that K.V. Kamath, MD & CEO of ICICI Bank, champions, and one founded on his own experience with ICICI's Project Finance Division in the early 1980s. That very stint led him to believe that 'incomplete communication and inappropriate ownership' are instrumental in the failures of IT projects.

K.V. Kamath expects IT to: continue serving as a key market differentiator; ensure ICICI Bank's compliance with regulatory requirements; and take banking to rural India.

CIO: In your first stint with ICICI's Project Finance Division, you implemented its digitization. How did the project influence your outlook on technology as an investment?

K.V. Kamath: That was way back in 1981, when all we had was an 8-bit machine with 20MB of storage running on an 8086 processor. Despite this, we ran the entire leasing department of ICICI. When I say we ran it, I mean documentation, mail, spreadsheet, database, billing and accounting processes. It taught us that we could effectively run an almost paperless office and that technology can be applied in an easy-to-understand and cost-effective way across the organization.

The division's computerization also changed the relationship between machines and individuals. Technology was once supposed to be the master and users were slaves. Technology could only be accessed from the CTO's office and by his team of programmers. With the division-wide computerization, this relationship suddenly changed, and machines and users were on an equal footing. People had developed the ability to interact with machines directly and make them do whatever they wanted.

At ICICI Bank, why do business units or user departments own their technological implementations?

To go back to 1981 and the period after, I observed that, by and large, implementations failed because there was an interface in which a third person took in what the user wanted, translated it to somebody else, and then an implementation was attempted. There were two primary problems with this approach: incomplete communication and inappropriate ownership. These are the biggest reasons why IT projects fail. Given this, we felt that user groups should own and implement their own IT.

Does this approach help cut implementation costs and lead to better technology management?

I think those are some of the benefits. The true benefit, however, is in articulating your needs, having a better understanding of technological challenges, finding a way to resolve those challenges, and executing a technological solution. This leads to critical benefits. First, there is a dramatic reduction in the number of failed projects. Second, it enables a reduction in implementation time. Eventually, all this naturally saves costs. Since user departments are conscious of their overheads, this approach leads to further savings.

What then is the role of the CIO at ICICI Bank?

We have to remember that we also need to run technology in such a way that there isn't technological anarchy in the organization. Various user groups can end up setting up disparate systems that don't communicate with one another, hardware platforms that are different from each other, and protocols that don't interact. There is a strong need to ensure this doesn't occur. A CIO brings various user groups together and keeps them in harmony in the context of their technological needs. The CIO is like a small clearing house, which ensures that all systems talk to each other, that all software is compliant to a viable extent, and that various divisions understand the costs involved — hardware and software — in specific projects. This can be done with a team of less than 15 people and you don't need a large office to carry out this function. Here, the people implementing projects are embedded in user groups.

You introduced the Silicon-Valley-styled '90-day rule'. Is this still the designated time frame for IT projects at ICICI Bank?

This rule not only continues to be the time frame for IT projects, but for any project within ICICI. We're always asking our business divisions why projects can't be done in 90 days or less. For the most part, we have found that when we have articulated this rule and made it clear that we're going to stick to it, we have managed to. A big advantage of the 90-day rule is that it prevents projects from slipping, getting out of hand, or being abandoned for lack of inputs. This rule ensures we have tight control over the implementation of projects.

However, there are exceptions. Certain projects, such as a greenfield project, might take more time to roll out. Also, in projects in which the software is available but people need to figure out compatibility issues, implementation can take more than 90 days. Having said that, we have also seen instances where vendors have quoted a year for implementation, and we decided to run the project ourselves with the vendor acting as an advisor. We found we were able to implement the project well under 90 days.

I believe that sometimes there is a lot of leeway built into project implementation schedules. But if you have a confident team, you can roll it out in under 90 days.

Has this approach helped increase ROI?

Honestly speaking, we have never taken the ROI approach. For us, technology is a base case. We can say that we are almost a technology company running a financial services business. When that is the case, which ROI are we talking about? Rather, I stress on figuring out how I can ensure that the cost of a project is in the best interests of the business. I also try to assess by what multiples or factors a project is in the best interests of the business.

How does ICICI Bank deal with compliance? Does Basel II, for instance, put a strain on your IT implementations?

Not at all. In fact, whatever we have had to do in terms of Basel II, we could have almost done in-house. There is no stress at all in terms of what we had to do, as an organization, within the scope of technology or outside. Indeed, technology has gone a long way in ensuring compliance. Increasingly, you have technological solutions that validate whether what you are doing complies with regulatory requirements and internal policies.

With technology, the level of compliance comfort increases because real-time validation is introduced, compared to manual or end-of-day interventions. In trading, in particular, you can have real-time monitoring of limits without putting a sense of strain on processes and functions.

How will IT help expand ICICI Bank's services to rural India, given the country’s poor technology infrastructure?

I believe that when we roll out into rural India in the next 12 to 24 months, technology, in terms of telephony, will have penetrated deep into rural India. And telephony will be the primary connect between urban technology and rural India. Without technology, we don't have a chance to execute our rural strategies.

In fact, we expect that our rural efforts will probably cause a technological disruption in the way we use IT. We might witness an interesting situation, in which we will have to retrofit what we will be doing in rural India into urban India. I think we will have plenty of cutting-edge technology that will go into rural India before it touches more urban areas. We'd rather not touch already installed bases in urban India and do the cutting-edge implementations, basically aimed at bringing down costs, in rural India. I expect chip-enabled smart cards, biometric verification, etcetera, to be introduced into rural India significantly ahead of urban India.

Rural India will work on a technology-led, branchless model. In this model, authentication becomes critical and we will need smart technologies to authenticate. Biometrics is one. We will have to leapfrog with technology if we want to make our rural strategies work.

So far, technology has been a key differentiator for ICICI Bank. With other banks adopting a similar approach, how will ICICI retain its lead?

Ultimately, technology is a mindset issue. I can't say other banks will not do well with technology. However, we have to ensure the strength of our ethos, which is that users own technology. We must maintain the capacity to keep an open mind where technology is concerned, show an ability to take risks, and have an entrepreneurial mindset with technology and leadership.

Frankly, when I look around, I find very few people who have these. For example, let's say you want to run established CRM software. You find that costs are becoming prohibitive and that your vendors are no longer able to dialogue on costs. You are happy with the system, but you face a cost dilemma. What do you do? Do you encourage a user group which says it is happy with the product but holds you, as a CEO, responsible if things go wrong? This user group does not meet the preconditions I mentioned. But, if you have a user group that agrees to own the challenge, that agrees to take a leap and says that they are ready to take a risk since the benefits are huge, bingo! This is the sort of effort that is required to be successful with a technological edge. Now, if there is somebody else who can also take these kinds of decisions, then he becomes competent competition. The point is: can they do it time after time after time?

As more IT-driven financial products and services are being commoditized, how will ICICI ensure that it keeps introducing market differentiators?

I don't think commoditization happens to that extent. There is still a lot of customization required. And that is where the problem lies. What you see is that, however commoditized a product might appear to be, when you implement it, you have to be prepared to face some pleasant or unpleasant surprises. We have become immune to this. Whatever shocks exist, we have to be prepared to bear them. There is, however, a learning curve, and it is one of the biggest barriers to anybody competing with us. And we're continuously going up that curve.

We have the humility to say that we learn from any business, not just the banking business, and that we keep our ears and eyes open. To be successful, I maintain, we have to have the mindset that we are continuously open to ideas and innovation, and are on the lookout for the next improvement that we can leverage for even greater advantage. CIO

Senior Correspondent Gunjan Trivedi can be reached at [email protected]

SNAPSHOT: ICICI Bank
Offerings: Banking products and financial services
Total assets*: Rs 251,389 crore
Net profit*: Rs 2,540 crore
Customers: 1.5 crore
Employees: 27,000
Branches & counters: 620
ATMs: 2,225
CIO: Pravir Vohra
*As on March 31, 2006


Powering Down

By Susannah Patton

Electricity-hungry equipment, combined with rising energy prices, is devouring data center budgets. Here's what you can do to get costs under control.

Reader ROI:
Why energy consumption has become a headache for CIOs
Ideas for reducing data center electricity use

A typical 10,000-square-foot data center consumes enough juice to turn on more than 8,000 60-watt lightbulbs. That amount of electricity is six to 10 times the power needed to operate an office building at peak demand, according to scientists at Lawrence Berkeley National Laboratory. Given that most data centers run 24/7, companies that own them could end up paying millions of dollars this year just to keep their computers turned on.
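
The lightbulb comparison translates into a rough load figure. A back-of-the-envelope sketch follows, using only the numbers in the paragraph above plus an assumed electricity tariff for the cost line.

```python
# Rough arithmetic behind the lightbulb comparison above. The bulb count,
# wattage and floor area come from the paragraph; the electricity tariff
# is an assumed figure purely to show how an annual bill would be derived.

BULBS = 8_000
WATTS_PER_BULB = 60
FLOOR_SQFT = 10_000
ASSUMED_TARIFF_PER_KWH = 0.08  # hypothetical $/kWh

load_kw = BULBS * WATTS_PER_BULB / 1_000            # 480 kW continuous load
density_w_per_sqft = load_kw * 1_000 / FLOOR_SQFT   # 48 W per square foot
annual_kwh = load_kw * 24 * 365                     # running 24/7
annual_cost = annual_kwh * ASSUMED_TARIFF_PER_KWH

print(f"Continuous load: {load_kw:.0f} kW ({density_w_per_sqft:.0f} W/sq ft)")
print(f"Annual energy: {annual_kwh:,.0f} kWh, about ${annual_cost:,.0f} at the assumed tariff")
```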

And it's getting more expensive. The price of oil may fluctuate, but the cost of energy to run the data center probably will continue to increase, energy experts say. This is because global demand for energy is on the rise, fueled in part by the proliferation of more powerful computers. According to Sun Microsystems engineers, a rack of servers installed in data centers just two years ago might have consumed 2 kilowatts and emitted 40 watts of heat per square foot. Newer, 'high-density' racks, which cram more servers into the same amount of space, are expected to consume as much as 25 kilowatts and give off as much as 500 watts of heat per square foot by the end of the decade. The dire predictions keep coming. Most recently, a Google engineer warned in a research paper that if the performance per watt of today's computers doesn't improve, the electrical costs of running them could ultimately exceed their initial price tag.

"As the demand for computing grows, the cost of power is a larger and larger concern," says Dewitt Latimer, CTO at University of Notre Dame.


Latimer is grappling with finding the space and adequate power to handle a growing demand for cheaper and ever-more powerful high-performance computer clusters at Notre Dame. The problem comes not just from the computers themselves; Latimer is worried that the air-conditioning needed to keep the machines cool will also eat away at his budget.

Like Latimer, every CIO who is responsible for a data center — even those who outsource data center management to a hosting company — faces this conundrum: how to keep up with ever-increasing performance requirements while taming runaway power consumption. The problem is most pressing for companies on either coast and in large cities in between, where space is at a premium and companies compensate by putting more servers into their existing buildings. And there is no simple solution. Business demand for more applications results in companies adding more servers. According to market research company IDC (a sister company to CIO’s publisher), server sales are growing by 10 to 15 percent annually.

Nevertheless, CIOs with huge energy bills are developing strategies to contain power costs by deploying more energy-conscious equipment and by using servers more efficiently. "There's no question that the issue of power and cooling is a growing concern," says John Humphreys, an IDC analyst. "The assumptions used for building data centers have been blown away."

The Problem: IT Hogs Energy

IT's energy woes have a lot to do with market factors that affect everyone; at the start of the year, the price of a barrel of oil was more than double what it was three years earlier. Anyone who thinks the current energy crunch is going away need only look at global energy markets.

The oil shocks in the ‘70s and ‘80s stemmed from large, sudden cuts in supply. This time, it’s different. While it’s true that some of today’s high prices stem from supply shocks tied to the US invasion of Iraq and hurricanes on the Gulf Coast, the world’s thirst for oil over the past 25 years has grown faster than the energy industry has been producing it. And rapid economic expansion in China and India has led to greater energy demand, putting further pressure on the world’s energy markets.

Servers in corporate data centers may use less energy than manufacturing facilities for heavy industries, but within a company, IT is an energy guzzler. “We’re pretty hoggish when it comes to power consumption in the data center,” says Neal Tisdale, VP of software development at NewEnergy Associates, a wholly owned subsidiary of Siemens. NewEnergy’s Atlanta data center performs simulations of the North American electric grid to help power companies with contingency planning. “We turn on the servers, and we just leave them on.”

The Solar-Powered Server Farm
How one company got its data center off the grid.

Phil Nail and his wife, Sherry, have learned that green technology and data centers can go together. The couple started their Web-hosting company, Affordable Internet Services Online (AISO), nine years ago and switched to solar power in 2001. The company, located in Romoland, California, provides Internet service to customers that include a film production company and Veggiedate.org, a dating service for vegetarians. The company data center's 200 servers are powered by 120 photovoltaic panels that generate electricity on platforms mounted beside the data center.

According to Nail, the panels supply power to run the entire data center, including the offices and air conditioners. In case of a power failure, AISO can get power from its emergency generator (which runs on natural gas) or, as a last resort, the utility grid. The hosting company also uses servers with energy-efficient Advanced Micro Devices Opteron processors from Open Source Storage. "We built our company to be environmentally friendly because we thought it was the right thing to do," says Nail.

Nail acknowledges that a solar-powered data center isn't for everyone because startup costs can be expensive; in 2001, it cost him $100,000 to install 120 solar panels for his 2,000-square-foot data center. He says his investment has paid off in low energy costs, and his eco-friendly marketing message has helped to attract some customers. But he acknowledges that the cost of switching to solar power would be steep for a large data center with thousands of servers.

Now Nail is taking green power to another level: the data center's roof, where he intends to put five inches of dirt and cover it with drought-tolerant plants. "That's supposed to reduce the amount of cooling needed by 60 percent," he says.

By S.P.


The exact amount of electricity used by data centers in the United States is hard to pin down, says Jon Koomey, staff scientist at Lawrence Berkeley National Laboratory. Koomey is working with experts from Sun and IDC to come up with such an estimate. Nevertheless, most experts agree that electricity consumption by data centers is going up. According to Afcom, an association for data center professionals, data center power requirements are increasing an average of eight percent per year. The power requirements of the top 10 percent of data centers are growing at more than 20 percent per year.
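
Compounded over a few years, those growth rates add up quickly. The sketch below projects them forward from an arbitrary 1,000kW starting load; only the 8 and 20 percent annual rates come from the Afcom figures above.

```python
# What the Afcom growth rates above compound to over five years.
# The 8 and 20 percent annual rates come from the article; the 1,000kW
# starting load is an arbitrary example.

def project(load_kw: float, annual_growth: float, years: int) -> float:
    """Compound a power requirement forward by a fixed annual growth rate."""
    return load_kw * (1 + annual_growth) ** years

start_kw = 1_000.0
for label, rate in (("average data center", 0.08), ("top 10 percent", 0.20)):
    print(f"{label}: {start_kw:.0f} kW -> {project(start_kw, rate, 5):,.0f} kW after 5 years")
```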

At the same time, business demands for IT are increasing, forcing companies to expand their data centers. According to IDC, at least 12 million additional square feet of data center space will come online by 2009. By comparison, the Mall of America in Minnesota, the world’s largest shopping mall, covers 2.5 million square feet.

SOLUTION 1: More Efficient Computers

Just as automakers built SUVs when oil prices were low, computer manufacturers answered market demand for ever-faster and less expensive computers. Energy usage was considered less important than performance.

In a race to create the fastest processors, chip makers continually shrank the size of the transistors that make up the processors. The faster chips consumed more electricity, and at the same time allowed manufacturers to produce smaller servers that companies stacked in racks by the hundreds. In other words, companies could cram more computing power into smaller spaces.

Now that CIOs are beginning to care about energy costs, hardware makers are changing course. Silicon Valley equipment makers are now racing to capture the market for energy-efficient machines. Most chip makers are ramping up production of so-called dual-core processors, which are faster than traditional chips and yet use less energy. Among these new chips is Advanced Micro Devices’ Opteron processor, which runs on 95 watts of power compared with 150 watts for Intel’s Xeon chips. In March, Intel unveiled a design for more energy-efficient chips. Dubbed Woodcrest, these dual-core chips, which Intel says will be available this fall, would require 35 percent less power while offering an 80 percent performance improvement over previous Intel chips. And last November, Sun Microsystems introduced its UltraSparc T1 chip, known as Niagara, which uses eight processors but requires only 70 watts to operate. Sun also markets its Galaxy line of servers as energy-saving equipment.

"The manufacturers are getting better now," says Paul Froutan, VP of product engineering for Rackspace, which manages servers for clients in its five data centers. With more than 18,000 servers to watch over, Froutan has been worrying about energy costs for years. He's seen the company's power consumption more than double in the past 36 months, and in the same period has seen his total monthly energy bill rise five times to nearly Rs 1.35 crore.

Latimer, who oversees Notre Dame's Center for Research Computing, first appreciated the power consumption problem when the university decided to hire a hosting company to house its high-performance computers off-campus. On-campus electrical costs associated with data centers have generally been rolled together with other facilities costs, and so the Rs 1.35 lakh monthly utility bill from the hosting company for running a 512-node cluster of Xeon servers came as a shock.

Notre Dame’s provost recently called Latimer and other leaders together to talk about how to handle the increasing demands that a growing research program was beginning to place on the campus utility systems and infrastructure. Faculty members are requiring more space, greater electrical capacity and dedicated cooling for high-powered computers and other equipment such as MRI (magnetic resonance imaging) machines. Latimer’s recent conversations with Intel, AMD, Dell and Sun about his plans to buy new computer clusters “have been very focused on power consumption,” he adds.

SOLUTION 2: The Latest in Cooling

In September 2005, officials at Lawrence Livermore National Laboratory switched on one of the world's most powerful supercomputers. The system, designed to simulate nuclear reactions and dubbed ASC Purple, drew so much power (close to 4.8 megawatts) that the local utility, Pacific Gas & Electric, called to see what was going on. "They asked us to let them know when we turn it off," says Mark Seager, assistant deputy head for advanced technology at Lawrence Livermore.

What's more, ASC Purple generates a lot of heat. And so, Seager and his colleagues are working on ways to cool it down more efficiently than turning up the air-conditioning. The lab is trying out new cooling units for ASC Purple and the lab's second supercomputer, BlueGene/L (which was designed with lower-powered IBM chips, but is hot). Lawrence Livermore recently invested in a spray cooling system, an experimental method in which heat emitted by the computer is vaporized and then condensed away from the hardware. Seager says this new method, which holds the promise of eliminating air-conditioning units, would allow the lab to save up to 70 percent on its cooling costs.

It’s not only supercomputers that create supersized cooling headaches. Tisdale, with NewEnergy Associates, says maintaining adequate and efficient cooling is one of the hardest problems to solve in the data center. That’s because as servers use more power, they produce more heat, forcing data center managers to use more power to cool down the data center. “You get hit with a double whammy on the cooling front,” says Rackspace’s Froutan.

To address the cooling dilemmas of more typical data centers, hardware makers such as Hewlett-Packard, IBM, Silicon Graphics and Egenera have offered or are coming out with liquid cooling options. Liquid cooling, which involves cooling air using chilled water, is an old method that is making a comeback because it’s more efficient than air-conditioning. HP’s modular cooling system attaches to the side of a rack of HP computers and “provides a sealed chamber of cooled air” separate from the rest of the data center, says Paul Perez, vice president of storage, networking and infrastructure for HP’s Industry Standard Server group.

More efficient servers help too. Last spring, Tisdale discovered that his data centers had reached their air-conditioning limit. While he had always imagined that a lack of physical space would be his biggest constraint, he discovered that if he ever lost power, his main problem would be keeping the air-conditioning going. Tisdale had replaced all 22 of his company's Intel servers in its Houston data center with two dual-core Sun Fire X4200 servers. The new servers are more energy-efficient, according to Tisdale. And so when he proposed installing the servers in Atlanta, he justified the purchase by arguing that he could avoid having to buy a bigger air conditioner that would have used even more power. Tisdale said that according to company projections, the move will save electricity and reduce heat output by 70 to 84 percent.

There are better ways to use traditional air-conditioning. Neil Rasmussen, CTO and co-founder of American Power Conversion (APC), a vendor of cooling and power management systems for data centers, says CIOs should consider redesigning their air-conditioning systems, particularly as they deploy newer, high-density equipment. “Instead of cooling 100 sq ft, it makes sense to look for the hot spots,” concurs Vernon Turner, group vice president and general manager of enterprise computing at IDC.

Traditional cooling units “sit off in the corner and try to blow air in the direction of the servers,” Rasmussen says. “That’s vastly inefficient and a huge waste of power.” Rasmussen argues that the most efficient way to cool servers is with a modular approach that brings cooling units closer to each heat source. Meanwhile, he adds, CIOs who manage data centers in colder climates should use air conditioners that have ‘economizer’ modes, which can reduce power consumption in the dead of winter. Newer air conditioners have compressors, fans and pumps that can slow down or speed up depending on the outside temperature.

SOLUTION 3: A More Efficient Data Center

Just as aging cars are not as fuel-efficient as newer models, the majority of data centers in the US are using a lot more energy than they should. A survey of 19 data centers by the consultancy Uptime Institute found that 1.4 kilowatts are wasted for every kilowatt of power consumed in computing activities, more than double the expected energy loss.
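
Expressed as a ratio, the Uptime Institute figure means a data center draws roughly 2.4 kilowatts for every kilowatt that reaches the computing gear. A small sketch of that arithmetic follows; the 'expected' waste figure shown for comparison is an assumption, since the survey only says the measured loss was more than double it.

```python
# The Uptime Institute finding above, expressed as a simple overhead ratio.
# Only the 1.4kW-wasted-per-1kW-of-compute figure comes from the article;
# the "expected" waste value is shown as an assumption for comparison.

wasted_per_it_kw = 1.4
total_per_it_kw = 1.0 + wasted_per_it_kw           # 2.4 kW drawn per kW of compute
overhead_share = wasted_per_it_kw / total_per_it_kw

assumed_expected_waste = 0.7                       # assumption: half the surveyed figure
print(f"Total draw per kW of compute: {total_per_it_kw:.1f} kW")
print(f"Share of power lost to overhead: {overhead_share:.0%}")
print(f"Surveyed waste vs assumed expectation: {wasted_per_it_kw / assumed_expected_waste:.1f}x")
```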

However, like many people who aren’t going to junk their older cars right away, many companies aren’t ready to tear out their data centers to build new ones with a more efficient layout. “We haven’t reached the point yet where it makes financial sense to rebuild most data centers from scratch,” says Rackspace’s Froutan. And so for most companies, the journey toward an energy-efficient data center will be a gradual one.

For NewEnergy Associates' Tisdale, that means retiring aging servers in one data center seven at a time and replacing them with more energy-efficient equipment. But redesigning your data center also means making the most of what you have through server consolidation and, more specifically, the use of virtualization software.

Virtualization allows several operating systems to reside on the same server. Froutan says that virtualization will help his data centers make do with fewer servers by allowing them to perform more tasks on one machine. In addition, he says, energy can be saved by deferring lower-priority tasks and performing them at night, when the cost of power can be three times less expensive. IDC's Turner agrees that CIOs need to improve server utilization in order to cut both power and cooling costs. Instead of building one server farm for Web hosting and another for application development, for example, they should use virtualization to share servers for different types of workloads.
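
The off-peak deferral idea is easy to quantify. The sketch below uses a hypothetical batch workload and daytime tariff; only the roughly three-to-one day/night price ratio is taken from Froutan's remark.

```python
# Rough sketch of the off-peak deferral idea described above. The job's
# energy use and the daytime tariff are hypothetical; only the roughly
# three-to-one day/night price ratio is taken from the article.

job_energy_kwh = 500.0     # hypothetical nightly batch workload
day_rate = 0.12            # hypothetical $/kWh
night_rate = day_rate / 3  # night power at roughly a third of the daytime cost

daytime_cost = job_energy_kwh * day_rate
nighttime_cost = job_energy_kwh * night_rate
print(f"Run by day:   ${daytime_cost:.2f}")
print(f"Run at night: ${nighttime_cost:.2f} (saves ${daytime_cost - nighttime_cost:.2f})")
```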

Finally, advises APC’s Rasmussen, if you are building a new data center, it’s better to design it to accommodate the equipment that you need right now, rather than building facilities designed for what you might eventually need as you grow, as many companies have done. By using a more modular architecture for servers and storage — so capacity can be added when needed — a company can avoid such waste and still be prepared for growth.

How to Start Saving

As CIOs search for more energy-efficient data center equipment and design, they need to educate themselves about which solutions will work best for them. As part of the information-gathering process, CIOs should establish metrics for power consumption in their data centers and measure how much electricity they consume.

There aren't many generally accepted metrics for keeping tabs on power consumption. But according to Turner, such metrics could include wattage used per square foot, calculated by multiplying the number of servers by the wattage each uses and dividing by the data center's total square footage. Sun has come up with a method called SWaP, which stands for Space, Wattage and Performance. The company says this method, which lets users calculate the energy consumption and performance of their servers, can be used to measure data center efficiency. John Fowler, executive VP of the network systems group at Sun, says sophisticated customers are installing power meters at their data centers to get more precise measurements.
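To make those metrics concrete, here is a minimal sketch of the two calculations described above: Turner's watts-per-square-foot figure and a SWaP-style score, which Sun defines as performance divided by the product of rack space and power draw. The server counts, wattages and benchmark numbers below are placeholders, and the choice of performance benchmark is left to the user.

```python
# Two of the metrics described above, with placeholder numbers.

def watts_per_square_foot(server_count, watts_per_server, floor_area_sqft):
    """Turner's rule of thumb: total server wattage divided by total floor area."""
    return server_count * watts_per_server / floor_area_sqft

def swap_score(performance, rack_units, watts):
    """SWaP-style score: Performance / (Space x Power).
    'performance' is whatever benchmark you standardize on."""
    return performance / (rack_units * watts)

if __name__ == "__main__":
    # A hypothetical 5,000 sq ft room with 400 servers drawing about 450 W each.
    print(f"{watts_per_square_foot(400, 450, 5000):.1f} W per sq ft")

    # Comparing an old and a new server on the same benchmark (made-up figures).
    old = swap_score(performance=100, rack_units=4, watts=800)
    new = swap_score(performance=140, rack_units=2, watts=450)
    print(f"SWaP old={old:.3f}, new={new:.3f} (higher is better)")
```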

It also pays to be an energy-aware buyer. As Fowler says, “Don’t just take the vendor’s word” on the matter. He suggests having a method of testing the server and its energy use before buying. The industry is still working on methods to compare servers from vendors in a live environment.

Ultimately, vendors’ ‘eco-friendly' messages may resonate only slightly. NewEnergy’s Tisdale, for example, still cares most about maintaining server performance. But he is impressed that new equipment will help him add more computing capability while maintaining current power usage levels. “Like a lot of people,” he says, “I’m not interested in turning off the servers.” CIO

Susannah Patton is a senior writer with CIO. Send feedback about this feature to [email protected]


Search For Efficiency Begins At Home

Google, typically tight-lipped about the technology behind its data centers, builds its own servers to save costs and because standard products don't exactly meet its needs.

Hardware makers invest heavily in researching and developing reliable products, a feature that most businesses value. But Google doesn't actually need very reliable servers because it has written its software to compensate for hardware outages, said Urs Holzle, senior vice president (operations), Google.

Instead of buying commercial servers at a price that increases with reliability, Google builds less reliable servers at a cheaper cost, knowing that its software will work around any outages. "For us, that's the right solution," Holzle said.

Another reason that Google builds its own servers is equally simple: it can save costs on power consumption. Energy efficiency is a subject Holzle speaks passionately about. About half of the energy that goes into a data center gets lost due to technology inefficiencies that are often easy to fix, he said.

The power supply to servers is one place that energy is unnecessarily lost. One-third of the electricity running through a typical power supply leaks out as heat, he said. That's a waste of energy and also creates additional costs in the cooling necessary because of the heat added to a building.

Rather than waste the electricity and incur the additional costs for cooling, Google has power supplies specially made that are 90 percent efficient. "It's not hard to do. That's why to me it's personally offensive that standard power supplies aren't as efficient," he said. While he admits that ordering specially made power supplies is more expensive than buying standard products, Google still saves money ultimately by conserving energy and cooling, he pointed out.

Google has data centers scattered around the globe but is usually reluctant to divulge details of the hardware and software running in the centers. Holzle spoke to journalists during his visit to Dublin for the final day of the European Code Jam, a contest for programmers sponsored by Google in an effort to identify talented potential workers.

— Nancy Gohring
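Holzle's one-third figure implies a typical power supply is only about 67 percent efficient. The sketch below is a back-of-the-envelope comparison, not Google data: it shows how the gap between a typical supply and a 90 percent efficient one compounds across a fleet once part of the waste heat has to be removed by cooling. Fleet size, per-server draw and cooling overhead are assumptions.

```python
# Rough comparison of power-supply losses at two efficiency levels.
# Fleet size, per-server draw and cooling overhead are illustrative assumptions.

def annual_kwh(servers, watts_per_server, psu_efficiency, cooling_overhead=0.5):
    """Energy per year: wall power for the IT load, plus a share of the waste heat
    counted again as cooling load."""
    it_load_kw = servers * watts_per_server / 1000
    wall_kw = it_load_kw / psu_efficiency
    waste_kw = wall_kw - it_load_kw
    return (wall_kw + waste_kw * cooling_overhead) * 24 * 365

if __name__ == "__main__":
    typical = annual_kwh(1000, 250, psu_efficiency=0.67)   # roughly one-third lost as heat
    custom = annual_kwh(1000, 250, psu_efficiency=0.90)    # Google-style supplies
    print(f"Typical supplies:       {typical:,.0f} kWh/yr")
    print(f"90%-efficient supplies: {custom:,.0f} kWh/yr")
    print(f"Difference:             {typical - custom:,.0f} kWh/yr")
```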


BY Rahul Neel Mani


The Indian Railways' Passenger Reservation System (PRS) has long been the poster child of Indian e-governance projects. But unlike many winning projects, this one isn't being left alone — for the better.

"The PRS is the lifeline of the railways. As the custodians of this application, we crave to make it better," says Shashi Bhushan Roy, group general manager, PRS, CRIS (Centre for Railway Information Systems). Custodian, it would appear, sounds a tad too earnest. But the PRS is indeed a legacy of the Indian Railways. Right from when it was conceptualized in the 1970s, it's been a shiny blue success that's made the Railways proud. And with good reason. It's lived up to the expectations of millions of travelers. It is a giant, complex application that juggles nine train types, 102 coach models and 40 different quotas. It can intimidate the best network managers with its 10 million reservations, cancellations and train enquiries every day. It caters to 11 lakh people on a daily basis and moves over 3,000 trains, covering 1,250 locations with over 4,600 terminals.

These statistics have grown steadily from 1985, when the PRS was piloted. And so have passenger expectations. Initially, the PRS was created purely to automate the process of reserving tickets. "Today passengers want reservations quickly and they don't want to go to a reservation counter. This puts pressure on us to refurbish the PRS application with more functions," says S. S. Mathur, general manager, IT infrastructure, CRIS.

And if CRIS can pick its way through a number of implementation issues, the PRS will be a ubiquitous way to provide information for passengers on the move.

At a strategic level, the new enhancements will burnish system efficiency, usher in demand forecasting and make it possible to streamline railway traffic. They will also give train ticket examiners (TTEs) the power to do their jobs more dynamically, introduce flexibility to customer interfaces and "keep malpractices at bay," says Mathur.

Getting Onboard the New PRS
It wasn't so long ago that the Railways worked with a system in which a central authority allocated 'ticket quotas' to each station along a train's route. But as traffic increased manifold and the number of long-haul trains grew, the quota system was driving the Railways into the ground.

Today's PRS is the third avatar of its original form, which worked as a host-based system. Individual applications ran on four host-based computers and connected from there to terminals at various locations in different zones. For example, Delhi's PRS host system connected to terminals in Mumbai, Chennai and Kolkata. It was succeeded by a networked version.


Reader ROI:

How handhelds can boost customer service

How the Indian Railways is dealing with capacity under-utilization

What business intelligence can do for the Railways

The Centre for Railway Information Systems continues to innovate and enhance an application that has controlled one of the world’s largest rail networks.


But even this networked version, developed in FORTRAN, was basic. Travelers requesting a ticket would get one if seats were available, or be placed on a waiting list.

“The PRS application has come a long way from when only five zonal railway data centers (Delhi, Mumbai, Kolkata, Chennai and Secunderabad) were networked. Today, it’s all-pervasive and provides the ability to make reservations from anywhere to anywhere. But it’s time to make it an application that can provide us enhanced facilities,” says Roy, who’s overseen the application for many years.

To meet rising passenger expectations, CRIS’s first strategy was to increase the number of offline reservation counters. Today, 1,300 counters have sprung up where there were 200 in 1994-95.

Roy and his team are determined to push the system. In just over three years, the PRS has gone from being an application that only permitted ticketing at reservation counters to allowing passengers to buy tickets over the Internet. And now, reservation information and ticket status are even available over mobile phones through SMS.

“This [e-ticketing] was a major boost to the facilities provided by the Indian Railways to its passengers. We literally took the ticket to the customer,” says Mathur.

Impressive as these improvements are, they are only extensions of the basic application; the front-end was tweaked and extended to the passengers. “Now’s the time for fundamental changes that will introduce transparency and ease of data access,” states Roy.

Coupling Information
Until today, trains pulling out of a station were disconnected from the PRS. There was no way a railway official could tally the number of passengers canceling their journeys at the last minute — which meant that their seats went empty all the way to the last stop. "It's a big drawback. We aren't able to utilize that capacity and lose additional revenue," says Roy. Among the enhancements planned for the PRS is a move to empower TTEs with handheld terminals connected to the back-end of the PRS. The handhelds allow TTEs to 'give back' vacant berths to the system. By updating the PRS in real time, TTEs can signal to the next station the number of vacant seats aboard — after the train has departed.

“The effort has been possible with GPRS connectivity provided by BSNL. It is a dedicated link provided to us. We’re piloting the project right now but will replicate it on other routes soon,” says Roy.

One of the problems they faced during early trials was that CRIS was forced to restart the whole process of data gathering every time a link went down. Broken links, however, are a reality on India's 108,706 km of track, so CRIS installed a connection manager — WebSphere — to pick up signals across patchy service-provider coverage. When the link is disrupted, WebSphere waits till it is restored and then synchronizes data with the back end, keeping tabs on when the link went down.
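The article doesn't detail how CRIS configured WebSphere, but the store-and-forward pattern it describes can be sketched roughly as below: buffer berth updates on the handheld, note when the link drops, and replay the backlog once GPRS connectivity returns. Every class and function name here is illustrative, not CRIS or WebSphere code.

```python
import time
from collections import deque

# Illustrative store-and-forward sync for an intermittently connected handheld.
# This is not CRIS or WebSphere code; it only sketches the pattern described above.

class BerthUpdateSync:
    def __init__(self, send_fn):
        self.send_fn = send_fn        # callable that pushes one update to the back end
        self.pending = deque()        # updates buffered while the GPRS link is down
        self.link_down_since = None   # timestamp of the last outage, for reporting

    def record_update(self, update):
        """TTE marks a berth vacant or issues a ticket; the update is timestamped and queued."""
        update["recorded_at"] = time.time()
        self.pending.append(update)

    def on_link_lost(self):
        self.link_down_since = time.time()

    def on_link_restored(self):
        """Replay everything buffered during the outage, oldest first."""
        while self.pending:
            update = self.pending[0]
            if self.send_fn(update):  # drop the update only after the back end acknowledges it
                self.pending.popleft()
            else:
                return                # link still flaky; keep buffering and retry later
        self.link_down_since = None
```

A production version would also persist the queue on the device so that a reboot during an outage doesn't lose updates.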

Roy says that they plan to use the handheld terminals as an extension of the PRS. This will allow TTEs to issue tickets on the fly to people traveling without a reservation. “More important is the fact that the transaction is done with the system’s knowledge and it’s all happening in real time,” adds Roy.

In short, the system will introduce transparency. The handheld will ensure that every seat is accounted for. It also facilitates better customer service. Since the terminals are connected with the back-end, they can be used as a source of information for traveling passengers. CRIS plans to provide information on connecting trains and the PNR status for travelers en route who hold unconfirmed reservations on connecting trains. CRIS officials say that in the future, these terminals could also be used to book retiring rooms. Extending the PRS to a moving train will both add revenue and boost customer service. Soon, CRIS will enable the handhelds to report air-conditioning failures or mechanical faults — giving approaching stations time to prepare. "With access to information on moving trains, train travel will undergo a significant improvement," says Roy.

Initially, the PRS crashed when used with the handhelds.


SNAPSHOT: PRS
Transactions per day: Rs 1 crore
Passengers handled per day: 9.95 to 10.85 lakh
Trains controlled: over 3,080
Locations covered: 1,232
Terminals: 4,169
IT budget: Rs 272 crore (2005-06 capex)


The Indian Railways IT Journey

1970s: The Passenger Reservation System is conceived and introduced in Delhi, and then rolled out in three other metros. Each application runs independently.

1993: CRIS creates the Country-Wide Network for Computerized Enhanced Reservation and Ticketing. This enables 'anywhere to anywhere' reservation from any counter.


CRIS continues to tackle problems with the device. It still takes more time to read data off the handheld than off a manual chart, so CRIS will need to make it faster and train TTEs. They've also tempered their plans to load the handhelds with passenger information that would let TTEs scroll for passenger names. "The pilot results are giving us a great amount of learning and we are constantly correcting ourselves," says Roy.

The new enhancements will also simplify cumbersome refund procedures. Today, getting a refund requires passengers to locate a counter three hours before a train leaves. Once it departs, getting a refund is very hard, which gives the Railways a reputation for poor service.

To counter this, CRIS has introduced Computerized Coaching Refunds, which provides data on passengers who have canceled their travel plans. "Clubbed with handhelds, immediate changes to the PRS make it easier to get a refund," says Mathur. Since handhelds make data available in real time, they allow the Railways to process refund requests more efficiently.

The system also allows a group of passengers to be upgraded together. "Unlike air travel, when a passenger is upgraded on a train, we try not to split a group traveling on one ticket. The system makes it easier to move groups," says Roy.

Riding on Business Intelligence
Internally, enhancements to the PRS are making increased business intelligence (BI) possible. According to Roy, the old PRS ran on a flat-file system, which worked perfectly as a transaction processing system but failed to generate good MIS reports. "Data generated by the system isn't of any use unless it ends with business intelligence analysis and is used for commercial benefits," explains Roy.

The improvements will give the Railways more granular information on passengers' travel behavior and patterns. This will, in turn, enable more dynamic re-assignment of capacity and allow the Railways to strategically add or cancel coaches. "Tatkal and other quotas can be re-allocated based on this information," says Mathur.

“In the next stage of BI, we will collect data on customer needs. We’d like to know which passenger groups travel on which lines, why they travel, when they travel, and their class preference,” says Roy. Some of the data already coming out of the system shows that places like Hyderabad and Chennai are becoming popular medical destinations.

“Today, we know the quantum of demand, but with the help of BI, we’ll know the reason behind demand swings,” says Roy.

It will lead to better demand forecasts and will streamline railway traffic and increase efficiency, he adds.

The spadework of the last few years is paying off. During the last fiscal, railway revenues rose by 20 percent compared to the previous fiscal — despite the direct competition from low-cost airlines. “The enhancements on PRS have contributed a lot already,” says Roy.

The effort of moving from a static computer interface to a dynamic one has resulted in improved customer satisfaction and better utilization of capacity — all without increasing cost. It will also help create special ticket prices for both busy and off seasons.

All aboard the new PRS. CIO

Bureau Head North Rahul Neel Mani can be reached at [email protected]


"The PRS application has come a long way. But now's the time for fundamental changes that will introduce transparency and ease of data access."
— Shashi Bhushan Roy, group general manager, PRS, CRIS

2001: PRS migrates to a new platform: VMS OS and Reliable Transaction Router middleware, written in C with charting routines in FORTRAN, running on host-based systems and VT220 terminals.

2002: Indian Railways introduces Internet-based reservation and the Reservation Status Enquiry System. Today, it covers over 180 cities and draws in Rs 1 crore every day.

2005-06: E-ticketing is introduced on all express trains and the PRS is implemented on a nation-wide basis with counters in over 1,300 cities.



The Electric Reforms

KPTCL seeks 100 percent computerization to give consumers the best access to information, says Bharat Lal Meena, MD of KPTCL.


Interview | Bharat Lal Meena

Last year, Karnataka Power Transmission Corporation (KPTCL) kicked off the centenary celebrations of electric lighting in Bangalore. Lighting in the city may be a hundred years old, but KPTCL itself was incorporated only as late as 1999 as part of the power reforms in Karnataka. It was formed by carving out the transmission function of the Karnataka Electricity Board, while four electric supply companies (ESCOMs) were set up in 2002 to distribute power across the State. Through its corporatization phase, technology has become a part of KPTCL's plans. Though its IT spends weren't too flattering to begin with, KPTCL has been investing in IT — in the installation of micro-controllers to track power supply for stipulated periods, the computerization of its cash counters, for instance.

As with most power companies, transmission and distribution (T&D) losses have been an issue for KPTCL, touching almost 30 percent in rural areas. Bharat Lal Meena, managing director of KPTCL and chairman of ESCOMs, believes that IT can help to cut these losses and eventually enable KPTCL to provide 24-hour power supply — even to the rural hinterland.

CIO: KPTCL's IT spend has risen from a few lakhs until 2003 to Rs 15 crore in recent times. What triggered this surge in IT investment?

Bharat Lal Meena: We noticed that the usage of IT was low and that we were not making optimum use of computerization, despite being in India's IT capital.


Using IT-powered solutions, Bharat Lal Meena, MD, Karnataka Power Transmission Corporation and chairman of the state's electric supply companies, has managed to track and control transmission and distribution losses — and brought the consumer into the loop too.

By Balaji Narasimhan


We felt the need to improve this, to streamline our processes, and to use IT properly in order to improve customer service and enhance MIS. That is why we decided to increase our IT spend.

Do you plan to increase investment in this area?

Yes, we will be spending on managing our information systems, as well as on the installation of SCADA (supervisory control and data acquisition) systems. We want to have last-mile SCADA too, so that we can monitor all power consumption meters online. We have already done this for our HT (high tension) systems, which have remote automatic meter reading. We also have RRAMR (Real-time Remote Automated Meter Reading), which we are going to extend from HT consumers to other consumers.

What difference can IT make to an organization like KPTCL?

First of all, IT can remove the manual interface, making things better for the consumer. Hundred-percent computerization gives consumers better access. For instance, we have put meter information of the consumers on the website, and BESCOM consumers’ bills for a period of 12 months are already on the Web. So, a consumer can easily track when he has paid and how much.

T&D losses are a major problem for most power companies. How can IT help tackle such issues?

IT has helped us a lot in this sphere. We have introduced a transmission monitoring system, whereby the power output can be metered at the level of the transformer that supplies power. In all urban areas, we have put meters in these transformers. These meters can be read remotely using Real-time Remote Automated Meter Reading. This will help evaluate losses at the transformer level.

Different consumers are connected to these transformers. We get readings on their meters on the same day and feed that into the system. We can thereby compare the readings of individual users against the transformer to which they are connected. This would have been impossible using a manual system. Since we have data on the location of individual transformers, we can derive a lot of valuable information.
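Meena doesn't spell out the arithmetic, but the comparison he describes comes down to summing the consumer meters hanging off a transformer and checking the gap against the transformer's own meter. The sketch below uses made-up readings and an arbitrary threshold purely to illustrate the idea.

```python
# Hypothetical loss check at the transformer level: compare the energy metered at a
# transformer with the sum of the consumer meters connected to it.

def loss_percentage(transformer_kwh, consumer_kwh):
    delivered = sum(consumer_kwh)
    return (transformer_kwh - delivered) / transformer_kwh * 100

if __name__ == "__main__":
    # One day's readings for a single transformer and its consumers (made-up numbers).
    readings = {"TX-0421": (1200.0, [180.0, 240.0, 310.0, 150.0, 205.0])}
    for tx_id, (tx_kwh, consumers) in readings.items():
        pct = loss_percentage(tx_kwh, consumers)
        flag = "investigate" if pct > 15 else "within tolerance"
        print(f"{tx_id}: {pct:.1f}% loss ({flag})")
```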

Similarly, Geographic Information Systems (GIS) can also play an important role. We are already using it in Bangalore for linking location to consumer data. For example, if somebody calls and complains, the system can tell us who called, from which area he called, and to which transformer he is linked. Such information is captured and used.

How does KPTCL plan to minimize power losses in rural Karnataka, which are at around 30 percent as opposed to around 9 percent in Bangalore?

We will be putting meters at the transformer level. We already have meters placed at the premises of different consumers in rural Karnataka. We will allow minimal losses that are on account of genuine reasons and then measure consumption, so we know what exactly is happening. The core issue here is: since we have IT, we can work along these lines. We are also monitoring all the installation work online. Thanks to computerization, we are in a position to know where installations are taking place. The status of the work in progress can be measured and analyzed online. Earlier, this was not possible. So, IT is also helping us a lot in the area of project management.

Can you tell us more about the SCADA initiative?

SCADA means that you have a control system to manage your equipment remotely. By using SCADA, you can minimize outages and isolate the fault areas quickly. Without this technology, you will not know if a fault occurs until somebody calls you to complain. Thanks to SCADA, I can find out the status of all my equipment sitting in my office. We have introduced this in Bangalore.

Do you plan to extend SCADA to other parts of Karnataka?

In the first phase, we have implemented SCADA in Bangalore's urban and rural districts. In Phase II, we will target other places. In the third stage, we may cover smaller sub-stations. Ultimately, we want to cover all places.

Does the Rs 100-crore outlay for SCADA cover the whole state?

No, it covers only Bangalore urban and rural districts. It is hard to guess how much SCADA implementation across the state would cost without inviting bids, but this could be anywhere in the range of Rs 200 crore to 300 crore.

What is the status of your effort to attain 100 percent computer literacy among the KPTCL staff?

We have attained IT literacy of around 60 to 70 percent in most areas. We have targeted employees in A, B and C categories. As per our standards, even if somebody has some basic knowledge, we don't treat him as being computer literate until he operates some system. If an employee has undergone training but his skills are still not up to the mark, we send him for training again. We rely on the certificate from the training company to ascertain how well the employee has learnt IT.

You also have other innovations to your credit, such as IVRS. What has the response been?

We have had a huge response, particularly to IVRS. Thanks to IVRS, the number of complaints has come down. And complaints don't get missed. Before IVRS, complaints had to be written down manually using pen and paper. Since the voice is recorded directly into the system, we can monitor the status of the complaint with the aid of a docket number, which is also generated automatically by the computer. This way, IVRS has helped us to respond to complaints in a timely manner.


SNAPSHOT: KPTCL
Turnover (2005-06): Rs 1,859 crore
Customers served: 134 lakh
Area covered: 1.92 lakh sq km
Sub-stations: 830
Distribution transformers: 150,000
Transmission lines: 33 kV and above: 32,407 km; 11 kV: 130,000 km
Low-tension lines: 357,000 km


When things were manual, a lot of touts were operating and creating problems. The moment such elements were removed from the system, the efficiency automatically went up.

Karnataka Chief Minister H.D. Kumaraswamy recently said he had earmarked Rs 5,700 crore for KPTCL and the ESCOMs to set up new stations. Can you tell us more about IT implementations in these stations?

The biggest advantage to the new stations is that we can integrate IT right from day one. For instance, the new specifications of stations have SCADA built right into the architecture, so we don't have to start a station the old way and then worry about integrating IT automation later.

What are the other plans you have for KPTCL in the IT context?

We have a lot of plans. I have personally listed out best practices and key parameters in my chairman's agenda. The items include distribution transformer audit to reduce losses, upgradation and 'reconductoring' of old conductors in rural areas, introduction of a rural load management system, etcetera. If these things are implemented, we can give 24-hour power supply in rural areas. Many of the IT initiatives that we successfully implemented in Bangalore will be ported to other parts of Karnataka.

Have KPTCL's IT initiatives drawn anything from similar systems in India or abroad?

No. Everything we have done in Bangalore has come from experience. There was an IT task force initiated by the ministry of power, which gave us some useful recommendations. What we did was not something new — actually, many of the things we did have already been around — but we did the implementation effectively.

What is the status of IT initiatives in other urban centers like Mysore, Mangalore and Hubli?

These centers have separate distribution companies or ESCOMs. We plan to follow the same practices there as the ones initiated in Bangalore. This is where the chairman's agenda helps. The roadmap is based on the implementation in Bangalore, which is used as a benchmark to drive efficiency in other ESCOMs.

You spearheaded IT initiatives at KPTCL at a time when IT was viewed with animosity. What is your advice for somebody on a similar journey?

It is easy to list out initiatives, but the big challenge lies in implementation. What we did — and what others should also do — was to closely monitor the situation. When you apply technology, you must be clear about your objectives. Once you know these objectives, you have to understand the difficulties, and this will help you to understand how you want to use the technology. Everything depends on proper usage. You can list out objectives, but until you put the technology to good use, there is no point.

When we first started, we monitored the progress of IT on a weekly basis. Once things stabilized, we started monitoring projects on a fortnightly basis. That was how the message went — since we were serious, we were able to achieve a lot. We have also started outsourcing some activities to private companies now. This has been possible because of computerization, which allowed us to closely monitor the progress and identify activities that could be outsourced. CIO

Special Correspondent Balaji Narasimhan can be reached at [email protected]

Remote management of transformers can go a long way in reducing distribution and transmission losses, and this would not have been possible without IT.


The Shrinking Servers
BY CHRISTOPHER LINDQUIST

DATA CENTER INTELLIGENCE | You may be installing virtualization tools so that one server can do the job of five. You could be using configuration management tools to swap applications from one machine to another, depending on load. You may simply be looking to retire old hardware in order to run apps on new, more energy-efficient multicore systems. But no matter what your strategy, your goals or your tactics, you still have a problem. How the heck do you even know what’s out there to consolidate?

In large, dispersed environments, identifying consolidation opportunities can be a time-consuming job, requiring the combined efforts of engineers and systems architects working with everything from asset management tools to network discovery applications, to performance monitoring utilities, to homegrown spreadsheets, to big, old-fashioned whiteboards in order to determine what pieces of your hardware and software infrastructure might be better off someplace else.

But a new segment of products — called data center intelligence or consolidation management tools — promises to help automate consolidation, freeing up some of your most valuable employees while providing the hard numbers to justify a consolidation project. While these tools come primarily from smaller vendors, the big guys are gearing up to include these functions in their own business systems management suites.

Done well, consolidation and virtualization can cut computing costs while improving performance.

Essential Technology | From Inception to Implementation — IT That Matters


Here's the hot news from the consolidation front.

A Consolidation Tale
Bell Mobility, one of the largest mobile phone service providers in Canada, had a problem. More accurately, it had one problem (or crash) after another. Recovering some critical systems could take hours, disrupting services and costing the company serious money — Rs 9.45 crore per day for one system alone. So in 2005, Bell Mobility launched a study to find a way to speed disaster recovery. One of the recommendations that emerged was to consolidate applications and servers in order to centralize recovery efforts, with the hoped-for side effect of improving server utilization.

At the beginning of the study, Bell brought in a consultancy to evaluate its operations and offer suggestions for where consolidation made sense. Michel Tremblay, manager for OSS network engineering at Bell Mobility, thinks that’s a good way to go. “If you support those systems, it’s hard to say, I’m going to cut off my system by so much percent,” he says. It’s more likely, Tremblay says, that an internal IT staff with relationships to the systems might be tempted to say, ‘It’s working, so why should I do this?’

Disaster recovery benefits aside, estimates showed that if the company could consolidate its server pool by 25 percent (its goal for 2006), it would save Rs 6.45 crore in hardware replacement expenses over two years — even without including ongoing support costs for systems that would no longer exist. Numbers like those put Bell on a path to find a tool that could help identify which systems were ripe for consolidation. Capacity analysis and asset management tools could do part of the job, but Bell found a product from a small vendor that seemed to address the heart of the issue.

Tool Talk
Bell originally had used startup Cirba's Data Center Intelligence tool purely for auditing purposes. But Cirba believed its tool could also do capacity planning and analysis.

“They took it away and came back with a quick tool that could do just that,” says Bell Senior Systems Analyst Lou Fachin.

Cirba claimed that its new tools could provide data center intelligence: detailed reporting of asset utilization in the data center combined with cross-referenced information about what systems could be consolidated based on factors such as a server’s operating system version, its utilization percentage, its available memory, or seemingly trivial but often critical details such as the time zone setting of the system clock. The tool generates reports that help users identify consolidation opportunities without resorting to extended whiteboard sessions or trial and error. “What I really like is the way they can set you up for consolidation,” says Andi Mann, senior analyst at IT consultancy Enterprise Management Associates. “The Cirba stuff gives you some easy-to-use graphics and metrics on utilization and compatibility. It provides sort of a one-stop shop for this specific functionality.”
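Cirba's actual rule set isn't described in detail here, but the kind of cross-referencing the article mentions (OS version, utilization headroom, memory, even the clock's time zone) can be approximated with a simple compatibility check. The attributes and thresholds in the sketch below are assumptions, not Cirba's.

```python
from dataclasses import dataclass

# Toy version of the cross-referencing a data center intelligence tool performs:
# two servers are consolidation candidates only if their configurations match and
# their combined utilization leaves headroom. Attributes and thresholds are arbitrary.

@dataclass
class Server:
    name: str
    os_version: str
    cpu_utilization: float   # average, 0.0 to 1.0
    free_memory_gb: float
    timezone: str

def can_consolidate(a, b, max_combined_util=0.7, min_free_memory_gb=2.0):
    return (
        a.os_version == b.os_version
        and a.timezone == b.timezone
        and a.cpu_utilization + b.cpu_utilization <= max_combined_util
        and min(a.free_memory_gb, b.free_memory_gb) >= min_free_memory_gb
    )

if __name__ == "__main__":
    web = Server("web-01", "RHEL 4", 0.12, 6.0, "IST")
    app = Server("app-07", "RHEL 4", 0.18, 4.0, "IST")
    db = Server("db-02", "Solaris 10", 0.55, 8.0, "IST")
    for x, y in [(web, app), (web, db)]:
        verdict = "candidate pair" if can_consolidate(x, y) else "skip"
        print(f"{x.name} + {y.name}: {verdict}")
```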

Consolidation-specific tools aren't a necessity, however, particularly for smaller companies, says Michael Minichino, director of infrastructure at marketing services provider Parago. Heading into 2006, Minichino had new IT initiatives slated that would require either more space or better use of the existing racks. Minichino was intrigued by the power-per-rack-space claims of new Sun hardware — the T2000 series of servers — and decided to try the latter route.

But he didn’t bother looking for a tool to help him figure his savings; he went straight to spreadsheets. “We’ve purchased an asset management tool to track workstations,” says Minichino. “[But] I haven’t really seen anything that would give me more of a return than spreadsheets.” His calculations led him to cut 10 servers from his co-location facility.

For anyone looking to jump into the consolidation toolset on the cheap, Sun offers a free, downloadable Sim Datacenter Java application that can calculate the power, heat and space requirements of your current data center versus one with different hardware.
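Sun's tool itself isn't shown here, but the what-if it performs is straightforward arithmetic: total wattage, the heat that wattage becomes (roughly 3.412 BTU per hour per watt), and rack space, compared before and after a hardware swap. The server profiles in the sketch below are invented for illustration.

```python
# A what-if comparison in the spirit of the tool mentioned above: total power, heat
# and rack space for the current hardware versus a proposed replacement.
# Server profiles are invented for illustration.

BTU_PER_WATT = 3.412   # approximate conversion from watts to BTU per hour of heat

def footprint(servers):
    """servers: list of (count, watts_each, rack_units_each) tuples."""
    watts = sum(count * w for count, w, _ in servers)
    rack_units = sum(count * ru for count, _, ru in servers)
    return {"kW": watts / 1000, "BTU/hr": watts * BTU_PER_WATT, "rack units": rack_units}

if __name__ == "__main__":
    current = footprint([(40, 800, 4), (60, 450, 2)])   # aging boxes
    proposed = footprint([(25, 400, 1)])                # denser, newer hardware
    for label, f in (("current", current), ("proposed", proposed)):
        print(label, {k: round(v, 1) for k, v in f.items()})
```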

Larger companies may see consolidation tools as a way of saving time and effort. As part of an application consolidation and tracking effort, David O'Neill, executive director of IT at Boise State University, used a service discovery tool from startup software vendor nLayers. Previously, identifying assets and connections between systems for audit or troubleshooting purposes meant making demands on systems engineers.

Kill the Beige Ones First
A simple, practical scheme for hardware consolidation.

Neal Tisdale, VP of software development at NewEnergy Associates, an energy market services provider owned by Siemens, didn't require a lot of analysis to determine which of his machines needed consolidation. He found many of his best candidates by their color: beige.

"We looked at our oldest, beige, putty-colored, 1990s highest-wattage, lowest-performance servers and started virtualizing those," says Tisdale.

And while he's aware that a wide variety of tools exist for measuring nearly every aspect of data center performance, he says that just by getting rid of the old boxes (23 servers consolidated to a pair of Sun 4100s running VMware virtualization software) he avoided an expensive upgrade to the cooling and power systems in his data center. That in turn helped him to postpone buying extra tools.

About the only consolidation tool Tisdale will recommend is a physical-to-virtual conversion tool called PowerConvert from startup PlateSpin, which helped him completely clone some older boxes right down to the MAC addresses on their network cards, thereby saving him from having to recreate from scratch ancient hardware configurations in a virtual space. PowerConvert is "a good time- and risk-saver," says Tisdale.

— C.L.


"You put your engineering staff at the whiteboard, give them a couple cans of soda, and they spend all day drawing pictures," says O'Neill.

With the nLayers tool, O’Neill is able to “let the machine do the inventory”, and he can dedicate his engineers to more important tasks. nLayers also claims that its products can map the connections between systems and identify underutilized servers.

If all these tools sound to you like features that should be part of larger-scale asset management, configuration management or business service management tools, BMC Software, IBM and other large vendors want to meet you.

BMC says it already has a suite of tools capable of initial device discovery, performance monitoring and analysis, configuration management and ongoing optimization. What BMC’s suite lacks, according to Dave Wagner, solutions management director for capacity management and provisioning at BMC, is an easy interface to tie all those operations together. But, he says, customers can expect to see bigger vendors expand and improve their product lines, while the smaller vendors will consolidate or cooperate in order to provide the more wide-ranging management solution large corporations will need.

Who's Being Served?
No matter how insightful they may become about the technical configuration of your infrastructure, these tools can never map your consolidation efforts to the political and contractual landscape of your corporation.

A word to the wise: get in touch with the server owners well before you intend to absorb their beloved boxes and applications into your data center. This will help smooth your path as well as help you identify relatively early on in the process if there are good reasons (compliance, security or otherwise) for keeping some seemingly underutilized hardware right where it is.

It's also worth noting that internal politics might be the least of your hurdles. "Quite often, [the difficulties lie in] vendor relations," says Bell Mobility's Tremblay. He notes that in one case, Bell Mobility discovered a system comprising 19 servers installed for one application by a vendor that Tremblay's team found could be consolidated to nine. Such inefficient installations will be history, says Tremblay, noting that Bell has created a policy of examining vendor architecture plans for efficiency before it agrees to an implementation. "[Vendors] have to start thinking of redesigning what they are selling" to make systems more efficient, he says.

If every IT organization does the same, your next consolidation effort could be your last. CIO

Send your feedback on this feature to [email protected]


Virtualization Reality Setting In
Analyst says hardware vendors could be hit.

The virtualization phenomenon is getting too real for hardware server builders, as one Canadian research firm predicts server sales will fall by the end of next year.

More companies now realize the benefits of server optimization and consolidation to attain the full potential of their hardware investments. Virtualization technology has emerged as an effective way of implementing consolidation and maximizing computing capacity while reducing server count, according to Darin Stahl, research analyst at London, Ontario-based Info-Tech Research.

"Some [hardware vendors] are going to take the hit on [increasing server virtualization]. Obviously, from consolidating and virtualizing, customers are buying less hardware," explained Stahl. The analyst added that even the bigger margins that the manufacturers have from selling bigger-capacity servers "is not going to make up for the loss of sales" from multiple servers.

Server virtualization tools allow organizations to reduce the number of boxes within their IT environments by creating virtual instances of servers within one or two high-capacity x86 physical servers. This effort enables organizations to efficiently utilize and manage server capacities which, according to market research firm IDC, have generally been underutilized, running at only 10 to 20 percent of capacity.

An organization with 60 distributed physical servers, for example, can implement virtualization and end up with only two multiprocessor servers running 10 virtual servers, read a recent Info-Tech Research paper entitled 'The ROI of Server Consolidation'. And each of those virtual servers could have its processing power and storage capacity raised or lowered as necessary.

"With virtualization, the enterprise can achieve reduction in the number of physical servers by a factor of five, 10, and even 20 to one," the report stated. Such reduction, it added, spells definite cost savings in support and server maintenance.

The same research document claimed that through virtualization, organizations could reduce server asset requirements and administrative support by up to 40 percent. "If, for example, an enterprise spends $50,000 in new server acquisitions per year, the enterprise can reduce this amount to $30,000 per year. If an enterprise were to reduce IT network operational overhead from 10 staff to six, it could realize savings of up to $40,000."

"As server consolidation initiatives close out, x86 server builders will face a decline in server volumes by the end of 2007," according to Stahl.

Virtualization software developer VMware has developed a 'relationship' with hardware makers in an effort to work on standards that enable integration between server hardware and the virtualization tools, said Brian Byun, VP (products and alliances) at VMware. "You'd think that [hardware manufacturers] might not like [virtualization], but we accelerate the refresh of hardware. They may sell less hardware, but they are now selling them in larger configurations," said Byun.

— Mari-len De Guzman
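The Info-Tech example above reduces to simple arithmetic. The sketch below restates it (server purchases falling from $50,000 to $30,000 a year, operations staff from 10 to 6); the per-head cost is an assumption chosen only so the staffing saving matches the report's 'up to $40,000' figure.

```python
# Restating the Info-Tech example above as arithmetic: hardware savings plus the
# value of reduced operational headcount after virtualization.

def virtualization_savings(server_spend_before, server_spend_after,
                           staff_before, staff_after, cost_per_staff):
    hardware = server_spend_before - server_spend_after
    operations = (staff_before - staff_after) * cost_per_staff
    return hardware + operations

if __name__ == "__main__":
    # cost_per_staff is an assumption picked so that 10 -> 6 staff matches the
    # report's 'up to $40,000' figure.
    total = virtualization_savings(50_000, 30_000, 10, 6, cost_per_staff=10_000)
    print(f"Estimated annual saving: ${total:,.0f}")   # $20,000 + $40,000 = $60,000
```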


Pundit

The Endpoint of Endpoint Security

When vendors come up with jargon like endpoint security, it's essential to deconstruct it.

BY SCOTT BERINATO

SECURITY | If you've ever watched youth soccer, you instantly understand the term 'swarm ball'. Security marketing isn't so different from youth soccer. Vendors swarm toward the ball — the jargon that will resonate with buyers — and then have a mad battle to control it, only to have the ball squirt out in another direction, whereupon they all swarm that way. Recently, the ball was compliance solutions, until the swarm caught up and jarred that around, and eventually kicked it out to where the vendors swarm now: endpoint security.

Look, endpoint security is jargon, and jargon is spin — an attempt to create buzz while downplaying potential negatives. It’s antithetical to substance. It’s saying certified pre-owned when what you mean is used. So we work with it, with our grain of salt, but it’s useful to deconstruct jargon.

Endpoint security, for example, doesn't carry the baggage of older, more specific terms for products that did zilch to dam a rather steady and torrential flow of security failures. Antivirus sounds positively antediluvian. Intrusion detection implies there's already been an intrusion, and intrusion prevention sounds an awful lot like a quixotic epithet for a firewall. One way or another, all these products and others like patch management and anti-spam are associated with failure, management burden and, of course, money spent. For what? Endpoint security, on the other hand, might comprise some of these products while carrying none of the negative connotations of those products.

There’s a more manipulative progenitor of new jargon: the analyst community.

White papers, market reports and mystical squares can get crowded, and the big vendors often dominate them. But what if there were more squares? “No, no,” says Stu, the vendor sales and marketing guy. “We don’t belong in the same category as BehemothCo, because they do IPS, and we’re more of a dynamic endpoint security solutions provider.”

Magically, a new quadrant is born, and Stu's company is rocking in that one, according to an analyst report. (Or, do the analysts themselves create these new categories to attract new clients?) It's worth going back to endpoint security's vagueness. When you think of it, it's bold to call something intrusion detection. Because what if it doesn't? Endpoint security, on the other hand, doesn't promise anything, so it can't really fail.

I can anticipate one reaction to my sportive dig at vendors: Stu would say, “Look, Mr. Cynical, this is just semantics. We have to call products something, so why not focus on the positive? Would you have the cola companies call their product category Cavity-Causing Beverages? It’s just marketing. What’s your problem?”

I respectfully disagree. The words we use, in many ways, show what we are. And that matters in an information security industry that profits not from fixing the problem but from perpetuating it; that’s slow to adapt to new and converged threats; that attacks the problem at its frayed edges, implicitly indicting end users instead of addressing the inherent flaws at the core of infrastructure; that’s happy to sell post-facto bandages instead of creating a culture of preventive health. In an industry like that, words like endpoint security speak volumes.

They just don’t say anything. CIO

Scott Berinato is a senior writer for CSO. Send feedback on this column to [email protected].

The words we use, in many ways, show what we are. And that matters in an information security industry that profits not from fixing the problem, but from perpetuating it.
