
Chapter 3: Into and Beyond the New Millennium

Introduction

This chapter concludes our historical look at systems management by presenting key IT developments in the decade of the 1990s along with a few predictions for the new millennium. This is worth reviewing since the future direction of IT is where many of us will spend much of our professional lives.

We begin our final chapter in Part One with the ever-evolving mainframe, showing how it adapted to the major IT influences of the 1990s. Among these influences was the tendency of computer centers to move toward a more automated management of operations. This focus on a lights-out type of computing environment led to a more intense emphasis on facilities management, the last systems management discipline we will cover in this book.

Next we discuss how midrange computers and client-server platforms changed and moved closer to each other. PCs continued to grow substantially during the 1990s, not only in sheer volume but also in the assortment of models that manufacturers offered to their customers.


Network computers (the so-called thin clients), desktop workstations (the so-called fat clients), and almost every variation in between saw their birth in this time frame.

The proliferation of interconnected PCs placed critical emphasis on the performance and reliability of the networks that connected all these various clients to their servers. As a result, the discipline of capacity planning expanded into the broader arena of networks.

Undoubtedly the most significant IT advancement in the 1990s was the worldwide use of the Internet. In the span of a few short years, companies went from dabbling in limited exposure to the Internet to becoming totally dependent on its benefits and value. Many firms have spawned their own intranets and extranets. Security, availability, and capacity planning are just some of the disciplines touched by this incredibly potent technology.

No discussion of the new millennium would be complete without mention of the ubiquitous Y2K computer problem, the so-called millennium bug. Although the problem was identified and addressed in time to prevent widespread catastrophic impacts, its influence on a few of the systems management disciplines is worth noting.

This chapter concludes with a table of all of the systems management functions that will be discussed in detail in upcoming chapters. The table summarizes the origins of each discipline and the major refinements each has undergone during the past 40 years.

Reinventing the Mainframe

The 1990s witnessed the world of mainframes coming full circle. During the 1970s many companies centralized their large processors in what became huge data centers. Favorable economic growth in the 1980s allowed many businesses to expand both regionally and globally. Many IT departments in these firms decentralized their data centers to distribute control and services locally. Driven by the need to reduce costs in the 1990s, many of these same companies then recentralized into even larger data centers.

The same business pressures that drove mainframes back into centralized data centers also forced data center managers to operate their departments more efficiently than ever. One method they employed to accomplish this was to automate parts of the computer operations organization. Terms such as automated tape libraries, automated consoles, and automated monitoring started becoming common in the 1990s.

As data centers gradually began operating more efficiently and more reliably, so did the facilities function of these centers. Environmental controls for temperature, humidity, and electrical power became more automated and integrated into the overall management of the data center. The function of facilities management had certainly been around in one form or another in prior decades, but during the 1990s it emerged and progressed as a highly refined function for IT operations managers.

Mainframes themselves also went through some significant refinements during this time frame. One of the most notable advances again came from IBM. An entirely new architecture that had been in development for over a decade was introduced as System/390 (S/390), with its companion operating system OS/390. Having learned their lessons about memory constraints in years past, this time around IBM engineers planned to stay ahead of the game by designing a 48-bit memory addressing field into the system. This gives OS/390 the capability to one day address approximately 281 trillion bytes of memory.
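
As a quick check on that figure, a 48-bit address field can reference at most 2^48 distinct byte locations; the short Python sketch below simply works out the arithmetic:

```python
# Quick check of the address-space figure: a 48-bit address field can
# reference at most 2**48 distinct byte locations.
max_bytes = 2 ** 48
print(f"{max_bytes:,} bytes")            # 281,474,976,710,656 bytes (~281 trillion)
print(f"{max_bytes / 2 ** 40:.0f} TiB")  # 256 TiB
```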

Will this finally be enough memory to satisfy the demands of the new millennium? Only time will tell, of course. But there are currently systems in development by other manufacturers with 64-bit addressing schemes. I personally feel that 48-bit memory addressing will not be the next great limiting factor in IT architectures; more likely it will be either the database or network arenas that will apply the brakes.

IBM's new architecture also introduced a different hardware technology for its family of processors. It is based on complementary metal-oxide semiconductor (CMOS) technology. While not as new or even as fast as other circuitry IBM has used in the past, it proves to be relatively inexpensive and extremely low in heat generation. Heat has often plagued IBM and others in their pursuit of the ultimate fast and powerful circuit design. CMOS also lends itself to parallel processing, which more than compensates for its slightly slower speeds.

Other features were designed into the S/390 architecture that helped prepare it for some of the emerging technologies of the 1990s. One of these was the more robust use of high-speed fiber-optic channels for I/O operations. Another was the integrated interface for IBM's version of UNIX, which it called AIX for Advanced Interactive Executive. A third noteworthy feature of the S/390 was its increased channel port capacity. This was intended to handle anticipated increases in remote I/O activity, primarily due to higher Internet traffic.

The Changing of Midrange and Client-Server Platforms

The decade of the 1990s saw many companies transforming themselves in several ways. Downsizing and rightsizing became common buzzwords as global competition forced many industries to cut costs and become more productive. Acquisitions, mergers, and even hostile takeovers became more commonplace as industries as diverse as aerospace, communications, and banking endured corporate consolidations.

The push to become more efficient and streamlined drove many companies back to centralizing their IT departments. The process of centralization forced many an IT executive to take a long, hard look at the platforms in their shops. This was especially true when merging companies attempted to combine their respective IT shops. It was important to evaluate which types of platforms would work best in a new consolidated environment and what kinds of improvements each platform type offered.

By the early 1990s midrange and client-server platforms had both shown improvements in two areas. One was the volume of units shipped, which had more than doubled in the previous 10 years. The other was in the types of applications now running on these platforms. Companies that had previously run almost all of their mission-critical applications on large mainframes were now migrating many of these systems onto smaller midrange computers or, more commonly, onto client-server platforms.

The term server itself was going through a transformation of sorts at about this time. As more data became available to clients—both human and machine—for processing, the computers managing and delivering this information started being grouped together as servers, regardless of their size. In some cases even the mainframe was referred to as a giant server.

As midrange computers became more network oriented, and as the more traditional application and database servers became more powerful, the differences in managing the two types began to fade. This was especially true in the case of storage management. Midrange and client-server platforms were by now both running mission-critical applications and enterprise-wide systems. Backing up their data was becoming just as crucial a function as it had been for years on the mainframe side.

The fact that storage management was now becoming so important on such a variety of platforms refined the storage management function of systems management. The discipline evolved into an enterprise-wide process involving products and procedures that no longer applied just to mainframes but to midrange and client-server platforms as well. Later in the decade, Microsoft introduced its new NT architecture that many in the industry presumed stood for New Technology. As NT became prevalent, storage management was again extended and refined to include this new platform offering.

The Growing Use of PCs and Networks

By the time the last decade of the 20th century rolled around, PCs and the networks that interconnected them had become ingrained in everyday life. Computer courses were being taught in elementary schools for the young, in senior citizen homes for the not so young, and in most every institution of learning in between.

Not only had the number of PCs grown substantially during these past 10 years, but so also had their variety. The processing power of desktop computers changed in both directions. Those requiring extensive capacity for intensive graphics applications evolved into powerful—and expensive—PC workstations. Those needing only minimal processing capability, such as for data entry, data inquiries, or Internet access, were offered as less expensive network computers.

These two extremes of PC offerings are sometimes referred to as fat clients and thin clients, and there is a wide variation of choices between them. Desktop models were not the only type of PC available in the 1990s. Manufacturers offered progressively smaller versions commonly referred to as laptops, for portable use; palmtops, for personal scheduling (greatly popularized by the PalmPilot personal digital assistant); and thumbtops, an even smaller, though less popular, offering. These models provided users with virtually any variation of PC they required or desired.


Laptop computers alone had become so popular by the end of the decade that most airlines, hotels, schools, universities, and resorts had changed policies, procedures, and facilities to accommodate the ubiquitous tool. Many graduate schools taught courses that actually required laptops as a prerequisite for the class.

As the number and variety of PCs continued to increase, so did their dependence on the networks that interconnected them. This focused attention on two important functions of systems management: availability and capacity planning.

Availability now played a critical role in ensuring the productivity of PC users in much the same way as it did during the evolution from mainframe batch applications to online systems. However, the emphasis on availability shifted away from the computers themselves since most desktops had robust redundancy designed into them. Instead, the emphasis shifted to the availability of the LANs that interconnected all these desktop machines.

The second discipline that evolved from the proliferation of networked PCs was capacity planning. It too now moved beyond computer capacity to that of network switches, routers, and especially bandwidth.

The Global Growth of the Internet

Among all the numerous advances in the field of IT, none has had more widespread global impact than the unprecedented growth and use of the Internet.

What began as a relatively limited-use tool for sharing research information between Department of Defense agencies and universities has completely changed the manner in which corporations, universities, governments, and even families communicate and do business.

The commonplace nature of the Internet in business and in the home generated a need for more stringent control over access. The systems management function of security evolved as a result of this need, providing network firewalls, more secure password schemes, and more effective corporate policy statements and procedures.

As the capabilities of the Internet grew during the 1990s, the demand for more sophisticated services intensified. New features such as voice cards, animation, and multimedia became popular among users and substantially increased network traffic. The need to more effectively plan and manage these escalating workloads on the network brought about additional refinements in the functions of capacity planning and network management.

Lingering Effects of the Millennium Bug

By 1999 there were probably few people on the planet who had not at least heard something about the Y2K computer problem. While many are by now at least somewhat aware of what the millennium bug involved, I will describe it here for readers who may not fully appreciate the source and significance of the problem.

As mentioned earlier, the shortage of expensive data storage, both in main memory and on disk devices, led application software developers in the mid-1960s to conserve on storage space. One of the most common methods was to shorten the length of a date field in a program. This was accomplished by designating the year with only its last two digits. The years 1957 and 1983, for example, would be coded as 57 and 83, respectively. This would reduce the 4-byte year field down to 2 bytes.

Taken by itself, a savings of 2 bytes of storage may seem relatively insignificant. But date fields were very prevalent in numerous programs of large applications, sometimes occurring hundreds or even thousands of times. An employee personnel file, for example, may contain separate fields for an employee's date of birth, company start date, company anniversary date, company seniority date, and company retirement date, all embedded within each employee record. If the company employed 10,000 workers, it would be storing 50,000 date fields in its personnel file. The 2-byte savings from the contracted year field would total 100,000 bytes, or about 100 kilobytes of storage, for a single employee file.
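
The arithmetic behind that estimate can be restated as a short sketch; the figures below are simply the illustrative numbers used above, not data from any real personnel system:

```python
# Personnel-file example: 10,000 employees, five date fields per record,
# 2 bytes saved per date field by dropping the century digits.
employees = 10_000
dates_per_record = 5
bytes_saved_per_date = 2

total_date_fields = employees * dates_per_record              # 50,000 date fields
total_bytes_saved = total_date_fields * bytes_saved_per_date  # 100,000 bytes, about 100 KB
print(total_date_fields, total_bytes_saved)
```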

Consider an automotive or aerospace company that could easily have millions of parts associated with a particular model of a car or an airplane. Each part record may have fields for a date of manufacture, a date of installation, a date of estimated life expectancy, and a date of actual replacement. The 2-byte reduction of the year field in these inventory files could result in millions of bytes of savings in storage.


But storage savings was not the only reason that a 2-byte year field was used in a date record. Many developers needed to sort dates within their programs. A common scheme used to facilitate sorting was to compose the date in a yy-mm-dd format. The 2-digit year gave a consistent, easy-to-read, and easy-to-document form for date fields, not to mention slightly faster sort times due to fewer bytes to compare.
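
To see why the scheme worked within a single century but breaks at the century boundary, here is a minimal illustration in modern Python (the legacy programs themselves were typically written in languages such as COBOL, so this is only a sketch of the logic, not the original code):

```python
# Dates stored as yy-mm-dd text sort correctly as long as every record
# falls within the same century...
twentieth_century = ["85-06-01", "97-03-15", "99-12-31"]
print(sorted(twentieth_century))  # ['85-06-01', '97-03-15', '99-12-31']

# ...but a January 2000 record ("00-01-15") sorts ahead of every 1900s
# record, because the text "00" compares lower than "85" or "99".
mixed = ["99-12-31", "85-06-01", "00-01-15"]
print(sorted(mixed))              # ['00-01-15', '85-06-01', '99-12-31']

# Four-digit years restore the intended chronological order.
four_digit = ["1999-12-31", "1985-06-01", "2000-01-15"]
print(sorted(four_digit))         # ['1985-06-01', '1999-12-31', '2000-01-15']
```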

Perhaps the most likely reason that programmers were not worried about truncating their year fields to 2 digits was that few, if any, thought that their programs would still be running in production 20, 30, or even 40 years later. After all, the IT industry was still in its infancy in the 1960s when many of these programs were first developed. IT was rapidly changing on a year-to-year and even month-to-month basis. Newer and improved methods of software development—such as structured programming, relational databases, fourth-generation languages, and object-oriented programming—would surely replace these early versions of production programs and make them obsolete.

Of course, hindsight is most often 20/20. What we know now but did not fully realize then was that large corporate applications became mission-critical applications to many businesses during the 1970s and 1980s. These legacy systems grew so much in size, complexity, and importance over the years that it became difficult to justify the large cost of their replacement.

The result was that countless numbers of mission-critical programs developed in the 1960s and 1970s with 2-byte year fields were still running in production by the late 1990s. This represented the core of the problem: the year 2000 in most of these legacy programs would be interpreted as the year 1900, causing programs to fail, lock up, or otherwise give unpredictable results.
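
A hypothetical sketch of that failure mode follows; it is not code from any actual legacy system, just an illustration of what happens when a program stores only the last two digits of the year and assumes the century is always 19xx:

```python
# Hypothetical legacy-style calculation that assumes every two-digit
# year belongs to the 1900s.
def years_of_service(start_yy: int, current_yy: int) -> int:
    return (1900 + current_yy) - (1900 + start_yy)

print(years_of_service(65, 99))  # 34: hired in 1965, evaluated in 1999 -- correct
print(years_of_service(65, 0))   # -65: the year 2000 is read as 1900, so the
                                 #      result is negative instead of the correct 35
```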

A massive remediation and replacement effort began in the late 1990s to address the Y2K computer problem. Some industries such as banking and airlines spent up to four years correcting their software to ensure it ran properly on January 1, 2000. These programming efforts were, by and large, successful in heading off any major adverse impacts of the Y2K problem.

There were also a few unexpected benefits from the efforts that addressed the millennium bug. One was that many companies were forced to conduct a long-overdue inventory of their production application profiles. Numerous stories surfaced about IT departments discovering from these inventories that they were supporting programs no longer needed or not even being used. Not only did application profiles become more accurate and up to date as a result of Y2K preparations, but the methods used to compile these inventories also improved.

A second major benefit of addressing the Y2K problem was the refinement it brought about in two important functions of systems management. The first was change management. Remedial programming necessitated endless hours of regression testing to ensure that the output of the modified Y2K-compliant programs matched the output of the original programs. Upgrading these new systems into production often resulted in temporary back-outs and then reupgrading. Effective change management procedures helped to ensure that these upgrades went smoothly and permanently when the modifications were done properly; they also ensured that the back-outs were done effectively and immediately when necessary.

The function of production acceptance was also refined as a result of Y2K. Many old mainframe legacy systems were due to be replaced because they were not Y2K compliant. Most of these replacements were designed to run on client-server platforms. Production acceptance procedures were modified in many IT shops to better accommodate client-server production systems. This was not the first time that mission-critical applications were implemented on client-server platforms, but the sheer increase in numbers due to Y2K compliance forced many IT shops to make their production acceptance processes more streamlined, effective, and enforceable.

Timelining the Disciplines of Systems Management

In Table 3-1 we summarize the timelines of the emergence and evolution of the principal disciplines of systems management. The implementation and management of each of these functions will be discussed in detail in Part 3.


Table 3-1 Emergence and Evolution of Systems Management Disciplines

Discipline                   Emergence and Evolution (1960s through 2000s)
Availability                 Emerged (for batch); evolved (for onlines); evolved (for networks)
Performance and Tuning       Emerged (for batch); evolved (for onlines); evolved (for networks)
Storage Management           Emerged; evolved (for client-server)
Change Management            Emerged
Problem Management           Emerged
Strategic Security           Emerged; evolved (for networks); evolved (for Internet)
Configuration Management     Emerged
Network Management           Emerged
Production Acceptance        Emerged; evolved (for client-server); evolving (for rapid application deployment)
Disaster Recovery            Emerged; evolved (for midrange and client-server); evolving (for networks)
Capacity Planning            Emerged; evolved (for networks)
Facilities Management        Emerged; evolving (for automated operations)


Summary

This chapter described some of the many significant 1990s IT advancements that had a major influence on the disciplines of systems management. The trend to recentralize and optimize the corporate computer centers of the new millennium led to highly automated data centers. This emphasis on automation led to increased scrutiny of the physical infrastructure of data centers, which in turn led to a valuable overhaul of facilities management. As a result, most major computer centers have never been more reliable in availability, nor more responsive in performance. Most have never been more functional in design nor more efficient in operation.

The 1990s saw the explosive global growth of the Internet. The widespread use of cyberspace led to improvements in network security and capacity planning. Midrange computers and client-server platforms moved closer toward each other technology-wise and pushed forward the envelope of enterprise-wide storage management. The popularity of Unix as the operating system of choice for servers, particularly application and database servers, continued to swell during this decade. IBM adapted its mainframe architecture to reflect the popularity of Unix and the Internet.

PCs proliferated at ever-increasing rates, both in numbers sold and in the variety of models offered. Client PCs of all sizes were now offered as thin, fat, laptop, palmtop, or even thumbtop. The interconnection of all these PCs with each other and with their servers caused capacity planning and availability for networks to improve as well.

The final decade of the second millennium concluded with a flurry of activity surrounding the seemingly omnipresent millennium bug. As most reputable IT observers correctly predicted, the problem was sufficiently addressed and resolved for most business applications months ahead of the year 2000. The domestic and global impacts of the millennium bug have been relatively negligible to date. The vast efforts expended in mitigating the impact of Y2K did help to refine both the production acceptance and the change management functions of systems management.

This chapter concludes the first part of this book, which presented a unique historical perspective of some of the major IT developments that took place during the latter half of the 20th century and that had major influence on initiating or refining the disciplines of systems management. Table 3-1 summarized a timeline for the 12 functions that are described in detail in Part 3. But first, we'll look at some of the key people issues involved with effectively implementing the functions of systems management.
