
The evolution of storage systems

by R. J. T. Morris and B. J. Truskowski

Storage systems are built by taking the basic capability of a storage device, such as the hard disk drive, and adding layers of hardware and software to obtain a highly reliable, high-performance, and easily managed system. We explain in this paper how storage systems have evolved over five decades to meet changing customer needs. First, we briefly trace the development of the control unit, RAID (redundant array of independent disks) technologies, copy services, and basic storage management technologies. Then, we describe how the emergence of low-cost local area data networking has allowed the development of network-attached storage (NAS) and storage area network (SAN) technologies, and we explain how block virtualization and SAN file systems are necessary to fully reap the benefits of these technologies. We also discuss how the recent trend in storage systems toward managing complexity, ease-of-use, and lowering the total cost of ownership has led to the development of autonomic storage. We conclude with our assessment of the current state of the art by presenting a set of challenges driving research and development efforts in storage systems.

The first data storage device was introduced by IBM in 1956. Since then there has been remarkable progress in hard disk drive (HDD) technology, and this has provided the fertile ground on which the entire industry of storage systems has been built. Storage systems are built by taking the raw storage capability of a storage device such as the HDD and by adding layers of hardware and software in order to obtain a system that is highly reliable, has high performance, and is easily manageable. Storage systems are sometimes referred to as storage subsystems or storage devices (although device is better used to describe the raw storage component or an elementary storage system). Originally, the storage system was just the HDD, but over time storage systems have developed to include advanced technologies that add considerable value to the HDD. Storage systems have evolved to support a variety of added services, as well as connectivity and interface alternatives. It is for this reason that file systems and storage management systems are often considered parts of a storage system and thus will be briefly treated in this paper.

To understand the evolution of storage systems, it is important to observe the evolution of the HDD. The areal density of the HDD has improved by seven orders of magnitude, and this has resulted in a reduction of the floor space taken by the corresponding storage systems also by about seven orders of magnitude. Figure 1 plots the HDD areal density (on the left) and the price of various storage devices (on the right) since 1980. HDD prices have decreased by about five orders of magnitude since 1980, while the cost of storage systems has fallen about 2.5 orders

© Copyright 2003 by International Business Machines Corporation. Copying in printed form for private use is permitted without payment of royalty provided that (1) each reproduction is done without alteration and (2) the Journal reference and IBM copyright notice are included on the first page. The title and abstract, but no other portions, of this paper may be copied or distributed royalty free without further permission by computer-based and other information-service systems. Permission to republish any other portion of this paper must be obtained from the Editor.

IBM SYSTEMS JOURNAL, VOL 42, NO 2, 2003 0018-8670/03/$5.00 © 2003 IBM MORRIS AND TRUSKOWSKI 205

Page 2: The evolution of storage systems - Semantic Scholar€¦ · The evolution of storage systems by R. J. T. Morris B. J. Truskowski Storage systems are built by taking the basic capability

of magnitude in the same period.1 The sharper fall in the price of the HDD implies that the cost of the raw HDD accounts for a progressively smaller fraction of the total cost of the storage system; we will return to this crucial observation later.

Although Moore’s law tells us that the number of transistors per unit area of silicon doubles every 1.5 years, we see from Figure 1 that the number of bits stored per unit of HDD media is doubling about every year! But more important than improvements in device density or cost have been the new applications that have been enabled by these advances. On the time line in Figure 1, two milestones stand out. In 1996, digital storage became more cost-effective for storing data than paper, and, in 1998, we reached the point where film used in medical radiology could be economically supplanted by electronic means. Another important milestone, this one in the consumer market, was reached several years ago when it became cost effective to store video content using digital storage systems. Soon after we saw the emergence of HDD-based set-top-box devices (sometimes called personal video recorders) that offered improved management of entertainment video in the home.

Besides the emergence of storage systems that enable digitization and replacement of legacy media, it is instructive to consider how architectural outcomes are affected by the relative progress of technologies. Figure 2 portrays the relative advances in storage, processor, and communications technologies, obtained by plotting the cost/performance improvement of widely available technologies in end user products as documented by personal computer (PC) magazines since 1983. (Broadband to the home is considered to have limited availability at present.) The plot shows that since 1990, storage technology has outrun both communications and processor technologies. In fact the availability of inexpensive digital storage has influenced the architectures that we see in place today. For example, because storage technology has become relatively inexpensive while deployment of point-to-point broadband to homes has been slow, HDD-based set top boxes are more prevalent than video-on-demand. The parallel story of technology in the enterprise is not shown but leads to a similar conclusion: the amount of stored data has outrun the ability of communications and processing systems to provide easy access to data. Thus, we see widespread use of multiple copies of data (e.g., Lotus Notes* replication, widespread internet

Figure 1 HDD storage density is improving at 100 percent per year (currently over 100 Gbit/in2). The price of storage is decreasing rapidly and is now significantly cheaper than paper or film.

[Figure 1 shows two panels spanning 1980–2010. Left: HDD areal density (Gb/in²) on a log scale, with separate lab-demo and product tracks, progressing from 25% CGR to 60% CGR to 100% CGR, with progress from breakthroughs including MR and GMR heads and AFC media. Right: storage device prices (dollars per megabyte) for DRAM, flash, 3.5" HDD, 2.5" HDD, and the 1" Microdrive, plotted against the range of paper/film; since 1997 raw storage prices have been declining at 50%–60% per year.]



caching) as well as the deployment of storage systems close to the end user in order to avoid network delays.

We have already pointed out that the cost of the HDD accounts for a progressively smaller component of the total cost of a storage system. In fact, in recent years the game has changed not once, but twice. First, the HDD has changed from a differentiating technology to a commoditized component in the typical storage system. As shown in Figure 6 of Reference 1, the HDD components within commercially available medium-to-high-function storage systems typically cost less than 10 percent of the cost of the system. Customer value has migrated to “advanced functions” and the integration of these functions within the system itself. Then, as Figure 3 shows, the cost of managing storage now dominates the total cost of a storage system.2–4 This means that the value to the customer of a storage system now resides in its ability to increase function beyond what is provided in the bare HDD, and specifically in its ability to lower management costs and provide greater assurances as to the availability of data (e.g., through backup and replication services). Buyers of storage systems are now mainly buying the function that is embodied in the software (sometimes called the firmware) of the storage system. Furthermore, buyers of storage systems are “discounting” the initial purchase price in the buying criteria and weighing the total cost of ownership more heavily.

In the present issue of the IBM Systems Journal, we have included papers that document the ongoing evolution of some key storage system technologies. In the rest of this paper, we first discuss how these new capabilities were driven by technological developments, such as local area networking, and by the IT (information technology) requirements for the enterprise. Then we show how today’s challenges in the industry are evolving into the challenges of the future.

Storage systems come of age: from components to systems

It has long been recognized that the disk drive alone cannot provide the range of storage capabilities required by enterprise systems. The first storage devices were directly controlled by the CPU. Although some advances did take place in the interim, System/360* in 1964 was the first offering of advanced functions in an external storage control unit.5 The key advantage of a control unit (or controller) was that the I/O commands from the CPU (sometimes called the host) were independently translated into the specific commands necessary to operate the HDD (sometimes called the direct access storage device, or DASD), and so the HDD device itself could be managed independently and asynchronously from the CPU. The control unit included buffering that allowed

Figure 2 Improvement factors for PC technologies: since 1990 storage technology has outpaced both processor and communication technologies.

[Figure 2 plots improvement factors (1 to 10 000, log scale) by year, 1980–2015, for three PC technologies relative to their 1983 baselines: processor (measured in MIPS), storage (1983 baseline 10 MB), and communications to the home (1983 baseline 1200 bps).]

Figure 3 Storage administration costs now dominate purchase costs for a storage system.

[Figure 3 compares what $3 million bought in 1984 and 2000: in 1984, a $2 million system plus $1 million of storage administration; in 2000, a $1 million system plus $2 million of storage administration.]



the CPU and HDD operations to overlap. Over time there was a greater emphasis on availability, and controllers began to support multiple data paths from CPUs to storage controllers. The IBM 3990 Model 3 storage controller consisted of two clusters on separate power and service boundaries and included both a large cache and nonvolatile storage (NVS) memories. The caches were used to improve read response time, whereas the NVS was used to provide a fast write and dual copy capability. In addition, it was possible to perform maintenance on the cluster, cache, or NVS under normal operation.6
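The fast-write idea can be sketched in a few lines; the toy Python model below (the class and its methods are invented for this sketch, not the 3990's actual design) acknowledges a write once it is held in both the volatile cache and the NVS, and hardens it to disk in a background destage step.

```python
class FastWriteController:
    """Toy model of controller fast write with a nonvolatile store (NVS)."""

    def __init__(self):
        self.cache = {}   # volatile read/write cache
        self.nvs = {}     # nonvolatile store: survives a power loss
        self.disk = {}    # backing HDD, modeled as a dict of blocks

    def write(self, block, data):
        # Stage into cache and NVS; the host sees completion here,
        # long before the data reaches the (slow) disk.
        self.cache[block] = data
        self.nvs[block] = data
        return "ack"

    def read(self, block):
        # A cache hit avoids the disk entirely.
        if block in self.cache:
            return self.cache[block]
        return self.disk.get(block)

    def destage(self):
        # Background task: harden NVS contents to disk, then free the NVS.
        for block, data in list(self.nvs.items()):
            self.disk[block] = data
            del self.nvs[block]
```

The essential point is that durability is provided by the NVS, not the disk, so write latency is decoupled from mechanical seek time.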

Storage systems leapt further ahead in the early 1990s when RAID (redundant array of independent disks) technology was introduced. RAID allowed the coordination of multiple HDD devices so as to provide higher levels of reliability and performance than could be provided by a single drive. The emergence of smaller form factor drives (5.25 inches and then 3.5 inches) also encouraged the design of systems using a larger number of smaller drives, a natural fit with RAID technology. The classical concept of parity was used to design reliable storage systems that continued to operate despite drive failures. Parallelism was used to provide higher levels of performance. RAID technology was delivered in low cost hardware and by the mid 1990s became standard on servers that could be purchased for a few thousand dollars. Many variations on RAID technology have been developed; see the survey in Reference 7. These were used in large external storage systems that provided significant additional function, including redundancy (no single point of failure in the storage system) and copy services (copying of data to a second storage system for availability).
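The parity idea can be made concrete with a short sketch (hypothetical helper functions, not any product's implementation): the parity strip is the XOR of the data strips, so any single lost strip can be rebuilt from the survivors and the parity.

```python
def parity(strips):
    """XOR a list of equal-length byte strings into one parity strip."""
    result = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            result[i] ^= b
    return bytes(result)

def rebuild(surviving_strips, parity_strip):
    """Reconstruct the single failed strip from the survivors plus parity.
    Works because XOR-ing everything cancels all surviving strips out."""
    return parity(surviving_strips + [parity_strip])

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data drives
p = parity(data)                      # the parity drive
# Drive 1 fails; rebuild its contents from the other drives and parity:
assert rebuild([data[0], data[2]], p) == b"BBBB"
```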

Disaster recovery became a requirement for all IT systems, and impacted the design of storage systems as well. The required degree of protection of data requires a solution that may range in technical sophistication from occasional copies onto magnetic tape (and manually transported), through electronic versions of essentially the same principle, to true mirroring solutions and truly distributed systems. Several commonly employed techniques have emerged. A point-in-time copy (offered by IBM under the name FlashCopy*) is the making of a consistent virtual copy of data as it appeared at a single point in time. This copy is then kept up to date by following pointers as changes are made. If desired, this virtual copy can, over time, be made into a real copy through physical copying. A second technique, mirroring or continuous copy (offered by IBM under the name Peer-to-Peer Remote Copy) involves two mirror copies of data, one at a primary (local) site and one at a secondary (recovery) site. We say this process is synchronous when data must be successfully written at the secondary system before the write issued by the primary system is acknowledged as complete. Although synchronous operation is desirable, it is practical only over limited distances (say, of the order of 100 km). Therefore, other asynchronous schemes, and other significant optimizations, are used to improve the performance of these basic schemes. For a complete discussion of the emerging requirements and technologies for disaster recovery see References 8 and 9.
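A point-in-time copy of this kind can be sketched as a copy-on-write scheme. The Python below is an illustrative toy (the `Volume` class is invented for this sketch, not FlashCopy's mechanism): the snapshot starts as an empty set of pointers, and a block's old content is preserved the first time that block is overwritten after the copy is taken.

```python
class Volume:
    """Toy volume supporting copy-on-write point-in-time copies."""

    def __init__(self, blocks):
        self.blocks = dict(blocks)
        self.snapshots = []

    def point_in_time_copy(self):
        snap = {}                    # holds only blocks preserved before overwrite
        self.snapshots.append(snap)
        return snap

    def write(self, block, data):
        # Copy-on-write: preserve the old content for each open snapshot
        # the first time the block changes after the copy was taken.
        for snap in self.snapshots:
            if block not in snap:
                snap[block] = self.blocks.get(block)
        self.blocks[block] = data

    def read_snapshot(self, snap, block):
        # Follow the pointer: preserved copy if the block has changed
        # since the snapshot, otherwise the current block.
        return snap[block] if block in snap else self.blocks.get(block)
```

Making the virtual copy into a real copy then amounts to walking every block through `read_snapshot` and writing the results to a second volume.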

The requirements for data availability were not completely satisfied by reliable storage systems, even with redundant instances of hardware and data. Because data could be accidentally erased (through human error or software corruption), additional copies were also needed for backup purposes. Backup systems were developed that allowed users to make a complete backup of selected files or entire file systems. The traditional method of backup was to make a backup copy on tape, or in the case of a personal computer, on a set of floppy disks or a small tape cartridge. However, as systems became networked together, LAN-based backup systems replaced media-oriented approaches, and these ran automatically and unattended, often backing up from HDD to HDD. Backup systems are not as simple as they sound, because they must deal with many different types of data (of varying importance), on a variety of client, server, and storage devices, and with a level of assurance that may exceed that for the systems they are backing up. For reasons of performance and efficiency, backup systems must provide for incremental backup (involving only those files that have changed) and file-differential backup (involving only those bytes in a file that have changed). These techniques pose an exceptionally stringent requirement on the integrity of the meta-data that are associated with keeping straight the versions of the backed-up data. In fact, to obtain the needed assurances and performance, the most advanced database recovery technology must be used. The first systems to provide these types of capabilities (including incremental backup) are documented in Reference 10. File-differential backup was subsequently introduced, in which only the changed bytes within a file are sent and managed at the backup server. Since tape still provides the most cost-effective form of saving



backed up data, there are a number of special considerations relating to the append-only nature of tape devices, and these need to be explicitly designed into a backup system. Furthermore, when storage is connected using a SAN (described later), additional performance improvements are possible through bypassing the LAN (LAN-free backup) or the server (server-free backup). These technologies are explained in detail in a paper on Tivoli Storage Manager (TSM) in this issue.11 A technique that can supplement the backup approach involves making a point-in-time copy as previously described, and using that virtual copy as the backup copy. If and when a physical copy is needed, a backup of the virtual copy can be performed.
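One way to sketch file-differential backup is to divide a file into fixed-size blocks and ship only the blocks whose hashes differ from the previous backup. Real products track changed byte ranges more precisely, so treat this Python fragment (invented helper names, tiny block size for illustration, and assuming the file length is unchanged) only as an illustration of the idea.

```python
import hashlib

BLOCK = 4  # unrealistically small block size, for illustration only

def signatures(data):
    """Hash each fixed-size block of a file."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def file_differential(old, new):
    """Return {block_index: bytes} for only the blocks that changed."""
    old_sigs = signatures(old)
    delta = {}
    for i in range(0, len(new), BLOCK):
        idx = i // BLOCK
        block = new[i:i + BLOCK]
        if idx >= len(old_sigs) or old_sigs[idx] != hashlib.sha256(block).hexdigest():
            delta[idx] = block
    return delta

def apply_delta(old, delta):
    """Rebuild the new file from the old copy plus the changed blocks."""
    blocks = [old[i:i + BLOCK] for i in range(0, len(old), BLOCK)]
    for idx, block in delta.items():
        blocks[idx] = block
    return b"".join(blocks)
```

The backup server stores only the delta per version, which is why the version-tracking meta-data mentioned above must be kept with such care: losing it makes the stored deltas unreconstructable.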

Although the raw cost of HDD storage has declined, tertiary media such as tape or optical disks continue to remain important, and therefore hierarchical storage systems that manage these levels of storage are needed. Many of the requirements of hierarchical storage are already dealt with in TSM, because the backup task requires managing across a hierarchy of storage devices, especially disk and tape systems, and involves dealing with the constraints of tape access. However, the criteria for managing a hierarchy in some application environments (e.g., content management systems) do not always coincide with those appropriate for backup applications, and, as a result, further capabilities have been developed.11

Besides backup applications, tape storage plays an important role in data center operations, holding files that have been automatically migrated by a hierarchical storage management system (such as DFSMShsm*12) and master data sets (used, for example, in credit card processing). All these applications exploit the high sequential bandwidth of tape in batch processing. By virtue of the way tapes are managed, tapes are often vastly underutilized. In fact, because of software constraints and a drive for simplicity, it has been common practice in mainframe applications to store a single file on a tape. This low tape utilization motivated the development of a tape virtualization technology (used in the IBM Virtual Tape Server product) involving a disk cache in front of a tape library, in which independently treated files are cached on disk while in use by the application, and later unloaded and packed onto tape. Thus, by shielding the application from the physical mapping onto the tape drive, significant improvement in performance, cost, and manageability is achieved.
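The packing idea behind tape virtualization can be sketched as follows. This toy Python model (a hypothetical `VirtualTapeServer` class, not the IBM product's design) stages virtual volumes on a disk cache, later packs several of them onto each physical tape to keep utilization high, and recalls a volume back into the cache on read.

```python
class VirtualTapeServer:
    """Toy model of tape virtualization: applications see independent
    virtual volumes; full volumes are packed together onto physical tape."""

    def __init__(self):
        self.disk_cache = {}   # virtual volume name -> data
        self.tapes = []        # each physical tape holds many volumes

    def write_volume(self, name, data):
        # Fast path: the application writes to disk, not to a tape drive.
        self.disk_cache[name] = data

    def migrate(self, per_tape=3):
        # Pack cached volumes onto physical tapes, several per cartridge,
        # instead of the one-file-per-tape practice described above.
        names = sorted(self.disk_cache)
        for i in range(0, len(names), per_tape):
            group = {n: self.disk_cache.pop(n) for n in names[i:i + per_tape]}
            self.tapes.append(group)

    def read_volume(self, name):
        if name in self.disk_cache:      # cache hit
            return self.disk_cache[name]
        for tape in self.tapes:          # recall: stage back from tape
            if name in tape:
                self.disk_cache[name] = tape[name]
                return tape[name]
        return None
```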

Networked storage

The emergence of low-cost LAN technology drove the most significant trend of the late 1980s and early 1990s in storage systems. PCs became networked and the client/server computing model emerged. While some control of applications migrated from the data center to the PC, key data often had the status of a corporate or institutional resource, rather than a personal resource, and therefore needed to be shared and safeguarded. The PC was unmanaged and notoriously unreliable, and so to achieve sharing of data, rudimentary low-cost PC-class storage servers became common. These systems were more “mission-critical” than the PC-based client, and were thus prime candidates for technologies such as file-serving, RAID, and LAN-based backup systems. The software used for networking was frequently Novell NetWare** or other software available from PC software vendors. At the same time, UNIX** enjoyed a resurgence both in UNIX workstations (an alternative to the PC client) and in UNIX servers. The widespread availability of the NFS** (Network File System) file-sharing protocols caused further specialization and the emergence of file servers. The next step was the emergence of NAS (network-attached storage) systems, which bundled the network file serving capability into a single specialized box, typically serving standard protocols such as NFS, CIFS (Common Internet File System), HTTP (HyperText Transfer Protocol), and FTP (File Transfer Protocol). NAS systems were simple to deploy because they came packaged as “appliances,” complete with utilities and management functions.

At the same time, IT organizations were attempting to “regain control” over the dispersed assets characteristic of client/server computing. The data center took on a renewed importance in most enterprises. To that end, multiple servers in a machine room sought the capability to access their backend storage without necessarily having individual storage directly attached and therefore dedicated to an individual server. This caused the emergence of storage area networks (SANs). SANs had the advantage that storage systems could be separately engineered from the servers, allowing the pooling of storage (statically configured) and resulting in improved efficiencies and lower risks for the customer (storage investment was not tied to a particular server hardware or software choice). SAN technology opened up new opportunities in simplified connectivity, scalability, and cost and capacity manageability. Fibre Channel became the predominant networking technology13 and large



storage systems, such as IBM TotalStorage* Enterprise Storage Server*14 (ESS), support this protocol and use it to attach multiple servers. To avoid confusion, the reader should think of NAS systems as working with files and file access protocols, whereas a SAN enables access to block storage (the blocks of data may be stored on a storage system or an HDD). This distinction is illustrated in Figure 4, which shows the data paths for direct-attached storage (A), SAN-attached storage (B), network-attached storage (C), and a mixed NAS and SAN environment (D). Appliances having both SAN and NAS interfaces are available on the market; the IBM Storage Tank*, described later, has this capability.

The widespread adoption and commoditization of Ethernet LANs running TCP/IP (Transmission Control Protocol/Internet Protocol) has caused increasing interest in the use of this technology in the SAN. The unifying of networking under TCP/IP and Ethernet offers the benefits of standardization, increased rate of innovation, and lower costs, in both hardware and device support. Management costs are reduced since staffs need to be familiar with only one type of network, rather than two. The iSCSI (Internet Small Computer System Interface) standard has been introduced to allow the SCSI protocol to be carried across a TCP/IP network.15 However, in order to realize the benefit, considerations such as discovery, performance, and security must be addressed. The resolution of these concerns is discussed in detail in Reference 16.

Although the availability of networked storage provides improved access to data, it still leaves some key issues unaddressed. SANs enable arbitrary connectivity between using system and storage, but they still pose operational problems. Although a SAN may not be hard-wired to its attached storage, it is still “soft-wired” in the sense that the configuration is typically static, and changes cannot be made to the attached storage or using system without disruption. The addition of a layer of indirection between the using system and storage, provided either through a hardware switch or an intermediary software-based system, allows the using system (typically a server) to deal with nondisruptive changes in the storage configuration. Additionally, virtual volumes can be created that cut across multiple storage devices; this capability is referred to as block virtualization. Block virtualization allows all the storage on a SAN to be pooled and then allocated for access by the using systems.17 This also simplifies various storage management tasks, such as copy services, varying storage devices on-line or off-line, and so on.
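At its core, block virtualization is a level of indirection from virtual block addresses to (device, offset) pairs. The minimal Python sketch below (an invented class using a simple extent-based layout, one of several possible mapping schemes) shows how a virtual volume can be allocated from a pool and freely cut across physical devices.

```python
class BlockVirtualizer:
    """Minimal sketch of block virtualization: a virtual volume is a list
    of extents, each mapped onto a region of some device in the pool."""

    def __init__(self, extent_size=4):
        self.extent_size = extent_size
        self.free = []       # pool of (device, physical_extent) slots
        self.volumes = {}    # name -> ordered list of (device, physical_extent)

    def add_device(self, device, extents):
        # Pooling: every device's capacity joins one shared free list.
        self.free.extend((device, e) for e in range(extents))

    def create_volume(self, name, extents):
        if extents > len(self.free):
            raise ValueError("pool exhausted")
        # A volume may freely cut across devices.
        self.volumes[name] = [self.free.pop(0) for _ in range(extents)]

    def translate(self, name, virtual_block):
        # Logical-to-physical mapping for one block address.
        dev, phys = self.volumes[name][virtual_block // self.extent_size]
        return dev, phys * self.extent_size + virtual_block % self.extent_size
```

Because using systems see only virtual addresses, the map can be changed underneath them, which is what makes nondisruptive reconfiguration possible.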

But access is not enough, because the using systems often assume sole access to data and are not equipped to share data or free space they own with other systems. Thus, each system is allocated its own supply of free space, a wasteful scheme. These problems can be overcome by removing the task of meta-data management from each of the client systems and creating a global capability that manages the mapping, sharing, and free-space management of storage on all attached storage devices. Besides simplifying configuration and free-space management, new capabilities are added such as sharing of data, nondisruptive change, and automation of many storage management functions as described later. This concept is referred to as a SAN file system. Because the meta-data are under control of a meta-data server, new capabilities are made possible, and a wide range of services are introduced that simplify management and improve the availability of data. For example, policies associated with data determine how data are managed for performance, availability, security, cost, and so on. This is further described later under “Autonomic storage.”
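The division of labor in a SAN file system can be sketched as follows: a meta-data server owns names, block layout, and the global free-space pool, while clients move data blocks directly over the SAN. The Python below is a deliberately simplified model (all names invented, no locking or caching), not Storage Tank's protocol.

```python
class MetaDataServer:
    """Hypothetical sketch of the SAN file system split: one server owns
    all meta-data; clients obtain a file's layout and then read and write
    blocks directly over the SAN."""

    def __init__(self, total_blocks):
        self.free = list(range(total_blocks))  # one global free-space pool
        self.layout = {}                       # path -> list of block numbers

    def create(self, path, nblocks):
        blocks = [self.free.pop(0) for _ in range(nblocks)]
        self.layout[path] = blocks
        return blocks

    def lookup(self, path):
        return self.layout[path]


class Client:
    def __init__(self, mds, san_disk):
        self.mds = mds
        self.san = san_disk    # shared block device, modeled as a dict

    def write_file(self, path, chunks):
        # Meta-data traffic goes to the MDS; data goes straight to the SAN.
        for block, chunk in zip(self.mds.create(path, len(chunks)), chunks):
            self.san[block] = chunk

    def read_file(self, path):
        return [self.san[b] for b in self.mds.lookup(path)]
```

Because every client draws from the same free-space pool and the same layout map, data written by one system is immediately visible to another, which is exactly the sharing that per-server file systems cannot provide.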

An early implementation of a SAN file system is the IBM Storage Tank technology, illustrated in Figure 5 and described in Reference 18. An implementation of a quite similar concept is CXFS**, developed by SGI.19 Storage Tank presents a file system to the client system by installing code at the VFS (virtual file system) or IFS (installable file system) layer. This code intercepts all file system I/O and manages it according to meta-data, which it obtains (and keeps locally cached) from the meta-data servers. While Storage Tank is a SAN file system, it also can aid in NAS management, because its clients can be gateways that run NFS or CIFS code and thus provide NAS service to other clients.

Putting it all together, the left side of Figure 6 shows how storage systems leverage RAID to improve the basic functions of the HDD device, and exploit local area networking technology in a SAN to gain physical connectivity with a variety of storage devices. But the current evolution goes further, as shown on the right of Figure 6. Block virtualization provides flexible logical-to-physical mapping across storage devices, and a common file system with separate and consolidated meta-data management allows dynamic policy-based resource management and added capabilities in a heterogeneous systems environment.



Figure 4 The data paths for direct-attached, SAN, and NAS storage

[Figure 4 shows four panels: (A) direct-attached storage, where the application's device driver talks directly to a RAID controller and its HDDs; (B) SAN-attached storage, where servers reach storage systems through a SAN; (C) network-attached storage (NAS), where applications reach an NFS server over the LAN; and (D) a mixed NAS and SAN environment combining both data paths.]



Systems management tools exploit these capabilities in order to facilitate the resource management task of the administrator.

Autonomic storage

Recognizing the fact that progress in base information technology had outrun our systems management capabilities, in 2001 IBM published the Autonomic Computing Manifesto.20 The manifesto is a call to action for industry, academia, and government to address the fundamental problem of ease and cost of management. The basic goal is to significantly improve the cost of ownership, reliability, and ease of use of information technologies. The cost factor has already been highlighted in Figure 3; we return later to the issue of increasing reliability needs. To achieve the promise of autonomic computing, systems need to become more self-configuring, self-healing, and self-protecting, and, during operation, more self-optimizing. These individual concepts are not new, but the need for their deployment has dramatically increased as operational costs have risen and our dependence on systems has heightened. The increased focus of autonomic computing is on implementing self-configuring, self-healing, self-protecting, and self-optimizing capabilities not just at the component level, but holistically, at a global level.

Interestingly, storage systems have incorporated autonomic computing features at the component level for some time. RAID, as described earlier, is an excellent example of a self-healing system. Further, high-function storage systems such as IBM's ESS have numerous autonomic computing features.14 It is instructive to think about autonomic computing at three levels.

Figure 5 An example of a SAN file system: the IBM Storage Tank file system. [Figure: heterogeneous clients (Solaris, HP/UX, AIX, Linux, Windows 2000/XP), each running a VFS or IFS with cache, access shared storage devices organized as multiple storage pools holding the data store directly over the storage area network; an IP network carries client/meta-data cluster communications with the Storage Tank server cluster, which manages the meta-data store; external clients reach the system through NFS and CIFS, and an administrative client manages it.]


The first level is the component level, in which components contain features that are self-healing, self-configuring, self-optimizing, and self-protecting. At the next level, homogeneous or heterogeneous systems work together to achieve the goals of autonomic computing. As an example, consider the copy services described earlier, where systems at distant sites can replicate data on a primary system in order to provide continuous operation. Another example is the adaptive operation within the Collective Intelligent Brick systems, now being researched at IBM,21 which allows "bricks" within a structure to take over from one another, or help each other carry a surge of load. There is an interesting side benefit in autonomic computing capabilities that use redundancy and reconfiguration in case of failure. Systems behaving this way can be designed and packaged for "fail-in-place" operation. This means that failing components (or bricks) need not necessarily be replaced; they can be worked around. Using this capability we can package storage in three dimensions rather than two, because failing bricks in the interior of the structure do not have to be replaced.

At the third and highest level of autonomic computing, heterogeneous systems work together toward a goal specified by the managing authority. For example, a specified class of data is assigned certain attributes of performance, availability, or security. The data specification might be based on a file subtree, a file ending, file ownership, or any other characteristic that can define a subset of data. These ideas owe their heritage to the pioneering concepts of system-managed storage.2 Because it intercepts all I/Os and because it uses a separate control network to access the central repository of meta-data, the Storage Tank SAN file system has the capability to perform at this highest level of autonomic computing.18 It can

Figure 6 The evolution of storage networks from added physical capabilities to flexible administration of the storage resource. [Figure: on both sides, servers running applications and device drivers connect through a SAN storage network to RAID-controlled storage; the right side adds the flexible, virtualized administration layer described in the text.]


achieve this while obeying certain constraints on the data, for example, location, security policies in force, or how the data may be reorganized. Given the virtual nature of data and because of the existence of a separate meta-data manager, each file can be associated with a policy that describes how it is to be treated for purposes of performance, security, availability, and cost. This composite capability of autonomic systems working toward goals, while adhering to constraints, is referred to as policy management. Thus Storage Tank can arrange that files be placed on high-performance block storage systems with QoS (quality-of-service) management when needed, or on highly secure storage with encryption and access control in place, or on high-availability systems that are replicated across remote sites and have diverse access paths.
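The kind of policy management described above can be illustrated with a small sketch: files are matched against ordered, administrator-defined rules and assigned a storage class. The rule set, class names, and matching scheme are hypothetical, chosen only to mirror the subtree, file-ending, and ownership criteria mentioned in the text.

```python
# Hypothetical policy-management sketch: each file is matched against
# administrator-defined rules (by subtree, file ending, or owner) and
# assigned a storage class. First matching rule wins.

from dataclasses import dataclass

@dataclass
class FileInfo:
    path: str
    owner: str

POLICIES = [
    (lambda f: f.path.startswith("/finance/"), "replicated-remote"),
    (lambda f: f.path.endswith(".mpg"),        "high-bandwidth"),
    (lambda f: f.owner == "audit",             "encrypted"),
]
DEFAULT_CLASS = "standard"

def storage_class(f: FileInfo) -> str:
    for matches, cls in POLICIES:
        if matches(f):
            return cls
    return DEFAULT_CLASS

print(storage_class(FileInfo("/finance/q3.xls", "bob")))  # replicated-remote
print(storage_class(FileInfo("/tmp/demo.mpg", "alice")))  # high-bandwidth
print(storage_class(FileInfo("/home/x.txt", "carol")))    # standard
```

A real policy engine would also enforce constraints (location, security policy in force) when choosing placement, but the core idea, deriving treatment from a declarative rule rather than from where the file happens to sit, is captured here.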

Future challenges

Certain known requirements for storage systems are accelerating, and new paradigms are emerging that provide additional challenges for the industry. The volume of data being generated continues to increase exponentially, but a new source of data is becoming increasingly prevalent: machine-generated data. Until now most data have been authored by hand, using the keyboard, and widely distributed and replicated through networking. This includes data in databases, textual data, and data on Web pages.

However, these types of data are now being dwarfed by machine-generated data, that is, data sourced by digital devices such as sensors, surveillance cameras, and digital medical imaging (see Figure 7). The new data require extensive storage space and nontraditional methods of analysis and, as such, provide new challenges to storage and data management technologies. Effective data mining may provide important new opportunities in security, customer relationship management, and other application areas.

Another important trend is the growth of reference data. Reference data are commonly defined as stored data that are only infrequently retrieved (if at all). As an example, consider the archived copies of customer account statements; these are rarely accessed after an initial period. This phenomenon is also driving new directions in research, such as implementing reliable storage systems from low-cost (e.g., ATA) drives.

These trends in the growth of stored data are driving new challenges in storage subsystems and the management of data. The traditional methods of accessing data through a location or hierarchical file name are being reexamined in an attempt to design new access methods in which data may be used wherever the data reside, based on content or some other attribute. New crawling, search, and indexing technologies are emerging to address that challenge, and

Figure 7 Machine-generated data from a variety of sources are increasing at an exponential pace and dominating manually authored data. [Figure: gigabytes per U.S. capita per year (log scale, 0.001 to 1000) versus year (1995 to 2015). Authored data (created by hand, e.g., salaries and orders; historically in databases; low volume but high value) are represented by curves for data in databases, storage on line, text data, and static Web data. Machine-generated data (from sensors; high volume; not amenable to traditional database architecture) are represented by curves for all medical imaging, medical data stored, personal multimedia, surveillance bytes, and surveillance for urban areas.]


new storage system requirements are emerging to efficiently support those applications. Additional requirements follow from the scale of data and the new usage models that they must support. In some cases the repository for data must provide additional functions for the security and access control of those data. An interesting development that addresses this need is the object storage technology of Reference 22, which allows access control at a much finer granularity than current methods for SANs.
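A toy sketch of the finer granularity that object storage enables: where SAN block access is typically authorized at the level of a whole volume or LUN, an object store can check access on each individual object. The capability model below is invented for illustration and is far simpler than the scheme of Reference 22.

```python
# Simplified, hypothetical contrast: per-object access control in an
# object store, versus per-volume authorization typical of block SANs.

class ObjectStore:
    def __init__(self):
        self.objects = {}   # object id -> bytes
        self.acl = {}       # object id -> set of principals allowed to read

    def put(self, oid, data, readers):
        self.objects[oid] = data
        self.acl[oid] = set(readers)

    def get(self, oid, principal):
        # The store itself enforces access, object by object.
        if principal not in self.acl.get(oid, set()):
            raise PermissionError(f"{principal} may not read object {oid}")
        return self.objects[oid]

store = ObjectStore()
store.put("invoice-42", b"statement data", readers={"billing"})
print(store.get("invoice-42", "billing"))   # b'statement data'
# store.get("invoice-42", "intern")         # would raise PermissionError
```

The point is architectural: because the storage device understands objects rather than anonymous blocks, the enforcement point moves into the store itself, so a compromised or misconfigured host cannot read everything on a shared volume.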

We discussed disaster recovery earlier. A combination of trends is making the requirement of business continuance (the business continues to operate in the face of IT failures or disaster) more challenging. Pending SEC regulation, insurance company requirements, and concerns about national and international security are placing increased demands on storage architectures. The costs of down time for many enterprises have generally been acknowledged to be large and increasing. Systems can no longer be taken down for purposes of backing up data. As shown in Reference 1, whereas disk drive capacity has been almost doubling every year, seek time has only been improving about 12 percent per year and transfer rate about 40 percent per year. Hence it is not surprising that times to complete the backup have increased to the point where backup cannot be carried out "off shift," and in many cases there is no off shift. This means that customers need to employ technologies in which a point-in-time copy can be made and the backup then taken from this copy. Even that is not sufficient in some cases as, without special measures, the backup may not be completed before it is time to start the next one. While backup time is clearly a problem, an even more severe issue is restore time. In some cases data must be restored in order to resolve a service disruption, and, as a result, the mismatch of data volume and access rates becomes even more of a problem. Much research and development is underway to address these issues; new techniques are proposed in References 8, 9, and 11.
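The point-in-time copy technique mentioned here can be sketched with a minimal copy-on-write model: the snapshot is created instantly by sharing blocks with the source, and a block's old contents are preserved only when the source is about to overwrite it, so a backup can read a frozen image while production writes continue. This is a simplified illustration, not the design of any of the cited systems.

```python
# Minimal copy-on-write sketch of a point-in-time copy.

class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshot = None   # block number -> preserved old contents

    def take_snapshot(self):
        self.snapshot = {}     # instantaneous: nothing is copied yet

    def write(self, n, data):
        if self.snapshot is not None and n not in self.snapshot:
            self.snapshot[n] = self.blocks[n]   # copy aside before overwrite
        self.blocks[n] = data

    def read_snapshot(self, n):
        # Preserved copy if the block changed, else the (unchanged) live block.
        return self.snapshot.get(n, self.blocks[n])

vol = Volume(["a", "b", "c"])
vol.take_snapshot()
vol.write(1, "B")              # production write after the snapshot
print(vol.read_snapshot(1))    # 'b'  (the frozen, backup-consistent image)
print(vol.blocks[1])           # 'B'  (the live volume)
```

The snapshot costs nothing at creation time and consumes space only in proportion to the blocks that change afterward, which is why a backup taken from it no longer requires taking the production system down.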

A related trend is the growing importance of higher levels of availability for storage systems. It is generally acknowledged that decreasing the down time (e.g., an increase of availability from 0.99999 to 0.999999) represents a significant engineering and operational challenge as well as additional expense. Traditional RAID systems (e.g., RAID 5) are reaching a point where they cannot support the higher levels of reliability, performance, or storage density required. Indeed, storage space on HDDs has outrun the rate at which data can be accessed, rebuild times have increased correspondingly, and, therefore, the likelihood of a damaging second failure during rebuild has increased. Second, although the probability of undetected write errors on HDDs is small, the massive increase in storage space will over time increase the likelihood that problems from these errors are encountered. New approaches to storage structures are being researched that rely on alternative coding schemes, methods of adaptively creating additional copies of data, and also super-dense packaging of disk drives and associated control circuitry.21
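The rebuild-window risk can be made concrete with a back-of-the-envelope model: in a single-parity array that has lost one drive, data is lost if any surviving drive fails before the rebuild completes. Assuming independent exponential drive failures (a common simplification) and invented drive parameters:

```python
# Back-of-the-envelope model of the second-failure risk during a RAID-5
# rebuild. Drive counts, MTTF, and rebuild times below are assumptions
# chosen only to illustrate the scaling, not measured values.

import math

def p_second_failure(n_drives, mttf_hours, rebuild_hours):
    """Probability that at least one of the n-1 surviving drives
    fails during the rebuild window (independent exponential model)."""
    survivors = n_drives - 1
    p_one_survives = math.exp(-rebuild_hours / mttf_hours)
    return 1.0 - p_one_survives ** survivors

# An 8-drive array with a 500,000-hour drive MTTF:
print(f"{p_second_failure(8, 500_000, 10):.6f}")    # 10-hour rebuild
print(f"{p_second_failure(8, 500_000, 100):.6f}")   # 100-hour rebuild: ~10x riskier
```

Because rebuild time scales roughly with capacity divided by transfer rate, and capacity has grown far faster than transfer rate, this probability rises over time even with unchanged drives, which is the motivation for the stronger coding schemes and adaptive replication mentioned above.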

Another problem area of growing interest concerns the long-term preservation of data. This problem was effectively described by Rothenberg in Reference 23. Although paper is often decried as an inferior storage medium because it is susceptible to water and fire damage, it may well outlast our best electronic technologies; not because the media are not long lasting, but because the formats of our digital records are subject to change, and the devices and programs to read these records are relatively short-lived. This problem can be addressed in a number of ways, and the most viable solution is to create data in a form that is self-describing; that is, data that come with the data structures and programs needed to interpret them, coded in a simple universal language for which an interpreter can easily be created at some later time. This approach is described in Reference 24. The eventual resolution of this issue overlaps with the problems of scale above: there is no use for data that cannot be found or understood.

Coming full circle to the beginning of the storage industry, perhaps the most significant potential change after five decades may take place in the role of the HDD. The HDD still appears to have considerable life left in it, and although a slowing in the rate of progress is projected due to significant challenges, there is a widely held view that no alternative technology is likely to provide serious competition in the enterprise for about the next ten years.25

Nevertheless, there is increased interest and activity in alternative storage devices.26 We noted that HDD cost is now 10 percent or less of the cost of the system it goes into, but we also saw in Figure 1 that presently available alternative device technologies, such as semiconductor memory (DRAM [dynamic random-access memory] or Flash), are still about two orders of magnitude more expensive than the HDD, thus ruling them out as storage devices in enterprise storage systems. Alternatives to the HDD based on radically new ideas or lower-cost manufacturing technologies will be needed to supplant the HDD. While these may first find application in pervasive or consumer devices, eventually they may be successfully applied in enterprise storage systems.

Although the above challenges are influencing the evolution of storage systems, the greatest need is for new technology that lowers management costs and improves the ease of use and dependability of storage systems.

Conclusion

Storage systems have become an important component of information technology. There are projections that many enterprises will routinely spend more on storage technology than on server technology. Over a period of 50 years, and building on a base of giddying advances in hard drive component technology, the storage game has shifted to one where "systems" technology is needed to meet new requirements and where storage systems technology is a key element in coping with information overload and in getting management costs under control. IT users' assets are embodied in their data, and storage systems are evolving to allow increased exploitation and protection of those assets. The storage system is no longer just a piece of hardware, but a complex system in which significant function is implemented in software.

If we assess where we are in storage systems today, it is clear that we now have reliable component subsystems that can be flexibly and reliably interconnected into networked storage systems. Autonomic storage is here today and is being extended to the next level, where storage components work together to get the job done, whatever the challenge: load variability, disasters, and so on. All these technologies are increasingly paying off in managing complexity (and therefore cost) where it is most needed, in the tasks for which people are responsible. We are on a track to further develop these advanced storage systems technologies. But storage is not the only part of the IT stack, and storage systems need to increasingly play their part in getting all the automation to work, so that a business can respond quickly, flexibly, and at much lower cost to a range of new challenges. Institutions are facing change at a new rate, whether it is coming from supply and demand, labor costs, changing customer preferences, or new levels of integration. These institutions need the ability to respond in real time, with all the resources at their disposal, including those of their employees, suppliers, partners, and distributors, and using all parts of the IT stack. This on demand requirement will cause the rate of innovation we have described in storage systems to continue to accelerate.

Acknowledgment

The authors thank Jayashree Subrhamonia for her help in assembling the special issue of the IBM Systems Journal and Alain Azagury for his careful reading of this paper and his many suggestions.

*Trademark or registered trademark of International Business Machines Corporation.

**Trademark or registered trademark of Novell, Inc., The Open Group, Sun Microsystems, Inc., or Silicon Graphics, Inc.

Cited references

1. E. Grochowski and R. D. Halem, "Technological Impact of Magnetic Hard Disk Drives on Storage Systems," IBM Systems Journal 42, No. 2, 338–346 (2003, this issue).

2. J. P. Gelb, "System-Managed Storage," IBM Systems Journal 28, No. 1, 77–103 (1989).

3. "Storage on Tap: Understanding the Business Value of Storage Service Providers," ITCentrix, Inc. (March 2001).

4. Server Storage and RAID Worldwide, SRRD-WW-MS-9901, Gartner Group/Dataquest (May 1999).

5. C. P. Grossman, "Evolution of the DASD Storage Control," IBM Systems Journal 28, No. 2, 196–226 (1989).

6. J. Menon and M. Hartung, "The IBM 3990 Model 3 Disk Cache," Proceedings of the Thirty-Third IEEE Computer Society International Conference (Compcon), San Francisco, March 1988, IEEE, New York (1988), pp. 146–151.

7. P. M. Chen, E. K. Lee, G. A. Gibson, R. H. Katz, and D. A. Patterson, "RAID: High-Performance, Reliable Secondary Storage," ACM Computing Surveys 26, No. 2, 145–185 (1994).

8. A. C. Azagury, M. E. Factor, and W. F. Micka, "Advanced Functions for Storage Subsystems: Supporting Continuous Availability," IBM Systems Journal 42, No. 2, 268–279 (2003, this issue).

9. A. Azagury, M. Factor, W. Micka, and J. Satran, "Point-in-Time Copy: Yesterday, Today and Tomorrow," Proceedings of the Tenth Goddard Conference on Mass Storage Systems and Technologies/Nineteenth IEEE Symposium on Mass Storage Systems and Technologies, Maryland, April 15–18, 2002, IEEE, New York (2002), pp. 259–270.

10. L.-F. Cabrera, R. Rees, and W. Hineman, "Applying Database Technology in the ADSM Mass Storage System," Proceedings of the 21st International Conference on Very Large Databases (VLDB), September 1995, Zurich, Switzerland, Morgan Kaufmann Publishers, San Francisco (1995), pp. 597–605.

11. M. Kaczmarski, T. Jiang, and D. A. Pease, "Beyond Backup Toward Storage Management," IBM Systems Journal 42, No. 2, 322–337 (2003, this issue).

12. L. L. Ashton, E. A. Baker, A. J. Bariska, E. M. Dawson, R. L. Ferziger, S. M. Kissinger, T. A. Menendez, S. Shyam, J. P. Strickland, D. K. Thompson, G. R. Wilcock, and M. W. Wood, "Two Decades of Policy-Based Storage Management for the IBM Mainframe Computer," IBM Systems Journal 42, No. 2, 302–321 (2003, this issue).

13. A. Benner, Fibre Channel: Gigabit Communications and I/O for Computer Networks, McGraw-Hill, Inc., New York (1996).

14. M. Hartung, "IBM TotalStorage Enterprise Storage Server: A Designer's View," IBM Systems Journal 42, No. 2, 384–397 (2003, this issue).

15. J. Satran et al., "Internet Protocol Small Computer System Interface (iSCSI)," RFC 3385, Internet Draft, IETF, 2002, http://www.ietf.org.

16. P. Sarkar, K. Voruganti, K. Meth, O. Biran, and J. Satran, "Internet Protocol Storage Area Networks," IBM Systems Journal 42, No. 2, 218–231 (2003, this issue).

17. J. S. Glider, C. F. Fuente, and W. J. Scales, "The Software Architecture of a SAN Storage Control System," IBM Systems Journal 42, No. 2, 232–249 (2003, this issue).

18. J. Menon, D. A. Pease, R. Rees, L. Duyanovich, and B. Hillsberg, "IBM Storage Tank—A Heterogeneous Scalable SAN File System," IBM Systems Journal 42, No. 2, 250–267 (2003, this issue).

19. "SGI CXFS: A High-Performance, Multi-OS SAN File System from SGI," White Paper, Silicon Graphics, Inc., Mountain View, CA (2002).

20. P. Horn, Autonomic Computing: IBM's Perspective on the State of Information Technology, IBM Corporation (October 15, 2001); available at http://www.ibm.com/autonomic.

21. R. Merritt, "IBM Stacks Hard-Disk Bricks to Build Dense Storage Cube," EE Times, May 1, 2002, www.eetimes.com/at/news/OEG20020423S0091.

22. A. Azagury, V. Dreizin, M. Factor, E. Henis, D. Naor, N. Rinetzky, O. Rodeh, J. Satran, A. Tavory, and L. Yerushalmi, "Towards an Object Store," Proceedings of the 20th Symposium on Mass Storage Systems and Technologies (April 2003).

23. J. Rothenberg, "Ensuring the Longevity of Digital Documents," Scientific American 272, No. 1 (January 1995).

24. R. A. Lorie, "Long Term Preservation of Digital Information," Proceedings of the First Joint Conference on Digital Libraries, Roanoke, VA, June 2001, ACM, New York (2001), pp. 346–352.

25. D. A. Thompson and J. S. Best, "The Future of Magnetic Data Storage Technology," IBM Journal of Research and Development 44, No. 3, 311–322 (May 2000).

26. "Move Over Silicon," The Economist Technology Quarterly, Dec. 14, 2002, pp. 20–21.

Accepted for publication March 9, 2003.

Robert J. T. Morris IBM Research Division, Almaden Research Center, 650 Harry Road, San Jose, California 95120 ([email protected]). Dr. Morris is the director of the IBM Almaden Research Center, where he oversees scientists and engineers doing exploratory and applied research in hardware and software areas such as nanotechnology, materials science, storage systems, data management, Web technologies, and user interfaces. He is also Vice President for personal systems and storage research, managing this worldwide research work within IBM. A computer scientist, Dr. Morris has over 20 years of experience in the IT industry. Before coming to Almaden as lab director, he was a director at the IBM Thomas J. Watson Research Center in New York, where he led teams in personal and advanced systems research. He began his employment with IBM at Almaden, working on storage and data management technologies. Previously, he was at Bell Laboratories, where he began his career in computer communications technology. Dr. Morris was named chairman of the Bay Area Science Innovation Consortium (BASIC) in 2002. He has published more than 50 articles in the computer science, electrical engineering, and mathematics literature and has received 12 patents. He holds a Ph.D. degree in computer science from the University of California at Los Angeles and is a member of the IBM Academy of Technology and a Fellow of the IEEE. He was an editor of the IEEE Transactions on Computers from 1986 to 1991 and serves on various university advisory committees, including the Government University Industry Research Roundtable.

Brian J. Truskowski IBM Systems Group, Route 100, Somers, New York 10589 ([email protected]). Mr. Truskowski is general manager, Storage Software, IBM Systems Group. He was named to this position in January 2003. He is responsible for developing and implementing the storage software strategy, as well as the delivery of all storage software products and services. Previously, as Chief Technology Officer of the Storage Systems Group, he was responsible for overall technical strategy, ensuring that business and product implementation plans were consistent with the overall technical direction of SSG. He and his team also monitored developments in the storage industry, studying trends and new technologies to determine how to apply them to the IBM storage business. Prior to that, Truskowski was vice president, storage subsystems development, responsible for the technical strategy, development, and delivery of the IBM worldwide storage subsystems solutions, including all subsystem hardware platforms and storage management software offerings. He was named to that position in January of 1998. Mr. Truskowski joined IBM in 1981 and held several technical and management positions in the Rochester, MN development laboratory. In 1989, following the completion of advanced degree work, he was assigned to division staff in Somers, NY, before returning to the Rochester development laboratory in late 1991. In 1994, he was named director, AS/400* systems development, and had responsibility for AS/400 software development, including OS/400*. Two years later he moved to the chairman's office as a technical assistant to Louis V. Gerstner, IBM chairman and chief executive officer. Mr. Truskowski earned a B.S. degree in electrical engineering from Marquette University and an M.S. degree in management of technology from the Massachusetts Institute of Technology. He also has an M.B.A. degree from Winona State University (Minnesota).
