
openBench Labs Analysis: Infrastructure Virtualization

Leverage Direct Attached Storage for a Virtual Infrastructure iSCSI SAN

Author: Jack Fegreus, Ph.D., Chief Technology Officer
openBench Labs
http://www.openBench.com

September 27, 2007

Jack Fegreus is Chief Technology Officer at openBench Labs, which consults with a number of independent publications. He currently serves as CTO of Strategic Communications, Editorial Director of Open magazine, and contributes to InfoStor and Virtualization Strategy. He has served as Editor in Chief of Data Storage, BackOffice CTO, Client/Server Today, and Digital Review. Previously Jack served as a consultant to Demax Software and was IT Director at Riley Stoker Corp. Jack holds a Ph.D. in Mathematics and worked on the application of computers to symbolic logic.


Table of Contents


Executive Summary

Assessment Scenario

Functionality and Performance Spectrum

Value Proposition


Executive Summary

"For SMB and remote office sites, the LeftHand Networks Virtual SAN Appliance opens the door to leverage inexpensive direct attached storage via an iSCSI SAN on ESX Servers in a secure, cost-effective, and fail-safe environment."

To derive maximal benefit from system virtualization, storage virtualization, as implemented in a SAN, is a necessary prerequisite. The mobility of a virtual machine (VM) and its data plays an important role in daily operational tasks, from load balancing with VMotion™ and VMware Distributed Resource Scheduler (DRS) to system testing. The issues of availability and mobility really rise to the forefront in a disaster recovery scenario: the image of files stranded on a server that is not functional does not make a great poster for high availability.

Seldom an issue in datacenters, harnessing the holistic synergies of system and storage virtualization is often a major hurdle for IT at smaller companies or remote branch locations without a SAN in place. To remove that stumbling block, LeftHand Networks has introduced their Virtual SAN Appliance (VSA) for VMware® ESX. In a few simple steps, a system administrator can employ two or more VSAs to create a fully redundant iSCSI SAN using previously isolated direct attached storage on ESX servers.

Infrastructure virtualization became a critical focus area for CIOs because it eliminates the constraints of physical limitations from IT devices. Once a resource's function is separated from its physical implementation, that resource can be placed into a generic pool, which can be managed as a single functional entity. That opens significant opportunities to reduce IT operating costs through more efficient resource utilization. In particular, administrators can configure OS and applications software as a template to simplify and standardize the provisioning of system configurations.


openBench Labs Test Briefing: LeftHand Networks Virtual SAN Appliance for VMware® ESX

1) Certified virtual appliance: Supported on VMware Virtual Infrastructure 3 and ESX Server.

2) Install a highly available SAN without additional hardware: Via the SAN/IQ software platform on Linux, VSAs aggregate direct attached storage resources in a clustered iSCSI SAN.

3) Leverage VI3's VMotion, HA, and DRS features: SAN/IQ appliances create the fully redundant SAN infrastructure required by advanced VMware modules.

4) Simplify system administration through automation: SAN/IQ automates the management of storage modules in each VSA, leaving a simplified virtual storage environment for system administration.

5) Provide a DR solution to remote or branch offices: Running within the ESX servers, the iSCSI SAN can be controlled and configured by local system managers to provide a full DR solution via replication and snapshots.




More importantly, storage virtualization in a VMware Virtual Infrastructure (VI) environment is far less complex than storage virtualization in a typical FC SAN with physical systems. A standard OS, such as Microsoft® Windows or Linux®, assumes exclusive ownership of storage volumes. For that reason, the file systems for a standard OS do not incorporate a distributed lock manager (DLM), which is essential if multiple systems are to maintain a consistent view of a volume's contents. That makes volume ownership a critical part of storage virtualization and explains why SAN management is the exclusive domain of storage administrators at most enterprise-class sites.

This is not the case for servers running ESX. The file system for ESX, dubbed VMFS, handles distributed file locking, which eliminates exclusive volume ownership as a burning issue. What's more, the files in a VMFS volume are single-file images of a VM disk, making them loosely analogous to an ISO-formatted CDROM. When a VM mounts a disk, it opens a disk-image file; VMFS locks that file; and the VM gains exclusive ownership of the disk volume.
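As a loose illustration of that exclusive-open behavior, the following sketch takes an ordinary POSIX advisory lock on a disk-image file. This is our own analogy in Python, assuming a Linux host; VMFS performs its locking internally on ESX, and its mechanism is not shown here.

    import fcntl

    def mount_disk_image(path):
        """Open a disk-image file and take an exclusive, non-blocking
        lock, mimicking how only one VM at a time owns the volume
        backed by a VMFS disk-image file."""
        f = open(path, "r+b")
        try:
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            f.close()
            raise RuntimeError(path + " is already owned by another VM")
        return f  # holding the handle holds the lock

    # A second caller on the same image gets the RuntimeError, just as
    # a second VM cannot mount an already-locked virtual disk.
    # disk = mount_disk_image("/vmfs/volumes/datastore1/vm1.vmdk")  # hypothetical path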

With the volume ownership issue moot, iSCSI becomes a perfect way to extend the benefits of physical and functional separation via a cost-effective, lightweight SAN. That's made iSCSI de rigueur in large datacenters for ESX servers, for which a SAN is essential to leverage all the advanced features of a VMware Virtual Infrastructure (VI) environment. Storage administrators only need to provision iSCSI concentrators or routers with bulk storage from a Fibre Channel SAN. ESX system administrators can easily address the remaining iSCSI and ESX issues.

LeftHand Networks' Virtual SAN Appliance (VSA) opens the door for SMB and remote office sites to create and manage a secure, cost-effective, and fail-safe iSCSI SAN by simply leveraging existing direct attached storage (DAS) on ESX Servers. Moreover, for cost-conscious, risk-averse IT decision makers, there are two critical distinctions for the LeftHand Networks VSA:

1. This VMware appliance is a version of SAN/IQ®, a storage software platform built on the Linux kernel: it is not an application running on a commercial OS that requires an additional license.

2. The LeftHand Networks appliance is designed to be clustered using multiple ESX servers in order to withstand the loss of one or more servers.



Assessment Scenario

"To assess iSCSI performance, oblLoad stresses the ability of all storage and networking components in a SAN fabric to support rapid responses to excessively high numbers of I/O operations per second."

AVAILABILITY, PERFORMANCE, AND OVERHEAD

To support a VMware VI environment, openBench Labs set up two quad-processor servers, a Dell® PowerEdge™ 1900 and an HP ProLiant DL580 G3, as our core ESX servers. These servers would run multiple VMs with either Windows Server® 2003 SP2 or Novell SUSE® Linux Enterprise Server 10 SP1 as their OS. For our initial SAN/IQ cluster, we ran a LeftHand VSA on our Dell PowerEdge 1900 and an HP ProLiant ML350 G3.

Each VSA gets its functionality by running the latest version of the LeftHand SAN/IQ storage software platform within a VM. Our test scenarios focused on ease of use, performance, and failover with two and three VSAs forming a cluster.



These tests were explicitly designed to test the flexibility of SAN/IQ. We focused on flexibility because a key component of the SAN/IQ value proposition is automation of system management tasks. A heterogeneous cluster tests the ability of SAN/IQ to aggregate disparate devices into a simple, uniform virtual resource pool. Nonetheless, a homogeneous cluster will provide a virtualized pool with more predictable performance characteristics.

In all cases, the enterprise-class features of the SAN/IQ virtual environment include thin provisioning, synchronous and asynchronous replication, automatic failover and failback, and snapshots. Administrators access and manage all of these functions via a Java-based Centralized Management Console. Nonetheless, what distinguishes LeftHand Networks from their competitors is the degree to which SAN/IQ simplifies the configuration and management of a highly available iSCSI SAN. Using software modules, dubbed managers, SAN/IQ automates communications between VSAs, coordinates data replication, synchronizes data when a VSA changes state, and handles cluster reconfiguration as VSAs are brought up or shut down.

Even more important for small business sites and branch offices, SAN/IQ significantly simplifies the configuration and management of storage for its iSCSI SAN. The strategy employed by SAN/IQ to accomplish SAN simplification is to eliminate traditional administrator tasks via automation within the VSAs. The extensive use of automation within SAN/IQ goes as far as entirely handling I/O load balancing, which is critical for cluster scalability. Tasks that are not entirely automated are generalized for virtual storage and implemented at the cluster level of the SAN/IQ storage hierarchy. As a result, system administrators provision and manage volumes working exclusively at the cluster level.

What enables the advanced automation implemented by SAN/IQ is an ingenious method for storing data in clusters. The first indication of this scheme becomes visible when the total storage supported by a cluster is not the strict sum of the available direct attached storage at each of the VSA storage modules that comprise the cluster.

As VSAs join to form a cluster, SAN/IQ examines the total amount of direct attached storage (DAS) discovered at each ESX server host. SAN/IQ then automatically creates a cluster-wide storage pool in which each VSA storage module is allocated the same amount of disk space, equal to the smallest amount of storage found in any individual VSA pool.
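That allocation rule is easy to state in code. The sketch below is our own illustration of the arithmetic, not SAN/IQ source; the function name and the gigabyte figures are hypothetical.

    def cluster_pool_capacity(das_per_vsa_gb):
        """Pooled capacity under the SAN/IQ-style rule: every VSA
        contributes the same amount of space, equal to the smallest
        DAS pool found on any cluster member."""
        return min(das_per_vsa_gb) * len(das_per_vsa_gb)

    # Three VSAs with 100, 200, and 150 GB of local DAS pool to
    # 3 x 100 GB = 300 GB, not the 450 GB raw sum.
    print(cluster_pool_capacity([100, 200, 150]))  # -> 300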



When a system administrator creates a virtual disk volume in a cluster, SAN/IQ apportions the disk-image file to equally sized subdisks residing on each of the VSA storage modules in the cluster. As a result, all I/O commands associated with that volume are spread across all of the VSAs in that cluster. This scheme provides maximal scalability for the cluster with no intervention or tuning from a system administrator. What's more, whenever a new VSA joins a cluster or an existing VSA is removed, SAN/IQ immediately restructures all of the cluster volumes as a background task.

To provide for high availability, an administrator can assign or change the replication level of a volume: none, two-, three-, or four-way replication. With two-, three-, or four-way replication, each VSA replicates its portion of a volume onto one, two, or three adjacent nodes in the cluster. That means each member of the cluster stores a distinct subset of two, three, or four subdisk regions. In turn, one, two, or three cluster members could fail and the entire volume would still be available online.
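A minimal sketch of that placement scheme, assuming each VSA copies its subdisk onto the next nodes in ring order (the layout function and names are our own illustration, not SAN/IQ internals):

    def place_subdisks(nodes, replication_level):
        """Map each node's primary subdisk onto itself plus the next
        (replication_level - 1) adjacent nodes in ring order."""
        n = len(nodes)
        layout = {node: set() for node in nodes}
        for i in range(n):
            for r in range(replication_level):
                layout[nodes[(i + r) % n]].add("subdisk-%d" % i)
        return layout

    # Three VSAs, two-way replication: each node carries two distinct
    # subdisk regions, so any single node can fail without data loss.
    for node, parts in sorted(place_subdisks(["vsa1", "vsa2", "vsa3"], 2).items()):
        print(node, sorted(parts))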

To automate decision-making in such an environment, SAN/IQ implements a voting algorithm that requires a strict majority of managers, dubbed a quorum, to agree on any course of action to be taken. As a result, three or five managers should be active within a Management Group for SAN/IQ to ensure high availability for clusters. Nonetheless, a single-node, single-manager configuration will function perfectly well.


[Figure: In a cluster with three VSAs, SAN/IQ automatically allocates three times the space discovered in the VSA with the least amount of DAS as the total capacity of the cluster. When a logical disk volume, F1, is created, SAN/IQ assigns each VSA one third of the volume. With two-way replication, each VSA also copies its section to the next node. As a result, any node can fail without affecting the volume's availability in the cluster.]

[Figure: From the perspective of the SAN/IQ console, we immediately could see that our initial 25GB volume, which had the default of two-way replication, was consuming 50GB of space in the cluster's pooled storage.]



We initially created a SAN/IQ Management Group and cluster with just two VSAs. That scheme required us to activate a manager on each VSA and add a third manager to have a "quorum" of managers. The easiest way to add a third manager was to run an additional Virtual Manager on one of the ESX servers. A more robust solution would be to run the LeftHand Networks Failover Manager, which is simply a VM with just the SAN/IQ manager module running within a VMware Player on an existing Windows server.
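The majority rule itself is simple to state. The check below is our own sketch of strict-majority voting; it also shows why two managers alone cannot ride out a failure, which is why the third manager was needed.

    def has_quorum(active_managers, total_managers):
        """The cluster may act only when a strict majority of all
        configured managers is reachable."""
        return active_managers > total_managers // 2

    print(has_quorum(2, 3))  # True: three managers tolerate one loss
    print(has_quorum(1, 2))  # False: two managers deadlock after one loss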

After running with two VSAs and a Virtual Manager on two ESX servers, openBench Labs opted to build a more robust VI configuration by adding a third ESX server. When we added the third VSA to the cluster, SAN/IQ immediately began to restripe the existing virtual volume across all three local storage modules. In this configuration, we were able to select three-way replication for volumes that required the highest availability. With multiple levels of replication, SAN/IQ provides a subtle means to tier storage based on data availability.

To lessen the immediate impact of storage provisioning with replication, which can be prodigious, SAN/IQ supports full and thin provisioning. Full provisioning reserves the same amount of space on the SAN that is presented to application servers. On the other hand, thin provisioning reserves space based on the amount of actual data, rather than the total capacity that is presented to application servers.

The traditional advantages of thin provisioning arose from the fact that expanding volumes on Windows and Linux servers was once a very daunting task. Thin provisioning at the SAN avoided all the expenses related to either immediately over-provisioning storage or later overhauling an OS. While advances in logical volume support within Windows and Linux now mitigate the impact of expanding volumes within an OS, system virtualization greatly expands the matrix of mappings from all of the systems to all of the physical servers on which those systems may run. As a result, thin provisioning at the SAN level becomes absolutely essential in a Virtual Infrastructure environment.

As with load balancing, SAN/IQ fully supports dynamic allocation of storage resources by automatically growing logical volumes from a pool of physical storage. For dynamic growth, thin provisioning is activated for any logical drive using a simple checkbox. Once set, SAN/IQ automatically handles all of the details and maps additional storage blocks on the fly as real usage warrants the growth of a logical drive. From a storage management perspective, it is only necessary to ensure that an adequate number of DAS drives are available from which physical blocks can be allocated as needed to logical blocks.



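A toy model of the difference between the two provisioning modes (class and field names are illustrative; the real allocator is internal to the appliance):

    class ThinVolume:
        """Reserve physical blocks only as data is actually written."""
        def __init__(self, presented_gb):
            self.presented_gb = presented_gb  # capacity the host sees
            self.reserved_gb = 0              # space consumed in the pool

        def write(self, gb):
            # Map additional blocks on the fly as real usage grows.
            self.reserved_gb = min(self.presented_gb, self.reserved_gb + gb)

    class FullVolume:
        """Reserve the entire presented capacity up front."""
        def __init__(self, presented_gb):
            self.presented_gb = presented_gb
            self.reserved_gb = presented_gb

    # A 100GB thin volume holding 12GB of real data consumes 12GB of
    # pool space; the fully provisioned equivalent consumes all 100GB.
    thin = ThinVolume(100)
    thin.write(12)
    print(thin.reserved_gb, FullVolume(100).reserved_gb)  # 12 100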

BENCHMARK PARTICULARS

To test both the functionality and the performance of the VSA-supported iSCSI SAN, we ran our oblLoad random-access benchmark within Windows Server 2003 VMs on each of our core ESX servers. We measured the results within the test VM and also measured the performance of the three VSAs via VMware's VI Console client. Performance for oblLoad was entirely dependent upon disk spindles and controller caching:

1. The number of drives in each server's data array: four Ultra320 SCSI drives on the HP ProLiant ML350 server and six 10,000rpm SATA drives on the Dell 1900 server.

2. The caching capabilities of each server's storage controller: an HP SmartArray with a 128MB cache and a Dell PERC 5/i with a 256MB cache.

To assess iSCSI performance, oblLoad stresses the ability of all storage and networking components in a SAN fabric to support rapid responses to excessively high numbers of I/O operations per second (IOPS). The oblLoad benchmark simulates database access in a high-volume transaction-processing environment. As a result, this benchmark stresses data access far more than it stresses data throughput.

The goal of oblLoad is to measure the total number of IOPS that complete with the constraint that the average of all response times never exceeds 100ms. To do this, the benchmark launches an increasing number of disk I/O daemons that initiate a series of 8KB read/write requests. Requests are placed into two groups: one group is randomly distributed over the entire volume and the other is directed at a fixed hot spot, which represents database index tables. That hot spot serves as a means to assess caching performance of the underlying storage system. As the number of disk daemons increases, so too should the effectiveness of the array controller's caching within the hot spot.
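oblLoad itself is an openBench Labs tool, so the sketch below is only a rough Python model of the access pattern described above: fixed-size 8KB requests split between a fixed hot spot and the whole volume. The 50/50 split, the read-only requests, and the file path are our assumptions, not oblLoad source.

    import random, time

    def io_daemon(path, volume_bytes, hot_bytes, n_requests=1000):
        """Issue 8KB reads split between a fixed hot spot at the start
        of the volume and uniformly random offsets, returning the
        average response time in milliseconds."""
        latencies = []
        with open(path, "rb") as f:
            for _ in range(n_requests):
                if random.random() < 0.5:  # hot-spot group (index tables)
                    offset = random.randrange(0, hot_bytes, 8192)
                else:                      # uniform group (whole volume)
                    offset = random.randrange(0, volume_bytes, 8192)
                t0 = time.perf_counter()
                f.seek(offset)
                f.read(8192)
                latencies.append((time.perf_counter() - t0) * 1000.0)
        return sum(latencies) / len(latencies)

    # Scale up the number of daemons until the average response time
    # across all of them crosses the 100ms ceiling.
    # avg_ms = io_daemon("/mnt/testvol/fill.dat", 25 * 2**30, 2**30)  # hypothetical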

The oblLoad benchmark stresses data access far more than it does data throughput, which, in the context of iSCSI command processing, creates much more overhead for all SAN components than a sequential throughput test. As numbers of IOPS rapidly increase, a storage system that can keep pace will in effect help to generate more overhead, as the host must process more network packets and SCSI commands. That's why solutions to I/O bottlenecks often are the cause of CPU bottlenecks.




Functionality and Performance Spectrum

"In theory, at the point of VSA failure, the remaining managers should confer and move all processing for the logical drive to the remaining VSA... In practice, that is exactly what happened. More importantly, all of that happened automatically."

MANAGEMENT HIERARCHY

The SAN/IQ Central Management Console provides a single-window view into a world of SAN storage that is distinctly hierarchical in structure. Under this scheme, VSAs are federated to simplify the notion of management by policy. At the top of the hierarchy is the Management Group, which integrates a collection of distributed VSAs within a specific set of security and access policies. By allowing administrators to dynamically add or delete a VSA, a Management Group creates a federated approach to data integration, pooling, and sharing.

Within a Management Group, VSAs can be grouped into a storage cluster for the purpose of aggregating local storage into a single, large virtualized storage resource that is compliant with the policies of the Management Group. The goal is to lower the cost of storage management by hiding all of the complexities associated with the physical aspects of direct attached storage, such as data location and local ownership.

Using the SAN/IQ console, we were able to log into our site's Management Group and gain access to each VSA without having to log into each VSA individually.


SAN/IQ Hierarchy in a VI Environment

Management Group: A collection of clustered storage pools grouped under specific storage policies that typically correspond to Service Level Agreements.

Cluster: A group of VSAs that form a storage pool.

Storage System Modules: A Virtual SAN Appliance, which is configured with virtual RAID; physical RAID configuration is a hardware or ESX server function.

Volumes: Data storage partitions created in a cluster and exported as logical disks.

Snapshots: Read-only copies of a volume created at a specific point in time.

Remote Copy: A specialized snapshot copied to a remote volume.



In particular, thanks to the SAN/IQ hierarchy, we had complete access to our storage cluster and every VSA contained within our cluster. More importantly, by virtue of this hierarchy, we were able to configure and manage VSAs holistically.

SANITIZED DAS

Without a distributed lock manager in their file systems, the pitfalls of sharing logical volumes between systems in a Windows or Linux environment are so catastrophic and so certain that it is only a matter of time until a key metadata file is corrupted and the volume's structure dissolves faster than a gumball under Niagara Falls. To avoid the prospect of having critical disk volumes turned into random collections of ones and zeros, storage virtualization on a traditional Fibre Channel SAN has become a very critical and exacting process. For a storage administrator, ensuring that a logical volume can only be accessed by a single host typically involves a combination of LUN masking and switch-port zoning: a process that drives up the cost of SAN management.

The reason that standard file systems lack a distributed locking mechanism can be summed up in one word: complexity. In a typical Windows or Linux logical volume, there will be tens if not hundreds of thousands of files. In contrast, an ESX volume, dubbed a datastore in the VI argot, will have just tens of files, each of which is a complete disk image. That reduces the complexity of developing a distributed file-locking mechanism for VMFS by three orders of magnitude.

With a distributed locking mechanism to ensure that VMFS files are accessed exclusively, ESX expects to share volumes. As a result, lightweight virtualization is all that is needed in a VI environment, which makes an iSCSI SAN a powerful, low-cost mechanism for distributing storage.

Within SAN/IQ, system administrators easily handle the presentation of virtualized volumes to hosts by creating Authentication Groups, setting security policies for the virtualization of target volumes within a Management Group. Virtualization rules can be based on the unique ID of the client's iSCSI initiator or on the Challenge-Handshake Authentication Protocol (CHAP). More importantly, the SAN/IQ VSAs create a robust, full-featured, general-purpose iSCSI SAN. Any system with LAN connectivity can be a SAN client. IT can even leverage this SAN to centralize storage for all local desktop machines in addition to any ESX servers that are not part of the Management Group.
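A simplified model of how such a rule gates a login (the rule format and helper function are our own; SAN/IQ's actual Authentication Group schema is not reproduced here):

    def authorize(rule, initiator_iqn, chap_secret=None):
        """Grant access when the client's iSCSI initiator ID matches the
        Authentication Group rule and, if CHAP is configured, when the
        presented secret matches as well."""
        if initiator_iqn not in rule["allowed_initiators"]:
            return False
        if rule.get("chap_secret") is not None:
            return chap_secret == rule["chap_secret"]
        return True

    # Trust the Dell 1900's ESX software initiator by its IQN alone
    # (the IQN shown is illustrative).
    rule = {"allowed_initiators": {"iqn.1998-01.com.vmware:dell1900"}}
    print(authorize(rule, "iqn.1998-01.com.vmware:dell1900"))  # True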



DATA IN A KLEIN BOTTLE

Once we granted all of our ESX servers iSCSI access rights to the logical volumes that the VSA cluster was exporting, we were able to import those volumes on any ESX server for use with any of its VMs. On each ESX server, we essentially re-imported storage that initially had been attached, configured, and orphaned on a single ESX server.

The foundation for this networking legerdemain is a virtual switched LAN created within each ESX server to handle all networking for hosted VMs. On each ESX server, the foundation of the virtual switch was a pair of physical Ethernet TOEs (TCP offload engine NICs), which were teamed by ESX.


[Figure: Working at the Management Group level, we created access lists that would handle a group of iSCSI volumes. We virtualized ownership by granting access rights tied to the iSCSI initiator ID of a system, such as the Dell 1900 server running via the VMware software initiator in ESX.]

[Figure: Using the VMware VI Console, we created a virtual LAN infrastructure that included a VMkernel port for use by the VMware iSCSI initiator on each of our ESX servers. Hosted Windows and Linux VMs did not make direct iSCSI connections. Only the ESX server was authorized to connect with our VSA cluster. Those connections were virtualized as standard SCSI connections for the VMs.]


All VMs, including our VSAs, connect to the virtual switch via virtual NICs. We also needed to create a virtual NIC for the ESX server and a port utilized by the ESX kernel to make iSCSI connections via VMware's initiator software.

In configuring our virtual machines, openBench Labs maintained a 1-to-1 relationship between disk volumes and datastores. By so doing, we were able to maximize the mobility of disk volumes in our test environment. A far more common practice is to group disks by system or, more broadly, by business process within a datastore.

ESX Server provides two ways to make blocks of storage accessible to a virtual machine. The first way is to use an encapsulated VM disk file. This is precisely what we needed to do when we configured the logical disks presented by the direct attached storage systems on the HP and Dell servers. As a result, the logical drives utilized by our VSA cluster were formatted as VMFS datastores.

The alternative is to use a raw LUN formatted with a native file system associated with the virtual machine (VM), as though it were a VMFS-hosted file. That scheme, dubbed Raw Device Mapping (RDM), requires an associated VMFS datastore to host a pointer file to redirect requests from the VMFS file system to the raw LUN. Interestingly, when we imported raw iSCSI devices from our VSA cluster, the "raw" underlying file system was VMFS.

As a result of our using VMFS files with our virtual machines, we needed to fill all logical volumes, on the order of 99% utilization, with real data before starting any tests. That's because when ESX creates a new disk file inside a datastore, it creates a sparse file with a known structure. In particular, read requests to any "empty" blocks will never trigger a physical read.
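To keep benchmark reads honest against such sparse disk files, a guest-side filler along these lines suffices (a sketch of what we would run inside the test VM; the path and sizing are illustrative):

    import os

    def fill_volume(path, total_bytes, chunk_mb=64, fill_ratio=0.99):
        """Write pseudo-random data until the volume is ~99% full, so
        that later read requests land on physically allocated blocks
        rather than never-written holes in a sparse disk file."""
        target = int(total_bytes * fill_ratio)
        written = 0
        with open(path, "wb") as f:
            while written < target:
                chunk = os.urandom(min(chunk_mb * 2**20, target - written))
                f.write(chunk)
                written += len(chunk)
            f.flush()
            os.fsync(f.fileno())  # push the data out of the OS cache

    # Example: fill a 25GB logical volume before the first oblLoad run.
    # fill_volume(r"D:\fill.dat", 25 * 2**30)  # hypothetical guest path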

WHIRLING DAEMONS

The most important parameter for assessing iSCSI performance is the number of I/O operations that can be performed. When we ran our oblLoad tests, which measure I/O requests per second (IOPS), we measured the results on two dimensions:

1. Locally, we measured the rate of IOPS completed on a Windows Server VM.

2. Globally, we measured the data throughput rate at each VSA in the SAN/IQ cluster.



We began the process using a VM running Windows Server 2003 on our Dell PowerEdge 1900. Quickly it became clear we were not stressing the limits of our iSCSI cluster, but rather the limits of the local storage systems. In particular, the VSA on the HP ProLiant ML350 G3, which was configured with half of the controller cache and 67% of the drive spindles of the Dell PowerEdge 1900, was twice as slow processing its half of the logical volume.

Clearly, the implication of that data is that the HP server was slowing down cluster performance. To verify that hypothesis, we used the VMware VI console to suspend processing of the VSA running on the HP ML350 server while the VM on the Dell ESX server was running oblLoad. In essence, we created a VSA failure.


[Figure: Running in a Windows Server VM on the Dell 1900, oblLoad delivered close to 2,500 IOPS over a wide spectrum of I/O daemons. Again, SAN/IQ balanced the I/O load over both VSA modules in our heterogeneous cluster. A closer look at throughput on each VSA, however, shows the Dell server with twice the cache and 50% more spindles was twice as fast processing I/Os.]

[Figure: When we failed the VSA running on the HP ML350 ProLiant server, there was an incredibly slight pause on the VM running the oblLoad benchmark. IOPS dipped 15%, from 2,050 to 1,725 IOPS, as SAN/IQ failed the cluster over to the remaining VSA. Performance then rose 45%, to 2,960 IOPS, with the faster VSA. When we ran oblLoad in the single-VSA configuration, the average IOPS rate rose to 3,200.]



In theory, at the point of VSA failure, the remaining two managers should confer and move all processing for the logical drive to the remaining VSA on the Dell server. What's more, now that the cluster was entirely dependent on the Dell server with its more robust storage subsystem, I/O processing should show a measurable improvement. In practice, that is exactly what happened. More importantly, all of that happened automatically with no intervention on our part.

Value Proposition

"Beyond providing an iSCSI SAN that ESX servers and any other clients can leverage, the LeftHand Networks Virtual SAN Appliance creates a highly automated SAN that imposes very little management overhead."

DOING IT

For CIOs today, two top-of-mind propositions are resource consolidation and virtualization. Both are considered excellent ways to reduce the cost of IT operations through efficient and effective utilization of IT resources, which extends from capital equipment to human capital.

With the functions of IT resources separated from their physical implementations, resources can be placed into a small number of generic pools, for which it is much easier to create rules and procedures that focus on utilization. That decoupling also allows storage resources to be physically distributed and yet centrally managed via a virtual storage pool.

Through device virtualization, SANs allow administrators to more easily take advantage of robust RAS features for data protection and recovery, such as snapshots and replication. What's more, the ability to create system templates for operating systems and applications software further helps to simplify and standardize IT configurations and the provisioning of those configurations. This makes virtualization of systems, storage, and networks a holistic necessity.


Virtual SAN Appliance Quick ROI

Full Virtualization of Direct Attached Storage

Simplified iSCSI SAN Infrastructure Management

Leverage All Advanced VMware Features

No Single Point of Failure

No Additional Hardware Required



Nonetheless, SAN infrastructure costs have historically presented a significant hurdle to SAN adoption and expansion. Storage virtualization on an FC SAN with physical servers is a very complex proposition. As a result, the benefits of SAN architecture have not spread beyond servers in computer centers.

On the other hand, storage virtualization in a VMware Virtual Infrastructure environment is far less complex thanks to the nature of the VMFS file system, for which a file is a disk image much like a CD-ROM. As a result, a VI environment can better leverage a SAN, while at the same time it simplifies the services that it requires from a SAN. That makes an iSCSI SAN a powerful way to cost-effectively extend the benefit of physical and functional separation traditionally derived from an FC SAN via a lighter-weight mechanism.

What's more, with the LeftHand Networks VSA, that functionality can be leveraged on a foundation of direct attached storage with no additional costs for any iSCSI hardware components. By provisioning direct attached storage to clustered Virtual SAN Appliances on each ESX server, that storage can be re-imported as iSCSI volumes, which opens the door to all of the advanced features of a VI environment while constraining the costs of operations management.

Beyond providing an iSCSI SAN that ESX servers and any other clients can leverage, the LeftHand Networks Virtual SAN Appliance creates a highly automated SAN that imposes very little management overhead. Through the creation of a management hierarchy to virtualize resources and an automation scheme within that hierarchy, SAN/IQ creates a high-availability scenario that can easily be managed by system administrators in a small business's IT environment or via lights-out management hardware at a remote branch office.
