
G00263130

Critical Capabilities for General-Purpose, High-End Storage Arrays

Published: 20 November 2014

Analyst(s): Valdis Filks, Stanley Zaffos, Roger W. Cox

Here, we assess 12 high-end storage arrays across high-impact use cases and quantify products against the critical capabilities of interest to infrastructure and operations. When choosing storage products, I&O leaders should look beyond technical attributes, incumbency and vendor/product reputation.

Key Findings

■ With the inclusion of solid-state drives in arrays, performance is no longer a differentiator in its own right, but a scalability enabler that improves operational and financial efficiency by facilitating storage consolidation.

■ Product differentiation is created primarily by differences in architecture, software functionality, data flow, support and microcode quality, rather than components and packaging.

■ Clustered, scale-out, and federated storage architectures and products can achieve levels of scale, performance, reliability, serviceability and availability comparable to traditional, scale-up high-end arrays.

■ The feature sets of high-end storage arrays adapt slowly, and the older systems are incapable of offering data reduction, virtualization and unified protocol support.

Recommendations

■ Move beyond technical attributes to include vendor service and support capabilities, as well as acquisition and ownership costs, when making your high-end storage array buying decisions.

■ Don't always use the ingrained, dominant considerations of incumbency, vendor and product reputations when choosing high-end storage solutions.

■ Vary the ratios of SSDs, Serial Attached SCSI and SATA hard-disk drives in the storage array, and limit maximum configurations based on system performance to ensure that SLAs are met during the planned service life of the system.

■ Select disk arrays based on the weighting and criteria created by your IT department to meet your organizational or business objectives, rather than choosing those with the most features or highest overall scores.

What You Need to Know

Superior nondisruptive serviceability and data protection characterize high-end arrays. They are the visible metrics that differentiate high-end array models from other arrays, although the gap is closing. The software architectures used in many high-end storage arrays can trace their lineage back 20 years or more.

Although this maturity delivers high availability and broad ecosystem support, it is also becoming a hindrance with respect to flexibility, adaptability and delays to the introduction of new features, compared with newer designs. Administrative and management interfaces are often more complicated when using arrays involving older software designs, no matter how much the internal structures are hidden or abstracted. The ability of older systems to provide unified storage protocols, data reduction and detailed performance instrumentation is also limited, because the original software was not designed with these capabilities as design objectives.

Gartner expects that, within the next four years, arrays using legacy software will need major re-engineering to remain competitive against newer systems that achieve high-end status, as well as hybrid storage solutions that use solid-state technologies to improve performance, storage efficiency and availability. In this research, the differences in aggregated scores among the arrays are minimal. Therefore, clients are advised to look at the individual capabilities that are important to them, rather than the overall score.

Because array differentiation has decreased, the real challenge of a successful storage infrastructure upgrade is not designing one that works, but designing one that optimizes agility and minimizes total cost of ownership (TCO). Another practical consideration is that choosing a suboptimal solution is likely to have only a moderate impact on deployment and TCO for the following reasons:

■ Product advantages are usually short-lived and temporary. Gartner refers to this phenomenon as the "compression of product differentiation."

■ Most clients report that differences in management and monitoring tools, as well as ecosystem support among various vendors' offerings, are not enough to change staffing requirements.

■ Storage TCO, although growing, still accounts for less than 10% (6.5% in 2013) of most IT budgets.

Page 2 of 28 Gartner, Inc. | G00263130

Analysis

Introduction

The arrays evaluated in this research include scale-up, scale-out, hybrid and unified storage architectures. Because these arrays have different availability characteristics, performance profiles, scalability, ecosystem support, pricing and warranties, they enable users to tailor solutions against operational needs, planned new application deployments, and forecast growth rates and asset management strategies.

Midrange arrays with scale-out characteristics can satisfy the high-availability criteria when configured with four or more controllers and multiple disk shelves. Whether these differences in availability are enough to affect infrastructure design and operational procedures will vary by user environment, and will also be influenced by other considerations, such as host system/capacity scaling, downtime costs, lost opportunity costs and the maturity of the end-user change control procedures (e.g., hardware, software, procedures and scripting), which directly affect availability.

Critical Capabilities Use-Case Graphics

The weighted capabilities scores for all use cases are displayed as components of the overall score (see Figures 1 through 6).
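A weighted use-case score of this kind is a sum of per-capability product ratings multiplied by the use case's capability weightings. The sketch below illustrates the general computation only; all capability names, ratings and weights are invented for illustration and are not Gartner's actual data.

```python
# Hypothetical sketch of how a weighted use-case score is derived: each
# product's rating per critical capability is multiplied by the use case's
# weighting for that capability, then summed. All names and numbers below
# are invented for illustration.

capability_ratings = {  # product ratings on a 1 (poor) to 5 (outstanding) scale
    "Manageability": 4.2,
    "RAS": 4.5,
    "Performance": 3.8,
    "Scalability": 4.0,
}

oltp_weights = {  # one use case's capability weightings; they sum to 1.0
    "Manageability": 0.20,
    "RAS": 0.35,
    "Performance": 0.30,
    "Scalability": 0.15,
}

def weighted_score(ratings, weights):
    """Sum of rating x weight across all critical capabilities."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(ratings[cap] * w for cap, w in weights.items())

print(f"OLTP use-case score: {weighted_score(capability_ratings, oltp_weights):.2f}")
```

Because the weightings differ per use case, the same product can rank differently across Figures 1 through 6, which is why individual capability scores matter more than the overall score.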


Figure 1. Vendors' Product Scores for the Overall Use Case

Source: Gartner (November 2014)


Figure 2. Vendors' Product Scores for the Consolidation Use Case

Source: Gartner (November 2014)


Figure 3. Vendors' Product Scores for the OLTP Use Case

Source: Gartner (November 2014)


Figure 4. Vendors' Product Scores for the Server Virtualization and VDI Use Case

Source: Gartner (November 2014)


Figure 5. Vendors' Product Scores for the Analytics Use Case

Source: Gartner (November 2014)


Figure 6. Vendors' Product Scores for the Cloud Use Case

Source: Gartner (November 2014)

Vendors

DataDirect Networks SFA12K

The SFA12KX, the newest member of the SFA12K family, increases SFA12K performance/throughput via a hardware refresh and through software improvements. Like other members of the SFA12K family, it remains a dual-controller array that, with the exception of an in-storage processing capability, prioritizes scalability, performance/throughput and availability over value-added functionality, such as local and remote replication, thin provisioning and autotiering. These priorities align better with the needs of the high-end, high-performance computing (HPC) market than with general-purpose IT environments. Further enhancing the appeal of the SFA12KX in large environments is dense packaging: 84 HDDs/4U or 5 PB/rack, and GridScaler and ExaScaler gateways that support parallel file systems, based on IBM's GPFS or the open-source Lustre parallel file system.

The combination of high bandwidth and high areal densities has made the SFA12K a popular array in the HPC, cloud, surveillance and media markets that prioritize automatic block alignment and bandwidth over input/output operations per second (IOPS). The SFA12K's high areal density also makes it an attractive repository for big data and inactive data, particularly as a backup target for backup solutions doing their own compression and/or deduplication. Offsetting these strengths are limited ecosystem support beyond parallel file systems and backup/restore products; lack of vSphere API for Array Integration (VAAI) support, which limits its appeal for use as VMware storage; lack of zero bit detection, which limits its appeal with applications such as Microsoft Exchange and Oracle Database; and quality of service (QoS) and security features that could limit its appeal in multitenancy environments.

EMC VMAX

The maturity of the VMAX 10K, 20K and 40K hardware, combined with the Enginuity software and wide ecosystem support, provides proven reliability and stability. However, the need for backward compatibility has complicated the development of new functions, such as data reduction. The VMAX3, which has recently become generally available, has not yet had time to be market-validated, because it only became available on 26 September 2014. Even with new controllers, promised Hypermax software updates and a new InfiniBand internal interconnect, mainframe support is not available, nor is the little-used Fibre Channel over Ethernet (FCoE) protocol. Nevertheless, with new functions, such as built-in VPLEX, RecoverPoint replication, virtual thin provisioning and more processing power, customers should move quickly to the VMAX3, because it has the potential to develop further.

The new VMAX 100K, 200K and 400K arrays still lack independent benchmark results, which, in some cases, leads users to delay deploying a new feature into production environments until the feature's performance has been fully profiled, and its impact on native performance is fully understood. The lack of independent benchmark results has also led to misunderstandings regarding the configuration of back-end SSDs and HDDs into redundant array of independent disks (RAID) groups, which have required users to add capacity to enable the use of more-expensive 3D+1P RAID groups to achieve needed performance levels, rather than larger, more-economical 7D+1P RAID groups.
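The capacity cost of the narrower RAID group is simple arithmetic: a RAID 5 group with D data drives and one parity drive yields a usable fraction of D/(D+1). The sketch below uses the 3D+1P and 7D+1P layouts from the text; the 100 TB usable target is illustrative.

```python
# Arithmetic behind the 3D+1P versus 7D+1P trade-off: a RAID 5 group with
# D data drives and one parity drive yields a usable fraction of D / (D + 1),
# so narrower groups need more raw capacity for the same usable capacity.
# The 100 TB usable target is illustrative.

def usable_fraction(data_drives, parity_drives=1):
    """Usable share of raw capacity for a D+P RAID group."""
    return data_drives / (data_drives + parity_drives)

for d in (3, 7):
    frac = usable_fraction(d)
    raw_needed = 100 / frac  # raw TB required for 100 TB usable
    print(f"{d}D+1P: usable {frac:.1%}, raw for 100 TB usable: {raw_needed:.1f} TB")
```

A 3D+1P group keeps only 75% of raw capacity usable versus 87.5% for 7D+1P, which is why choosing the narrower group for performance forces users to buy additional capacity.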

EMC's expansion into software-defined storage (SDS; aka ViPR), network-based replication (aka RecoverPoint) and network-based virtualization (aka VPLEX) suggests that new VMAX users should evaluate the use of these products, in addition to VMAX-based features, when creating their storage infrastructure and operational visions.

Fujitsu Eternus DX8700 S2

The DX8700 S2 series is a mature, high-end array with a reputation for robust engineering and reliability, with redundant RAID groups spanning enclosures and redundant controller failover features. Within the high-end segment, Fujitsu offers simple unlimited software licensing on a per-controller basis; therefore, customers do not need to spend more as they increase the capacity of the arrays. The DX8700 S2 series was updated with a new level of software to improve performance and improved QoS, which not only manages latency and bandwidth, but also integrates with the DX8700 Automated Storage Tiering to move data to the required storage tier to meet QoS targets. It is a scale-out array, providing up to eight controllers.

The DX8700 S2 has offered massive array of idle disks (MAID), or disk spin-down, for years. Even though this feature has been implemented successfully without any reported problems, it has not gained popular market acceptance. The same Eternus SF management software is used across the entire DX product line, from the entry level to the high end. This simplifies manageability, migration and replication among Fujitsu storage arrays. Customer feedback is positive concerning the performance, reliability, support and serviceability of the DX8700 S2, and Gartner clients report that the DX8700 S2 RAID rebuild times are faster than comparable systems. The management interface is geared toward storage experts, but is simplified in the Eternus SF V16, thereby reducing training costs and improving storage administrator productivity. To enable workflow integration with SDS platforms, Fujitsu is working closely with the OpenStack project.

HDS HUS VM

The Hitachi Data Systems (HDS) Hitachi Unified Storage (HUS) VM is an entry-level version of the Virtual Storage Platform (VSP) series. Similar to its larger VSP siblings, it is built around Hitachi's cross-bar switches, has the same functionality as the VSP, can replicate to HUS VM or VSP systems using TrueCopy or Hitachi Universal Replicator (HUR), and uses the same management tools as the VSP. Because it shares APIs with the VSP, it has the same ecosystem support; however, it does not scale to the same storage capacity levels as the HDS VSP G1000. Similarly, it does not provide data reduction features. Hardware reliability and microcode quality are good; this increases the appeal of its Universal Volume Manager (UVM), which enables the HUS VM to virtualize third-party storage systems.

Hitachi Data Systems offers performance transparency with its arrays, with SPC-1 performance and throughput benchmark results available. Client feedback indicates that the use of thin provisioning generally improves performance and that autotiering has little to no impact on array performance. Snapshots have a measurably negative, but entirely acceptable, impact on performance and throughput. Offsetting these strengths are the lack of native Internet Small Computer System Interface (iSCSI) and 10-Gigabit Ethernet (GbE) support, which is particularly useful for remote replication, as well as relatively slow integration with server virtualization, database, shareware and backup offerings. Integration with the Hitachi NAS platform adds iSCSI, Common Internet File System (CIFS) and Network File System (NFS) protocol support for users that need more than just Fibre Channel support.

HDS VSP G1000

The VSP has built its market appeal on reliability, quality microcode and solid performance, as well as its ability to virtualize third-party storage systems using UVM. The latest VSP G1000 was launched in April 2014, with more capacity and performance/throughput achieved via faster controllers and improved data flows. Configuration flexibility has been improved by a repackaging of hardware that enables controllers to be packaged in a separate rack. VSP packaging also supports the addition of capacity-only nodes that can be separated from the controllers. It provides a larger variety of features, such as unified storage, heterogeneous storage virtualization and content management via integration with HCAP. Data compression and reduction are not supported. Performance needs dictate the configuration of each redundant node's front- and back-end ports, cache, and back-end capacity, each of which can be configured independently. However, accelerated flash can be used to accelerate performance in hybrid configurations. Additional feature highlights include thin provisioning, autotiering, volume-cloning and space-efficient snapshots, synchronous and asynchronous replication, and three-site replication topologies.

The VSP asynchronous replication (aka HUR) is built around the concept of journal files stored on disk, which makes HUR tolerant of communication line failures, allows users to trade off bandwidth availability against recovery point objectives (RPOs) and reduces the demands placed on cache. It also offers a data flow that enables the remote VSP to pull writes to protected volumes on the disaster recovery site, rather than having the production-side VSP push these writes to the disaster recovery site. Pulling writes, rather than pushing them, reduces the impact of HUR on the VSP systems and reduces bandwidth requirements, which lowers costs. Offsetting these strengths are the lack of native iSCSI and 10GbE support, as well as relatively slow integration with server virtualization, database, shareware and backup offerings.
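The bandwidth-versus-RPO trade-off that journal-based replication permits can be illustrated with a toy model: write bursts that exceed replication-link bandwidth accumulate in the on-disk journal, and the RPO grows with that backlog. All rates, link speeds and interval lengths below are invented for illustration.

```python
# Toy model of the bandwidth-versus-RPO trade-off in journal-based
# asynchronous replication (such as HUR): write bursts exceeding link
# bandwidth accumulate in the on-disk journal, and the RPO grows with the
# backlog. Link speed, write rates and interval length are invented.

link_mb_s = 100.0                      # replication link bandwidth, MB/s
write_rates = [80, 150, 200, 60, 40]   # host write rate per interval, MB/s
interval_s = 60                        # each interval lasts one minute

backlog_mb = 0.0
for rate in write_rates:
    # the journal grows when writes outrun the link, and drains otherwise
    backlog_mb = max(backlog_mb + (rate - link_mb_s) * interval_s, 0.0)
    rpo_s = backlog_mb / link_mb_s     # seconds of writes not yet replicated
    print(f"write {rate} MB/s -> journal backlog {backlog_mb:.0f} MB, RPO ~ {rpo_s:.0f} s")
```

The model shows why a link sized below the peak write rate can still be acceptable: the journal rides out bursts at the cost of a temporarily longer RPO, which is exactly the trade-off the text describes.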

HP 3PAR StoreServ 10000

The 3PAR StoreServ 10000 is HP's preferred, go-to, high-end storage system for open-system infrastructures that require the highest levels of performance and resiliency. Scalable from two to eight controller-nodes, the 3PAR StoreServ 10000 requires a minimum of four controller-nodes to satisfy Gartner's high-end, general-purpose storage system definition. It is competitive with small and midsize, traditional, frame-based, high-end storage arrays, particularly with regard to storage efficiency features and ease of use, and HP continues to make material R&D investments to enhance 3PAR StoreServ 10000 availability, performance, capacity scalability and security capabilities. Configuring 3PAR StoreServ storage arrays with four or more nodes limits the effects of high-impact electronics failures to no more than 25% of the system's performance and throughput. The impact of electronics failures is further reduced by 3PAR's Persistent Cache and Persistent Port failover features, which enable the caches in surviving nodes to stay in write-back mode and active host connections to remain online.
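The 25% figure follows from simple fault-domain arithmetic: in a symmetric scale-out array, losing one of n controller-nodes removes roughly 1/n of aggregate performance. A minimal sketch:

```python
# Fault-domain arithmetic behind the four-node guidance: in a symmetric
# scale-out array, losing one of n controller-nodes removes roughly 1/n of
# aggregate performance and throughput, so four nodes cap the worst-case
# impact of a single electronics failure at 25%.

def one_node_failure_impact(nodes):
    """Fraction of system performance lost when a single node fails."""
    return 1 / nodes

for n in (2, 4, 8):
    print(f"{n} nodes: single-node failure costs {one_node_failure_impact(n):.0%}")
```

The same arithmetic explains why a dual-controller design can lose up to 50% of its performance during a failure or a controller-disabling software update.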

Resiliency features include three-site replication topologies, as well as Peer Persistence, which enables transparent failover and failback between two 3PAR StoreServ 10000 systems located within metropolitan distances. However, offsetting the benefit of these functions are the relatively long RPOs that result from 3PAR's asynchronous remote copy sending the difference between two snapshots to faraway disaster recovery sites; microcode updates that can be time-consuming, because the time required is proportional to the number of nodes in the system; and a relatively large footprint caused by the use of four-disk magazines, instead of more-dense packaging schemes.


HP XP7

Sourced from Hitachi Ltd. under joint technology and OEM agreements, the HP XP7 is the next incremental evolution of the high-end, frame-based XP-Series that HP has been selling since 1999. Engineered to be deployed in support of applications that require the highest levels of resiliency and performance, the HP XP7 features increased capacity scalability and performance over its predecessor, the HP XP P9500, while leveraging the broad array of proven HP XP-Series data management software. Two notable enhancements beyond the expected capacity and performance improvements are the new Active-Active High Availability and Active-Active data mobility functions, which elevate storage system and data center availability to higher levels, and provide nondisruptive, transparent application mobility among host servers at the same or different sites. The HP XP7 shares a common technology base with the Hitachi/HDS VSP G1000, and HP differentiates the XP7 in the areas of broader integration and testing with the full HP portfolio ecosystem and the availability of Metro Cluster for HP-UX, as well as by restricting the ability to replicate between XP7 and HDS VSPs.

Positioned in HP's traditional storage portfolio, the primary mission of the XP7 is to serve as an upgrade platform for the XP-Series installed base, as well as to address opportunities involving IBM mainframe and HP NonStop storage infrastructures. Since HP acquired 3PAR, XP-Series revenue has continued to decline annually, as HP places more go-to-market weight behind the 3PAR StoreServ 10000 offering.

Huawei OceanStor 18000

The OceanStor 18000 storage array supports both scale-up and scale-out capabilities. Data flows are built around Huawei's Smart Matrix switch, which interconnects as many as 16 controllers, each configured with its own host connections and cache, with back-end storage directly connected to each engine. Hardware build quality is good, and shows attention to detail in packaging and cabling. The feature set includes storage-efficiency features, such as thin provisioning and autotiering, snapshots, synchronous and asynchronous replication, QoS that nondisruptively rebalances workloads to optimize resource utilization, and the ability to virtualize a limited number of external storage arrays.

Software is grouped into four bundles and is priced on capacity, except for path failover and load-balancing software, which is priced by the number of attached hosts to encourage widespread usage. The compatibility support matrix includes Windows, various Unix and Linux implementations, VMware (including VAAI and vCenter Site Recovery Manager support) and Hyper-V. Offsetting these strengths are relatively limited integration with various backup/restore products, configuration and management tools that are more technology- than ease-of-use-oriented, a lack of documentation and of storage administrators familiar with Huawei, and a support organization that is largely untested outside mainland China.

IBM DS8870

The DS8870 is a scale-up, two-node controller architecture that is based on, and dependent on, IBM's Power server business. Because IBM owns the z/OS architecture, IBM has inherent cross-selling, product integration and time-to-market advantages supporting new z/OS features, relative to its competitors. Snapshot and replication capabilities are robust, extensive and relatively efficient, as shown by features such as FlashCopy; synchronous and asynchronous three-site replication; and consistency groups that can span arrays. The latest significant DS8870 updates include Easy Tier improvements, as well as a High Performance Flash Enclosure, which eliminates earlier, SSD-related architectural inefficiencies and boosts array performance. Even with the addition of the Flash Enclosure, the DS8870 is no longer IBM's highest-performance system, and data reduction features are not available unless extra SAN Volume Controller (SVC) devices are purchased in addition to the DS8870.

Overall, the DS8870 is a competitive offering, but it is at some risk of architectural obsolescence because, with its 12 October announcement, it is approximately two-thirds of the way through its estimated marketing life. Ease-of-use improvements have been achieved by taking the XIV management GUI and implementing it on the DS8870. However, customers report that the new GUI still needs a more detailed administrative approach, and is not yet suited to high-level management, as provided by the XIV icon-based GUI. Due to the dual-controller design, major software updates can disable one of the controllers for as long as an hour. These updates need to be planned, because they can reduce the availability and performance of the system by as much as 50% during the upgrade process. With muted traction in VMware and Microsoft infrastructures, IBM positions the DS8870 as its primary enterprise storage platform to support z/OS and AIX infrastructures.

IBM XIV

The current XIV is in its third generation. Its freedom from legacy dependencies is apparent from its modern, easy-to-use, icon-based operational interface, and a scale-out distributed processing and RAID protection scheme. Good performance and the XIV management interface are winning deals for IBM. This generation enhances performance with the introduction of SSDs and a faster InfiniBand interconnect among the XIV nodes. The advantages of the XIV are simple administration and inclusive software licenses, which make buying and upgrading the XIV simple, without hidden or additional storage software license charges. The mirrored RAID implementation yields a raw-to-usable capacity ratio that is not as efficient as traditional RAID 5/6 designs; therefore, scalability reaches only 325TB of usable capacity. However, together with inclusive software licensing, the XIV usable capacity is priced accordingly, so that the price per TB is competitive in the market.
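The raw-versus-usable remark can be made concrete: mirroring stores every block twice, while a 7D+1P RAID 5 group loses only one drive in eight to parity. In the sketch below, the 650 TB raw figure is illustrative, chosen so that mirroring yields the 325 TB usable capacity cited in the text; real systems also reserve capacity for spares and metadata.

```python
# Raw-versus-usable comparison behind the mirroring remark: mirroring stores
# every block twice (50% usable), while a 7D+1P RAID 5 group loses only one
# drive in eight to parity. The 650 TB raw figure is illustrative.

def usable_tb(raw_tb, scheme):
    """Usable capacity for a given protection scheme."""
    if scheme == "mirror":        # RAID 1-style mirroring
        return raw_tb / 2
    if scheme == "raid5_7+1":     # 7 data drives + 1 parity drive
        return raw_tb * 7 / 8
    raise ValueError(f"unknown scheme: {scheme}")

raw = 650  # TB, illustrative
for scheme in ("mirror", "raid5_7+1"):
    print(f"{scheme}: {usable_tb(raw, scheme):.2f} TB usable from {raw} TB raw")
```

The gap (50% versus 87.5% usable) is what inclusive licensing and usable-capacity pricing are meant to offset.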

A new Hyper-Scale feature enables IBM to federate a number of XIV platforms to create a PB+ scale infrastructure under the Hyper-Scale Manager, enabling the administration of several XIV systems as one. Positioned as IBM's primary high-end storage platform for VMware, Microsoft Hyper-V and cloud infrastructure deployments, the XIV has received several new and incremental enhancements, foremost of which are three-site mirroring, multitenancy and VMware vCloud Suite integration.

NetApp FAS8000

The high-end FAS series model numbers were changed from FAS6000 to FAS8000. The upgrade included faster controllers and storage virtualization built into the system and enabled via a software license. Because each FAS8000 HA node pair is a scale-up, dual-controller array, the FAS8000 series must be configured with at least four FAS8000 nodes managed by Clustered Data Ontap to qualify for inclusion in this Critical Capabilities research. Clustered Data Ontap supports a maximum of eight nodes for deployment with storage area network (SAN) protocols and up to 24 nodes with NAS protocols. Depending on drive capacity, it can support a maximum raw capacity of 2.6PB to 23.0PB in a SAN infrastructure, and 7.8PB to 69.1PB in a NAS infrastructure.
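As a quick sanity check on the quoted maximums, dividing each raw-capacity ceiling by its node count shows that the SAN and NAS figures imply roughly the same per-node limit:

```python
# Sanity check on the quoted Clustered Data Ontap maximums: dividing each
# raw-capacity ceiling by its node count shows the SAN (8-node) and NAS
# (24-node) figures imply roughly the same per-node maximum.

san_max_pb, san_nodes = 23.0, 8    # SAN ceiling and node count from the text
nas_max_pb, nas_nodes = 69.1, 24   # NAS ceiling and node count from the text

print(f"SAN: {san_max_pb / san_nodes:.2f} PB per node")
print(f"NAS: {nas_max_pb / nas_nodes:.2f} PB per node")
```

Both ratios come out near 2.9 PB per node, which is consistent with the capacity ceilings scaling linearly with cluster size.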

The FAS system is no longer the flagship high-performance, low-latency storage array for NetApp customers that value performance over all other criteria; they can now choose NetApp products such as the FlashRay. Seamless scalability, nondisruptive upgrades, robust data service software, storage-efficiency capabilities, flash-enhanced performance, unified block-and-file multiprotocol support, multitenant support, ease of use and validated integration with leading independent software vendors (ISVs) are key attributes of an FAS8000 configured with Clustered Data Ontap.

Oracle FS1-2

The hybrid FS1-2 series replaces the Oracle Pillar Axiom storage arrays and is the newest array family in this research. Even though the new system has fewer SSD and HDD slots, scalability in terms of capacity is increased by approximately 30%, to a total of 2.9PB, which includes up to 912TB of SSD. The design remains a scale-out architecture with the ability to cluster eight FS1-2 pairs together. The FS1 has an inclusive software licensing model, which makes upgrades simpler from a licensing perspective. The software features included within this model are QoS Plus, automated tiered storage, thin provisioning, support for up to 64 physical domains (multitenancy) and multiple block-and-file protocol support. However, if replication is required, the Oracle MaxRep engine is a chargeable optional extra.

The MaxRep product provides synchronous and asynchronous replication, consistency groups and multihop replication topologies. It can be used to replicate and, therefore, migrate older Axiom arrays to newer FS1-2 arrays. Positioned to provide best-of-breed performance in an Oracle infrastructure, the FS1-2 enables Hybrid Columnar Compression (HCC) to optimize Oracle Database performance, as well as engineered integration with Oracle VM and Oracle's broad library of data management software. However, the FS1 has yet to fully embrace integration with competing hypervisors from VMware and Microsoft.

Context

Even as much of the storage array market is consolidating into one general-purpose market, Gartner appreciates the entrenched usage and appeal of simple labels. Therefore, even though the terms "midrange" and "high end" no longer accurately describe present array capabilities, user buying behaviors or future market directions, Gartner has chosen to publish separate midrange and high-end Critical Capabilities research (see Note 1). By doing so, Gartner can provide analyses of more arrays in a potentially more traditional, client-friendly format.


Product/Service Class Definition

Architectural Definitions

The following criteria classify storage array architectures by their externally visible characteristics, rather than vendor claims or other nonproduct criteria that may be influenced by fads in the disk array storage market.

Scale-Up Architectures

■ Front-end connectivity, internal bandwidth and back-end capacity scale independently of each other.

■ Logical volumes, files or objects are fragmented and spread across user-defined collections of disks, such as disk pools, disk groups or RAID sets.

■ Capacity, performance and throughput are limited by physical packaging constraints, such as the number of slots in a backplane and/or interconnect constraints.

Scale-Out Architectures

■ Capacity, performance, throughput and connectivity scale with the number of nodes in the system.

■ Logical volumes, files or objects are fragmented and spread across multiple storage nodes to protect against hardware failures and improve performance.

■ Scalability is limited by software and networking architectural constraints, not physical packaging or interconnect limitations.

Hybrid Architectures

■ Incorporate SSD, HDD, compression and/or deduplication into basic design

■ Can be implemented as scale-up or scale-out arrays

■ Can support one or more protocols, such as block or file, and/or object protocols, including FC, iSCSI, NFS, Server Message Block (SMB; aka CIFS), FCoE and InfiniBand

Including compression and deduplication in the initial system design often results in a neutral-to-positive impact on system performance and throughput, as well as simplified management, in part by eliminating byte-boundary alignment considerations in array configurations.

Unified Architectures

■ Can simultaneously support one or more block, file and/or object protocols, including FC, iSCSI, NFS, SMB (aka CIFS), FCoE, InfiniBand and others

■ Include gateway and integrated data flow implementations


■ Can be implemented as scale-up or scale-out arrays

Gateway implementations provision block storage to gateways that implement NAS and object storage protocols. Gateway-style implementations run separate NAS and SAN microcode loads on either virtualized or physical servers and, consequently, have different thin provisioning, autotiering, snapshot and remote copy features that are not interoperable. By contrast, integrated or unified storage implementations use the same primitives independent of protocol, which enables them to create snapshots that span SAN and NAS storage, and dynamically allocate server cycles, bandwidth and cache, based on QoS algorithms and/or policies.

Mapping the strengths and weaknesses of these different storage architectures to various use cases should begin with an overview of each architecture's strengths and weaknesses, as well as an understanding of workload requirements (see Table 1).


Table 1. Strengths and Weaknesses of the Storage Architectures

Scale Up

Strengths:

■ Mature architectures: reliable, cost-competitive, large ecosystems

■ Independently upgradable host connections and back-end capacity

■ May offer shorter RPOs over asynchronous distances

Weaknesses:

■ Performance and bandwidth do not scale with capacity

■ Limited compute power may result in the use of efficiency and data protection features negatively affecting performance

■ Electronics failures and microcode updates may be high-impact events

Scale Out

Strengths:

■ IOPS and GB/sec scale with capacity

■ Nondisruptive load balancing

■ Greater fault tolerance than scale-up architectures

■ Use of commodity components

Weaknesses:

■ High electronics costs relative to back-end storage costs

Hybrid

Strengths:

■ Efficient use of Flash, compression and deduplication

■ Consistent performance experience with minimal tuning

■ Excellent price/performance

■ Low environmental footprint

Weaknesses:

■ Relatively immature technology

■ Limited ecosystem and protocol support

Unified

Strengths:

■ Maximal deployment flexibility

■ Comprehensive storage efficiency features

Weaknesses:

■ Performance may vary by protocol (block versus file)

Source: Gartner (November 2014)

Critical Capabilities Definition

Manageability

This refers to the automation, management, monitoring, and reporting tools and programs supported by the platform. This can include single-pane management consoles, and monitoring and reporting tools designed to seamlessly support personnel, manage systems, and monitor system usage and efficiency.

They can also be used to anticipate and correct system alarms and fault conditions before or soon after they occur.

RAS

Reliability, availability and serviceability (RAS) is a design philosophy that consistently delivers high availability by building systems with reliable components, "derating" components to increase their mean times between failures, and designing systems and clocking to tolerate marginal components.

RAS also involves hardware and microcode designs that minimize the number of critical failure modes in the system; serviceability features that enable nondisruptive microcode updates; diagnostics that minimize human errors when troubleshooting the system; and nondisruptive repair activities. User-visible features can include tolerance of multiple disk and/or node failures, fault isolation techniques, built-in protection against data corruption, and other techniques (such as snapshots and replication) to meet customers' RPOs and recovery time objectives (RTOs).

Performance

This collective term describes IOPS, bandwidth (MB/second) and response times (milliseconds per I/O) visible to attached servers. In well-designed systems, the potential performance bottlenecks are encountered at roughly the same time when supporting various common workload profiles, so that no single component becomes the limiting factor prematurely.

When comparing systems, users are reminded that performance is more a scalability enabler than adifferentiator in its own right.

Snapshot and Replication

These features protect against and recover from data corruption problems caused by human and software errors, and technology and site failures, respectively. They are also useful in reducing backup windows and minimizing the impact of backups on production workloads.

Archiving also benefits from these features in the same way as backups.

Scalability

This refers to the ability of the storage system to grow not just capacity, but also performance and host connectivity. The concept of usable scalability links capacity growth and system performance to SLAs and application needs.

Ecosystem

This refers to the ability of the platform to support third-party ISV applications, such as databases, backup/archiving products and management tools, hypervisor and desktop virtualization offerings, and various OSs.


Multitenancy and Security

This refers to the ability of a storage system to support a diverse set of workloads, isolate workloads from each other, and provide user access controls and auditing capabilities that log changes to the system configuration.

Storage Efficiency

This refers to the ability of the platform to support storage-efficiency technologies, such as compression, deduplication, thin provisioning and autotiering, to improve utilization rates, while reducing storage acquisition and ownership costs.

Use Cases

Overall

The Overall use case is a generalized usage scenario; it does not represent the ways specific users will utilize or deploy technologies or services in their enterprises.

Consolidation

This simplifies storage management and disaster recovery, and improves economies of scale by consolidating multiple, dissimilar storage systems into fewer, larger systems.

RAS, performance, scalability, and multitenancy and security are heavily weighted selection criteria, because the system becomes a shared resource, which magnifies the effects of outages and performance bottlenecks.

OLTP

Online transaction processing (OLTP) is associated with business-critical applications, such as database management systems.

These require 24/7 availability and subsecond transaction response times. Hence, the greatest emphasis is on RAS and performance features, followed by snapshots and replication, which enable rapid recovery from data corruption problems and technology or site failures. Manageability, scalability and storage efficiency are important, because they enable the storage system to scale with data growth, while staying within budget constraints.

Server Virtualization and VDI

This use case encompasses business-critical applications, back-office and batch workloads, anddevelopment.

The need to deliver I/O response times of 5 ms or lower to large numbers of VMs or desktops that generate cache-unfriendly workloads, while providing 24/7 availability, heavily weights performance and storage efficiency, followed closely by multitenancy and security. The heavy reliance on SSDs, autotiering, QoS features that prioritize and throttle I/Os, and DR solutions that are tightly integrated with virtualization software also makes RAS and manageability important criteria.

Analytics

This applies to storage consumed by big data applications using map/reduce technologies.

It also involves all analytic applications that are packaged, or provide business intelligence (BI) capabilities for a particular domain or business problem (see definition in "Hype Cycle for Analytic Applications, 2013").

Cloud

This applies to storage arrays used in private, hybrid and public cloud infrastructures, and how they address specific cost, scale, manageability and performance needs.

Hence, storage efficiency and resiliency are important selection considerations, and are highly weighted.

Inclusion Criteria

This research evaluates the high-end, general-purpose storage systems supporting the use cases enumerated in Table 2.


Table 2. Weighting for Critical Capabilities in Use Cases

Critical Capabilities       Overall  Consolidation  OLTP  Server Virtualization and VDI  Analytics  Cloud

Manageability                 13%        12%         10%             13%                   15%       16%
RAS                           17%        18%         20%             14%                   15%       15%
Performance                   16%         5%         25%             20%                   20%       10%
Snapshot and Replication      10%         5%         10%             12%                   15%       10%
Scalability                   13%        15%         15%              9%                   10%       15%
Ecosystem                      8%         8%          5%             10%                    7%        9%
Multitenancy and Security     11%        18%          5%             10%                    8%       15%
Storage Efficiency            12%        19%         10%             12%                   10%       10%

Total                        100%       100%        100%            100%                  100%      100%

As of November 2014

Source: Gartner (November 2014)

This methodology requires analysts to identify the critical capabilities for a class of products/services. Each capability is then weighted in terms of its relative importance for specific product/service use cases.

The 12 storage arrays selected for inclusion in this research are offered by vendors discussed in "Magic Quadrant for General-Purpose Disk Arrays," which includes arrays supporting block and/or file protocols. Following are the "go/no-go" criteria that must be met for classification as a high-end storage array. These criteria are more stringent than those for midrange arrays; for this reason, arrays that satisfy the high-end criteria also satisfy the midrange criteria. More specifically, high-end arrays must meet the following criteria:

■ Single electronics failures:

■ Are invisible to the SAN and connected application servers

■ Affect less than 25% of the array's performance/throughput

■ Microcode updates:

■ Are nondisruptive and can be nondisruptively backed out

■ Affect less than 25% of the array's performance/throughput

■ Repair activities and capacity upgrades:


■ Are invisible to the SAN and connected application servers

■ Affect less than 50% of the array's performance/throughput

■ Support dynamic load balancing

■ Support local replication and remote replication

■ Typical high-end disk array average selling prices (ASPs) exceed $250,000

The storage arrays evaluated in this research include scale-up, scale-out and unified storage architectures. Because these arrays have different availability characteristics, performance profiles, scalability, ecosystem support, pricing and warranties, they enable users to tailor solutions against operational needs, planned new application deployments, and forecast growth rates and asset management strategies.

Critical Capabilities Rating

Each product or service that meets our inclusion criteria has been evaluated on several critical capabilities on a scale from 1.0 (lowest ranking) to 5.0 (highest ranking). Rankings (see Table 3) are not adjusted to account for differences in various target market segments. For example, a system targeting the small and midsize business (SMB) market is less costly and less scalable than a system targeting the enterprise market, and would rank lower on scalability than the larger array, despite the SMB prospect not needing the extra scalability.


Table 3. Product/Service Rating on Critical Capabilities

Product or Service Ratings (products in column order): DataDirect Networks SFA12K, EMC VMAX, Fujitsu Eternus DX8700 S2, HDS HUS VM, HDS VSP G1000, HP 3PAR StoreServ 10000, HP XP7, Huawei OceanStor 18000, IBM DS8870, IBM XIV, NetApp FAS8000, Oracle FS1-2

Manageability 4.0 4.2 3.8 4.0 4.0 4.5 4.0 3.5 4.0 4.5 4.5 4.0

RAS 3.7 4.3 4.2 4.3 4.5 3.7 4.5 4.2 4.2 4.0 3.7 3.5

Performance 4.5 3.8 4.2 3.7 4.3 4.0 4.3 4.0 4.0 3.8 3.7 3.8

Snapshot and Replication 1.0 4.0 4.0 4.2 4.2 4.0 4.2 4.0 4.0 4.0 4.2 3.5

Scalability 4.5 4.3 4.5 3.3 4.5 4.0 4.5 4.0 3.8 3.0 4.0 3.2

Ecosystem 2.0 4.5 3.2 4.0 4.0 4.0 4.0 3.3 3.5 4.0 4.3 3.0

Multitenancy and Security 3.3 3.7 4.0 4.0 4.2 4.0 4.2 4.0 4.0 3.3 4.3 4.0

Storage Efficiency 3.2 3.5 3.5 3.5 3.5 4.2 3.5 3.3 3.7 2.8 3.7 4.0

As of November 2014

Source: Gartner (November 2014)

Table 4 shows the product/service scores for each use case. The scores, which are generated by multiplying the use case weightings by the product/service ratings, summarize how well the critical capabilities are met for each use case.


Table 4. Product Score on Use Cases

Use Cases (products in column order): DataDirect Networks SFA12K, EMC VMAX, Fujitsu Eternus DX8700 S2, HDS HUS VM, HDS VSP G1000, HP 3PAR StoreServ 10000, HP XP7, Huawei OceanStor 18000, IBM DS8870, IBM XIV, NetApp FAS8000, Oracle FS1-2

Overall 3.46 4.03 3.98 3.87 4.18 4.04 4.18 3.83 3.93 3.68 4.01 3.65

Consolidation 3.46 4.00 3.94 3.85 4.13 4.04 4.13 3.79 3.91 3.55 4.02 3.68

OLTP 3.63 4.04 4.06 3.85 4.23 4.01 4.23 3.89 3.96 3.70 3.94 3.63

Server Virtualization and VDI 3.38 4.02 3.95 3.88 4.16 4.05 4.16 3.81 3.92 3.72 4.01 3.66

Analytics 3.38 4.03 3.98 3.90 4.18 4.05 4.18 3.84 3.95 3.76 4.02 3.66

Cloud 3.42 4.05 3.97 3.88 4.18 4.06 4.18 3.82 3.93 3.69 4.07 3.65

As of November 2014

Source: Gartner (November 2014)

To determine an overall score for each product/service in the use cases, multiply the ratings in Table 3 by the weightings shown in Table 2.
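As a quick sanity check, the scoring arithmetic can be sketched in a few lines of Python. The snippet below recomputes the EMC VMAX "Overall" score from the Table 2 weightings and the Table 3 ratings; the variable names are illustrative only and are not part of any Gartner tool.

```python
# Use-case score = sum over capabilities of (weighting x rating).
# Weightings: "Overall" column of Table 2; ratings: EMC VMAX column of Table 3.
overall_weights = {
    "Manageability": 0.13, "RAS": 0.17, "Performance": 0.16,
    "Snapshot and Replication": 0.10, "Scalability": 0.13,
    "Ecosystem": 0.08, "Multitenancy and Security": 0.11,
    "Storage Efficiency": 0.12,
}
vmax_ratings = {
    "Manageability": 4.2, "RAS": 4.3, "Performance": 3.8,
    "Snapshot and Replication": 4.0, "Scalability": 4.3,
    "Ecosystem": 4.5, "Multitenancy and Security": 3.7,
    "Storage Efficiency": 3.5,
}

score = sum(overall_weights[c] * vmax_ratings[c] for c in overall_weights)
print(round(score, 2))  # 4.03, matching the VMAX "Overall" entry in Table 4
```

Repeating the calculation with any other column of Table 2 (for example, the OLTP weightings) reproduces the corresponding row of Table 4 for that product.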

Gartner Recommended Reading

Some documents may not be available as part of your current Gartner subscription.

"Use SSDs, Rather Than Disk Striping, to Improve Storage Performance and Cut Costs"

"Critical Capabilities for Solid-State Arrays"

"Magic Quadrant for General-Purpose Disk Arrays"

"How to Negotiate Lower Storage Acquisition Costs"

"Options for Building Your Own Storage"

"IT Leaders Can Benefit From Disruptive Innovation in the Storage Industry"


"Critical Capabilities for General-Purpose Midrange Storage Arrays"

"How Products and Services Are Evaluated in Gartner Critical Capabilities"

Note 1 z/OS Support

This research compares storage arrays that support z/OS mainframe environments with arrays that do not. The presence or absence of z/OS support is taken into account only in the ecosystem ratings, where it contributes positively to arrays supporting z/OS and has no influence on arrays not supporting z/OS. It has no influence on other ratings or on the rating weights used in the tool.

Critical Capabilities Methodology

This methodology requires analysts to identify the critical capabilities for a class of products or services. Each capability is then weighted in terms of its relative importance for specific product or service use cases. Next, products/services are rated in terms of how well they achieve each of the critical capabilities. A score that summarizes how well they meet the critical capabilities for each use case is then calculated for each product/service.

"Critical capabilities" are attributes that differentiate products/services in a class in terms of their quality and performance. Gartner recommends that users consider the set of critical capabilities as some of the most important criteria for acquisition decisions.

In defining the product/service category for evaluation, the analyst first identifies the leading uses for the products/services in this market. What needs are end users looking to fulfill when considering products/services in this market? Use cases should match common client deployment scenarios. These distinct client scenarios define the use cases.

The analyst then identifies the critical capabilities. These capabilities are generalized groups of features commonly required by this class of products/services. Each capability is assigned a level of importance in fulfilling that particular need; some sets of features are more important than others, depending on the use case being evaluated.

Each vendor's product or service is evaluated in terms of how well it delivers each capability, on a five-point scale. These ratings are displayed side by side for all vendors, allowing easy comparisons between the different sets of features.

Ratings and summary scores range from 1.0 to 5.0:

1 = Poor: most or all defined requirements not achieved

2 = Fair: some requirements not achieved


3 = Good: meets requirements

4 = Excellent: meets or exceeds some requirements

5 = Outstanding: significantly exceeds requirements

To determine an overall score for each product in the use cases, the product ratings aremultiplied by the weightings to come up with the product score in use cases.

The critical capabilities Gartner has selected do not represent all capabilities for any product; therefore, they may not represent those most important for a specific use situation or business objective. Clients should use a critical capabilities analysis as one of several sources of input about a product before making a product/service decision.


GARTNER HEADQUARTERS

Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
USA
+1 203 964 0096

Regional Headquarters
AUSTRALIA
BRAZIL
JAPAN
UNITED KINGDOM

For a complete list of worldwide locations, visit http://www.gartner.com/technology/about.jsp

© 2014 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This publication may not be reproduced or distributed in any form without Gartner's prior written permission. If you are authorized to access this publication, your use of it is subject to the Usage Guidelines for Gartner Services posted on gartner.com. The information contained in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This publication consists of the opinions of Gartner's research organization and should not be construed as statements of fact. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see "Guiding Principles on Independence and Objectivity."
