
Dragon Slayer Consulting

Marc Staimer

Why The Era of General Purpose Storage is Coming to an End

Emergence of Application Engineered Storage (AES)

WHITE PAPER


Why The Era Of General Purpose Storage Is Coming To An End

Marc Staimer, President & CDS of Dragon Slayer Consulting

Introduction

Evolution has taught us that those species that fail to adapt and evolve over time become extinct. Evolution occurs so that a given biological species can survive the changes it must face each and every day. Evolution is biological adaptation. Nowhere has this been more clearly evident than in the emergence of superbugs. Superbugs are bacteria and viruses that have evolved to survive ever more sophisticated antibiotics or antivirals. Evolution is not limited to biology. It can be seen playing out in the ongoing progression of information technologies both big and small. IT is loaded with application and computing evolutionary examples, including:

Applications running on physical servers evolving into applications running on virtual servers to overcome physical server sprawl consuming floor/rack space, power, cooling, adapter cards, network switches, cables, transceivers, conduit, and management cycles. Of course that problem has morphed into virtual server sprawl that will require further evolution.

Physical servers evolving into bigger, more powerful machines to adequately support ever more virtual machines. They have also adapted to the escalating problem of power/cooling consumption by becoming smaller, with CPUs that consume less power and cooling.

High availability evolving from redundant hardware into sophisticated software at the application and hypervisor layer. Server hardware HA is expensive. Application and data HA is not and makes hardware HA irrelevant.

Application data processing evolving into both smaller and larger data sets adapting to a broader swath of computational platforms (smartphones, tablets, massive scale-out clusters and more) as well as the increasing need for actionable information on enormous amounts of unstructured data growing geometrically.

Users’ compute devices evolving from desktops to laptops to smartphones and now tablets as each generation demands more mobility and freedom.

Data processors evolving from single cores, where clock rates kept increasing, to multiple cores, where the number of cores keeps increasing, to overcome clock-rate constraints as the need for more and more processing power accelerates.

This technological evolution has not been limited to applications and processing. On the contrary, no technology has been changing as rapidly as data storage systems. For example:

DAS has evolved to SAN, NAS, unified (SAN and NAS), and/or object storage to make storage systems a bit broader in the protocols, applications, and systems they concurrently support.

Data storage systems have morphed from single tiers or pools of storage to multiple performance tiers, including DRAM, NV-RAM, solid state Flash drives (SSDs), high-performance hard disk drives (HDDs), and low-performance, high-capacity drives, to better match application performance requirements to data value and cost.

Data storage systems have evolved from being highly capacity-inefficient to highly efficient through the use of thin provisioning, deduplication, compression, and virtualization technologies. This is an adaptation to exponential data growth while budgets grow in the zero to low single digits.

Data storage systems have also evolved beyond simple data targets to tackle the issues of data protection, disaster tolerance, disaster recovery, and continuous data availability to provide data insurance when the data has become so valuable.

Fig 1. Superbug MRSA

Fig 2. VMs

Fig 3. Mobility


Data storage system RAID has evolved from physical disks to virtual disk pools; from protecting data against loss when a single disk fails to protecting it when 2, 3, or even more disks fail; from slow rebuilds to incredibly fast rebuilds. All of it is in response to the increased risk of data loss from a drive or drives (hard disk drives and/or solid state drives) failing, and they do fail. In fact, historically and statistically, they fail in batches.

Evolutionary rapid adaptations are necessary for long-term survival. And just like biological evolution, technical evolution never stops. It is continuous. Once evolutionary adaptation stops the species or technology dies. It dies because the environment is also always changing, placing relentless pressure for adaptation or extinction.

This has never been more true with regard to external shared data storage systems than it is today, right now. Shared storage has historically attempted to be all things to all applications. As the number of applications supported on a general-purpose data storage system increases, so does the total available market. It is in the interest of a data storage vendor to support as many different applications as possible for greater potential sales. In other words, external shared SAN, NAS, or unified storage is commonly positioned as general-purpose storage (i.e., jack of all trades, master of none) to expand its usefulness.

The problem is that applications have become more diverse than similar. External shared data storage system requirements are quickly moving well beyond performance. VMware vSphere and virtual data center technologies, Oracle databases plus the business applications that run on Oracle servers and storage, backup and replication software, and Microsoft applications such as Hyper-V, Exchange, SharePoint, SQL Server, and more are all demanding a lot more from their storage systems than raw performance, capacity, and data protection. They are demanding that their attached external shared data storage rise up to be a peer with the application and not simply a resource. They are demanding that each have intimate knowledge of the other.

Those storage systems that can step up to meet this latest evolutionary challenge will survive. Those that cannot…

Fig 4. Application Chaos in General-Purpose Storage

Fig 5. Application & Storage Peers


Table of Contents

Introduction ............................................................................................................... 2

Escalating Application Storage Issues .......................................................................... 5

Lack of “On-Demand” Automation ..............................................................................5

Fixed Predetermined Application-Data Storage Interaction ..........................................5

Knowledge, Skills, and Experience Shortage .................................................................5

Why These Problems Are Quietly Reaching Critical Mass ..............................................6

Typical Application Storage Workarounds ................................................................... 6

VMware Storage API Integration .................................................................................6

Microsoft Windows and Hyper-V VSS (Volume Shadow Copy Service) Integration ...............7

Storage Auto-Tiering or Caching with Flash SSDs ..........................................................7

Software Defined Storage (SDS) ...................................................................................8

Workarounds Conclusion ............................................................................................8

Application Engineered Storage (AES) ......................................................................... 8

Oracle ZFS Storage Appliances – The First AES ............................................................. 9

Management ..............................................................................................................9

Partitioning .................................................................................................................9

Hybrid Columnar Compression (HCC) .......................................... 10

Database Aware Data Protection ............................................................................... 10

Plus Engineered “Workarounds” ................................................................................ 10

Summary and Conclusion .......................................................................................... 11


Escalating Application Storage Issues

Applications traditionally required external shared data storage systems to supply them with capacity, performance, and data protection. That’s what most data storage systems do and do reasonably well. Their big problem is limited automation and an inability to adapt in real time to dynamic change requirements from the applications.

Lack of “On-Demand” Automation

The times, they are a-changing. Applications today are demanding much more than storage capacity, performance, and data protection provisioning. Application datasets have rapidly ballooned in size as the data universe continues to expand exponentially (2.8 ZB in 2012, expected to grow to 40 ZB by 2020, per IDC). To manage the increasing data tsunami, applications automatically consume more of everything, including processing, IO, bandwidth, storage capacity, and storage performance. This is a problem for data storage systems.

Storage is not customarily designed to allocate capacity and performance resources on demand. Resources are typically allocated manually, in advance. As those storage resources are consumed and more are required, the storage admin manually allocates more. It is not a dynamic, automated process for the vast majority of storage systems. Those allocation tasks are labor intensive, requiring scheduling and, more often than not, scheduled downtime. Scheduled downtime is a rare commodity in today's 24x7 global economy. Yet far too many storage vendors still assume that scheduled downtime, disruptive to any business, is okay.

Fixed Predetermined Application-Data Storage Interaction

Applications do not generally see or directly control data storage. That is usually accomplished via the operating system, hypervisor, or file system, although there are exceptions such as relational databases. In all cases that relationship is fixed or predetermined, meaning that the storage does what it is told to do within a very narrow set of pre-configured parameters. It serves up capacity in the amount that has been allocated. It provides RAID-based data protection based on pre-arranged parameters. It delivers performance based on what was set up. In other words, it is an inflexible relationship that can only be altered with admin intervention.

Consider that application performance has peaks and valleys. And yet, data storage cannot, for the most part, read or anticipate those peaks and valleys. It knows the IO and/or throughput demands at any given moment in time and will respond to them based on the performance pre-sets and the other application demands being placed on the data storage system at that moment in time. There is no integration of application and storage, no cooperative processing or communication, no dynamic adaptation to unexpected application needs, and no flexibility.

Knowledge, Skills, and Experience Shortage

Application and hypervisor administrators have become more specialized and narrower in their scope and depth. Storage knowledge is viewed through the lens of the application or hypervisor and is nominal at best. These admins commonly lack both the basic storage knowledge and the experience to set up, configure, manage, and operate data storage optimally for their applications, servers, or virtual machines.

Storage admins, on the other hand, are generalists with a dearth of application and/or hypervisor knowledge. They know how to tweak their storage to get the best performance or utilization out of it, but not how to optimize it for every application, server, and VM that connects to that storage. They commonly lack specific application tuning knowledge, skills, and experience. Even when they do have them for a specific application, their cycles are far too limited to constantly tune the storage for optimum application performance.

Fig 6. Limited "On-Demand" Resources

Fig 7. Inadequate Flexibility

Fig 8. App Admins Don't Know Storage

Why These Problems Are Quietly Reaching Critical Mass

IT organizations discovered over the past decade that general-purpose servers or one size fits all did not work for all applications. Some apps preferred Windows or Linux x86 CISC architecture, whereas others preferred a RISC architecture and UNIX. It made little sense and created a lot of heartburn trying to force fit all applications into one server architecture. This led to a surge in application-specific server deployment. That application surge is accelerating with mainstream adoption of server virtualization. Server virtualization has made it incredibly easy to spin up a VM, which in turn has led to out-of-control VM sprawl. VM sprawl greatly worsens the problems previously discussed. More applications mean more demands on the attached data storage systems. Data storage systems have limited capabilities in providing differentiated service to these application-server combinations regardless of whether they’re physical or virtual. And every combination has its own storage performance and functional requirements. Yet, the vast majority of them are connected to general-purpose storage. This has the feel of a misalignment.

There are three reasons why the vast majority of data storage systems are general purpose. First, general-purpose storage systems are just easier to manage and require less application knowledge, skills, and experience. The second reason is inertia. It's how it's always been done, or the "if it's not broke, don't fix it" philosophy. Unfortunately, it is broken or soon will be, badly broken. And the third, as previously discussed, is market reach. General-purpose storage enables data storage systems to connect to more applications, creating the largest possible potential available market. That market reach looks good on paper to many IT managers and storage administrators. And it's why there has been a lemming-like trend toward Unified Storage (SAN, NAS, and now object storage in the same storage system). But it does not work nearly as well in practice. To poorly paraphrase a line from Tolkien's "Lord of the Rings", Unified storage is "one data storage system to rule them all". It is also described as a jack-of-all-trades, master-of-none philosophy. Regrettably, general-purpose unified storage requires too many compromises in performance and management. It aims at the "good enough" heart of the bell curve for performance and resource requirements.

The general-purpose, one-size-fits-all approach did not work well for servers and applications, which evolved into application-specific servers and virtual machines. It does not work well for storage either, and it is time for storage to evolve. Those IT organizations that have not yet discovered these problems soon will. It is only a matter of time.

Typical Application Storage Workarounds

There are four common workarounds. These include:

VMware Storage API integration

Microsoft Windows and Hyper-V VSS integration

Storage tiering or caching with Flash SSDs

Software defined storage

VMware Storage API Integration

VMware has many storage APIs (vSphere API for Data Protection, vSphere API for Array Integration, vSphere API for Storage Awareness, T10 compliance, array-based thin provisioning, hardware acceleration for NAS, enhanced hardware-assisted locking, and vSphere API for Multi-pathing). By adhering to these APIs, storage vendors give VMware vSphere administrators the ability to manage the attached shared data storage. In some cases they enhance specific functions, such as data protection with VADP leveraging vSphere snapshots.

However, none of these APIs actually addresses application and storage issues and problems. The APIs assume the vSphere administrator will be storage knowledgeable, skilled, and experienced. They do not make the storage application aware or application engineered; they make vSphere storage aware. This is useful and important, but it still does not solve the problem. Storage does not dynamically react to the applications in any way. The vSphere API for Data Protection (VADP) is a good example. VADP quiesces (pauses) a VM, takes a snapshot, and then restarts the VM. Unfortunately, VADP is not application aware. It does not flush the cache and complete the writes in the correct order, so snapshots of applications that require an orderly shutdown are not application consistent and may be corrupted. VADP also takes one VM snapshot at a time, making it resource-intensive and relatively slow.

Another example of vSphere's minimal application awareness is its Storage IO Control (SIOC). SIOC is a brute-force approach to handling dynamic, on-demand application performance requirements at the VMware cluster level. SIOC monitors the IO latency of each VM datastore in the cluster. When IO latency reaches a threshold (30 ms by default), the datastore is regarded as congested and SIOC intervenes to redistribute available resources based on VM prioritization. In other words, high-priority VMs are given storage IO resources taken from lower-priority VMs. It does not manage the storage resources; it manages vSphere's access to those resources. SIOC also does not take into account application demand spikes or have any inherent knowledge or awareness of application requirements. The unspoken premise is that when storage uses the VMware vSphere APIs, application storage issues and problems are auto-magically resolved. The reality is that it helps treat the symptoms. It does not cure the problem.
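
The SIOC behavior just described can be reduced to a short sketch: watch per-datastore latency and, once the congestion threshold (30 ms by default) is crossed, divide device queue slots among VMs in proportion to their priority shares. This is a simplified illustration of the concept, not VMware's implementation; the share values and queue depth are invented for the example.

    # Simplified illustration of SIOC-style throttling described above; not VMware code.
    CONGESTION_THRESHOLD_MS = 30   # default SIOC latency threshold

    def redistribute_io_slots(vm_shares, observed_latency_ms, total_queue_slots=256):
        """vm_shares: VM name -> priority shares (e.g. high=2000, normal=1000, low=500)."""
        if observed_latency_ms < CONGESTION_THRESHOLD_MS:
            return {name: total_queue_slots for name in vm_shares}   # no throttling
        total = sum(vm_shares.values())
        return {name: max(1, total_queue_slots * shares // total)
                for name, shares in vm_shares.items()}

    vms = {"erp_db": 2000, "web01": 1000, "test_vm": 500}
    print(redistribute_io_slots(vms, observed_latency_ms=42))
    # {'erp_db': 146, 'web01': 73, 'test_vm': 36} -- vSphere's access to the datastore
    # is rationed; the storage system itself is never told why, nor what the
    # applications inside those VMs actually need.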

Microsoft Windows and Hyper-V VSS (Volume Shadow Copy Service) Integration

Microsoft VSS integration is a more limited attempt to solve application and storage issues. Somewhat analogous to VMware's VADP, VSS pauses VMs and applications, takes a snapshot, and then resumes their operations. One key difference is that applications that are VSS-integrated (Exchange, Oracle, SharePoint, SQL Server) will be properly quiesced, with their caches flushed and their writes completed in the correct order.

Once again, this is a partial solution that only deals with one aspect of applications' issues with storage. It definitely improves the dynamics between applications and storage, albeit in a limited way.
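
The quiesce sequence this section describes can be sketched minimally. The stub classes below are illustrative placeholders, not the actual Windows VSS writer and requestor interfaces; the point is the order of operations: freeze, flush, snapshot, thaw.

    # Hypothetical stubs for the VSS-style sequence described above; not the real
    # Windows VSS COM interfaces.
    class StubDatabase:
        def freeze_io(self):   print("writer: new transactions paused")
        def flush(self):       print("writer: caches flushed, writes completed in order")
        def resume_io(self):   print("writer: transactions resumed")

    class StubArray:
        def create_snapshot(self, volume):
            print(f"array: snapshot of {volume} taken")
            return f"snap-of-{volume}"

    def application_consistent_snapshot(app, array, volume):
        """Freeze, flush, snapshot, thaw -- in that order."""
        app.freeze_io()
        app.flush()
        try:
            return array.create_snapshot(volume)   # on-disk image is now consistent
        finally:
            app.resume_io()                        # thaw even if the snapshot fails

    application_consistent_snapshot(StubDatabase(), StubArray(), "exchange_db_lun")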

Storage Auto-Tiering or Caching with Flash SSDs

Flash solid-state drives (SSDs) have proven to be a huge performance boost to applications. They provide up to 1,000 times the performance of spinning hard disk drives (HDDs). They do so at a price: SSDs cost significantly more than HDDs. This is why there are hybrid storage systems that combine the performance of SSDs with the low cost of HDDs. These systems use either storage auto-tiering or caching to make the most efficient use of those SSDs. Storage auto-tiering points the most mission-critical, high-performance applications at the SSD tier and then, based on policies, moves designated data to lower-performing, lower-cost HDD tiers. Policies can cover time since last access, frequency of access, time since created, and more.
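
As an illustration of what such policies look like in practice, the sketch below maps access recency, access frequency, and age to a tier. The thresholds and tier names are invented for the example, not any product's defaults, and note that the inputs are all historical observations rather than real-time application demand.

    # Illustrative auto-tiering policy; thresholds and tier names are invented.
    from datetime import datetime, timedelta

    def choose_tier(last_access, created, accesses_per_day, now):
        idle = now - last_access
        if accesses_per_day > 100 and idle < timedelta(hours=1):
            return "ssd"            # hot: frequent and recent access
        if idle < timedelta(days=7):
            return "fast_hdd"       # warm: touched within the last week
        if now - created > timedelta(days=90):
            return "capacity_hdd"   # cold and old: cheap, high-capacity tier
        return "fast_hdd"

    now = datetime(2013, 6, 1)
    print(choose_tier(datetime(2013, 5, 31, 23, 30), datetime(2013, 1, 1), 250, now))  # ssd
    print(choose_tier(datetime(2013, 2, 1),          datetime(2012, 6, 1),   2, now))  # capacity_hdd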

The problem with storage auto-tiering and Flash SSDs is that storage auto-tiering moves data between tiers based on historical trends, not real-time status. Every time data is moved between tiers, it consumes CPU cycles that cannot then be used for application IO or throughput. Storage auto-tiering decreases IO performance quite noticeably when there is constant movement, or thrashing, of data between tiers. And mixing SSDs and HDDs guarantees constant auto-tiering data movement. That thrashing can shorten the wear life of expensive Flash SSDs, which is especially noticeable on multi-level cell (MLC) Flash.

Flash SSD caching is the other, more common implementation. It is primarily write-through caching (a.k.a. read caching). Data is only put into the cache if it passes a specified policy threshold that registers it as hot data. Flash SSD caching is more popular than Flash SSD auto storage tiering because it is simpler, moves a lot less data, uses far fewer CPU cycles, and, most importantly, reacts in near real time to applications demanding more read IO performance. The biggest problem with Flash SSD caching is the size of the cache. Too small a cache means not all of the "hot" accessed data is placed in cache, leading to increased cache misses. Cache misses equal reduced performance.
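
The write-through (read) caching behavior can be sketched the same way: a block is only admitted to the flash cache after it has been read often enough to cross a "hot" threshold, and an undersized cache simply evicts hot blocks again. The capacity, threshold, and in-memory structures below are illustrative, not any vendor's implementation.

    # Minimal sketch of threshold-based read caching; sizes are illustrative only.
    from collections import OrderedDict

    class FlashReadCache:
        def __init__(self, capacity_blocks, hot_threshold=3):
            self.capacity = capacity_blocks
            self.hot_threshold = hot_threshold
            self.cache = OrderedDict()       # block -> data, ordered for LRU eviction
            self.access_counts = {}

        def read(self, block, read_from_hdd):
            if block in self.cache:                        # hit: fast SSD read
                self.cache.move_to_end(block)
                return self.cache[block]
            data = read_from_hdd(block)                    # miss: slow HDD read
            self.access_counts[block] = self.access_counts.get(block, 0) + 1
            if self.access_counts[block] >= self.hot_threshold:
                if len(self.cache) >= self.capacity:       # a too-small cache evicts
                    self.cache.popitem(last=False)         # "hot" data and hurts hit rate
                self.cache[block] = data
            return data

    hdd_reads = 0
    def read_from_hdd(block):
        global hdd_reads
        hdd_reads += 1
        return f"data-{block}"

    cache = FlashReadCache(capacity_blocks=64)
    for _ in range(5):
        cache.read(42, read_from_hdd)
    print(hdd_reads)   # 3: block 42 becomes "hot" after its third miss, then hits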

Both of these methods aim at solving application performance storage issues. And they do, to a significant extent, even though they are brute-force approaches. They may not have any inherent application awareness, but they try to make up for that by throwing massive amounts of performance at the problem. Higher performance is always good; however, ultimately it only delivers a temporary Band-Aid solution with more speed. It only delays the problem from becoming acute, because most implementations do not resolve any of the aforementioned underlying issues, and there is a limit to the amount of performance any storage system can throw at the problem. In addition, these workarounds do not address the storage expertise requirements.

Software Defined Storage (SDS)

Software defined storage is the latest hyped storage trend creating buzz. It abstracts the storage image and services from the physical storage. SDS provides policy-driven capacity provisioning, performance management, and transparent mapping between VM datastores and large volumes or file stores.

One of the ballyhooed functions is guaranteed SLAs. The guarantee is for storage performance. It’s based on volume performance that’s logically tied to an application. It’s not application aware or knowledgeable. It assumes the storage admin has that expertise.

SDS could just as easily be described as the latest generation of storage virtualization. And just like storage virtualization, it is storage-centric, not application aware or application-centric. It does mask a lot of the storage expertise requirements via automation. However, it does not currently react to unexpected demands or to specific application requirements because it is not application aware.
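
A sketch of what policy-driven, volume-level provisioning looks like follows. The pools, numbers, and policy fields are invented for the example; the point is that the SLA is attached to a volume and the provisioner knows nothing about the application behind it.

    # Illustrative policy-driven provisioning in the SDS style described above.
    pools = [
        {"name": "flash_pool",    "free_gb": 4000,   "max_iops": 200_000},
        {"name": "hybrid_pool",   "free_gb": 40000,  "max_iops": 50_000},
        {"name": "capacity_pool", "free_gb": 200000, "max_iops": 5_000},
    ]

    def provision(policy):
        """Pick the first pool that can honor the volume's capacity and IOPS policy."""
        for pool in pools:
            if pool["free_gb"] >= policy["size_gb"] and pool["max_iops"] >= policy["iops"]:
                pool["free_gb"] -= policy["size_gb"]
                return {"volume": policy["name"], "pool": pool["name"]}
        raise RuntimeError("no pool satisfies the policy")

    # The volume is "for" the ERP database only by naming convention; the SDS layer
    # has no knowledge of what that application will actually demand of it.
    print(provision({"name": "erp_datafiles", "size_gb": 2000, "iops": 80_000}))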

Workarounds Conclusion

Each of these workarounds has some value in ameliorating the application-storage problems. All of them address symptoms of the problem. None of them really address the underlying problem, which is lack of application knowledge and integration. It’s difficult to react to applications’ variable demands without knowing and integrating with the applications.

General-purpose storage does not address this new era of application centricity. The workarounds treat symptoms but not the underlying root cause of the problems. What’s required is storage that’s engineered with the application or application engineered storage.

Application Engineered Storage (AES)


AES works with the application in a cooperative manner, taking on the processing load for functions that belong in the storage system, where they are more efficiently handled. This in turn optimizes the compute platform's resources for compute processing and the storage system's resources for IO and throughput. In several cases this has a synergistic effect on performance, where the application performance gain is greater than the raw system IO would suggest.

This is because application engineered storage attacks the root cause of application storage performance issues instead of merely treating the symptoms. Other advantages of dealing with the root cause are also significant. For example: as the application scales, so does the AES; applications automatically take advantage of new technologies as they are introduced into the AES; and application expertise means AES expertise, since the storage appears as an extension of the application.

There are several essential underlying premises to AES. As a prerequisite, effective AES must be able to:

Take advantage of application information to reduce read and write latencies.

Accelerate data transfers between the application and AES to deliver the right information in the right place at the right time.

Off-load low-level processing from the application server.

Implementing these requirements is nontrivial. It requires both storage system processing power and a software architecture capable of concurrently handling all storage functions in addition to the application integration functions. If it does not have both, it cannot execute application off-load functions fast enough to make a difference. This means the AES needs quite a bit of computing power as well as a highly scalable OS to competently support hundreds or thousands of application-specific IO requests in parallel.

Fig 11. SDS

Fig 12. AES

Delivering AES calls for cross-functional skills that most storage vendors simply do not have.

Oracle ZFS Storage Appliances – The First AES

Because Oracle provides the world's number one relational database, the applications that run on that RDBMS, storage systems, servers, the Zettabyte File System (ZFS), and the Solaris OS, it is uniquely suited to tackle the application-storage problem head-on. Oracle has engineered the industry's first application engineered storage systems with its ZFS Storage Appliances. ZFS Storage Appliances incorporate Oracle's latest generations of compute processing with extensive memory (up to terabytes), SSDs, HDDs, the Solaris OS, sophisticated storage functions, and ZFS to deliver exceptional compute as well as storage performance. A ZFS Storage Appliance can handle over a hundred threads processing many thousands of IO requests in parallel, versus conventional storage systems that are limited to as few as 8 processing threads and gigabytes of memory. It is this unique architecture that enables ZFS Storage Appliances to directly integrate with Oracle databases and applications. The results are notably higher efficiencies, flexible on-demand interactions, and greater performance, at much lower costs.

Oracle’s AES appears to be impressive, but the proof is in how it’s executed.

Management

ZFS Storage Appliances are designed to eliminate three layers of management between the Oracle database, the Solaris operating system, and the storage itself. This increased management automation places more of the expertise in the storage system rather than in the administrator. It removes dozens to hundreds of redundant tasks, saving administrators vast amounts of time. The Edison Group management cost comparison study empirically verified this, with findings showing that ZFS Storage Appliances are generally 36% faster in administrative tasks, 36% faster in storage provisioning, and 44% faster in monitoring and troubleshooting issues than general-purpose data storage systems such as NetApp's FAS filers. These savings convert into a full-time equivalent (FTE) operating cost savings of approximately $27,000 per year.

Partitioning

ZFS Storage Appliances' tight integration with Oracle Database increases performance and efficiencies by at least 3 to 5x over general-purpose data storage systems (such as NetApp FAS, HP 3PAR, EMC VNX, Dell Compellent, and others) that do not offer deep Oracle Database application integration. One of the ways ZFS Storage Appliances do this is by mapping database data to multiple storage tiers within the same ZFS Storage Appliance. Large tables can be partitioned easily on a partition key, frequently range-partitioned on a key that represents a time component, with current "active" data located on the higher-performance storage tier. As that data ages and becomes less active or "passive", it is automatically moved via the partitioning to the lower-cost, lower-performing storage tier. This built-in "online archiving" of data is always active and available to the application. It does not have to be recovered or migrated to or from another storage system, resulting in quantifiable, measurable improvements in application data access.

Fig 13. Oracle ZFS Storage Appliances

Fig 14. Intuitive Management

Fig 15. Partitioning


Any access to the active data that is based on the same partition key (such as sales date) will automatically benefit from the partition pruning performed by the database. In other words, as database tables grow in size, active data performance will not degrade. The ZFS Storage Appliance with Oracle Database integration removes the requirement to regularly archive or purge data that is typically needed to maintain the required performance of database applications. The database archive is always online and available at any time through the application, and it is also maintained throughout database and application upgrades, unlike data in offline archiving software.
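
To make the placement idea concrete, the sketch below decides which tier a range partition keyed on sale date should live on, based on how far its high bound lies behind the current date. The partition names, the 90-day active window, and the tier names are invented for illustration; in the actual product the mapping is expressed through the database's partitioning and the appliance's storage pools.

    # Illustrative tier placement for range partitions keyed on a date; names and
    # cutoffs are invented for the example.
    from datetime import date

    FAST_TIER = "ssd_pool"
    CAPACITY_TIER = "capacity_hdd_pool"
    ACTIVE_WINDOW_DAYS = 90     # data newer than roughly a quarter stays "active"

    def tier_for_partition(partition_high_bound, today):
        age_days = (today - partition_high_bound).days
        return FAST_TIER if age_days <= ACTIVE_WINDOW_DAYS else CAPACITY_TIER

    today = date(2013, 6, 1)
    partitions = {
        "sales_2013_q2": date(2013, 7, 1),
        "sales_2013_q1": date(2013, 4, 1),
        "sales_2012":    date(2013, 1, 1),
    }
    for name, high_bound in partitions.items():
        print(name, "->", tier_for_partition(high_bound, today))
    # sales_2013_q2 -> ssd_pool, sales_2013_q1 -> ssd_pool, sales_2012 -> capacity_hdd_pool
    # Queries filtered on sale date touch only the relevant partitions (partition
    # pruning), so growth of the cold partitions does not slow access to active data.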

Contrasted with the general-purpose storage user experience, ZFS Storage Appliances require far less of the following:

Manual, labor-intensive administrator tasks;

High-end, higher-performance, more expensive storage systems;

Storage infrastructure (cables, transceivers, conduit, switches, power, cooling, rack space, and floor space);

And licensing for backup and archiving software.

ZFS Storage Appliances also enable a lot more of the data to be kept online at all times for much longer periods of time. This greatly improves performance of the applications that depend on and access those large Oracle databases.

Hybrid Columnar Compression (HCC)

Even greater efficiencies and performance gains can be seen in the integration of ZFS Storage Appliances with Oracle Database's Hybrid Columnar Compression (HCC). HCC is available only on Oracle ZFS Storage Appliances and provides as much as 50x data compression. HCC demonstrably reduces storage capacity requirements on Oracle Database 3 to 5x more than any other vendor's best data reduction option, sharply decreasing storage footprint and the associated acquisition and operational costs. More importantly, because HCC is a cooperative, collaborative process between Oracle Database and the ZFS Storage Appliance, compressed data does not have to be rehydrated when moving between the two. In addition, compressed data can be accessed directly, so there are none of the latency, response time, or performance degradation ramifications experienced with other data reduction technologies. In fact, 3x to 8x faster queries have been demonstrated in customer applications.
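
Taking the quoted ratios at face value, a quick back-of-the-envelope comparison shows why the footprint difference matters; the 100 TB database size and the 5x "best conventional" ratio are assumptions for illustration.

    # Back-of-the-envelope arithmetic with the ratios quoted above; actual results
    # vary with the data and the HCC mode chosen. The 100 TB size is an example.
    raw_db_tb = 100
    best_conventional_ratio = 5    # assumed upper end of general-purpose data reduction
    hcc_max_ratio = 50             # "as much as 50x" quoted for HCC

    print(f"conventional reduction: {raw_db_tb / best_conventional_ratio:.0f} TB on disk")  # 20 TB
    print(f"HCC at 50x:             {raw_db_tb / hcc_max_ratio:.0f} TB on disk")            # 2 TB
    # And because the database reads HCC-compressed blocks directly, that smaller
    # footprint does not have to be rehydrated before it can be queried.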

Database Aware Data Protection

ZFS Storage Appliance integration with Oracle databases extends to data protection as well. Snapshots are application aware and make sure that the Oracle Database is properly quiesced (cache flushed and all writes completed in their proper order) before the ZFS Storage Appliance takes the snapshot.

Oracle also provides tightly engineered Oracle database backup on ZFS Storage Appliances. It ties directly into RMAN and can move backed-up data from disk to Oracle tape and tape libraries. None of this requires any additional backup or replication software. Nor does it require any performance-degrading database agent software, or agent software of any kind. All backed-up data is deduplicated and compressed as well. Oracle database backup performance is exceptional at approximately 30 TB/hr. What is more impressive is the restore performance of approximately 10 TB/hr.
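
Simple arithmetic with the quoted throughput figures puts those rates in operational terms; the 60 TB database size is an invented example.

    # Backup window arithmetic using the throughput figures quoted above.
    db_size_tb = 60                 # example database size
    backup_rate_tb_per_hr = 30
    restore_rate_tb_per_hr = 10

    print(db_size_tb / backup_rate_tb_per_hr)    # 2.0 hours to back up
    print(db_size_tb / restore_rate_tb_per_hr)   # 6.0 hours to restore
    # Restore throughput, not backup throughput, usually determines whether a
    # recovery time objective (RTO) can actually be met.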

Oracle has further engineered the ZFS Storage Appliances to work with Exadata, connecting them via QDR (40 Gbps) InfiniBand. InfiniBand has built-in remote direct memory access (RDMA), which enables higher performance via much lower latencies and much reduced IO processing. No other storage system today integrates with Oracle RMAN and InfiniBand, making the ZFS Storage Appliance the most efficient and highest-performing data protection system currently available.

Plus Engineered "Workarounds"

Not to be outdone, Oracle ZFS Storage Appliances provide storage auto-tiering with extensive DRAM, SSDs, and various classes of capacity/performance high-density disks. There is value in making sure the latest and greatest storage technology is part of the solution. But it cannot solve the application storage issues by itself. That requires application-engineered storage.

Fig 16. HCC


Summary and Conclusion

Applications and servers have evolved. It is rare today to find more than one application on a physical server or in a virtual machine image. Basically, it is one application per server. Storage systems have lagged in this new world of application machine proliferation. Most storage systems, including those from all the major storage system suppliers in Oracle environments, remain blissfully application unaware. They are not able to cooperate or collaborate with applications and, therefore, cannot respond in real time to changing application dynamics. Nor can they improve application efficiencies or lower costs. They can and do attempt to solve the application storage problems with workarounds. These workarounds' help is analogous to the way ibuprofen reduces fevers. They treat the symptoms, but not the problem.

To solve these application storage problems requires application-engineered storage (AES). Oracle’s ZFS Storage Appliances are the first instances of this evolutionary storage category—specifically architected to work together with business-critical enterprise applications. They will most likely not be the last.

About the author: Marc Staimer is the founder, senior analyst, and CDS of Dragon Slayer Consulting in Beaverton, OR. The consulting practice of 15 years has focused on strategic planning, product development, and market development. With over 33 years of marketing, sales, and business experience in infrastructure, storage, servers, software, databases, and virtualization, he is considered one of the industry's leading experts. Marc can be reached at [email protected].