
WHITE PAPER

The Value of Memory-Dense Servers: IBM's System x MAX5 for Its eX5 Server Family

Sponsored by: IBM

Michelle Bailey

March 2010

IDC OPINION

The technology industry has reached a crossroads. After more than a decade of physical server sprawl, nearly exponential growth in storage, and a proliferation of network technologies, IT organizations are now facing tremendous challenges in planning for a future enterprise architecture that is less expensive, less complex, and more agile than today's infrastructure. At the core of this reinvention is virtualization and, increasingly, a converged set of IT infrastructure that is built on a service-centric approach to supporting the business. This new technology cycle is squarely aimed at improving utilization rates, driving efficiency across the datacenter, and simplifying deployment and ongoing maintenance in order to ultimately shorten time to market and optimize the business value from IT investments.

Many IT organizations are well on their way to creating a more flexible and responsive enterprise architecture. Server virtualization has quickly become mainstream and is the foundational platform for the datacenter. More than 50% of all server workloads are now deployed on virtual machines, and this is driving a sea change in the types of technologies that IT organizations are procuring and configuring and their approach to IT processes and practices.

We have already seen customers move toward more richly configured servers to maximize the number of virtual machines (VMs) consolidated per physical server. The correct balance of processor, memory, and I/O is critical in architecting an effective virtualization solution. Initially, the emphasis on building physical systems for virtual machines focused on multicore processors. However, with the maturity in virtualization, most IT organizations now report that the single greatest limiter in driving higher VM densities is tied to the amount of memory that their virtual machines can access. Servers that were previously built to support single applications have become inadequate in meeting the virtualization goals of customers.

Prior to virtualization, only the most demanding workloads required high memory footprints — large databases, OLTP applications, and enterprise ERP and CRM solutions. Today, because each virtual machine requires its own memory to ensure consistent application performance, systems with large memory capabilities become essential. As a result, new x86-based servers are coming to market that can massively expand memory capacities.



With this change in technology comes a new set of metrics for measuring ongoing success in virtualization. "Cost per application" or "cost per VM" is now used to gauge the effectiveness of technology investments, and as a consequence, customers are looking to match their consolidation goals with newer systems infrastructure that helps maximize VM densities relative to physical hardware.
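To make the metric concrete, here is a minimal back-of-envelope sketch of a cost-per-VM calculation; the cost figures and the simple lifetime model are illustrative assumptions, not IDC data.

    # Amortized hardware plus operating cost over the system's life,
    # divided by the number of VMs (or applications) hosted.
    # All figures are hypothetical.

    def cost_per_vm(server_cost, annual_opex, lifetime_years, vm_count):
        tco = server_cost + annual_opex * lifetime_years
        return tco / vm_count

    # One application per standalone server vs. 20 VMs on a richly
    # configured host: density drives the per-VM cost down sharply.
    print(cost_per_vm(8_000, 2_000, 3, 1))    # 14000.0 per application
    print(cost_per_vm(25_000, 4_000, 3, 20))  # 1850.0 per VM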

SITUATION OVERVIEW

A New Approach to Datacenter Economics Is Required

For many years, IT organizations would install at least one physical server per application, and often three to five servers per application when taking into account test/development, staging, and disaster recovery environments. This inevitably led to an explosion in the number of installed physical systems and devices as well as in the number of datacenter sites. Prior to virtualization, most IT organizations faced:

Physical server sprawl. The number of installed physical servers has increased sixfold from just over 5 million in 1996 to more than 30 million in 2010.

Overprovisioning and underutilized assets. Most applications consume a fraction of a standalone server's total capacity, averaging 5–10% CPU utilization of a typical x86 server.

Spiraling operational costs. Most customers have underinvested in systems management and automation tools relative to the investments that have been made in x86 systems infrastructure. This has meant that many datacenters employ manually intensive processes, resulting in greater burdens on staff.

Server sprawl that exacerbates the power and cooling challenges of aging datacenter facilities. The average age of a datacenter in the United States is 12 years, which means that the typical facility was built to support a substantially different set of infrastructure that has grown increasingly dense over time. Most datacenters were designed to support 1–2kW per rack, versus the 8–15kW per rack that we routinely observe today.

Virtualization Is the Killer App for the Datacenter

Virtualization technologies have completely transformed the way in which customers build, deploy, and manage their systems infrastructure. Virtualization tools allow multiple logical servers or "virtual machines" to run on a single physical server. By consolidating applications onto fewer physical servers, customers have been able to slow the sprawl of physical servers within their datacenters. In fact, today most datacenters report that virtualization has become the default build for new server installations (see Figure 1).


Customers have realized three primary benefits in deploying virtualization technologies:

Physical server consolidation. Consolidation remains the main driver for deploying virtualization today. By consolidating multiple virtual machines on a single physical server, customers have less server hardware to purchase and fewer installed servers. The most direct benefits are server hardware savings and, consequently, fewer hardware maintenance agreements. Other benefits include reduced energy demands for the datacenter and lower requirements for floor space and rack space. Consolidation also reduces staff burdens for purchasing, deployment, and hardware maintenance; however, customers have yet to see significant savings in application and OS management.

Improved availability and disaster recovery. Mobility tools enable the migration of a virtual machine from one piece of physical server hardware to another. Customers have found these technologies particularly useful for reducing planned downtime and alleviating the pressure on shrinking maintenance windows. Mobility tools are also used to combat unplanned downtime and can be used alone or in conjunction with existing tools such as clustering and replication. Over time, we expect that customers will be able to regularly move virtual machines not just across the datacenter floor but also from one site to another, creating a new paradigm for disaster recovery.

Improved flexibility. Virtualization has allowed customers to be more responsive to the business. Virtual server deployments can literally reduce the time to deploy a server to minutes compared with days or even weeks for physical server deployments, meaning that time to market is significantly reduced. Virtualization also decouples the server hardware from the application so that maintaining legacy applications is greatly simplified.


FIGURE 1

Server Virtualization Adoption

Q. Which of the following statements most closely describes the build decision for new server hardware at your organization?

[Bar chart of responses, % of respondents selecting each build policy:]

Standalone servers are the default build, and we will deploy virtualization only if our customers request it

Standalone servers are the default build, and we will suggest virtualization with application owners but will not push it

Standalone servers are the default build, but we strongly advise or incent our application owners to use virtualization where possible

Virtualization is the default build for new server hardware unless a case can be made for a standalone, unvirtualized server

n = 400

Source: IDC's Server Virtualization Multiclient Study, 2009

The Impacts of Mainstream Server Virtualization Adoption

Given the broad adoption of virtualization, the physical server market has changed substantially and the number of installed servers worldwide is leveling off. However, at the same time, the number of virtual machines is exploding. This "virtual server sprawl" is already having a profound impact on IT operations and procurement strategies.

Virtual Machine Sprawl a Rising Datacenter Cost

IDC expects that more than 50 million virtual servers and just 30 million physical systems will be installed by 2013, resulting in more than 80 million logical machines (see Figure 2).


FIGURE 2

New Economic Model for the Datacenter: Shifts to Automation Tools Are a Requirement

Source: IDC, 2009

Virtual Machine Densities on the Rise

The rapid growth in the number of virtual machines is due not just to the growing proportion of servers being virtualized but also to the growing number of virtual machines installed per physical server.

After years of building in overhead on hardware resources to help guarantee service-level agreements (SLAs), most customers had modest goals for increasing the utilization of their servers. Many report an ideal of moving from 5% or 10% utilization for standalone servers to 30% or 40% utilization for virtual servers. This has meant that, on average, the ratio of VMs to physical servers has been approximately 6 to 1. Figure 3 shows the average number of VMs deployed per physical server, according to a recent survey of 400 systems administrators. While 6 VMs per server is the average consolidation ratio, IDC routinely sees customers standardizing on ratios of 8:1 or 10:1, and leading-edge customers deploying 25, 30, or even 40 VMs per physical server.
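The arithmetic behind that ratio is simple division, as the sketch below shows; the utilization inputs are illustrative values drawn from the ranges cited above.

    # If a standalone server idles at ~6% CPU and the virtualization target
    # is ~35%, roughly six such workloads can share one host -- consistent
    # with the ~6:1 average reported above. Inputs are illustrative.

    def consolidation_ratio(standalone_util, target_util):
        return target_util / standalone_util

    print(round(consolidation_ratio(0.06, 0.35)))  # ~6 VMs per server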


Changing Server Configurations to Optimize for Virtualization

IDC finds that IT organizations with more aggressive VM density goals are deploying more richly configured systems with significantly higher memory installations (see Figure 4). To achieve this increase in memory, customers will often buy servers with higher processor counts for two reasons:

1. The higher the socket count, the greater the access to physical memory.

2. Servers with higher numbers of sockets tend to have higher numbers of DIMM slots on the motherboard.

Often, we find that customers that purchase systems with high core counts for improved memory accessibility have underutilized processors.

FIGURE 3

Server Virtualization Densities, 2008

1 VM per physical server: 10.9%
2–4 VMs per physical server: 42.2%
5–9 VMs per physical server: 24.3%
10–14 VMs per physical server: 10.2%
15–19 VMs per physical server: 4.5%
20–24 VMs per physical server: 4.5%
25+ VMs per physical server: 3.4%

n = 400

Source: IDC's Server Virtualization Multiclient Study, 2009


FIGURE 4

Server Virtualization Densities by Memory Installed per Server

Average memory installed per server (GB), by number of VMs per server:

<4 VMs: 12.1GB
4–5 VMs: 21.2GB
6–9 VMs: 29.5GB
10–19 VMs: 32.3GB
20+ VMs: 41.7GB

n = 400

Source: IDC's Server Virtualization Multiclient Study, 2009

New Hardware Solutions Are Required for Substantial Increases in VM Densities

IDC research shows that customers are expecting to achieve utilization rates of 60–80% on their hardware compared with 30–40% today. This type of utilization is on par with that seen in mainframe technologies. To meet this goal, IT organizations must make substantial changes in the way they purchase and configure their server hardware. They must recognize that:

Memory capacity is just as important as processor power in virtual server configurations. For the past several years, IT organizations have been taking advantage of improvements in multicore technology to drive up VM densities. Also, new hardware assist functionality built into processors has helped reduce virtualization overhead and enabled I/O offloading. However, while processor improvements have been extremely beneficial, many customers now report that the biggest constraint to increasing VM densities lies in the ability to add memory to a system (see Figure 5).

Virtualized servers have much richer configurations than standalone servers. IDC continues to see customers buying servers with large numbers of cores as well as large numbers of DIMM slots to support additional memory for virtualization. Typically, we see virtualized x86 servers with 28GB of RAM and a disproportionate share of 4- and 8-socket configurations, compared with just 4GB of RAM and 1–2 sockets on unvirtualized servers. Servers with higher processor counts provide additional memory access by default because they typically have greater numbers of DIMM slots and higher overall memory capacities.


Physical memory can be severely limiting to VM densities. Virtual machines must have access to enough physical memory to start the VM and run the guest operating system as well as the application. Administrators have to specify either the total amount of system memory required or the maximum, minimum, and shared memory needed, depending on their choice of virtualization technology. With higher numbers of VMs per server, memory can quickly become overcommitted. Without extended memory solutions, IT organizations must either limit the number of VMs per server (and therefore increase the number of physical servers installed), increase the number of installed sockets per server to raise the amount of addressable memory, or purchase expensive high-capacity DRAM modules; the sizing sketch after this list makes the memory constraint concrete.

Types of applications also impact the memory requirements of virtual servers. The size of an application has a substantial impact on the number of VMs installed per server. The number of users, the active concurrency of those users, and the memory addressability requirements of the application play a large role in determining the VM density of a virtualized server. Database and OLTP applications, for example, have both high memory and high I/O requirements and are poor candidates for virtualization on systems with limited memory configurations, where hypervisor overhead compounds the constraint.
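To see how quickly physical memory becomes the binding constraint, here is a minimal sizing sketch; the hypervisor overhead and per-VM memory figures are illustrative assumptions, not survey data.

    # How many VMs fit before physical memory is overcommitted?
    # Overhead and per-VM memory below are illustrative assumptions.

    def max_vms_by_memory(host_memory_gb, hypervisor_overhead_gb, vm_memory_gb):
        usable = host_memory_gb - hypervisor_overhead_gb
        return int(usable // vm_memory_gb)

    # A 64GB host running 4GB guests caps out at 15 VMs no matter how many
    # cores sit idle; a 1TB memory-extended configuration lifts the ceiling
    # for the same processor count.
    print(max_vms_by_memory(64, 4, 4))    # 15
    print(max_vms_by_memory(1024, 4, 4))  # 255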

Traditional Thinking Hampers VM Densities

IDC's research shows that as the number of cores on a virtualized server increases, so too does the memory configuration. VM densities also rise and then level off at just under 10 VMs per server on average. Today, this is primarily because servers with higher core counts are typically used to support higher-end workloads. VM densities actually start to decline at 32 or more installed cores due to the increased use of richer applications on these multiprocessor servers. So rather than driving up VM densities on these larger boxes, many customers are applying traditional thinking to systems configuration — that is, that smaller applications run on smaller servers and large applications run on larger servers.

Figure 6 displays the average amount of installed memory and the corresponding number of virtual machines based on core count. Servers with four cores in total (typically dual-socket, dual-core processor systems) average 14GB of installed RAM and support just six virtual machines. This translates into approximately one core and 2.5GB of memory per VM. In contrast, a virtualized server with 32 or more cores averages almost 45GB of total memory and just under nine virtual machines. This is almost four cores and 5GB of memory per VM.
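The per-VM figures quoted above follow from simple division over the survey averages, as this short sketch shows.

    # Per-VM resource math from the survey averages quoted above.

    def per_vm(cores, memory_gb, vms):
        return cores / vms, memory_gb / vms

    print(per_vm(4, 14, 6))   # ~0.7 cores, ~2.3GB per VM (the text rounds
                              # these to roughly one core and 2.5GB)
    print(per_vm(32, 45, 9))  # ~3.6 cores, 5.0GB per VM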

As the core count of these servers increases, so too does the prevalence of memory-intensive applications such as business processing, Oracle Database, business analytics, and collaborative applications (see Figure 7). As shown in Figure 6, VM densities for servers with high core counts level off at 8.5 VMs per server. Interestingly, customers are able to virtualize a broader set of applications as the core count of the server increases. IDC expects that without a change to memory capabilities, VM densities will stabilize on higher-end systems as customers deploy more memory-intensive applications on these servers.


FIGURE 5

Virtual Server Configuration Requirements: x86-Based Servers Only

Q. Which of the following hardware components are mainly driving the richer configurations on your virtual servers?

[Bar chart: % of respondents who mentioned that the component is driving richer configurations, for Memory, Processors, Storage, I/O devices, and Other]

n = 400

Note: Multiple responses were allowed.

Source: IDC's Server Virtualization Multiclient Study, 2009

FIGURE 6

Memory Density and VM Density by Server Core Count

[Chart: for servers with 4, 8, 16, and 32+ cores, plots average memory (GB), average number of VMs, average number of cores per VM, and average memory per VM (GB)]

n = 400

Source: IDC's Server Virtualization Multiclient Study, 2009


FIGURE 7

Virtual Server Workload Profile by Server Core Count

n = 400

Source: IDC's Server Virtualization Multiclient Study, 2009

Automation a Key Driver to Future Success in Virtualization

Most customers have invested far less in systems management and automation tools than in hardware virtualization itself. Consequently, many datacenters still employ manually intensive processes to manage their virtual machines, often carried over from the management of their physical machines. For instance, even though most IT organizations leverage mobility tools that enable the movement of virtual machines from one physical server to another, most of this migration is done through a combination of manual intervention and point tools, and typically these VMs are moved for maintenance (not failover). This movement tends to happen monthly or quarterly and usually during off-hours.

While the success of virtualization has largely been built on server hardware savings, the future success of an increasingly virtualized architecture is in automation. Automation provides IT organizations with the ability to link workflow practices to an "on-demand" and highly utilized infrastructure. Most importantly, automation enables IT organizations to minimize the manually intensive tasks of systems administrators and significantly lower maintenance costs that can be paralyzing to innovation. As a result, customers are building a shared pool of compute, memory, I/O, and storage upon which to support existing applications and launch new projects as well as reduce datacenter power and cooling demands.


Changing Thinking Required in the Use of Automation Tools to Drive Up VM Densities

Most IT organizations are a long way from fully trusting workload-balancing tools that could automate many of these tasks. IDC expects that if customers don't significantly improve automation capabilities for their virtualized environments, IT management costs will actually rise over the next five years as systems administrators struggle to maintain a growing installed base of virtual servers that must be patched, upgraded, and secured just like any physical server (see Figure 8). Without automated workload-balancing techniques, customers will have to continue to build in systems overhead, which limits their ability to more fully utilize system resources. Application availability and performance will also be at risk, because bottlenecks are likely on a heavily utilized system that cannot seamlessly bring in additional resources on demand.

As customers begin to build a new automation platform for their virtual environments, memory-rich systems can bridge the movement to automation by providing the appropriate headroom to successfully drive up VM densities.

FIGURE 8

New Economic Model for the Datacenter: Management Costs Shift to Virtual Servers

Source: IDC, 2009


IBM's Memory Extension Solution for Virtualization and Databases

In response to customer requirements for higher memory footprints in virtualized servers and for high-end databases, IBM has released its eX5 server line with its MAX5 memory technology that can provide up to double the amount of physical memory available per server relative to industry standards. The eX5 server line is the fifth generation in IBM's Enterprise X-Architecture. IBM has been innovating around Intel-based solutions since 2000 to create a more scalable x86-based architecture to balance processing, memory, and I/O for higher-end workloads.

MAX5 is utilized across IBM's newly released eX5 servers in 2-socket, 4-socket, and 8-socket configurations for a maximum of 1TB, 1.5TB, and 3.0TB of total memory in each of the respective systems with 16GB DRAM modules. These large memory capacities are made possible by attaching the IBM System x MAX5 memory expansion drawer, thereby increasing the number of available DIMM slots. The MAX5 memory expansion drawer provides 32 additional DIMM slots for each eX5 rack server. Thus, a 2-socket server can be expanded to 64 DIMM slots, a 4-socket server can be expanded to 96 DIMM slots, and each of the server chassis in an 8-socket server can be expanded to 192 DIMM slots.
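The capacity figures follow directly from the slot counts and module size given above, as this short sketch confirms.

    # Maximum memory per eX5 configuration with the MAX5 drawer attached:
    # total DIMM slots times module capacity. Slot counts are from the text.

    def max_memory_tb(dimm_slots, module_gb=16):
        return dimm_slots * module_gb / 1024

    for sockets, slots in [(2, 64), (4, 96), (8, 192)]:
        print(f"{sockets}-socket: {slots} DIMM slots -> {max_memory_tb(slots)} TB")
    # 2-socket: 1.0 TB, 4-socket: 1.5 TB, 8-socket: 3.0 TB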

The Advantages of Memory-Dense Servers

IT organizations have been able to achieve substantial consolidation objectives with virtualization to date, but in order for IT to continue to drive down costs in the datacenter, additional improvements are needed within hardware solutions to drive up VM densities. If customers are to consider more than 20 VMs per server, they will need to procure servers with very high memory capabilities. Given that a proportional increase in processor counts is not required, IDC believes that organizations will increasingly look to a new set of server infrastructure that scales memory capacity while optimizing for processor counts. There are multiple benefits to this type of "memory-rich" system:

Scale virtual server environments without installing new physical servers. By procuring servers with higher memory capabilities, IT organizations can grow their installed base of virtual servers as requirements increase without adding another physical server: they can scale by installing additional memory modules instead. This approach saves not only hardware, real estate, and power and cooling but also the time required to order, build, and deploy a new piece of hardware.

Choose DIMM counts, DRAM modules, and overall memory costs. By selecting servers with high numbers of DIMM slots, customers can fill those slots with lower-cost 2GB and 4GB DRAM modules or maximize available memory with more expensive 8GB or 16GB modules. Customers can also decide whether to fill every DIMM slot with less expensive memory or to use fewer, more expensive DRAM modules and leave free slots for future expansion; the population sketch after this list illustrates the trade-off.


Improve application choice for physical and virtual servers. Memory-rich servers can be used not only for delivering high numbers of virtual machines per server but also for hosting higher-end 64-bit workloads such as large databases and OLTP, ERP, or CRM solutions that are memory and/or I/O intensive and are sensitive to the overhead of virtualization. This type of architecture also makes virtualization of these higher-end workloads more realistic. While customers may choose to install fewer, larger VMs on these servers, they can still reap the additional benefits of virtualization, mainly higher availability and improved flexibility from mobility and deployment tools.

Better leverage processor-based software pricing. For customers whose applications are priced by socket or core, implementing memory-rich systems without an increase in socket or core count means that IT organizations can take advantage of existing software pricing and improve consolidation rates without an increase in software costs; see the licensing sketch after this list.

Aid in migrating large databases to a virtual environment or x86 architecture. With massively scalable memory architectures, x86 customers will have greater choice in where to run their large databases. Prior to these innovations, customers would typically deploy large databases on richly configured standalone systems. Memory capacities in excess of 1TB provide customers with significantly more options for migrating these databases from existing platforms. Memory-rich systems also open up the possibility of virtualizing these databases so that customers can exploit the advantages of mobility and rapid deployment that come with virtualization.

Improve database performance by providing more memory addressability and memory sharing. IT organizations could also use memory-rich systems to improve the performance of large databases on x86 platforms. Enhanced memory addressability reduces thrashing with memory-hungry databases and improves memory sharing.
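The DIMM population trade-off mentioned in the list above can be made concrete with a small sketch; the module prices are placeholder assumptions, not vendor list prices.

    # Two ways to reach 256GB in a 64-slot server: fill every slot with
    # inexpensive 4GB modules, or use sixteen 16GB modules and keep slots
    # free for future growth. Prices are placeholder assumptions.

    def populate(slots, module_gb, price_per_module, modules_used):
        return {"capacity_gb": modules_used * module_gb,
                "cost": modules_used * price_per_module,
                "free_slots": slots - modules_used}

    print(populate(64, 4, 100, 64))   # 256GB, cost 6400, 0 free slots
    print(populate(64, 16, 700, 16))  # 256GB, cost 11200, 48 free slots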
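Similarly, the software-pricing benefit in the list above can be illustrated with a hypothetical per-socket license; the price per socket is an assumption.

    # With per-socket licensing, raising VM density on the same 2-socket
    # host leaves the license bill flat while cost per VM falls.
    # The 2000-per-socket price is a hypothetical assumption.

    def license_cost_per_vm(sockets, price_per_socket, vms):
        return sockets * price_per_socket / vms

    print(license_cost_per_vm(2, 2000, 6))   # ~667 per VM at 6 VMs
    print(license_cost_per_vm(2, 2000, 20))  # 200 per VM at 20 VMs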

CONCLUSION

IDC believes that a new IT business cycle has begun. Over the next 10 years, IT organizations will be hard pressed to meet increasing demands from the business without innovating around technology. At the same time, they are expected to continue to drive greater efficiencies and maximize IT budgets. As businesses become increasingly connected to and dependent on technology, the need to support an ever-growing portfolio of applications and analytics requires a smarter set of IT systems.

Virtualization will be at the heart of future datacenter transformations and fundamentally requires a different set of systems that are tightly integrated and purpose built for virtualization. This new generation of servers is designed from the ground up to support virtual machines and will require large memory footprints to optimize virtual workloads and large databases. These systems bring together server, storage, and networking systems as well as automation tools that seek to reduce management complexities that have become a burden for most large IT organizations. While these systems will be more proprietary in nature, the trade-off is in simplifying deployment and maintenance.


To continue to drive efficiencies in the datacenter and address ongoing consolidation, IT organizations should carefully weigh the total cost of implementing memory-rich systems with high VM densities and scalable workloads against the moderate virtualization goals they have today. IDC believes that without a change in IT practices and policies, the cost of computing will continue to rise as virtualization saturates at more modest consolidation levels.

To drive up VM densities, customers should:

Balance newer processing capabilities in systems with dense memory configurations. This is essential for a host of benefits: improving consolidation ratios, expanding the choice of physical and virtual servers for more applications, leveraging processor-based software licensing, enabling migration of large databases to a virtual environment or x86 architecture, and improving database performance with more memory addressability and memory sharing.

Take advantage of innovations in processing architecture with embedded virtualization assist technology to enable offloading and lower the overhead from the hypervisor.

Implement networked storage solutions that enable mobility of virtual machines across physical systems and allow for optimization of applications across the entire datacenter while still meeting SLA requirements for availability and performance.

Implement automation and workload-balancing tools to reduce the amount of required hardware for overhead purposes and reach a higher level of system utilization and lower staff maintenance costs.

Consolidate applications with the same operating system on physical servers to encourage page sharing between applications. This reduces pressure on system memory when capacity runs low.

Aggressively test current IT practices and policies and reevaluate if these serve longer-term goals for virtualization adoption and consolidation. This will likely require a change in current thinking and may be the most difficult change to make in creating a more integrated set of technologies for the future datacenter.

Copyright Notice

External Publication of IDC Information and Data — Any IDC information that is to be used in advertising, press releases, or promotional materials requires prior written approval from the appropriate IDC Vice President or Country Manager. A draft of the proposed document should accompany any such request. IDC reserves the right to deny approval of external usage for any reason.

Copyright 2010 IDC. Reproduction without written permission is completely forbidden.

XSW03070-USEN-00