
Hyper-V: Microsoft’s Approach to Server Virtualization

Organizations are looking to server virtualization to consolidate, increase efficiency and save money. Microsoft’s approach to server virtualization is significantly different from that taken by VMware. In this informative E-Guide brought to you by SearchWindowsServer.com and Dell, learn specifically how Hyper-V is designed to meet high availability requirements, how this technology is being used at the hardware level, the role of a 64-bit operating system with virtualization, and more.

Sponsored By:

E-Guide

SearchWinIT.com | SearchExchange.com | SearchSQLServer.com | SearchEnterpriseDesktop.com | SearchWindowsServer.com | SearchDomino.com | LabMice.net

TechTarget Windows Media


Table of Contents:

Can Microsoft Hyper-V meet high availability requirements?

Server virtualization at the hardware level with Hyper-V

Virtualization and 64-bit: A match made in Windows heaven

Resources from Dell



Can Microsoft Hyper-V meet high availability requirements?

Danielle Ruest and Nelson Ruest, Contributors

The buzz in the industry right now is all about virtualization. Virtualization vendors are jockeying for position, and each touts features the others supposedly lack. One of these is a feature every hypervisor should have, and one that Windows Server 2008 Hyper-V seems to be without in this version: live virtual machine (VM) migration. Live migration in a virtual environment does not mean migration from one state to another, such as migrating a physical machine to a virtual state. It means moving a running VM from one host server to another without interrupting the service it delivers to end users.

In order to move virtual machines in this manner, you need to make sure that each host server has access to the files that make up the VM. When you move a VM through live migration, you don't want to have to move those files, since they can be considerable in size. Instead, you want to move only the in-memory contents of the virtual machine, contents that are stored within the host server's memory.

Both VMware ESXi and Citrix XenServer have the ability to do this, and both use the same strategy. Generally, host servers are linked together in high-availability clusters or resource pools. The servers tie into the same shared storage container, and because of this they have immediate access to the files that make up the VM during such a move. This is the first rule of host servers: They must be configured to tie into shared storage in order to provide high availability for the virtual machines they host (see Figure 1).

Figure 1

Microsoft's Hyper-V does not support live migration. Instead, it supports Quick Migration, a feature that saves the state of a VM and moves it to another host. Because the state of the virtual machine is saved, there is an interruption in service, though in some host server configurations this interruption can be as brief as four seconds. Hyper-V provides this feature through Windows Server 2008's Failover Clustering service, where host server nodes are linked together into a failover cluster. These clusters can provide host server redundancy at the site level, when two or more nodes (Windows Server 2008 can create clusters of up to 16 nodes) are linked to shared storage, or at the multi-site level, when two or more nodes are joined through WAN links to provide redundant services should damage occur at the site level (see Figure 2).
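The gap between Quick Migration and live migration comes down to how much memory must move while the VM is paused. A rough back-of-the-envelope model makes the difference concrete; the VM size, interconnect speed and dirty-page fraction below are illustrative assumptions, not Hyper-V measurements:

```python
def transfer_seconds(data_gb, link_gbps):
    """Time to push data_gb gigabytes over a link_gbps gigabit-per-second link."""
    return (data_gb * 8) / link_gbps

vm_memory_gb = 4.0     # in-memory state of the VM (assumed)
link_gbps = 10.0       # cluster interconnect speed (assumed)
dirty_fraction = 0.02  # pages rewritten during the final pre-copy pass (assumed)

# Quick Migration: the whole memory image moves while the VM is paused.
quick_downtime = transfer_seconds(vm_memory_gb, link_gbps)

# Live migration: memory is pre-copied while the VM keeps running; only the
# small residue of dirty pages moves during the pause.
live_downtime = transfer_seconds(vm_memory_gb * dirty_fraction, link_gbps)

print(f"Quick Migration pause: ~{quick_downtime:.1f} s")   # ~3.2 s
print(f"Live migration pause:  ~{live_downtime:.2f} s")    # ~0.06 s
```

On a 1 Gbps link the same 4 GB image would take roughly 32 seconds to pause and copy, which is why the speed of the cluster interconnect matters so much to how short a Quick Migration outage can be.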

Multi-site failover is based on each cluster node having the same contents as the others. This is achieved through content replication, a feature that Windows Server 2008 does not offer; you get it through a third-party replication tool.


Figure 2

So, is it essential for Microsoft Hyper-V to have live migration? The answer is no, not at this time. Most organizations running Hyper-V as a hypervisor will also run Windows workloads in their virtual machines. By relying on Windows Server 2008's own internal features, it's easy for administrators to make sure there are no service interruptions to end users, no matter what happens to the host server. It doesn't work for every Windows workload, but it does for most of them, and as a proven technology, it works really well.

Server virtualization at the hardware level with Hyper-V

Brien M. Posey, Contributor

For as long as Windows has existed, applications have been prohibited from communicating directly with hardware. This is because one of the major principles behind the Windows operating system is that it acts as a layer of abstraction between hardware and applications. Applications never communicate with the hardware directly. Instead, they communicate with Windows, which in turn uses various device drivers to communicate with the physical hardware.

Recently, however, this philosophy has started to change, at least when it comes to server virtualization. Let's start with a little history.

A look back at Virtual Server 2007

Prior to the release of Windows Server 2008, Microsoft's primary virtualization solution was Virtual Server 2007. Virtual Server followed the standard philosophy, in which applications were not allowed to communicate directly with the system hardware, and took something of a monolithic approach to server virtualization.

Windows treated Virtual Server 2007 pretty much the same as any other Windows application, in that the host operating system ultimately retained control of all of the system's resources. That meant guest operating systems all shared system resources, such as memory, network communications, video processing and so on.

This sharing of resources is both inefficient and risky. It's inefficient because guest operating systems do not have a dedicated pool of system resources. Instead, the host operating system acts sort of like a dictator, telling the guest OS if or when it can have access to certain resources. Both Windows and Virtual Server 2007 act as a bottleneck for guest operating systems.

It is a risky approach because of the way that resources are shared between guest and host operating systems. Suppose for a moment that the host OS had a buggy NIC driver, and that bug eventually left the host OS unable to communicate on the network. Because the guest operating systems are completely dependent on the host, they would not be able to communicate across the network either.



Enter the hypervisor

With the release of Hyper-V, Microsoft took a completely different approach to server virtualization, in that virtual machines are now allowed to communicate directly with the hardware (well, sort of). The exception is disk I/O, which is still coordinated through the host operating system. Otherwise, guest servers running on Hyper-V bypass the host OS and communicate directly with the server's hardware. The reason Microsoft is able to take such a radically different approach to server virtualization is that Hyper-V is built on some relatively recent changes to server hardware.

The latest server hardware supports something called hardware-assisted virtualization. For example, Intel servers offer Intel VT (Virtualization Technology), while AMD has AMD-V. Hyper-V absolutely requires that your server be equipped with one of these two technologies. It is also worth noting that I recently deployed a server that was equipped with Intel VT, but I had to enable virtualization at the BIOS level before I was allowed to install Hyper-V.
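As an illustration of what the hardware advertises: on Linux, /proc/cpuinfo exposes the relevant CPU flags ("vmx" for Intel VT, "svm" for AMD-V), and a sketch like the one below can interpret them; the flag string here is a made-up sample, and note that, as the anecdote above shows, a CPU can advertise the feature while it is still disabled in the BIOS:

```python
def virtualization_support(cpu_flags):
    """Map an advertised CPU flag string to a hardware-assist technology, if any."""
    flags = set(cpu_flags.split())
    if "vmx" in flags:
        return "Intel VT"
    if "svm" in flags:
        return "AMD-V"
    return None  # no hardware assist: Hyper-V cannot run on this machine

# Stand-in for one "flags" line from /proc/cpuinfo (hypothetical sample)
sample_flags = "fpu vme de pse tsc msr pae mce cx8 apic sep vmx sse2"
print(virtualization_support(sample_flags))  # Intel VT
```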

So what else makes Hyper-V different from previous virtualization technologies? Unlike Virtual Server 2007, Hyper-V is a very small application. This size reduction is due to the fact that it's really the hardware that is doing most of the virtualization work.

Hyper-V creates special partitions for guest operating systems. These are different from disk partitions, because these partitions include memory and other system resources. Each virtual machine functions solely within its own partition, which greatly reduces the chances that a failure in the host OS or in a different guest could impact a guest operating system. It also makes virtualization much more secure, since the virtual machines are isolated from one another.

Hardware-assisted virtualization is a technology that is really worth paying attention to. I have been using Hyper-V on a few different servers in my lab, and guest operating systems seem to perform much better than they do in a Virtual Server 2007 environment. In fact, they tend to perform so well that I sometimes forget that they are virtual servers instead of physical ones.

Virtualization and 64-bit: A match made in Windows heaven

Christa Anderson, Contributor

Virtualization is the hot topic of the day. Be it application virtualization, OS virtualization or presentation virtualization, if you can virtualize it, someone's probably slapped that label on it.

The thing is, all of these technologies have been around for some time, even years in some cases. Multi-user Windows has existed in various forms since 1992 and became a core part of the Windows operating system with Windows 2000. VMware Inc. has been evangelizing virtualized servers and clients since the company's inception,


and SoftGrid was talking up application isolation and streaming long before Microsoft purchased the company in 2006. People were buying it, too.

So why is virtualization a hot topic now, instead of two years ago?

There are several possible reasons. Virtualization features have improved with every release, so the virtualized experience has become more like working on a non-virtualized computer. Increased interest in environmentally friendly computing solutions has fostered interest in remote access and server consolidation.

Still, perhaps the most important reason why virtualization has become such a hot topic is that the infrastructure now exists to support it and make it scale while ensuring a rich experience. Reliable high-speed LANs and WANs are part of that infrastructure, as is 64-bit Windows.

In fact, 64-bit Windows is a key part of virtualization because of the one major virtualization bottleneck: memory. Let's take a look at the relationship between physical memory (the DIMMs you install in your computer) and virtual memory (the place where the operating system stores data and applications in use).

In a 32-bit system, Windows can address up to 4 GB of virtual memory. Two gigabytes of virtual memory are shared among kernel-mode processes that support core functions of the operating system, and 2 GB are allocated individually to each user-mode process and isolated from all other user-mode processes. The number of virtual memory addresses available to user-mode processes may appear enormous, because each process sees the entire 2 GB area for its exclusive use.
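The 4 GB ceiling and the 2 GB/2 GB split fall straight out of the arithmetic of a 32-bit pointer, as a quick sketch shows:

```python
ADDRESS_BITS = 32
GB = 2 ** 30

total_bytes = 2 ** ADDRESS_BITS      # every address a 32-bit pointer can form
total_gb = total_bytes // GB         # 4 GB of virtual address space
kernel_gb = 2                        # half reserved for kernel-mode use
user_gb = total_gb - kernel_gb       # what each user-mode process sees

print(total_gb)  # 4
print(user_gb)   # 2
```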

But in order for virtual memory to be useful, the memory manager must be able to map the virtual address to a physical location so that when the data is needed, the operating system knows how to go get it. Windows does this through a system of pages that store data, page tables that index the pages and a record of page table entries. Combined, these all document how a virtual memory address maps to a physical location.
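A minimal sketch of that mapping, with a flat, hypothetical page table standing in for the multi-level structure Windows actually maintains (4 KB pages assumed, as on x86):

```python
PAGE_SHIFT = 12                  # 4 KB pages: the low 12 bits are the offset
PAGE_MASK = (1 << PAGE_SHIFT) - 1

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0x1: 0x0A0, 0x2: 0x157}

def translate(virtual_addr):
    """Resolve a virtual address to a physical one via the page table."""
    vpn = virtual_addr >> PAGE_SHIFT      # which page the address falls in
    offset = virtual_addr & PAGE_MASK     # where within that page
    if vpn not in page_table:
        # Not resident in RAM: the OS would go fetch it from the page file
        raise LookupError("page fault at %#010x" % virtual_addr)
    return (page_table[vpn] << PAGE_SHIFT) | offset

print(hex(translate(0x1ABC)))  # page 0x1 maps to frame 0xA0 -> 0xa0abc
```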

The 32-bit operating system's method of mapping virtual addresses to physical ones works for up to 4 GB of physical memory under normal circumstances, since the addresses are 32 bits long. The rest of the virtual memory addresses must be backed by an area of hard disk called the page file, which provides alternate storage but is slower than RAM.

The issue here is that on a virtualized system, there are going to be a lot of user-mode processes. A single computer may support half a dozen or so users for virtualizing desktops using technology like Microsoft's Hyper-V or VMware's ESX Server, and it may support dozens or hundreds of users for virtualizing applications using Terminal Services. Every user will have his or her own set of applications, and all those applications were originally designed to run on a single-user computer. Virtualization platforms are designed to be as parsimonious as possible with memory, but at the end of the day they're bound by the demands of the applications.

Virtualized PCs have an even greater problem than terminal server sessions: the entire operating system must be virtualized to support each connection. Another issue is that virtualization becomes the victim of its own success.


If the virtualized experience is limited, then people won't like it. But if it has most of the same features as a non-virtualized platform, then supporting that takes resources. For example, the new support for monitor spanning in Windows Server 2008 Terminal Services requires more memory than a single monitor, because the viewing space is larger.

Therefore, you need an efficient virtualization platform with enough memory to back it properly. Although terminal servers have used 32-bit operating systems for smaller deployments, 64-bit platforms, combined with adequate processor support and a disk topology designed to reduce I/O bottlenecks, will be necessary to support larger deployments. And that's just as true for virtualized operating systems attempting desktop replacement. For this reason, Microsoft's Hyper-V is available only on 64-bit operating systems, although you can install 32-bit operating systems as guests on Hyper-V.

There are some catches to 64-bit operating systems too, of course. For one, 64-bit processes use more memory than their 32-bit counterparts, so you'll need to run enough processes to require more than 4 GB of memory before it's worth it. In addition, 64-bit operating systems need 64-bit drivers, which can be harder to find. Still, although they require more planning to implement, 64-bit operating systems are the future, especially since they are more or less required to support the virtualization that people are looking for.
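The extra memory appetite of 64-bit processes is easy to estimate: pointers double from 4 to 8 bytes, so pointer-heavy data structures grow even when the payload doesn't. A hypothetical example with made-up node counts and payload sizes:

```python
NODES = 1_000_000  # nodes in a linked structure (illustrative)
PAYLOAD = 16       # bytes of real data per node (illustrative)

def structure_mb(nodes, payload, ptr_size):
    """Footprint in MB of a structure holding one pointer per node."""
    return nodes * (payload + ptr_size) / 2 ** 20

for label, ptr in (("32-bit", 4), ("64-bit", 8)):
    print(f"{label}: {structure_mb(NODES, PAYLOAD, ptr):.1f} MB")
# 32-bit: 19.1 MB, 64-bit: 22.9 MB
```

The gap widens as the pointer-to-payload ratio rises, which is one reason a 64-bit deployment only pays off once total demand exceeds what 4 GB can back.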



Resources from Dell

Virtualization 101

Using virtualization for testing and development environments

What is Microsoft Hyper-V?

About Dell

Dell Inc. (NASDAQ: DELL) listens to customers and delivers innovative technology and services they trust and value. Uniquely enabled by its direct business model, Dell is a leading global systems and services company and No. 34 on the Fortune 500. For more information, visit www.dell.com, or to communicate directly with Dell via a variety of online channels, go to www.dell.com/conversations. To get Dell news direct, visit www.dell.com/RSS.