
Virtualization

WHITE PAPER, SEPTEMBER 2011

NIIT
National Institute of Information Technologies

Submitted by:

Mohsen Shahbazi Behzadi

Student ID: S103008900126

Date: 28 Sep 2011

Esfahan – Iran

TABLE OF CONTENTS

1) Introduction
2) A Short Look at Virtualization History
3) Virtualization Benefits
4) The Different Types of Virtualization
   a) Hardware-Assisted (Platform) Virtualization
   b) Server Virtualization
      i) Full Virtualization
      ii) Paravirtualization (PVM)
   c) OS-Level Virtualization (Operating System Level)
   d) Application Virtualization
      i) Desktop Virtualization
   e) Network Virtualization
   f) Storage Virtualization
5) The Reasons to Adopt Virtualization Strategies
6) The Challenges of Managing a Virtual Environment
   a) Policy-based management
   b) Bandwidth implications
   c) Image proliferation
   d) Security
   e) Human issues
7) References

Overview

Virtualization is a technology that enables running multiple operating systems side by side on the same processing hardware. This white paper provides an easy-to-understand introduction to virtualization and explains the benefits that virtualization technology can provide for engineering applications.


Introduction

Virtualization is a technical innovation, designed to

increase the level of system abstraction and enable IT

users to harness ever-increasing levels of computer

performance.

At its simplest level, virtualization allows you, virtually and cost-effectively, to have two or more computing environments, running different operating systems and applications, on one piece of hardware. For example, with virtualization you can have both a Linux virtual machine and a Microsoft Windows virtual machine on one system. Alternatively, you could host a Microsoft Windows 95 desktop and a Microsoft Windows XP desktop on one workstation.

In slightly more technical terms, virtualization

essentially decouples users, operating systems, and

applications from the specific hardware characteristics

of the systems they use to perform computational

tasks. This technology promises to usher in an entirely

new wave of hardware and software innovation. For

example, and among other benefits, virtualization is

designed to simplify system upgrades (and in some

cases may eliminate the need for such upgrades), by

allowing users to capture the state of a virtual machine

(VM) and then transport that state in its entirety from

an old to a new host system.

Virtualization is also designed to enable a generation of

more energy-efficient computing. Processor, memory,

and storage resources that today must be delivered in

fixed amounts determined by real hardware system

configurations will be delivered with finer granularity

via dynamically tuned VMs.

For example, consider a common engineering application that can benefit from virtualization: running multiple operating systems in parallel. Today, many designers need to take advantage of real-time processing or control while providing a graphical user interface. While traditionally this would have required two physical computers (one for each operating system), virtualization enables running both operating systems on the same PC or embedded controller. Eliminating the need for an extra computer means a better integrated overall system, a saving in cost, and a reduction in footprint.

It is important to note that virtualization is not only

being used in the engineering domain. Many

information technology (IT) companies have used virtualization to consolidate large groups of servers, at savings that can reach millions of dollars. This white paper will specifically outline the major benefits that virtualization can provide engineers, and provide an introduction to the common types of virtualization.

A Short Look at Virtualization History

When you think of the beginning of server virtualization, companies like VMware may come to mind. What you may not realize is that server virtualization actually started back in the early 1960s and was pioneered by companies like General Electric (GE), Bell Labs, and International Business Machines (IBM).

In the early 1960s, IBM had a wide range of systems, each generation of which was substantially different from the previous one. This made it difficult for customers to keep up with the changes and requirements of each new system. Also, computers could only do one thing at a time: if you had two tasks to accomplish, you had to run the processes in batches. This batch-processing requirement wasn't a big problem for IBM, since most of its users were in the scientific community, and up until this time batch processing seemed to have met customers' needs.

Because of the wide range of hardware requirements, IBM began work on the S/360 mainframe system, designed as a broad replacement for many of its other systems and designed to maintain backwards compatibility. When the system was first designed, it was meant to be a single-user system to run batch jobs. Growing demand for time sharing, however, led IBM to build systems in which each user ran inside a separate virtual machine on the shared mainframe. The main advantages of using virtual machines with a


time-sharing operating system were more efficient use of the system, since virtual machines were able to share the overall resources of the mainframe instead of having the resources split equally between all users. There was better security, since each user was running in a completely separate operating system. And it was more reliable, since no one user could crash the entire system; only their own operating system.

Virtualization Benefits

Save Hardware Cost and Footprint

Virtualization provides the ability to take advantage of multiple operating systems on one physical PC or embedded controller, without investing in a separate computer for every OS. This allows engineers to buy less hardware and reduce overall system footprint (which is especially important in deployed applications).

Take Advantage of Operating System Services

With virtualization it is possible to take advantage of the capabilities offered by different operating systems on just one set of hardware. For example, a designer may wish to use graphics services provided by Windows in conjunction with deterministic processing provided by a real-time OS such as LabVIEW Real-Time.

Make Use of Multicore Processors

Virtualization software can allow users to directly assign groups of processor cores to individual operating systems. For example, if an engineer wishes to use Linux and a real-time OS, more CPU and memory resources can be allocated to the real-time OS to optimize performance. Running virtualization software on a given computer allows designers to make the most of their processing resources by keeping processor cores busy.
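Real hypervisors expose this through CPU affinity or pinning settings; as a purely illustrative sketch (the function name and weighting scheme are invented for this example, not any vendor's API), physical core IDs can be partitioned between guests by weight:

```python
# Toy sketch (not a real hypervisor API): split physical core IDs
# between guest OSes according to requested weights, giving the
# real-time OS the larger share as described above.

def partition_cores(total_cores, weights):
    """Assign core IDs to each named guest proportionally to its weight."""
    core_ids = list(range(total_cores))
    total_weight = sum(weights.values())
    assignment, start = {}, 0
    names = list(weights)
    for i, name in enumerate(names):
        if i == len(names) - 1:          # last guest takes the remainder
            count = total_cores - start
        else:
            count = round(total_cores * weights[name] / total_weight)
        assignment[name] = core_ids[start:start + count]
        start += count
    return assignment

# Example: 8 cores, real-time OS weighted 3:1 over the GUI OS.
alloc = partition_cores(8, {"rtos": 3, "gui": 1})
print(alloc)   # {'rtos': [0, 1, 2, 3, 4, 5], 'gui': [6, 7]}
```

In a real deployment the same idea appears as vCPU-to-core pinning in the hypervisor's configuration rather than application code.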

Increase System Security

Since individual operating systems running on a virtualized machine can be isolated from each other,

virtualization is one way to create secure machines. This reduces the need for multiple physical computers that

operate at different security levels but are not fully utilized.

Business Continuity and Disaster Recovery

Virtualization allows easier software migration, including system backup and recovery, which makes it

extremely valuable as a Disaster Recovery (DR) or Business Continuity Planning (BCP) solution. Virtualization

can duplicate critical servers, so IT does not need to maintain expensive physical duplicates of every piece of

hardware for DR purposes. DR systems can even run on dissimilar hardware. In addition, virtualization

reduces downtime for maintenance, as a virtual image can be migrated from one physical device to another to

maintain availability while maintenance is performed on the original physical server. This applies equally to


servers and desktops, or even mobile devices – virtualization allows workers to remain

productive and get back online faster when their hardware fails.

The Types of Virtualization

Hardware “Assisted” Virtualization

Hardware virtualization is a system which uses one processor to

act as if it were several different computers. This has two main

purposes. One is to run different operating systems on the same

hardware. The other is to allow more than one user to use the

processor at the same time.

The name “hardware virtualization” is used to cover a range of

similar technologies carrying out the same basic function.

Strictly speaking, it should be called “hardware-assisted

virtualization”. This is because the processor itself carries out

some of the virtualization work. This is in contrast to techniques

which are solely software based.

The primary use of hardware virtualization is to allow multiple users to access the processor. This means that

each user can have a separate monitor, keyboard and mouse and run his or her operating system

independently. As far as the user is concerned, they will effectively be running their own computer. This set-up

can cut costs considerably as multiple users can share the same core hardware.

There are some significant limitations to hardware virtualization. One is that it still requires dedicated

software to carry out the virtualization, which can bring additional costs. Another is that, depending on the way

the virtualization is carried out, it may not be as easy to add in extra processing power later on as and when it

is needed. Perhaps the biggest drawback is that no matter how efficiently the virtualization is carried out, the

maximum processing power of the chip cannot be exceeded. This means it must be split between the different

users. Whether this is a problem depends on what type of applications they are running: the system is better

suited to activities such as web browsing and word processing than activities such as video editing which eat

up more processor power.

Server Virtualization

Server virtualization, also known as a Virtual Dedicated Server

(VDS) or Virtual Private Server (VPS), is cheaper than a dedicated

server and solves the resource-sharing problems of a shared

server by earmarking resources for each subscriber and allowing

each virtual server to run completely separately from the others,

even running separate operating systems, if desired. Server

virtualization also has applications within organizations, as it can

allow tasks and processes that are not compatible to be operated

on the same server completely without interaction or overlap,


making the use of the server more efficient. Another benefit of virtual servers is allowing for redundancy within a single piece of hardware. A second virtual server could contain the same application and/or the same data to use as a backup in case of a failure.

Server virtualization can be accomplished in three different ways. The first is referred to as full virtualization

or the virtual machine model; the second as paravirtualization or the paravirtual machine (PVM) model;

and the third is called OS-level virtualization or virtualization at the OS (operating system) level.

Full Virtualization

Full virtualization is a process where an entire computer

system is made into a software construct. This construct acts

like the original hardware in every way. Software that is

designed for the hardware will install on the construct as

though it were the actual computer, and then run with little to no slowdown. Full virtualization has several uses, such as testing software in a virtual environment or expanding the usefulness of a single computer or server through virtual operating systems. While partial virtualization is very common, full virtualization is relatively rare.

For virtualization to be full, an entire hardware system needs to be reproduced in software: every action and nuance of the original hardware needs to move over to the virtual system. Since this is such a large undertaking, and some system manufacturers take steps to discourage it, full virtualization is somewhat rare. It is much more common to find partial virtualization, where all the necessary system bits are present but the physical hardware system handles much of the low-level calculations and functions.
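A toy model can make this concrete. The sketch below uses hypothetical class names (nothing from a real VMM) to show the classic trap-and-emulate pattern: the monitor intercepts privileged I/O instructions and services them against emulated device state, while ordinary instructions run natively, so the guest needs no modification:

```python
# Toy trap-and-emulate model (illustrative only, not a real VMM).
# The guest issues instructions as if it owned the hardware; the
# monitor traps privileged ones and emulates them against virtual
# device state, so unmodified guest software keeps working.

class VirtualMachineMonitor:
    def __init__(self):
        self.virtual_io_ports = {}      # emulated device state for this VM

    def execute(self, instruction, operand=None):
        if instruction == "out":        # privileged I/O write: trap and emulate
            port, value = operand
            self.virtual_io_ports[port] = value
            return "emulated"
        elif instruction == "in":       # privileged I/O read: serve from
            port = operand              # the emulated device state
            return self.virtual_io_ports.get(port, 0)
        else:                           # unprivileged work runs natively
            return "native"

vmm = VirtualMachineMonitor()
print(vmm.execute("add"))               # -> 'native'
print(vmm.execute("out", (0x3F8, 65)))  # -> 'emulated'
print(vmm.execute("in", 0x3F8))         # -> 65
```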

Paravirtualization

Paravirtualization is a method of allowing software running

on a virtual system to bypass the virtual interface and run

operations on the system’s actual hardware. In a standard

virtual system, the only program that utilizes the system’s

actual hardware is the virtual interface. The rest of the

software runs totally inside the virtual environment. With

paravirtualization, there are ways that the included

software can access actual resources rather than virtual

ones. This speeds up certain functions without sacrificing

computing power.

In most virtual systems, a real machine has a program installed that operates as the virtual

interface for the rest of the operations. This interface, often called a hypervisor, is usually

inaccessible to the users of the virtual system; only people with actual hardware access can get

to it. When virtual users do have access to the hypervisor, they are often severely limited in

what they can do to the system. The hypervisor is essentially the center of the virtual system. It


oversees the installed virtual software and provides a platform for virtual users. When

programs on the virtual system need access to hardware, the hypervisor will take the

information and process it itself or format it and send it to the underlying system.

In a system that uses paravirtualization, a virtual program has the option to bypass the virtual

operating system and operate directly with the system’s hardware when it needs hardware

access. Some operations are very difficult for the virtual system to accomplish. When a virtual

program needs to perform one of these tasks, it takes fewer resources for the program to skip

the virtual layer and go directly to the hardware system. Paravirtualization is still done

sparingly, as too many direct hardware calls can overtax the system.

In order to use paravirtualization, both the actual system and the virtual system need certain

preparations. The biggest factor is the paravirtualization software itself; only operating systems

and hypervisors with paravirtualization capacity can perform these functions. While these are

often excluded from a standard install, most server software companies have add-ons available

that will give their products the correct capabilities.
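The contrast with full virtualization can be sketched in the same toy style (hypothetical names, not a real hypervisor interface): a paravirtualized guest is modified to call the hypervisor explicitly, a so-called hypercall, instead of issuing a privileged instruction and relying on the hypervisor to trap it:

```python
# Toy hypercall model (illustrative only). A paravirtualized guest
# knows it runs on a hypervisor and asks for services directly,
# instead of the hypervisor trapping privileged instructions.

class Hypervisor:
    def __init__(self):
        self.pages_mapped = 0

    def hypercall(self, name, **args):
        # A small, explicit menu of services the guest may request.
        if name == "map_page":
            self.pages_mapped += 1
            return {"status": "ok", "page": args["guest_addr"]}
        raise ValueError(f"unknown hypercall: {name}")

class ParavirtGuest:
    def __init__(self, hv):
        self.hv = hv                    # the guest is aware of its host

    def allocate_memory(self, addr):
        # Instead of touching page tables (privileged), ask the host.
        return self.hv.hypercall("map_page", guest_addr=addr)

hv = Hypervisor()
guest = ParavirtGuest(hv)
result = guest.allocate_memory(0x1000)
print(result["status"], hv.pages_mapped)   # ok 1
```

The explicit call avoids the cost of trapping, which is why paravirtualization can speed up the hard-to-virtualize operations described above.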

Virtualization at the OS (operating system) level

Operating system virtualization is a method of altering a standard operating system so it may handle multiple

users all at the same time. These individual users would not have any interaction with one another. Their

information would also remain separate, even though they are using the same system. While this technology

has several uses, the most common uses are in hosting situations and server consolidation.

With operating system virtualization, a single system is set up to operate like several individual systems. The virtualized system is set up to simultaneously accept commands from different users. These commands remain separate from one another; the results and impact of any given command have no effect on commands from others. This division of resources should be transparent to the user; they shouldn't be able to tell whether they are on a virtual system or not.

A common example of this process is the logout command. On a normal computer system, logging out of the operating system suspends operation for that machine until the user logs back in. In a system using operating system virtualization, when one user logs out, the operating system logs out only that single user; the rest of the users are unaffected.
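The logout behaviour above can be sketched as a toy model (purely illustrative, not a real OS or container implementation): each user gets an isolated session, and removing one session leaves the others running:

```python
# Toy model of per-user isolation: logging one user out of the
# shared system ends only that user's session.

class SharedSystem:
    def __init__(self):
        self.sessions = {}              # user -> isolated environment

    def login(self, user):
        self.sessions[user] = {"env": {}, "active": True}

    def logout(self, user):
        del self.sessions[user]         # only this user's session ends

system = SharedSystem()
for u in ("alice", "bob", "carol"):
    system.login(u)
system.logout("bob")
print(sorted(system.sessions))          # ['alice', 'carol']
```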

There are two common circumstances where operating system virtualization is used: hosting environments and server consolidation. Web hosting companies, e-mail storage systems, and other account-based hosting systems often use virtual systems. Since the users of these types of systems require very few resources, it is possible for many people to log on at once without taxing the system. Each user operates inside their own environment without interacting with, or seeing the resources of, other users.


The second common area where a user may encounter operating system virtualization is on a

consolidated server. As computer systems increase in power, one new server may be able to take the

jobs of several older ones. In this case, it is possible to combine all the server resources onto the new machine.

Since the old servers were separate, it is often necessary to maintain the isolation used by the original systems.

In both of these areas, multiple users that have no relationship to one another have to use the same server. This

is one of the most common aspects of operating system virtualization. If the users were part of the same group,

then they could coexist and share resources. The only reason to keep them separate is when the users have no

relationship with one another and have no reason to combine systems.

Application Virtualization

Application virtualization is a process for changing the way that

software runs on a computer’s operating system. With application

virtualization tools, software makers can create programs that will run

on a wider range of operating systems, or in more diverse conditions.

Making applications “virtual” helps to provide more compatibility for a

piece of software in complex and diverse hardware setups.

In traditional software design, a software program is executed by the

operating system directly. With application virtualization, the process is

different. The “run-time” process involves indirect program execution.

This means that some remote technology or extra component is helping

the computer to “read” and “run” the program.

Different kinds of application virtualization include application streaming and “desktop virtualization”. In

desktop virtualization, there may be “helper” elements installed to assist in the execution of software. In

application streaming, help can be delivered through networks, over an Internet connection.

Application virtualization is similar to what's called "software as a service" (SaaS). Many SaaS setups include application streaming or similar methods. The overall benefit of SaaS is to provide software over the Web as opposed to selling it "out of the box." In traditional "out of the box" setups, the user has to install and register a software product; with SaaS and application virtualization technologies, none of this is required.

Desktop Virtualization

Desktop virtualization is a computer process

where individual workstations use a central

desktop stored on a separate server. This server

may have multiple desktop systems for different

types of workers, such as a marketing

department having a different desktop than tech

support, but it doesn’t have individual desktops

for each worker.

With a desktop virtualization process, a central

system stores all of the system’s information.

This central server contains the hosted operating


system and configuration information. If the server hosts multiple desktops, it has a

different configuration setup or operating system installed for each one.

Generally, these desktops are very basic. They contain little by way of extra programming or

personalized features. Many remote desktop systems allow individuals to save their personal

settings on the server along with their desktop. These individual configuration files are usually

very basic as well; the majority of systems try to keep desktops as similar as possible.

Saving configuration settings is just the tip of what the server does in a typical desktop virtualization process. It also allows the saving of documents and web histories, which keeps all of the information generated by a workforce in a single, central location. With the exception of the client-side system that allows a person to access the desktops, everything is done completely on the desktop server.

Network Virtualization

Network virtualization is a method used to

combine computer network resources into a single

platform, known as a virtual network. It is achieved

by software and services that allow the sharing of

storage, bandwidth, applications, and other

network resources. The technology utilizes a

method similar to the virtualization process used

to simulate virtual machines within physical

computers. A virtual network treats all hardware

and software in the network as a single collection

of resources, which can be accessed regardless of

physical boundaries. In simple terms, network

virtualization allows each authorized user to share

network resources from a single computer.

There are two forms of network virtualization, external and internal. External virtualization generally

combines multiple networks — or parts of networks — into a single virtual entity. Internal virtualization

provides system-wide sharing and other network functionality to the software containers, which act as hosting

environments for the software components of the network, on a single physical system. The external variety is the most commonly used method of creating virtual networks. Vendors that distribute these virtualization tools generally offer one form or the other.

Network virtualization is not an entirely new concept. In fact, virtual private networks (VPNs) have been

widely used by network administrators for years. Virtual local area networks (VLANs) also represent a

common variation of network virtualization. Both serve as examples of how significant advancements in

computer connectivity methods have made it possible for networks to no longer be restricted by geographical

lines.


Storage Virtualization

Computer data is stored on disks and solid-state media for availability over days, months, or years. In small systems, such as a personal computer, there is a CPU and one or two hard disks; when a disk fails or runs out of space, another disk has to be manually added and the data has to be placed on that disk. In large systems, there can be hundreds of disks and digital storage systems, and the complexity of managing the information increases considerably. Storage virtualization is the grouping of storage devices such that they seamlessly appear to be one large storage device.

Storage virtualization can be handled by hardware

or software, or a combination of the two. It has a

number of benefits. Data may be moved from one

device to another device behind the scenes while

the system is making requests, and the request is

automatically routed to the new location. When a

storage device has to be added or removed, this can

be done without bringing down the system. This

increases the availability of the system to the

ultimate users.

With storage virtualization, information can be intelligently managed; for example, data that is accessed less

frequently can be moved to a slower device. Utilization of storage space could be improved. Each storage

device by itself may have unused space, but that unused space might be too small to be utilized for a single file

that the operating system wants to place. With storage virtualization, unused space on multiple devices is

automatically “accumulated” because parts of the file can be stored on separate devices.
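The space-pooling idea can be illustrated with a toy pool (hypothetical class and device names, invented for this example): no single device can hold the file, but the pool can, because the file's chunks land on different devices:

```python
# Toy storage pool (illustrative only): fragmented free space
# becomes usable once devices are pooled, since a file can be
# split into chunks placed on separate devices.

class StoragePool:
    def __init__(self, device_free_mb):
        self.free = dict(device_free_mb)      # device -> free MB
        self.placement = {}                   # file -> [(device, MB), ...]

    def store(self, name, size_mb):
        if size_mb > sum(self.free.values()):
            raise IOError("pool exhausted")
        chunks = []
        for dev in self.free:
            if size_mb == 0:
                break
            take = min(self.free[dev], size_mb)
            if take:
                self.free[dev] -= take
                chunks.append((dev, take))
                size_mb -= take
        self.placement[name] = chunks

# Neither disk alone can hold a 10 MB file, but the pool can.
pool = StoragePool({"disk_a": 6, "disk_b": 5})
pool.store("report.dat", 10)
print(pool.placement["report.dat"])    # [('disk_a', 6), ('disk_b', 4)]
```

A real storage virtualization layer adds redundancy and migration on top of this basic mapping, as the surrounding text notes.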

Storage virtualization thus provides the system with the storage it needs without getting bogged down by the

limitations of the individual devices. Of course, now, a significant amount of information has been virtualized. It

is very important that this information is retained in a fail-proof manner, usually by storing it in multiple

locations.

The Reasons to Adopt Virtualization Strategies

Virtualization is a technology that can benefit just about any business. Thousands of IT executives around the world use virtualization solutions to reduce IT costs while increasing the efficiency, utilization, and flexibility of their existing computer hardware. Here are a few of the top benefits of developing your virtualization strategy:

Server Consolidation and Optimization: Virtualization makes it possible to achieve significantly higher server utilization by pooling common infrastructure resources and moving away from the "one server per application" model.

Infrastructure Cost Reduction: With virtualization, you can reduce the number of servers and related IT

hardware in the data center. This leads to reductions in real estate, power and cooling requirements, resulting

in significantly lower IT costs.


Improved Business Continuity: Eliminate planned downtime and recover quickly from unplanned outages with the ability to securely back up and migrate entire virtual environments with no interruption in service.

Improved Desktop Manageability & Security: Deploy, manage, and monitor secure desktop environments that end users can access locally or remotely, with or without a network connection, on almost any standard desktop, laptop, or tablet.

The Challenges of Managing a Virtual Environment

While virtualization offers a number of significant business benefits, it also introduces some new management

challenges that must be considered and planned for by companies considering a virtualization strategy.

The key management challenges for companies adopting virtualization include:

Policy-based management

Bandwidth implications

Image proliferation

Security

Human issues

Policy-Based Management

Enterprises should look to deploy automated policy based management alongside their virtualization strategy.

Resource management, for example, should include automated policy-based tools for disk allocation and usage,

I/O rates, CPU usage, memory allocation and usage, and network I/O. Management tools need to be able to

throttle resources in shared environments, to maintain service levels and response times appropriate to each

virtual environment. Administrators should be able to set maximum limits, and allocate resources across

virtual environments proportionally. Allocations need to have the capability to change dynamically to respond

to peaks and troughs in load characteristics. Management tools will also be required to automate physical to

virtual, virtual to virtual, and virtual to physical migration.
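As a sketch of such a policy (the allocation rule, names, and numbers are assumptions for illustration, not any product's behaviour), a shared resource can be divided among VMs by weight while honoring per-VM maximum limits set by the administrator:

```python
# Sketch of proportional, policy-based allocation: divide a shared
# resource across VMs by weight, capping any VM at its admin-set
# maximum and redistributing the surplus to the others.

def allocate(total, vms):
    """vms: {name: {"weight": w, "max": cap}} -> {name: share}."""
    shares = {}
    remaining, pool = total, dict(vms)
    while pool:
        total_weight = sum(v["weight"] for v in pool.values())
        # VMs whose proportional share exceeds their cap take the cap;
        # everyone else shares what remains by weight.
        capped = {n: v for n, v in pool.items()
                  if remaining * v["weight"] / total_weight > v["max"]}
        if not capped:
            for n, v in pool.items():
                shares[n] = remaining * v["weight"] / total_weight
            break
        for n, v in capped.items():
            shares[n] = v["max"]
            remaining -= v["max"]
            del pool[n]
    return shares

# 1000 MB of RAM across three VMs; the database VM is capped at 300 MB.
print(allocate(1000, {
    "db":  {"weight": 4, "max": 300},
    "web": {"weight": 1, "max": 1000},
    "app": {"weight": 1, "max": 1000},
}))
```

Dynamic reallocation, as described above, amounts to re-running a policy like this as load characteristics change.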

Bandwidth Implications

Enterprises should make sure they have the appropriate network bandwidth for their virtualization

requirements. For example, instead of one server using a 100Mb Ethernet cable, now 10 or even 100 virtual

servers must share the same physical pipe. While less of a problem within the datacenter or for communication

between virtual servers running in a single machine, network bandwidth is a significant issue for application

streaming and remote desktop virtualization. These technologies deliver quite substantial traffic to end users,

in most cases significantly higher than is required for standard-installed desktop computing. Streaming

technologies, although in many cases more efficient than complete application delivery, also impose high

bandwidth requirements.
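The arithmetic behind the example above can be made explicit: virtual servers sharing one physical link each see only a fraction of its capacity.

```python
# Back-of-the-envelope check: N virtual servers sharing a single
# physical 100 Mb/s link, assuming an even split of the bandwidth.

def per_vm_bandwidth(link_mbps, vm_count):
    return link_mbps / vm_count

for n in (1, 10, 100):
    print(f"{n:3d} virtual servers -> {per_vm_bandwidth(100, n):.1f} Mb/s each")
```

Real traffic is bursty rather than evenly split, but the worst case under load is exactly this division.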


Image Proliferation

Operating system and server virtualization can lead to a rapid proliferation of system images, because it is so much easier and faster to deploy a new virtual image, without approval or hardware procurement, than to deploy a new physical server. This can impose very high management and maintenance costs, and potentially lead to

significant licensing issues including higher costs and compliance risks. This proliferation also leads to

significant storage issues, such as competing I/O and extreme fragmentation, requiring much faster and multi-

channel disk access, and more maintenance time, effort, and cost. Enterprises need to manage their virtual

environment with the same level of discipline as their physical infrastructure, using discovery tools to detect

and prevent new systems from being created without following proper process.
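At its core, the discovery step described above is a comparison between the images actually found on hosts and an approved inventory; anything outside the inventory was deployed without following process. A minimal sketch (the image names are invented for illustration):

```python
# Sketch of a discovery check: flag virtual machine images found in
# the environment that are not in the approved inventory.

def unapproved_images(discovered, inventory):
    return sorted(set(discovered) - set(inventory))

inventory  = {"web-01.img", "db-01.img", "app-01.img"}
discovered = {"web-01.img", "db-01.img", "app-01.img",
              "test-copy.img", "db-01-clone.img"}

print(unapproved_images(discovered, inventory))
# ['db-01-clone.img', 'test-copy.img']
```

Commercial discovery tools add scanning and enforcement, but the policy decision reduces to this set difference.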

Security

While virtualization can have many worthwhile security benefits, security also becomes more of a management

issue in a virtualized environment. There will be more systems to secure, more points of entry, more holes to

patch, and more interconnection points across virtual systems (where there is most likely no router or

firewall), as well as across physical systems. Access to the host environment becomes more critical, as it will

often allow access to multiple guest images and applications. Enterprises need to secure virtual images just as

well as they secure physical systems.

Human Issues

Enterprises should not underestimate the potential for human issues to affect their virtualization plans

adversely. Virtualization requires a new set of skills and methodologies, not just within IT, but often (certainly

in the case of application and desktop virtualization) in the end-user community. Perhaps most importantly,

this new technology requires new and creative thinking, not just new training and skills.
References

http://en.wikipedia.org/wiki/Virtualization

http://www.kefox.com/virtualization_details.html

http://www.virtualizationpractice.com/blog/?p=8843

http://www.vmware.com/virtualization/green-it/pc.html

http://www.kernelthread.com/publications/virtualization/

http://static.highspeedbackbone.net/html/virtualization-spotlight.html

http://www.gfi.com/blog/5-benefits-switching-virtualization-technology/

http://www.nashnetworks.ca/virtualization-a-small-business-perspective.htm

http://www.exforsys.com/tutorials/virtualization/understanding-basic-virtualization.html
