LOAD BALANCING OF APPLICATIONS USING XEN HYPERVISOR
Xen Virtualization
Submitted To: Mr. Prakash Kumar
Submitted By: Vanika Kapoor (10103453)
Atishay Baid (10103457)
Virtualization
Separation of administrative zones
Separation of software failure
Consolidation of hardware resources
Full utilization of hardware
Easier hardware provisioning -- want a server? You've got a server.
Excellent test environments
What virtualization isn't
Not an HA solution by itself
Naïve implementation: not suitable for some secure applications (timing attacks on private keys)
Unknown risk -- lots of new code; the host OS adds a new point of entry
May actually increase complexity: adds host OSes to manage, adds to the total number of points of management
Encourages "guerilla" server projects
Full Virtualization
Hardware Virtual Machines: VMware, Xen HVM, KVM, Microsoft VM, Parallels
Runs unmodified guests
Generally the worst performance, but often acceptable
Simulates a BIOS; communicates with VMs through ACPI emulation, BIOS emulation, and sometimes custom drivers
Can sometimes virtualize across architectures, although this is out of fashion.
Para-virtualization
The hypervisor runs on the bare metal. It handles CPU scheduling and memory compartmentalization.
Dom0, a modified Linux kernel, handles networking and block storage for all guests. Dom0 is also privileged to manage the VMs on the system.
DomU, the guest OS, sends some requests straight to the hypervisor and others to Dom0.
Because the kernel knows it's virtualized, features can be built into it: hot connection/disconnection of resources, friendly shutdown, serial console.
Other paravirtualization schemes: Sun Logical Domains, VMware (sometimes)
Elements of a Xen VM
Virtual Block Device: image file, or real block device (either LVM or physical)
Network Bridges: routed (terminates at the Dom0) or bridged (terminates at the network interface)
Virtual Framebuffer: VNC server
Example VM Config
name = "DomU-1"
maxmem = 512
memory = 512
vcpus = 2
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=1,keymap=en-us" ]
disk = [ "tap:aio:/var/lib/xen/images/Centos5Image.img,xvda,w" ]
vif = [ "mac=00:16:3e:79:fd:8d,bridge=xenbr0" ]
xm -- Xen Manager
Command-line tool on Dom0 for managing VMs. Quick overview of options:
console -- attach to a domain's console
create -- boot a DomU from a config file
destroy -- immediately stop a DomU
list -- list running DomUs
migrate -- migrate a DomU to another Dom0
pause/unpause -- akin to suspend; TCP connections will time out
shutdown -- tell a DomU to shut down
network-attach/network-detach
block-attach/block-detach
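The xm options above can be sketched as a short Dom0 session. The domain name and config path are taken from the earlier example config (the /etc/xen path is an assumption about where that config is saved):

```shell
xm list                      # show running domains (Dom0 plus any DomUs)
xm create /etc/xen/DomU-1    # boot a guest from its config file
xm console DomU-1            # attach to the guest's console (Ctrl-] detaches)
xm pause DomU-1              # freeze the guest; TCP connections may time out
xm unpause DomU-1            # resume it
xm shutdown DomU-1           # ask the guest to shut down cleanly
xm destroy DomU-1            # pull the plug immediately
```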
Graph View
Xen Live Migration
Migrate machines off during upgrades, or balance load
Set xend.conf to allow migration from other Xen Dom0s.
The machine must reside on shared storage.
Must be on the same layer-2 network
xm migrate -l Machine dest.ip.addr.ess
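In classic Xen the migration settings live in /etc/xen/xend-config.sxp on the destination Dom0. A minimal sketch of enabling relocation and then migrating (the hosts-allow pattern is illustrative; the destination placeholder is from the slide above):

```shell
# In /etc/xen/xend-config.sxp on the destination Dom0, enable the
# relocation server (the hosts-allow pattern below is illustrative):
#   (xend-relocation-server yes)
#   (xend-relocation-port 8002)
#   (xend-relocation-hosts-allow '^localhost$ ^dom0-.*\\.example\\.com$')
# Restart xend there, then live-migrate from the source Dom0:
xm migrate -l DomU-1 dest.ip.addr.ess
```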
Shared Storage Options
NFS: simple hardware failover, well-understood configuration, spotty reliability history
Block-level storage (iSCSI or FC): more complex configuration, multipathing, commercial solutions are expensive. We're seeing traction for open iSCSI lately.
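For the iSCSI option, a minimal open-iscsi session on a Dom0 might look like this (the target IP and IQN are illustrative, not from any real setup):

```shell
# Discover targets exported by the storage box (IP is illustrative):
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to a discovered target (IQN is illustrative):
iscsiadm -m node -T iqn.2008-01.com.example:xen-store -p 192.168.1.50 --login

# The LUN then appears as a local block device (e.g. /dev/sdb) that a
# DomU disk line or an LVM physical volume can sit on.
```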
What to Look for in Storage
Redundant host connections
Snapshotting
Replication
Sensible volume management
Thin provisioning
IP-based failover, especially if x86-based
Storage Systems
OpenFiler: nice frontend; replication with DRBD; iSCSI with the Linux iscsi-target
OpenSolaris/ZFS: thin provisioning; too many ZFS features to list; StorageTek AVS -- replication in many forms; complex configuration
NexentaStor: ZFS/AVS in Debian; rapidly evolving
SAN/iQ: failover, storage virtualization, n-way redundancy; expensive and wickedly strict licensing
Too many proprietary hardware systems to list
Network Segmentation
802.1q VLAN tagging: all VLANs operate on the same physical network, but packets carry an extra tag that indicates which network they belong in.
Create an interface and a bridge for each VLAN.
Connect Xen DomUs to their appropriate VLAN.
Configure the hosts' switch ports as VLAN trunk ports.
Configure a router somewhere; a layer-3 switch is useful here.
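Creating the per-VLAN interface and bridge on a classic Xen Dom0 can be sketched as follows (VLAN ID 100, the bridge name, and the MAC are illustrative):

```shell
# Create a tagged sub-interface for VLAN 100 on eth0 (vconfig is the
# classic tool; modern kernels use `ip link add ... type vlan` instead):
vconfig add eth0 100
ifconfig eth0.100 up

# Create a bridge for that VLAN and attach the tagged interface:
brctl addbr xenbr100
brctl addif xenbr100 eth0.100
ifconfig xenbr100 up

# Point a DomU at it from its config file:
#   vif = [ "mac=00:16:3e:79:fd:8e,bridge=xenbr100" ]
```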
Commercial Xens
Citrix XenServer, Oracle VM, VirtualIron
Typical features: resource QoS, performance trending, physical machine failure detection, pretty GUI!, API for server provisioning
Recovery strategies
Mount a virtual block device on Dom0:
losetup /dev/loop0 XenVBlockImage.img
losetup -a
kpartx -a /dev/loop0
pvscan (if using LVM inside the VM)
vgchange -a y VolGroup00
mount /dev/mapper/VolGroup00-LogVol00 /mnt/xen
chroot /mnt/xen (or whatever recovery steps you take next)
Xen Recovery -- cont
Boot from a recovery CD as HVM:
disk = [ 'tap:aio:/home/xen/domains/damsel.img,ioemu:hda,w',
         'file:/home/jack/knoppix.iso,ioemu:hdc:cdrom,r' ]
builder = "hvm"
extid = 0
device_model = "/usr/lib/xen/bin/qemu-dm"
kernel = "/usr/lib/xen/boot/hvmloader"
boot = "d"
vnc = 1
vncunused = 1
apic = 0
acpi = 1
Or create a custom Xen kernel OS image for rescues.
Pitfalls
Failure to segregate the network: 802.1q and iptables firewalls everywhere
Creating single points of failure: make sure that VMs are clustered; if they can't be clustered, auto-started on another machine. Assess the reliability of shared storage.
Storage bottlenecks
Not planning for extra points of management: cfengine, puppet, centralized authentication
Less predictable performance modeling
Maintaining HA
Hardware will fail
Individual VMs will crash
Cluster multiple VMs for each application
Load balancers can be VMs too.
HA -- Continued
Failure detection: make VMs restart on different machines if a machine fails
Make VMs migrate off a host when you shut it down
Build your testing system into the VM scheme: at least one testing system per type of host. Diligently make all changes on that before rolling out.
Have at least one development VM per VM cluster.
Make sure that networking equipment and storage are redundant too
If running web servers, keep a physical web server on hand to serve a "We're sorry, come back later" page. For mail servers, an independent backup MX.
What is a File System?
• A file system is a hierarchical structure (file tree) of files and directories.
• This file tree uses directories to organize data and programs into groups, allowing the management of several directories and files at one time.
• Some tasks are performed more efficiently on a file system than on each directory within the file system.
What is Network File System?
• NFS was developed by Sun Microsystems for use on its UNIX-based workstations.
• A distributed file system
• Allows users to access files and directories located on remote computers
• But data is potentially stored on another machine.
• NFS builds on the Open Network Computing Remote Procedure Call (ONC RPC) system
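Because NFS registers its programs with the RPC portmapper, the ONC RPC layer is directly visible on a running server; the host name below is illustrative:

```shell
# List the RPC programs an NFS server has registered with the portmapper
# (hostname is illustrative). Expect entries such as portmapper, mountd
# and nfs, each with its program number, version and port:
rpcinfo -p nfs-server.example.com
```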
Continue…
Mechanism for storing files on a network.
Allows users to 'share' a directory.
NFS is most commonly used with UNIX systems.
Other software platforms: Mac OS, Microsoft Windows, Novell NetWare, etc.
Major goals: simple crash recovery; reasonable performance (80% of a local drive)
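Sharing a directory typically comes down to an /etc/exports entry on the server plus a mount on the client; the paths, host names and options here are illustrative:

```shell
# Server side: export a directory via /etc/exports, e.g. the line
#   /srv/share  client.example.com(rw,sync)
# then reload the export table:
exportfs -ra

# Client side: mount the shared directory over the network:
mount -t nfs server.example.com:/srv/share /mnt/share
```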
Versions and Variations
Version 1 and Version 2
Sun used V1 only for in-house experimental purposes and did not release it to the public.
V2 of the protocol originally operated entirely over UDP and was meant to keep the protocol stateless, with locking (for example) implemented outside of the core protocol.
Both suffered from performance problems.
Both suffered from security problems: security dependent upon IP address.
Version 3
NFS v3 can operate across TCP as well as UDP
Support for asynchronous writes on the server
Obtains multiple file names, handles and attributes
Support for 64-bit file sizes and offsets: handles files larger than 4 gigabytes (GB)
Improves performance, and allows it to work more reliably across the Internet
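Because v3 runs over either transport, the version and protocol are negotiable at mount time. A sketch with Linux NFS mount options (server name and paths are illustrative):

```shell
# Force NFSv3 over TCP:
mount -t nfs -o vers=3,proto=tcp server.example.com:/srv/share /mnt/share

# Force NFSv3 over UDP instead:
mount -t nfs -o vers=3,proto=udp server.example.com:/srv/share /mnt/share
```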
Version 4
Currently the version 2 and version 3 protocols are in use, with version 4 under consideration for a standard
Includes more performance improvements
Mandates strong security
Introduces a stateful protocol
Developed with the IETF (Internet Engineering Task Force)
NFS Architecture
[Diagram: NFS clients, each with a local file system, communicate over the network with an NFS server and its file system.]
RPC request Action
GETATTR Get file attribute
SETATTR Set file attribute
LOOKUP File name search
ACCESS Check access
READ Read file
WRITE Write to the file
CREATE Create file
REMOVE Remove file
RENAME Rename file
Stateless server and client: the server can be rebooted and a user on the client might be unaware of the reboot.
The client/server distinction occurs at the application/user level, not the system level.
Highly flexible, so we need to be disciplined in our administration/configuration.
Advantages
Disadvantages
Uses RPC authentication, which is easily spoofed
File system data is transmitted in cleartext: data could be copied
Network is slower than local disk
Complexity, security issues.
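The cleartext problem is easy to demonstrate: NFSv2/v3 traffic (port 2049) can be captured and read off the wire. The interface name below is illustrative:

```shell
# Capture NFS traffic and print packet payloads as ASCII; file contents
# read or written over NFSv2/v3 are visible in the clear:
tcpdump -i eth0 -A port 2049
```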
Conclusion
New technologies open up new possibilities for network file systems.
The cost of increased traffic over Ethernet may cause problems for xFS and cooperative caching.