VMware vSphere Storage Enhancements


Description: Technical description of VMware's latest edition, vSphere, with implications for storage enhancements.

Transcript of VMware vSphere Storage Enhancements

Page 1: VMware vSphere Storage Enhancements

Simplify, Virtualize and Protect Your Datacenter

Cost Savings and Business Continuity with VMware's Latest vSphere Solution

04/08/23 copyright 2007 I/O Continuity Group 1

Page 2: VMware vSphere Storage Enhancements

Cloud Computing: What does it mean?

• Fighting Complexity In The Data Center

• Good Is Good, But Cheap Is Sometimes Better

copyright I/O Continuity Group, LLC 2

Page 3: VMware vSphere Storage Enhancements

Cloud Computing and Economic Recovery

• Data centers represent expensive "pillars of complexity" for companies of all sizes, which is why they're being threatened by cloud computing.

• Spending $1 to acquire infrastructure and then spending $8 to manage it is unacceptable

• The idea is to let companies view all resources (internal and in the cloud) as a single "private cloud," and easily move data and applications among various data centers and cloud providers as needed.

copyright I/O Continuity Group, LLC 3

Page 4: VMware vSphere Storage Enhancements

4

Copyright © 2006 Dell Inc.

Datacenter Challenges

• IT Managers are looking for increased levels of resource utilization.

• Storage waste is measured in “unused” but allocated (aka stranded) storage; a worked example appears below.

– Storage disk utilization ratio is below 50% in most datacenters

– Storage hardware vendors have released support for “Thin Provisioning” in their storage arrays.

• Reducing CPU overhead between the host server and storage is another method of increasing storage efficiency.

– This reduction in overhead can greatly increase the throughput of a given system.
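To make the stranded-storage point concrete, here is a small Python sketch that computes the utilization ratio and the allocated-but-unused capacity for a single volume. The function name and the sample figures are illustrative assumptions, not output from any VMware or array tool.

```python
def utilization_report(allocated_gb: float, used_gb: float) -> dict:
    """Summarize stranded (allocated-but-unused) capacity for one volume."""
    stranded_gb = allocated_gb - used_gb
    utilization = used_gb / allocated_gb if allocated_gb else 0.0
    return {
        "allocated_gb": allocated_gb,
        "used_gb": used_gb,
        "stranded_gb": stranded_gb,
        "utilization_pct": round(utilization * 100, 1),
    }

if __name__ == "__main__":
    # A 500 GB LUN with 100 GB of application data gives 20% utilization,
    # in line with the "below 50%" figure cited for most datacenters.
    print(utilization_report(500, 100))
```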

Page 5: VMware vSphere Storage Enhancements

Conclusion

• Carefully compare the cost of private vs public clouds.

• Often outsourcing hardware and services comes at a prohibitive cost.

• Housing your hardware in a secure 24x7x365 facility is the best insurance against unexpected downtime and unmet SLAs.

04/08/23

copyright 2007 I/O Continuity Group 5

Page 6: VMware vSphere Storage Enhancements

copyright I/O Continuity Group, LLC 6

Background

Review of Basics

VMware on SAN Integration

Page 7: VMware vSphere Storage Enhancements

Traditional DAS (Direct-Attached Storage)

External SCSI Storage Array = Stranded Capacity

Parallel SCSI3 connection provides throughput of approx 200 MB/s after overhead.

LAN

Each server is separately attached to a dedicated SCSI storage array requiring high storage maintenance with difficult scalability and provisioning.

Different vendor platforms cannot share the same external array.

copyright I/O Continuity Group, LLC 7

A popular method for deploying applications was to install each one on a dedicated server.

Page 8: VMware vSphere Storage Enhancements

SAN-Attached Storage

FC Storage Array

FC SAN Switches

200/400/800 MB/s OR

IP SAN with iSCSI Ethernet Switches

Tape Library

Servers with NICs and FC HBA’s

LAN

FC SANs offer a SHARED, high-speed, dedicated block-level infrastructure independent of the LAN. IP SANs use Ethernet switches.

Brocade

copyright I/O Continuity Group, LLC 8

Applications able to run anywhere.

Page 9: VMware vSphere Storage Enhancements

Physical Servers represent the Before illustration running one application per server.

VMware “Converter” can migrate physical machines to Virtual Machines running on ESX in the After illustration.

copyright I/O Continuity Group, LLC 9

Page 10: VMware vSphere Storage Enhancements

What is a Virtual Machine?

04/08/23

copyright 2007 I/O Continuity Group 10

Virtual Machine VM

Virtual Hardware

Regular Operating System

Regular Application

Users see a software platform like a physical computer running an OS and application. The ESX hypervisor sees a discrete set of files:
• Configuration file (.vmx)
• Virtual disk file (.vmdk)
• NVRAM settings file
• Log file

Shared Storage

Page 11: VMware vSphere Storage Enhancements

ESX Architecture

04/08/23

copyright 2007 I/O Continuity Group 11

Memory CPU Disk and NIC

Three ESX4 Versions:

1. Standard ESX installed on supported hardware

2. ESXi installed on supported hardware (without Service Console)

3. ESXi Embedded, hard-coded in OEM server firmware (not upgradeable)

vSphere is also known as ESX4.

Shared Hardware Resources

Page 12: VMware vSphere Storage Enhancements

Storage Overview

• Industry Storage Technology

• VMware Datastore Format Types

04/08/23

copyright 2007 I/O Continuity Group 12

Industry storage technologies:

• Locally Attached: internal or external DAS

• Fibre Channel: high-speed SCSI on a SAN

• iSCSI or IP SAN: SCSI over standard TCP/IP

• NAS: file-level share on the LAN

VMware datastore format types: VMFS, NFS, and Raw Device Mappings (RDM)

Page 13: VMware vSphere Storage Enhancements

ESX Datastore and VMFS

04/08/23

copyright 2007 I/O Continuity Group 13

Volume/LUN (storage hardware) → Datastore (VMFS) mounted on ESX from the LUN → VM files

Datastores are logical storage units on a physical LUN (disk device) or on a disk partition.

Datastore format types are VMFS or NFS (RDMs map a raw LUN to an individual VM).

Datastores can hold VM files, templates and ISO images, or the RDM used to access the raw data.


Page 14: VMware vSphere Storage Enhancements

VMware Deployment Conclusions

• Adopting a SAN is a precondition for implementing the VMware server virtualization features that require “shared storage”.

• SANs consolidate and share disk resources to save costs on wasted space and eliminate outages and downtime.

• iSCSI is the IP SAN storage protocol of choice for organizations with tight budgets.

• FC is the SAN storage protocol of choice for mission-critical, high performance applications.

• Choosing a storage system housing both iSCSI and FC connections provides the most flexibility and scalability.

04/08/23

copyright 2007 I/O Continuity Group 14

Page 15: VMware vSphere Storage Enhancements

copyright I/O Continuity Group, LLC 15

vSphere Storage Management and

Efficiency Features

Page 16: VMware vSphere Storage Enhancements

16

Copyright © 2006 Dell Inc.

New vSphere Storage Features

• Storage Efficiency

– Virtual Disk Thin Provisioning

– Improved iSCSI software initiator

• Storage Control

– New vCenter Storage Capabilities

– Dynamic Expansion of VMFS Volumes

• Storage Flexibility and Enhanced Performance

– Enhanced Storage VMotion

– Pluggable Storage Architecture

– Paravirtualized SCSI and Direct Path I/O

Page 17: VMware vSphere Storage Enhancements

04/08/23

copyright 2007 I/O Continuity Group 17

Thin Provisioning

Page 18: VMware vSphere Storage Enhancements

Disk Thin Provisioning In a Nutshell

• Thin Provisioning was designed to handle unpredictable VM application growth.

– On the one hand, you don’t want to over-allocate disk space which may never be used,

– On the other hand, you don’t want to under-allocate disk space, which later forces you to grow the disk and costs admin time.

• Thin Provisioning adopts a “shared disk pool” approach to disk capacity allocation, thereby automating the underlying administration.

• All you have to do is ensure the overall disk pool capacity never runs out of available space.

04/08/23

copyright 2007 I/O Continuity Group 18

Page 19: VMware vSphere Storage Enhancements

Disk Thin Provisioning Comparison

04/08/23

copyright 2007 I/O Continuity Group 19

Without Thin Provisioning (aka Thick): if you create a 500 GB virtual disk, the VM consumes the entire 500 GB of VMFS Datastore allocated.

With Thin Provisioning: if you create a 500 GB virtual disk but only 100 GB of the VMFS Datastore is used, then only 100 GB is consumed, even though 500 GB is technically allocated to the VM for growth.

Thin disks can be created when the VM is deployed or during VM migration.

Page 20: VMware vSphere Storage Enhancements

Disk Thin Provisioning Defined

• Method to increase the efficiency of storage utilization

– A VM’s virtual disk sees its full size but uses only the amount of underlying storage its application needs (out of the shared Datastore pool)

– Initial allocation of a virtual disk requires 1 MB of space in the Datastore (the level of disk granularity)

– Additional 1 MB chunks of storage are allocated as storage demand grows, with some capacity lost to metadata

• Capacity management is comparable to the airline industry overbooking flights

– Airlines can reassign the seats of booked passengers who do not show up

– Thin Provisioning likewise reallocates unused storage to other VMs, which continue to grow into the available capacity on the fly

04/08/23

copyright 2007 I/O Continuity Group 20
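The chunk-by-chunk allocation and overbooking behavior described on this slide can be illustrated with a minimal Python simulation. The Datastore and ThinDisk classes below are invented stand-ins for the purpose of the example; they are not the VMFS allocator.

```python
class Datastore:
    """Shared pool of real capacity, handed out in whole-MB chunks."""
    def __init__(self, capacity_mb: int):
        self.capacity_mb = capacity_mb
        self.used_mb = 0

    def allocate(self, chunks_mb: int) -> bool:
        # Grant chunks on demand; fail when the pool is exhausted.
        if self.used_mb + chunks_mb > self.capacity_mb:
            return False          # pool full: the disaster case to alert on
        self.used_mb += chunks_mb
        return True

class ThinDisk:
    def __init__(self, datastore: Datastore, provisioned_mb: int):
        self.datastore = datastore
        self.provisioned_mb = provisioned_mb   # what the VM "sees"
        self.committed_mb = 0                  # what is actually backed by storage

    def write(self, mb: int) -> bool:
        # Back new writes with real chunks only as the guest consumes space.
        needed = min(mb, self.provisioned_mb - self.committed_mb)
        if not self.datastore.allocate(needed):
            return False
        self.committed_mb += needed
        return True

if __name__ == "__main__":
    pool = Datastore(capacity_mb=300_000)                     # roughly 300 GB of real capacity
    disks = [ThinDisk(pool, 500_000) for _ in range(3)]       # three thin disks, ~500 GB each
    disks[0].write(100_000)                                   # only ~100 GB actually committed
    print(pool.used_mb, "MB used of", pool.capacity_mb)
```

Three disks provision roughly 1.5 TB against a ~300 GB pool, yet only the chunks actually written consume real capacity, which is the overbooking trade-off the airline analogy describes.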

Page 21: VMware vSphere Storage Enhancements

Without Thin Provisioning / Without VMware

04/08/23

copyright 2007 I/O Continuity Group 21

Traditional servers with DAS (Direct-Attached Storage) and dedicated disks connected through SCSI adapters: totally stranded storage devices. What you see is what you get. A thick LUN backing a 500 GB virtual disk commits the full 500 GB.

With Thin Provisioning: ESX servers on the SAN (HBAs and switches) share disks on a storage array. All VMs see the capacity allocated, but a thin LUN offers only what is used: 100 GB of application usage, with 400 GB allocated but unused, on a 500 GB virtual disk.

Page 22: VMware vSphere Storage Enhancements

Thin Provisioning

04/08/23

copyright 2007 I/O Continuity Group 22

• Virtual machine disks consume only the amount of physical space in use at a given time

• VM sees the full logical disk size at all times

• Full reporting and alerting on consumption

• Benefits:

– Significant improvement of actual storage utilization

– Eliminates the need to over-provision virtual disk capacity

– Reduces storage costs by up to 50%

– Can convert “thick” to “thin” in conjunction with Storage VMotion data migration

120 GB allocated to thin VM disks, with 60 GB used

Page 23: VMware vSphere Storage Enhancements

23

Copyright © 2006 Dell Inc.

Virtual Disk Thin Provisioning Configured

Page 24: VMware vSphere Storage Enhancements

Thin Disk Provisioning Operations

04/08/23

copyright 2007 I/O Continuity Group 24

Page 25: VMware vSphere Storage Enhancements

Improved Storage Management

04/08/23

copyright 2007 I/O Continuity Group 25

Datastores are now managed as objects within vCenter, providing a view of all components in the storage layout and their utilization levels. Details for each datastore reveal which ESX servers are accessing its capacity.

Page 26: VMware vSphere Storage Enhancements

Thin Provisioning Caveats

• There is capacity overhead in Thin LUNs for handling individual VM allocations (metadata consumes some space).

• Check Storage Vendor Compatibility for Thin Provisioning support (depends on storage hardware vendor support).

• Understand how to configure Alerts well in advance of running out of physical storage

– Hosts attempting to write to completely full Thin LUN can cause loss of entire Datastore.

• VMs with Thin Provisioned disks do not work with VMware FT (Fault Tolerance), which requires thick, eager-zeroed disks.

copyright I/O Continuity Group, LLC 26
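As a sketch of the alerting discipline recommended above, the following Python function flags a thin-provisioned datastore as it approaches full. The thresholds and datastore name are assumptions for illustration; in practice the equivalent alarms would be configured in vCenter.

```python
WARNING_PCT = 75   # warn when a thin-provisioned datastore is 75% full (example value)
CRITICAL_PCT = 90  # act well before 100%: a completely full thin LUN can take down the datastore

def check_datastore(name: str, capacity_gb: float, used_gb: float) -> str:
    """Classify a datastore's fullness against the warning and critical thresholds."""
    pct = 100.0 * used_gb / capacity_gb
    if pct >= CRITICAL_PCT:
        return f"CRITICAL: {name} at {pct:.0f}%, grow the LUN or migrate VMs now"
    if pct >= WARNING_PCT:
        return f"WARNING: {name} at {pct:.0f}%, plan additional capacity"
    return f"OK: {name} at {pct:.0f}%"

if __name__ == "__main__":
    # "ThinDatastore01" is a hypothetical name used only for this example.
    print(check_datastore("ThinDatastore01", capacity_gb=1000, used_gb=920))
```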

Page 27: VMware vSphere Storage Enhancements

Thin Provisioning Conclusions

• For VMs expected to grow frequently or unpredictably, consider adopting Thin Provisioning while monitoring the disk capacity utilization.

• The more VMs sharing a Thin Provisioned Datastore, the faster it will fill up, so size the initial capacity accordingly.

• If there is any risk of forgetting to grow the Thin Provisioned Datastore when it gets full, DO NOT ADOPT Thin Provisioning.

• If a Thin Provisioned LUN runs out of disk space, all data could be lost.

04/08/23

copyright 2007 I/O Continuity Group 27

Page 28: VMware vSphere Storage Enhancements

04/08/23

copyright 2007 I/O Continuity Group 28

iSCSI Software Initiator

Page 29: VMware vSphere Storage Enhancements

iSCSI Software Initiator In a Nutshell

• iSCSI is a more affordable storage protocol than Fibre Channel; however, it is slower and better suited to lighter VM workloads.

• vSphere iSCSI stack is tweaked and tuned to use less CPU time and deliver better throughput.

– Software iSCSI (NIC) runs at the ESX layer

– Hardware iSCSI uses an HBA leveraged by ESX

• vSphere iSCSI configuration process is easier without requiring Service Console connection to communicate with the iSCSI target.

copyright I/O Continuity Group, LLC 29

Page 30: VMware vSphere Storage Enhancements

What is iSCSI?

• IP SAN sends blocks of data over TCP/IP protocol (This network is traditionally used for file transfers).

• To address the cost of FC-switched SANs, storage vendors added support for basic Ethernet switch connections (GigE- 1000 Mbps).

• ESX Hosts connecting to iSCSI SAN require an Initiator:

– Software iSCSI – relies on Ethernet NIC

– Hardware iSCSI – uses dedicated HBA

• Normally a host server can only connect through one of the two storage connection types (FC SAN or IP SAN).

04/08/23

copyright 2007 I/O Continuity Group 30

Page 31: VMware vSphere Storage Enhancements

iSCSI Software Initiator Key Improvements

• vSphere Goal is CPU Efficiency:

• Software iSCSI stack entirely rewritten.

– (NIC-dependent protocols tax the ESX CPU; 10 Gb NICs push the CPU roughly 10x harder)

• ESX4 uses the optimized TCP/IP2 stack, with IPv6 support and tuned locking and multi-threading capabilities.

• Reduced use of atomics and pre-fetching of locks with better use of internal locks (low level ESX programming).

• Better cache memory efficiency with optimized cache affinity settings.

04/08/23

copyright 2007 I/O Continuity Group 31

Page 32: VMware vSphere Storage Enhancements

Why is ordinary Software iSCSI Slow?

• iSCSI is sometimes referred to as a “bloated” protocol due to the high overhead and inefficiency of the IP network.

• The faster the network, the higher the drag on the host CPU for the added processing.

04/08/23

copyright 2007 I/O Continuity Group 32

iSCSI protocol over TCP/IP with high-overhead processing

Fibre Channel protocol over a high-speed dedicated network (FC SAN)

Page 33: VMware vSphere Storage Enhancements

vSphere Software iSCSI Configuration

• iSCSI Datastore Configuration is Easier and Secure

• No longer requires Service Console connection to communicate with an iSCSI target (unnecessary configuration step)

• New iSCSI initiator features include bi-directional CHAP authentication for better security (2-way initiator/target handshake).

04/08/23

copyright 2007 I/O Continuity Group 33

General tab changes are global and propagate down to each target.

Bi-directional CHAP adds authentication in both directions between initiator and target.
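For readers unfamiliar with CHAP, the sketch below shows the shape of the challenge/response exchange: each side proves knowledge of a shared secret by hashing an identifier, the secret and a random challenge (the standard CHAP response formula). The secrets and helper functions are placeholders; the real handshake is negotiated by the iSCSI initiator and target, and bidirectional CHAP simply runs the check in both directions.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # Standard CHAP response: MD5 over the identifier byte, the shared secret, and the challenge.
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def one_way_auth(prover_secret: bytes, verifier_secret: bytes) -> bool:
    """One direction of CHAP: the verifier issues a challenge, the prover answers it."""
    identifier, challenge = 1, os.urandom(16)
    answer = chap_response(identifier, prover_secret, challenge)
    expected = chap_response(identifier, verifier_secret, challenge)
    return answer == expected

if __name__ == "__main__":
    initiator_secret = b"initiator-secret"   # placeholder value, configured on both ends
    target_secret = b"target-secret"         # second secret enables the reverse check
    mutual_ok = (one_way_auth(initiator_secret, initiator_secret)    # target verifies initiator
                 and one_way_auth(target_secret, target_secret))     # initiator verifies target
    print("mutual authentication succeeded:", mutual_ok)
```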

Page 34: VMware vSphere Storage Enhancements

iSCSI Performance Improvements

04/08/23

copyright 2007 I/O Continuity Group 34

SW iSCSI stack is most improved

Page 35: VMware vSphere Storage Enhancements

Software iSCSI Conclusions

• Software iSCSI has been considered unsuitable for many VM application workloads due to its high overhead.

• vSphere has tuned its stack for sending VM block transmissions over TCP/IP (IP SAN) to offset the natively slower iSCSI protocol.

• Once 10 Gbit iSCSI is widely supported, higher workloads will run better.

• Consider a test-dev environment to test Software iSCSI hosts prior to deployment of mission-critical VMs.

04/08/23

copyright 2007 I/O Continuity Group 35

Page 36: VMware vSphere Storage Enhancements

04/08/23

copyright 2007 I/O Continuity Group 36

Dynamic Expansion of VMFS Volumes

Page 37: VMware vSphere Storage Enhancements

Dynamic Storage Growth In a Nutshell

• Virtual disks that VM applications outgrow can now be dynamically expanded to add capacity (with no reboot).

• Prior to vSphere, the only option for increasing the size of an existing VM’s virtual disk was adding new LUN partitions (“spanning extents”) rather than growing the original LUN.

• Corruption or loss of one extent (partition) in a spanned virtual disk resulted in loss of all combined extents, which made spanning risky.

• Hot Extend now allows the virtual disk to grow dynamically up to 2 TB.

• Thin Provisioning affects capacity usage in the datastore; Hot Extend allows resizing the VM’s virtual disk.

copyright I/O Continuity Group, LLC 37

Page 38: VMware vSphere Storage Enhancements

Without Hot Disk Extend: LUN Spanning

04/08/23

copyright 2007 I/O Continuity Group 38

Before: 20 GB

Added: 20 GB

After: 40 GB

Each 20 GB extent (virtual disk) becomes a separate partition (file system with drive letter) in the guest OS.

If one spanned extent is lost, the entire volume becomes corrupt.

Page 39: VMware vSphere Storage Enhancements

Hot Extend VMFS Volume Growth Option

• Hot Extend Volume Growth expands a LUN as a single extent so that it fills the available adjacent capacity.

• Used to increase size of a virtual disk.

– Only flat virtual disks in “Persistent Mode”

– No snapshots in “Virtual Mode”

• vSphere 4 VMFS volumes can grow an expanded LUN up to a 2 TB virtual disk.

copyright I/O Continuity Group, LLC 39

Before: 20 GB

After: 40 GB, up to 2 TB, with the volume grown as a single extent
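A tiny guardrail sketch of the growth rule on this slide: the request must be larger than the current size and must stay under the 2 TB ceiling, after which the file system inside the guest still has to be extended. The function is hypothetical and is not a VMware API.

```python
MAX_VMDK_GB = 2 * 1024  # 2 TB upper bound for a grown virtual disk, as cited on this slide

def plan_hot_extend(current_gb: int, requested_gb: int) -> str:
    """Validate a proposed single-extent grow operation and describe the next step."""
    if requested_gb <= current_gb:
        return "nothing to do: requested size is not larger than the current size"
    if requested_gb > MAX_VMDK_GB:
        return f"refused: {requested_gb} GB exceeds the {MAX_VMDK_GB} GB limit"
    return (f"grow the single extent from {current_gb} GB to {requested_gb} GB, "
            "then extend the file system inside the guest OS")

if __name__ == "__main__":
    print(plan_hot_extend(20, 40))      # the slide's 20 GB to 40 GB example
    print(plan_hot_extend(20, 3000))    # over the 2 TB ceiling
```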

Page 40: VMware vSphere Storage Enhancements

Dynamic Expansion up to VM

copyright I/O Continuity Group, LLC 40

– VM guest OS level: virtual disk (Hot Virtual Disk Extend)

– ESX level: Datastore holding the VM virtual disks, managed by the ESX admin (Datastore Volume Growth)

– Storage level: LUN presented as one Datastore, managed by the SAN admin (Dynamic LUN Expansion)

Page 41: VMware vSphere Storage Enhancements

Virtual Disk Hot Extend Configuration

04/08/23

copyright 2007 I/O Continuity Group 41

Increase from 2 GB to 40 GB

After updating the VM Properties, use the guest OS to format or extend the file system to use the newly allocated disk space. Must be a non-system virtual disk.

Ultimate VM application capacity is not always predictable at the outset.

Page 42: VMware vSphere Storage Enhancements

Virtual Disk Hot Extend Conclusion

• VMs that require more disk space can use the new Virtual Disk Hot Extend to grow an existing LUN to a larger size.

• The alternative is to add “extents” representing separate file system partitions. The drawback is the loss of the entire virtual disk volume if just one of the added extents fails.

• Thin Provisioning lets a VM’s virtual disk consume Datastore capacity on the fly, up to its configured size (avoiding wasted Datastore space).

• Virtual Disk Hot Extend allows a VM guest OS to enlarge the disk’s original capacity (allows the VM application to grow beyond the originally configured size).

04/08/23

copyright 2007 I/O Continuity Group 42

Page 43: VMware vSphere Storage Enhancements

04/08/23

copyright 2007 I/O Continuity Group 43

Storage VMotion

Page 44: VMware vSphere Storage Enhancements

Storage VMotion In a Nutshell

04/08/23

copyright 2007 I/O Continuity Group 44

Storage VMotion (SVM) enables live migration of virtual machine disks from one datastore to another with no disruption or downtime.

This hot migration of the storage location allows easy movement of VMs data.

Like VMotion, Storage VMotion reduces service disruptions without server downtime.

Minimizes disruption when rebalancing or retiring storage arrays, reducing or eliminating planned storage downtime.

Simplifies array migration and upgrades, reducing I/O bottlenecks by moving virtual machines while the VM remains up and running.

ESX 3.5 limitations:
• FC datastores supported only
• RCLI only, no GUI
• Relied on snapshot technology for migrations
• Experimental usage

Page 45: VMware vSphere Storage Enhancements

Enhanced Storage VMotion Features

• New GUI capabilities and full integration into vCenter.

• Migration from FC, iSCSI or NFS to any of the three storage protocols.

• Migrate from Thick or Thin LUNs to the opposite virtual disk format during Storage VMotion.

• New Change Block Tracking method moves VM’s home disk over to a new Datastore (without using a VM snapshot)

• Storage VMotion moves location of data while VM stays online.

copyright I/O Continuity Group, LLC 45

Page 46: VMware vSphere Storage Enhancements

Storage VMotion Benchmarks

04/08/23

copyright 2007 I/O Continuity Group 46

Change Block Tracking replaces Snapshot technology

Less CPU processing consumed

Shorter time to migrate data.

Fewer resources consumed in process.

Page 47: VMware vSphere Storage Enhancements

Storage VMotion New Capabilities

• How “Change Block Tracking” scheme improves handling migrations:

– Speeds up the migration process

– Reduces former excessive memory and CPU requirements. No longer requires 2x memory

– Leverages “fast suspend/resume” with change block tracking to speed up migration

• Supports moving VMDKs from Thick to Thin formats or migrating RDMs to VMDKs

– RDMs support storage vendor agents directly accessing the disk.

copyright I/O Continuity Group, LLC 47

Page 48: VMware vSphere Storage Enhancements

Storage VMotion Benefits

• Avoids downtime required when coordinating needs of:

– Application owners

– Virtual Machine owners

– Storage Administrators

• Moves a running VM to a different Datastore when performance suffers on over-subscribed ESX host or datastore.

• Easily reallocate stranded or unclaimed storage (i.e., wasted space) non-disruptively by moving a VM to a larger-capacity storage LUN.

copyright I/O Continuity Group, LLC 48

Page 49: VMware vSphere Storage Enhancements

Storage VMotion: How it Works

copyright I/O Continuity Group, LLC 49

1. Copies VM to new location on Destination

2. Start tracking changes in delta block

3. Fast suspend and resume VM on new Destination disk

Source Disk Array (FC)

Destination Disk Array (iSCSI)

4. Copies remaining VM delta disk blocks

5. Deletes original VM on source

Use to improve I/O workload distribution.
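The numbered steps above can be mirrored in a toy Python model: bulk-copy the disk, track blocks that change while the copy is in flight, then transfer only the delta. The dictionaries and the storage_vmotion function are invented for illustration; the real mechanism is VMware's change block tracking, not this code.

```python
import copy

def storage_vmotion(source: dict, in_flight_writes) -> dict:
    """source: block_id -> data; in_flight_writes: (block_id, data) writes that land
    on the source while the bulk copy is running."""
    destination = copy.deepcopy(source)       # 1. copy the VM's disk to the destination
    dirty = {}                                 # 2. track blocks changed during the copy
    for block_id, data in in_flight_writes:
        source[block_id] = data
        dirty[block_id] = data
    # 3. the real product does a fast suspend/resume of the VM at this point
    destination.update(dirty)                  # 4. copy only the remaining delta blocks
    source.clear()                             # 5. delete the original copy on the source
    return destination

if __name__ == "__main__":
    src = {0: "a", 1: "b", 2: "c"}
    writes_during_copy = [(1, "b2"), (2, "c2")]
    dst = storage_vmotion(src, iter(writes_during_copy))
    print(dst)   # {0: 'a', 1: 'b2', 2: 'c2'}
```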

Page 50: VMware vSphere Storage Enhancements

Storage VMotion Pre-requisites

Prior to running Storage VMotion:

• Remove Snapshots from VMs to be migrated

• RDMs must be in “Persistent” mode (not Virtual mode)

• ESX host requires VMotion license

• ESX host must have Access to both Source and Target Datastores

• Cannot VMotion (move VM) concurrently during Storage VMotion data migration.

• Up to Four Concurrent Storage VMotion migrations supported.

copyright I/O Continuity Group, LLC 50

Page 51: VMware vSphere Storage Enhancements

Storage VMotion Conclusion

• Built-in GUI provides more efficient, flexible storage options and easier processing of data migrations while VMs are running.

• Most suitable uses include:

– When a Datastore becomes full.

– When a VM’s application data requires faster or slower disk access (tiered storage).

– When moving data to a new storage vendor.

– To migrate RDM to VMFS or Thick to Thin or vice versa

04/08/23

copyright 2007 I/O Continuity Group 51

Page 52: VMware vSphere Storage Enhancements

04/08/23

copyright 2007 I/O Continuity Group 52

Paravirtualized SCSI

PV SCSI

Page 53: VMware vSphere Storage Enhancements

Paravirtualized SCSI In a Nutshell

• PV SCSI is a high-performance Virtual Storage Adapter.

• VMI paravirtualization std supported by some guest OSs.

• Designed for Virtual Machine applications requiring better throughput and lower CPU utilization.

• Best suited for environments with very I/O intensive guest applications.

• Improves efficiency by:

– Reducing the cost of virtual interrupts

– Batching the processing of I/O requests

– Batching I/O completion interrupts

– Reducing the number of context switches between the guest and the VMM

04/08/23

copyright 2007 I/O Continuity Group 53
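A back-of-the-envelope way to see why the batching listed above helps: if completions are delivered in groups, the number of virtual interrupts (and the guest/VMM context switches they trigger) drops roughly in proportion to the batch size. The batch sizes below are arbitrary examples, not PVSCSI internals.

```python
def interrupts_needed(total_ios: int, batch_size: int) -> int:
    """One completion interrupt per batch instead of one per I/O (ceiling division)."""
    return (total_ios + batch_size - 1) // batch_size

if __name__ == "__main__":
    ios = 100_000
    for batch in (1, 8, 32):
        # batch=1 models an unbatched adapter; larger batches model coalesced completions
        print(f"batch={batch:>2}: {interrupts_needed(ios, batch):>7} interrupts")
```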

Page 54: VMware vSphere Storage Enhancements

PV SCSI

• Serial-Attached SCSI (SAS) paravirtualized PCIe storage adapter (Peripheral Component Interconnect express or Local SCSI bus)

– A virtual adapter with the hardware specification written by VMware with drivers for W2K3/8 and RHEL5.

– Provides functionality similar to VMware’s BusLogic, LSILogic and LSILogic SAS.

– Supports MSI-X, PME and MSI capabilities in the device (“Message Signaled Interrupts” use in-band vs out-of-band PCI memory space for lower interrupt latency).

04/08/23

copyright 2007 I/O Continuity Group 54

Configure PV SCSI drive in VM

Page 55: VMware vSphere Storage Enhancements

PV SCSI Key Benefits

• Efficiency gains from PVSCSI can result in:

– additional 50 percent CPU savings for Fibre Channel (FC)

– up to 30 percent CPU savings for iSCSI.

• Lower overhead and higher CPU efficiency in I/O processing.

– Higher throughput (92% higher IOPS) and lower latency (45% less latency)

– Better VM scalability (more VMs/VCPUs per host)

• Configuring PV SCSI may require VM downtime to move virtual disks (.vmdk) to new adapter. Only works on VM data drives – not supported for boot drives.

copyright I/O Continuity Group, LLC 55

Page 56: VMware vSphere Storage Enhancements

VMware Performance Testing: Reduced CPU Usage

04/08/23

copyright 2007 I/O Continuity Group 56

•Additional 50% CPU savings for Fibre Channel (FC)

•Up to 30 percent CPU savings for iSCSI.

Less CPU usage and overhead

FC HBAs offer least overhead.

Page 57: VMware vSphere Storage Enhancements

PV SCSI Configuration

• In VM Properties, highlight the hard drive and select SCSI (1:0) or higher; you will see the new SCSI controller added.

• Click Change Type and select VMware Paravirtual

copyright I/O Continuity Group, LLC 57

Page 58: VMware vSphere Storage Enhancements

PV SCSI Use Cases

• The performance factors indicate benefits to adopting PV SCSI, but ultimately the decision will depend most on the VM application workload.

• Other factors to consider include vSphere Fault Tolerance which cannot be enabled on a VM using PVSCSI.

• VMware recommends that you create a primary adapter for use with a disk that will host the system software (boot disk) and a separate PVSCSI adapter for the disk that will store user data.

copyright I/O Continuity Group, LLC 58

Page 59: VMware vSphere Storage Enhancements

PV SCSI Conclusions

• PV SCSI improves VM access time to disk, positively impacting application performance through the storage stack (ESX storage adapter to SAN storage).

• VM’s paravirtualized SCSI adapter improves the disk communication and application response time.

• With PV SCSI-supported hardware and guest OS, simply create a new .VMDK file for a VM’s data applications.

04/08/23

copyright 2007 I/O Continuity Group 59

Page 60: VMware vSphere Storage Enhancements

04/08/23

copyright 2007 I/O Continuity Group 60

Pluggable Storage Architecture (PSA)

Page 61: VMware vSphere Storage Enhancements

Pluggable Storage Architecture In a Nutshell

• Multipathing technology optimizes I/O throughput across multiple SAN connections between a host and storage system. See example on next slide.

• Previously, the vmkernel could not use third-party storage plug-ins to spread I/O load across ALL available SAN Fibre paths (known as “multipathing”), relying instead on the less efficient native MPIO scheme.

• vSphere now integrates third-party vendor solutions to improve host throughput and failover.

• Third-party plug-ins install on the ESX host and require a reboot, with functionality depending on the storage hardware controller type (Active/Active or Active/Passive)

04/08/23

copyright 2007 I/O Continuity Group 61

Page 62: VMware vSphere Storage Enhancements

Pluggable Storage Architecture

04/08/23

copyright 2007 I/O Continuity Group 62

ESX 3.5 did not support third-party storage vendor multi-path software.

ESX 3.5 required the native MPIO driver, which was not optimized for dynamic load balancing and failover.

vSphere ESX 4 allows storage partners to write plug-ins for their specific capabilities.

Dynamic multipathing and load balancing on “active-active arrays” replaces the less intelligent “native multipathing” (basic round-robin or failover).

Page 63: VMware vSphere Storage Enhancements

Pluggable Storage Architecture (PSA)

Two classes of third-party plug-ins:

• Basic path-selection (PSPs) optimize the choice of which path to use for active/passive type arrays

• Full storage array type (SATPs) allow load balancing across multiple paths and path selection for active/active arrays

copyright I/O Continuity Group, LLC 63

NMP = generic VMware Native Multipathing (default without a vendor plug-in)
PSP = Path Selection Plug-in
Third-Party PSP = vendor-written path management plug-in
SATP = vendor Storage Array Type Plug-in
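To make the path-selection distinction concrete, here is a small Python sketch contrasting a fixed-style policy, which sticks to one preferred path, with a round-robin policy that spreads I/O across every live path. The class names and path strings follow the general idea only; this is not the PSA plug-in interface.

```python
from itertools import cycle

class FixedPSP:
    """Always use the first live path in preference order (active/passive style)."""
    def __init__(self, paths):
        self.paths = paths
    def select(self, dead=()):
        for p in self.paths:
            if p not in dead:
                return p
        raise RuntimeError("all paths down")

class RoundRobinPSP:
    """Rotate I/O across every available path (active/active style)."""
    def __init__(self, paths):
        self._ring = cycle(paths)
    def select(self, dead=()):
        for _ in range(1000):               # bounded scan so exhaustion raises cleanly
            p = next(self._ring)
            if p not in dead:
                return p
        raise RuntimeError("all paths down")

if __name__ == "__main__":
    paths = ["vmhba1:C0:T0:L0", "vmhba1:C0:T1:L0", "vmhba2:C0:T0:L0", "vmhba2:C0:T1:L0"]
    rr = RoundRobinPSP(paths)
    print([rr.select() for _ in range(4)])          # spreads I/O across all four paths
    print(FixedPSP(paths).select(dead={paths[0]}))  # fails over to the next live path
```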

Page 64: VMware vSphere Storage Enhancements

Pluggable Storage Architecture (PSA)

04/08/23

copyright 2007 I/O Continuity Group 64

By default, VMware provides a generic MPP called NMP (native multipathing).

Multipathing plug-in

Page 65: VMware vSphere Storage Enhancements

Enhanced Multipathing with Pluggable Storage Architecture

04/08/23

copyright 2007 I/O Continuity Group 65

Each ESX4 host will apply one of the plug-in options based on storage vendor choices.

Page 66: VMware vSphere Storage Enhancements

VMDirectPath I/O (Experimental)

• VM DirectPath I/O enables virtual machines to directly access the underlying hardware devices by binding a physical FC HBA to a single guest OS.

• Enhances CPU efficiency for workloads that require constant and frequent access to I/O devices

• This feature maps a single HBA to a single VM and will not enable sharing of the HBA by more than a single Virtual Machine.

• Other virtualization features, such as VMotion, hardware independence and sharing of physical I/O devices will not be available to the virtual machines using VMDirectPath

copyright I/O Continuity Group, LLC 66

Page 67: VMware vSphere Storage Enhancements

Third-party PSPs

04/08/23

copyright 2007 I/O Continuity Group 67

Page 68: VMware vSphere Storage Enhancements

Higher Performance API for Multipathing

• Experimental support for the following storage I/O devices:

• QLogic QLA25xx 8Gb Fibre Channel

• Emulex LPe12000 8Gb Fibre Channel

• LSI 3442e-R and 3801e (1068 chip based) 3Gb SAS adapters

• vSphere claims a 3x performance increase, to over 300,000 I/O operations per second, which bodes well for most mission-critical applications.

copyright I/O Continuity Group, LLC 68

Page 69: VMware vSphere Storage Enhancements

EMC PowerPath/VE

• Integrates popular path management software directly into the ESX vmkernel, handling I/O below the VM, guest OS, application, database and file system, but above the HBA.

• All Guest OS I/O run through PowerPath using a “pseudo device” acting like a traffic cop directing processing to the appropriate data path.

• PowerPath/VE removes all admin overhead by providing 1) dynamic load balance across ALL paths and 2) dynamic path failover and recovery.

• EMC’s PowerPath/VE API and generic NMP cannot manage the same device simultaneously.

• Licensing is on a per-socket basis (like VMware)

copyright I/O Continuity Group, LLC 69

Page 70: VMware vSphere Storage Enhancements

PSA Conclusions

• The former ESX3 native MPIO (multipathing I/O) did not support 3rd-party plug-ins, lumping all VM workloads onto a single path with no load balancing.

• vSphere’s Pluggable Storage Architecture supports 3rd-party plug-ins with more choices for allocating multiple VMs across the ESX paths (typically four) from the host HBAs through the SAN down to the storage system.

• Storage vendors write their own plug-ins for PSA to manage dynamic path failover, failback and load balancing.

• Variations in the type of storage controller also affect PSA configuration.

04/08/23

copyright 2007 I/O Continuity Group 70

Page 71: VMware vSphere Storage Enhancements

copyright I/O Continuity Group, LLC 71

Improved VM Availability and Failover

Page 72: VMware vSphere Storage Enhancements

04/08/23

copyright 2007 I/O Continuity Group 72

Fault Tolerance FT

Page 73: VMware vSphere Storage Enhancements

FT in a Nutshell

• VMs running in an ordinary HA Cluster of ESX hosts (with or without DRS) will experience downtime during automatic failover if an ESX host goes down.

• FT VMs in an HA Cluster never go down: a ghost-image VM running on a second ESX host survives the loss of the primary VM running on the failed host.

04/08/23

copyright 2007 I/O Continuity Group 73

Page 74: VMware vSphere Storage Enhancements

HA vs FT

• HA

– Simple High Availability Cluster Solution, like MSCS

– Leads to VM Interruptions During Host Failover

• FT is New to vSphere

– Improves on HA by ensuring VMs never go down

– Designed for Most Mission-Critical Applications

– Does not inter-operate with other VMware features

copyright I/O Continuity Group, LLC 74

Page 75: VMware vSphere Storage Enhancements

New Fault Tolerance

copyright I/O Continuity Group, LLC 75

Provides continuous protection for a VM when its host fails (takes VMware HA to the next level).

Included in vSphere Advanced, Enterprise and Enterprise Plus editions

•Limit of four FT-enabled VMs per ESX host

Page 76: VMware vSphere Storage Enhancements

Fault Tolerance (FT) Technology

• FT uses “Record and Replay” technology to record the “primary” VM’s activity and later play it back on the “secondary” VM.

• FT creates ghost image VM on another ESX host sharing same virtual disk file as the primary VM.

– Essentially both VMs function as a single VM

• Transfers CPU and virtual device inputs from Primary VM (Record) to Secondary VM (Replay) relying on a heartbeat monitor between ESX hosts. The FT logging NIC supports lockstep.

copyright I/O Continuity Group, LLC 76

Page 77: VMware vSphere Storage Enhancements

FT Lockstep Technology

copyright I/O Continuity Group, LLC 77

• Requires an identical processor on the secondary ESX host (clock speed within 400 MHz) to monitor and verify the operation of the first processor.
• VMs are kept in sync on the secondary ESX host, receiving the same inputs.
• Only the primary VM produces output (i.e., disk writes and network transmits).
• The secondary VM’s output is suppressed by the network until it becomes a primary VM.
• Both hosts send heartbeat signals through the logging NICs.
• Essentially both VMs function as a single VM.
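A toy illustration of the record/replay idea described above: the primary logs its inputs, the secondary replays the identical sequence and reaches the same state, but only the primary's output is visible. This is a thought experiment under simplifying assumptions, not VMware's lockstep implementation.

```python
def run_vm(inputs, produce_output: bool):
    """Apply the same ordered inputs; only the primary's output is made visible."""
    state = 0
    outputs = []
    for value in inputs:
        state += value                        # deterministic execution given identical inputs
        if produce_output:
            outputs.append(f"write {state}")  # the secondary's writes/transmits are suppressed
    return state, outputs

if __name__ == "__main__":
    logged_inputs = [3, 5, 7]                                  # "recorded" on the primary host
    primary = run_vm(logged_inputs, produce_output=True)
    secondary = run_vm(logged_inputs, produce_output=False)    # "replayed" on the second host
    assert primary[0] == secondary[0]                          # identical state, ready to take over
    print(primary, secondary)
```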


Page 78: VMware vSphere Storage Enhancements

FT System Requirements

• ESX hardware requires same Family of Processors

– Need specific processors that support Lockstep technology

– HV (Hardware Virtualization) must be enabled

– Turn OFF Power Mgmt in BIOS (ESX can never be down)

• Primary and Secondary ESX hosts must be running same build of ESX (no mixing of ESX3 and ESX4)

• Primary and secondary ESX hosts must be in HA cluster.

• At least 3 ESX hosts in the HA Cluster to tolerate a single host failure

• At least gigabit NICs required

• At least two teamed NICs on separate physical switches (one for VMotion, one for FT and one NIC as shared failover for both).

• 10 Gbit NICs support Jumbo Frames for performance boost

copyright I/O Continuity Group, LLC 78

Page 79: VMware vSphere Storage Enhancements

Other FT Configuration Restrictions

• FT Protected VMs must Share Same Storage with No-Single-Point-of-Failure design (multipathing, redundant switches, NIC teaming)

• No thin or sparse virtual disks, only “thick eager-zeroed” disks on VMFS3-formatted volumes (otherwise FT will convert them to thick).

• No Datastores using RDM (Raw Device Mapping) in physical compatibility mode (virtual compatibility mode is supported).

• Remove MSCS Clustering of VMs before protecting with FT.

• No DRS supported with FT (only manual VMotion)

• No simultaneous Storage VMotion without disabling FT first.

• No FT VM backup using vStorage API/VCB or VMware Data Recovery (requires snapshots not supported with FT).

• No NPIV (N-Port ID Virtualization) which assigns unique HBA addresses to each VM sharing a single HBA.

copyright I/O Continuity Group, LLC 79

Page 80: VMware vSphere Storage Enhancements

Other FT Configuration Guidelines

• VM hardware must be upgraded to v7.

• No support for VM paravirtualized guest OS (PV SCSI)

• No VM snapshots are supported with FT

• No VMs with NPT/EPT (Nested Page Tables/Extended Page Tables) or hot plug devices or USBs.

• VMs cannot use more than one vCPU (SMP is not supported)

• Run VMware Site Survey utility to verify software and hardware support

• Enable Host Certificate checking (enabled by default) before adding ESX host to vCenter Server.

copyright I/O Continuity Group, LLC 80

Page 81: VMware vSphere Storage Enhancements

FT Conclusions

• Fault Tolerance keeps mission-critical VMs on line even if an ESX host fails, which takes HA to the next level.

• Consider adoption if hardware CPU, GigE NICs, VM hardware and guest OS support are available and there are VMs with high SLAs configured in an ESX HA Cluster.

• FT currently does not integrate with SMP vCPUs, PV SCSI, VM snapshots, Thin LUNs, RDM or Storage VMotion.

• SMBs might save on licensing costs with a competing product called “Marathon everRun”.

04/08/23

copyright 2007 I/O Continuity Group 81

Page 82: VMware vSphere Storage Enhancements

04/08/23

copyright 2007 I/O Continuity Group 82

Data Recovery

Page 83: VMware vSphere Storage Enhancements

Data Recovery in a Nutshell

• Backing up an ESX3 VM to tape using ordinary VCB is complex due to difficult integration with tape libraries and third-party vendor backup software.

• vSphere’s Data Recovery solution copies ESX4 VMs to disk without need of third-party vendor backup software.

• Data Backup and Recovery is implemented through a wizard-driven GUI to create a disk-based backup of a VM on a separate disk.

• Data Recovery copies VM’s files to a different disk while it is running.

• Data Recovery uses VM snapshots to eliminate any downtime.

04/08/23

copyright 2007 I/O Continuity Group 83

Page 84: VMware vSphere Storage Enhancements

Data Recovery

04/08/23

copyright 2007 I/O Continuity Group 84

Data Recovery provides faster restores to disk than tape-based backup solutions.

•Data Recovery Appliance is deployed as an OVF template (a pre-configured VM).

• Must add the Virtual Appliance to the vCenter Server Inventory.

• License is based on the number of ESX hosts being backed up.

Page 85: VMware vSphere Storage Enhancements

vSphere Data Recovery vs VCB

New Data Recovery:

• Implemented via a Virtual Appliance (pre-configured VM)

• D2D model (Disk-to-Disk): VM backups on shared disk

• Easy, wizard-driven backup job and restore job creation for SMBs

• Agentless, disk-based backup and recovery tool, leveraging disk as destination storage

Current VCB:

• Implemented via a VCB Proxy Server interacting with hosts and tape libraries (manual configuration)

• D2D2T model (Disk-to-Snapshot-to-Tape)

• Complicated integration between ESX host VMs, third-party backup software and tape hardware

• Agent-based or agentless solution designed for enterprise data protection

04/08/23

copyright 2007 I/O Continuity Group 85

Page 86: VMware vSphere Storage Enhancements

Data Recovery Key Components

04/08/23

copyright 2007 I/O Continuity Group 86

Page 87: VMware vSphere Storage Enhancements

Implementation Considerations

• Not compatible with ESX/ESXi 3.x/VC2.5 and older

• Must upgrade VMs to HW version 7 to leverage changed block tracking for faster generation of the changes to be transferred (a sketch of this idea follows below)

• Update VMware Tools for Windows VMs to enable VSS which properly quiesces VM prior to snapshot

• Does not back up the snapshot tree (only active VMs)

• Destination disk selection impacts performance– “you get what you pay for”

• Use of shared storage allows off-LAN backups leading to faster data transfer/minimize LAN load

04/08/23

copyright 2007 I/O Continuity Group 87
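The changed-block-tracking benefit mentioned in the considerations above comes down to moving only the blocks flagged as modified since the last pass. The sketch below models that with plain dictionaries; the data structures and function names are stand-ins, not the vStorage APIs.

```python
def full_backup(disk: dict) -> dict:
    """First pass copies every block."""
    return dict(disk)

def incremental_backup(backup: dict, disk: dict, changed_blocks: set) -> int:
    """Copy only the tracked changed blocks; return how many were transferred."""
    for block_id in changed_blocks:
        backup[block_id] = disk[block_id]
    return len(changed_blocks)

if __name__ == "__main__":
    disk = {i: f"block-{i}" for i in range(10_000)}
    backup = full_backup(disk)
    disk[42] = "block-42-modified"
    disk[777] = "block-777-modified"
    moved = incremental_backup(backup, disk, changed_blocks={42, 777})
    print(f"incremental pass moved {moved} of {len(disk)} blocks")
```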

Page 88: VMware vSphere Storage Enhancements

Next Evolution of VCB shipping with vSphere

04/08/23

copyright 2007 I/O Continuity Group 88

Improved API enables native integration with partner backup applications

Page 89: VMware vSphere Storage Enhancements

Data Recovery Conclusions

• Prior to vSphere, backup was a complicated command-line configuration, with steps requiring integration with a VCB proxy server, tape library drivers and backup software.

• vSphere Data Recovery is a good substitute for VCB, when adequate disk storage is available and backup to tape is less essential.

• VM downtime or interruption for Data Recovery backup and restore to disk is non-existent, due to use of VM snapshot technology.

• The next revision of VCB will integrate the two solutions.

04/08/23

copyright 2007 I/O Continuity Group 89

Page 90: VMware vSphere Storage Enhancements

04/08/23

copyright 2007 I/O Continuity Group 90

New vSphere Licensing

Page 91: VMware vSphere Storage Enhancements

Understanding vSphere Licensing

• vSphere has a fully redesigned licensing scheme

– VMware no longer issuing VI 3 licenses

• License administration is built directly into vCenter Server

– No separate license server

• One single key contains all advanced features on ESX host.

– License keys are simple 25-character strings instead of complex text files.

– Encodes a CPU quantity determining the total ESX hosts that can use the license key. Keys can be split among multiple ESX hosts (a counting sketch follows below).

– If you upgrade to add DRS or other features, you receive a replacement license key.

copyright I/O Continuity Group, LLC 91
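The CPU-quantity bookkeeping described on this slide can be pictured with a short sketch: one key carries a total CPU count that is consumed as it is assigned to hosts, and assignment fails once the count is spent. The key string and host names are invented for the example.

```python
class LicenseKey:
    def __init__(self, key: str, cpu_quantity: int):
        self.key = key
        self.remaining = cpu_quantity   # total CPUs the key entitles
        self.assignments = {}

    def assign(self, host: str, host_cpus: int) -> bool:
        """Consume capacity from the key for one host; refuse once the key is spent."""
        if host_cpus > self.remaining:
            return False
        self.remaining -= host_cpus
        self.assignments[host] = host_cpus
        return True

if __name__ == "__main__":
    # Placeholder standing in for a 25-character key string.
    key = LicenseKey("XXXXX-XXXXX-XXXXX-XXXXX-XXXXX", cpu_quantity=6)
    for host, cpus in [("esx01", 2), ("esx02", 2), ("esx03", 2), ("esx04", 2)]:
        print(host, "licensed:", key.assign(host, cpus))
    # The fourth host fails: the 6-CPU key has been split across the first three hosts.
```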

Page 92: VMware vSphere Storage Enhancements

Legacy VI3 vCenter License Server Topology

copyright I/O Continuity Group, LLC 92

ESX3 Server ESX3 Server ESX3 Server

Active Directory Domain

Database Server

VirtualCenter Database

VirtualCenter Server

VMware License Server running on a separate VM or server

• Licenses were stored on a license server in the datacenter.
• When an ESX server boots, it would learn from the License Server the available per-processor licenses and supported features.
• Before ESX3, licenses were installed locally for each feature.

Page 93: VMware vSphere Storage Enhancements

New vSphere ESX License Configuration

copyright I/O Continuity Group, LLC 93

• Click License Features in the Configuration tab to add a license key (enter the 25-character key bundling all features), view licensed product features and assign the key to an ESX host.

• Operates only on ESX4 hosts

• The License Server stays in place if upgrading to vSphere and affects only legacy hosts.

In navigation bar: Home->Administration->Licensing

Page 94: VMware vSphere Storage Enhancements

Upgrading to vSphere License Keys

• Existing VI 3.x licenses will not work on ESX 4

• Must activate new vCenter 4 and ESX 4 license keys, received via email and added to the portal

• Customers can log into the License Portal at http://www.vmware.com/licensing/license.portal

copyright I/O Continuity Group, LLC 94

Page 95: VMware vSphere Storage Enhancements

New License Count

• This example shows how the new and old license counts map.

• NOTE: VMware vSphere licenses are sold in 1-CPU increments (vs VMware Infrastructure 3.x licenses, which were sold in 2-CPU increments).

• Single-CPU licensing is now available, so a 2-CPU license may now be split and used on 2 single-CPU physical hosts.

copyright I/O Continuity Group, LLC 95

Page 96: VMware vSphere Storage Enhancements

License Downgrade Options

• If you purchased vSphere licenses, you may convert them to VI 3 licenses.

• Allowed for Standard, Advanced, Enterprise and Enterprise Plus.

• 20 vSphere Advanced licenses downgrade to 10 dual-CPU VI3 Standard licenses via the vSphere Licensing Portal (2-CPU licenses per ESX3 host)

• ESX 4 Single CPU and Essentials (Plus) are not downgradeable.

copyright I/O Continuity Group, LLC 96

Page 97: VMware vSphere Storage Enhancements

vSphere Upgrade Requirements

• Some downtime is required to upgrade from VI 3.x environments to vSphere 4.

• Upgrade vCenter Server

• Upgrade ESX/ESXi hosts

• Upgrade VMs to version 7 (introduces new hardware version)

• Optional PVSCSI (new paravirtualized SCSI driver)

copyright I/O Continuity Group, LLC 97

Page 98: VMware vSphere Storage Enhancements

vSphere Compatibility Lists

• Compatibility of Existing VMware Products

– View, Lab Manager, Site Recovery not on list yet

• Hardware and Guest OS Compatibility Lists

– Check minimum levels of underlying memory and CPU

• Database and Patches for vCenter Server 2.0

– Oracle 9i and SQL 2000 no longer supported

• Complete Backup of vCenter Server and Database prior to upgrade

copyright I/O Continuity Group, LLC 98

Page 99: VMware vSphere Storage Enhancements

Survey of Upgrade Timing

copyright I/O Continuity Group, LLC 99

The majority of 140 votes are waiting at least 3-6 months. The preference to allow some time before implementation (the survey shows 6 months) indicates interest in added, fuller support for vSphere 4 in the near future.

Page 100: VMware vSphere Storage Enhancements

VMware Upgrade Conclusion

• Consider adopting or upgrading to vSphere if server hardware is 64-bit with hardware VT assist and the other hardware and guest OS prerequisites are met to support the new vSphere feature sets desired.

• If your VMs are mission-critical (can never go down) or involve heavy workloads (needing faster processing) consider vSphere adoption/upgrade.

• Software iSCSI performance and single-processor licensing are attractive to SMBs.

• If you currently have a support contract, the upgrade process is easy. If you are deploying vSphere for the first time without 64-bit servers, a license downgrade will be required.

04/08/23

copyright 2007 I/O Continuity Group 100

Page 101: VMware vSphere Storage Enhancements

Q&A

• How would you describe your current datacenter?

• What have you identified as your biggest datacenter issues?

• What is the estimated timing of your next upgrade initiative?

• Have you deployed 64-bit server hardware?

• What stage of SAN adoption are you at?

• Is your backup window shrinking?

• Please feel free to send us any questions subsequent to the presentation.

04/08/23

copyright 2007 I/O Continuity Group 101

Page 102: VMware vSphere Storage Enhancements

VMware on SAN Design Questions

• How many servers do I currently have and how many do I add every year? (aka server sprawl)

• How much time do IT staff spend setting up new servers with operating system and applications?

• How often do my servers go down?

• Is our IT budget shrinking?

• How difficult is it to convert a physical machine to a virtual machine with each option?

• What hardware and guest OS’s are supported?

copyright I/O Continuity Group, LLC 102

Page 103: VMware vSphere Storage Enhancements

Vendor Neutral Design Benefits

• There are four main benefits to designing SANs independent of any single equipment vendor.

• The relative importance of these benefits will change depending on your priorities and which vendors you choose.

• The benefits in some cases are:

– Lower costs

– Getting the best possible technology

– Greater flexibility for future technology improvements

– Non-proprietary and non-exclusivity models

copyright I/O Continuity Group, LLC 103

Page 104: VMware vSphere Storage Enhancements

Closing Remarks

• Thank you for joining us.

• Let us know how we can assist you with your next datacenter project.

04/08/23

copyright 2007 I/O Continuity Group 104