A Brief History of 3PAR, Nimble & VMware VVols
Design partnership between HPE and VMware on Virtual Volumes
– Aug 2011: VVol introduced at VMworld; HPE one of 5 original design partners
– May 2012: 3PAR selected as the Fibre Channel reference platform
– Jul 2014: VVol Beta; 3PAR one of 3 partners ready on Beta day 1
– Mar 2015: VVol GA with vSphere 6.0; 3PAR one of 4 partners ready
– Jun 2015: Nimble tech preview released
– Mar 2016: Nimble GA; Nimble releases VVol support
– Nov 2016: VVol 2.0 GA with vSphere 6.5; 3PAR & Nimble ready on day 1
– Today: 3PAR, Nimble and VMware continue working to develop and refine VVols
Why did VMware create a new storage architecture?
Today's exercise: Challenges in external storage architectures
– LUN Centric – storage pre-allocated into silos
– No Visibility – cannot see inside a VMFS volume
– Poor Efficiency – LUNs are typically over-provisioned
– Increased Administration – must always go to the storage admin
– Difficult Reclamation – space reclamation is a manual process
– Hardware Centric – must use vendor tools & plug-ins
– Long Provisioning – time-consuming manual requests
– Data Services – array data services not aligned with VMs
Key take-away: There are many hidden challenges with traditional external storage architectures in vSphere which decrease efficiency and increase management overhead
What are VMware VVols?
– New vSphere storage architecture to replace VMFS/NFS
– Standard (VASA) for all storage vendors to adhere to
– Enables vSphere to write VMs natively to storage arrays
– Common storage management tasks are automated
– Designed to be dynamic, eliminates LUN provisioning
– Storage array features can be applied to individual VMs
Key take-away: VVols provides automated and dynamic management of storage resources and enables storage arrays to interact directly with VMs
How VVols transforms storage in vSphere
VMFS → VVols:
– LUN-Centric (siloed storage pools, array services aligned to LUNs) → VM-Centric (no siloed storage pools, array services aligned to VMs)
– Static (pre-allocated storage, over-provisioned resources) → Dynamic (dynamically allocated storage, using only what is needed)
– Complex (complicated management using vendor-specific tools) → Simple (simplified management using vSphere interfaces)
– Time-consuming (longer provisioning, manual space reclamation) → Effortless (instant provisioning, automatic space reclamation)
What changes between file and block protocols with VVols
– Block, vSphere managed (VMFS): file system is VMFS; storage visibility at the datastore level; LUNs presented on the network or fabric; VM storage as VMDK files; host adapter is an iSCSI initiator (sw/hw) or HBA
– File, array managed (NFS): storage visibility at the VM level; network mount point; VM storage as VMDK files; host adapter is the NFS client
– VVols (vSphere native, block or file): storage visibility at the VM level; a storage container is presented instead of LUNs or mount points; VM storage as VVols
Overview of VVols Storage Architecture
– Protocol Endpoint: Logical I/O proxy that serves as the data path between ESXi hosts and the VMs' respective VVols
– VASA Provider: Software component that mediates out-of-band communication for VVols traffic between vCenter Server, ESXi hosts & storage array
– Storage Container: Pool of raw storage capacity that becomes a logical grouping of VVols, seen as a virtual datastore by ESXi hosts
– Virtual Volume (VVol): Container that encapsulates VM files, virtual disks and their derivatives
– Storage Profile: Set of rules that define storage requirements for VMs based on capabilities provided by storage array (same as VSAN)
[Diagram: VMs on ESXi hosts reach their VVols in the Storage Container on the Storage Array through the Protocol Endpoint (data path), while vCenter Server and the ESXi hosts talk to the VASA Provider via SPBM (control path).]
Storage Policy-Based Management
1. Storage Array advertises capabilities via VASA Provider
2. Storage Policies created in vSphere and assigned array capabilities
3. VMs assigned a Storage Policy based on requirements and SLAs
4. SPBM provisions VM on appropriate storage as defined by policy and maintains compliance
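The four steps above can be sketched with the PowerCLI SPBM cmdlets. This is a minimal sketch, not the presenters' script: the vCenter address, the capability name and the policy name are placeholders, and it assumes a live vCenter connection.

```powershell
# Hedged PowerCLI sketch of the four SPBM steps above.
Connect-VIServer -Server 'vcenter.example.com'

# Step 1: capabilities the array advertises via its VASA Provider
Get-SpbmCapability | Select-Object Name, ValueType

# Step 2: build a storage policy from one or more advertised capabilities
$cap     = Get-SpbmCapability -Name 'com.example.array:deduplication'   # hypothetical capability name
$ruleSet = New-SpbmRuleSet -AllOfRules (New-SpbmRule -Capability $cap -Value $true)
$policy  = New-SpbmStoragePolicy -Name 'Gold' -AnyOfRuleSets $ruleSet

# Steps 3 and 4: assign the policy to a VM; SPBM then places the VM on
# compliant storage and tracks compliance thereafter
Get-SpbmEntityConfiguration (Get-VM -Name 'VM1') |
    Set-SpbmEntityConfiguration -StoragePolicy $policy
```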
Top Reasons Customers will want to Start Using VVols Now
1. Don't have to go all in
2. Get early experience
3. Get your disk space back
4. Available in all vSphere editions
5. Let the array do the heavy lifting
6. Start using SPBM now
7. Snapshots don't suck anymore
8. Easier for IT Generalists
9. One architecture to rule them all
10. The VM is now a unit of storage
1. You don't have to go all in with VVols – You can use VVols alongside VMFS, with easy migration using Storage vMotion; take your time with it
2. Get early VVol experience – Don't wait until the last minute. How long did you wait to switch from ESX to ESXi, or from the vSphere Client to the Web Client?
3. Get your disk space back – No more over-provisioning or manual space reclamation; keep your array as thin as possible, automatically
4. Available in all vSphere editions – It's an architecture, not a feature; nothing to license, and it will replace VMFS one day
5. Let the array do the heavy lifting – An array is a purpose-built I/O engine; array features are more powerful and have better visibility into storage resources than vSphere does
6. Start using SPBM now – Don't get left out; get started using the same SPBM that VSAN users have been enjoying
7. Snapshots don't suck anymore – No longer have to wait hours for I/O-intensive snapshot deletions to commit, plus your backups will complete faster
8. Easier for IT Generalists – No need to be a storage admin; fully manage storage from within vSphere
9. One architecture to rule them all – …and in the darkness bind them. NFS, iSCSI, Fibre Channel: who cares? VVols provides a unified storage architecture across protocols
10. The VM is now a unit of storage – Because it's all about the VM. No more LUNs; the VM is now a first-class citizen and the array has VM-level visibility
Introduction
– Array replication support via SPBM introduced in vSphere 6.5 (VASA 3.0)
– Nimble supported replication in vSphere 6.0
– Nimble and 3PAR first to complete VVol replication implementations (before the merger)
– Replication done at the VVol level (VM), not the datastore level like SRM (LUN)
– Replication Groups automatically created on the array contain the VVol objects
– Designed to be managed in vSphere, not on the array side
Nimble and 3PAR were both VMware design partners on VVols 2.0
Components of VVol Replication
Fault Domains [new]– Something that fails as a whole (3PAR = storage array, Nimble = Nimble group)
Replication Groups [new]– Replicate VVol-based VMs between fault domains
– Groups maintain consistent point[s]-in-time
– 3PAR - Only maintains most recent point in time, which is no older than the most recent RPO
– Nimble – Uses Volume Collections [most recent point for RGs] - plugin (workflows for replication)
– RPOs can be “stretched” when adding VMs or VVols to a replication group
– Groups are in a Source, Target, InTest or FailedOver state
– Terms Source/Primary/Protected are interchangeable
– Terms Target/Secondary/Recovery are interchangeable
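The fault domains and replication groups above can be inspected from PowerCLI. A minimal sketch, assuming a live vCenter connection with a registered VASA Provider; the server and VM names are placeholders.

```powershell
# Hedged PowerCLI sketch: enumerate fault domains and replication groups,
# and find the group pairing for one VVol-based VM.
Connect-VIServer -Server 'vcenter.example.com'

Get-SpbmFaultDomain        # 3PAR: one per storage array; Nimble: one per Nimble group
Get-SpbmReplicationGroup   # each group reports a Source/Target/InTest/FailedOver state

# Replication group of a specific VM, and its paired target group
$src = (Get-SpbmEntityConfiguration (Get-VM -Name 'VM1')).ReplicationGroup
Get-SpbmReplicationPair -Source $src
```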
VVol Replication diagram
[Diagram: a Replication Group inside a VVols Storage Container on the "Local" array (fault domain) is replicated to the "Remote" array (fault domain), driven by a vSphere Storage Policy (SPBM).]
Preparing for VVol Replication
vSphere:
– vSphere 6.5 required
– Register VASA Provider
– Mount Storage Containers (VVol datastores) at primary and secondary sites
– Define Storage Policy with replication rules (Components)
3PAR:
– 3PAR OS 3.3.1 required
– Remote Copy license
– Connect arrays via FC or iSCSI
– Define remote targets for Remote Copy
– Create Storage Containers on primary and secondary arrays
Nimble:
– vSphere 6.0 or 6.5
– NimbleOS 3.6.0 required for VASA 3.0
– No licenses required
– Configure source and destination as replication partners
– Create Folders (storage containers) on source and destination
Creating Nimble Folder / Storage Container
– Nimble folders are used for general organization
– Setting VVol management type makes it a Storage Container
– Must set a capacity limit
– Can optionally set a folder-level performance limit
Replication Storage Policies Summary
– Granular (per VM/Virtual Disk)
– Empowers vSphere admin
– No need to coordinate with Storage admin
– No need to involve Storage admin during disaster recovery
– Storage admin may need to do some clean-up after a true disaster
– New Component feature for SPBM allows Components to be defined once and re-used
– Replication Components contain rules and constraints related to array replication
– Components are attached to policies which are attached to VMs
– SPBM maintains compliance to ensure VM resides on storage that can meet policy definition
Example Storage Policy [3PAR]
– Need only specify one replication constraint to enable VVol replication
– Where possible, other VVol capabilities (e.g. deduplication) are mirrored at the remote site if the remote array/CPG allows for it
– Target array & Storage Container
– Target & Snapshot CPG drive tiers
– Remote Copy mode (Periodic only)
– Desired RPO Minutes (5 min)
Example Storage Policy [Nimble]
– Protection schedule by minute, hour, day or week
– Can set local and remote snapshot retention policies
– Choose replication partner and frequency
– Option to delete replicas from the partner when the source is deleted
Changing a VM's Replication Policy
– Select either a common Replication Group or one per storage object
Replication Groups on the Arrays

932 cli% showvvolvm -sc SanJoseContainer -rcopy
                                               ------(MB)------
VM_Name   GuestOS              VM_State Num_vv Physical Logical
VMSanJose other26xLinux64Guest Unbound  2      10290    5120
TinyVM2   other26xLinux64Guest Unbound  2      9522     5120
TinyVM1   other26xLinux64Guest Unbound  2      10290    5120
TinyVM3   other26xLinux64Guest Unbound  3      7986     6144
VMFremont other26xLinux64Guest Unbound  2      8754     5120
---------------------------------------------------------------
5 total                                 11     46842    26624

Output continued (right-hand columns of the same rows):
VM_Name   RcopyStatus RcopyGroup         RcopyRole RcopyLastSyncTime
VMSanJose Synced      grp_3c12e41        Primary   2017-07-06 16:20:00 PDT
TinyVM2   Synced      grp_80ed051        Primary   2017-07-06 16:22:00 PDT
TinyVM1   None                                     NA
TinyVM3   Synced      grp_1a3d5ba.r99931 Secondary 2017-07-06 16:20:10 PDT
VMFremont Synced      grp_ca4027f.r99931 Secondary 2017-07-06 16:21:10 PDT
3PAR SSMC User Interface
VVol replication info: Remote Copy Group, Remote Copy Role, Remote Copy Status
vSphere Replication Components
[Diagram: the Protected Site and Recovery Site each have a vCenter, ESXi hosts, a VASA Provider, a Protocol Endpoint and a VVol Storage Container; Remote Copy replicates between the arrays, and a DR orchestrator (PowerCLI) drives both sites.]
Types of Disaster Recovery operations
Planned Failover:
– Controlled
– Typically used for disaster avoidance, planned maintenance or relocating VMs
– Can be per VM or all VMs
– Primary and recovery sites reverse roles
Unplanned Failover:
– Uncontrolled
– Typically loss of power or hardware failure
– Not usually a per-VM event; all VMs recovered
– Primary and recovery sites reverse roles
Test Failover:
– Controlled
– Typically used to validate VM recoverability
– Non-impactful
– Replica VMs cloned and made visible to vSphere at the recovery site
Recovery and Cloning – Nimble vCenter Plugin
– Snapshot-based recovery: choose from all available snapshots
– Local VM Recovery: in-place restore of the VM, or cloned as a new VM
– Local Virtual Disk Recovery: in-place restore of a virtual disk; cloned as a new disk to the same VM or to a different VM
– Remote VM Recovery: cloned as a new VM at the remote site; no need to stop replication
Planned Failover Workflow with VVols – Before Failover
[Diagram: VMs VM1–VM3 at the Local Site, with their config (C), data (D), snapshot (Sn) and swap (Sw) VVols in a Source Replication Group within the Storage Container; the config, data and snapshot VVols are replicated to a Target Replication Group at the Remote Site. PowerCLI drives both vCenters through the vSphere API.]
Planned Failover Workflow with VVols – Discover VM-to-group relationship (Get-SpbmReplicationGroup / Get-SpbmReplicationPair)
[Diagram: unchanged from "Before Failover"; PowerCLI queries both vCenters for each VM's replication group membership.]
Planned Failover Workflow with VVols – Power down source VMs (Stop-VM)
[Diagram: VM1–VM3 are powered off at the Local Site; their VVols remain in the Source Replication Group.]
Planned Failover Workflow with VVols – Perform final sync (Sync-SpbmReplicationGroup)
[Diagram: with the VMs powered off, the swap VVols are gone; a final sync brings the Target Replication Group's replicas up to date.]
Planned Failover Workflow with VVols – Issue a planned failover (Start-SpbmReplicationFailover)
[Diagram: the Target Replication Group at the Remote Site becomes a Failed-over Replication Group holding its own config, data and snapshot VVols; the group at the Local Site is still the Source.]
Planned Failover Workflow with VVols – Register newly failed-over VMs (New-VM)
[Diagram: VM1–VM3 are registered against the failed-over VVols at the Remote Site.]
Planned Failover Workflow with VVols – Apply an SPBM replication policy to the VMs (Set-SpbmEntityConfiguration)
[Diagram: a Replication Policy is attached to VM1–VM3 at the Remote Site so they can replicate back to the original site.]
Planned Failover Workflow with VVols – Unregister VMs at the primary site (Remove-VM)
[Diagram: VM1–VM3 are removed from inventory at the Local Site; their VVols remain in the Source Replication Group for now.]
Planned Failover Workflow with VVols – Power up VMs at the failed-over site (Start-VM)
[Diagram: VM1–VM3 power on at the Remote Site; their swap VVols (Sw1, Sw2) are created on the remote array.]
Planned Failover Workflow with VVols – Reverse the replication direction (Start-SpbmReplicationReverse)
[Diagram: the Failed-over Replication Group at the Remote Site becomes the new Source Replication Group and the group at the Local Site becomes the Target; replication now flows from the remote array back to the local array.]
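The slide-by-slide workflow above can be sketched end to end in PowerCLI. This is a hedged sketch, not the presenters' downloadable DR script: the vCenter addresses, VM name, policy name and resource-pool selection are placeholders, and the exact parameter shapes should be checked against your PowerCLI version.

```powershell
# Hedged PowerCLI sketch of the planned-failover workflow shown above.
$srcVC = Connect-VIServer -Server 'vcenter-local.example.com'
$dstVC = Connect-VIServer -Server 'vcenter-remote.example.com'

# 1. Discover the VM-to-group relationship
$vm       = Get-VM -Name 'VM1' -Server $srcVC
$srcGroup = (Get-SpbmEntityConfiguration $vm).ReplicationGroup
$tgtGroup = (Get-SpbmReplicationPair -Source $srcGroup).Target

# 2. Power down the source VMs
Stop-VM -VM $vm -Confirm:$false

# 3. Perform a final sync so the replicas are current
Sync-SpbmReplicationGroup -ReplicationGroup $tgtGroup

# 4. Issue the planned failover; returns the replica .vmx paths
$vmxPaths = Start-SpbmReplicationFailover -ReplicationGroup $tgtGroup -Confirm:$false

# 5. Register the newly failed-over VMs at the recovery site
$pool   = Get-ResourcePool -Server $dstVC | Select-Object -First 1
$newVMs = $vmxPaths | ForEach-Object { New-VM -VMFilePath $_ -ResourcePool $pool -Server $dstVC }

# 6. Apply a replication policy so the VMs can replicate back
$policy   = Get-SpbmStoragePolicy -Name 'ReplicateToLocal' -Server $dstVC
$newGroup = Get-SpbmReplicationGroup -Server $dstVC | Where-Object State -eq 'FailedOver'
Get-SpbmEntityConfiguration $newVMs |
    Set-SpbmEntityConfiguration -StoragePolicy $policy -ReplicationGroup $newGroup

# 7. Unregister the old VMs at the primary site
Remove-VM -VM $vm -Server $srcVC -Confirm:$false

# 8. Power up the failed-over VMs
Start-VM -VM $newVMs

# 9. Reverse the replication direction
Start-SpbmReplicationReverse -ReplicationGroup $newGroup
```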
Switch to 3PAR Replication unplanned failover demo
Play STO3305BUS_Siebert_UnplannedFailover.mp4 (5:11)
The HPE 3PAR & Nimble advantage with VVols
Solid and mature:
– 6+ years of development
– VMware design partner
– Fibre Channel reference platform
Simple and reliable:
– Internal VASA Provider
– No external failure points
– Zero-step installation
Innovative and efficient:
– Snapshots on different tiers
– Smallest capacity VM footprint
– Manage VVols folder by folder
Rich and powerful:
– App-level optimized storage policies
– Architectures optimized for VVols
– VM recovery directly from vCenter
Call To Action – Find out more
While at VMworld:
– Visit HPE Booth #600 to talk to our experts
– Check out HOL-1827 for some hands-on with Nimble VVols and replication
– Attend the VVols Partner Panel session ID #
– Attend HPE in-booth VVol sessions
Anytime:
– Stalk us on Twitter: @Ericsiebert, @Julian_cates
– Download the DR scripts from GitHub
– Bookmark the Around The Storage Block blog
– 3PAR VVol key docs: 3PAR VMware VVol Implementation Guide; 3PAR VMware VVol Replication Guide
– Nimble VVol key docs: VMware Integration Guide; VMware vSphere 6 Deployment Considerations
Two Primaries Window
– During planned and unplanned failover, a window exists when you have two source/primary replication groups
– During this window, VMs that are changed in, added to, or removed from a replication group will require resolution when the groups are reversed
– With traditional storage, any changes at the original source datastore are completely wiped out
– With VVols, it's possible to reverse some of the actions taken at the original source site. For example:
– A VM deleted at the primary is kept alive until the reverse-replication operation is performed, at which point the VM is recovered, assuming it still exists at the new source site
– A new VM added to the original source replication group is not lost; instead it is auto-dismissed from the group when the reverse operation occurs
– VM snapshots taken at the source replication group are lost upon reversal
Handling Conflicts with Two Primary Replication Groups – Deleting a VM at the original source
[Diagram: after failover, a VM is deleted at the original source site while both sites' groups are primaries; its replica VVols survive in the failed-over group, so the VM can be recovered when replication is reversed.]
Handling Conflicts with Two Primary Replication Groups – Adding a VM to the original source after fail-over
[Diagram: a new VM4, with config VVol C4 and data VVol D4, is added to the original Source Replication Group at the Local Site after failover, while the Remote Site's Failed-over Replication Group is also acting as a primary.]
Handling Conflicts with Two Primary Replication Groups – In-conflict VVols are automatically removed from the group, but not destroyed
[Diagram: when replication is reversed, VM4's VVols (C4, D4) are auto-dismissed from the replication group rather than destroyed; the Remote Site group becomes the Source and the Local Site group becomes the Target.]
Benefits of VVols on Nimble Storage
Embedded VASA Provider:
– No need to manage additional resources
– Highly available
VASA Provider Management:
– Automated registration of the VP
– Automatic creation of the PE and host access control management
Folders:
– Manage VVols folder by folder
– Folders can grow and shrink dynamically
Backup and Recovery:
– Replicate VMs using array-based replication
– Recover VMs using the Nimble vCenter Plugin
Nimble VVol Implementation
Storage Policy Based Management:
– New policy-based framework from VMware
– Foundation of VMware's Software-Defined Datacenter control plane
– Based on the VASA management protocol
Nimble Policy Based Management:
– Built-in application abstractions
– Pick from a drop-down list: Exchange, SQL, VDI, Splunk, ESX, etc.
– Auto-selects optimal storage settings
Virtual Volumes:
– Makes Nimble storage natively VM-aware
– Enables virtual-disk-level offload of data services
– Snapshots, replication, encryption
Additional SPBM Policy Options
– Application Policy
– Protection Schedule
– QoS / Performance
– Deduplication
– Data Encryption
– All-Flash
VVol Replication Objects
[Diagram: a Replication Group inside a VVol Storage Container on Array A (a.k.a. a Fault Domain), replicated to Array B.]
– VVol Config, Data, RO/RW snapshots and key/value metadata are replicated; SWAP and storage policies are not replicated
– Names of Storage Containers and CPGs are shared between the arrays
Point-in-Time Snapshot for Failover Test [3PAR]
[Diagram: for a test failover, the replica Config (C) and Data (D) VVols and their read-only (RO) snapshots in the Replication Group are cloned as C', D' and RO' copies, which are exposed in the VVol Datastore for the test VMs (VM1, VM2); data & key/value and snapshot relationships link the copies back to the replicas.]
3PAR VASA Replication CapabilitiesBuilt on top of HPE 3PAR Remote Copy
rcopyTargetContainer
• Specifies the combined array sysName and Storage Container.
• Multiple values possible for each target sysName, one for each Storage Container at that target.
• Syntax: sysName:storageContainerName
rcopyMode
• Specifies the requested Remote Copy mode.
• Periodic [only mode supported today for VVols]
• Remote Copy itself supports synchronous, streaming and multi-target synchronous/periodic (SLD)
rcopyRPO
• Specifies periodic sync period.
• Range: (5min - 1 year)
• Requires rcopyMode be set to Periodic
3PAR VASA Replication CapabilitiesBuilt on top of HPE 3PAR Remote Copy
rcopyRemoteCPG
• Specifies the Provisioning Group (a.k.a. CPG, a template for the volume's physical characteristics, such as RAID level and device type) to be used for Remote Copy at the target site for disk VVols.
• If not specified, defaults to the CPG capability when one is set in the policy; otherwise the VASA Provider selects a CPG.
rcopyRemoteSnapCPG
• Specifies the CPG to be used for remote copy at the target site for base volumes
• One selectable from SPBM dropdown
• If not specified defaults to same as rcopyRemoteCPG capability
Where possible, other VVol capabilities are mirrored at the remote site. For example, if deduplication is selected at the local site, a replica VVol on the remote array would be created with deduplication, if the remote array and remote CPG allow for it.
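The rcopy capabilities above can be combined into an SPBM policy from PowerCLI. A minimal sketch under assumptions: the capability namespace prefix is array-specific (discoverable via Get-SpbmCapability), so the wildcard lookups below, the policy name and the sysName:storageContainerName value are all placeholders.

```powershell
# Hedged sketch of a 3PAR replication rule set built from the capability
# names on this slide; run after Connect-VIServer against a vCenter with the
# 3PAR VASA Provider registered.
$capMode = Get-SpbmCapability | Where-Object Name -like '*rcopyMode*'
$capRPO  = Get-SpbmCapability | Where-Object Name -like '*rcopyRPO*'
$capTgt  = Get-SpbmCapability | Where-Object Name -like '*rcopyTargetContainer*'

$rules = @(
    New-SpbmRule -Capability $capMode -Value 'Periodic'                # only mode supported today
    New-SpbmRule -Capability $capRPO  -Value 5                         # RPO in minutes (5 min - 1 year)
    New-SpbmRule -Capability $capTgt  -Value 'remote3par:SJContainer'  # sysName:storageContainerName
)
$policy = New-SpbmStoragePolicy -Name '3PAR-Replicated' `
    -AnyOfRuleSets (New-SpbmRuleSet -AllOfRules $rules)
```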
Disaster Recovery with VVol-based VMsTypes of failover
Planned
– Changes the responsibility for mastership of the VVol-based VMs from the Source/Primary site to the Target/Secondary site.
– Source and Target roles are reversed.
– Involves:
– Collecting replicated VM and group information at the Source and Target sites
– Powering down VMs at the Source site
– Issuing a failover operation (Start-SpbmReplicationFailover in PowerCLI)
– Unregistering VMs at the Source site
– Registering the VMs at the failover/target site
– Applying a storage policy that allows replication back to the original Source site
– Issuing a reverse operation (Start-SpbmReplicationReverse in PowerCLI)
– Optionally powering up the failed-over VMs
Disaster Recovery with VVol-based VMsTypes of failover
Unplanned
– Changes the responsibility for mastership of the VVol-based VMs from the Source/Primary site to the Target/Secondary site.
– Source and Target roles are reversed (eventually).
– When the disaster occurs:
– Collecting replicated group information at the Target/Recovery site
– Issuing a failover operation (Start-SpbmReplicationFailover in PowerCLI)
– Registering the VMs at the failover/target site
– Applying a storage policy that allows replication back to the original Source site
– Optionally powering up the failed-over VMs
– After the original Source site has recovered from its disaster:
– Powering down VMs at the Source site (if needed)
– Unregistering VMs at the Source site
– Issuing a reverse operation (Start-SpbmReplicationReverse in PowerCLI)
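The recovery-side portion of an unplanned failover can be sketched as below. This is a hedged sketch driven entirely from the surviving site: the vCenter address is a placeholder, and the -Unplanned switch on Start-SpbmReplicationFailover is assumed here from the PowerCLI SPBM cmdlet set rather than from the slides.

```powershell
# Hedged sketch: unplanned failover, run only at the recovery site
# (the protected site is assumed down).
Connect-VIServer -Server 'vcenter-remote.example.com'

# Fail over every target group on the surviving array
$groups   = Get-SpbmReplicationGroup | Where-Object State -eq 'Target'
$vmxPaths = Start-SpbmReplicationFailover -ReplicationGroup $groups -Unplanned -Confirm:$false

# Register the recovered VMs from the returned .vmx paths and power them on
$pool = Get-ResourcePool | Select-Object -First 1
$vmxPaths | ForEach-Object { New-VM -VMFilePath $_ -ResourcePool $pool } | Start-VM
```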
There’s no undo option. Once you issue a Planned or Unplanned failover, the only option is to continue forward. Issue a reverse, and optionally another failover and reverse to bring things back up at the original site.
Disaster Recovery with VVol-based VMsTypes of failover
Test
– Allows testing of replicated VVol-based VMs at the secondary site, by making copies of the replica VVols and exposing them to ESXi hosts for VM testing.
– Source and Target roles are NOT reversed.
– Involves:
– Collecting replicated VM and group information at the Source and Target sites.
– Issuing a test-failover operation (Start-SpbmReplicationTestFailover in PowerCLI)
– Registering the new test-VMs at the target site.
– Applying a storage policy that allows replication back to the original Source site– To help verify those replication policies are valid.
– Powering up, testing and powering down the in-test VMs
– Un-registering the test-VMs
– Issuing a stop-test-failover operation (Stop-SpbmReplicationTestFailover in PowerCLI)– Once the test is stopped, all VMs created by the test are destroyed permanently.
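The test-failover sequence above can be sketched in PowerCLI as follows. A minimal sketch, assuming a live connection to the recovery-site vCenter; the server name and resource-pool selection are placeholders, and the validation step is left to you.

```powershell
# Hedged sketch: non-disruptive test failover at the recovery site.
Connect-VIServer -Server 'vcenter-remote.example.com'

$group    = Get-SpbmReplicationGroup | Where-Object State -eq 'Target' | Select-Object -First 1
$vmxPaths = Start-SpbmReplicationTestFailover -ReplicationGroup $group -Confirm:$false

# Register the cloned test VMs, power on, validate, then power off
$pool    = Get-ResourcePool | Select-Object -First 1
$testVMs = $vmxPaths | ForEach-Object { New-VM -VMFilePath $_ -ResourcePool $pool }
Start-VM -VM $testVMs
# ... run validation against the test VMs here ...
Stop-VM -VM $testVMs -Confirm:$false
Remove-VM -VM $testVMs -Confirm:$false   # unregister the test VMs

# Stop the test; all VVols created for the test are destroyed permanently
Stop-SpbmReplicationTestFailover -ReplicationGroup $group
```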