Storage for Virtual Environments 2011 R2
Storage for Virtual Environments
Stephen Foskett, Foskett Services and Gestalt IT
Live Footnotes: @SFoskett, #VirtualStorage
CC-BY-NC-SA © 2011, Foskett Services
This is Not a Rah-Rah Session
Agenda
Session 1: Introducing the Virtual Data Center
• Virtualization of storage, server, and network
• Modern storage infrastructure
• Why virtualize?
• What the future looks like
• The impact of server virtualization
• Performance impact: I/O
• A new level of connectivity
• Integration
Session 2: Technical Considerations – Configuring Storage for VMs
• Presenting storage to virtual servers
• Shared storage, NFS, and raw devices
• Connectivity: FC, NFS, and iSCSI
• Storage features for virtualization
• VMware vStorage, thin provisioning, PSA, VAAI, SIOC
Session 3: Expanding the Conversation
• Converged I/O: NPIV, FCoE, etc.
• Storage virtualization
• New storage architectures
Introducing the Virtual Data Center
This Hour's Focus: What Virtualization Does
• Introducing storage and server virtualization
  ▫ The future of virtualization
  ▫ The virtual datacenter
• Virtualization confounds storage
  ▫ Three pillars of performance
  ▫ Other issues
• Storage features for virtualization
  ▫ What's new in VMware
Virtualization of Storage, Server, and Network
• Storage has been stuck in the Stone Age since the Stone Age!
  ▫ Fake disks, fake file systems, fixed allocation
  ▫ Little integration and no communication
• Virtualization is a bridge to the future
  ▫ Maintains functionality for existing apps
  ▫ Improves flexibility and efficiency
A Look at the Future
Short-term: Virtual servers
• Legacy applications running in virtual machines
• Presentation of modern resources as “stone knives and bear skins”
• All the smarts and value live in the hypervisor
Medium-term: Virtual data centers
• Integrated platforms for legacy and modern applications
• Convergence and integration of server, network, and storage
• The smarts live in the orchestration engine
Long-term: Something totally different
• “Run-anywhere” applications
• Mobility for internal or external infrastructure
• The smarts live in the application
Server Virtualization is On the Rise
[Chart] On average, how many VMs run on each virtualization host server in your production environment(s)? 1: 12%; 2 to 5: 31%; 6 to 9: 24%; 10 to 20: 16%; 21 to 40: 10%; More than 40: 7%.
[Chart] What percentage of your organization's production servers do you expect to have virtualized by the end of 2011? Response bands: zero, less than 10%, 10% to 24%, 25% to 49%, 50% to 74%, 75% to 90%, greater than 90% (2009 vs. 2010).
Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010
Server Virtualization is a Pile of Lies!
[Diagram] What the OS thinks it's running on vs. what it's actually running on: guest OS VMs sit not on physical hardware but on the VMkernel (binary translation, paravirtualization, hardware assist), whose scheduler and memory allocator, vNIC/vSwitch-to-NIC-driver path, and vSCSI/PV-to-VMDK-to-VMFS-to-I/O-driver path stand between every guest and the real machine.
And It Gets Worse Outside the Server!
The Virtual Data Center of Tomorrow
[Diagram] The Cloud™: pooled CPU, backup, and a storage network serving many applications, with legacy management alongside.
The Real Future of IT Infrastructure
[Diagram] The stack, top to bottom: orchestration software → containerized application → VM-aware OS → hypervisor → flexible server chassis → converged I/O over Ethernet → storage network.
Three Pillars of VM Performance
[Diagram] Virtual machine performance rests on three pillars: CPU, memory, and I/O (storage/network).
Confounding Storage Presentation
• Storage virtualization is nothing new…
  ▫ RAID and NAS virtualized disks
  ▫ Caching arrays and SANs masked volumes
  ▫ New tricks: thin provisioning, automated tiering, array virtualization
• But we wrongly assume this is where it ends
  ▫ Volume managers and file systems
  ▫ Databases
• Now we have hypervisors virtualizing storage
  ▫ VMFS/VMDK = storage array?
  ▫ Virtual storage appliances (VSAs)
Begging for Converged I/O
• How many I/O ports and cables does a server need?
  ▫ Typical server has 4 ports, 2 used
  ▫ Application servers have 4-8 ports used!
• Do FC and InfiniBand make sense with 10/40/100 GbE?
  ▫ When does commoditization hit I/O?
  ▫ Ethernet momentum is unbeatable
• Blades and hypervisors demand greater I/O integration and flexibility
  ▫ The other side of the coin: the need to virtualize I/O
[Diagram] A typical server's connections: 1 GbE network, 1 GbE cluster, 4G FC storage.
Driving Storage Virtualization
• Server virtualization demands storage features
  ▫ Data protection with snapshots and replication
  ▫ Allocation efficiency with thin provisioning
  ▫ Performance and cost tweaking with automated sub-LUN tiering
  ▫ Improved locking and resource sharing
• Flexibility is the big one
  ▫ Must be able to create, use, modify, and destroy storage on demand
  ▫ Must move storage logically and physically
  ▫ Must allow the OS to move too
“The I/O Blender” Demands New Architectures
• Shared storage is challenging to implement
• Storage arrays "guess" what's coming next based on allocation (LUN), taking advantage of sequential performance
• Server virtualization throws I/O into a blender – all I/O is now random I/O! (See the sketch below.)
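To see why, picture three guests each writing sequentially to their own virtual disk while the hypervisor services them in turn. A minimal Python sketch of the effect (illustration only; VM names and LBA offsets are invented):

```python
# Each VM issues perfectly sequential I/O, but the hypervisor interleaves
# the streams, so the array sees the LUN address jump on every request.
from itertools import islice, cycle

def guest_stream(name, start_lba):
    """Yield an endless sequential run of logical block addresses for one VM."""
    lba = start_lba
    while True:
        yield (name, lba)
        lba += 1

streams = [guest_stream("vm-a", 0),
           guest_stream("vm-b", 100_000),
           guest_stream("vm-c", 200_000)]
blended = [next(s) for s in islice(cycle(streams), 9)]
print(blended)
# [('vm-a', 0), ('vm-b', 100000), ('vm-c', 200000), ('vm-a', 1), ...]
# Sequential per VM, effectively random at the array: the "I/O blender".
```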
Server Virtualization Requires SAN and NAS
• Server virtualization has transformed the data center and storage requirements
  ▫ VMware is the #1 driver of SAN adoption today!
  ▫ 60% of virtual server storage is on SAN or NAS
  ▫ 86% have implemented some server virtualization
• Server virtualization has enabled, and demanded, centralization and sharing of storage on arrays like never before!
Source: ESG, 2008
Keys to the Future For Storage Folks
Storage Presentation
• Management integration (vCenter)
• Storage drivers and paravirtualization
• VMFS, RDM
• Functional integration (VAAI, SIOC)
Converged I/O
• Everything over Ethernet (iSCSI, NFS, FCoE)
• CNA, DCB, NPIV, IOV
Storage Virtualization
• Volume management
• Thin provisioning
• Automated tiering
• Snapshots and replication
• Virtual storage appliances
New Storage Architectures
• Solid-state: SSD and caching
• Post-RAID (wide striping, erasure codes)
Ye Olde Seminar Content!
Primary Production Virtualization Platform
[Chart] Which hypervisor is your organization's primary virtualization platform? Responses: VMware ESX Server, Microsoft Hyper-V, Citrix XenServer, Other (Oracle, KVM, mainframe, etc.), None.
Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010
Storage Features for Virtualization
Feature | VMware ESX | Microsoft Hyper-V
Multi-Pathing | NMP/PSA/PowerPath | MPIO/PowerPath
Storage Live Migration | Storage VMotion | N/A
Paravirtualized Drivers | PVSCSI | IDE/SCSI
Boot from SAN | iSCSI | There is a way…
Block Zeroing | VAAI/T10 | N/A
Granular Locking | VAAI | N/A
Array-Offload Snapshots | VAAI | N/A
Native Snapshots per VM | 32 | 50
Native Thin Provisioning | ESX 4.0+ | Yes
Max Partition Size | 2 TB – 512 B | 2 TB+
Direct I/O | VMDirectPath | N/A
Which Features Are People Using?
[Chart] What neato storage features are you using in your virtualization environment? Responses included: thin provisioning, snapshots, vCenter plugins, clones, deduplication, compression, solid state, large solid-state based caches, auto-tiering, VMware-level QoS (SIOC), storage-array-level QoS.
Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers
What’s New in vSphere 4 and 4.1
• VMware vSphere 4 (AKA ESX/ESXi 4) is a major upgrade for storage
  ▫ Lots of new features like thin provisioning, PSA, any-to-any Storage VMotion, PVSCSI
  ▫ Massive performance upgrade (400k IOPS!)
• vSphere 4.1 is equally huge for storage
  ▫ Boot from SAN
  ▫ vStorage APIs for Array Integration (VAAI)
  ▫ Storage I/O Control (SIOC)
What's New in vSphere 5
• VMFS-5: scalability and efficiency improvements
• Storage DRS: datastore clusters and improved load balancing
• Storage I/O Control: cluster-wide and NFS support
• Profile-Driven Storage: provisioning, compliance, and monitoring
• FCoE software initiator
• iSCSI initiator GUI
• Storage APIs – Storage Awareness (VASA)
• Storage APIs – Array Integration (VAAI 2): Thin Stun, NFS, T10
• Storage vMotion: enhanced with mirror mode
• vSphere Storage Appliance (VSA)
• vSphere Replication: new in SRM
And Then, There's VDI…
• Virtual desktop infrastructure (VDI) takes everything we just worried about and amplifies it:
  ▫ Massive I/O crunches
  ▫ Huge duplication of data
  ▫ More wasted capacity
  ▫ More user visibility
  ▫ More backup trouble
Vendor Showcase and Networking Break
What’s next
Technical Considerations - Configuring Storage for VMs
The mechanics of presenting and using storage in virtualized environments
This Hour's Focus: Hypervisor Storage Features
• Storage vMotion
• VMFS
• Storage presentation: shared, raw, NFS, etc.
• Thin provisioning
• Multipathing (VMware Pluggable Storage Architecture)
• VAAI and VASA
• Storage I/O Control and Storage DRS
Storage vMotion
• Introduced in ESX 3 as "Upgrade vMotion"
  ▫ ESX 3.5 used a snapshot while the datastore was in motion
  ▫ vSphere 4 used changed-block tracking (CBT) and recursive passes (sketched below)
  ▫ vSphere 5 Mirror Mode mirrors writes to in-progress vMotions and also supports migration of vSphere snapshots and Linked Clones
• Can be offloaded with VAAI-Block (but not NFS)
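A toy model of the changed-block-tracking passes described above (an assumed illustration, not VMware code; block counts and the dirtying rate are invented):

```python
import random

def migrate_with_cbt(total_blocks=1024, threshold=16):
    """Copy a disk in passes, re-copying blocks dirtied during each pass."""
    pending = set(range(total_blocks))       # first pass copies every block
    passes = 0
    while len(pending) > threshold:
        passes += 1                          # copy this pass's pending blocks...
        # ...while the running VM dirties roughly a tenth as many, which
        # changed-block tracking hands to the next pass
        pending = {random.randrange(total_blocks)
                   for _ in range(max(1, len(pending) // 10))}
    return passes + 1                        # short final pass finishes the move

print(migrate_with_cbt())  # converges in a handful of passes
# vSphere 5's Mirror Mode skips the repeated passes: new writes are
# mirrored to both datastores, so a single copy pass suffices.
```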
vSphere 5: What’s New in VMFS 5
Feature | VMFS-3 | Upgraded VMFS-5 | New VMFS-5
Block Size | 1, 2, 4, or 8 MB | Same as previous | 1 MB
Max Extent | 2 TB | 60 TB | 60 TB
VAAI ATS (Atomic Test & Set) Locking | No | Yes | Yes
1K Files Stored As Descriptors | No | Yes | Yes
Sub-Block Size | 64 KB | 64 KB | 8 KB
Max Files | 30,720 | 30,720 | >100,000
Partition Type | MBR | MBR < 2 TB, GPT > 2 TB | MBR < 2 TB, GPT > 2 TB
Starting Sector | 128 | 128 | 2048
• Max VMDK size is still 2 TB – 512 bytes
• Virtual (non-passthru) RDM is still limited to 2 TB
• Max LUNs per host is still 256
Hypervisor Storage Options: Shared Storage
• The common/workstation approach
  ▫ VMware: VMDK image in VMFS datastore
  ▫ Hyper-V: VHD image in CSV datastore
  ▫ Block storage (direct or FC/iSCSI SAN)
• Why?
  ▫ Traditional, familiar, common (~90%)
  ▫ Prime features (Storage VMotion, etc.)
  ▫ Multipathing, load balancing, failover*
• But…
  ▫ Overhead of two storage stacks (5-8%)
  ▫ Harder to leverage storage features
  ▫ Often shares storage LUN and queue
  ▫ Difficult storage management
[Diagram] Guest OS in a VM host, with a VMDK in a VMFS datastore on DAS or SAN storage.
Hypervisor Storage Options: Shared Storage on NAS
• Skip VMFS and use NAS
  ▫ NFS or SMB is the datastore
• Wow!
  ▫ Simple – no SAN
  ▫ Multiple queues
  ▫ Flexible (on-the-fly changes)
  ▫ Simple snap and replicate*
  ▫ Enables full VMotion
  ▫ Link aggregation (trunking) is possible
• But…
  ▫ Less familiar (ESX 3.0+)
  ▫ CPU load questions
  ▫ Limited to 8 NFS datastores (ESX default)
  ▫ Snapshot consistency for multiple VMDKs
[Diagram] Guest OS in a VM host, with a VMDK on NAS storage.
Hypervisor Storage Options: Guest iSCSI
• Skip VMFS and use iSCSI directly in the guest
  ▫ Access a LUN just like any physical server
  ▫ VMware ESX can even boot from iSCSI!
• OK…
  ▫ Storage folks love it!
  ▫ Can be faster than ESX iSCSI
  ▫ Very flexible (on-the-fly changes)
  ▫ Guest can move and still access storage
• But…
  ▫ Less common to VM folks
  ▫ CPU load questions
  ▫ No Storage VMotion (but doesn't need it)
[Diagram] Guest OS in a VM host accessing a LUN on iSCSI storage directly.
Hypervisor Storage Options: Raw Device Mapping (RDM)
• Guest VMs access storage directly over iSCSI or FC
  ▫ VMs can even boot from raw devices
  ▫ Hyper-V pass-through LUN is similar
• Great!
  ▫ Per-server queues for performance
  ▫ Easier measurement
  ▫ The only method for clustering
  ▫ Supports LUNs larger than 2 TB (60 TB passthru in vSphere 5!)
• But…
  ▫ Tricky VMotion and dynamic resource scheduling (DRS)
  ▫ No Storage VMotion
  ▫ More management overhead
  ▫ Limited to 256 LUNs per data center
[Diagram] Guest OS I/O passes through a mapping file to SAN storage.
Hypervisor Storage Options: Direct I/O
• VMware ESX VMDirectPath: guest VMs access I/O hardware directly
  ▫ Leverages AMD IOMMU or Intel VT-d
• Great!
  ▫ Potential for native performance
  ▫ Just like RDM but better!
• But…
  ▫ No VMotion or Storage VMotion
  ▫ No ESX fault tolerance (FT)
  ▫ No ESX snapshots or VM suspend
  ▫ No device hot-add
  ▫ No performance benefit in the real world!
[Diagram] Guest OS I/O goes straight to SAN storage, bypassing the hypervisor storage stack.
Which VMware Storage Method Performs Best?
[Charts] Mixed random I/O and CPU cost per I/O, comparing VMFS, RDM (physical), and RDM (virtual).
Source: "Performance Characterization of VMFS and RDM Using a SAN", VMware Inc., ESX 3.5, 2008
vSphere 5: Policy or Profile-Driven Storage
• Allows storage tiers to be defined in vCenter based on SLA, performance, etc.
• Used during provisioning, cloning, Storage vMotion, and Storage DRS
• Leverages VASA for metrics and characterization
• All HCL arrays and types (NFS, iSCSI, FC)
• Custom descriptions and tagging for tiers
• Compliance status is a simple binary report
Native VMware Thin Provisioning
• VMware ESX 4 allocates storage in 1 MB chunks as capacity is used (sketched below)
  ▫ Similar support enabled for virtual disks on NFS in VI 3
  ▫ Thin provisioning existed for block and could be enabled on the command line in VI 3
  ▫ Present in VMware desktop products
• vSphere 4 fully supports and integrates thin provisioning
  ▫ Every version/license includes thin provisioning
  ▫ Allows thick-to-thin conversion during Storage VMotion
• In-array thin provisioning is also supported (we'll get to that…)
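A minimal sketch of allocate-on-first-write in 1 MB chunks (an invented illustration of the concept, not VMware code; the ThinDisk class and its names are hypothetical):

```python
CHUNK = 1 << 20  # 1 MB, the ESX 4 allocation unit mentioned above

class ThinDisk:
    """Guest sees the full virtual size; backing space grows on demand."""
    def __init__(self, virtual_size):
        self.virtual_size = virtual_size
        self.chunks = {}                     # chunk index -> backing storage

    def write(self, offset, data):
        first = offset // CHUNK
        last = (offset + len(data) - 1) // CHUNK
        for idx in range(first, last + 1):
            if idx not in self.chunks:       # allocate only on first touch
                self.chunks[idx] = bytearray(CHUNK)  # zeroed when allocated
        # (copying data into the backing chunks is omitted for brevity)

disk = ThinDisk(virtual_size=100 * (1 << 30))  # guest sees 100 GB
disk.write(0, b"x" * 4096)
print(len(disk.chunks) * CHUNK)  # 1048576: only 1 MB actually consumed
```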
Four Types of VMware ESX Volumes
• ZeroedThick (default): pre-allocated on creation; no initial zeroing; zeroing on write/delete
• EagerZeroedThick: pre-allocated on creation; zeroing on creation; zeroing on write/delete
• Thick: pre-allocated on creation; no initial zeroing; no zeroing
• Thin: allocation on demand; no initial zeroing; zeroing on write/delete
Which are friendly to on-array thin provisioning? What will your array do? VAAI helps…
Note: FT is not supported (FT requires EagerZeroedThick)
Storage Allocation and Thin Provisioning
• Thick-on-Thick: the traditional way; no hassles, no worries; guaranteed to be inefficient
• Thin-on-Thick: let ESX handle thin tasks; efficiency gains limited per ESX host; simple static storage allocation; no ESX FT
• Thick-on-Thin: let the array handle thin tasks; efficiency gains benefit everyone; less worry for VM admins
• Thin-on-Thin: both ESX and the array do their thin thing; maximum efficiency; more opportunities for incompatibilities and issues; no ESX FT
VMware tests show no performance impact from thin provisioning after zeroing.
Pluggable Storage Architecture: Native Multipathing
• VMware ESX includes multipathing built in
  ▫ Basic native multipathing (NMP) is round-robin fail-over only – it will not load balance I/O across multiple paths or make more intelligent decisions about which paths to use
[Diagram] Pluggable Storage Architecture (PSA): VMware NMP hosts VMware or third-party PSPs and SATPs, alongside third-party MPPs.
Pluggable Storage Architecture: PSP and SATP
• The vSphere 4 Pluggable Storage Architecture allows third-party developers to replace ESX's storage I/O stack
  ▫ ESX Enterprise+ only
• There are two classes of third-party plug-ins:
  ▫ Path-selection plug-ins (PSPs) optimize the choice of which path to use, ideal for active/passive type arrays
  ▫ Storage array type plug-ins (SATPs) allow load balancing across multiple paths in addition to path selection for active/active arrays
• EMC PowerPath/VE for vSphere does everything
Storage Array Type Plug-ins (SATP)
• ESX native approaches
  ▫ Active/Passive
  ▫ Active/Active
  ▫ Pseudo Active
• Storage Array Type Plug-Ins
  ▫ VMW_SATP_LOCAL – Generic local direct-attached storage
  ▫ VMW_SATP_DEFAULT_AA – Generic for active/active arrays
  ▫ VMW_SATP_DEFAULT_AP – Generic for active/passive arrays
  ▫ VMW_SATP_LSI – LSI/NetApp arrays from Dell, HDS, IBM, Oracle, SGI
  ▫ VMW_SATP_SVC – IBM SVC-based systems (SVC, V7000, Actifio)
  ▫ VMW_SATP_ALUA – Asymmetric Logical Unit Access-compliant arrays
  ▫ VMW_SATP_CX – EMC/Dell CLARiiON and Celerra (also VMW_SATP_ALUA_CX)
  ▫ VMW_SATP_SYMM – EMC Symmetrix DMX-3/DMX-4/VMAX, Invista
  ▫ VMW_SATP_INV – EMC Invista and VPLEX
  ▫ VMW_SATP_EQL – Dell EqualLogic systems
• Also EMC PowerPath, HDS HDLM, and vendor-unique plugins not detailed in the HCL
Path Selection Plug-ins (PSP)
• VMW_PSP_MRU – Most-Recently Used (MRU) – Supports hundreds of storage arrays
• VMW_PSP_FIXED – Fixed - Supports hundreds of storage arrays
• VMW_PSP_RR – Round-Robin – Supports dozens of storage arrays (sketched conceptually below)
• DELL_PSP_EQL_ROUTED – Dell EqualLogic iSCSI arrays
• Also EMC PowerPath and other vendor-unique plug-ins
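Conceptually, a round-robin PSP simply rotates I/O across a device's active paths, while MRU sticks with the last working path and Fixed prefers a configured one. A sketch of the round-robin case (illustrative Python, not a real kernel plug-in; the class and path names are invented):

```python
class RoundRobinPSP:
    """Rotate across all paths, skipping any the SATP has marked dead."""
    def __init__(self, paths):
        self.paths = paths
        self.next_index = 0

    def select_path(self):
        for _ in range(len(self.paths)):
            path = self.paths[self.next_index]
            self.next_index = (self.next_index + 1) % len(self.paths)
            if path["state"] == "active":
                return path["name"]
        raise IOError("no active paths to device")

psp = RoundRobinPSP([{"name": "vmhba1:C0:T0:L0", "state": "active"},
                     {"name": "vmhba2:C0:T0:L0", "state": "active"}])
print([psp.select_path() for _ in range(4)])
# alternates vmhba1, vmhba2, vmhba1, vmhba2 -- both fabrics carry I/O
```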
vStorage APIs for Array Integration (VAAI)
• VAAI integrates advanced storage features with VMware
• Basic requirements:
  ▫ A capable storage array
  ▫ ESX 4.1+
  ▫ A software plug-in for ESX
• Not every implementation is equal
  ▫ Block zeroing can be very demanding for some arrays
  ▫ Zeroing might conflict with full copy
• Block Zero: a communication method for thin provisioning; supports the T10 standard as well as custom APIs (a miniature comparison follows)
• Full Copy: commands the array to make a mirror of a LUN; only custom APIs supported
• Hardware-Assisted Locking: enables granular locking of block storage devices; only custom APIs supported
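The appeal of Block Zero is easy to show in miniature: without the offload, the host itself pushes every zeroed block across the fabric; with it, a single SCSI WRITE SAME descriptor tells the array to zero the range internally. A hedged sketch (the helper names and counts are invented):

```python
def zero_without_vaai(send, length, io_size=1 << 20):
    """Host writes the zeros itself, one fabric transfer per chunk."""
    for offset in range(0, length, io_size):
        send(("WRITE", offset, io_size))      # payload of zeros crosses the SAN

def zero_with_vaai(send, length):
    """One offloaded command; the array does the zeroing internally."""
    send(("WRITE SAME", 0, length))           # descriptor only, no payload

fabric = []
zero_without_vaai(fabric.append, 256 * (1 << 20))
print(len(fabric))   # 256 one-megabyte transfers for a 256 MB region
fabric.clear()
zero_with_vaai(fabric.append, 256 * (1 << 20))
print(len(fabric))   # 1 command achieves the same result on the array
```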
VAAI Support Matrix
Products | Plug-in | Fibre Channel | iSCSI | Block Zeroing | Full Copy | Hardware-Assisted Locking
EMC Symmetrix VMAX | VMW_VAAI_SYMM | Y | Y | Y | Y | Y
EMC CLARiiON CX4, Celerra NS, CNS, VNX? | vmw_vaaip_cx | Y | Y | Y | Y | Y
HP 3PAR E200, F-Class, S400, S800, T-Class | 3PAR_vaaip_InServ | Y | Y | Y | Y | Y
Fujitsu Eternus 4000, 8000, DX410/440, DX8100/8400/8700 | fjt_vaaip_module | Y | Y | Y | Y | Y
NetApp FAS2000/3000/6000, N3000/5000/6000/7000 | VMW_VAAIP_NETAPP | Y | Y | Y | Y | Y
HDS AMS 2040/2100/2300/2500, BR1600, USP V/VM, VSP, NSC 55, USP 100/1100/600, HP P9500 | vmw_vaaip_hds | Y | Y | Y | Y | Y
IBM XIV, SVC, V7000, Fujitsu VS850, Actifio | IBM_VAAIP_MODULE | Y | Y | Y | Y | Y
Dell EqualLogic PS4000/5000/5500/6000 | vmw_vaaip_eql | N/A | Y | Y | Y | Y
HP LeftHand P4000/4300/4500/4800, VSA | vmw_vaaip_lhn | N/A | Y | Y | Y | Y
Actifio, Bull Optima2000, iStorage D3/D4, IBM Storwize V7000, IBM SVC, Fujitsu Eternus VS850 | vmw_vaaip_t10 | Y | Y | Y | N | N
vSphere 5: VAAI 2
Block (FC/iSCSI):
• Block Zero: communication method for thin provisioning; supports T10, custom APIs, and SCSI UNMAP
• Full Copy: commands the array to make a mirror of a LUN; T10 and custom APIs supported
• Hardware-Assisted Locking: enables granular locking of block storage devices; T10 and custom APIs supported
• Thin Provisioning Stun: "VM stun" for out-of-space conditions; requires VASA
• Thin Space Reclaim: reclamation of VMFS dead space; applies to VMDK actions (delete VM or snapshot, Storage vMotion); uses SCSI UNMAP
File (NFS):
• Full File Clone: like Full Copy for NFS; for "clone" or "deploy from template" but not Storage vMotion
• Extended Stats API: brings in more detail (thin status)
• Reserve Space: adds thick provisioning for NFS
• Native Snapshot Support: will be used for VDI; not really talked about yet
NAS plugins come from vendors, not VMware.
T10 compliance is improved – no plug-in needed for many arrays.
vSphere 5: vSphere Storage APIs – Storage Awareness (VASA)
• VASA is a communication mechanism for vCenter to detect array capabilities
  ▫ RAID level, thin provisioning state, replication state, etc.
• Two locations in vCenter Server:
  ▫ "System-Defined Capabilities": per-datastore descriptors
  ▫ Storage views and SMS APIs
Storage I/O Control (SIOC)
• Storage I/O Control (SIOC) is all about fairness:
  ▫ Prioritization and QoS for VMFS
  ▫ Re-distributes unused I/O resources
  ▫ Minimizes "noisy neighbor" issues
• ESX can provide quality of service for storage access to virtual machines
  ▫ Enabled per-datastore
  ▫ When a pre-defined latency level is exceeded, it begins to throttle VM I/O (default 30 ms)
  ▫ Monitors queues on storage arrays and per-VM I/O latency
• But:
  ▫ vSphere 4.1 with Enterprise Plus
  ▫ Disabled by default but highly recommended!
  ▫ Block storage only (FC or iSCSI)
  ▫ Whole-LUN only (no extents)
  ▫ No RDM
(A sketch of the throttling idea follows.)
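When datastore latency crosses the threshold, each host's device queue is scaled back in proportion to its VMs' shares (an assumed illustration of the concept, not VMware's algorithm; the function, numbers, and host names are invented):

```python
def sioc_adjust(queue_depth, shares, latency_ms, threshold_ms=30, max_q=64):
    """Return new per-host device queue depths for one datastore."""
    if latency_ms <= threshold_ms:
        # congestion cleared: let every host's queue grow back toward max
        return {h: min(max_q, q + 4) for h, q in queue_depth.items()}
    # congested: shrink the aggregate queue, dividing it by share entitlement
    target = int(sum(queue_depth.values()) * 0.9)
    total = sum(shares.values())
    return {h: max(4, min(queue_depth[h], target * shares[h] // total))
            for h in queue_depth}

depths = {"esx1": 64, "esx2": 64}
print(sioc_adjust(depths, {"esx1": 2000, "esx2": 1000}, latency_ms=45))
# {'esx1': 64, 'esx2': 38} -- the lower-share host is throttled first
```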
Storage I/O Control in Action
Virtual Machine Mobility
• Moving virtual machines is the next big challenge
• Physical servers are difficult to move around and between data centers
• Pent-up desire to move virtual machines from host to host and even to different physical locations
• VMware DRS moves live VMs around the data center
  ▫ The "Holy Grail" for server managers
  ▫ Requires networked storage (SAN/NAS)
vSphere 5: Storage DRS
• Datastore clusters aggregate multiple datastores
• VM and VMDK placement metrics:
  ▫ Space: capacity utilization and availability (80% default)
  ▫ Performance: I/O latency (15 ms default)
• When thresholds are crossed, vSphere rebalances all VMs and VMDKs according to affinity rules: VMDK affinity (default), VMDK anti-affinity, and VM anti-affinity (a placement sketch follows)
• Storage DRS works with either VMFS/block or NFS datastores
• Maintenance Mode evacuates a datastore
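A sketch of the placement rule using the two default thresholds (illustration only; the datastore records and least-utilized tiebreak are invented, and real Storage DRS weighs richer metrics):

```python
def pick_datastore(cluster, vmdk_gb, space_max=0.80, latency_max_ms=15.0):
    """Choose a datastore that stays under both thresholds, or None."""
    ok = [ds for ds in cluster
          if (ds["used_gb"] + vmdk_gb) / ds["cap_gb"] <= space_max
          and ds["latency_ms"] <= latency_max_ms]
    if not ok:
        return None
    # favor the least-utilized candidate as a stand-in for load balancing
    return min(ok, key=lambda ds: ds["used_gb"] / ds["cap_gb"])["name"]

cluster = [{"name": "ds1", "cap_gb": 1000, "used_gb": 850, "latency_ms": 9},
           {"name": "ds2", "cap_gb": 1000, "used_gb": 400, "latency_ms": 22},
           {"name": "ds3", "cap_gb": 1000, "used_gb": 500, "latency_ms": 8}]
print(pick_datastore(cluster, vmdk_gb=100))
# 'ds3': ds1 would exceed 80% space, ds2 exceeds the 15 ms latency ceiling
```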
Lunch
What’s next
Expanding the Conversation
Converged I/O, storage virtualization, and new storage architectures
This Hour's Focus: Non-Hypervisor Storage Features
• Converged networking
  ▫ Storage protocols (FC, iSCSI, NFS)
  ▫ Enhanced Ethernet (DCB, CNA, FCoE)
• I/O virtualization
• Storage for virtual servers
  ▫ Tiered storage and SSD/flash
  ▫ Specialized arrays
  ▫ Virtual storage appliances (VSA)
Introduction: Converging on Convergence
• Data centers rely more on standard ingredients
• What will connect these systems together?
• IP and Ethernet are logical choices
[Diagram] The modern data center: IP and Ethernet networks, Intel-compatible server hardware, open systems (Windows and UNIX).
Drivers of Convergence
• Virtualization: demands greater network and storage I/O; the "I/O blender"; mobility and abstraction
• Consolidation: need to reduce port count, combining LAN and SAN; network abstraction features
• Performance: data-driven applications need massive I/O; virtualization and VDI
Which Storage Protocol to Use?
• Server admins don't know/care about storage protocols and will want whatever they are familiar with
• Storage admins have preconceived notions about the merits of the various options:
  ▫ FC is fast, low-latency, low-CPU, expensive
  ▫ NFS is slow, high-latency, high-CPU, cheap
  ▫ iSCSI is medium, medium, medium, medium
vSphere Protocol Performance
vSphere CPU Utilization
vSphere Latency
Microsoft Hyper-V Performance
Which Storage Protocols Do People Use?
[Chart] How many storage protocols are used? 1: 7%; 2: 13%; 3: 20%; 4: 27%; 5: 33%.
[Chart] What storage protocol(s) support your virtualization environment? (multiple responses accepted) Responses: DAS (internal storage to the server), iSCSI, FC, FCoE, NFS.
Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers
The Upshot: It Doesn't Matter
• Use what you have and are familiar with!
• FC, iSCSI, and NFS all work well
  ▫ Most enterprise production VM data is on FC; many smaller shops use iSCSI or NFS
  ▫ Either/or? 50% use a combination
• For IP storage:
  ▫ Network hardware and config matter more than protocol (NFS, iSCSI, FC)
  ▫ Use a separate network or VLAN
  ▫ Use a fast switch and consider jumbo frames
• For FC storage:
  ▫ 8 Gb FC/FCoE is awesome for VMs
  ▫ Look into NPIV
  ▫ Look for VAAI
The Storage Network Roadmap
[Chart] Network performance timeline, 1995-2019: bandwidth in gigabits per second for FCP, then Ethernet LAN, then iSCSI and FCoE, and finally Ethernet backplane, built up across four progressive views with scales rising from 0-35 to 0-100 Gb/s.
Serious Performance
• 10 GbE is faster than most storage interconnects
• iSCSI and FCoE can both perform at wire rate
[Chart] Full-duplex throughput (MB/s) compared across 1 GbE, SATA-300, SATA-600, 4G FC, SAS 2, 8G FC, 4x SDR IB, 10 GbE, and 4x QDR IB (scale 0 to 3,500 MB/s).
Latency is Critical Too
• Latency is even more critical in shared storage
• FCoE with 10 GbE can achieve well over 500,000 4K IOPS (if the array and client can handle it!)
[Chart] 4K IOPS compared across 1 GbE, SATA-300, SATA-600, 4G FC, SAS 2, 8G FC, 4x SDR IB, 10 GbE, and 4x QDR IB (scale 0 to 800k).
Benefits Beyond Speed
• 10 GbE takes performance off the table (for now…)
• But performance is only half the story:
  ▫ Simplified connectivity
  ▫ New network architecture
  ▫ Virtual machine mobility
[Diagram] One 10 GbE link replaces the 1 GbE network, 1 GbE cluster, and 4G FC storage connections (plus 6 Gbps extra capacity).
Enhanced 10 Gb Ethernet
• Ethernet and SCSI were not made for each other
  ▫ SCSI expects a lossless transport with guaranteed delivery
  ▫ Ethernet expects higher-level protocols to take care of issues
• "Data Center Bridging" is a project to create lossless Ethernet
  ▫ AKA Data Center Ethernet (DCE) or Converged Enhanced Ethernet (CEE)
  ▫ iSCSI and NFS are happy with or without DCB
• DCB is a work in progress
  ▫ FCoE requires PFC (Qbb or PAUSE) and DCBX (Qaz)
  ▫ QCN (Qau) is still not ready
[Diagram] The DCB family: Priority Flow Control (PFC, 802.1Qbb), Congestion Management (QCN, 802.1Qau), Bandwidth Management (ETS, 802.1Qaz), Data Center Bridging Exchange Protocol (DCBX, 802.1Qaz), traffic classes (802.1p/Q), and PAUSE (802.3x).
FCoE CNAs for VMware ESX
Manufacturer | Model or Series | ETS (802.1Qaz) | DCBX (802.1Qaz) | PFC (802.1Qbb) | QCN (802.1Qau)
Brocade | 1007 (IBM Blade) | Y | Y | Y | N
Brocade | 1010/1020 | Y | Y | Y | Y
Emulex | LP21000 | Y | Y | Y | N
Emulex | OneConnect OCe10100 | Y | Y | Y | N
QLogic | QLE8000 | N | Y | Y | N
QLogic | QLE8100 | Y | Y | Y | N
QLogic | QLE8200 | Y | Y | Y | Y
No Intel (OpenFCoE) or Broadcom support in vSphere 4…
vSphere 5: FCoE Software Initiator
• Dramatically expands the FCoE footprint from just a few CNAs
• Based on Intel OpenFCoE? Shows as "Intel Corporation FCoE Adapter"
I/O Virtualization: Virtual I/O
• Extends I/O capabilities beyond physical connections (PCIe slots, etc.)
  ▫ Increases flexibility and mobility of VMs and blades
  ▫ Reduces hardware, cabling, and cost for high-I/O machines
  ▫ Increases density of blades and VMs
• PCI over Ethernet (Aprius, Xsigo): extends the PCI bus to a "card chassis"; works over backplane or LAN; allows greater connectivity and mobility
• Virtual networking (Cisco Nexus 1000v): embeds a virtual switch inside the hypervisor; extends the networking domain to virtual machines; manageability, features, mobility
• Converged I/O (HP Virtual Connect, DCB, InfiniBand): shares a physical connection among multiple logical ones; allows different protocols to use one resource (Ethernet, IB)
I/O Virtualization: IOMMU (Intel VT-d)
• An IOMMU gives devices direct access to system memory
  ▫ AMD IOMMU or Intel VT-d
  ▫ Similar to AGP GART
• VMware VMDirectPath leverages the IOMMU
  ▫ Allows VMs to access devices directly
  ▫ May not improve real-world performance
[Diagram] The I/O device reaches system memory through the IOMMU, just as the CPU does through the MMU.
Does SSD Change the Equation?
• RAM and flash promise high performance…
• But you have to use it right
Positives:
• Very fast random read
• Low latency and high consistency
• Fast-ish write
• Low power consumption and heat
• Lightweight, compact size
• High mechanical reliability
Shortcomings:
• Big write blocks can slow writes
• Shorter lifespan
• Write performance declines with use and capacity
Flash is Not a Disk
• Flash must be carefully engineered and integrated
  ▫ Cache and intelligence to offset the write penalty
  ▫ Automatic block-level data placement to maximize ROI
• If a system can do this, everything else improves
  ▫ Overall system performance
  ▫ Utilization of disk capacity
  ▫ Space and power efficiency
  ▫ Even system cost can improve!
The Tiered Storage Cliché
[Diagram] The familiar pyramid, with cost and performance rising toward the top and "optimized for savings" at the bottom:
• Tier 0 – Flash!
• Tier 1 – Enterprise arrays (ours)
• Tier 2 – Midrange arrays (ours)
• Tier 3 – Tape or archiving or something
Tiered Storage Evolves
Type | Location of Tiers | Data Classification | Data Movement | Granularity
Extra-array tiering | One drive type per array | Manual and static | Manual migration | Whole LUN or file system (GB to TB)
Internal tiering | Multiple drive types per array | Manual and static | Manual promotion | Whole LUN or file system (GB to TB)
Internal rebalancing | Multiple drive types per array | Manual or automatic | Automatic promotion | Whole LUN or file system (GB to TB)
Page-based tiering | Multiple drive types per array | Automatic, performance-based | Automatic promotion | Pages (KB to MB)
Caching | Inside or outside the array | Automatic, performance-based | No movement | Pages (KB to MB)
(A sketch of page-based promotion follows the table.)
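The page-based row is the interesting one: a minimal sketch of promotion by access count (an invented illustration; the page IDs, counters, and flash capacity are hypothetical):

```python
from collections import Counter

def retier(access_counts, flash_pages, flash_capacity):
    """Return the set of page IDs that should now live on the flash tier."""
    hottest = {page for page, _ in access_counts.most_common(flash_capacity)}
    promote = hottest - flash_pages          # newly hot pages move up
    demote = flash_pages - hottest           # cooled-off pages move down
    # a real array shuffles KB-to-MB pages in the background; the whole-LUN
    # rows above would move GB to TB instead
    return (flash_pages - demote) | promote

counts = Counter({"p1": 900, "p2": 40, "p3": 700, "p4": 5})
print(sorted(retier(counts, flash_pages={"p2"}, flash_capacity=2)))
# ['p1', 'p3']: the two hottest pages now occupy the flash tier
```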
Three Approaches to SSD for VM
• In-array caching: some use PCIe cards (NetApp); others use SSDs (EMC, HDS, IBM, etc.)
• In-server SSD: not a cache but a drive (PCIe or SATA/SAS); available from many vendors (Fusion-io, TMS, LSI, Micron, Virident)
• In-server caching: mostly PCIe-based; IO Turbine – software to leverage PCIe SSD; Marvell DragonFly – PCIe RAM+SSD
EMC Project Lightning promises to deliver all three!
Storage for Virtual Servers (Only!)
• A new breed of storage solutions just for virtual servers
  ▫ Highly integrated (vCenter, VMkernel drivers, etc.)
  ▫ High-performance (SSD cache)
• Mostly from startups (for now)
  ▫ Tintri – NFS-based caching array
  ▫ Virsto+EvoStor – Hyper-V software, moving to VMware
Virtual Storage Appliances (VSA)
• What if the SAN was pulled inside the hypervisor?
• VSA = a virtual storage array running as a guest VM
• Great for lab or PoC; some are not for production
• Can build a whole data center in a hypervisor, including LAN, SAN, clusters, etc.
[Diagram] A virtual storage appliance runs beside guest VMs on the hypervisor, sharing physical CPU and RAM and serving a virtual SAN over a virtual LAN.
vSphere 5: vSphere Storage Appliance (VSA)
• Aimed at the SMB market
• Two deployment options:
  ▫ 2x replicates storage 4:2
  ▫ 3x replicates round-robin 6:3
• Uses local (DAS) storage
• Enables HA and vMotion with no SAN or NAS
• Uses NFS for storage access
• Also manages IP addresses for HA
Virtual Storage Appliance Options
VSA | Type | Features | Cost
FalconStor NSS | iSCSI | Snapshots, replication, encryption, thin provisioning | $1,000+
HP LeftHand | iSCSI | Clustering, thin provisioning, snapshots, replication | Free
StorMagic SvSAN | iSCSI | Mirroring | 30-day trial
Seanodes Exanodes | iSCSI | Clustering | Free
StoneFly SCVM | iSCSI | DR, replication, snapshots | For customers
NexentaStor | iSCSI/NFS | Replication, snapshots, clustering, WORM | Free
EMC Celerra UBER | iSCSI/NFS | Snapshots, replication, thin provisioning | Free, not for production
Nasuni Filer | SMB | Cloud gateway, snapshots, encryption | $300/mo
TwinStrata CloudArray | iSCSI | DR, cloud gateway | Commercial
VMware VSA | NFS | Clustering, HA | About $3,500
Whew! Let's Sum Up
• Server virtualization changes everything
  ▫ Throw your old assumptions about storage workloads and presentation out the window
• We (storage folks) have some work to do
  ▫ New ways of presenting storage to the server
  ▫ Converged I/O (Ethernet!)
  ▫ New demand for storage virtualization features
  ▫ New architectural assumptions
Storage Presentation
• Management integration (vCenter)
• Storage drivers and paravirtualization
• VMFS, RDM
• Functional integration (VAAI, SIOC)
Converged I/O
• Everything over Ethernet (iSCSI, NFS, FCoE)
• CNA, DCB, NPIV, IOV
Storage Virtualization
• Volume management
• Thin provisioning
• Automated tiering
• Snapshots and replication
• Virtual storage appliances
New Storage Architectures
• Solid-state: SSD and caching
• Post-RAID (wide striping, erasure codes)
Thank You!
Stephen Foskett
twitter.com/sfoskett
+1(508)451-9532
FoskettServices.com
blog.fosketts.net
GestaltIT.com