Transcript of TechSummit 2017 Session: VMware vSAN
VMware vSAN
Paul Braren, VMware Systems Engineer / vSAN
Northeastern US / Commercial
pbraren at vmware.com
TinkerTry.com | @paulbraren
VCP-DCV, vExpert 2014/15/16/17
Wethersfield, CT
VMware vSAN
• Formerly known as Virtual SAN
• Integrated into the kernel, pools disk space from multiple ESXi hosts
Modernization of the Data Center Being Fueled by HCI
Traditional 3-Tiered Architecture: Complex and Separate Silos
Servers
and Blades
External
Storage
Networking
Hardware
Simplified management
Lower total costs
Greater agility & scale
Virtualization: Compute | Storage | Network
Unified Management
Server + Storage Network
Hyper-Converged Infrastructure
Built on Industry-Standard Servers and Switches
Virtualization
HCI Fills a Growing Market Gap
Flexible Enterprise Storage
HCI Storage
Traditional Storage
Reliable and Fast …but complex
Cloud Storage
Simple, Flexible …but lacks Enterprise control
[Diagram: SSD-based building blocks – Scale Up or Scale Out]
Storage is being “Serverized”
Enjoy efficient server
economics
Extend server
management to storage
Unlock affordable
server-side flash
Total Storage Market
[Chart: $10B–$50B, 2012–2027 – Traditional Storage shrinks while HCI (Server SAN) and Hyperscale grow]
Source: Wikibon Server SAN Research Project, Aug 2016
Continuing the Blistering Pace of Adoption: Fastest Growth Since ESX
vSAN Customer Adoption
[Chart: vSAN customer adoption, Q1'14–Q2'16]
• 7,000+ customers; 200% YoY bookings growth
• 400 companies in the Fortune 1000 (US)
• vSAN presence in 109 countries
• 100% of industry verticals use vSAN
300,000 on-board sensors send log files
to a Big Data Analytics System running
on VMware vSAN
Analytics accelerate maintenance &
turnaround times
Each extra hour of flying time
yields $25,000 in savings
vSAN Keeps the A380 Flying
VMware vSAN: Radically Simple Storage
10
vSphere + vSAN
…
• Resource efficient hyperconverged solution
• Commodity x86 servers
• Enterprise-level features:
– Deduplication and Compression
– Erasure Coding (RAID 5/6)
– Availability, Scalability and Performance
• Scales from 2 to 64 nodes
• Policy-based Management
– Dynamic application
– Applied at the VM level
Overview
[Diagram: three hosts, each with an SSD plus hard disks, pooled into a single vSAN datastore]
Storage Designed for Business
vSAN Leverages Next-Generation Hardware
• Caching tier: NVMe, all flash – high performance with consistent latency
• Persistence tier: NVMe SSD, NVDIMM
• Re-platform to deliver the lowest $/GB and $/IOPS, riding the performance curve
Enabling Next Generation Hardware for vSAN
12
NVDIMM
Overview
➢ High Speed NVMe: Use high speed, low latency NVMe for
caching
➢ NVDIMM Phase 1: “Byte addressable” for storing metadata
➢ NVDIMM Phase 2: Enable storing metadata AND cache
➢ RDMA over Ethernet: To boost network transfers
➢ High Density Drives: Support more capacity per host
Benefits
➢ Ride the technology curve to enable faster devices
➢ Enable next-gen NVMe devices to deliver lower $/IOP
➢ Enable storage class memory (NVDIMM) for lowest $/IOP
➢ Reduce network latencies and improve CPU utilization with RDMA
over Ethernet
➢ Enable high density drives for high capacity workloads at the
lowest $/Gig
VMware Storage: a history of data integrity, data availability & data resiliency
vSAN – Continued Innovation
• ESX 3.0 (2006) · ESX 4.0 (2009) · ESX 5.0 (2011) · ESX 6.0 (2015)
• VSA 5.0 (August 2011) · VSA 5.5 (September 2013) · VSA 5.5.2 (October 2014)
• vSAN 5.5 (March 2014)
• vSAN 6.0 (March 2015): All Flash, 64-node clusters, 2x hybrid performance, vSAN snapshots, vSAN clones, rack awareness
• vSAN 6.1 (September 2015): Stretched Cluster, replication with 5-minute RPO, root cause analysis, health monitoring
• vSAN 6.2 (March 2016): deduplication, compression, erasure coding (RAID 5/6), Quality of Service, performance & capacity monitoring, expanded vSAN Ready Nodes
• vSAN 6.5 (December 2016): iSCSI target, 2-node direct connect, 512e disk support, cloud automation
• vSAN 6.6 (2017)
livevirtually.net/2017/01/12/correlating-vsan-versions-with-vsphere-vcenter-esxi-versions
Deduplication and Compression for Space Efficiency
• Nearline deduplication and compression at the disk group level
– Enabled at the cluster level
– Deduplicated when de-staging from the cache tier to the capacity tier
– Fixed block-length deduplication (4KB blocks)
• Compression after deduplication
– If a block compresses to <= 2KB, the compressed block is stored
– Otherwise the full 4KB block is stored
All Flash only
[Diagram: vmdk objects spread across esxi-01, esxi-02, esxi-03 on vSphere & vSAN]
vSAN and the Advantage of Integration
Why kernel-integrated HCI is superior
16
Consolidate
vSphere vSAN
Managed by vCenter
VM VM VM VM VM VM
• Aware of VM I/O from end to end
• Kernel integration means the scheduler understands the differences between I/O types
• The most efficient path means minimal resources are used to move data to and from the storage system
HCI powered by vSAN
The power of Integration
The hypervisor sits at the most important location in the data center
17
vSphere
Managed by vCenter Server
VMware SDDC Ecosystem
Storage
The power of Integration
Kernel level distributed storage brings intelligence and control to the hypervisor
18
Control | Intelligence
Managed by vCenter Server
vSphere vSAN
VMware SDDC Ecosystem
The power of Integration - Intelligence
Intelligence – What you measure, how you measure it, and where you measure it all matter
19
vSphere
Storage VM (per server)
Ad-hoc HCI
Storage
Storage
Networking
vSphere
Three-tier architecture
Measures
here
Measures
here
The power of Integration - Intelligence
Intelligence – vSphere measures the right data, in the right way, at the right
location
Remember that VMware has the unique advantage of understanding kernel-level metrics like no other solution.
20
vSphere vSAN
Managed by vCenter
VM VM VM VM VM VM
Measures
here
HCI powered by vSAN
The advantage of Integration
vSphere with vSAN is the key to transitioning to a full SDDC
21
• No risk of new solution ecosystem
• No risk of new management tool
• No risk of new software installation
• No lock-in to proprietary server vendors
vSphere vSAN
Managed by vCenter
Virtual Volumes
Common Policy-Based Management
NSX and SRM
VADP Data Protection (Dell EMC, Veeam, Symantec, etc.)
vRealize Automation | vRealize Operations
3rd Party Ecosystem
Demo: Performance Monitoring Built In
22
vSAN shows performance from the cluster level down to the VMDK level
Storage Policy-Based Management
The Old Way
• Same performance and availability for every VM
– Regardless of what is actually needed by each app
– Monolithic data protection
– Difficult maintenance
• Complexity
– Zoning, masking
– LUN sprawl
– VMs across multiple LUNs
• Example: a database VM spread across LUN-01 (RAID1) and LUN-02 (RAID5) within the same datastore
Storage Policy-Based Management
• Policy contains one or more rules
– Availability, performance, etc.
• Assign policy to VM or virtual disk
– Each app gets exactly what is needed
• Changes are done dynamically
– No downtime, no migration
• Manage traditional storage and vSAN
25
Availability
Encryption
Performance
vSAN Objects
Each VM on vSAN is stored as a set of objects:
• VM Home Namespace (VMX, NVRAM, etc.)
• VM Swap (virtual memory swap)
• Virtual Disk (VMDK)
• Snapshot delta disk
• Snapshot memory delta
vSAN Objects and Components
• Each object is made up of multiple components, which define its availability and performance
• Example: FTT=1, RAID-1 mirroring, Stripes=2
[Diagram: VMDK object placed across three ESXi hosts – RAID-1 across two mirror copies, each a RAID-0 of two stripe components (stripe-1a/1b, stripe-2a/2b), plus a witness component]
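The example policy (FTT=1, RAID-1 mirroring, Stripes=2) can be enumerated with a small sketch. The component names are hypothetical, but the resulting count matches the layout on the slide.

```python
# Illustrative sketch, not vSAN internals: enumerate the components created
# for one VMDK object under FTT=1 (RAID-1 mirroring) with Stripes=2.
def raid1_components(ftt: int = 1, stripes: int = 2) -> list:
    """(ftt + 1) mirror copies, each a RAID-0 of `stripes` stripe
    components, plus one witness component to break ties. Placement
    rules then put each copy on a different ESXi host."""
    comps = ["mirror-%d/stripe-%s" % (m + 1, chr(ord("a") + s))
             for m in range(ftt + 1) for s in range(stripes)]
    comps.append("witness")
    return comps

layout = raid1_components()
# FTT=1, Stripes=2 -> 2 mirrors x 2 stripes + 1 witness = 5 components
```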
All Flash Only
• With “FTT=1” availability RAID-5
– 3+1 Layout (4 host minimum)
– Recommend 5 hosts for hot rebuilds
– 1.33x instead of 2x overhead
• 20GB disk normally takes 40GB, now just ~27GB
• Guaranteed 33% Space Reduction vs Mirroring
28
[Diagram: RAID-5 rotates parity across 4 ESXi hosts (3 data + 1 parity per stripe); RAID-6 rotates double parity across 6 ESXi hosts (4 data + 2 parity per stripe)]
• With “FTT=2” availability RAID-6
– 4+2 (6 host minimum)
– Recommend 7 hosts for hot rebuilds
– 1.5x instead of 3x overhead
• 20GB disk normally takes 60GB, now just ~30GB
• Guaranteed 50% Space Reduction vs Mirroring
Failure Tolerance Method – RAID 5/6 (Erasure Coding)
Not supported in 2 Node or Stretched Cluster configurations (host/fault domain minimum requirements)
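The overhead figures above can be verified with simple arithmetic, assuming the stated layouts: RAID-1 stores (FTT + 1) full copies, while RAID-5 (3+1) and RAID-6 (4+2) add parity segments on top of the data.

```python
# Back-of-the-envelope check of the slide's space-overhead numbers.
def mirrored_gb(size_gb: float, ftt: int) -> float:
    return size_gb * (ftt + 1)

def erasure_coded_gb(size_gb: float, data: int, parity: int) -> float:
    return size_gb * (data + parity) / data

vmdk = 20  # GB
raid1_ftt1 = mirrored_gb(vmdk, ftt=1)              # 40 GB (2x)
raid5 = erasure_coded_gb(vmdk, data=3, parity=1)   # ~26.7 GB (1.33x)
raid1_ftt2 = mirrored_gb(vmdk, ftt=2)              # 60 GB (3x)
raid6 = erasure_coded_gb(vmdk, data=4, parity=2)   # 30 GB (1.5x)

raid5_savings = 1 - raid5 / raid1_ftt1             # 33% vs mirroring
raid6_savings = 1 - raid6 / raid1_ftt2             # 50% vs mirroring
```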
Quality of Service
29
[Charts: per-VM IOPS over time – left: no IOPS limit; right: IOPS limit set at 22K]
• Complete visibility into IOPS consumed per VM/Virtual Disk
• 1 Click-to-configure limit
• Eliminate noisy neighbor issues
• Granularly manage performance SLAs: independent of VM
provisioning order
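The slide doesn't describe vSAN's internal QoS mechanism; a token bucket is one common way a per-VM or per-virtual-disk IOPS cap like the 22K limit above could be enforced, sketched here purely for illustration.

```python
# Illustrative token-bucket IOPS limiter (not vSAN's actual implementation).
class IopsLimiter:
    def __init__(self, limit_iops: int):
        self.limit = limit_iops
        self.tokens = float(limit_iops)  # bucket starts full
        self.last = 0.0

    def admit(self, now: float) -> bool:
        """Admit one I/O at time `now` (seconds) if a token is available."""
        # Refill proportionally to elapsed time, capped at the limit.
        self.tokens = min(self.limit,
                          self.tokens + (now - self.last) * self.limit)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # in practice the I/O is queued/delayed, not dropped

lim = IopsLimiter(limit_iops=5)
admitted = sum(lim.admit(0.0) for _ in range(10))  # burst of 10 at t=0
# only the tokens in the bucket are admitted; the rest must wait
```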
vSAN Core - Other Improvements
30
Client Cache
• Write-through, in-memory read cache
– 0.4% of total host memory, up to 1GB per host
• “Local” to the virtual machine
• Low overhead, big impact
Sparse Swap
• Reclaim space used by memory swap objects
• A host advanced option disables the space reservation for swap objects:
esxcfg-advcfg -g /VSAN/SwapThickProvisionDisabled   (check current value)
esxcfg-advcfg -s 1 /VSAN/SwapThickProvisionDisabled (enable sparse swap)
https://github.com/jasemccarty/SparseSwap
Software Checksum
• Overview
– End-to-end checksum to detect and resolve silent disk errors
– Checksum is enabled by default on all objects
– On a checksum verification failure, the data is fetched from another copy
– Disk scrubbing runs in the background
• Benefits
– Provide additional level of data integrity
– Automatic detection and resolution of silent disk errors
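The detect-and-resolve flow above can be modeled as follows. This is a toy sketch, not VMware code: each block carries a CRC, a verification failure on read causes the data to be served from another replica, and the corrupted copy is repaired from the good one.

```python
import zlib

def write_block(replicas, data: bytes) -> None:
    """Store data plus its checksum on every mirror copy."""
    for r in replicas:
        r["data"], r["crc"] = data, zlib.crc32(data)

def read_block(replicas) -> bytes:
    """Return verified data; repair any copy that fails its checksum."""
    for r in replicas:
        if zlib.crc32(r["data"]) == r["crc"]:
            good = r["data"]
            # resolve: rewrite silently-corrupted copies from the good one
            for bad in replicas:
                if zlib.crc32(bad["data"]) != bad["crc"]:
                    bad["data"], bad["crc"] = good, zlib.crc32(good)
            return good
    raise IOError("all replicas failed checksum verification")

replicas = [{}, {}]
write_block(replicas, b"x" * 4096)
replicas[0]["data"] = b"y" * 4096   # inject a silent disk error
data = read_block(replicas)         # served from the intact replica
```

The background disk scrubber mentioned above would run the same verify-and-repair loop proactively, without waiting for a read.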
VMware vSphere & vSAN
Hybrid & All Flash
vSAN
Save up to 20%* per ROBO – eliminate standard switches between the 2 nodes
Reduced network complexity with reliable, low-cost crossover cables
Better compliance with separate vSAN data traffic and witness VM traffic
Savings: 2-node ROBO avg cost: $11–12K USD; 2x 10G switch cost: $2–3K USD
Total cost savings with cross-connect cables: approximately 15–20%
* Source: VMware Internal Analysis based on stated pricing.
Central Data Center
Lower ROBO Costs and Complexity with 2-Node Direct Connect
vSAN Datastore
vSAN
Witness Traffic
witness
vSAN Stretched Cluster
32
Witness Compute and Storage Requirements
witness
5ms RTT, 10GbE
Today
VMware vSphere & vSAN
Details
• The vSAN witness appliance does not hold any customer data
• The witness VM only holds a very small amount of metadata
• The vSAN witness appliance cannot run virtual machines
Resource Requirements
• Large (more than 500 VMs or 45,000 witness components)
– Memory: 32 GB; CPU: 2 vCPU
– Storage: 350 GB for capacity and 10 GB for the caching tier
• Medium (up to 500 VMs or 21,000 witness components)
– Memory: 16 GB; CPU: 2 vCPU
– Storage: 1 x 8 GB boot disk, 1 x 10 GB “SSD”, 1 x 350 GB HDD
• Tiny (up to 10 VMs or 750 witness components)
– Memory: 8 GB; CPU: 2 vCPU
– Storage: 1 x 8 GB boot disk, 1 x 10 GB “SSD”, 1 x 15 GB HDD
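The sizing tiers above translate directly into a lookup. The thresholds come from the slide; the helper function itself is hypothetical, not a VMware tool.

```python
# Hypothetical helper mirroring the witness appliance sizing table.
def witness_size(vm_count: int) -> str:
    if vm_count <= 10:
        return "Tiny"    # 8 GB RAM, 2 vCPU, 8 GB boot + 10 GB SSD + 15 GB HDD
    if vm_count <= 500:
        return "Medium"  # 16 GB RAM, 2 vCPU, 8 GB boot + 10 GB SSD + 350 GB HDD
    return "Large"       # 32 GB RAM, 2 vCPU, 350 GB capacity + 10 GB cache
```

For example, a 2-node ROBO site with a dozen VMs would land in the Medium tier, not Tiny.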
SDDC Building Blocks
• Adding capacity…
33
[Diagram: two vSphere + vSAN building blocks, each managed by vCenter and running VMs, with a compute utilization gauge]
SDDC Building Blocks
• Scale up by adding drives
34
[Diagram: the same building blocks after adding drives to each host; VM density increases]
SDDC Building Blocks
• Scale out by adding hosts
35
[Diagram: a third vSphere + vSAN building block added to the cluster; VMs spread across all three hosts]
Demo: Scale Out by Adding a Host
36
Capacity can be added one node at a time. We will take a 3-node cluster to 4 nodes.
vSAN Disk Groups
• vSAN uses the concept of disk groups to pool flash devices and magnetic disks into single management constructs
• Disk groups are composed of at least 1 flash device and 1–7 capacity devices
– Flash devices are used for read cache / write buffer
– Storage capacity can be provided by either magnetic disks or flash-based devices
– Disk groups cannot be created without a flash device
– Up to 5 disk groups per host = 35 capacity devices per host
37
disk group disk group disk group disk group disk group
Each host: 5 disk groups max. Each disk group: 1 flash device + 1-7 capacity devices
vSAN-Enabled vSphere Cluster, Scaled Up and Out: 64 Hosts
• Cluster: 2–64 hosts; up to 5 SSD caching devices and 35 HDD capacity devices per host
• Capacity: 13.4 petabytes (using 6TB NL-SAS HDDs)
• Performance: 4.2M IOPS
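The capacity claim above follows from the disk-group maximums: 5 disk groups per host times 7 capacity devices per group gives 35 devices per host, and 64 hosts of 6TB NL-SAS drives lands at roughly 13.4 PB raw.

```python
# Checking the slide's maximum-scale numbers.
HOSTS = 64
DISK_GROUPS_PER_HOST = 5
CAPACITY_DEVICES_PER_GROUP = 7
DRIVE_TB = 6  # 6TB NL-SAS HDDs

devices_per_host = DISK_GROUPS_PER_HOST * CAPACITY_DEVICES_PER_GROUP  # 35
raw_tb = HOSTS * devices_per_host * DRIVE_TB   # 13,440 TB
raw_pb = raw_tb / 1000                         # ~13.4 PB raw
```

Note this is raw capacity; usable capacity depends on the storage policy (mirroring vs. erasure coding) applied to each object.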
vSAN Implementation Details
vSAN requires:
– Minimum of 3 hosts in a cluster configuration
• All 3 hosts must contribute storage
• Exception: 2 Node configuration can use an external witness VM
• Max of 64 hosts per cluster, 6400 VMs
– Each host requires:
• Any server hardware listed on the VMware Compatibility Guide for Systems / Servers
• SAS/SATA controllers (supporting pass-through mode or RAID 0 mode)
• At least 1 Caching Device per host - Flash based devices (SSD, PCIe, NVMe)
• At least 1 Capacity Device per host - Flash based devices (SSD, PCIe, NVMe) or Magnetic disks (HDD)
• All flash-based devices, and storage controllers, drivers, and firmware MUST be listed on the VMware Compatibility Guide for vSAN
• Boot from 4GB/8GB/16GB USB, SD Cards, SATADOM
• Network connectivity
– 10GbE Ethernet (Preferred – Hybrid, Required – All-Flash, Can be shared) or 1GbE Ethernet (Dedicated – Hybrid)
– vSphere Standard or Distributed switches (VDS Licensing & NIOC included with vSAN licensing)
– Multicast enabled
39
VMware Compatibility Guide
I’m sure you know about the VMware Compatibility Guide (aka HCL).
VMware vSAN Ready Nodes
Ready Node Configurator - vsanreadynode.vmware.com
Did you also know about the VMware vSAN Compatibility Guide (aka VCG)? Vendors include: Cisco, Dell, Fujitsu, HPE, Hitachi, Huawei, Inspur, Intel, Lenovo, NEC, Quanta Computer Inc, Sugon, Supermicro
Dell EMC VxRail
Built on Industry-Leading VMware Hyper-Converged Software (HCS)
VMware Validated Designs
www.vmware.com/solutions/software-defined-datacenter/validated-designs.html
Broadest Deployment Options from HCI to SDDC
44
Built on Industry-Leading VMware Hyper-Converged Software (HCS)
Certified Solutions Engineered Appliances
vSAN Ready Nodes
Cloud Foundation
Certified Partner Hardware
NSX
vRealize
Lifecycle Management
EMC / VCE / Dell
HCI Appliance
VMware HCS (vSAN + vSphere + vCenter) – common to all three deployment options
SDDC Manager
SDDC
Resources for Reference
• StorageHub – storagehub.vmware.com
• Virtual Blocks – blogs.vmware.com/virtualblocks
• vSAN click-thru demos – bit.ly/vsanclickthru
• Amazing traveling vSAN/SDDC demo rig by Zach Widing – bit.ly/portablesddc
• Dispelling myths about vSAN and flash by John Nicholson – bit.ly/vsanmyths
• Technical details about Intel Optane by Paul Braren – bit.ly/optanedetails
45
Resources for Reference
Intel Optane SSD DC P4800X Series can serve as an ultra-fast vSAN cache device, a capacity device, or a RHEL/SLES 8x memory extender using Intel Memory Drive Technology
46
Questions?
Thank You!