© Hitachi Data Systems Corporation 2011. All Rights Reserved.
THE BEST-KEPT INSIDER SECRET: VMWARE VSPHERE 5 CLOUD DEPLOYMENT
MICHAEL HEFFERNAN, SOLUTIONS PRODUCT MANAGER, VMWARE
PATRICK ALLAIRE, SENIOR PRODUCT MARKETING MANAGER
WEBTECH EDUCATIONAL SERIES
The Best-kept Insider Secret: VMware vSphere 5 Cloud Deployment
September 21, 9am PT, 12pm ET
‒ Learn why the industry's most demanding customers are deploying clouds with the storage virtualization leader. Hear Michael Heffernan, Hitachi Solutions Product Manager, VMware, and Patrick Allaire, Senior Product Marketing Manager, give you the inside information you need to understand why VMware vSphere 5 cloud deployment on Hitachi infrastructure is the way to go.
STORAGE IN THE CLOUD SERIES
Storage Virtualization: Delivering Storage as a Utility for the Cloud
September 28, 9am PT, 12pm ET
‒ Attend this informative session to learn how the Hitachi Command Suite can help you meet the demanding storage requirements of private cloud computing.
MAINFRAME SERIES
Advances in Mainframe Storage, October 19, 9am PT, 12pm ET
Replication in a Mainframe Storage Environment, October 26, 9am PT, 12pm ET
Hitachi VSP Performance in a Mainframe Environment, November 2, 9am PT, 12pm ET
AGENDA
Top VMworld 2011 myths
Storage Design and Architecture for vSphere
‒ VMware and Hitachi integration
‒ Hitachi AMS2000 and Virtual Storage Platform formula
‒ VMware storage APIs: VASA and VAAI
Myth #1: It makes more sense to deploy vSphere 5 over NFS
WHY NAS? (ADVANTAGES)
Flexibility and Cost Savings
‒ vSphere supports iSCSI, FC, FCoE and NFS (IP)
‒ All functionality of vSphere can be exploited over NFS, with a few exceptions:
  ‒ Cannot cluster VMs using Microsoft Cluster Server
  ‒ Cannot boot the physical host directly from NFS (requires some internal disk)
  ‒ No true multipath I/O engine (although the network can provide fault tolerance)
‒ NFS/IP Ethernet is perceived as less costly, less complex and more flexible in deployment
‒ NFS provides a level of virtualization that abstracts some physical-level constraints: simple provisioning, LUN queue management, VMFS SCSI reservations
‒ NFS provides the ability to dynamically resize VM datastores
WHY NOT NAS? (DISADVANTAGES)
Reliability
‒ Failover is rapid and clean over Fibre Channel. NFS implementations have higher timeouts.
‒ IP/Ethernet networks while redundant are not generally as robust
Performance
‒ While you can make NFS perform on par with FC for a given workload given the right resources, it will consume 15% or more additional host CPU (substantial in sequential I/O)
‒ No native multipathing or load balancing (NFS cannot load-balance I/O within a datastore)
‒ VAAI-enabled subsystems have addressed SCSI reservation issues
WHY HITACHI NAS FOR VSPHERE?
Hitachi NAS offers a highly scalable platform
‒ Up to 8 high-performance nodes in a single cluster (almost 4X more scalable than the leading high-end vendor)
‒ File systems can be extended to 256TB (up to 16X)
Hitachi NAS provides a Tiered File System to maximize vSphere performance
‒ Accelerates the metadata lookups that occur when processing snapshots across many large VMDK files
Hitachi NAS JetClone provides space-efficient copies of VMs
Hitachi NAS is VMware certified
JetMirror provides object-based replication over WAN
AMS AND VSP STRONGER THAN OTHER NAS VENDORS
PURE NAS PLATFORMS HAVE WEAK BLOCK IMPLEMENTATIONS
Despite having many software features and dedupe, these platforms do not have robust Fibre Channel capabilities:
‒ Dual-node failover time (a 15-45 sec. outage) vs. the VSP's fault-tolerant architecture with a 100% data availability warranty
‒ No active/active symmetric controller load balancing like the Hitachi AMS2000
‒ LUNs are basically files on a file system rather than native block devices (liable to fragment over time)
‒ OS and parity checksum overheads result in lower usable capacity
‒ No integrated encryption offering
‒ Limited virtualization capabilities
‒ VMware View Composer 5 support requires intrinsic inline dedupe, while current primary storage dedupe is post-process and its operation impacts host I/O response time
DEPLOYMENT RECOMMENDATION
Evaluate both options
It's not about block vs. file
‒ Both are valid options that have strengths and weaknesses
Think of your storage as a service
‒ NFS is a valid option when layered on top of our enterprise-level block platform
‒ NAS scalability is a must in large or high-growth environments
‒ When storage is the bottleneck, use automated tiering to balance performance and cost
Myth #2: Storage DRS and Profile-Driven Storage support tier 1 application requirements
STORAGE DRS AND PROFILE-DRIVEN STORAGE
Overview
‒ Tier storage based on performance characteristics (i.e. datastore clusters)
‒ Simplify initial storage placement
‒ Load balance based on I/O
Benefits
‒ Eliminate VM downtime for storage maintenance
‒ Reduce time for storage planning/configuration
‒ Reduce errors in the selection and management of VM storage
‒ Increase storage utilization by optimizing placement
(Diagram: Tier 1 / Tier 2 / Tier 3 datastore clusters with high I/O throughput)
WHAT DOES STORAGE DRS PROVIDE?
Storage DRS provides the following:
1. Initial placement of VMs and VMDKs based on available space and I/O capacity
2. Load balancing between datastores in a datastore cluster via Storage vMotion, based on storage space utilization
3. Load balancing via Storage vMotion based on latency
Storage DRS also includes affinity/anti-affinity rules for VMs and VMDKs:
• VMDK Affinity – keep a VM's VMDKs together on the same datastore (the default affinity rule)
• VMDK Anti-Affinity – keep a VM's VMDKs separate on different datastores
• Virtual Machine Anti-Affinity – keep VMs separate on different datastores
Affinity rules cannot be violated during normal operations.
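To make the rules concrete, here is a small sketch (hypothetical Python, not VMware code) that enumerates which datastore assignments each rule permits for one VM's disks:

```python
from itertools import product

def allowed_placements(vmdks, datastores, rule):
    """Enumerate valid datastore assignments for a VM's VMDKs under a rule."""
    valid = []
    for combo in product(datastores, repeat=len(vmdks)):
        if rule == "vmdk_affinity" and len(set(combo)) != 1:
            continue  # default rule: all of the VM's VMDKs share one datastore
        if rule == "vmdk_anti_affinity" and len(set(combo)) != len(combo):
            continue  # every VMDK must sit on a different datastore
        valid.append(combo)
    return valid

# With two VMDKs and two datastores, affinity allows only the
# "both on ds1" or "both on ds2" assignments.
```

Anti-affinity inverts the constraint: with the same two disks and two datastores, only the two split assignments survive.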
STORAGE DRS OPERATIONS – INITIAL PLACEMENT
Initial Placement - VM/VMDK create/clone/relocate.
• When creating a VM, you select a datastore cluster rather than an individual datastore and let SDRS choose the appropriate datastore.
• SDRS will select a datastore based on space utilization and I/O load trend.
• By default, all the VMDKs of a VM will be placed on the same datastore within a datastore cluster (VMDK affinity rule), but you can choose to have VMDKs assigned to different datastores.
(Diagram: a 2TB datastore cluster of four 500GB datastores, with 300GB, 260GB, 265GB and 275GB available respectively)
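The placement logic can be sketched roughly as follows, using the free-space figures from the diagram above; the latency values and the exact ranking function are illustrative assumptions, not the real SDRS algorithm:

```python
def pick_datastore(datastores):
    """Prefer the most free space; break ties on the lower I/O latency trend."""
    return max(datastores, key=lambda d: (d["free_gb"], -d["latency_ms"]))

cluster = [  # four 500GB datastores in a 2TB datastore cluster
    {"name": "ds1", "free_gb": 300, "latency_ms": 12},
    {"name": "ds2", "free_gb": 260, "latency_ms": 5},
    {"name": "ds3", "free_gb": 265, "latency_ms": 8},
    {"name": "ds4", "free_gb": 275, "latency_ms": 7},
]
# ds1 offers the most free space (300GB), so the new VM's VMDKs land there
```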
STORAGE DRS OPERATIONS – LOAD BALANCING
Load balancing - DRS triggers on space usage and latency thresholds.
• The algorithm makes migration recommendations when I/O response time and/or space utilization thresholds have been exceeded.
• Space utilization statistics are constantly gathered by vCenter; the default threshold is 80%.
• Load balancing is based on I/O workload and space, which ensures that no datastore exceeds the configured thresholds.
• Storage DRS will do a cost/benefit analysis.
• For I/O load balancing, Storage DRS leverages Storage I/O Control functionality.
STORAGE DRS WORKFLOW
The I/O load trend is evaluated every 8 hours, based on the past day's history. Default latency threshold: 15ms.
(Diagram: DRS triggers at 8-hour intervals around the datastore cluster)
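The two default triggers can be summarized in a few lines. This is an illustrative sketch only; as noted above, real Storage DRS also runs a cost/benefit analysis before actually recommending a Storage vMotion:

```python
SPACE_THRESHOLD = 0.80      # default space utilization threshold
LATENCY_THRESHOLD_MS = 15.0  # default I/O latency threshold

def recommend_migration(used_gb, capacity_gb, avg_latency_ms):
    """True when either the space or the latency threshold is exceeded."""
    over_space = used_gb / capacity_gb > SPACE_THRESHOLD
    over_latency = avg_latency_ms > LATENCY_THRESHOLD_MS
    return over_space or over_latency

# 450GB used of 500GB (90%) trips the space trigger even at low latency
```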
DRS WITH A TIER 1 APPLICATION
Customer processing sample for SAP with an Oracle database: average utilization of 21%, peak size 3-5x the average.
(Chart: utilization [0-100%] by time of day over days 1-7, alternating "DRS OFF" and "SAMPLING" windows)
Myth #3: Storage features like automated sub-LUN tiering no longer make sense with vSphere 5 Storage DRS
STORAGE DRS VS. AUTOMATED SUB-LUN TIERING
1. DRS (datastore cluster; triggers every 8 hours)
- Eliminate VM downtime for storage maintenance
- Reduce time for storage planning/configuration
- Reduce errors in the selection and management of VM storage
- Increase storage utilization by optimizing placement
2. SUB-LUN TIERING
- Virtualize devices into a pool of capacity and allocate by pages
- Eliminate allocated-but-unused waste by allocating only the pages that are used
- Optimize storage performance by spreading the I/O across more arms
- Simplify management tasks
- Further reduce OPEX
- Further improve return on assets
(Cycle: monitor physical I/O to pages, weight page I/O and set tier ranges, relocate pages)
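The sub-LUN tiering cycle (monitor page I/O, rank by heat, relocate) can be sketched as follows; the page names, tier labels and the simple two-tier split are illustrative assumptions:

```python
def assign_tiers(page_io_counts, fast_tier_pages):
    """Map each page to 'SSD' or 'SAS' based on its monitored I/O count:
    the hottest `fast_tier_pages` pages are relocated to the fast tier."""
    ranked = sorted(page_io_counts, key=page_io_counts.get, reverse=True)
    return {page: ("SSD" if rank < fast_tier_pages else "SAS")
            for rank, page in enumerate(ranked)}

heat = {"page0": 900, "page1": 12, "page2": 540, "page3": 3}
# With room for two pages on SSD, page0 and page2 are relocated up
```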
It’s Time to Rethink Storage Design & Architecture for vSphere 5…
Hitachi + VMware Integration
VMware Storage APIs
(Diagram: VMware ESXi 5.0 with the vStorage APIs for Array Integration: provision VMs from template, Storage vMotion, improved thin provisioning disk performance, VMFS shared storage pool scalability, dead space reclamation)
It is all about the ecosystem
‒ Standardized and open for all vendors
‒ The OS is API-driven, which eliminates custom plug-ins into the OS
‒ APIs leverage each other under the covers
vStorage API for Array Integration
Write Same, Zero (Block Zeroing)
Eliminates redundant and repetitive write commands, which means less I/O for common tasks
Benefit: Speeds provisioning of new VMs; key to supporting large scale VMware or VDI deployments
Full Copy (Xcopy)
Leverages storage array’s ability to mass copy, snapshot and move blocks via SCSI commands.
Benefit: Speeds up cloning and storage vMotion; allows for faster copies of VMs
Hardware-assisted Locking
Stop locking LUNs; start locking blocks only. Offloads SCSI commands to the storage array.
Benefit: Removes SCSI reservation conflicts; enables faster locking; improves VM density performance
Thin Provisioning (vSphere 5.0)
TP-STUN - Error Code to Report “Out of Space” for Thin Volume
UNMAP – Zero Page Reclaim for Virtual Disks in conjunction with using “Write Same” command on Thin Volume
*Note: VAAI is currently supported on the Hitachi Adaptable Modular Series 2000 family, VSP and USP V/VM. The Thin Provisioning API will be supported with ESXi 5.0.
Full Copy – VSP Test Result
(Charts: ESX host IOPS, 0-6000, sampled at 5s intervals, with VAAI ON vs. VAAI OFF)
Block Zeroing – VSP Test Result
Block Zeroing (Write Same) functionality: the storage array writes the content of one logical block to a range of logical blocks, including external virtualized storage.
Benefit: eliminates redundant and repetitive write commands.
Provisioning a 160GB EagerZeroedThick VMDK in HDP volumes:

VSP Storage         | VAAI Status | HDP Pool Usage | Time
Internal            | OFF         | ~160GB         | 00:06:05
Internal            | ON          | 0.6GB          | 00:00:12
Virtualized storage | OFF         | ~160GB         | 00:15:15
Virtualized storage | ON          | 0.6GB          | 00:00:23

96 to 98% improvement
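The 96 to 98% figure checks out when the table's timings are converted to seconds:

```python
def pct_improvement(seconds_off, seconds_on):
    """Percent reduction in provisioning time with VAAI enabled."""
    return round(100 * (1 - seconds_on / seconds_off), 1)

internal = pct_improvement(6 * 60 + 5, 12)       # 00:06:05 vs 00:00:12
virtualized = pct_improvement(15 * 60 + 15, 23)  # 00:15:15 vs 00:00:23
# internal -> 96.7, virtualized -> 97.5: both inside the quoted 96-98% range
```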
vSphere 5 introduces VMFS-5 with massive improvements
VMFS-5 will further leverage Hitachi's thin provisioning technology
Feature                                                          | VMFS-3              | VMFS-5
2TB+ VMFS volumes (up to 64TB)                                   | Yes (using extents) | Yes
Support for 2TB+ single-extent VMFS                              | No                  | Yes
Unified block size (1MB)                                         | No                  | Yes
Atomic Test & Set enhancements (part of VAAI, locking mechanism) | No                  | Yes
Sub-blocks for space efficiency                                  | 64KB (max ~3k)      | 8KB (max ~30k)
Small file support                                               | No                  | 1KB
REMOVE LAYERS OF COMPLEXITY
A Single 1PB Liquid Pool of Storage Capacity for All Your Virtualized Storage
UP TO 60TB SINGLE VMFS VOLUME
Let the storage hardware do all the work
Closer Integration of Applications and Storage Needed for Data Center Transformation
The need for integration
• Applications have a software view and no visibility into infrastructure
• Storage has an infrastructure view and no visibility into applications
Storage view: LDEVs pooled into an HDP/HDT pool, presented as HDP volumes (virtual LUNs)
Software view: ESX clusters running many VMs, with vMotion moving VMs across hosts
(Diagram: a 2TB VMFS volume vs. an ESXi 5.0 64TB single VMFS)
Thin provisioning: a powerful form of storage virtualization
Hitachi Dynamic Provisioning (HDP): internal and external virtualized storage
An example with thin provisioning + VAAI:
‒ A 60TB VMFS volume is created in a 1PB HDP pool
‒ A set of VMDKs is created, consuming only 5.3TB
‒ The other 54.7TB is available for other applications
Additionally, for space efficiency + performance, a single 31GB virtual disk consumes only 1GB of capacity.
vSphere 5.0 reclaims dead space automatically when a virtual disk is deleted or vMotioned.
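The thin-provisioning accounting above works out as in this minimal sketch; the class and method names are hypothetical, not a Hitachi or VMware API:

```python
class ThinPool:
    """Capacity is drawn from the pool only as data is written,
    not when a volume is provisioned."""
    def __init__(self, capacity_tb):
        self.capacity_tb = capacity_tb
        self.written_tb = 0.0

    def provision_volume(self, size_tb):
        # Thin: provisioning reserves no physical capacity
        return {"size_tb": size_tb, "written_tb": 0.0}

    def write(self, volume, tb):
        volume["written_tb"] += tb
        self.written_tb += tb  # pool pages are allocated only now

    @property
    def free_tb(self):
        return self.capacity_tb - self.written_tb

pool = ThinPool(1024)             # ~1PB HDP pool
vmfs = pool.provision_volume(60)  # 60TB VMFS volume
pool.write(vmfs, 5.3)             # the VMDKs actually consume 5.3TB
# The volume has 60 - 5.3 = 54.7TB unwritten; the pool still has ~1018.7TB free
```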
THE HITACHI AMS 2000 FORMULA – vSPHERE 5.0
(Diagram: VMware vCenter Server with Profile-driven Storage + Storage DRS, linked through the vStorage API for Storage Awareness (VASA); a VMware ESXi cluster using Native Multipathing (NMP) with round robin; VMFS-5 volumes of up to 60TB on the Hitachi AMS 2000 family, combining Hitachi Dynamic Provisioning, the vStorage API for Array Integration (T10, 5 primitives) and active/active symmetric controllers)
HITACHI VIRTUAL STORAGE PLATFORM FORMULA – vSPHERE 5.0
(Diagram: VMware vCenter Server with Profile-driven Storage and Storage DRS, linked through VASA; a VMware ESXi cluster using Native Multipathing (NMP) with round robin; 60TB VMFS-5 volumes on the VSP, combining the vStorage API for Array Integration with Hitachi Dynamic Provisioning; the VSP externalizes up to 255PB of storage from arrays such as CLARiiON, IBM DS, EMC DMX, Thunder 9585V™, Lightning 9980V™ and AMS 2000)
256 VMFS volumes per ESXi host cluster: 256 x 60TB = 15.36PB of VMFS datastores
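The datastore math on the slide, worked out (decimal units, as the slide uses):

```python
volumes_per_cluster = 256  # VMFS volumes per ESXi host cluster
volume_size_tb = 60        # maximum single VMFS-5 volume used here

total_tb = volumes_per_cluster * volume_size_tb  # 15,360 TB
total_pb = total_tb / 1000                       # 15.36 PB
```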
THE BOTTOM LINE
HITACHI DATA SYSTEMS AND VMWARE TOGETHER
Lower your costs
Accelerate your time to value
Transform your data center
QUESTION & ANSWER ROUNDTABLE
UPCOMING WEBTECH SESSIONS
September Cloud Series
Storage Virtualization: Delivering Storage as a Utility for the Cloud, September 28, 9am PT, 12pm ET
Mainframe Series
Advances in Mainframe Storage, October 19, 9am PT, 12pm ET
Replication in a Mainframe Storage Environment, October 26, 9am PT, 12pm ET
Hitachi VSP Performance in a Mainframe Environment, November 2, 9am PT, 12pm ET
Please check www.hds.com/webtech next week for more information and for:
Link to the recording, the presentation and Q&A (available next week)
Schedule and registration for upcoming WebTech sessions
THANK YOU