vSphere Deep Dive
Magnus Bergman, Joel Lindberg
Agenda
• VMware vCloud® Suite Launch Context and Product Set
• vSphere 5.0 Recap
• vSphere 5.1 Overview
  • Compute, Storage, Network—Enhancements and Features
  • Availability, Security, Automation—Enhancements and Features
  • vCenter Server—Enhancements and Features
  • Additional Features and Enhancements—“The Best of the Rest”
• Memory, CPU and Network Best Practices
VMware vCloud Suite
VMware vSphere 5.0
Infrastructure Services and Application Services

Scalability
• 32-way SMP • 1 TB VMs

Security
• ESXi Firewall

Availability
• New HA Architecture • vMotion over higher-latency links

Network
• Network I/O Control (per-VM controls)
• Distributed Switch (NetFlow, SPAN, LLDP)

Storage
• Storage DRS • Profile-Driven Storage • VMFS 5 • Storage I/O Control (NFS)

Compute
• ESXi Convergence • Auto Deploy • HW version 8

vCenter Server
• Virtual Appliance • Web Client
What’s New in vSphere 5.1?
VMware vSphere 5.1

Security
• vShield Endpoint

Automation
• Storage DRS and Profile-Driven Storage integration with vCD
• Enhanced Auto Deploy

Availability
• Data Protection • Replication • vMotion w/o shared storage
• Zero-downtime upgrades of VMware Tools

Network
• Enhanced Distributed Switch
• SR-IOV support

Storage
• Storage Appliance
• Storage Space Reclamation for VDI

Compute
• HW version 9 • 64-way SMP • 1 TB VMs

vCenter Server 5.1
• Single Sign-On (vCD, vShield, vCenter)
• vSphere Web Client
• Enhanced vCenter Orchestrator
Compute, Storage, Network—Enhancements and Features
vCenter Server with Auto Deploy

Overview
• Deploy and patch vSphere hosts in minutes using a new “on the fly” model
• Coordination with vSphere Host Profiles and Image Profiles
• Two new operating modes

Benefits
• Fast initial deployment and patching
• Centralized host and image management
• Reduced manual deployment and patch processes
• Deployment continues even when a failure occurs
Distributed Switch

Overview
The Distributed Switch now delivers:
• Network health check
• Configuration backup and restore
• Rollback and recovery
• LACP support

Benefits
• Visibility into physical and virtual network status
• Backup and restore of network settings
• Fast recovery from lost connectivity or incorrect configurations
vSphere Scales to Support Mission-Critical Applications

Overview
• Create virtual machines with up to 64 vCPUs and 1 TB of vRAM
• 2x the size supported by previous vSphere versions

Benefits
• Run even the largest applications in vSphere, including very large databases
• Virtualize more applications than ever before (Tier 1 and 2)
Availability, Security, and Automation— Enhancements and Features
vMotion (w/o Shared Storage)

Overview
• Live migration of a virtual machine without the need for shared storage
• Extends VMware’s technology for automated virtual machine movement

Benefits
• Zero-downtime migration
• No dependency on shared storage
• Lower operating cost
• Helps meet service-level and performance SLAs
vSphere Data Protection

Overview
• New backup and recovery tool for the vSphere platform
• Replaces vSphere Data Recovery
• Based on EMC Avamar
• Data is deduplicated and stored on the VDP appliance

Benefits
• Uses less disk space thanks to deduplication
• Simple setup and management
• Proven technology

*All editions and kits with the exception of Essentials
vSphere Replication

Overview
• Virtual machine–level replication performed by the vSphere host, from Site A (Primary) to Site B (Recovery)
• Included with vSphere*

Benefits
• Low-cost, efficient replication option
• Simple setup from within vCenter Server
• Integration with SRM enables an automated DR process

*All editions and kits with the exception of Essentials
vShield Endpoint

Overview
• Secure your VMs with offloaded anti-virus and anti-malware (AV) solutions without the need for agents
• Included with vSphere*

Benefits
• Simplified AV administration
• Higher consolidation ratios by preventing AV storms
• Improved performance

*All editions and kits with the exception of Essentials
vCenter Server—Enhancements and Features
Web Client

Overview
A new, improved interface into vSphere delivers:
• Browser-based experience
• Custom tagging
• Scalability
• Enhanced workflow management

Benefits
• Platform independence
• Tagging based on specific business cases
• Manage more objects and 3x more active sessions than ever before
• Pause and resume even the most complex workflow or task

The vSphere Web Client interface comprises the Object Navigator, inventory objects, tabs, portlets, a sidebar extension, custom actions, and right-click extensions.
Web Client—Native Plug-In Support
Single Sign-On

Overview
• Sign on once rather than multiple times to vCenter Server

Benefits
• Faster operations
• Less complexity
• Support for multiple identity services
• Future building block for other VMware products and solutions

vSphere solutions (vCenter, vCO, the Inventory Service, and the vSphere Web Client) authenticate through the vSphere platform services—Single Sign-On authentication, authorization, and auditing—which connect to customer identity sources such as Active Directory, OpenLDAP, NIS, and local OS users.
vCenter Orchestrator (vCO)

Overview
Workflow engine enhancements:
• Web Client integration (launch workflows)
• New workflow design
• Simplified configuration and installation

Benefits
• Execute workflows from a single interface
• Simplicity through drag-and-drop workflow creation
• Automatic configuration
• Deploy as a virtual appliance
Additional Features and Enhancements
The Best of the Rest

Platform
• ESXi Platform Updates
• New VM Features and Capabilities
• Host Profiles

Network
• Port Mirroring Enhancements
• Scale

OS Support
• Windows 8 Server and Desktop

Storage
• VMFS File Sharing Limits
• Space-Efficient Sparse Virtual Disks
• 5-Node MSCS Cluster
• Storage Protocol Enhancements
• Storage Resource Management Enhancements
• VMware vCloud® Director™ Interoperability

**Details on the new vSphere Storage Appliance 1.5 (which works in conjunction with vSphere 5.1) are available in a separate customer overview
© 2009 VMware Inc. All rights reserved
MEMORY
Memory – Host Memory Management
Memory reclamation occurs when host memory is under contention. The techniques, roughly in order of increasing performance impact:
• Transparent Page Sharing
• Ballooning
• Compression
• Swapping
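The escalation through these techniques can be sketched as a simple model. The percentage thresholds below are illustrative placeholders, not ESXi's actual minFree values:

```python
# Conceptual sketch (not VMware code) of how the host escalates memory
# reclamation as free memory shrinks. Thresholds are illustrative only.
def reclamation_actions(free_pct):
    actions = ["transparent page sharing"]  # runs opportunistically anyway
    if free_pct < 6.0:                      # contention: reclaim via guests
        actions.append("ballooning")
    if free_pct < 4.0:                      # heavier contention: hypervisor steps in
        actions.extend(["compression", "host swapping"])
    return actions
```

With plenty of free memory only page sharing is active; as contention grows, ballooning and then compression/swapping join in.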
Memory – Transparent Page Sharing
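The idea behind transparent page sharing can be sketched in a few lines: pages with identical content are found by hashing and collapsed into one machine page. This is a toy model; the real mechanism also does a full byte-by-byte comparison before sharing, to guard against hash collisions:

```python
import hashlib

def share_pages(pages):
    """Toy TPS model: map identical guest pages to one stored copy."""
    store = {}                     # content hash -> single machine page
    for page in pages:
        h = hashlib.sha1(page).hexdigest()
        store.setdefault(h, page)  # duplicates share the first copy
    return {"guest_pages": len(pages), "machine_pages": len(store)}

# Three guest pages, two unique contents: one page of memory saved.
share_pages([b"A" * 4096, b"B" * 4096, b"A" * 4096])
```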
Memory – Ballooning
Memory – Compression
Memory – Swapping
Memory – Ballooning vs. Swapping
Ballooning is better than swapping:
• The guest can surrender unused/free pages
• The guest chooses what to swap and can avoid swapping “hot” pages
• The idle memory tax uses ballooning
Memory – Rightsizing
Generally, it is better to over-commit than under-commit. However, if the running VMs consume too much host/pool memory:
• Some VMs may not get physical memory
• Ballooning or host swapping occurs
• Disk I/O increases
• All VMs slow down
Memory – Best Practices
• Avoid high active host memory over-commitment
  • No host swapping occurs when total memory demand is less than the physical memory (assuming no limits)
• Right-size guest memory
  • Avoid guest OS swapping
  • Ensure there is enough vRAM to cover demand peaks
• Use a fully automated DRS cluster
  • Test that vMotion works
  • Use resource pools with High/Normal/Low shares
  • Avoid using custom shares
CPU
CPU – Overview
• Raw processing power of a given host or VM
  • Hosts provide CPU resources
  • VMs and resource pools consume CPU resources
• CPU cores/threads need to be shared between VMs
• The VMkernel schedules vCPU time fairly, accounting for:
  • Hardware interrupts for a VM
  • Parallel processing for SMP VMs
  • I/O
CPU – vSMP
• Relaxed co-scheduling: vCPUs can run slightly out of sync
• Idle vCPUs incur a scheduling penalty
  • Configure only as many vCPUs as needed
  • Unneeded vCPUs impose unnecessary scheduling constraints
• Use uniprocessor VMs for single-threaded applications
CPU – Scheduling
Over-committing physical CPUs: the VMkernel CPU scheduler decides which vCPUs run on which physical CPUs, and when.
CPU – Ready Time
The percentage of time that a vCPU is ready to execute but is waiting for physical CPU time.
• Does not necessarily indicate a problem
• Indicates possible CPU contention or limits
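As a rough illustration, vCenter exposes CPU ready as a summation counter in milliseconds per sample interval (20 seconds for real-time charts), which converts to a ready-time percentage like this:

```python
def ready_percent(ready_ms, interval_s=20.0):
    """Convert a CPU-ready summation value (ms per sample interval)
    into the ready-time percentage esxtop reports as %RDY per vCPU."""
    return ready_ms / (interval_s * 1000.0) * 100.0

# 2000 ms of ready time in a 20 s real-time sample is 10% ready,
# a level often treated as worth investigating.
```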
CPU – NUMA Nodes
• Non-Uniform Memory Access system architecture
• Each node consists of CPU cores and memory
• A CPU core in one NUMA node can access memory in another node, but at a small performance cost
CPU – NUMA nodes
The VMkernel will try to keep a VM’s vCPUs local to its memory• Internal NUMA migrations can occur to balance load
Manual CPU affinity can affect performance• vCPUs inadvertently spread across NUMA nodes
• Not possible with fully automated DRS
VMs with more vCPUs than cores available in a single NUMA node may see decreased performance
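A back-of-the-envelope model shows why locality matters. The latency numbers below are made-up examples, not measurements of any particular CPU:

```python
def avg_latency_ns(remote_fraction, local_ns=80.0, remote_ns=130.0):
    """Average memory latency given the fraction of accesses that land
    on a remote NUMA node. Latency values are hypothetical examples."""
    return (1.0 - remote_fraction) * local_ns + remote_fraction * remote_ns
```

A VM whose memory is entirely local sees the local latency; one spread across nodes pays a blended, higher average latency.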
CPU – Troubleshooting
• vCPU-to-pCPU over-allocation
  • Hyper-Threading does not double CPU capacity!
• Limits or too many reservations can create artificial limits
• Expecting the same consolidation ratios with different workloads
  • Virtualizing “easy” systems first, then expanding to heavier systems
  • Compare apples to apples: frequency, turbo, cache sizes, cache sharing, core count, instruction set…
CPU – Best Practices
• Right-size vSMP VMs
• Keep heavy hitters separated
  • Fully automated DRS should do this for you
  • Use anti-affinity rules if necessary
• Use a fully automated DRS cluster
  • Test that vMotion works
  • Use resource pools with High/Normal/Low shares
  • Avoid using custom shares
NETWORK
Network – Load Balancing
• The load balancing policy defines which uplink is used:
  • Route based on originating port ID
  • Route based on IP hash
  • Route based on source MAC hash
  • Route based on physical NIC load
• With the static policies there is a probability of high-bandwidth VMs ending up on the same physical NIC
• Traffic stays on the elected uplink until an event occurs: a NIC link state change, adding/removing a NIC from the team, a beacon probe timeout…
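Why static policies can stack VMs on one NIC can be sketched as follows. This is a simplification, not the actual vSwitch teaming code: a policy such as "route based on originating port ID" is essentially a deterministic mapping from virtual port to uplink, independent of traffic volume:

```python
def select_uplink(port_id, uplinks):
    """Toy model of port-ID-based teaming: a static port -> uplink map."""
    return uplinks[port_id % len(uplinks)]

team = ["vmnic0", "vmnic1"]
# Virtual ports 4 and 6 both map to vmnic0, so those two VMs share one
# physical NIC no matter how much bandwidth each one drives.
```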
Network – Troubleshooting
• Check counters for NICs and VMs
  • Look for network load imbalance
  • 10 Gbps NICs can incur a significant CPU load when running at 100%
• Ensure the hardware supports TSO
  • Use the latest drivers and firmware for the NICs on the host
• For multi-tier VM applications, use DRS affinity rules to keep the VMs on the same host
  • Same vSwitch/VLAN rules out the physical network
• If using jumbo frames, ensure they are enabled end-to-end
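The end-to-end requirement exists because the usable frame size on a path is the minimum MTU of every hop, as this one-liner illustrates:

```python
def path_mtu(hop_mtus):
    """Effective MTU of a path is the smallest MTU along it."""
    return min(hop_mtus)

# vmxnet3 and vSwitch at 9000, but one physical switch left at 1500:
# jumbo frames are effectively defeated for the whole path.
```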
Network – Best Practices
• Use the vmxnet3 virtual adapter
  • Less CPU overhead
  • 10 Gbps connection to the vSwitch
• Use the latest driver/firmware for the NICs on the host
• Use network shares
  • Requires Virtual Distributed Switch 4.1
• Isolate vMotion and iSCSI traffic from regular VM traffic
  • Separate vSwitches with dedicated NIC(s)
  • Most applicable with Gigabit NICs
Key Takeaways – Performance Best Practices
• Understand your environment
  • Hardware, storage, networking
  • VMs and applications
• Advanced configuration values do not need to be tweaked or modified in almost all situations
• Use fully automated DRS
• Use paravirtual virtual hardware
Tools – vCenter Operations
• Aggregates thousands of metrics into Workload, Capacity, and Health scores
• Self-learns “normal” conditions using patented analytics
• Smart alerts of impending performance and capacity degradation
• Identifies potential performance problems before they start