Wyse Datacenter for VMware Horizon Reference...
A Dell Reference Architecture
Wyse Datacenter for VMware Horizon Reference Architecture A Reference Architecture for the design, configuration and implementation of a VMware Horizon environment.
Dell Cloud Client-Computing Engineering August 2016
2 Wyse Datacenter for VMware Horizon Reference Architecture | v.7.1
Revisions
Date Description
May 2014 Initial release (v.6.5)
July 2015 Updated density numbers for ESXi 6.0 and added PowerEdge C4130 (v.6.7)
April 2016 Document overhaul, new Broadwell MLK, networking, servers. (v.7.0)
August 2016 Updated list of Thin Clients, added NSX and M60 GPU (v.7.1)
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.
THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.
Copyright © 2016 Dell Inc. All rights reserved. Dell and the Dell logo are trademarks of Dell Inc. in the United States
and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective
companies.
Contents
1 Introduction ................................................................................................................................................................................ 7
1.1 Purpose ............................................................................................................................................................................. 7
1.2 Scope ................................................................................................................................................................................ 7
1.3 What’s New ...................................................................................................................................................................... 7
2 Solution Architecture Overview .............................................................................................................................................. 8
2.1 Introduction ..................................................................................................................................................................... 8
2.2 Physical Architecture Overview .................................................................................................................................... 8
2.3 Solution Layers ................................................................................................................................................................ 9
Networking ..................................................................................................................................................................... 10
Compute ......................................................................................................................................................................... 10
Management .................................................................................................................................................................. 10
Storage ............................................................................................................................................................................. 11
2.4 Local Tier 1 ...................................................................................................................................................................... 11
Base Offerings ................................................................................................................................................................ 11
Local Tier 1 for Rack Servers (iSCSI) ........................................................................................................................... 12
Local Tier 1 for Blade Servers ...................................................................................................................................... 14
2.5 Shared Tier 1 Rack ......................................................................................................................................................... 17
Shared Tier 1 for Rack Servers (iSCSI) ........................................................................................................................ 18
Shared Tier 1 for Rack Servers (FC) ............................................................................................................................ 20
2.6 Shared Tier 1 Blade ....................................................................................................................................................... 23
Shared Tier 1 for Blade Servers (iSCSI) ...................................................................................................................... 23
Shared Tier 1 for Blade Servers (FC) ........................................................................................................................... 25
3 Hardware Components .......................................................................................................................................................... 28
3.1 Network .......................................................................................................................................................................... 28
Dell Networking S3048 (1Gb ToR Switch) ................................................................................................................ 28
Dell Networking S4048 (10Gb ToR Switch) ............................................................................................................. 29
Brocade 6510 (FC ToR Switch) ................................................................................................................................... 29
Brocade M5424 (FC Blade Interconnect) ................................................................................................................. 30
PowerEdge M I/O Aggregator (10Gb Blade Interconnect) .................................................................................... 31
3.2 Servers ............................................................................................................................................................................. 32
PowerEdge R730 ........................................................................................................................................................... 32
PowerEdge M630.......................................................................................................................................................... 32
3.3 Compute Server Infrastructure ................................................................................................................................... 33
Local Tier 1 Rack ........................................................................................................................................................... 33
Local Tier 1 Blade .......................................................................................................................................................... 33
Shared Tier 1 Rack ......................................................................................................................................................... 34
Shared Tier 1 Blade ....................................................................................................................................................... 36
3.4 Storage ............................................................................................................................................................................ 37
EqualLogic Tier 1 – PS6210XS (iSCSI) ........................................................................................................................ 37
EqualLogic Tier 2 – PS6210E ...................................................................................................................................... 37
EqualLogic Configuration ............................................................................................................................................ 38
Compellent Tier 1 – SC8000 ...................................................................................................................................... 39
Compellent Tier 2 – SC8000 or SC4020 ................................................................................................................ 40
Network Attached Storage (NAS) ............................................................................................................................... 41
3.5 GPUs ................................................................................................................................................................................ 42
NVIDIA GRID K1 and K2 ............................................................................................................................................... 42
NVIDIA Tesla M60 ......................................................................................................................................................... 43
3.6 Dell Wyse Thin Clients.................................................................................................................................................. 44
Wyse 5030 PCoIP Zero Client .................................................................................................................................... 44
Wyse 5050 AIO PCoIP Zero Client ............................................................................................................................ 45
Wyse 7030 PCoIP Zero Client .................................................................................................................................... 45
Wyse 5010 Thin Client (ThinOS) with PCoIP............................................................................................................ 45
Wyse 5040 AIO Thin Client with PCoIP .................................................................................................................... 46
Wyse 3030 LT Thin Client (ThinOS) ........................................................................................................................... 46
Wyse 5010 Thin Client (Windows Embedded 8 Standard) .................................................................................... 47
Wyse 7010 Thin Client (Windows Embedded Standard 8) .................................................................................... 47
Dell Venue 11 Pro .......................................................................................................................................................... 47
Dell Chromebook 13................................................................................................................................................ 48
3.6.11 Wyse 5020 Thin Client ................................................................................................................................................. 48
Wyse 7040 Thin Client ................................................................................................................................................. 48
4 Software Components ............................................................................................................................................................ 49
4.1 What's new in this release of Horizon 7? .................................................................................................................. 49
Just in time Delivery with Instant Clone Technology ............................................................................................ 49
Transformational User Experience with Blast Extreme .......................................................................................... 49
Modernize Application Lifecycle Management with App Volumes ..................................................................... 50
Smart Policies with Streamlined Access ................................................................................................................... 50
4.2 VMware Horizon ........................................................................................................................................................... 50
4.3 Hypervisor Platforms .................................................................................................................................................... 52
VMware vSphere 6 ........................................................................................................................................................ 52
4.4 NVIDIA GRID vGPU ....................................................................................................................................................... 53
vGPU Profiles ................................................................................................................................................................. 54
5 Solution Architecture for Horizon 7 ..................................................................................................................................... 58
5.1 Management Server Infrastructure ............................................................................................................................ 58
SQL Databases ............................................................................................................................................................... 58
DNS .................................................................................................................................................................................. 59
5.2 Storage Architecture Overview .................................................................................................................................. 59
Local Tier 1 Storage ...................................................................................................................................................... 59
Shared Tier 1 Storage .................................................................................................................................................. 60
Shared Tier 2 Storage .................................................................................................................................................. 60
Storage Networking – EqualLogic iSCSI ................................................................................................................... 61
Storage Networking – Compellent Fiber Channel (FC) ......................................................................................... 62
5.3 Virtual Networking ........................................................................................................................................................ 63
Local Tier 1 – Rack - iSCSI .......................................................................................................................................... 63
Local Tier 1 – Blade ...................................................................................................................................................... 65
Shared Tier 1 – iSCSI .................................................................................................................................................... 66
Shared Tier 1 – FC ......................................................................................................................................................... 68
VMware NSX ................................................................................................................................................................... 69
5.4 Scaling Guidance .......................................................................................................................................................... 71
5.5 Solution High Availability ............................................................................................................................................. 72
5.5.1 Compute layer HA (Local Tier 1) ................................................................................................................................ 73
5.5.2 vSphere HA (Shared Tier 1) .......................................................................................................................................... 74
5.5.3 Horizon infrastructure protection .............................................................................................................................. 75
5.5.4 Management server high availability ......................................................................................................................... 75
5.5.5 Horizon VCS high availability ...................................................................................................................................... 75
5.5.6 Windows File Services high availability ...................................................................................................................... 75
SQL Server High Availability ........................................................................................................................................ 76
5.6 VMware Horizon communication flow ..................................................................................................................... 77
6 Customer-provided solution components ......................................................................................................................... 78
6.1 Customer-provided storage requirements .............................................................................................................. 78
6.2 Customer-provided switching requirements .......................................................................................................... 79
7 Solution Performance and Testing ...................................................................................................................................... 80
Acknowledgements ........................................................................................................................................................................ 81
About the Authors ........................................................................................................................................................................... 82
1 Introduction
1.1 Purpose
This document addresses the architecture design, configuration and implementation considerations for the key components required to deliver virtual desktops via VMware Horizon on VMware vSphere 6.
1.2 Scope
Relative to delivering the virtual desktop environment, the objectives of this document are to:
Define the detailed technical design for the solution.
Define the hardware requirements to support the design.
Define the constraints which are relevant to the design.
Define relevant risks, issues, assumptions and concessions – referencing existing ones where
possible.
Provide a breakdown of the design into key elements such that the reader receives an incremental
or modular explanation of the design.
Provide component selection guidance.
1.3 What’s New
Introduce support for Intel Broadwell processors (E5-2600v4)
Introduce support for new Dell Networking options
Introduce all-flash for Local Tier 1 rack solution
2 Solution Architecture Overview
2.1 Introduction
Dell Wyse Datacenter solutions provide a number of deployment options to meet your desktop virtualization requirements. Our solution provides a compelling desktop experience to a range of employees within your organization, from task workers to knowledge workers to power users. The deployment options for Dell Wyse Datacenter include:
Pooled Virtual Desktops (Non-persistent)
Full Clone Virtual Desktops (Persistent)
Shared Sessions (RDSH)
Additionally, our solution includes options for users who require:
Graphics Acceleration
Unified Communications
2.2 Physical Architecture Overview
The core Dell Wyse Datacenter architecture consists of two models: Local Tier 1 and Shared Tier 1. “Tier 1” in the Dell Wyse Datacenter context defines from which disk source the VDI sessions execute. Local Tier 1 includes rack servers or blades with SSDs, while Shared Tier 1 can include rack or blade servers due to the usage of shared Tier 1 storage. Tier 2 storage is present in both solution architectures and, while having a reduced performance requirement, is utilized for user data and Management VM execution in all solution models. Dell Wyse Datacenter is a 100% virtualized solution architecture.
In the Shared Tier 1 solution model, an additional high-performance shared storage array is added to
handle the execution of the VDI sessions. All compute and management layer hosts in this model are
diskless.
2.3 Solution Layers
The Dell Wyse Datacenter Solution leverages a core set of hardware and software components consisting of five primary layers:
Networking Layer
Compute Server Layer
Management Server Layer
Storage Layer
Thin Client Layer (please refer to section 3.6)
These components have been integrated and tested to provide the optimal balance of high performance and lowest cost per user. The Dell Wyse Datacenter stack is designed to be cost effective, allowing IT departments to implement high-performance, fully virtualized desktop environments.
Networking
Only a single high-performance Dell Networking 48-port switch is required to get started in the network layer. For smaller stacks, this switch hosts all solution traffic, consisting of 1Gb iSCSI and LAN sources. Above 500 users, we recommend 10Gb LAN with iSCSI traffic separated onto discrete switching fabrics. Additional switches are added and stacked as required to provide high availability for the network layer.
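The 500-user breakpoint above can be encoded as a simple rule of thumb. The sketch below is illustrative only: the thresholds and fabric descriptions come from this section, but the function and its return structure are our own, not a Dell sizing tool.

```python
# Illustrative sketch of the network-layer sizing guidance in this section.
# The 500-user threshold and the 1Gb/10Gb split are quoted from the text;
# everything else (names, dict shape) is invented for illustration.
def network_layer_sizing(users: int) -> dict:
    """Return a suggested network-layer starting point for a given user count."""
    if users <= 500:
        # A single 48-port switch carries combined 1Gb iSCSI and LAN traffic.
        return {"switches": 1, "lan": "1Gb", "iscsi": "shared with LAN"}
    # Above 500 users: 10Gb LAN, with iSCSI moved onto discrete fabrics,
    # stacked for high availability.
    return {"switches": 2, "lan": "10Gb", "iscsi": "discrete fabric"}
```

Actual switch counts depend on port density and HA requirements; see the scaling guidance tables later in this section.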
Compute
The compute layer consists of the server resources responsible for hosting the Horizon user sessions via the VMware vSphere hypervisor, in either the local or shared Tier 1 solution model (local Tier 1, all-flash, pictured below).
Management
VDI management components are dedicated to their own layer so as to not negatively impact the user sessions running in the compute layer. This physical separation of resources provides clean, linear, and predictable scaling without the need to reconfigure or move resources within the solution as you grow. The management layer will host all the server VMs necessary to support the VDI infrastructure.
Storage
The storage layer consists of options provided by EqualLogic (EQL) for iSCSI, Compellent (CML) arrays for Fiber Channel, or software-defined alternatives to suit your Tier 1 (T1) and Tier 2 (T2) scaling and capacity needs (CML pictured below).
2.4 Local Tier 1
Base Offerings
2.4.1.1 Local Tier 1 – 50 user combined pilot
For a very small deployment or pilot effort to familiarize you with the solution architecture, we offer a 50 user combined pilot solution. This architecture is non-distributed, with all VDI, management and storage functions on a single host running vSphere. If additional scaling is desired, you can grow into a larger distributed architecture seamlessly with no loss of initial investment.
2.4.1.2 Local Tier 1 – 50 user scale-ready pilot
In addition to the 50 user combined offering, we also offer a scale-ready version that includes Tier 2 storage. The basic architecture is the same, but customers looking to scale out quickly will benefit from building out into Tier 2 initially.
Local Tier 1 for Rack Servers (iSCSI)
The Local Tier 1 solution model provides a scalable rack-based configuration that hosts user VDI sessions on local SSD or spinning disk in the compute layer (all-flash pictured below). If spinning disk is desired, substitute the SSDs with 10-12 x 600GB 15K SAS HDDs.
2.4.2.1 Local Tier 1 – Network Architecture
In the Local Tier 1 architecture, a single Dell Networking switch is shared among all network connections for both management and compute, up to 500 users. Above 500 users, Dell Wyse Solutions Engineering recommends separating the network fabrics to isolate iSCSI and LAN traffic, as well as making each switch stack redundant. Only the management servers connect to iSCSI storage in this model. All Top of Rack (ToR) traffic is layer 2 (L2) switched locally, with all layer 3 (L3) routable VLANs trunked from a core or distribution switch. The following diagrams illustrate the logical data flow in relation to the core switch.
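The L2-local/L3-trunked design above can be visualized with a short OS9-style pseudo-configuration. This is an illustrative sketch only: the VLAN ID, port numbers and comments are invented placeholders, not values prescribed by this architecture, and the exact syntax should be checked against the switch documentation.

```
! Illustrative pseudo-configuration only (Dell OS9-style syntax, placeholder values).
! Routable VLANs are carried tagged on the uplinks toward the core/distribution
! switch, which owns the L3 interfaces; the ToR switch stays pure L2.
interface range tengigabitethernet 0/48 - 49   ! uplinks to core/distribution
 no shutdown
 switchport
!
interface vlan 20                              ! example VDI VLAN, routed at the core
 tagged tengigabitethernet 0/48 - 49
 no shutdown
```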
2.4.2.2 Local Tier 1 – Cabling (HA)
The following diagram depicts the LT1 rack solution including optional components for HA:
2.4.2.3 Local Tier 1 – Rack Scaling Guidance
Local Tier 1 HW Scaling (iSCSI)

User Scale     ToR LAN   ToR 10Gb iSCSI   EQL T2   EQL NAS
0-1000         S4048     S4048            6210E    -
0-1000 (HA)    S4048     S4048            6210E    FS7610
0-6000+        S4048     S4048            6210E    FS7610
Local Tier 1 for Blade Servers
The Local Tier 1 solution model for blade servers provides a high-performance 800GB SSD configuration that does not require shared storage; Tier 2 is added to provide HA for the management layer infrastructure. User VDI sessions are hosted locally on SSDs in each blade server using View Connection Server (VCS) for desktop delivery. A pair of PowerEdge M I/O Aggregator switches is required in the A Fabric. The B Fabric, ToR iSCSI switches and Tier 2 storage are optional for HA or additional bandwidth.
2.4.3.1 Local Tier 1 – Network Architecture
In the Local Tier 1 architecture for blades, there is no need for a ToR switch unless Tier 2 iSCSI is present for HA. The A Fabric IOA switches can connect directly to the core or distribution network layer. Both Management and Compute servers connect to all VLANs in this model via a single vSwitch. The following diagram illustrates the server NIC to chassis switch connections, vSwitch assignments, as well as logical VLAN flow in relation to the core switch.
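The single-vSwitch, all-VLANs host configuration described above could be expressed on each ESXi host roughly as follows. This is an illustrative fragment only: the vmnic numbers, VLAN IDs and port-group names are placeholders of our own, not values prescribed by this architecture.

```shell
# Illustrative only: example esxcli commands for a single standard vSwitch
# carrying all VLANs, as described above. vmnic numbers, VLAN IDs and
# port-group names are placeholders, not values from this document.
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1

# One port group per routable VLAN, trunked from the A Fabric IOA switches.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Management
esxcli network vswitch standard portgroup set --portgroup-name=Management --vlan-id=10
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=VDI
esxcli network vswitch standard portgroup set --portgroup-name=VDI --vlan-id=20
```

In practice, the same result is usually achieved via vCenter host profiles or scripted builds rather than by hand.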
2.4.3.2 Local Tier 1 – Cabling (HA)
The following diagram depicts the Local Tier 1 blade solution including optional components for HA. The A Fabric, B Fabric and ToR switches are stacked, respectively.
2.4.3.3 Local Tier 1 – Blade Scaling Guidance
Local Tier 1 HW scaling (Blade)

User Scale        Blade LAN (A Fabric)   Blade iSCSI (B Fabric)   ToR 10Gb iSCSI   EQL T2
0-10000 (no HA)   IOA                    -                        -                -
0-1000 (HA)       IOA                    IOA                      S4048            4210E
0-6000 (HA)       IOA                    IOA                      S4048            6210E
6000+ (HA)        IOA                    IOA                      S4048            6210E
2.5 Shared Tier 1 Rack
Shared Tier 1 for rack servers incorporates shared Tier 1 storage used for execution of VDI sessions.
Shared Tier 1 for Rack Servers (iSCSI)
For POCs or small deployments, Tier 1 and Tier 2 are combined on a single EqualLogic PS6210XS storage array. For over 500 users, the storage layers are separated into discrete arrays, as depicted in the figure below. Additional arrays are added for Tier 1 or Tier 2 as the user count grows.
2.5.1.1 Shared Tier 1 – Network Architecture
In the Shared Tier 1 architecture for rack servers, both management and compute servers connect to shared storage. All ToR traffic has been designed to be layer 2 switched locally, with all layer 3 routable VLANs routed through a core or distribution switch. The following diagrams illustrate the server NIC to ToR switch connections, vSwitch assignments, as well as logical VLAN flow in relation to the core switch.
2.5.1.2 Shared Tier 1 – Cabling (HA)
2.5.1.3 Shared Tier 1 – Rack Scaling Guidance
Shared Tier 1 HW scaling (Rack – iSCSI)

User Scale     ToR LAN   ToR 10Gb iSCSI   EQL T1   EQL T2   EQL NAS
0-500          S4048     S4048            6210XS   -        -
500-1000       S4048     S4048            6210XS   6210E    -
0-1000 (HA)    S4048     S4048            6210XS   6210E    FS7610
0-3000         S4048     S4048            6210XS   6210E    FS7610
3000-6000      S4048     S4048            6210XS   6210E    FS7610
6000+          S4048     S4048            6210XS   6210E    FS7610
Shared Tier 1 for Rack Servers (FC)
Utilizing Compellent storage for Shared Tier 1 provides a Fiber Channel (FC) solution where Tier 1 and Tier 2 can optionally be combined in a single array. Tier 2 functions (user data + Mgmt VMs) are removed from the array if the customer has another Tier 2 solution in place or if a Tier 2 Compellent array is used. Scaling this solution is very linear, predictably adding Compellent arrays for every 2000 basic users, on average. The image below depicts a 1000 user array. For 2000 users, 96 total disks in 4 shelves are required. Please see section 3.4 for more information.
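The linear scaling just described reduces to simple arithmetic. The helper below is an illustrative sketch only: the per-array figures (one array per roughly 2000 basic users, 96 disks across 4 shelves at the 2000-user point) are the averages quoted above, and the function itself is not a substitute for a proper sizing exercise.

```python
import math

# Averages quoted in this section; rough guidance, not a sizing tool.
USERS_PER_ARRAY = 2000
DISKS_PER_ARRAY = 96     # 96 disks per 2000 users
SHELVES_PER_ARRAY = 4    # 4 shelves per 2000 users (24 disks per shelf)

def compellent_tier1_estimate(users: int) -> dict:
    """Estimate Compellent Tier 1 hardware for a given basic-user count."""
    blocks = math.ceil(users / USERS_PER_ARRAY)  # one array per 2000-user block
    return {
        "arrays": blocks,
        "disks": blocks * DISKS_PER_ARRAY,
        "shelves": blocks * SHELVES_PER_ARRAY,
    }
```

For example, a 4000-user deployment would land at two arrays with 192 disks in 8 shelves under these averages; heavier workloads would shift the ratio.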
2.5.2.1 Shared Tier 1 – Network Architecture
In the Shared Tier 1 architecture for rack servers using FC, a separate switching infrastructure is required for FC. Management and compute servers both connect to shared storage using FC. Both management and compute servers connect to all network VLANs in this model. All ToR traffic has been designed to be layer 2 switched locally, with all layer 3 routable VLANs routed through a core or distribution switch. The following diagrams illustrate the server NIC to ToR switch connections, vSwitch assignments, as well as logical VLAN flow in relation to the core switch.
2.5.2.2 Shared Tier 1 – Cabling (HA)
2.5.2.3 Shared Tier 1 – Rack Scaling Guidance
Shared Tier 1 HW scaling (Rack - FC)

User Scale      LAN Network   FC Network   CML T1   CML T2       CML NAS
0-1000          S4048         6510         SSD      -            -
0-6000+ (HA)    S4048         6510         SSD      SSD or 15K   FS8600
2.6 Shared Tier 1 Blade
Shared Tier 1 for Blade Servers (iSCSI)
As is the case in the Shared Tier 1 model using rack servers, blades can also be used in a 500 user bundle by combining Tier 1 and Tier 2 on a single 6210XS array. Above 500 users, separate Tier 1 and Tier 2 storage into discrete arrays.
2.6.1.1 Shared Tier 1 – Network Architecture
In the Shared Tier 1 architecture for blades, only iSCSI is switched through the ToR. There is no need to switch LAN through the ToR since the IOAs in the chassis support LAN to the blades and are uplinked to the core or distribution layers directly. For greater redundancy, a ToR switch is used to support iDRAC outside of the chassis. Both management and compute servers connect to all VLANs in this model. The following diagram illustrates the chassis switch uplink ports to ToR/core switch connections, vSwitch assignments, as well as logical VLAN flow in relation to the core switch.
2.6.1.2 Shared Tier 1 – Cabling (HA)
2.6.1.3 Shared Tier 1 – Scaling Guidance
Shared Tier 1 HW scaling (Blade - iSCSI)

User Scale     Blade LAN (B Fabric)   Blade iSCSI (A Fabric)   ToR 10Gb iSCSI   EQL T1   EQL T2   EQL NAS
0-500          IOA                    IOA                      S4048            6210XS   -        -
500-1000       IOA                    IOA                      S4048            6210XS   6210E    -
0-1000 (HA)    IOA                    IOA                      S4048            6210XS   6210E    FS7610
0-6000+        IOA                    IOA                      S4048            6210XS   6210E    FS7610
Shared Tier 1 for Blade Servers (FC)
Fibre Channel is also an option in the Shared Tier 1 model for blades. There are a few key differences when using FC with blades instead of iSCSI: blade chassis interconnects, FC HBAs in the servers, and FC IO cards in the Compellent arrays. ToR FC switching is optional if a suitable FC infrastructure is already in place. The image below depicts a 4000 user stack.
2.6.2.1 Shared Tier 1 – Network Architecture
In the Shared Tier 1 architecture for blades using FC, a separate switching infrastructure is required
for FC. Management and compute servers both connect to shared storage using FC. Both management
and compute servers connect to all network VLANs in this model. All ToR traffic has been designed to be
switched locally at layer 2, with all layer 3 routable VLANs routed through a core or distribution switch. The
following diagrams illustrate the server NIC to ToR switch connections, vSwitch assignments, as well as
logical VLAN flow in relation to the core switch.
2.6.2.2 Shared Tier 1 – Cabling
2.6.2.3 Shared Tier 1 – Scaling Guidance
Shared Tier 1 HW scaling (Blade - FC)

User Scale     Blade LAN    Blade FC     ToR FC   CML T1   CML T2   CML NAS
               (A Fabric)   (B Fabric)
0-500          IOA          5424         6510     SSD      -        -
500-1000       IOA          5424         6510     SSD      -        -
0-6000+ (HA)   IOA          5424         6510     SSD      SSD      FS8600
3 Hardware Components
3.1 Network
The following sections contain the core network components for the Dell Wyse Datacenter solutions. General uplink cabling guidance to consider in all cases: TwinAx is very cost effective for short 10Gb runs; for longer runs, use fiber with SFP+ optics.
Dell Networking S3048 (1Gb ToR Switch)
Accelerate applications in high-performance environments with a low-latency top-of-rack (ToR) switch
that features 48 x 1GbE and 4 x 10GbE ports, a dense 1U design and up to 260Gbps performance. The
S3048-ON also supports Open Network Installation Environment (ONIE) for zero-touch installation of
alternate network operating systems.
Model:    Dell Networking S3048-ON
Features: 48 x 1000BaseT, 4 x 10Gb SFP+; non-blocking, line-rate performance; 260Gbps full-duplex bandwidth; 131 Mpps forwarding rate; redundant hot-swap PSUs & fans
Options:  VRF-lite, Routed VLT, VLT Proxy Gateway; user port stacking (up to 6 switches); Open Networking Install Environment (ONIE)
Uses:     1Gb connectivity
Dell Networking S4048 (10Gb ToR Switch)
Optimize your network for virtualization with a high-density, ultra-low-latency ToR switch that features 48
x 10GbE SFP+ and 6 x 40GbE ports (or 72 x 10GbE ports in breakout mode) and up to 720Gbps
performance. The S4048-ON also supports ONIE for zero-touch installation of alternate network
operating systems.
Model:    Dell Networking S4048-ON
Features: 48 x 10Gb SFP+, 6 x 40Gb QSFP+; non-blocking, line-rate performance; 1.44Tbps bandwidth; 720 Gbps forwarding rate; VXLAN gateway support; redundant hot-swap PSUs & fans
Options:  72 x 10Gb SFP+ ports with breakout cables; user port stacking (up to 6 switches); Open Networking Install Environment (ONIE)
Uses:     10Gb connectivity
For more information on the S3048, S4048 switches and Dell Networking, please visit: LINK
Brocade 6510 (FC ToR Switch)
The Brocade® 6510 Switch meets the demands of hyper-scale, private cloud storage environments by delivering market-leading speeds up to 16Gb Fibre Channel (FC) technology and capabilities that support highly virtualized environments. Designed to enable maximum flexibility and investment protection, the Brocade 6510 is configurable in 24, 36, or 48 ports and supports 2, 4, 8, or 16Gb speeds in an efficiently designed 1U package. It also provides a simplified deployment process and a point-and-click user interface, making it both powerful and easy to use. The Brocade 6510 offers low-cost access to industry-leading Storage Area Network (SAN) technology while providing "pay-as-you-grow" scalability to meet the needs of an evolving storage environment.
Model:    Brocade 6510
Features: 48 x 2/4/8/16Gb Fibre Channel
Options:  Additional (optional) FlexIO module; up to 24 total ports (internal + external); ports on demand from 24, 36, and 48 ports
Uses:     FC ToR switch for all solutions; optional for blades
For more information on the Brocade 6510 switch, please visit: LINK
Brocade M5424 (FC Blade Interconnect)
The Brocade® M5424 switches and Dell™ PowerEdge™ M1000e Blade enclosures provide robust
solutions for FC SAN deployments. Not only does this offering help simplify and reduce the amount of
SAN hardware components required for a deployment, but it also maintains the scalability, performance,
interoperability and management of traditional SAN environments. The M5424 can easily integrate FC
technology into new or existing storage area network (SAN) environments using the PowerEdge™ M1000e
Blade enclosure. The Brocade® M5424 is a flexible platform that delivers advanced functionality,
performance, manageability, and scalability with up to 16 internal Fabric ports and up to 8 x 2/4/8Gb auto-
sensing uplinks, and is ideal for larger storage area networks. Integration of SAN switching capabilities with
the M5424 also helps to reduce complexity and increase SAN manageability.
Model:    Brocade M5424
Features: 16 x internal Fabric ports; up to 8 x 2/4/8Gb auto-sensing uplinks
Options:  Ports on demand from 12 to 24 ports
Uses:     Blade switch for FC in Shared Tier 1 model
For more information on the Brocade M5424 switch, please visit: LINK
PowerEdge M I/O Aggregator (10Gb Blade Interconnect)
Simplify network management and increase server bandwidth with the PowerEdge™ M I/O Aggregator,
enabling easy, plug-and-play data center convergence.
Model:    PowerEdge M I/O Aggregator (IOA)
Features: Up to 32 x 10Gb ports + 4 x external SFP+; 2 x line-rate fixed QSFP+ ports; 2 optional FlexIO modules
Options:  2-port QSFP+ module in 4x10Gb mode; 4-port SFP+ 10Gb module; 4-port 10GBASE-T copper module (one per IOA); stack up to 2 IOAs using QSFP+ ports
Uses:     Blade switch for iSCSI in Shared Tier 1 blade solution; LAN + iSCSI in Local Tier 1 blade solution
For more information on the Dell IOA switch, please visit: LINK
3.2 Servers
PowerEdge R730
The foundation of the Dell Wyse Datacenter solution portfolio is the best-in-class Dell PowerEdge R730. This dual socket CPU platform runs the fastest Intel Xeon E5-2600 v4 family of processors, can host up to 24 DIMMs of DDR4 RAM, supports up to 16 x 2.5” SAS disks and can be outfitted with 2 double-wide GPU accelerators from AMD or NVIDIA. The Dell PowerEdge R730 offers uncompromising performance and scalability in a 2U form factor.
For specific information on VDI-optimized configurations from Dell, please refer to section 5 of this
document. For more information on the R730, please visit: Link
PowerEdge M630
The blade server platform recommendation for the Dell Wyse Datacenter solution is the PowerEdge M630. This half-height blade server is a feature-rich, dual-processor platform that offers a blend of density, performance, efficiency and scalability. The M630 offers remarkable computational density, scaling up to 24 cores with 2-socket Intel Xeon (Broadwell) processors and 24 DIMMs (768GB) of DDR4 memory in an extremely compact half-height blade form factor.
For more information, please visit: Link
3.3 Compute Server Infrastructure
Local Tier 1 Rack
In the Local Tier 1 model, VDI sessions execute from local storage on each compute server. The hypervisor used in this solution is vSphere. In this model, only the management server hosts access iSCSI storage to support the solution's management role VMs; this is done with the 1Gb NICs on the embedded Network Daughter Card (NDC). Additional NICs can be added as required for increased bandwidth or resiliency requirements. Refer to section 2.4 for cabling implications. The management server host has reduced RAM and CPU and does not require local disk to host the management VMs. An all-flash configuration is shown for compute below; the SSDs can optionally be substituted with 10-12 15K SAS disks.
Local Tier 1 Compute Host – PowerEdge R730
2 x Intel Xeon E5-2698v4 Processor (2.2GHz)
512GB Memory (16 x 32GB RDIMMs, 2400MT/s)
VMware vSphere on internal 8GB Dual SD
4 x 800GB SSD
PERC H730 Integrated RAID Controller – RAID 10
Embedded 2 x 1Gb NDC
Embedded 2 x 10Gb NDC
iDRAC8 Enterprise
2 x 750W PSUs

Local Tier 1 Management Host – PowerEdge R730
2 x Intel Xeon E5-2660v4 Processor (2.0GHz)
128GB Memory (8 x 16GB RDIMMs, 2400MT/s)
VMware vSphere on internal 8GB Dual SD
Embedded 2 x 1Gb NDC
Embedded 2 x 10Gb NDC
iDRAC8 Enterprise
2 x 750W PSUs
Local Tier 1 Blade
In the Local Tier 1 model for blades, VDI sessions execute on local high-performance SSDs on each compute host. vSphere is the supported hypervisor in this solution due to its ability to run from integrated SD, freeing the pair of SSDs for VDI execution only. In this model, shared storage is not required for Tier 2 unless management host-level HA is required. All management and desktop VMs are hosted locally on their respective blades.
Local Tier 1 Compute Host – PowerEdge M630
2 x Intel Xeon E5-2698v4 Processor (2.2GHz)
384GB Memory (24 x 16GB RDIMMs, 2400MT/s)
VMware vSphere on 2 x 8GB internal SD
QLogic 57810S-k 10Gb DP KR NDC (LAN)
iDRAC8 Enterprise w/ vFlash, 8GB SD
Local Tier 1 Management Host – PowerEdge M630
2 x Intel Xeon E5-2660v4 Processor (2GHz)
128GB Memory (8 x 16GB RDIMMs, 2400MT/s)
VMware vSphere on 2 x 8GB internal SD
QLogic 57810S-k 10Gb DP KR NDC (iSCSI-HA)
iDRAC8 Enterprise w/ vFlash, 8GB SD
Shared Tier 1 Rack
In the Shared Tier 1 model, VDI sessions execute on shared storage so there is no need for local disk on each server to host VMs. To provide server-level network redundancy using the fewest physical NICs possible, both the Compute and Management servers use a split QP NDC: 2 x 10Gb ports for iSCSI, 2 x 1Gb ports for LAN. Two additional DP NICs (2 x 1Gb + 2 x 10Gb) provide slot and connection-level redundancy for both network Fabrics. All configuration options (beyond the hypervisor differences noted in the table below) are identical except for CPU and RAM which are reduced on the Management host.
3.3.3.1 iSCSI
Shared Tier 1 Compute Host – PowerEdge R730
2 x Intel Xeon E5-2698v4 Processor (2.2GHz)
512GB Memory (16 x 32GB RDIMMs, 2400MT/s)
VMware vSphere on internal 8GB Dual SD
Embedded 4 x 10Gb NDC
iDRAC8 Enterprise
2 x 750W PSUs

Shared Tier 1 Management Host – PowerEdge R730
2 x Intel Xeon E5-2660v4 Processor (2.0GHz)
128GB Memory (8 x 16GB RDIMMs, 2400MT/s)
VMware vSphere on internal 8GB Dual SD
Embedded 4 x 10Gb NDC
iDRAC8 Enterprise
2 x 750W PSUs
3.3.3.2 FC
Fibre Channel can optionally be leveraged as the block storage protocol for compute and management hosts with Compellent Tier 1 and Tier 2 storage. Aside from the use of FC HBAs to replace the 10Gb NICs used for iSCSI, the rest of the server configurations are the same.
Shared Tier 1 Compute Host – PowerEdge R730
2 x Intel Xeon E5-2698v4 Processor (2.2GHz)
512GB Memory (16 x 32GB RDIMMs, 2400MT/s)
VMware vSphere on internal 8GB Dual SD
2 x QLogic 2562 8Gb DP FC HBA
Embedded 4 x 10Gb NDC
iDRAC8 Enterprise
2 x 750W PSUs

Shared Tier 1 Management Host – PowerEdge R730
2 x Intel Xeon E5-2660v4 Processor (2.0GHz)
128GB Memory (8 x 16GB RDIMMs, 2400MT/s)
VMware vSphere on internal 8GB Dual SD
2 x QLogic 2562 8Gb DP FC HBA
Embedded 4 x 10Gb NDC
iDRAC8 Enterprise
2 x 750W PSUs
Shared Tier 1 Blade
3.3.4.1 iSCSI
The Shared Tier 1 blade server is configured in line with its rack server equivalent. Two network interconnect Fabrics are configured for the blades: the A Fabric is dedicated to 10Gb iSCSI traffic and the B Fabric to 1Gb LAN.
Shared Tier 1 Compute Host – PowerEdge M630
2 x Intel Xeon E5-2698v4 Processor (2.2GHz)
512GB Memory (16 x 32GB RDIMMs, 2400MT/s)
VMware vSphere on internal 8GB Dual SD
QLogic 57810S-k 10Gb DP KR NDC (iSCSI)
QLogic 57810-k 10Gb DP KR NDC (LAN)
iDRAC8 Enterprise w/ vFlash, 8GB SD

Shared Tier 1 Management Host – PowerEdge M630
2 x Intel Xeon E5-2660v4 Processor (2.0GHz)
128GB Memory (8 x 16GB RDIMMs, 2400MT/s)
VMware vSphere on internal 8GB Dual SD
QLogic 57810S-k 10Gb DP KR NDC (iSCSI)
QLogic 57810-k 10Gb DP KR NDC (LAN)
iDRAC8 Enterprise w/ vFlash, 8GB SD
3.3.4.2 FC
Fibre Channel can optionally be leveraged as the block storage protocol for compute and management hosts with Compellent Tier 1 and Tier 2 storage. Aside from the use of FC HBAs to replace the 10Gb NICs used for iSCSI, the rest of the server configurations are the same. Please note that FC is currently supported only when using vSphere.
Shared Tier 1 Compute Host – PowerEdge M630
2 x Intel Xeon E5-2698v4 Processor (2.2GHz)
512GB Memory (16 x 32GB RDIMMs, 2400MT/s)
VMware vSphere on internal 8GB Dual SD
QLogic 57810S-k 10Gb DP KR NDC (LAN)
1 x QLogic QME2572 8Gb FC mezzanine (FC)
iDRAC8 Enterprise w/ vFlash, 8GB SD

Shared Tier 1 Management Host – PowerEdge M630
2 x Intel Xeon E5-2660v4 Processor (2.0GHz)
128GB Memory (8 x 16GB RDIMMs, 2400MT/s)
VMware vSphere on internal 8GB Dual SD
QLogic 57810S-k 10Gb DP KR NDC (LAN)
1 x QLogic QME2572 8Gb FC mezzanine (FC)
iDRAC8 Enterprise w/ vFlash, 8GB SD
3.4 Storage
EqualLogic Tier 1 – PS6210XS (iSCSI)
Implement both high-speed, low-latency solid-state disk (SSD) technology and high-capacity HDDs from
a single chassis. The PS6210XS 10GbE iSCSI array is a Dell Fluid Data™ solution with a virtualized scale-out
architecture that delivers enhanced storage performance and reliability that is easy to manage and scale
for future needs. For more information please visit: LINK
Model:    EqualLogic PS6210XS
Features: 24-drive hybrid array (SSD + 10K SAS); dual HA controllers; Snaps/Clones; Async replication; SAN HQ; 4 x 10Gb interfaces per controller (2 x SFP+ + 2 x 10GBASE-T)
Options:  13TB – 7 x 400GB SSD + 17 x 600GB 10K SAS; 26TB – 7 x 800GB SSD + 17 x 1.2TB 10K SAS
Uses:     Tier 1 array for Shared Tier 1 solution model (10Gb iSCSI)
EqualLogic Tier 2 – PS6210E
The EqualLogic PS6210 Array Series provides enhanced storage performance for the small-to-medium
enterprise with the simplicity of the PS Series product line. Dell EqualLogic PS6210 arrays can drive up to
approximately 2GB/sec of throughput per array for sequential, large-block workloads.
Model:    EqualLogic PS6210E
Features: 24 drive bays (NL-SAS, 7.2K RPM); dual HA controllers; Snaps/Clones; Async replication; SAN HQ; 4 x 10Gb ports; 4U chassis
Options:  24TB – 24 x 1TB HDDs; 48TB – 24 x 2TB HDDs; 72TB – 24 x 3TB HDDs; 96TB – 24 x 4TB HDDs
Uses:     Tier 2 array for up to 1500 users per array in Shared Tier 1 solution model (10Gb)
EqualLogic Configuration
Each tier of EqualLogic storage is managed as a separate pool or group to isolate specific workloads.
Shared Tier 1 arrays used for hosting VDI sessions are managed as one group, while shared Tier 2 arrays
used for hosting management server role VMs and user data are managed in a separate group.
Compellent Tier 1 – SC8000
Experience enhanced storage performance, efficiency and
scalability with the next-generation Dell Compellent
SC8000 Storage Center Controller. Dell Wyse Solutions
Engineering recommends that all Compellent storage arrays
be implemented using two controllers in an HA cluster.
Fiber Channel is the preferred storage protocol for use with
this array, but Compellent is fully capable of supporting
iSCSI as well. Key Storage Center applications used
strategically to provide increased performance include:
- Simultaneous iSCSI, Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE) front-end interconnects
- Controls SAS and FC drive enclosures, including those with write-intensive and read-intensive SSDs in SLC, MLC and TLC formats
- Scales up to 960 SAS drives in multiple enclosures per dual-controller system and scales out to many systems across numerous sites, monitored by a single console
- Seamlessly integrates with the FS8600 NAS appliance for fully interoperable block and file storage solutions
Compellent Tier 1 storage consists of a standard dual controller configuration and scales upward by adding disks/shelves and additional discrete arrays. A single pair of SC8000 controllers supports Tier 1 and Tier 2 for up to 2000 knowledge worker users, as depicted below, utilizing all 15K SAS disks. If Tier 2 is separated onto its own array, an additional 30% of users can be added per Tier 1 array. To scale above these numbers, additional arrays need to be implemented. Additional capacity and performance capability is achieved by adding larger disks or shelves, as appropriate, up to the controllers' performance limits. Each disk shelf requires 1 hot spare per disk type. RAID is virtualized across all disks in an array (RAID10 or RAID6). Please refer to the test methodology and results for specific workload characteristics in section 7. SSDs are added for use in scenarios where boot storms or provisioning speeds are an issue.
Controller:    2 x SC8000 (16GB)
Front-End IO:  2 x dual-port 8Gb FC cards (per controller)
Back-End IO:   2 x quad-port SAS cards (per controller)
Disk Shelf:    2.5" SAS shelf (24 disks each)
Disks:         2.5" 300GB 15K SAS (~206 IOPS each) or 480GB SSD
SCOS (min):    6.3
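As a rough planning aid, the Tier 1 array-count guidance above can be sketched in Python. The 2000-users-per-controller-pair figure and the 30% uplift for a separate Tier 2 come from this document; the function and constant names are illustrative, and real sizing should follow the workload characteristics in section 7.

```python
import math

# Planning sketch: one SC8000 controller pair supports ~2000 knowledge
# workers with combined Tier 1 + Tier 2, or ~30% more (~2600) when Tier 2
# lives on a discrete array. Treat these as estimates, not hard limits.
USERS_PER_PAIR_COMBINED = 2000
TIER2_SEPARATE_BONUS = 0.30

def sc8000_pairs_needed(users: int, tier2_separate: bool = False) -> int:
    """Return the number of SC8000 controller pairs for a given user count."""
    per_pair = USERS_PER_PAIR_COMBINED
    if tier2_separate:
        # ~2600 users per pair when Tier 2 is on its own array
        per_pair = int(per_pair * (1 + TIER2_SEPARATE_BONUS))
    return math.ceil(users / per_pair)

if __name__ == "__main__":
    print(sc8000_pairs_needed(2000))        # 1 pair, combined T1/T2
    print(sc8000_pairs_needed(4000))        # 2 pairs, combined
    print(sc8000_pairs_needed(5000, True))  # 2 pairs with separate T2
```

The same ceiling-division pattern applies when scaling by shelves or disks instead of whole arrays.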
Compellent Tier 2 – SC8000 or SC4020
Compellent Tier 2 storage is optional if a customer wishes to deploy discrete arrays for each tier. The guidance below is provided for informational purposes. The optional Compellent Tier 2 array consists of a standard dual controller configuration and scales upward by adding disks and shelves. A single pair of SC8000 or SC4020 controllers is able to support Tier 2 for 10,000 basic users. Additional capacity and performance capability is achieved by adding disks and shelves, as appropriate. Each disk shelf requires 1 hot spare per disk type. When designing for Tier 2, capacity requirements will drive higher overall array performance capabilities due to the number of disks on hand. Our base Tier 2 sizing guidance is 1 IOPS and 5GB per user.
SC8000 option:
Controller:    2 x SC8000 (16GB)
Front-End IO:  2 x dual-port 8Gb FC cards (per controller)
Back-End IO:   2 x quad-port SAS cards (per controller)
Disk Shelf:    2.5" SAS shelf (24 disks each)
Disks:         2.5" 1TB NL SAS (~76 IOPS each)

SC4020 option:
Controller:    2 x SC4020 (16GB)
Front-End IO:  2 x dual-port 8Gb FC cards (per controller)
Back-End IO:   2 x quad-port SAS cards (per controller)
Disk Shelf:    2.5" SAS shelf (24 disks each)
Disks:         2.5" 300GB 15K SAS (~206 IOPS each)
Sample Tier 2 scaling guidance (based on 1 IOPS per user + 5GB minimum):

Users   Controllers   Disk Shelves   Disks   RAW Capacity
500     2             1              7       7TB
1000    2             1              14      14TB
5000    2             3              66      66TB
10000   2             6              132     132TB
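The sample rows above follow directly from the stated rule of 1 IOPS and 5GB per user. A minimal Python sketch, assuming the 1TB NL-SAS option (~76 IOPS per disk, 24-disk shelves) and excluding hot spares as the sample table does; the helper names are illustrative:

```python
import math

# Tier 2 sizing sketch: size for whichever constraint dominates,
# IOPS (1 per user, ~76 per 1TB NL-SAS disk) or capacity (5GB per user).
IOPS_PER_USER = 1
GB_PER_USER = 5
DISK_IOPS = 76          # 2.5" 1TB NL-SAS, per the controller table
DISK_CAPACITY_GB = 1000
DISKS_PER_SHELF = 24

def tier2_sizing(users: int):
    """Return (disks, shelves, raw_capacity_tb) for a Tier 2 array."""
    disks_for_iops = math.ceil(users * IOPS_PER_USER / DISK_IOPS)
    disks_for_capacity = math.ceil(users * GB_PER_USER / DISK_CAPACITY_GB)
    disks = max(disks_for_iops, disks_for_capacity)  # dominant constraint
    shelves = math.ceil(disks / DISKS_PER_SHELF)
    raw_tb = disks * DISK_CAPACITY_GB // 1000
    return disks, shelves, raw_tb

if __name__ == "__main__":
    for users in (500, 1000, 5000, 10000):
        print(users, tier2_sizing(users))  # matches the sample table rows
```

For these disk types the IOPS constraint dominates, which is why capacity grows well past the 5GB-per-user minimum in the sample table.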
Network Attached Storage (NAS)
3.4.6.1 EqualLogic FS7610
Model:    EqualLogic FS7610
Features: Dual active-active controllers; 24GB cache per controller (cache mirroring); SMB & NFS support; AD integration; 10Gb iSCSI via 8 x Ethernet ports
Scaling:  Up to 2 FS7610 systems in a NAS cluster (4 controllers); each controller supports 1500 concurrent users, up to 6000 total in a 2-system NAS cluster
Uses:     Scale-out NAS for Local Tier 1 to provide file share HA
3.4.6.2 Compellent FS8600
Model:    Compellent FS8600
Features: Dual active-active controllers; 24GB cache per controller (cache mirroring); SMB & NFS support; AD integration; FC only
Options:  Up to 4 FS8600 systems in a NAS cluster (8 controllers); each controller supports 1500 concurrent users, up to 12000 total in a 4-system NAS cluster
Uses:     Scale-out NAS for Shared Tier 1 on Compellent, to provide file share HA (FC only)
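The NAS head-count math in both tables reduces to simple ceiling division. A sketch, assuming 1500 users per controller and 2 controllers per system as stated above; the function name and error handling are my own additions:

```python
import math

# NAS scaling sketch: each FS7610/FS8600 controller handles ~1500
# concurrent users; systems ship with 2 controllers; clusters max out
# at 2 systems (FS7610) or 4 systems (FS8600), per this document.
USERS_PER_CONTROLLER = 1500
CONTROLLERS_PER_SYSTEM = 2
MAX_SYSTEMS = {"FS7610": 2, "FS8600": 4}

def nas_systems_needed(users: int, model: str) -> int:
    """Return NAS systems needed, or raise if the cluster limit is exceeded."""
    controllers = math.ceil(users / USERS_PER_CONTROLLER)
    systems = math.ceil(controllers / CONTROLLERS_PER_SYSTEM)
    if systems > MAX_SYSTEMS[model]:
        raise ValueError(f"{users} users exceeds a single {model} cluster")
    return systems

if __name__ == "__main__":
    print(nas_systems_needed(1000, "FS7610"))   # 1
    print(nas_systems_needed(6000, "FS7610"))   # 2 (cluster maximum)
    print(nas_systems_needed(12000, "FS8600"))  # 4
```

Beyond a single cluster's maximum, additional NAS clusters (or the FS8600) would be required.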
3.5 GPUs
NVIDIA GRID K1 and K2
NVIDIA GRID™ technology offers the ability to offload graphics
processing from the CPU to the GPU in virtualized
environments, allowing the data center manager to deliver true
PC graphics-rich experiences to more users for the first time.
NVIDIA's Kepler™-based GRID K1 and K2 boards are specifically
designed to enable rich graphics in virtualized environments.
GPU Virtualization
GRID boards allow hardware virtualization of the GPU. This
means multiple users can share a single GPU, improving user density while providing true PC performance
and compatibility.
Low-Latency Remote Display
NVIDIA's patented low-latency remote display technology greatly improves the user experience by
reducing the lag that users feel when interacting with their virtual machine. With this technology, the
virtual desktop screen is pushed directly to the remoting protocol.
Maximum User Density
NVIDIA GRID boards have an optimized multi-GPU design that helps to maximize user density. GRID K1
boards, which include four Kepler-based GPUs and 16GB of memory, are designed to host the maximum
number of concurrent users. GRID K2 boards, which include two higher end Kepler GPUs and 8GB of
memory, deliver maximum density for users of graphics-intensive applications.
Specs               GRID K1                    GRID K2
Number of GPUs      4 x Kepler GPUs (GK107)    2 x high-end Kepler GPUs (GK104)
Total CUDA cores    768 (192 per GPU)          3072 (1536 per GPU)
Core Clock          850 MHz                    745 MHz
Total memory size   16 GB DDR3                 8 GB GDDR5
Max power           130 W                      225 W
Form Factor         Dual slot (4.4" x 10.5")   Dual slot (4.4" x 10.5")
Aux power           6-pin connector            8-pin connector
PCIe                x16 (Gen3)                 x16 (Gen3)
Cooling solution    Passive                    Passive/Active
For more information on NVIDIA Grid, please visit: Link
NVIDIA Tesla M60
The NVIDIA® Tesla® M60 is a dual-slot 10.5 inch PCI Express Gen3
graphics card featuring two high-end NVIDIA Maxwell™ GPUs and
a total of 16GB GDDR5 memory per card. This card utilizes NVIDIA
GPU Boost™ technology which dynamically adjusts the GPU clock
to achieve maximum performance. Additionally, the Tesla M60
doubles the number of H.264 encoders over the NVIDIA® Kepler™
GPUs.
Accelerate your most demanding enterprise data center workloads
with NVIDIA® Tesla® GPU accelerators. Scientists can now crunch through petabytes of data up to 10x
faster than with CPUs in applications ranging from energy exploration to deep learning. Plus, Tesla
accelerators deliver the horsepower needed to run bigger simulations faster than ever before. For
enterprises deploying VDI, Tesla accelerators are perfect for accelerating virtual desktops.
Specs               Tesla M60
Number of GPUs      2 x NVIDIA Maxwell™ GPUs
Total CUDA cores    4096 (2048 per GPU)
Base Clock          899 MHz (Max: 1178 MHz)
Total memory size   16GB GDDR5 (8GB per GPU)
Max power           300W
Form Factor         Dual slot (4.4" x 10.5")
Aux power           8-pin connector
PCIe                x16 (Gen3)
Cooling solution    Passive/Active
3.6 Dell Wyse Thin Clients
The following Dell Wyse clients will deliver a superior VMware Horizon user experience and are the
recommended choices for this solution.
Wyse 5030 PCoIP Zero Client
For uncompromising computing with the benefits of secure, centralized
management, the Dell Wyse 5030 PCoIP zero client for VMware View is
a secure, easily managed zero client that provides outstanding graphics
performance for advanced applications such as CAD, 3D solids modeling, video editing and advanced
worker-level office productivity applications. Smaller than a typical notebook, this dedicated zero client is
designed specifically for VMware View. It features the latest processor technology from Teradici to process
the PCoIP protocol in silicon and includes client-side content caching to deliver the highest level of
performance available over 2 HD displays in an extremely compact, energy-efficient form factor. The Dell
Wyse 5030 delivers a rich user experience while resolving the challenges of provisioning, managing,
maintaining and securing enterprise desktops. For more information, please visit: Link
Wyse 5050 AIO PCoIP Zero Client
The Wyse 5050 All-in-One (AIO) PCoIP zero client combines
the security and performance of the Wyse 5030 PCoIP zero
client for VMware with the elegant design of Dell’s best-
selling P24 LED monitor. The Wyse 5050 AIO provides a best-
in-class virtual experience with superior manageability – at a
better value than purchasing a zero client and high resolution
monitor separately. A dedicated hardware PCoIP engine
delivers the highest level of display performance available for
advanced applications, including CAD, 3D solids modeling,
video editing and more. Elegant in appearance and energy
efficient, the Wyse 5050 AIO is a fully functional VMware View endpoint that delivers a true PC-like
experience. It offers the full benefits of an efficient and secure centralized computing environment, like
rich multimedia, high-resolution 3D graphics, HD media, and full USB peripheral interoperability locally
(LAN) or remotely (WAN). For more information, please visit: Link
Wyse 7030 PCoIP Zero Client
The Wyse 7030 PCoIP zero client from Dell offers an outstanding rich graphics user
experience with the benefits of secure, centralized management. It is a secure, easily
managed zero client that provides outstanding graphics performance for advanced
applications such as CAD, 3D solids modeling, video editing and advanced worker-level
office productivity applications. About the size of a notebook, this dedicated zero client is
designed specifically for VMware View. It features the latest processor technology from
Teradici to process the PCoIP protocol in silicon and includes client-side content
caching to deliver the highest level of display performance available over 4 HD displays
in a compact, energy-efficient form factor. The Dell Wyse 7030 delivers a rich user experience while
resolving the challenges of provisioning, managing, maintaining and securing enterprise desktops. For
more information, please visit: Link
Wyse 5010 Thin Client (ThinOS) with PCoIP
Designed for knowledge workers and power users, the Wyse 5010 thin client with
ThinOS and PCoIP is a high-performance thin client based on Dell Wyse ThinOS, the
virus-resistant firmware base designed for optimal thin client security, performance,
and ease-of-use. Highly secure, compact and powerful, the Wyse 5010 combines Dell
Wyse ThinOS with a dual-core AMD processor and a revolutionary unified graphics
engine for an outstanding user experience. The Wyse 5010 (ThinOS) addresses the
performance challenges of processing-intensive applications like computer-aided
design, multimedia, HD video and 3D modeling. Scalable enterprise-wide on-premise
or cloud-based management provides simple deployment, patching and updates. Take
a unit from box to productivity in minutes with auto configuration. Delivering
outstanding processing speed and power, security and display performance, the Wyse
5010 (ThinOS) offers a unique combination of performance, efficiency, and affordability. The Wyse 5010
(ThinOS) is Citrix HDX, Microsoft® RemoteFX, and VMware® Horizon View certified. It also supports
legacy peripherals via an optional USB adapter. For more information, please visit: Link
Wyse 5040 AIO Thin Client with PCoIP
The Dell Wyse 5040 AIO all-in-one (AIO) thin client with
PCoIP offers versatile connectivity options for use in a wide
range of industries. With four USB 2.0 ports, Gigabit Ethernet
and integrated dual band Wi-Fi options, users can link to their
peripherals and quickly connect to the network while working
with processing-intensive, graphics-rich applications. Built-in
speakers, a camera and a microphone make video
conferencing and desktop communication simple and easy. It
even supports a second attached display for those who need
a dual monitor configuration. A simple one-cord design and
out-of-box automatic setup makes deployment effortless
while remote management from a simple file server, Wyse Device Manager (WDM), or Wyse Thin Client
Manager can help lower your total cost of ownership as you grow from just a few thin clients to tens of
thousands.
Wyse 3030 LT Thin Client (ThinOS)
The Wyse 3030 LT thin client from Dell offers an excellent user experience within a cost-effective
offering, and features the virus resistant and extremely efficient Wyse ThinOS, for environments in
which security is critical—there’s no attack surface to put your data at risk. The 3030 LT delivers
outstanding performance based on its dual core processor design, and delivers smooth multimedia,
bi-directional audio and flash playback. Boot up in just seconds and log in securely to almost any
network. In addition, the Wyse 3030 LT is designed for smooth playback of high bit-rate HD video and
graphics within a very compact form factor, with very efficient energy consumption and low heat
emissions. Using less than 7 watts of electricity, the Wyse 3030 LT’s small size enables discrete
mounting options: under desks, to walls, and behind monitors, creating cool workspaces in every
respect.
Wyse 5010 Thin Client (Windows Embedded Standard 8)
In addition to Wyse ThinOS, the Dell Wyse 5010 thin client is available with Windows
Embedded Standard 8 and packs dual-core processing power into a compact form
factor for knowledge workers who need performance for demanding virtual
Windows® desktops and cloud applications. It’s also great for kiosks, and multi-touch
displays in a wide variety of environments, including manufacturing, hospitality, retail,
and healthcare. It features dual-core processing power and an integrated graphics
engine for a fulfilling Windows® 8 user experience. Knowledge workers will enjoy rich
content creation and consumption as well as everyday multimedia. Kiosk displays will
look great on a thin client that is Microsoft RemoteFX®, Citrix® HDX, VMware PCoIP,
and HD video-enabled. Operating with less than 9 watts of energy, the Dell Wyse 5010
(Windows) offers cool, quiet operations, potentially lowering your overall carbon
footprint.
Wyse 7010 Thin Client (Windows Embedded Standard 8)
The versatile Dell Wyse 7010 thin client runs Windows Embedded Standard 8 and gives
people the freedom to mix and match a broad range of legacy and cutting edge
peripheral devices. Ports for parallel, serial, and USB 3.0 offer fast, flexible connectivity.
Like all Dell Wyse thin clients, the new Dell Wyse 7010 (Windows) is one cool operator.
Its energy efficient processor – which out-performs other more power-hungry
alternatives – and silent fan-less design, all contribute to help lower an organization’s
carbon footprint through power requirements that are a fraction of traditional desktop
PCs.
Dell Venue 11 Pro
Meet the ultimate in productivity, connectivity and collaboration. Enjoy full
laptop performance in an ultra-portable tablet that has unmatched flexibility
for a business in motion. This dual purpose device works as a tablet when
you're out in the field but also enables you to work on your desktop in the
office thanks to an optional dock. For more information, please visit: Link
Dell Chromebook 13
The lightweight, easy-to-use Dell Chromebook 13 helps turn
education into exploration - without the worries of safety or security.
Priced to make 1:1 computing affordable today, Chromebook 13 is
backed by Dell support services to make the most of your budget for
years to come. The Chrome OS and Chrome browser get students
online in an instant and load web pages in seconds. A high-density
battery supported by a 5th Gen Intel® CPU provides up to 12 hours
of power. Encourage creativity with the Chromebook 13 and its
multimedia features that include a 13.3" screen, stereo sound and
webcam. For more information, please visit: Link
3.6.11 Wyse 5020 Thin Client
The Wyse 5000 series thin client is a highly efficient and powerful endpoint platform for
VMware virtual desktop environments. It is available with Windows Embedded Standard
and Wyse enhanced Linux. With its extremely compact size and high performance, it’s a
versatile virtual desktop endpoint. With a powerful, energy-saving quad core AMD G Series
APU in a compact chassis with dual-HD monitor support, the Wyse 5020 thin client
delivers stunning performance and display capabilities across 2D, 3D and HD video
applications. When it comes to energy efficiency, the Wyse 5000 series outperforms other,
more power-hungry desktop alternatives. Its silent, diskless and fanless design helps
reduce power usage to just a fraction of that of traditional desktops. For more
information, please visit Link
Wyse 7040 Thin Client
The Wyse 7040 is a high-powered, ultra-secure thin client. Equipped
with 6th generation Intel i5/i7 processors, it delivers extremely high
graphical display performance (up to three displays via display-port
daisy-chaining, with 4K resolution available on a single monitor) for
seamless access to the most demanding applications. The Wyse 7040 is
compatible with both data center hosted and client-side virtual desktop environments and is compliant
with all relevant U.S. Federal security certifications including OPAL compliant hard-drive options,
VPAT/Section 508, NIST BIOS, Energy-Star and EPEAT. Wyse enhanced Windows Embedded Standard 7P
OS provides additional security features such as BitLocker. The Wyse 7040 offers a high level of
connectivity including dual NIC, 6 x USB3.0 ports and an optional second network port, with either copper
or fiber SFP interface. Wyse 7040 devices are highly manageable through Intel vPRO, Wyse Device
Manager (WDM), Microsoft System Center Configuration Manager (SCCM) and Dell Command Configure
(DCC). For more information, please visit: Link
4 Software Components
4.1 What's new in this release of Horizon 7?
This new release of VMware Horizon delivers the following important new features and enhancements:
Just in Time Delivery with Instant Clone Technology
Reduce infrastructure requirements while enhancing security with Instant Clone technology and App
Volumes. Instantly deliver brand new personalized desktop and application services to end users every
time they log in. Just in Time Delivery with Instant Clone Technology is turning the traditional VDI
provisioning model on its head.
The booted-up parent VM can be “hot-cloned” to produce derivative desktop VMs rapidly, leveraging the
same disk and memory of the parent, with the clone starting in an already “booted-up” state. This process
bypasses the cycle time incurred with traditional cloning where several power cycle and reconfiguration
calls are usually made.
When Instant Clone technology is used in conjunction with VMware App Volumes and User Environment
Manager, administrators can use Instant Clone Technology to rapidly spin up desktops for users that retain
user customization and persona from session to session, even though the desktop itself is destroyed when
the user logs out. Virtual desktops benefit from the latest O/S and application patches automatically
applied between user logins, without any disruptive recompose.
Transformational User Experience with Blast Extreme
Blast Extreme is a new VMware-controlled protocol for a richer app and desktop experience:
A protocol optimized for mobile devices and overall lower client TCO
All existing Horizon remote experience features work with Blast Extreme and updated Horizon clients
Delivers a rich multimedia experience at lower bandwidth
Rapid client proliferation from a strong Horizon Client ecosystem
Blast Extreme is network-friendly: it leverages both TCP and UDP transports and is powered by H.264 to
get the best performance across more devices, while reducing CPU consumption for less device power
consumed and longer battery life.
Modernize Application Lifecycle Management with App Volumes
Transform application management from a slow, cumbersome process into a highly scalable, nimble delivery mechanism that provides faster application delivery and application management while reducing IT costs by up to 70%.
VMware App Volumes is a transformative solution that delivers applications to virtual desktops. Applications installed on multi-user AppStacks or user-specific writable volumes attach instantly to a desktop at user login. The App Volumes user experience closely resembles that of applications natively installed on the desktop. With App Volumes, applications become VM-independent objects that can be moved easily across data centers or to the cloud and shared with thousands of virtual machines.
Smart Policies with Streamlined Access
Improve end user satisfaction by simplifying authentication across all desktop and app services while
improving security with smarter, contextual, role-based policies tied to a user, device or location.
Policy-Managed Client Features enable IT to use policy to define which specific security-impacting
features are accessible upon login. These include clipboard redirection, USB, printing, and client drives.
All of these can be enforced contextually based on role, evaluated at logon/logoff,
disconnect/reconnect and at pre-determined refresh intervals for consistent application of policy across
the entirety of the user experience. For example, a user logging in from a network location considered
unsecured can be denied access to USB and printing. Additionally, PCoIP bandwidth profile settings allow
IT to customize the user experience based on user context and location.
True SSO streamlines secure access to a Horizon desktop when users authenticate via VMware Identity
Manager. A short-lived VMware Horizon virtual certificate is generated, enabling a password-free Windows
login, bypassing the usual secondary login prompt users would encounter before getting to their desktop.
4.2 VMware Horizon
The solution is based on VMware Horizon, which provides a complete end-to-end solution delivering
Microsoft Windows virtual desktops to users on a wide variety of endpoint devices. Virtual desktops are
dynamically assembled on demand, providing users with pristine, yet personalized, desktops each time
they log on.
VMware Horizon provides a complete virtual desktop delivery system by integrating several distributed
components with advanced configuration tools that simplify the creation and real-time management of
the virtual desktop infrastructure. For the complete set of details, please see the Horizon resources page at
http://www.vmware.com/products/horizon-view/resources.html
The core Horizon components include:
View Connection Server (VCS) – Installed on servers in the data center, the VCS brokers client
connections. It authenticates users, entitles users by mapping them to desktops and/or pools, establishes
secure connections from clients to desktops, supports single sign-on, sets and applies policies, acts as a
DMZ security server for connections from outside the corporate firewall, and more.
View Client – Installed on endpoints. This software creates connections to View desktops and can be run
from tablets; Windows, Linux, or Mac PCs or laptops; thin clients; and other devices.
View Portal – A web portal to access links for downloading full View clients. With the HTML Access
feature enabled, a View desktop can also run inside a supported browser.
View Agent – Installed on all VMs, physical machines and Terminal Service servers that are used as a
source for View desktops. On VMs the agent is used to communicate with the View client to provide
services such as USB redirection, printer support and more.
View Administrator – A web portal that provides admin functions such as deployment and management of
View desktops and pools, setting and controlling user authentication, and more.
View Composer – This software service can be installed standalone or on the vCenter server and enables
the deployment and creation of linked clone desktop pools (also called non-persistent desktops).
vCenter Server – This server provides centralized management and configuration of the entire virtual
desktop and host infrastructure, facilitating configuration, provisioning and management services. It is
installed on a Windows Server 2008 host (which can be a VM).
View Transfer Server – Manages data transfers between the data center and the View desktops that are
checked out on the end users' desktops in offline mode. This server is required to support desktops that
run the View client with Local Mode options; it performs replication and synchronization of offline
images.
4.3 Hypervisor Platforms
VMware vSphere 6
The vSphere hypervisor, also known as ESXi, is a bare-metal
hypervisor that installs directly on top of your physical
server and partitions it into multiple virtual machines. Each
virtual machine shares the same physical resources as the
other virtual machines and they can all run at the same
time. Unlike other hypervisors, all management
functionality of vSphere is performed through remote
management tools. There is no underlying operating
system, reducing the install footprint to less than 150MB.
VMware vSphere 6 includes three major layers:
Virtualization, Management and Interface. The Virtualization layer includes infrastructure and application
services. The Management layer is central for configuring, provisioning and managing virtualized
environments. The Interface layer includes the vSphere web client.
Throughout the Dell Wyse Datacenter solution, all VMware and Microsoft best practices and prerequisites
for core services are adhered to (NTP, DNS, Active Directory, etc.). The vCenter 6 VM used in the solution
is a single Windows Server 2012 R2 VM (Check for current Windows Server OS compatibility at
http://www.vmware.com/resources/compatibility) or vCenter 6 virtual appliance, residing on a host in the
management Tier. SQL server is a core component of the Windows version of vCenter and is hosted on
another VM also residing in the management Tier. It is recommended that View Composer is installed on a
standalone Windows Server 2012 R2 VM when using the vCenter Server Appliance.
For more information on VMware vSphere, visit http://www.vmware.com/products/vsphere
4.3.1.1 Virtual SAN 6.2
VMware Virtual SAN 6.2 is bundled with ESXi 6.0 Update 2. The new features in Virtual SAN 6.2 include:
Deduplication and compression. Virtual SAN 6.2 supports deduplication and compression to eliminate
duplicate data. This technique reduces the total storage space required to meet your needs. When you
enable deduplication and compression on a Virtual SAN cluster, redundant copies of data in a particular
disk group are reduced to a single copy. Deduplication and compression are available as a cluster-wide
setting on all-flash clusters.
RAID 5 and RAID 6 erasure coding. Virtual SAN 6.2 supports both RAID 5 and RAID 6 erasure coding to
reduce the storage space required to protect your data. RAID 5 and RAID 6 are available as a policy
attribute for VMs in all-flash clusters.
For more information on what’s new in vSAN 6.2 visit
http://pubs.vmware.com/Release_Notes/en/vsan/62/vmware-virtual-san-62-release-notes.html
4.4 NVIDIA GRID vGPU NVIDIA GRID™ vGPU™ brings the full benefit of NVIDIA hardware-accelerated graphics to virtualized
solutions. This technology provides exceptional graphics performance for virtual desktops equivalent to
local PCs when sharing a GPU among multiple users.
GRID vGPU is the industry's most advanced technology for sharing true GPU hardware acceleration
between multiple virtual desktops—without compromising the graphics experience. Application features
and compatibility are exactly the same as they would be at the user's desk.
With GRID vGPU technology, the graphics commands of each virtual machine are passed directly to the
GPU, without translation by the hypervisor. This allows the GPU hardware to be time-sliced to deliver the
ultimate in shared virtualized graphics performance.
vGPU Profiles
Virtual Graphics Processing Unit, or GRID™ vGPU™, is technology developed by NVIDIA® that enables
hardware sharing of graphics processing for virtual desktops. This solution provides a hybrid shared mode
allowing the GPU to be virtualized while the virtual machines run the native NVIDIA video drivers for better
performance. Thanks to OpenGL support, VMs have access to more graphics applications. When utilizing
vGPU, the graphics commands from virtual machines are passed directly to the GPU without any
hypervisor translation, all without sacrificing server performance.
The combination of Dell servers, NVIDIA GRID vGPU™ technology and NVIDIA GRID™ cards enable high-
end graphics users to experience high fidelity graphics quality and performance, for their favorite
applications at a reasonable cost.
NOTE: GRID vGPU™ is supported on NVIDIA K2 cards and is available as a licensed feature on NVIDIA
Tesla M60 cards.
For more information about NVIDIA GRID vGPU, please visit: LINK
NVIDIA® Tesla® M60 GRID vGPU Profiles:
All profiles below run on the Tesla M60 card (two physical GPUs per card):

vGPU Profile | Graphics Memory (Frame Buffer) | Virtual Display Heads | Maximum Resolution | Max vGPUs Per GPU / Per Card | Guest VM OS Support* | GRID License Required
M60-8Q | 8GB | 4 | 4096x2160 | 1 / 2 | Win 64-bit, Linux | GRID Virtual Workstation
M60-4Q | 4GB | 4 | 4096x2160 | 2 / 4 | Win 64-bit, Linux | GRID Virtual Workstation
M60-2Q | 2GB | 4 | 4096x2160 | 4 / 8 | Win 64-bit, Linux | GRID Virtual Workstation
M60-1Q | 1GB | 2 | 4096x2160 | 8 / 16 | Win 64-bit, Linux | GRID Virtual Workstation
M60-0Q | 512MB | 2 | 2560x1600 | 16 / 32 | Win 64-bit, Linux | GRID Virtual Workstation
M60-1B | 1GB | 4 | 2560x1600 | 8 / 16 | Win 64-bit | GRID Virtual PC
M60-0B | 512MB | 2 | 2560x1600 | 16 / 32 | Win 64-bit | GRID Virtual PC
M60-8A | 8GB | 1 | 1280x1024 | 1 / 2 | Win 64-bit | GRID Virtual Application
M60-4A | 4GB | 1 | 1280x1024 | 2 / 4 | Win 64-bit | GRID Virtual Application
M60-2A | 2GB | 1 | 1280x1024 | 4 / 8 | Win 64-bit | GRID Virtual Application
M60-1A | 1GB | 1 | 1280x1024 | 8 / 16 | Win 64-bit | GRID Virtual Application
NOTE: Windows guest operating systems are supported with all profiles. The supported 64-bit Linux
operating systems are listed in the table below and are only supported by vGPU 2.0 on the M60 cards.
Windows | Linux
Windows 7 (32/64-bit) | RHEL 6.6 & 7
Windows 8.x (32/64-bit) | CentOS 6.6 & 7
Windows 10 (32/64-bit) | Ubuntu 12.04 & 14.04 LTS
Windows Server 2008 R2 |
Windows Server 2012 R2 |
Card | vGPU Profile | Graphics Memory | Virtual Display Heads | Maximum Resolution | Physical GPUs | Max vGPUs Per GPU / Per Card | Intended User(s)
GRID K2 | K280Q | 4GB | 4 | 2560x1600 | 2 | 1 / 2 | Designer
GRID K2 | K260Q | 2GB | 4 | 2560x1600 | 2 | 2 / 4 | Designer
GRID K2 | K240Q | 1GB | 2 | 2560x1600 | 2 | 4 / 8 | Power User
GRID K2 | K220Q | 512MB | 2 | 2560x1600 | 2 | 8 / 16 | Power User
GRID K1 | K180Q | 4GB | 4 | 2560x1600 | 4 | 1 / 4 | Entry Designer
GRID K1 | K160Q | 2GB | 4 | 2560x1600 | 4 | 2 / 8 | Power User
GRID K1 | K140Q | 1GB | 2 | 2560x1600 | 4 | 4 / 16 | Power User
GRID K1 | K120Q | 512MB | 2 | 2560x1600 | 4 | 8 / 32 | Power User
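The per-GPU and per-card maximums in the profile tables above follow directly from dividing each physical GPU's frame buffer by the profile's frame buffer. A minimal sketch of that arithmetic, where the card specs (physical GPUs per board, memory per GPU) are assumptions drawn from public NVIDIA datasheets rather than values defined in this document:

```python
# Illustrative sketch of the density arithmetic behind the vGPU profile
# tables above. Card specs are assumptions from public NVIDIA datasheets.
CARDS = {
    "Tesla M60": {"gpus": 2, "memory_per_gpu_mb": 8192},
    "GRID K2":   {"gpus": 2, "memory_per_gpu_mb": 4096},
    "GRID K1":   {"gpus": 4, "memory_per_gpu_mb": 4096},
}

def vgpu_density(card: str, frame_buffer_mb: int) -> tuple:
    """Return (max vGPUs per GPU, max vGPUs per card) for a profile size."""
    spec = CARDS[card]
    per_gpu = spec["memory_per_gpu_mb"] // frame_buffer_mb
    return per_gpu, per_gpu * spec["gpus"]

# M60-2Q (2GB frame buffer): 4 per GPU, 8 per card, matching the table.
print(vgpu_density("Tesla M60", 2048))  # (4, 8)
```

The same function reproduces the K-series rows, e.g. K140Q (1GB on GRID K1) gives 4 per GPU and 16 per card.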
4.4.1.1 GRID vGPU Licensing and Architecture
NVIDIA GRID vGPU™ is offered as a licensable feature on Tesla M60 GPUs. vGPU can be licensed and
entitled using one of the three following software editions:
NVIDIA GRID Virtual Applications – For organizations deploying XenApp or other RDSH solutions.
Designed to deliver Windows applications at full performance. Supports up to 2 displays for virtualized
Windows applications.

NVIDIA GRID Virtual PC – For users who need a virtual desktop, but also need a great user experience
leveraging PC applications, browsers, and high-definition video. Supports up to 4 displays for Windows
desktops and NVIDIA Quadro features.

NVIDIA GRID Virtual Workstation – For users who need to use professional graphics applications with
full performance on any device, anywhere. Supports up to 4 displays for Windows or Linux desktops,
with NVIDIA Quadro, CUDA*, OpenCL* and GPU pass-through.

*CUDA and OpenCL only supported with M60-8Q profile
The GRID vGPU Manager, installed on the hypervisor via a VIB, controls the vGPUs that can be assigned
to guest VMs. A properly configured VM obtains a license from the GRID license server during the boot
operation for a specified license level. The NVIDIA graphics driver running on the guest VM provides
direct access to the assigned GPU. When the VM is shut down, it releases the license back to the server.
(Image provided courtesy of NVIDIA Corporation, Copyright NVIDIA Corporation)
5 Solution Architecture for Horizon 7
5.1 Management Server Infrastructure
The Management role requirements for the base solution are summarized below. Use data disks for role-
specific application files and data, logs, IIS web files, etc. in the Management volume. Present Tier 2
volumes with a special purpose (called out above) in the format specified below:
Role | vCPU | RAM (GB) | NIC | OS + Data vDisk (GB) | Tier 2 Volume (GB)
VMware vCenter | 2 | 8 | 1 | 40 + 5 | 100 (VMDK)
View Connection Server | 2 | 8 | 1 | 40 + 5 | -
SQL Server | 2 | 8 | 1 | 40 + 5 | 210 (VMDK)
File Server | 1 | 4 | 1 | 40 + 5 | 2048 (RDM)
Total | 7 | 28 | 4 | 180 | 2358
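As a quick sanity check, the per-role sizings can be totaled programmatically. The figures below are those from the table (with SQL Server at 2 vCPU, consistent with the stated total of 7); adjust them for your own deployment:

```python
# Cross-check of the management role sizing table above.
roles = {
    # role: (vCPU, RAM GB, NICs, OS + data vDisk GB, Tier 2 volume GB)
    "VMware vCenter":         (2, 8, 1, 45, 100),
    "View Connection Server": (2, 8, 1, 45, 0),
    "SQL Server":             (2, 8, 1, 45, 210),
    "File Server":            (1, 4, 1, 45, 2048),
}
# Transpose the per-role tuples and sum each resource column.
totals = [sum(column) for column in zip(*roles.values())]
print(totals)  # [7, 28, 4, 180, 2358]
```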
SQL Databases
The VMware databases are hosted by a single dedicated SQL Server 2012 SP1 VM (check DB
compatibility at: http://partnerweb.vmware.com/comp_guide2/sim/interop_matrix.php?) in the
Management layer. Use caution during database setup to ensure that SQL data, logs and TempDB are
properly separated onto their respective volumes. Create all databases that will be required for:
View Connection Server
vCenter
View Composer
Initial placement of all databases into a single SQL instance is fine unless performance becomes an issue,
in which case databases need to be separated into separate named instances. Enable auto-growth for
each DB.
Best practices defined by VMware are to be adhered to, to ensure optimal database performance.
The EqualLogic PS series arrays utilize a default RAID stripe size of 64K. To provide optimal performance,
configure disk partitions to begin from a sector boundary divisible by 64K.
Align all disks to be used by SQL Server with a 1024K offset and then format them with a 64K file
allocation unit size (data, logs and TempDB).
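The alignment rule above reduces to simple arithmetic: the partition offset must fall on a boundary divisible by the array's 64K RAID stripe size. A minimal sketch:

```python
# Minimal sketch of the alignment rule described above: the partition
# offset must be divisible by the array's RAID stripe size.
STRIPE_KB = 64        # EqualLogic PS series default stripe size
OFFSET_KB = 1024      # recommended partition offset for SQL Server disks
ALLOC_UNIT_KB = 64    # NTFS allocation unit for data, logs and TempDB

def is_aligned(offset_kb: int, stripe_kb: int = STRIPE_KB) -> bool:
    """True when the offset lands exactly on a stripe boundary."""
    return offset_kb % stripe_kb == 0

print(is_aligned(OFFSET_KB))  # True: 1024K is divisible by 64K
print(is_aligned(1000))       # False: a 1000K offset would straddle stripes
```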
DNS
DNS plays a crucial role in the environment, not only as the basis for Active Directory but also to control
access to the various VMware software components. All hosts, VMs and consumable software
components need to have a presence in DNS, preferably via a dynamic and AD-integrated namespace.
Microsoft best practices and organizational requirements are to be adhered to.
During the initial deployment, consider eventual scaling and access to components that may live on one
or more servers (SQL databases, VMware services). Use CNAMEs and the round-robin DNS mechanism to
provide a front-end "mask" to the back-end server actually hosting the service or data source.
5.1.2.1 DNS for SQL
To access the SQL data sources, either directly or via ODBC, a connection to the server name\instance
name must be used. To simplify this process, as well as to protect for future scaling (HA), alias these
connections in the form of DNS CNAMEs instead of connecting to server names directly. So instead of
connecting to SQLServer1\<instance name> for every device that needs access to SQL, the preferred
approach is to connect to <CNAME>\<instance name>.
For example, the CNAME "VDISQL" is created to point to SQLServer1. If a failure scenario were to occur
and SQLServer2 needed to start serving data, we would simply change the CNAME in DNS to point to
SQLServer2. No infrastructure SQL client connections would need to be touched.
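A hypothetical illustration of that pattern: every client composes its connection target from the alias rather than a physical server name, so a failover requires only a DNS change. The alias and instance names below are examples from the text, not fixed values:

```python
# Hypothetical sketch of the CNAME pattern described above: clients build
# their connection target from the alias, never the physical server name.
def sql_target(alias: str, instance: str) -> str:
    """Compose a SQL Server connection target as alias\\instance."""
    return f"{alias}\\{instance}"

# Clients always connect via the alias; repointing the CNAME from
# SQLServer1 to SQLServer2 leaves this string, and the clients, untouched.
print(sql_target("VDISQL", "VDI"))  # VDISQL\VDI
```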
5.2 Storage Architecture Overview
The Dell Wyse Datacenter solution has a wide variety of Tier 1 and Tier 2 storage options to provide maximum flexibility to suit any use case. Customers have the choice to leverage best-of-breed iSCSI solutions from EqualLogic or Fiber Channel solutions from Dell Compellent while being assured the storage Tiers of the Dell Wyse Datacenter solution will consistently meet or outperform user needs and expectations.
Local Tier 1 Storage
Selecting the local Tier 1 storage model means that the compute host servers use 10 locally installed hard drives to house the user desktop VMs. In this model, Tier 1 storage exists as local hard disks or SSDs on the Compute hosts themselves. To achieve the required performance level, RAID 10 is recommended for use across all local disks. A single volume per local Tier 1 Compute host is sufficient to host the provisioned desktop VMs along with their respective write caches.
Shared Tier 1 Storage
Selecting the Shared Tier 1 model means that the virtualization compute hosts are deployed without Tier 1 local storage and leverage shared storage hosted on a high performance Dell Storage array. In this model, shared storage is leveraged for Tier 1 and used for VDI execution and write cache. Based on the heavy performance requirements of Tier 1 for VDI, it is recommended to use separate arrays for Tier 1 and Tier 2 when possible. We recommend using 500GB LUNs for VDI and running no more than 125 VMs per volume along with their respective write caches. Sizing to 500 basic users will require 4 x 500GB volumes.
Volumes | Size (GB) | Storage Array | Purpose | File System
VDI-1 | 500 | Tier 1 | 125 x desktop VMs + WC | VMFS or NTFS
VDI-2 | 500 | Tier 1 | 125 x desktop VMs + WC | VMFS or NTFS
VDI-3 | 500 | Tier 1 | 125 x desktop VMs + WC | VMFS or NTFS
VDI-4 | 500 | Tier 1 | 125 x desktop VMs + WC | VMFS or NTFS
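The sizing rule above (500GB LUNs, at most 125 VMs plus write cache per volume) can be sketched as a small calculator; the constants are those stated in the text:

```python
import math

# Sketch of the Shared Tier 1 sizing rule above: 500GB LUNs with at most
# 125 desktop VMs (plus write cache) per volume.
VMS_PER_VOLUME = 125
VOLUME_GB = 500

def tier1_volumes(users: int) -> tuple:
    """Return (volume count, total Tier 1 capacity in GB) for a user count."""
    volumes = math.ceil(users / VMS_PER_VOLUME)
    return volumes, volumes * VOLUME_GB

print(tier1_volumes(500))  # (4, 2000): the 4 x 500GB volumes shown above
```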
Shared Tier 2 Storage
Tier 2 is shared iSCSI or FC storage used to host the Management server VMs and user data. EqualLogic 4100 or 6100 series 1Gb arrays are used for smaller scale deployments (Local Tier 1 only), the 6210 or 65x0 series for larger deployments (up to 16 in a group), or a single CML array scaled up to 10K users. The 10Gb iSCSI variants are intended for use in Shared Tier 1 solutions. The Compellent Tier 2 array, as specified in section 3.3.2, scales simply by adding disks.

The table below outlines the volume requirements for Tier 2. Larger disk sizes are chosen to meet the capacity needs of the customer. User data is presented either via a file server VM using RDM/PTD for small scale deployments or via NAS for large scale or HA deployments. The solution as designed presents all SQL disks using VMDK or VHDX formats. RAID 50 is used in smaller deployments but is not recommended for critical environments. The recommendation for larger scale and mission-critical deployments with higher performance requirements is to use RAID 10 or RAID 6 to maximize performance and recoverability. The following depicts the component volumes required to support a 500 user environment. Additional Management volumes are created as needed, along with size adjustments as applicable for user data and profiles.
Volumes | Size (GB) | Storage Array | Purpose | File System
Management | 350 | Tier 2 | vCenter, View Connection Server, File and SQL | VMFS
User Data | 2048 | Tier 2 | File Server/NAS | RDM/NTFS
User Profiles | 20 | Tier 2 | User profiles | VMFS
SQL Data | 100 | Tier 2 | SQL | VMFS
SQL Logs | 100 | Tier 2 | SQL | VMFS
TempDB Data | 5 | Tier 2 | SQL | VMFS
TempDB Logs | 5 | Tier 2 | SQL | VMFS
SQL Witness | 1 | Tier 2 | SQL (optional) | VMFS
Templates/ISO | 200 | Tier 2 | ISO storage (optional) | VMFS
Storage Networking – EqualLogic iSCSI
Dell’s iSCSI technology provides compelling price/performance in a simplified architecture while improving manageability in virtualized environments. Specifically, iSCSI offers virtualized environments simplified deployment, comprehensive storage management and data protection functionality, and seamless VM mobility. Dell iSCSI solutions give customers the “Storage Direct” advantage – the ability to seamlessly integrate virtualization into an overall, optimized storage environment.
If iSCSI is the selected block storage protocol, the Dell EqualLogic MPIO plugin or MEM (Multipathing Extension Module) is installed on all hosts that connect to iSCSI storage. This module is added via the command line using a Virtual Management Appliance (vMA) from VMware. The plugin allows for easy configuration of iSCSI on each host, enables the creation of new data stores or access to existing ones, and handles IO load balancing. The plugin also configures the optimal multi-pathing settings for the data stores. Some key settings to be used as part of the configuration:
Specify 2 IP Addresses for iSCSI on each host
Specify NICs
Specify Jumbo Frames at 9000 MTU
Initialize iSCSI initiator
Specify IP for the EqualLogic Storage group.
Storage Networking – Compellent Fiber Channel (FC)
Based on Fluid Data architecture, the Dell Compellent Storage Center SAN provides built-in intelligence and automation to dynamically manage enterprise data throughout its lifecycle. Together, block-level intelligence, storage virtualization, integrated software and modular, platform-independent hardware enable exceptional efficiency, simplicity and security.
Storage Center actively manages data at a block level using real-time intelligence, providing fully virtualized storage at the disk level. Resources are pooled across the entire storage array. All virtual volumes are thin-provisioned. And with sub-LUN tiering, data is automatically moved between Tiers and RAID levels based on actual use.
If Fiber Channel is the selected block storage protocol, the Compellent Storage Center Integrations for VMware vSphere client plug-in is installed on all hosts. This plugin enables all newly created data stores to be automatically aligned at the recommended 4MB offset. Although a single Fabric may be configured initially to reduce costs, as a best practice recommendation the environment is configured with two Fabrics to provide multi-pathing and end-to-end redundancy.
The following QLogic HBA BIOS settings are used:
Set the “connection options” field to 1 for point to point only
Set the “login retry count” field to 60 attempts
Set the “port down retry” count field to 60 attempts
Set the “link down timeout” field to 30 seconds
Set the “queue depth” (or “Execution Throttle”) field to 255
This queue depth is set to 255 because the ESXi VMkernel driver module and DSNRO can more
conveniently control the queue depth
5.2.5.1 FC Zoning
Zone at least one port from each server HBA to communicate with a single Compellent fault domain. The result of this is two distinct FC Fabrics and four redundant paths per server as shown in the diagram below. Round Robin or Fixed Paths are supported. You can leverage Compellent Virtual Ports to minimize port consumption as well as simplify deployment. Zone each controller’s front-end virtual ports, within a fault domain, with at least one ESXi initiator per server.
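The "four redundant paths per server" result above is simple multiplication. The port counts below are assumptions for a typical dual-fabric layout (one port of the dual-port HBA into each fabric, two controller front-end virtual ports zoned per fault domain), not values mandated by this document:

```python
# Illustrative arithmetic for the zoning result described above.
FABRICS = 2                     # two distinct FC fabrics
HBA_PORTS_PER_FABRIC = 1        # one port of the dual-port HBA per fabric
TARGETS_PER_FAULT_DOMAIN = 2    # controller virtual ports zoned per domain

paths_per_server = FABRICS * HBA_PORTS_PER_FABRIC * TARGETS_PER_FAULT_DOMAIN
print(paths_per_server)  # 4 redundant paths per server, as in the design
```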
5.3 Virtual Networking
Local Tier 1 – Rack - iSCSI
The network configuration in this model will vary between the Compute and Management hosts. The Compute hosts do not need access to iSCSI storage since they are hosting VDI VMs on local disk. Since the Management VMs are hosted on shared storage, they can take advantage of HA including Live Migration. The following outlines the VLAN requirements for the Compute and Management hosts in this solution model:
Compute hosts (Local Tier 1)
o Management VLAN: Configured for hypervisor infrastructure traffic – L3 routed via core
switch
o VDI VLAN: Configured for VDI session traffic – L3 routed via core switch
Management hosts (Local Tier 1)
o Management VLAN: Configured for hypervisor Management traffic – L3 routed via core
switch
o Live Migration VLAN: Configured for Live Migration traffic – L2 switched only, trunked
from Core (HA only)
o iSCSI VLAN: Configured for iSCSI traffic – L2 switched only via ToR switch
o VDI Management VLAN: Configured for VDI infrastructure traffic – L3 routed via core
switch
A VLAN for iDRAC is configured for all hardware management traffic – L3 routed via core switch
Following best practices, LAN and block storage traffic is separated in solutions >500 users. This traffic is
combined within a single switch in smaller stacks to minimize the initial investment; however, VLANs are
required for each traffic type to enable traffic separation. Each Local Tier 1 Compute host will have a quad
port NDC which includes both 10Gb and 1Gb interfaces. Configure the LAN traffic from the server to the
ToR switch as a LAG.
vDSwitches should be used as desired for VM traffic especially in larger deployments to ease the
management burden across numerous hosts. In the Local Tier 1 rack model the MGMT hosts connect to
shared storage and require additional VMK ports. Network share values should be configured equally
among the VMKernel port groups that share a physical set of network adapters.
The benefit of using a VMware Distributed Switch (vDS) is that it brings a consistent configuration across all
hosts. The vDS is configured at the vCenter level and provides central management and monitoring to all
hosts configured on the vDS.
The Compute hosts are configured in the same basic manner, minus the shared storage, with the desktop
VMs connecting to the primary port group on the external vSwitch.
Local Tier 1 – Blade
The network configuration in this model will vary slightly between the Compute and Management hosts. The Management VMs are hosted on local storage and optionally on shared Tier 2 if Live Migration is a requirement. The following outlines the VLAN requirements for the Compute and Management hosts in this solution model:
Compute hosts (Local Tier 1)
o Management VLAN: Configured for hypervisor infrastructure traffic – L3 routed via core
switch
o VDI VLAN: Configured for VDI session traffic – L3 routed via core switch
Management hosts (Local Tier 1)
o Management VLAN: Configured for hypervisor Management traffic – L3 routed via core
switch
o VDI Management VLAN: Configured for VDI infrastructure traffic – L3 routed via core
switch
o Live Migration VLAN: Configured for Live Migration traffic – L2 switched only, trunked
from Core (HA)
o iSCSI VLAN: Configured for iSCSI traffic – L2 switched only via ToR switch (HA)
A VLAN for iDRAC is configured for all hardware management traffic – L3 routed via core switch
5.3.2.1 vSphere – Non HA
Each Local Tier 1 Compute and Management blade host has a 10Gb dual port LOM in the A Fabric that flows through 2 x IOA blade interconnects. The B and C Fabrics are left open for optional expansion. Connections should pass through the blade mezzanines and interconnects per the diagram below. Configure the LAN traffic from the chassis interconnects to the ToR switch as a LAG, if possible.
5.3.2.2 vSphere – HA
Following best practices, iSCSI and LAN traffic is physically separated into discrete Fabrics for the HA design. If management HA is required, the B Fabric is populated with a 10Gb DP NIC connecting through an additional pair of IOA blade interconnects. Network share values should be configured equally among the VMkernel port groups that share a physical set of network adapters.
Shared Tier 1 – iSCSI
The network configuration in this model is identical between the Compute and Management hosts since all VMs are hosted on shared storage, whether Tier 1 for desktops or Tier 2 for Mgmt. The benefits of shared storage are available to all hosts such as Live Migration and HA. The following outlines the VLAN requirements for the Compute and Management hosts in this solution model:
Compute hosts (Shared Tier 1)
o Management VLAN: Configured for hypervisor Management traffic – L3 routed via core switch
o Live Migration VLAN: Configured for Live Migration traffic – L2 switched only, trunked from Core
o iSCSI VLAN: Configured for iSCSI traffic – L2 switched only via ToR switch
o VDI VLAN: Configured for VDI session traffic – L3 routed via core switch
Management hosts (Shared Tier 1)
o Management VLAN: Configured for hypervisor Management traffic – L3 routed via core switch
o Live Migration VLAN: Configured for Live Migration traffic – L2 switched only, trunked from Core
o iSCSI VLAN: Configured for iSCSI traffic – L2 switched only via ToR switch
o VDI Management VLAN: Configured for VDI infrastructure traffic – L3 routed via core switch
A VLAN for iDRAC is configured for all hardware management traffic – L3 routed via core switch
Following best practices, iSCSI and LAN traffic is physically separated into discrete Fabrics. Each Shared
Tier 1 Compute and Management host has a quad port NDC with 4 x 10Gb SFP+ ports each. iSCSI is
isolated onto its own vSwitch with redundant ports to guarantee storage bandwidth. Configure the LAN
traffic from the server to the ToR switch as a LAG.
The same basic configuration applies to rack or blade servers although the physical NIC and switching
components differ. Network share values should be configured equally among the VMkernel port groups
that share a physical set of network adapters.
Shared Tier 1 – FC
Using Fibre Channel based storage eliminates the need to build iSCSI into the network stack but requires additional storage fabrics to be built out. The network configuration in this model is identical between the Compute and Management hosts. The benefits of shared storage, such as Live Migration and HA, are available to all hosts. The following outlines the VLAN requirements for the Compute and Management hosts in this solution model:
Compute hosts (Shared Tier 1)
o Management VLAN: Configured for hypervisor Management traffic – L3 routed via core switch
o Live Migration VLAN: Configured for Live Migration traffic – L2 switched only, trunked from Core
o VDI VLAN: Configured for VDI session traffic – L3 routed via core switch
Management hosts (Shared Tier 1)
o Management VLAN: Configured for hypervisor Management traffic – L3 routed via core switch
o Live Migration VLAN: Configured for Live Migration traffic – L2 switched only, trunked from Core
o VDI Management VLAN: Configured for VDI infrastructure traffic – L3 routed via core switch
A VLAN for iDRAC is configured for all hardware management traffic – L3 routed via core switch
FC and LAN traffic are physically separated into discrete switching Fabrics. Each Shared Tier 1 Compute and Management host has a quad port NDC (4 x 10Gb) as well as 2 x 8Gb dual port FC HBAs. LAN traffic from the server to the ToR switch is configured as a LAG.
The same basic configuration applies to rack or blade servers although the physical NIC and switching
components differ. Network share values should be configured equally among the VMkernel port groups
that share a physical set of network adapters.
VMware NSX
Dell and VMware's Software Defined Datacenter (SDDC) architecture goes beyond virtualizing servers and storage, extending into the network as well. VMware NSX is a network virtualization platform
deployable on any IP network that is integrated with vSphere Virtual Distributed Switching and provides
the same features and benefits to networking as the ESXi hypervisor does to virtual machines. NSX
provides a complete set of logical networking elements and services—including logical switching, routing,
firewalling, load balancing, VPN, quality of service (QoS), and monitoring. These services are provisioned in
virtual networks through any cloud management platform leveraging the NSX APIs. Through Dell’s open
networking, companies are best able to take advantage of this disaggregation of a virtual network overlay
and an open physical underlay. Building a zero-trust security model is easy with NSX as each virtualized
workload can be protected with a stateful firewall engine providing extreme policy granularity. Any VM in
the datacenter can be rigorously secured or isolated if compromised, especially useful for virtual desktops
to prevent malicious code from attacking and spreading through the network.
VMware NSX is implemented via a layered architecture consisting of data, control and management
planes. The NSX vSwitch exists within, and requires, the vSphere Distributed Switch to abstract the physical
network while providing access-level switching in the hypervisor. NSX enables the use of virtual load
balancers, firewalls, logical switches and routers that can be implemented and scaled seamlessly to suit
any deployed architecture. VMware NSX complements Dell Networking components deployed ToR,
leaf/spine or at the core.
Key Features of Dell Open Networking and VMware NSX
Power of Choice: Choose from best-of-breed open networking platforms, operating systems and applications.
Accelerated Innovation: Take advantage of open networking with open source standards-based tools and expertise to help accelerate innovation.
Open Networking Platform: All Dell Networking data center switches support the Open Network Install Environment (ONIE), allowing customers to choose between multiple operating systems and meet their unique needs.
Hardware VTEP Gateway: Layer 2 gateway through VXLAN Tunnel End Points (VTEP) bridges virtual and physical infrastructures.
Virtual Switching: VXLAN based network overlays enable logical layer 2 overlay extensions across a routed (L3) fabric within and across data center boundaries.
Virtual Routing: Dynamic routing between virtual networks performed in a distributed manner in the hypervisor kernel, and scale-out routing with active-active failover with physical routers.
Distributed Firewalling: Distributed stateful firewalling, embedded in the hypervisor kernel for up to 20 Gbps of firewall capacity per hypervisor host.
Load Balancing: L4-L7 load balancer with SSL offload and pass through, server health checks, and App Rules for programmability and traffic manipulation.
For more information on VMware NSX and integrated offers from Dell Networking please see the Dell
Networking Solution Brief and the Reference architecture.
5.4 Scaling Guidance
The components are scaled either horizontally (by adding additional physical and virtual servers to the server pools) or vertically (by adding virtual resources to the infrastructure).
Eliminate bandwidth and performance bottlenecks as much as possible.
Allow future horizontal and vertical scaling with the objective of reducing the future cost of ownership of the infrastructure.
Component | Metric | Horizontal Scalability | Vertical Scalability
Virtual Desktop Host/Compute Servers | VMs per physical host | Additional hosts and clusters added as necessary | Additional RAM or CPU compute power
View Composer | Desktops per instance | Additional physical servers added to the Management cluster to deal with additional management VMs | Additional RAM or CPU compute power
View Connection Servers | Desktops per instance | Additional physical servers added to the Management cluster to deal with additional management VMs | Additional VCS Management VMs
VMware vCenter | VMs per physical host and/or ESX hosts per vCenter instance | Deploy additional servers and use linked mode to optimize management | Additional vCenter Management VMs
Database Services | Concurrent connections, responsiveness of reads/writes | Migrate databases to a dedicated SQL server and increase the number of management nodes | Additional RAM and CPU for the management nodes
File Services | Concurrent connections, responsiveness of reads/writes | Split user profiles and home directories between multiple file servers in the cluster. File services can also be migrated to the optional NAS device to provide high availability. | Additional RAM and CPU for the management nodes
5.5 Solution High Availability
High availability (HA) is offered to protect each layer of the solution architecture, individually if desired.
Following the N+1 model, additional ToR switches for LAN, iSCSI, or FC are added to the Network layer
and stacked to provide redundancy as required; additional compute and management hosts are added to
their respective layers; vSphere clustering is introduced in the management layer; SQL is mirrored or
clustered; an F5 device can be leveraged for load balancing; and a NAS device can be used to host file
shares. Storage protocol switch stacks and NAS selection will vary based on the chosen solution architecture.
The HA options provide redundancy for all critical components in the stack while improving the performance and efficiency of the solution as a whole.
Additional switches are added to the existing stack, thereby equally spreading each host's network connections across multiple switches.
Additional ESXi hosts are added in the Compute or Management layers to provide N+1 protection.
A number of enhancements occur at the Management tier, the first of which is the addition of another host. The Management hosts will then be configured in an HA cluster. All applicable Horizon server roles can then be duplicated on the new host, where connections to each will be load balanced via the addition of an F5 load balancer. SQL will also receive greater protection through the addition and configuration of a SQL mirror with a witness.
5.5.1 Compute layer HA (Local Tier 1)
The optional HA bundle adds an additional host in the Compute and Management layers to provide
redundancy and additional processing power to spread out the load. The Compute layer in this model
does not leverage shared storage, so hypervisor HA does not provide a benefit here. If a single host fails,
another will need to be spun up in the cluster, or extra server capacity can be pre-configured and running
in active status to handle the reconnection/startup of new desktops for the users from the failed host.
Because only the Management hosts have access to shared storage in this model, only these hosts can
leverage the full benefits of hypervisor HA. The Management hosts can be configured in an HA cluster
with or without the HA bundle. An extra server in the Management layer will provide protection should a
host fail.
vSphere HA Admission Control can be configured in one of three ways to protect the cluster. This will vary
largely by customer preference, but the most manageable and predictable options are percentage
reservations or a specified failover host. Reserving by percentage will reduce the overall per-host density
capabilities but will make some use of all hardware in the cluster; additions and subtractions of hosts will
require the cluster to be manually rebalanced. Specifying a failover host, on the other hand, will ensure
maximum per-host density numbers but will result in hardware sitting idle.
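As a rough illustration of this trade-off, the sketch below compares usable desktop capacity under the two recommended Admission Control approaches. The helper functions are illustrative only (not a vSphere API), and the example figures are assumptions:

```python
# Hypothetical sizing helpers, not a vSphere API: compare usable desktop
# capacity under the two recommended HA Admission Control policies.

def capacity_percentage_reserve(hosts, vms_per_host, reserve_pct):
    """Percentage reservation: every host gives up reserve_pct of its
    capacity for failover, but all hosts carry desktops."""
    return int(hosts * vms_per_host * (1 - reserve_pct))

def capacity_failover_host(hosts, vms_per_host, failover_hosts=1):
    """Dedicated failover host(s): full density on active hosts, but the
    standby hardware sits idle."""
    return (hosts - failover_hosts) * vms_per_host

# Example: 10 compute hosts at 140 desktops each
print(capacity_percentage_reserve(10, 140, 0.10))  # 1260 desktops
print(capacity_failover_host(10, 140))             # 1260 desktops
```

Note that at 10 hosts a 10% reservation and a single dedicated failover host yield the same usable capacity; the policies diverge as the cluster grows or the reservation percentage changes.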
5.5.2 vSphere HA (Shared Tier 1)
Both compute and management hosts are identically configured within their respective tiers and leverage
shared storage, so they can make full use of vSphere HA. The Compute hosts can be configured in HA
clusters within the boundaries of vCenter, with respect to limits imposed by VMware (3000 VMs per
vCenter). This will result in multiple HA clusters managed by multiple vCenter servers.
A single HA cluster will be sufficient to support the Management layer up to 10K users. An additional host
can be used as a hot standby or to spread the load across all hosts in the cluster.
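As a back-of-the-envelope sketch, the per-vCenter VM limit cited above can be turned into infrastructure counts. The helper function is my own, and the 64-hosts-per-cluster figure is an assumption based on vSphere 6.x configuration maximums, not a number from this document:

```python
import math

# Illustrative calculator (not an official Dell/VMware tool): estimate how
# many hosts, HA clusters, and vCenter servers a deployment needs.

VMS_PER_VCENTER = 3000   # per-vCenter limit cited in this document
HOSTS_PER_CLUSTER = 64   # assumed vSphere 6.x maximum hosts per cluster

def infrastructure_for(users, vms_per_host):
    hosts = math.ceil(users / vms_per_host)          # before any N+1 spare
    clusters = math.ceil(hosts / HOSTS_PER_CLUSTER)
    vcenters = math.ceil(users / VMS_PER_VCENTER)
    return hosts, clusters, vcenters

# Example: 10,000 users at 170 desktops per host
print(infrastructure_for(10000, 170))  # (59, 1, 4)
```

This makes the text's point concrete: a 10K-user deployment stays within one or two HA clusters but requires several vCenter servers at the cited limit.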
(Diagram: vCenter server managing the Compute Host Cluster – up to 10,000 VMs)
5.5.3 Horizon infrastructure protection
VMware Horizon infrastructure data protection is provided with Dell Data Protection – http://dell.to/1ed2dQf
5.5.4 Management server high availability
The applicable core Horizon roles will be load balanced via DNS by default. In environments requiring HA,
F5 can be introduced to manage load-balancing efforts. Horizon View, VCS and vCenter configurations
(optionally vCenter Update Manager) are stored in SQL, which will be protected via the SQL mirror.
If the customer desires, some Role VMs can optionally be protected further in the form of a cold stand-by
VM residing on an opposing management host. A vSphere scheduled task can be used, for example, to
clone the VM to keep the stand-by VM current. Note – in the HA option there is no file server VM; its
duties are handled by a NAS head.
The following will protect each of the critical infrastructure components in the solution:
The Management hosts will be configured in a vSphere cluster.
SQL Server mirroring is configured with a witness to further protect SQL.
5.5.5 Horizon VCS high availability
By running the VCS role as a VM in a VMware HA cluster, the VCS server can be guarded against a
physical server failure.
For further protection in an HA configuration, deploy multiple replicated View Connection Server
instances in a group to support load balancing and HA. Replicated instances must exist within a LAN-connected
environment; VMware best practice recommends against creating a group across a WAN or
similar connection.
5.5.6 Windows File Services high availability
High availability for file services will be provided by the Dell FS7610 and FS8600 clustered NAS devices. To
ensure proper redundancy, distribute the NAS cabling between ToR switches.
Unlike the FS8600, the FS7610 does not support 802.1q (VLAN tagging), so configure the connecting
switch ports with native VLANs for both iSCSI and LAN/VDI traffic ports. Best practice dictates that all ports
be connected on both controller nodes. The back-end ports are used for iSCSI traffic to the storage array
as well as internal NAS functionality (cache mirroring and cluster heartbeat). Front-end ports can be
configured using Adaptive Load Balancing or a LAG (LACP).
The Dell Wyse Solutions Engineering recommendation is to configure the original file server VM to use
RDMs to access the storage LUNs; migration to the NAS is then simplified by changing the presentation of
these LUNs from the file server VM to the NAS.
SQL Server High Availability
HA for SQL is provided via Always On using either Failover Cluster Instances or Availability Groups. This configuration protects all critical data stored within the database from physical server as well as virtual server problems. DNS is used to control access to the primary SQL instance. Place the principal VM that will host the primary copy of the data on the first Management host. Additional replicas of the primary database are placed on subsequent Management hosts.
Please refer to these links for more information: LINK1 and LINK2
5.6 VMware Horizon communication flow
6 Customer-provided solution components
6.1 Customer-provided storage requirements
In the event that a customer wishes to provide their own storage array solution for a Dell Wyse Datacenter solution, the following minimum hardware requirements must be met:
Feature | Minimum Requirement | Notes
Total Tier 2 Storage Space | User count and workload dependent | 1Gb/10Gb iSCSI or FC storage required on NL-SAS disks minimally.
Tier 1 IOPS Requirement | (Total Users) x workload IOPS | 6-30 IOPS per user may be required depending on workload. T1 storage should be capable of providing the user IOPS requirement concurrently to all hosted users.
Tier 2 IOPS Requirement | (Total Users) x 1 IOPS | File share usage and size of deployment may shift this requirement.
Data Networking | 1GbE Ethernet for LAN/T2 iSCSI; 10GbE Ethernet for T1 iSCSI; 8Gb FC | Data networking traffic should be isolated on dedicated NICs and HBAs in each applicable host.
RAID | 10, 6 | RAID 10 is leveraged for T1 local storage and can be used if required for shared T2. RAID 6 is used in shared T1 and can be optionally used for T2 as well.
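The IOPS rows above reduce to simple arithmetic. A minimal sketch follows; the function names and example workload figures are illustrative, not from the document:

```python
# Illustrative sizing helpers based on the requirements table above:
# estimate Tier 1 and Tier 2 IOPS that customer-provided storage must meet.

def tier1_iops(total_users, workload_iops_per_user):
    """T1 must serve every user's workload IOPS concurrently (6-30 per user
    depending on workload)."""
    return total_users * workload_iops_per_user

def tier2_iops(total_users, iops_per_user=1):
    """T2 (file shares, management) is sized at roughly 1 IOPS per user."""
    return total_users * iops_per_user

# Example: 500 users at an assumed 10 IOPS each
print(tier1_iops(500, 10))  # 5000
print(tier2_iops(500))      # 500
```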
6.2 Customer-provided switching requirements
Feature | Minimum Requirement | Notes
Switching Capacity | Line rate switch | 1Gb or 10Gb switching pertinent to the solution being implemented. 1Gb switching for iSCSI is only suitable for T2; T1 iSCSI requires 10Gb.
10Gbps Ports | Uplink to Core |
1Gbps Ports | 5x per Management server; 5x per Compute server; 6x per Storage Array |
VLAN Support | IEEE 802.1Q tagging and port-based VLAN support. |
Stacking Capability | Yes | The ability to stack switches into a consolidated management framework is preferred to minimize disruption and planning when uplinking to core networks.
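For planning, the per-device 1Gbps port counts above can be totaled for a given hardware footprint. The helper below is illustrative only; the example server counts are assumptions:

```python
# Total the 1Gbps switch ports required, using the per-device counts from
# the switching requirements table above.

PORTS_PER_MGMT_SERVER = 5
PORTS_PER_COMPUTE_SERVER = 5
PORTS_PER_STORAGE_ARRAY = 6

def one_gbps_ports(mgmt_servers, compute_servers, storage_arrays):
    return (mgmt_servers * PORTS_PER_MGMT_SERVER
            + compute_servers * PORTS_PER_COMPUTE_SERVER
            + storage_arrays * PORTS_PER_STORAGE_ARRAY)

# Example: 2 management servers, 8 compute servers, 1 storage array
print(one_gbps_ports(2, 8, 1))  # 56 ports
```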
7 Solution Performance and Testing
At the time of publication, these are the available density recommendations:
Workload | Operating System | Density
Standard | Windows 8.1 | 330
Enhanced | Windows 8.1 | 238
Professional | Windows 8.1 | 182
Standard | Windows 10 | 250
Enhanced | Windows 10 | 170
Professional | Windows 10 | 140
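Treating the quantities above as per-host user densities, a hypothetical sizing helper (the function, table structure, and N+1 spare are my additions) estimates compute host counts:

```python
import math

# Illustrative host-count estimator based on the density table above,
# assuming the figures are users per compute host.

DENSITY = {
    ("Standard", "Windows 10"): 250,
    ("Enhanced", "Windows 10"): 170,
    ("Professional", "Windows 10"): 140,
}

def compute_hosts(users, workload, os="Windows 10", n_plus_one=True):
    per_host = DENSITY[(workload, os)]
    hosts = math.ceil(users / per_host)
    return hosts + 1 if n_plus_one else hosts  # optional N+1 spare

print(compute_hosts(1000, "Enhanced"))  # 7 hosts (6 + 1 spare)
```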
For detailed validation results and analysis of these reference designs and more, please visit: LINK
Acknowledgements
Thanks to David Hulama of the Wyse Technical Marketing team for his support and assistance with
VMware data center EUC programs at Dell. David is a Senior Technical Marketing Advisor for VMware VDI
solutions at Dell. David has a broad technical background in a variety of technical areas and expertise in
enterprise-class virtualization solutions.
Thanks to Yinglong Jiang and Cormac Woods for their help in the validation of the solution in this RA.
Yinglong is a Senior Solution Engineer on the Dell Wyse Datacenter engineering team, primarily responsible
for testing VDI solutions built on Dell platforms, with a particular focus on software-defined storage and
hyper-converged data center infrastructure.
Cormac Woods is a Sr. Systems Engineer in the Desktop Virtualization Solutions Group at Dell. Cormac has
over 20 years of experience in IT and Telecoms product engineering. In addition, Cormac has a deep
background in IT systems support in financial and manufacturing environments. In his current role, he has
developed many of the solutions provided by the Dell Wyse Datacenter team using VMware Horizon as
well as other VDI software environments on multiple Dell server platforms.
About the Authors
Keith Keogh is a Lead Architect in the Cloud Client Engineering Team at Dell. Keith has extensive
experience with the Design, Implementation and Validation of EUC Enterprise Solutions.
Peter Fine is the Chief Architect for enterprise VDI solutions at Dell. Peter has extensive experience and
expertise on the broader Microsoft, Citrix and VMware solutions software stacks as well as in enterprise
virtualization, storage, networking and enterprise data center design.
Gus Chavira is the Dell Alliance Director to VMware and Chief Architect for VMware EUC-based solutions
at Dell. Gus has worked in the capacities of Sys Admin, DBA, Network and Storage Admin, Virtualization
Practice Architect, and Enterprise and Solutions Architect. In addition, Gus holds a B.S. in Computer Science.
Andrew Mc Daniel, CTO of Enterprise Engineering, manages the team responsible for examining new
technologies and research projects to evaluate the potential benefit of internal and external partners'
hardware and software to Dell's E2E solutions for EUC and their strategic integration.