7/31/2019 Virtualizing Microsoft Exchange With Citrix XenServer
Technical white paper
Virtualizing Microsoft Exchange
Server 2007 with Citrix XenServer
Table of contents
Introduction
    The challenge
    The solution
Project description
    Architecture, tools and methodologies
    Methodologies
    Test scenarios
Results and recommendations
    Results: Tests 1-9: Exchange Server 2007 on XenServer 5.0
    Results: Test 10: XenMotion VM Relocation
    Results: Test 11: XenServer high availability feature test
    Recommendations and best practices
Summary
Introduction
A company's e-mail system is arguably one of the most critical, if not the most critical, business computing platforms it has. Microsoft Exchange is the most widely used e-mail platform in the world. So when it comes to evaluating a virtualization platform such as Citrix XenServer for e-mail hosting, we naturally begin with Exchange. Microsoft's latest release, Exchange Server 2007, has successfully delivered on its promise of improved performance, reliability and scalability over its predecessor, Exchange Server 2003. Notable among the new features of Exchange Server 2007 is its 64-bit Windows architecture. Yet, with all of the improvements made, Exchange 2007 still has inherent limitations in its architecture that limit the customer's ability to effectively scale and manage workloads in a dynamic manner. The best way to transcend these limitations is to couple Exchange Server 2007 with a powerful virtualization platform like Citrix XenServer 5.0. With XenServer 5.0, Exchange Server's single-server resource limits can be overcome, making it possible to support more users with fewer computing resources than ever before. Features of XenServer like XenMotion live VM migration make it possible to deliver uninterrupted service and to manage and maintain server farms with the flexibility that IT managers want and need.

This white paper is designed to provide IT managers with the proof points necessary to validate that Exchange Server 2007 can be virtualized very well on the Citrix XenServer platform. It also provides a blueprint for them to do their own validation, which we highly recommend before embarking on a project to virtualize Exchange Server 2007. Our project team spent months learning the multitude of factors necessary to run an effective virtualization test project. While Microsoft provides effective testing tools, notably LoadGen, each datacenter environment is unique and requires taking the time to understand how things work. Therefore, it also became an objective for us to help shorten the learning curve for our customers. While this white paper won't eliminate the need for testing before deployment, our hope is that it will help make the test process go more smoothly. Those who run their own tests will find that by virtualizing Exchange Server 2007 with Citrix XenServer, they can not only take better advantage of the powerful features Exchange Server has to offer but also attain the benefits of a more flexible and manageable Exchange computing environment.
The challenge
It's no surprise that an increasing number of IT managers are interested in virtualizing their Exchange Server farms. Until now, Exchange has been seen as too critical to experiment with. However, the demands to keep a lid on capital expenditure (CapEx) and operational expenditure (OpEx) costs in the datacenter are rapidly causing them to re-think the need to virtualize. Exchange Server 2007 itself is creating several compelling reasons to consider virtualization, including:

1. As advanced and powerful as Exchange Server 2007 is, it still cannot take advantage of the ever-increasing capabilities of powerful server platforms. When Exchange administrators follow Microsoft's recommended single-server maximums of 8 CPU cores and 32 GB RAM, they quickly find they are unable to tap into the power of available servers that can have up to 24 CPU cores and 128 GB RAM.

2. The ability to partition Exchange Server 2007 deployments into different roles running on multiple servers creates greater efficiency but, in so doing, potentially adds to the footprint of the Exchange server farm. By design, each of the individual server roles in Exchange Server 2007 manages a small segment of the user population's activities. As a result, when one of those servers is compromised, the impact is limited to only those transactions handled by that particular server. This minimizes risk
and increases overall availability, but does so at the cost of requiring additional servers to spread the risk around and manage the overall workload. Likewise, because of server size limits, growth can only be addressed by adding more servers (scaling out) rather than adding more resources to the server (scaling up).

3. In addition, the need for security and redundancy continually calls for more and more Exchange servers to meet the demand for uptime and disaster recovery. Usually, for every production server, another backup server is configured to stand ready to take its place should failure occur.

As a result, today's Exchange Server 2007 environment is likely to have many more single-role servers than the Exchange Server 2003 farm with its multi-role servers. The response to this challenge is to use virtualization technology to meet the demand for growth using virtual Exchange Servers instead of adding more physical servers. Doing this should also reduce the overall server requirements for disaster recovery and backup. Ultimately, it should reverse the direction of the ever-increasing footprint of the Exchange Server farm while also simplifying server support and management. The goals of reducing CapEx and OpEx and of meeting the ever-increasing demands of the user community can both be met without compromise.
The solution: Citrix XenServer 5
Citrix XenServer, a member of the Citrix Delivery Center family, is open, powerful server virtualization that radically reduces datacenter costs by transforming static datacenter computing into a more dynamic, easy-to-manage server workload delivery center. Based on the open source Xen hypervisor, XenServer delivers a secure and mature server virtualization platform with near bare-metal performance. XenServer delivers industry-best TCO through faster application deployment, improved server utilization, aggressive pricing, simple management and accelerated application delivery. XenServer is easy to use, simple to set up and offers more intelligent storage capabilities through the most complete native integration with leading system and storage vendors.

The powerful provisioning capabilities now available in Citrix XenServer virtualization software make it the only platform that can integrate the deployment and management of virtual and physical servers into a unified dynamic virtualized infrastructure: a flexible, aggregated pool of computing and storage resources. With Citrix XenServer virtualization software, businesses can increase server and storage utilization, reducing costs of equipment, power, cooling and real estate. Virtualization-powered consolidation enables businesses to decommission older systems that are expensive to support and prone to failure, replacing several of them with a single, newer, more supportable, less power-hungry system.

By combining servers and storage into resource pools that can be apportioned to the applications with the highest business need, IT operations can be aligned to changing demand and business priorities. With XenMotion, live virtual machines can be migrated to new servers with no service interruption, allowing essential workloads to get needed resources, enabling zero-downtime maintenance and better application virtualization.
Citrix XenServer 5.0 delivers:

- The industry's best virtualization TCO, through low initial and ongoing costs, more efficient server utilization and administration, dramatically reduced facilities and power costs, and reduction of storage requirements by up to 90 percent.
- Open architecture based on the open source Xen hypervisor, for a platform that is compact, secure, fast and reliable, with seamless integration with existing infrastructure and management tools.
- Easy setup and administration, available pre-installed on over 50 percent of new servers from Dell, HP, Lenovo and NEC, requiring only a license key to start, with the light-weight yet powerful XenCenter management console and wizard-based tools to accelerate installation, maintenance and support.
- A thin, light and efficient 64-bit architecture delivering bare-metal performance and better workload consolidation ratios.
- Easy-to-use Windows and Linux virtualization solutions with built-in virtual machine lifecycle management, at breakthrough price and performance.
- Enterprise high availability through comprehensive fault detection and fast, automated recovery from server failure, exceeding four-nines availability, with replicated management and configuration data across all servers in a pool to eliminate single points of failure.
- Continuous application availability for applications with XenMotion live migration, allowing virtual machines to move seamlessly, eliminating nearly all planned downtime.
- Dynamic workload management that consolidates workload images and streams workloads on-demand to any virtual or physical server.
- Snap-in storage integration supporting all storage architectures, offering deep integration with leading storage platforms, reducing cost and complexity by leveraging existing storage systems and associated storage services directly.
- Support for the latest and most popular server and desktop versions of Microsoft Windows, as well as major Linux server distributions.
- Virtual machine format compatibility and portability with Microsoft Hyper-V.
- XenConvert physical-to-virtual (P2V) conversion tool and v2xva virtual-to-virtual (V2V) conversion tool.
Project description
Citrix XenServer enables customers to enhance their implementation of Microsoft Exchange Server 2007 by deploying Exchange either entirely on virtual machines hosted on XenServer or in a hybrid virtual/physical machine environment. Citrix XenServer is installed directly on bare-metal servers, requiring no dedicated host operating system. Open command-line (CLI) and programming (API) interfaces make it easy for vendors and enterprises to integrate virtualization with existing processes and management tools. The powerful provisioning capabilities make enterprise virtualization simpler and make it possible to deliver new servers hosting services in minutes, with efficient use of storage resources.

Using XenServer, companies can deliver Microsoft Exchange Server 2007 on high-performance virtual machines quickly and easily, and manage them and their related storage and networking resources from a single, easy-to-use management console (XenCenter). Each Microsoft Exchange server appears to users and to management software as if it were a separate physical computer but, in fact, many Exchange servers may share the resources of as few as one physical server.

With Citrix XenServer virtualization software, businesses running Microsoft Exchange Server 2007 can increase their server and storage utilization, reducing costs of equipment, power, cooling and real estate. With XenMotion, running Exchange Server virtual machines can be migrated to new servers with no service interruption, allowing essential workloads to get needed resources, enabling zero-downtime maintenance and better application virtualization.

This project proved the successful operation of Microsoft Exchange 2007 with Citrix XenServer 5.0 Enterprise Edition. Testing was conducted in the Citrix Virtualization and Management Division Solution
test lab in Bedford, MA. Advanced features including XenMotion and XenServer automated high availability were also tested and successfully validated with Microsoft Exchange 2007.
Architecture, tools and methodologies

Architecture: hardware
Microsoft Exchange Server 2007 hardware:
- One x64 server; four 2.93 GHz quad-core X7350 Xeon CPUs, 128 GB RAM (used for all Exchange performance tests)
- Two x64 servers, each with four quad-core CPUs and 32 GB of available RAM (used for XenMotion tests)
- Two x64 servers; four 1.6 GHz quad-core E7310 Xeon CPUs each, 32 GB RAM (used for high availability tests)
Exchange Server load generation server:
- One x64 server; 1.6 GHz quad-core CPU, 16 GB RAM
Domain controllers:
- Two x64 desktops; 1.8 GHz CPU, 2 GB RAM
Storage:
- NetApp FAS3050, configured for NAS/iSCSI
Architecture: software
- Citrix XenServer 5.0, Enterprise Edition
- Microsoft Exchange Server 2007, version 08.01.0240.006
- MS Windows Server 2003 R2 SP2, Enterprise Edition

Tools
Load generation software:
1. MS Best Practices Analyzer for Exchange: The Exchange Best Practices Analyzer provides recommendations that can be made to the environment to achieve greater performance, scalability and uptime.
2. MS JetStress for Exchange: Verifies the performance and stability of the disk subsystem before putting the Exchange server into a production environment.
3. MS LoadGen for Exchange: Provided by Microsoft, this is the official Exchange performance assessment tool. It runs end-to-end tests from the client to measure typical Exchange activities: Login, Send Mail, Create Tasks, Request Meetings, etc. Results are measured in terms of SendMail and storage I/O latencies as well as CPU utilization.
Microsoft LoadGen configuration:
- Profile: 100% heavy users for all tests except test 9 (16,000 mailboxes), where a light user profile was utilized, consistent with Microsoft's sizing recommendations
- Simulated day length: eight hours
- Test run length: eight hours
- Stress mode: disabled
- Indexing: off
- Distribution lists: none
- External SMTP mail: none
- Contact list for outgoing messages: none

The following performance counters were used to measure physical (native) and virtual Exchange server performance. (Note: the two SendMail latency measurements are provided by LoadGen; all other performance measurements are provided by PerfMon.)

1. SendMail action latency: average time (in milliseconds) taken to complete a Send Mail action. A SendMail action is representative of the complete process required to successfully send an e-mail through the Exchange system. According to Microsoft, SendMail action latency should be
Due to characteristics of LoadGen that cause abnormally high resource utilization at the start of each LoadGen operation, results were collected following the initial two-hour stabilization period for each test.
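The filtering step described above (discard the two-hour warm-up window, then summarize the remaining samples) can be sketched in a few lines. This is an illustrative example only; the counter values and timestamps below are hypothetical, not measurements from the test lab.

```python
from datetime import datetime, timedelta

def steady_state_avg(samples, run_start, warmup_hours=2):
    """Average a performance-counter series, ignoring samples taken
    during the initial stabilization (warm-up) window."""
    cutoff = run_start + timedelta(hours=warmup_hours)
    steady = [value for stamp, value in samples if stamp >= cutoff]
    return sum(steady) / len(steady)

# Hypothetical latency samples (ms), one every 30 minutes:
# inflated during warm-up, then settling into steady state.
start = datetime(2008, 10, 1, 9, 0)
readings = [900, 750, 600, 480, 110, 100, 95, 105, 98, 102]
samples = [(start + timedelta(minutes=30 * i), v)
           for i, v in enumerate(readings)]
average = steady_state_avg(samples, start)  # averages only the last six samples
```

Averaged this way, the reported figure reflects steady-state behavior rather than LoadGen's start-up spike.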
Methodology
1. Project phases
The project was divided into two primary phases, one completed in October 2008 and the other scheduled to be completed in December 2008. The essential difference in configurations between the two phases is the operating system used: Microsoft Server 2003 in phase one and Microsoft Server 2008 in phase two. During phase two, we will repeat comparisons not only against physical benchmarks, but also against other competitive virtualization providers.
2. Exchange Server roles
Multi-role Exchange servers consolidate the various single-function roles into a single multi-purpose server. With Exchange Server 2007, Microsoft has provided the ability to separate these functions and run them on independent servers. These functions include: Mailbox Server, Edge Transport Server, Hub Transport Server, Client Access Server and Unified Messaging Server. By having these functions sit on separate servers, administrators have the ability to scale out server roles on an individual basis rather than all functions at once, which is the only method available when using multi-role servers.

For the purposes of this project, we used multi-role virtual servers. We did this because in a closed test environment such as the one we used, the roles of all but the Mailbox server are significantly diminished and only come into play when messaging involves going outside the corporate firewall, accessing a PBX and/or voice messaging system, and other such forms of external connectivity that simply don't exist when running a simulated workload in a lab. We therefore believed that the results would be essentially the same whether individual or multi-role Exchange servers were used for testing.
3. Impact of user mailbox profiles on Exchange Server sizing
Microsoft sizing guidelines:
The heavy user profile (500 mailboxes per core) was used for all tests with the exception of test 9, a 16,000-mailbox configuration, where, per Microsoft's sizing recommendations, the maximum threshold of 1,000 light mailboxes per core was maintained. We used these guidelines as a foundation for the initial set of physical vs. virtual comparison tests and then experimented with variants for comparative purposes in the subsequent series of scalability tests.

Processor:
- Light: 1,000 users/core
- Heavy: 500 users/core

User profile         Messages sent/received (based on 50,000 per day)   Cache memory per mailbox
Light                5 / 20                                             2 MB
Medium/average       10 / 40                                            3 MB
Heavy - Very heavy   20-30 / 80-120                                     5 MB
Memory:
- 2 GB base. The base requirement for RAM is per virtual server and has no bearing on the number or type of mailboxes or vCPUs assigned.
- Plus 2.0-3.5 MB per light mailbox, or 5.0 MB per heavy user mailbox.

Example: Exchange Server 2007 minimum virtual server requirements based on the number of mailboxes and mailbox type:

500 heavy users / 500 mailboxes per core:
One virtual server w/ 1 vCPU (1 core)
2 GB base + 2.5 GB (500 x 5 MB) = 5 GB

1,000 heavy users / 500 mailboxes per core:
One virtual server w/ 2 vCPU (2 cores)
2 GB base + 5 GB (1,000 x 5 MB) = 7 GB

2,000 heavy users / 500 mailboxes per core:
One virtual server w/ 4 vCPU (4 cores)
2 GB base + 10 GB (2,000 x 5 MB) = 12 GB

8,000 heavy users / 500 mailboxes per core:
Four virtual servers w/ 4 vCPU each (16 cores)
2 GB base per VM + 40 GB (8,000 x 5 MB) = 48 GB

16,000 light users / 1,000 mailboxes per core:
Eight virtual servers w/ 2 vCPU each (16 cores)
2 GB base per VM + 48 GB (16,000 x 3 MB) = 64 GB
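The worked examples above all apply the same formula, which can be captured in a small helper. This sketch is illustrative only, not part of Microsoft's or Citrix's tooling; it follows the paper's decimal convention (1 GB = 1,000 MB) so its results match the figures shown.

```python
def exchange_vm_memory_gb(num_vms, mailboxes, cache_mb_per_mailbox, base_gb=2):
    """Minimum total RAM (GB) across all Exchange VMs: a 2 GB base per
    virtual server plus a per-mailbox cache allowance (5 MB for heavy
    users, 3 MB for light users). Uses 1 GB = 1,000 MB to match the
    paper's arithmetic."""
    return num_vms * base_gb + mailboxes * cache_mb_per_mailbox / 1000

# 2,000 heavy users on one VM:        2 + 10 = 12 GB
# 8,000 heavy users across four VMs:  8 + 40 = 48 GB
# 16,000 light users across eight VMs: 16 + 48 = 64 GB
```

Note that the base allowance scales with the number of virtual servers, not mailboxes: splitting the same 8,000 heavy users across more VMs raises the total by 2 GB per additional VM.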
4. Storage configuration: NetApp FAS3050 connected via iSCSI
The inherent limitations of Exchange Server 2007's predecessor, Exchange 2003, made storage configuration an extremely important and difficult process to master. Limits on memory caching in Exchange 2003 forced an increase in I/O activity to compensate. This greatly increased the need for storage that could continually handle increasing I/O demand. With Exchange 2007, expanded memory capabilities provide vastly greater levels of memory caching per mailbox than before. When configured to Microsoft's standards, as seen in the previous section, memory caching can greatly reduce the storage I/O demand, making storage easier to configure and lessening storage-related problems.

Exchange 2007 also performs very well with today's expanded types of data storage, including iSCSI and NAS. The same is true for Citrix XenServer 5.0. While Exchange Server is often deployed using Fibre Channel arrays, we chose to demonstrate the flexibility and effectiveness of deploying Exchange on XenServer 5.0 by using relatively inexpensive iSCSI storage from NetApp. The following is an overview of the storage we utilized and how it was configured.

Citrix Storage Delivery Services Adapter for NetApp Data ONTAP
Citrix Storage Delivery Services enables the management of attached and networked storage to be driven directly from the standard XenServer management interfaces, such as XenCenter, the command-line interface (xe), or third-party management tools based on XenAPI. Regardless of the
point of management, XenServer administrators can take advantage of direct integration between virtual server storage operations and the rich capabilities of the storage itself, including virtual disk provisioning and allocation, cloning and snapshot management.

Beginning with Citrix XenServer 4.1, the Storage Delivery Services capabilities have been enhanced by the inclusion of an interface, called an adapter, providing support for NetApp FAS, S Series and V Series storage systems. Through it, XenServer automatically configures storage to be presented as iSCSI LUNs and allocated to virtual machines. The adapter uses native NetApp APIs to implement XenAPI storage operations that provide access to hardware provisioning and snapshotting directly, without the proprietary layer that complicates management in other products. Users can reap the benefits of their investment in NetApp storage more effectively, protecting their investment in hardware and software as well as management processes and training.

Virtual disk images are implemented as Data ONTAP FlexVols. Depending on configuration requirements and system limits, these virtual disk images can be configured with one LUN per FlexVol, or with all LUNs for a single virtual machine mapped to the same FlexVol. With all LUNs mapped to the same FlexVol, creating a NetApp Snapshot for a virtual machine is implemented as a single operation within the NetApp storage system.

When Citrix XenServer is deployed with NetApp Data ONTAP, storage for new virtual machines can either be fully provisioned or created using space-efficient allocation. This flexibility allows IT administrators to optimize their storage systems for either highest performance or optimal resource sharing. Consistent and automatically-applied naming standards remove the confusion that can result from disconnected and arbitrary administrative policies. This consistency applies to every storage resource, from FlexVols to LUNs to snapshots.
Storage configuration notes:
Whether physical or virtual, Exchange Server 2007 requires two types of storage drives: one for the mailbox database and another for log files. These can be co-located within the same storage repositories. That means in the case of this test program, we used four sets of mailbox database and log file drives when testing with four VM Exchange Servers and eight sets of mailbox database and log file drives when testing with eight VM Exchange Servers. In the physical tests, we created a single mailbox database drive and a single log file drive.

NetApp managed LUNs:
Managed NetApp LUNs are accessible via the NetApp Data ONTAP Storage Repository (SR) type, and are hosted on a NetApp FAS3050 filer running Data ONTAP release 7.2.5. LUNs are allocated on demand via the XenServer management interface and mapped dynamically to the host via the XenServer host management framework (using the open-iSCSI software initiator) while a VM is active. All the thin provisioning and fast clone capabilities of the filer are exposed via the NetApp Data ONTAP adapter. We used NetApp thin provisioning in conjunction with this test project.

Storage repository sizing for this project:
Note: our storage configuration is atypical of what would normally be expected in a production environment. We chose this particular configuration based on the wide range of variables being tested, thus allowing us to avoid repeated storage re-configurations with each different series of tests. The following numbers represent the amount of storage required for the largest physical and virtual server tests conducted and do not include the additional storage required for NetApp Data ONTAP and for NetApp snapshotting.
Physical tests:
- Mailbox database: 1,000 GB
- Log files: 130 GB
- Total physical test storage: 1.13 TB

Virtual tests:
- VM repository: 192 GB
  Eight VMs @ 24 GB per VM (each VM has a 24 GB C: drive and virtual memory drive).
- Mailbox database repository: 12.4 TB
  Eight 1.5 TB mailbox drives + eight 20 GB log drives (circular logging enabled).
- Total virtual test storage: 12.592 TB
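The totals above are straight sums of the repository sizes in the decimal units used throughout this paper (1 TB = 1,000 GB). As a purely illustrative check:

```python
def total_storage_tb(repo_sizes_gb):
    """Sum storage repository sizes given in GB and report TB, using
    the decimal convention (1 TB = 1,000 GB) used in this paper."""
    return sum(repo_sizes_gb) / 1000

# Physical tests: 1,000 GB mailbox database + 130 GB log files
physical_tb = total_storage_tb([1_000, 130])   # 1.13 TB
# Virtual tests: 192 GB VM repository + 12,400 GB mailbox/log repository
virtual_tb = total_storage_tb([192, 12_400])   # 12.592 TB
```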
The following diagram illustrates the configuration employed:

Figure 1. Example: Virtualized Exchange Server 2007 storage configuration
NetApp FAS3050 filers with NetApp Data ONTAP release 7.2.5
Test scenarios
Note: Tests 1-4 focused on benchmarking the performance of Exchange 2007 in a native, physical-server environment and making direct comparisons of virtual vs. physical performance for the same Exchange configuration. Tests 5-9 focused on scalability of Exchange Server 2007 on XenServer. The remaining tests, numbers 10 and 11, focused on testing the live VM migration and high availability features of XenServer in an Exchange environment.

Test 1: 2,000 heavy user mailboxes, physical server benchmark test
Test 2: 2,000 heavy user mailboxes, virtual Exchange Server comparison test

Figure 2. Single physical (native) benchmark test configuration: 2,000 heavy user mailboxes (250 mailboxes per core)
Hardware/software configuration:

Test 1 and 2: descriptions
The purpose of these tests was to establish baseline performance numbers with which to compare the performance of a single physical Exchange Server 2007 server with that of a single XenServer host running four Exchange Server 2007 VMs. Performance categories measured include SendMail latency (average and 95th percentile), CPU capacity, disk I/O latency and RPC latency (average and 95th percentile).

Test 1: 2,000-mailbox, single physical (native) server benchmark
The physical server used in this test had four Intel X7350, 2.93 GHz quad-core CPUs (16 cores) with 128 GB RAM. Only eight cores and 32 GB RAM were needed to facilitate this test.
Storage: We used a NetApp FAS3050 filer configured for iSCSI.
LoadGen was configured for 2,000 heavy users with a density of 250 mailboxes per CPU core. The test ran for a period of eight hours and results were measured following a two-hour server stabilization period. The LoadGen server used for these tests had a single quad-core 1.6 GHz CPU and 16 GB RAM.

Test 2: 2,000-mailbox, single-host XenServer running four multi-role Exchange Server 2007 virtual machine servers
The host XenServer used in this test had four Intel X7350 Xeon, 2.93 GHz quad-core CPUs (16 cores) with 128 GB RAM. Only eight cores and 32 GB RAM were needed to facilitate this test. Each Exchange Server VM was configured with two virtual CPUs and 8 GB of virtual RAM.
Storage: We used a NetApp FAS3050 filer configured for iSCSI.
LoadGen was configured for 2,000 heavy mailboxes with a density of 250 mailboxes per CPU core. The test ran for a period of eight hours and results were measured following a two-hour server stabilization period. The LoadGen server used for these tests had a single quad-core 1.6 GHz CPU and 16 GB RAM.
Test 1 configuration:
- Hardware: Exchange Server: four quad-core 2.93 GHz CPUs, 128 GB RAM; LoadGen Server: 1.6 GHz, 16 GB RAM; Storage: NetApp FAS3050, iSCSI; Domain Controllers: two x64 desktops
- Citrix software: none
- Microsoft software: Exchange Server 2007 version 08.01.0240.006; Windows Server 2003 R2 Enterprise Edition SP2; Exchange LoadGen

Test 2 configuration:
- Hardware: Host XenServer: four quad-core 2.93 GHz CPUs, 128 GB RAM; LoadGen Server: 1.6 GHz, 16 GB RAM; Storage: NetApp FAS3050, iSCSI; Domain Controllers: two x64 desktops
- Citrix software: XenServer 5.0, Enterprise Edition
- Microsoft software: Exchange Server 2007 version 08.01.0240.006; Windows Server 2003 R2 Enterprise Edition SP2; Exchange LoadGen
Test 3: 4,000 heavy user mailboxes, physical server benchmark test
Test 4: 4,000 heavy user mailboxes, virtual Exchange Server comparison test

Figure 3. Multi-role virtual machine Exchange Server test configuration: 4,000 heavy user mailboxes (500 mailboxes per core)
Hardware/software configuration:

Test 3 and 4: descriptions
The purpose of these tests was to provide a direct, side-by-side comparison to the physical (native) single Exchange Server tests 1 and 2. As with those tests, performance categories measured include SendMail latency (average and 95th percentile), CPU capacity, disk I/O latency and RPC latency (average and 95th percentile).

Test 3: 4,000-mailbox, single physical (native) server benchmark
The physical server used in this test had four Intel X7350, 2.93 GHz quad-core CPUs (16 cores) with 128 GB RAM. Only eight cores and 32 GB RAM were needed to facilitate this test.
Storage: We used a NetApp FAS3050 filer configured for iSCSI.
LoadGen was configured for 4,000 heavy users using Microsoft's recommended maximum of 500 such mailboxes per CPU core. The test ran for a period of eight hours and results were measured following a two-hour server stabilization period. The LoadGen server used for these tests had a single quad-core 1.6 GHz CPU and 16 GB RAM.

Test 4: 4,000-mailbox, single-host XenServer running four multi-role Exchange Server 2007 virtual machine servers
The host XenServer used in this test had four Intel X7350 Xeon, 2.93 GHz quad-core CPUs (16 cores) with 128 GB RAM. Only eight cores and 32 GB RAM were needed to facilitate this test. Each Exchange Server VM was configured with two virtual CPUs and 8 GB of virtual RAM.
Storage: We used a NetApp FAS3050 filer configured for iSCSI.
LoadGen was configured for 4,000 heavy mailboxes using Microsoft's recommended maximum of 500 per CPU core. The test ran for a period of eight hours and results were measured following a two-hour server stabilization period. The LoadGen server used for these tests had a single quad-core 1.6 GHz CPU and 16 GB RAM.
Test 5: Four multi-role virtual Exchange Servers, 4,000 (4 x 1,000) heavy user mailboxes, 2 vCPU and 14 GB RAM per VM

Figure 4. Multi-role virtual machine Exchange Server test configuration: 4,000 heavy user Exchange mailboxes (500 per core)

Hardware/software configuration:
- Hardware: Host XenServer: four quad-core 2.93 GHz CPUs, 128 GB RAM; LoadGen Server: 1.6 GHz, 16 GB RAM; Storage: NetApp FAS3050; Domain Controllers: two x64 desktops
- Citrix software: XenServer 5.0, Enterprise Edition
- Microsoft software: Exchange Server 2007 version 08.01.0240.006; Windows Server 2003 R2 Enterprise Edition SP2; Exchange LoadGen
Test 5: description
The purpose of test 5 is to measure the impact of increasing the virtual RAM in each virtualized multi-role Exchange Server VM from 8 GB to 14 GB. We expect the overall benefit to these virtual servers to be the same as might be gained by adding RAM to a physical, native Exchange 2007 server, namely improved SendMail and RPC latency performance.
The host XenServer used in this test had four Intel Xeon X7350 2.93 GHz quad-core CPUs (16 cores) with 128 GB RAM. Only eight cores and 56 GB RAM were needed to facilitate this test. Each of the four Exchange Server VMs was configured with 2 vCPUs and 14 GB of vRAM.
LoadGen was configured for 4,000 heavy mailboxes using Microsoft's recommended maximum of 500 per CPU core. The test ran for a period of eight hours and results were measured following a two-hour server stabilization period. The LoadGen server we used was the same as in previous tests.
Tests 6 and 7: Four virtual Exchange Servers, 8,000 (4 × 2,000) heavy user mailboxes, 4 vCPU with 8 GB or 14 GB RAM per VM
Figure 5. Multi-role virtual machine Exchange Server test configuration: 8,000 heavy user Exchange mailboxes (500 per core)
Hardware/software configuration:
Tests 6 and 7: descriptions
The purpose of tests 6 and 7 was to determine the impact on Exchange Server performance when the four virtual machine servers of tests 4 and 5 are increased in size from 1,000 to 2,000 heavy user mailboxes each, for a total of 8,000 heavy user mailboxes. To handle the increased Exchange Server workload, we increased the virtual CPUs from two to four cores per VM. Also, in test 6 we assigned 8 GB of virtual RAM per VM while in test 7 we assigned 14 GB of virtual RAM per VM. In this way, we could once again determine the impact of increasing virtual RAM resources from 8 GB to 14 GB per Exchange Server VM.
Test 6: 8,000 mailboxes, single host XenServer running four multi-role Exchange Server 2007 virtual machine servers.
The host XenServer used in this test had four Intel Xeon X7350 2.93 GHz quad-core CPUs (16 cores) with 128 GB RAM. All 16 cores were assigned, but only 56 GB of RAM was needed to facilitate this test. Each of the four Exchange Server VMs was configured with 4 vCPUs and 8 GB of vRAM.
Storage: We used a NetApp FAS3050 filer configured for iSCSI.
LoadGen was configured for 8,000 heavy mailboxes with a density of 500 mailboxes per CPU core. The test ran for a period of eight hours and results were measured following a two-hour server stabilization period. The LoadGen server used for these tests had a single quad-core 1.6 GHz CPU and 16 GB RAM.
Test 7: 8,000 mailboxes, single host XenServer running four multi-role Exchange Server 2007 virtual machine servers.
The host XenServer used in this test had four Intel Xeon X7350 2.93 GHz quad-core CPUs (16 cores) with 128 GB RAM. All 16 cores were utilized, but only 56 GB of RAM was needed to facilitate this test. Each of the four Exchange Server VMs was configured with 4 vCPUs and 14 GB of vRAM.
Storage: We used a NetApp FAS3050 filer configured for iSCSI.
LoadGen was configured for 8,000 heavy mailboxes using Microsoft's recommended maximum of 500 per CPU core. The test ran for a period of eight hours and results were measured following a two-hour server stabilization period. The LoadGen server used for these tests had a single quad-core 1.6 GHz CPU and 16 GB RAM.
Hardware:
- Host XenServer: four quad-core 2.93 GHz CPUs, 128 GB RAM
- LoadGen server: 1.6 GHz, 16 GB RAM
- Storage: NetApp FAS3050, iSCSI
- Domain controllers: x64 desktop (2)
Citrix software: Citrix XenServer 5.0, Enterprise Edition
Microsoft software: Microsoft Exchange Server 2007 version 08.01.0240.006; Microsoft Windows Server 2003 R2, Enterprise Edition SP2; Microsoft Exchange LoadGen
Tests 8 and 9: Eight virtual Exchange Servers, 8,000 (8 × 1,000) heavy / 16,000 (8 × 2,000) light user mailboxes, 2 vCPU with 14 GB RAM per VM
Figure 6. Multi-role virtual machine Exchange Server test configuration: 8,000 heavy user or 16,000 light user Exchange mailboxes
Hardware/software configuration:
Tests 8 and 9: descriptions
The purpose of tests 8 and 9 is to determine the impact on Exchange Server performance of doubling the number of Exchange Server VMs from four to eight. In test 9, we were also looking to see the impact of changing the mailbox configuration from 8,000 heavy user mailboxes to 16,000 light users. In both cases, the Microsoft-recommended maximum number of either heavy user or light user mailboxes per core was maintained. In both test cases, we assigned 2 vCPUs and 14 GB of vRAM per Exchange Server VM.
Test 8: 8,000 mailboxes, single host XenServer running eight multi-role Exchange Server 2007 virtual machine servers.
This test maintained the 8,000 heavy user mailbox workload used in test 7, but increased the number of VMs from four to eight. We also kept the overall CPU resources the same, 16 cores; therefore each VM now had 2 vCPU cores. At the same time, we increased the overall RAM resources to 112 GB, allocating 14 GB per VM. The purpose of this test is to measure the effect of distributing the same 8,000-mailbox workload across twice the number of VMs, so that the deployment as a whole had the same aggregate vCPU but twice the aggregate vRAM to handle the same workload as in test 7. The rationale is that RAM is generally a less expensive resource than CPU when configuring VMs.
The host XenServer used in this test had four Intel Xeon X7350 2.93 GHz quad-core CPUs (16 cores) with 128 GB RAM. All 16 cores and 112 GB of the available RAM were needed to facilitate this test. Each of the eight Exchange Server VMs was configured with 2 vCPUs and 14 GB of vRAM.
LoadGen was configured for 8,000 heavy mailboxes with a density of 500 mailboxes per CPU core. The test ran for a period of eight hours and results were measured following a two-hour server stabilization period. The LoadGen server used for these tests was the same used in the previous tests.
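The trade-off described above (same aggregate CPU, doubled aggregate RAM) can be verified with a quick sketch using the VM counts and per-VM resources stated for tests 7 and 8; the helper function is ours, for illustration only:

```python
# Aggregate resources for the two 8,000-mailbox layouts compared in tests 7 and 8.

def aggregate(vms, vcpus_per_vm, ram_gb_per_vm):
    """Return (total vCPU cores, total vRAM in GB) for a set of identical VMs."""
    return (vms * vcpus_per_vm, vms * ram_gb_per_vm)

test7 = aggregate(vms=4, vcpus_per_vm=4, ram_gb_per_vm=14)  # four 4-core, 14 GB VMs
test8 = aggregate(vms=8, vcpus_per_vm=2, ram_gb_per_vm=14)  # eight 2-core, 14 GB VMs

print(test7)  # (16, 56)
print(test8)  # (16, 112): same aggregate CPU, twice the aggregate RAM
```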
Hardware:
- Host XenServer: four quad-core 2.93 GHz CPUs, 128 GB RAM
- LoadGen server: 1.6 GHz, 16 GB RAM
- Storage: NetApp FAS3050, iSCSI
- Domain controllers: x64 desktop (2)
Citrix software: Citrix XenServer 5.0, Enterprise Edition
Microsoft software: Microsoft Exchange Server 2007 version 08.01.0240.006; Microsoft Windows Server 2003 R2, Enterprise Edition SP2; Microsoft Exchange LoadGen
Test 9: 16,000 mailboxes, single host XenServer running eight multi-role Exchange Server 2007 virtual machine servers.
The host XenServer used in this test had four Intel Xeon X7350 2.93 GHz quad-core CPUs (16 cores) with 128 GB RAM. All 16 cores and 112 GB of the available RAM were needed to facilitate this test. Each of the eight Exchange Server VMs was configured with 2 vCPUs and 14 GB of vRAM.
LoadGen was configured for 16,000 light user mailboxes using Microsoft's recommended maximum of 1,000 per CPU core. The test ran for a period of eight hours and results were measured following a two-hour server stabilization period. The LoadGen server used for these tests was the same used in the previous tests.
Test 10: XenMotion feature, live Exchange Server VM migration
Figure 7. XenServer live virtual machine migration test configuration
Hardware/software configuration:
Test 10: description
The purpose of the live VM migration test was to certify that the XenMotion feature of XenServer 5 can successfully relocate live Exchange Server 2007 VMs from one host XenServer to another without any interruption. For this test, we used LoadGen to create a 4,000 heavy user mailbox workload across two multi-role Exchange 2007 virtual servers on the primary host server. Each of the virtual servers was assigned four virtual CPU cores and 8 GB of virtual RAM, using a density of 500 heavy user mailboxes per CPU core, or 2,000 heavy user mailboxes per virtual Exchange server. Both the primary and secondary XenServer hosts had their own storage repositories. The key success factor in this test is that each migrated VM not only had to successfully migrate from one server to the other, but also had to successfully connect to the storage repository containing the Exchange mailbox database. If this did not occur, LoadGen would immediately fail and the session would terminate.
For this purpose, we utilized two x64 servers, each with four quad-core CPUs and 32 GB of available RAM. XenMotion requires that the originating and destination XenServer host machines be the same type, meaning that they had to have the same CPU and chipset to successfully relocate a live VM from one host server to the other.
The LoadGen session would then be initiated and, following the initial two-hour stabilization period, we would attempt a live VM migration and then repeat the process a minimum of twenty times.
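For reference, a live migration like the one exercised here is typically driven from XenServer's xe command line. The sketch below only assembles the command; the UUID and host name are hypothetical placeholders, and the exact vm-migrate syntax should be confirmed against the XenServer documentation for your release:

```python
# Sketch: assemble the xe CLI invocation for a live (XenMotion) migration.
# "example-vm-uuid" and "xenserver-secondary" are placeholders, not real values.

def build_migrate_cmd(vm_uuid, dest_host):
    """Build the argument list for a live VM migration via the xe CLI."""
    return ["xe", "vm-migrate", "uuid=" + vm_uuid, "host=" + dest_host, "live=true"]

cmd = build_migrate_cmd("example-vm-uuid", "xenserver-secondary")
print(" ".join(cmd))
# On a real pool this list would be handed to subprocess.run(cmd, check=True).
```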
Hardware:
- Primary and secondary XenServer hosts: two x64 servers, each with four quad-core CPUs and 32 GB of RAM
- LoadGen server: 1.6 GHz, 16 GB RAM
- Storage: IP SAN, 1 TB
- Domain controllers: x64 desktop
Citrix software: Citrix XenServer 5.0, Enterprise Edition
Microsoft software: Microsoft Exchange Server 2007 version 08.01.0240.006; Microsoft Windows Server 2003 R2, Enterprise Edition SP2; Microsoft Exchange LoadGen
Test 11: XenServer high availability feature
Figure 8. XenServer high availability feature test configuration.
Test 11: description
With XenServer 5.0, resource pools can now be configured with automated high availability protection, allowing virtual machines on a failed host to automatically restart on another physical server according to priority and resource availability. The purpose of this test is to demonstrate the ability to take the VM that was running on a failed XenServer and successfully restart that exact same VM on another XenServer host in the resource pool. Two XenServer hosts will be configured, each with a single multi-role Exchange 2007 virtual server. Using LoadGen, each of these Exchange Server VMs will be given a workload to simulate a working Exchange 2007 operation. After both servers have been in operation for a period of time sufficient for them to have stabilized, one of the servers will be manually forced to fail, leaving only one active Exchange Server VM.
This test will measure the ability of the XenServer 5.0 high availability feature to detect the failure and to restart the VM that was running on the failed XenServer on a different host automatically. During the process, we will measure the overall time (in minutes and seconds) necessary for the new VM to be recognized by the LoadGen server as available to resume the Exchange Server workload, and capture several performance counter measurements both prior to and following the expected failover, including RPC averaged latency; physical disk queue length; processor total % processor time; RAM available; and Exchange database cache % hit.
The methodology for running the high availability test involved the following steps:
1. With the LoadGen test initialization process running, we will power down, and thereby cause to fail, the pool's master Exchange Server VM. The slave virtual Exchange server is then expected to become the master, and the new master Exchange VM boots up.
2. LoadGen will identify exceptions (errors) while the new Exchange server is unavailable, but is expected to resume the initialization process once the new virtual Exchange server is available to LoadGen. Initialization should then continue, uninterrupted, until completion.
3. Assumptions:
   a. PerfMon counters will be started prior to the LoadGen test start.
   b. LoadGen will run for an hour to stabilize (this is normal and the same as in the other project tests).
   c. Powering down a virtual Exchange server will attempt to simulate catastrophic server failure.
   d. PerfMon data analysis shows data at three points in the high availability test process:
      - LoadGen test start to server shutdown
      - VM boot on new server to stabilization point (no further LoadGen exceptions/errors detected)
      - Stabilization point to end of test
Results and recommendations
Results: Tests 1-9: Exchange Server 2007 on XenServer 5.0
Figure 10. Results, tests 1-9 (table columns: test; number of Exchange Servers, physical/VM; CPU cores per server; total cores; mailboxes per core; total mailboxes; RAM per server, GB; total RAM, GB; SendMail latency, average)
Analysis: Tests 1-4, physical benchmark comparison tests
CPU utilization: physical vs. XenServer 5.0
- Physical, 2,000 mailboxes (8 CPU cores, 32 GB RAM): average 5.4%, max 17.2%
- 4 VMs, 2,000 mailboxes (2 CPU cores, 8 GB RAM per VM): average 14.9%, max 19.7%
- Physical, 4,000 mailboxes (8 CPU cores, 32 GB RAM): average 11.0%, max 26.8%
- 4 VMs, 4,000 mailboxes (2 CPU cores, 8 GB RAM per VM): average 9.8%, max 24.0%
Disk IOPS: physical vs. XenServer 5.0
- Physical, 2,000 mailboxes (8 CPU cores, 32 GB RAM): 93
- 4 VMs, 2,000 mailboxes (2 CPU cores, 8 GB RAM per VM): 99
- Physical, 4,000 mailboxes (8 CPU cores, 32 GB RAM): 117
- 4 VMs, 4,000 mailboxes (2 CPU cores, 8 GB RAM per VM): 115
RPC latency (milliseconds): physical vs. XenServer 5.0
- Physical, 2,000 mailboxes (8 CPU cores, 32 GB RAM): average 2, 95th percentile 4
- 4 VMs, 2,000 mailboxes (2 CPU cores, 8 GB RAM per VM): average 3, 95th percentile 4
- Physical, 4,000 mailboxes (8 CPU cores, 32 GB RAM): average 6, 95th percentile 9
- 4 VMs, 4,000 mailboxes (2 CPU cores, 8 GB RAM per VM): average 5, 95th percentile 7
Performance counter | Test 1: physical benchmark | Test 2: XenServer 5.0 | Comparative summary
SendMail latency, average | 26 ms | 26 ms | No difference
SendMail latency, 95th percentile | 83 ms | 84 ms | Physical: 1 ms faster
CPU utilization, average | 5.4% | 14.9% | Physical: 9.5% less average CPU utilization
CPU utilization, max | 17.2% | 19.7% | Physical: 2.5% less max CPU utilization
Disk IOPS | 93 ms | 99 ms | Physical: 6 ms faster
RPC latency, average | 2 ms | 3 ms | Physical: 1 ms faster
RPC latency, 95th percentile | 4 ms | 4 ms | No difference
Tests 1 and 3: Physical benchmark tests, 2,000/4,000 heavy user mailboxes
The purpose of these tests was to establish baseline numbers against which to compare the performance of physical Exchange Server 2007 servers vs. virtual servers. Both servers were given equal amounts of server RAM and CPU cores. The only difference between the two tests was in the density of assigned mailboxes per CPU core. The results demonstrated that both the 2,000- and 4,000-mailbox multi-role Exchange Servers performed within the prescribed standards established by Microsoft.
By doubling the density from 250 to 500 mailboxes per CPU core, both SendMail latency performance counter results slightly more than doubled from test 1 to test 3. This was not unexpected and was well within Microsoft's recommended performance guidelines. The same holds true for CPU utilization, with results for test 3 slightly under double those of test 1. Disk IOPS was not affected much by the increase in mailboxes, increasing only slightly from test 1 to test 3, from 118 to 124 ms. RPC latency, both average and 95th percentile performance counters, was well within acceptable ranges. Average RPC latency increased by roughly two times and 95th percentile RPC latency by nearly three times due to the increased number of mailboxes, although these results were not unexpected.
Tests 2 and 4: XenServer comparisons to physical benchmarks
These tests demonstrated the performance of multi-role Exchange Servers when virtualized on Citrix XenServer. From an apples-to-apples comparison standpoint, test 2 could be directly compared to the earlier physical test 1, and test 4 could be directly compared to physical test 3. The results of these comparisons suggest the following:
Test 2:
In this 2,000 heavy user mailbox test, the four XenServer VMs not only performed well under the performance thresholds established by Microsoft, they were also nearly identical to the physical Exchange Server. Of the seven performance categories, only average CPU utilization showed any difference that could be called significant, and even then, average CPU utilization for the four VMs was only 14.9 percent.
Figure 15. Results comparison, tests 1 and 2: physical vs. virtual, 2,000 mailboxes
Conclusion: The overhead of virtualization at this level is anywhere from slight to none at all. The benefits of virtualizing Exchange Server 2007 on multiple VMs vs. a single physical server can be achieved while essentially maintaining the performance experienced using physical servers.
Test 4:
Again, as with test 2, virtualized performance in test 4 is well within Microsoft's recommendations. The performance of both the physical and virtual Exchange servers was in the range expected given the doubling of the workload from 2,000 to 4,000 mailboxes using the same resources. A 4,000 heavy user mailbox Exchange Server 2007 running in multi-role mode had better performance counter results in six out of seven performance categories than its physical server counterpart. The differences between physical and virtual Exchange servers at this workload level were only slight. The only differences of any significance were in average SendMail latency, where the virtual Exchange servers averaged 14 ms greater latency, and in 95th percentile SendMail latency, where the same virtual Exchange servers performed 70 ms faster than the physical server.
Figure 16. Results comparison, tests 3 and 4: physical vs. virtual, 4,000 mailboxes
Conclusion: This comparison shows that distributing a 4,000 heavy user Exchange Server 2007 workload across four virtual servers rather than on a single physical server can be achieved with little or no virtualization overhead and with little or no impact to Exchange Server performance.
Performance counter | Test 3: physical benchmark | Test 4: XenServer 5.0 | Comparative summary
SendMail latency, average | 14 ms | 28 ms | Physical: 14 ms faster
SendMail latency, 95th percentile | 174 ms | 104 ms | XenServer: 70 ms faster
CPU utilization, average | 11.0% | 9.8% | XenServer: 1.2% less average CPU utilization
CPU utilization, max | 26.8% | 24.0% | XenServer: 2.8% less max CPU utilization
Disk IOPS | 117 ms | 115 ms | XenServer: 2 ms faster
RPC latency, average | 6 ms | 5 ms | XenServer: 1 ms faster
RPC latency, 95th percentile | 9 ms | 6 ms | XenServer: 3 ms faster
Tests 5-9: XenServer 5.0 scalability tests
Figures 17-20. Results, tests 5-9

XenServer 5.0 scalability test results: SendMail latency (milliseconds)
- 4 VMs, 4,000 heavy mailboxes (2 CPU cores, 14 GB RAM per VM): average 28, 95th percentile 94
- 4 VMs, 8,000 heavy mailboxes (4 CPU cores, 8 GB RAM per VM): average 5, 95th percentile 164
- 4 VMs, 8,000 heavy mailboxes (4 CPU cores, 14 GB RAM per VM): average 36, 95th percentile 124
- 8 VMs, 8,000 heavy mailboxes (2 CPU cores, 14 GB RAM per VM): average 127, 95th percentile 234
- 8 VMs, 16,000 light mailboxes (2 CPU cores, 14 GB RAM per VM): average 44, 95th percentile 154

XenServer 5.0 scalability test results: CPU utilization
- 4 VMs, 4,000 heavy mailboxes (2 CPU cores, 14 GB RAM per VM): average 9.7%, max 37.3%
- 4 VMs, 8,000 heavy mailboxes (4 CPU cores, 8 GB RAM per VM): average 12.9%, max 28.9%
- 4 VMs, 8,000 heavy mailboxes (4 CPU cores, 14 GB RAM per VM): average 11.8%, max 39.3%
- 8 VMs, 8,000 heavy mailboxes (2 CPU cores, 14 GB RAM per VM): average 11.6%, max 29.2%
- 8 VMs, 16,000 light mailboxes (2 CPU cores, 14 GB RAM per VM): average 6.2%, max 18.5%

XenServer 5.0 scalability test results: Disk IOPS
- 4 VMs, 4,000 heavy mailboxes (2 CPU cores, 14 GB RAM per VM): 105
- 4 VMs, 8,000 heavy mailboxes (4 CPU cores, 8 GB RAM per VM): 140
- 4 VMs, 8,000 heavy mailboxes (4 CPU cores, 14 GB RAM per VM): 115
- 8 VMs, 8,000 heavy mailboxes (2 CPU cores, 14 GB RAM per VM): 87
- 8 VMs, 16,000 light mailboxes (2 CPU cores, 14 GB RAM per VM): 72
XenServer 5.0 scalability test results: RPC latency (milliseconds)
- 4 VMs, 4,000 heavy mailboxes (2 CPU cores, 14 GB RAM per VM): average 3.3, 95th percentile 4.8
- 4 VMs, 8,000 heavy mailboxes (4 CPU cores, 8 GB RAM per VM): average 9.3, 95th percentile 13.5
- 4 VMs, 8,000 heavy mailboxes (4 CPU cores, 14 GB RAM per VM): average 5.5, 95th percentile 8.3
- 8 VMs, 8,000 heavy mailboxes (2 CPU cores, 14 GB RAM per VM): average 3.8, 95th percentile 7.1
- 8 VMs, 16,000 light mailboxes (2 CPU cores, 14 GB RAM per VM): average 5.6, 95th percentile 11.1
Performance counter | Test 4: 8 GB RAM per VM | Test 5: 14 GB RAM per VM | Comparative summary
SendMail latency, average | 28 ms | 28 ms | 14 GB VMs: same
SendMail latency, 95th percentile | 104 ms | 94 ms | 14 GB VMs: 10% faster
CPU utilization, average | 9.8% | 9.7% | 14 GB VMs: 0.1% less average CPU utilization
CPU utilization, max | 24.0% | 37.3% | 14 GB VMs: 13.3% increased max CPU utilization
Disk IOPS | 115 ms | 105 ms | 14 GB VMs: 9% faster
RPC latency, average | 4.8 ms | 3.3 ms | 14 GB VMs: 31% faster
RPC latency, 95th percentile | 6.5 ms | 4.8 ms | 14 GB VMs: 26% faster
Test 5: Four VMs, 4,000 heavy user mailboxes, 14 GB RAM
This test compared the performance of the virtual server configuration used in test 4, but with 14 GB of RAM per VM instead of 8 GB. By increasing RAM to 14 GB, performance improved in five out of seven categories and was the same in one. Only max CPU utilization suffered any significant performance loss, and even then, results were well below maximum acceptable levels. Overall, the results showed little change from the 8 GB VM test.
Figure 19. Results comparison, phase one, tests 4 and 5: 4,000 mailboxes, 4 VMs, 8 GB vs. 14 GB per VM
Conclusion: Increasing vRAM resources appears to have only a slightly positive effect on virtual Exchange server performance in most categories, and in the one category where performance declined, the negative effect was minimal. We therefore conclude that 8 GB of RAM for these VMs was more than sufficient for the offered workload and that the expense of adding more vRAM did not provide sufficient ROI to warrant the increase.
Tests 6 and 7: 8,000 heavy user mailboxes, 8 GB and 14 GB RAM
In tests 6 and 7, we doubled the number of heavy user mailboxes from 4,000 to 8,000. The amount of vRAM per VM stayed the same as in tests 4 (8 GB) and 5 (14 GB); however, to handle the increased mailbox processing workload, we doubled the CPU resources from two cores per VM to four. All performance results for both 8,000 heavy user mailbox tests were well within Microsoft's recommended performance guidelines; however, performance with the 14 GB VMs improved in only five of the seven categories. The exceptions were average SendMail latency and max CPU utilization.
Figure 20. Results comparison, test 6 (8,000 mailboxes, 4 VMs each with 4 vCPUs/8 GB) vs. test 7 (8,000 mailboxes, 4 VMs each with 4 vCPUs/14 GB)
Conclusion: For practical purposes, the SendMail latency average of 36 ms seen in test 7, while not as low as the 5 ms seen in test 6, is still strong in its own right when compared to Microsoft's recommended performance limit of 500 ms. Likewise, a max CPU utilization rate of 39.3 percent given the 8,000-mailbox workload is great performance. The difference of 10 percent from that of test 6 might be explained by the typical variances experienced in a significant workload test environment and could therefore be considered normal. This all suggests that the results are very similar to those we saw when comparing earlier tests 4 and 5, where we found that increasing RAM resources from 8 to 14 GB per VM did not produce the expected performance ROI. This is supported when comparing the results of tests 4 and 6 with those of 5 and 7. When doing so, doubling the workload to 8,000 mailboxes is shown to maintain consistently great performance levels in all categories by providing additional CPU resources to handle the workload rather than additional RAM.
Test 8: 8,000 mailboxes, four 4-core VMs vs. eight 2-core VMs
Results: Spreading the 8,000-mailbox workload across eight VMs with two cores each, vs. four VMs with four cores each, increased both average and 95th percentile SendMail latency. All other performance counters showed improved performance in the 8-VM test vs. 4 VMs, although the improvements were only marginal. Performance in all seven categories was, again, well within Microsoft's recommendations.
Performance counter | Test 6: 8 GB RAM per VM | Test 7: 14 GB RAM per VM | Comparative summary
SendMail latency, average | 5 ms | 36 ms | 14 GB VMs: 6.2× slower
SendMail latency, 95th percentile | 164 ms | 124 ms | 14 GB VMs: 24% faster
CPU utilization, average | 12.9% | 11.8% | 14 GB VMs: 1.1% less average CPU utilization
CPU utilization, max | 28.9% | 39.3% | 14 GB VMs: 10.4% increased max CPU utilization
Disk IOPS | 140 ms | 115 ms | 14 GB VMs: 18% faster
RPC latency, average | 9.3 ms | 5.5 ms | 14 GB VMs: 41% faster
RPC latency, 95th percentile | 13.5 ms | 8.3 ms | 14 GB VMs: 38% faster
Performance counter | Test 7: 4 CPU cores, 14 GB RAM per VM | Test 8: 2 CPU cores, 14 GB RAM per VM | Comparative summary
SendMail latency, average | 36 ms | 127 ms | 8 VMs: 2.5× slower
SendMail latency, 95th percentile | 124 ms | 234 ms | 8 VMs: 89% slower
CPU utilization, average | 11.8% | 11.6% | 8 VMs: 0.2% less average CPU utilization
CPU utilization, max | 39.3% | 29.2% | 8 VMs: 10.1% less max CPU utilization
Disk IOPS | 115 ms | 87 ms | 8 VMs: 24% faster
RPC latency, average | 5.5 ms | 3.8 ms | 8 VMs: 31% faster
RPC latency, 95th percentile | 8.3 ms | 7.1 ms | 8 VMs: 14% faster
Figure 21. Results comparison, tests 7 and 8: 8,000 mailboxes, four 4-core VMs vs. eight 2-core VMs
Conclusion: SendMail latency was clearly the category most affected by the reduction in vCPU resources per VM, even with twice the number of VMs and effectively twice the aggregate vRAM. At this workload level, the VMs tended to be more CPU-bound than memory-bound. This is most likely due to the fact that Exchange 2007 uses multi-threading, whereby performance improves as the overall amount of CPU resource available to a server is increased: more CPU in fewer VMs tends to perform better than more VMs with less CPU. The reason the other performance categories showed only marginal improvement is likely the effective use of virtual memory caching, which minimizes I/O. RPC latency in test 7 was already very low, meaning that I/O levels were minimal to begin with. The increase in RAM had minimal ROI.
Test 9: 16,000 light user mailboxes, eight 2-core, 14 GB VMs
Result: This test shows that XenServer can easily support Microsoft's largest mailbox workload per CPU core, 1,000 light mailboxes per core. Since we committed all sixteen available CPU cores to this workload, a single XenServer host was configured to support 16,000 simultaneous mailbox users. Results in every performance category measured show that XenServer not only handled the workload, but did so with remarkable efficiency. SendMail and RPC latency performance across the board was extremely low given the high number of mailboxes. Because each of the eight VMs was given 14 GB of RAM, each of its 2,000 mailboxes effectively had 7 MB of memory cache to work with. This kept disk I/O rates down to a bare minimum, which would account for an average RPC latency of only 5.6 ms. At that rate, essentially all I/O activity was managed via memory caching. We could easily have had far less memory to work with and still would have been able to keep SendMail and RPC latency numbers within guidelines. Likewise, CPU resources were sufficient for the workload. Average CPU utilization was only 6.2 percent and at no time did it ever get beyond 19 percent. Two CPU cores per VM were more than sufficient for this workload.
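The roughly 7 MB-per-mailbox cache figure cited above follows directly from the test parameters; a one-line back-of-envelope check:

```python
# Back-of-envelope check of the per-mailbox cache figure for test 9:
# each VM has 14 GB of vRAM serving 2,000 light-user mailboxes.

RAM_PER_VM_GB = 14
MAILBOXES_PER_VM = 2000

cache_per_mailbox_mb = RAM_PER_VM_GB * 1024 / MAILBOXES_PER_VM
print(round(cache_per_mailbox_mb, 1))  # 7.2, i.e. roughly the 7 MB cited
```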
Figure 22. Results, test 9: 16,000 mailboxes, eight two-core, 14 GB VMs
Conclusion: Since this was the only light user mailbox workload test run in this project, we aren't able to make any side-by-side comparisons to the heavy user mailbox tests done earlier. However, these results, when added to the others, are clear evidence that virtual Exchange Servers on XenServer 5.0 are more than capable of handling the maximum recommended workloads, whether light or heavy. In reality, customers will have users that fit both profiles as well as in between. XenServer is able to effectively manage any combination of those workloads, regardless of size and quantity, on a single host server. To have run this same test using physical servers would have required no fewer than two eight-core servers. Virtualization, at a minimum, has effectively reduced the Exchange Server 2007 footprint by 50 percent with no measurable loss in performance.
Results: Test 10: XenMotion VM relocation
Result: We were able to successfully relocate live XenServer virtual machines running Exchange Server 2007 in multi-role mode from one XenServer host to another of the same configuration. This test was successfully run 27 times, each with the same result: no discernible disruption to Exchange Server operations. LoadGen did not indicate any errors before, during or after each migration test.
Conclusion: The results demonstrate that the XenMotion feature of XenServer 5.0 is well suited for an enterprise-class application such as Exchange Server 2007. Exchange administrators using XenServer 5.0 who need to perform hardware maintenance on host servers can expect XenMotion to make the process of migrating live VMs from one host server to another easy and effective.
Performance counter | Test 9: 16,000 light user mailboxes, 2 CPU cores, 14 GB RAM per VM
SendMail latency, average | 44 ms
SendMail latency, 95th percentile | 154 ms
CPU utilization, average | 6.2%
CPU utilization, max | 18.5%
Disk IOPS | 72 ms
RPC latency, average | 5.6 ms
RPC latency, 95th percentile | 11.1 ms
Results: Test 11: XenServer high availability feature test
Results:
Three minutes after power-down, the Exchange VM was functional on the other server. This short downtime was expected, and with more than one Exchange server in the environment, disruption would have been limited to only the one virtual Exchange server that was lost, and even then, for only the three minutes it took to restart on the other host server. After that time, all normal functionality resumed. LoadGen continued to record exceptions/errors for roughly 20 minutes. However, there is a solid probability these exceptions were based on how LoadGen works: create an object and then perform tasks on that object. Any attempt to create an object during that downtime would cause a cascading effect of subsequent tasks dependent upon that object failing. While these exceptions were occurring, LoadGen was successfully completing nearly all other tasks.
The data showed that the Exchange VM quickly resumed activity and performance levels when failed over. Resource utilization during these three data sets tracked expectations, with no severe spikes or drops noted that would indicate issues with Exchange caused by failover.
PerfMon data for the three categories (prior to failure, at the point of reinstatement and following the stabilization period):
Figure 23. Results, test 11: level-one high availability
Conclusions: The high availability feature of XenServer 5.0 works well with Exchange Server 2007 to provide a quick failover of Exchange server functionality in the event of primary server failure. The ability to automatically detect a virtual Exchange Server failure, to have the failed VM restart on the other XenServer host, and for all normal Exchange Server functionality to resume only three minutes after a catastrophic server failure is a powerful tool to have in a virtualized Exchange Server farm.
It is important to note that high availability solutions such as this should be carefully implemented, and should not be deployed when other similar forms of high availability are also present, such as Microsoft's Cluster Continuous Replication (CCR). Doing so could cause major conflicts and introduce unknown effects, possibly causing the failure of one or both of the high availability mechanisms.
Performance counter | Prior to failure | At point of restoration | Following re-stabilization
RPC averaged latency | 12.7 ms | 12.3 ms | 11.4 ms
Physical disk queue length (# elements) | 0.5 | 0.48 | 0.34
Processor total % processor time | 1.1% | 1.1% | 1.2%
RAM available | 5.516 GB | 6.419 GB | 6.151 GB
Exchange database cache % hit | 24.8% | 24.8% | 24.9%
Recommendations and best practices
XenServer 5.0 provides an excellent foundation for a thorough evaluation test environment
for Exchange Server 2007. An inherent feature of virtualization is the ease with which application
platforms like Exchange Server 2007 can be created and manipulated to test a wide range of
configuration options. Those familiar with XenServer already know that VMs can be built, copied
and changed with relative ease. The best part is that a far greater range of options can be tested in
a virtual lab setting than is possible in a traditional physical server test environment. Then, following a
careful evaluation and comparison of results vs. expectations, the same lab environment can be
rolled out into a production environment with minimal disruption. Finally, the test environment should be
maintained so that new products and features can be evaluated and troubleshooting performed, again
with minimal disruption to the production environment.
While literally dozens of configuration options are possible in a XenServer virtual lab environment, it's
best to first narrow the range of options down to something manageable that can be tested in
a relatively short period of time. One thing we discovered with LoadGen is that one of the more
significant factors in total test time is the initialization process, conducted prior to the start of
the actual testing. During the initialization period, LoadGen builds the Exchange environment,
including the individual mailboxes and the properties required for it to perform all necessary functions
typically found in an Exchange Server 2007 environment. While it might only take a few hours to
initialize a 500 or 1,000 mailbox test environment, larger mailbox configurations might require one or
more days. Therefore, you might want to consider keeping the initial size of the tests small
before settling on the configuration(s) you'll want to test for scalability.
The hardware investment required is minimal: evaluation software is available from Microsoft, and
evaluation licenses for XenServer 5.0 can be downloaded from www.citrix.com (select
XenServer). A very important aspect of establishing the test environment is storage configuration.
Microsoft has an easy-to-use storage configuration tool for Exchange Server available from
http://msexchangeteam.com/archive/2007/01/15/432207.aspx. There are similar tools
available from various storage vendors as well. The most important aspect of the tool is its ability to
configure storage specifically for Exchange Server 2007. Due to the differences between Exchange
Server 2007 and 2003, tools previously used for Exchange Server 2003 will no longer provide an
accurate storage configuration.
As was previously mentioned, one of the fundamental changes with Exchange Server 2007 is the
ability to deploy individual role servers rather than the combined multi-role servers that were necessary
with Exchange Server 2003. While not a requirement (customers can still use multi-role servers if they
so choose), Microsoft suggests that customers with fewer than 500 mailboxes might still
want to go the multi-role route; the benefits that come with separating Exchange Server roles typically
appear in deployments of more than 500 mailboxes. Customers currently using multi-role Exchange 2003
servers will find that a XenServer virtual lab environment is an excellent way of learning how these
individual server roles are configured and interact with each other, with storage and with other components
such as Active Directory. However, without some of these servers actually connecting to users outside
the firewall, to the internet, etc., you won't get a true sense of the demands placed on them in a self-
contained lab. Nonetheless, it is a worthwhile effort to test out these server roles.
Based on our own experience virtualizing Exchange Server 2007 with XenServer 5.0, we offer the
following suggestions to consider when developing your own test environment and related test plans:
Exchange Server 2007, with its 64-bit architecture and multi-threading capabilities, tends
to benefit more when CPU resources are concentrated in larger amounts across a smaller number of
VMs rather than the other way around. This is likely due to the way Microsoft splits the tasks
performed by Exchange Server 2007 across the CPU core stack: more CPU cores tend to lead
to more processing efficiency. At the same time, there is an obvious benefit to spreading the
overall workload over a larger number of virtual servers rather than a few. This is no different than it
is for physical servers: should a physical or virtual server fail, only those sessions and users served
by the failed server are generally affected. Therefore, we recommend that you balance both needs
when deciding how many virtual servers to use, leaning towards the solution that best meets your
overall business needs.
Exchange Server 2007 is far more efficient at using virtual memory caching to reduce the overall I/O
rate, as seen in the test results we obtained. Many of the disk IOPS and other latency-related test
results were in the single digits, indicating that most, if not all, storage I/O was being handled via
memory caching. Yet these results also showed that once the amount of memory necessary
to perform this function was met, additional memory provided no incremental benefit. Therefore, your
tests should attempt to find the appropriate amount of RAM for your specific
environment and avoid adding more than necessary where there is simply no ROI to be had.
High availability is always a key factor in any Exchange Server environment. This is no less true
when virtualizing Exchange Server 2007 using XenServer 5.0. The two XenServer features that
add to high availability, XenMotion and automated high availability failover, should both be tested
and taken into consideration when developing your HA/DR test plans. Both of these XenServer
features proved to be very effective in an Exchange Server 2007 environment.
Ultimately, the overall configuration that customers settle on will require several rounds of testing
prior to deployment. One of the advantages of XenServer virtualization for Exchange Server 2007 is the
ability to go from testing to production in a calculated manner. Not only is it unnecessary to virtualize
everything at the same time, it's essentially not a good idea. For example, of the five Exchange
Server 2007 roles, one, the Unified Messaging server, is not ideally suited to being
virtualized. This is because UM servers act as the bridge between the e-mail platform and
the company's PBX and voice mail system. Unified messaging, by design, is extremely sensitive to
quality of service (QoS) conditions, usually in combination with a company's VoIP network. Virtualized
servers behave very differently from physical servers when it comes to timing, which makes them a potential
cause of disruption in environments where QoS must be continually guaranteed, such as with voice
messaging. Therefore, we recommend that customers who elect to deploy their Exchange
2007 servers in individual server roles do not virtualize their UM servers. On the other hand, other individual
Exchange Server functions should virtualize very well. The following is an example of how a customer's
Exchange Server 2007 environment might look when virtualizing.
Figure 24. Example: Native Exchange Server 2007 Farm
In this diagram, the typical Exchange Server 2007 environment might have any combination of single-
role servers. While the use of multi-role servers is still permitted, the separation of Exchange Server
functions has distinct benefits (discussed earlier). The number of servers in each of these roles will
depend on the demand for the specific services; however, all are required in some number with the
exception of Unified Messaging servers. UM servers are not necessary if customers choose not to
provide users access to their PBX and voicemail systems via e-mail and vice versa.
In this example, a customer using each of these server functions to support 6,000 heavy-user
mailboxes might have as many as fifteen physical servers to effectively distribute the workload. These
servers would have the maximum recommended level of CPU and RAM resources: 8 CPU cores and
32 GB RAM. We elected to have one CPU core and 4 GB RAM for every 250 mailboxes in the mailbox
servers, and a corresponding number of CA, HT, ET and UM servers. This is in the mid-range of the
recommended load for heavy users. Therefore, each group of five servers could effectively support
2,000 users; fifteen physical servers in all.
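The sizing arithmetic behind this example can be sketched as follows. The constants simply mirror the figures quoted above (250 mailboxes per core, 8 cores per server, five server roles) and are illustrative rather than prescriptive:

```python
# Sketch of the physical-server sizing arithmetic from the example above.
# All constants come from the figures quoted in the text; adjust them for
# your own environment.

MAILBOXES = 6000            # heavy-user mailboxes to support
MAILBOXES_PER_CORE = 250    # mid-range recommended load for heavy users
CORES_PER_SERVER = 8        # maximum recommended CPU cores per server
ROLES = 5                   # Mailbox, CA, HT, ET and UM

mailboxes_per_server = MAILBOXES_PER_CORE * CORES_PER_SERVER  # 2,000
server_groups = MAILBOXES // mailboxes_per_server             # 3 groups
physical_servers = server_groups * ROLES                      # 15 servers

print(mailboxes_per_server, server_groups, physical_servers)  # 2000 3 15
```

Each "group" here is one server per role, jointly supporting 2,000 users, which is how the text arrives at fifteen physical servers.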
Figure 25. Example: Virtualized Exchange Server 2007 Farm
This diagram reflects how a customer might want to approach virtualizing their Exchange Server 2007
farm with XenServer 5.0. The most apparent difference is the number of physical servers required to
manage the workload. In this case, six servers, three for UM and three acting as XenServer hosts,
are all that is required. With the exception of the UM servers (for reasons discussed earlier), all other
Exchange Server functions can be handled by virtual Exchange Servers. Each of the XenServer
hosts has 16 CPU cores and 64 GB of RAM available to be assigned to VMs. While this is twice
the maximum size recommended by Microsoft for physical servers, it works perfectly for
virtualization: each of the virtual Exchange Servers can be configured to operate within Microsoft's
recommendations. For the purpose of this example, we chose to lower the number of mailboxes per CPU
to 125 and spread the workload over a larger number of VMs. As stated in the test results earlier,
there are benefits to having fewer VMs with more resources per VM vs. spreading workloads over a
larger pool of VMs. Since this example demonstrates the potential to reduce the overall number of
physical servers required to manage the same workload as in the previous example (6,000 heavy-user
mailboxes), we chose to have a larger pool of VMs, each managing a smaller workload. In this case, the
number of virtual Exchange Server VMs in the XenServer pool is 24. The Exchange administrator now
has the ability to assign as many as eight of these VMs per host server, effectively providing the same
overall Exchange Server 2007 workload on six physical servers as the fifteen physical servers in the
previous example. Even if the Exchange administrator wanted to lower the number of VMs per host
server to only four, that would still require only nine physical servers, rather than fifteen.
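The virtualized server counts quoted above can be verified with the same kind of sizing sketch. Again, the constants are the illustrative figures from the example (24 VMs, three physical UM servers), not fixed recommendations:

```python
# Sketch of the virtualized sizing arithmetic from the example above.
# Constants are the illustrative figures quoted in the text.
import math

MAILBOXES = 6000
VM_COUNT = 24       # virtual Exchange Server VMs in the XenServer pool
UM_SERVERS = 3      # UM remains on physical hardware

mailboxes_per_vm = MAILBOXES // VM_COUNT  # 250 mailboxes per VM

for vms_per_host in (8, 4):
    hosts = math.ceil(VM_COUNT / vms_per_host)  # XenServer hosts needed
    total = hosts + UM_SERVERS                  # plus physical UM servers
    print(vms_per_host, hosts, total)
# 8 VMs/host -> 3 hosts + 3 UM = 6 physical servers
# 4 VMs/host -> 6 hosts + 3 UM = 9 physical servers
```

At eight VMs per host the pool fits on three XenServer hosts (six physical servers in total); halving the density to four VMs per host still needs only nine physical servers versus fifteen in the all-physical example.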
In terms of increased availability, as we demonstrated earlier, XenServer 5.0 provides live virtual
Exchange Server migration from one host server to another, allowing server maintenance to be
performed with less impact to users and less management time for IT. We also demonstrated that,
when using the automated high availability failover feature of XenServer 5.0, even when a catastrophic
failure of a virtual Exchange Server occurs, recovery of the failed VM can take place in only minutes,
again with little overall disruption to users. The need for as many as thirty physical servers for a
fully-duplicated environment in the previous example no longer exists.
Is this the scenario that a customer would want to implement when virtualizing right from the start?
Likely not. Customers should first test this and other possible virtualization configurations in their virtual
lab and then decide on an implementation plan that allows them to migrate from physical to virtual
computing at a rate that makes the most sense for them. Virtualization using XenServer 5.0 makes this
easy to do. Further, whether customers are upgrading from Exchange Server 2003 or have
already upgraded to Exchange Server 2007 will present a different set of opportunities and options.
Perhaps the most important thing to take away from these recommendations is the knowledge that
virtualization of Exchange Server 2007 is not only possible, but in most cases should also be quite
beneficial. The next most important thing is the realization that virtualization of
Exchange Server 2007 isn't a one-size-fits-all proposition; it requires careful planning and testing
prior to implementation. It is our hope that this white paper will provide valuable assistance to
those customers who have determined that virtualizing their Exchange Server 2007 environment has
the potential for numerous strategic and operational benefits to their IT service delivery.
Summary
Virtualization can add operational flexibility and optimize resource utilization for most organizations'
Microsoft Exchange 2007 deployments. Citrix XenServer 5 provides unprecedented TCO gains via
bare-metal performance, and increases the scalability and responsiveness of the Exchange platform.
Rather than offering trade-offs between performance and flexibility, XenServer allows IT organizations
to maximize both performance and manageability.
About Citrix
Citrix Systems, Inc. (Nasdaq:CTXS) is the global leader and the most trusted name in application delivery. More than 215,000
organizations worldwide rely on Citrix to deliver any application to users anywhere with the best performance, highest security and
lowest cost. Citrix customers include 100 percent of the Fortune 100 companies and 99 percent of the Fortune Global 500, as well
as hundreds of thousands of small businesses and prosumers. Citrix has approximately 8,000 partners in more than 100 countries.
Annual revenue in 2007 was $1.4 billion.
©2008 Citrix Systems, Inc. All rights reserved. Citrix, ICA, Citrix Delivery Center, Citrix XenApp, Citrix XenServer, Citrix NetScaler, Citrix XenDesktop, Citrix Workflow Studio, Citrix Access Gateway, Citrix EdgeSight, Citrix Password Manager, Citrix Branch Repeater, Citrix WANScaler, Citrix Application Receiver and Citrix Desktop Receiver are trademarks of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Microsoft and Windows are registered trademarks of Microsoft Corporation in the U.S. and/or other countries. All other trademarks and registered trademarks are property of their respective owners.
1108/PDF
Citrix Worldwide
Worldwide headquarters
Citrix Systems, Inc.
851 West Cypress Creek Road
Fort Lauderdale, FL 33309
USA
T +1 800 393 1888
T +1 954 267 3000
Regional headquarters
Americas
Citrix Silicon Valley
4988 Great America Parkway
Santa Clara, CA 95054
USA
T +1 408 790 8000
Europe
Citrix Systems International GmbH
Rheinweg 9
8200 Schaffhausen
Switzerland
T +41 52 635 7700
Asia Pacific
Citrix Systems Hong Kong Ltd.
Suite 3201, 32nd Floor
One International Finance Centre
1 Harbour View Street
Central
Hong Kong
T +852 2100 5000
Citrix Online division
6500 Hollister Avenue
Goleta, CA 93117
USA
T +1 805 690 6400
www.citrix.com