
    Exchange 2010 Tested Solutions: 9000 Mailboxes in Two Sites Running Hyper-V on Dell M610 Servers, Dell EqualLogic Storage, and F5 Load Balancing Solutions

    Rob Simpson, Program Manager, Microsoft Exchange Server; Akshai Parthasarathy, Systems

    Engineer, Dell; Casey Birch, Product Marketing Manager for Exchange Solutions, Dell

    December 2010

    In Exchange 2010 Tested Solutions, Microsoft and participating server, storage, and network

    partners examine common customer scenarios and key design decision points facing customers

    who plan to deploy Microsoft Exchange Server 2010. Through this series of white papers, we

    provide examples of well-designed, cost-effective Exchange 2010 solutions deployed on

    hardware offered by some of our server, storage, and network partners.

    You can download this document from the Microsoft Download Center.

    Applies to: Microsoft Exchange Server 2010 release to manufacturing (RTM)

    Microsoft Exchange Server 2010 with Service Pack 1 (SP1)

    Windows Server 2008 R2

    Windows Server 2008 R2 Hyper-V

    Table of Contents

    Solution Summary

    Customer Requirements

    Mailbox Profile Requirements

    Geographic Location Requirements

    Server and Data Protection Requirements

    Design Assumptions

    Server Configuration Assumptions

    Storage Configuration Assumptions

    Solution Design

    Determine High Availability Strategy


    Estimate Mailbox Storage Capacity Requirements

    Estimate Mailbox I/O Requirements

    Determine Storage Type

    Choose Storage Solution

    Determine Number of EqualLogic Arrays Required

    Estimate Mailbox Memory Requirements

    Estimate Mailbox CPU Requirements

    Determine Whether Server Virtualization Will Be Used

    Determine Whether Client Access and Hub Transport Server Roles Will Be Deployed in Separate Virtual Machines

    Determine Server Model for Hyper-V Root Server

    Determine the CPU Capacity of the Virtual Machines

    Determine Number of Mailbox Server Virtual Machines Required

    Determine Number of Mailboxes per Mailbox Server

    Determine Memory Required Per Mailbox Server

    Determine Number of Client Access and Hub Transport Server Combo Virtual Machines Required

    Determine Memory Required per Combined Client Access and Hub Transport Virtual Machines

    Determine Virtual Machine Distribution

    Determine Memory Required per Root Server

    Determine Minimum Number of Databases Required

    Identify Failure Domains Impacting Database Copy Layout

    Design Database Copy Layout

    Determine Storage Design

    Determine Placement of the File Share Witness

    Plan Namespaces

    Determine Client Access Server Array and Load Balancing Strategy

    Determine Hardware Load Balancing Solution

    Determine Hardware Load Balancing Device Resiliency Strategy

    Determine Hardware Load Balancing Methods

    Solution Overview

    Logical Solution Diagram

    Physical Solution Diagram

    Server Hardware Summary

    Client Access and Hub Transport Server Configuration

    Mailbox Server Configuration


    Database Layout

    Storage Hardware Summary

    Storage Configuration

    Network Switch Hardware Summary

    Load Balancer Hardware Summary

    Solution Validation Methodology

    Storage Design Validation Methodology

    Server Design Validation

    Functional Validation Tests

    Datacenter Switchover Validation

    Primary Datacenter Service Restoration Validation

    Storage Design Validation Results

    Server Design Validation Results

    This document provides an example of how to design, test, and validate an Exchange Server

    2010 solution for environments with 9,000 mailboxes deployed on Dell server and storage

    solutions and F5 load balancing solutions. One of the key challenges with designing Exchange

    2010 environments is examining the current server and storage options available and making the

    right hardware choices that provide the best value over the anticipated life of the solution.

    Following the step-by-step methodology in this document, we walk through the important design

    decision points that help address these key challenges while ensuring that the customer's core

    business requirements are met. After we have determined the optimal solution for this customer,

    the solution undergoes a standard validation process to ensure that it holds up under simulated

    production workloads for normal operating, maintenance, and failure scenarios.


    Solution Summary

    The following tables summarize the key Exchange and hardware components of this solution.

    Exchange components

    Target mailbox count: 9000

    Target mailbox size: 750 megabytes (MB)

    Target message profile: 103 messages per day

    Database copy count: 3

    Volume Shadow Copy Service (VSS) backup: None


    Site resiliency: Yes

    Virtualization: Hyper-V

    Exchange server count: 18 virtual machines (VMs)

    Physical server count: 9

    Hardware components

    Server partner: Dell

    Server model: PowerEdge M610

    Server type: Blade

    Processor: Intel Xeon X5550

    Storage partner: Dell EqualLogic

    Storage type: Internet SCSI (iSCSI) storage area network (SAN)

    Disk type: 1 terabyte 7,200 RPM Serial ATA (SATA) 3.5"


    Customer Requirements

    One of the most important first steps in Exchange solution design is to accurately summarize the business and technical requirements that are critical to making the correct design decisions. The following sections outline the customer requirements for this solution.


    Mailbox Profile Requirements

    Determine mailbox profile requirements as accurately as possible because these requirements

    may impact all other components of the design. If Exchange is new to you, you may have to

    make some educated guesses. If you have an existing Exchange environment, you can use the

    Microsoft Exchange Server Profile Analyzer tool to assist with gathering most of this information.

    The following tables summarize the mailbox profile requirements for this solution.

  • 5

    Mailbox count requirements

    Mailbox count (total number of mailboxes, including resource mailboxes): 9000

    Projected growth % in mailbox count (projected increase in mailbox count over the life of the solution): 0%

    Expected mailbox concurrency % (maximum number of active mailboxes at any time): 100%

    Mailbox size requirements

    Average mailbox size in MB: 750 MB (742)

    Tiered mailbox size: Yes; 450 @ 4 gigabytes (GB), 900 @ 1 GB, 7650 @ 512 MB

    Average mailbox archive size in MB: 0

    Projected growth (%) in mailbox size in MB (projected increase in mailbox size over the life of the solution): included

    Target average mailbox size in MB: 750 MB
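    The "750 MB (742)" average follows directly from the tier counts above; a quick sanity check in Python (values taken from the table, rounding is ours):

```python
# Quota-weighted average mailbox size across the three tiers listed above.
tiers = {4096: 450, 1024: 900, 512: 7650}  # mailbox quota in MB -> number of mailboxes
avg_mb = sum(quota * count for quota, count in tiers.items()) / sum(tiers.values())
print(int(avg_mb))  # → 742, which the design rounds up to a 750 MB target
```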

    Mailbox profile requirements

    Target message profile (average total number of messages sent plus received per user per day): 103 messages per day

    Tiered message profile: Yes; 450 @ 150 messages per day, 8550 @ 100 messages per day

    Target average message size in KB: 75

    % in MAPI cached mode: 100

    % in MAPI online mode: 0


    % in Outlook Anywhere cached mode: 0

    % in Microsoft Office Outlook Web App (Outlook Web Access in Exchange 2007 and previous versions): 0

    % in Exchange ActiveSync: 0
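    Similarly, the 103 messages per day target is the tier-weighted average of the two message profiles, rounded up; a one-line check using the counts from the table above:

```python
# Tier-weighted average message profile (messages sent plus received per user per day).
profiles = {150: 450, 100: 8550}  # messages per day -> number of users
avg = sum(p * n for p, n in profiles.items()) / sum(profiles.values())
print(avg)  # → 102.5, rounded up to the 103 messages per day target
```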


    Geographic Location Requirements

    Understanding the distribution of mailbox users and datacenters is important when making design

    decisions about high availability and site resiliency.

    The following table outlines the geographic distribution of people who will be using the Exchange

    system.

    Geographic distribution of people

    Number of major sites containing mailbox users: 1

    Number of mailbox users in site 1: 9000

    Number of mailbox users in site 2: 0

    The following table outlines the geographic distribution of datacenters that could potentially

    support the Exchange e-mail infrastructure.

    Geographic distribution of datacenters

    Total number of datacenters: 2

    Number of active mailboxes in proximity to datacenter 1: 9000

    Number of active mailboxes in proximity to datacenter 2: 0

    Requirement for Exchange to reside in more than one datacenter: Yes


    Server and Data Protection Requirements

    It's also important to define server and data protection requirements for the environment because

    these requirements will support design decisions about high availability and site resiliency.

    The following table identifies server protection requirements.

    Server protection requirements

    Number of simultaneous server or VM failures within site: 1

    Number of simultaneous server or VM failures during site failure: 0

    The following table identifies data protection requirements.

    Data protection requirements

    Requirement to maintain a backup of the Exchange databases outside of the Exchange environment (for example, third-party backup solution): No

    Requirement to maintain copies of the Exchange databases within the Exchange environment (for example, Exchange native data protection): Yes

    Requirement to maintain multiple copies of mailbox data in the primary datacenter: Yes

    Requirement to maintain multiple copies of mailbox data in a secondary datacenter: Yes

    Requirement to maintain a lagged copy of any Exchange databases: No

    Lagged copy period in days: Not applicable

    Target number of database copies: 3

    Deleted Items folder retention window in days: 14 days


    Design Assumptions

    This section includes information that isn't typically collected as part of customer requirements, but is critical to both the design and the approach to validating the design.


    Server Configuration Assumptions

    The following table describes the peak CPU utilization targets for normal operating conditions,

    and for site server failure or server maintenance conditions.

    Server utilization targets

    Target server CPU utilization design assumption Value

    Normal operating for Mailbox servers


    Data configuration assumptions

    Mailbox moves per week: 1%

    Dedicated maintenance or restore logical unit number (LUN): No

    LUN free space: 20%

    Log shipping compression enabled: Yes

    Log shipping encryption enabled: Yes

    I/O configuration assumptions

    I/O overhead factor: 20%

    Additional I/O requirements: None


    Solution Design

    The following section provides a step-by-step methodology used to design this solution. This methodology takes customer requirements and design assumptions and walks through the key design decision points that need to be made when designing an Exchange 2010 environment.


    Determine High Availability Strategy

    When designing an Exchange 2010 environment, many design decision points for high availability

    strategies impact other design components. We recommend that you determine your high

    availability strategy as the first step in the design process. We highly recommend that you review

    the following information prior to starting this step:

    Understanding High Availability Factors

    Planning for High Availability and Site Resilience

    Understanding Backup, Restore and Disaster Recovery

    Step 1: Determine whether site resiliency is required

    If you have more than one datacenter, you must decide whether to deploy Exchange

    infrastructure in a single datacenter or distribute it across two or more datacenters. The

    organization's recovery service level agreements (SLAs) should define what level of service is

    required following a primary datacenter failure. This information should form the basis for this

    decision.


    *Design Decision Point*

    In this example, there is a service level agreement that requires the ability to restore the

    messaging service within four hours in the event of a primary datacenter failure. Therefore the

    customer must deploy Exchange infrastructure in a secondary datacenter for disaster recovery

    purposes.

    Step 2: Determine relationship between mailbox user locations and datacenter locations

    In this step, we look at whether all mailbox users are located primarily in one site or if they're

    distributed across many sites and whether those sites are associated with datacenters. If they're

    distributed across many sites and there are datacenters associated with those sites, you need to

    determine if there's a requirement to maintain affinity between mailbox users and the datacenter

    associated with that site.

    *Design Decision Point*

    In this example, all of the active users are located in one primary location. The primary location is

    in geographic proximity to the primary datacenter and therefore there's a desire for all active

    mailboxes to reside in the primary datacenter during normal operating conditions.

    Step 3: Determine database distribution model

    Because the customer has decided to deploy Exchange infrastructure in more than one physical

    location, the customer needs to determine which database distribution model best meets the

    needs of the organization. There are three database distribution models:

    Active/Passive distribution Active mailbox database copies are deployed in the primary

    datacenter and only passive database copies are deployed in a secondary datacenter. The

    secondary datacenter serves as a standby datacenter and no active mailboxes are hosted in

    the datacenter under normal operating conditions. In the event of an outage impacting the

    primary datacenter, a manual switchover to the secondary datacenter is performed and active

    databases are hosted there until the primary datacenter returns online.

    Active/Passive distribution


    Active/Active distribution (single DAG) Active mailbox databases are deployed in the

    primary and secondary datacenters. A corresponding passive copy is located in the alternate

    datacenter. All Mailbox servers are members of a single database availability group (DAG). In

    this model, the wide area network (WAN) connection between two datacenters is potentially a

    single point of failure. Loss of the WAN connection results in Mailbox servers in one of the

    datacenters going into a failed state due to loss of quorum.

    Active/Active distribution (single DAG)

    Active/Active distribution (multiple DAGs) This model leverages multiple DAGs to

    remove WAN connectivity as a single point of failure. One DAG has active database copies in

    the first datacenter and its corresponding passive database copies in the second datacenter.

    The second DAG has active database copies in the second datacenter and its corresponding

    passive database copies in the first datacenter. In the event of loss of WAN connectivity, the

    active copies in each site continue to provide database availability to local mailbox users.


    Active/Active distribution (multiple DAGs)

    *Design Decision Point*

    In this example, active mailbox users are only in a single location and only the secondary

    datacenter will be used in the event that the primary datacenter fails. Therefore, an

    Active/Passive distribution model is the obvious choice.

    Step 4: Determine backup and database resiliency strategy

    Exchange 2010 includes several new features and core changes that, when deployed and

    configured correctly, can provide native data protection that eliminates the need to make

    traditional data backups. Backups are traditionally used for disaster recovery, recovery of

    accidentally deleted items, long term data storage, and point-in-time database recovery.

    Exchange 2010 can address all of these scenarios without the need for traditional backups:

    Disaster recovery In the event of a hardware or software failure, multiple database copies

    in a DAG enable high availability with fast failover and no data loss. DAGs can be extended

    to multiple sites and can provide resilience against datacenter failures.

    Recovery of accidentally deleted items With the new Recoverable Items folder in

    Exchange 2010 and the hold policy that can be applied to it, it's possible to retain all deleted

    and modified data for a specified period of time, so recovery of these items is easier and

    faster. For more information, see Messaging Policy and Compliance, Understanding

    Recoverable Items, and Understanding Retention Tags and Retention Policies.


    Long-term data storage Sometimes, backups also serve an archival purpose. Typically,

    tape is used to preserve point-in-time snapshots of data for extended periods of time as

    governed by compliance requirements. The new archiving, multiple-mailbox search, and

    message retention features in Exchange 2010 provide a mechanism to efficiently preserve

    data in an end-user accessible manner for extended periods of time. For more information,

    see Understanding Personal Archives, Understanding Multi-Mailbox Search, and

    Understanding Retention Tags and Retention Policies.

    Point-in-time database snapshot If a past point-in-time copy of mailbox data is a

    requirement for your organization, Exchange provides the ability to create a lagged copy in a

    DAG environment. This can be useful in the rare event that there's a logical corruption that

    replicates across the databases in the DAG, resulting in a need to return to a previous point

    in time. It may also be useful if an administrator accidentally deletes mailboxes or user data.

    There are technical reasons and several issues that you should consider before using the

    features built into Exchange 2010 as a replacement for traditional backups. Prior to making this

    decision, see Understanding Backup, Restore and Disaster Recovery.

    *Design Decision Point*

    In this example, maintaining tape backups has been difficult, and testing and validating restore

    procedures hasn't occurred on a regular basis. Therefore, using Exchange native data protection

    in place of traditional backups as the database resiliency strategy is preferred.

    Step 5: Determine number of database copies required

    There are a number of factors to consider when determining the number of database copies that

    you'll deploy. The first is whether you're using a third-party backup solution. In the previous step,

    this decision was made. We strongly recommend deploying a minimum of three copies of a

    mailbox database before eliminating traditional forms of protection for the database, such as

    Redundant Array of Independent Disks (RAID) or traditional VSS-based backups.

    Prior to making this decision, see Understanding Mailbox Database Copies.

    *Design Decision Point*

    In the previous step, it was decided not to deploy a third-party backup solution. As a result, the

    design should have a minimum of three copies of each database. This ensures that both the

    recovery time objective and recovery point objective requirements are met.

    Step 6: Determine database copy type

    There are two types of database copies:

    High availability database copy This database copy is configured with a replay lag time of

    zero. As the name implies, high availability database copies are kept up-to-date by the

    system, can be automatically activated by the system, and are used to provide high

    availability for mailbox service and data.

    Lagged database copy This database copy is configured to delay transaction log replay for

    a period of time. Lagged database copies are designed to provide point-in-time protection,

    which can be used to recover from store logical corruptions, administrative errors (for


    example, deleting or purging a disconnected mailbox), and automation errors (for example,

    bulk purging of disconnected mailboxes).

    *Design Decision Point*

    In this example, all three mailbox database copies will be deployed as high availability database

    copies. The primary need for a lagged copy is to provide the ability to recover single deleted

    items. This requirement can be met using the deleted items retention feature.

    Step 7: Determine number of database availability groups

    A DAG is the base component of the high availability and site resilience framework built into

    Exchange 2010. A DAG is a group of up to 16 Mailbox servers that hosts a set of replicated

    databases and provides automatic database-level recovery from failures that affect individual

    servers or databases.

    A DAG is a boundary for mailbox database replication, database and server switchovers and

    failovers, and for an internal component called Active Manager. Active Manager is an

    Exchange 2010 component, which manages switchovers and failovers. Active Manager runs on

    every server in a DAG.

    From a planning perspective, you should try to minimize the number of DAGs deployed. You

    should consider going with more than one DAG if:

    You deploy more than 16 Mailbox servers.

    You have active mailbox users in multiple sites (active/active site configuration).

    You require separate DAG-level administrative boundaries.

    You have Mailbox servers in separate domains. (DAG is domain bound.)

    *Design Decision Point*

    In a previous step, it was decided that the database distribution model was going to be

    active/passive. This model doesn't require multiple DAGs to be deployed. This example isn't likely

    to require more than 16 Mailbox servers for 9,000 mailboxes, and there is no requirement for

    separate DAG-level administrative boundaries. Therefore, a single DAG will be used in this

    design.

    Step 8: Determine Mailbox server resiliency strategy

    Exchange 2010 has been re-engineered for mailbox resiliency. Automatic failover protection is

    now provided at the mailbox database level instead of at the server level. You can strategically

    distribute active and passive database copies to Mailbox servers within a DAG. Determining how

    many database copies you plan to activate on a per-server basis is a key aspect to Exchange

    2010 capacity planning. There are different database distribution models that you can deploy, but

    generally we recommend one of the following:

    Design for all copies activated In this model, the Mailbox server role is sized to

    accommodate the activation of all database copies on the server. For example, a Mailbox

    server may host four database copies. During normal operating conditions, the server may

    have two active database copies and two passive database copies. During a failure or

    maintenance event, all four database copies would become active on the Mailbox server.


    This solution is usually deployed in pairs. For example, if deploying four servers, the first pair

    is servers MBX1 and MBX2, and the second pair is servers MBX3 and MBX4. In addition,

    when designing for this model, you will size each Mailbox server for no more than 40 percent

    of available resources during normal operating conditions. In a site resilient deployment with

    three database copies and six servers, this model can be deployed in sets of three servers,

    with the third server residing in the secondary datacenter. This model provides a three-server

    building block for solutions using an active/passive site resiliency model.

    This model can be used in the following scenarios:

    Active/Passive multisite configuration where failure domains (for example, racks, blade

    enclosures, and storage arrays) require easy isolation of database copies in the primary

    datacenter

    Active/Passive multisite configuration where anticipated growth may warrant easy

    addition of logical units of scale

    Configurations that aren't required to survive the simultaneous loss of any two Mailbox

    servers in the DAG

    This model requires servers to be deployed in pairs for single site deployments and sets of

    three for multisite deployments. The following table illustrates a sample database layout for

    this model.

    Design for all copies activated

    In the preceding table, the following applies:

    C1 = active copy (activation preference value of 1) during normal operations

    C2 = passive copy (activation preference value of 2) during normal operations

    C3 = passive copy (activation preference value of 3) during site failure event

    Design for targeted failure scenarios In this model, the Mailbox server role is designed to

    accommodate the activation of a subset of the database copies on the server. The number of


    database copies in the subset will depend on the specific failure scenario that you're

    designing for. The main goal of this design is to evenly distribute active database load across

    the remaining Mailbox servers in the DAG.

    This model should be used in the following scenarios:

    All single site configurations with three or more database copies

    Configurations required to survive the simultaneous loss of any two Mailbox servers in

    the DAG

    The DAG design for this model requires between 3 and 16 Mailbox servers. The following

    table illustrates a sample database layout for this model.

    Design for targeted failure scenarios

    In the preceding table, the following applies:

    C1 = active copy (activation preference value of 1) during normal operations

    C2 = passive copy (activation preference value of 2) during normal operations

    C3 = passive copy (activation preference value of 3) during normal operations

    *Design Decision Point*

    In a previous step, it was decided to deploy an Active/Passive database distribution model with

    two high availability database copies in the primary datacenter and one high availability copy in

    the secondary datacenter. Because the two high availability copies in the primary datacenter are

    usually deployed in separate hardware failure domains, this model usually results in a Mailbox

    server resiliency strategy that designs for all copies being activated.


    Step 9: Determine number of Mailbox servers and DAGs

    The number of Mailbox servers required to support the workload and the minimum number of

    Mailbox servers required to support the DAG design may be different. In this step, a preliminary

    result is obtained. The final number of Mailbox servers will be determined in a later step.

    *Design Decision Point*

    This example uses three high availability database copies. To support three copies, a minimum of

    three Mailbox servers in the DAG is required. In an active/passive configuration, two of the

    servers will reside in the primary datacenter, and the third server will reside in the secondary

    datacenter. In this model, the number of servers in the DAG should be deployed in multiples of

    three. The following table outlines the possible configurations.

    Number of Mailbox servers and DAGs

    Primary datacenter Secondary datacenter Total Mailbox server count

    2 1 3

    4 2 6

    6 3 9

    8 4 12
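    The table above grows in multiples of three because each active/passive building block in this design places two servers in the primary datacenter and one in the secondary. A small sketch of that scaling rule (the helper name is ours, not from the white paper):

```python
# Active/passive DAG building blocks: each set of three servers adds two servers
# to the primary datacenter and one to the secondary datacenter.
def dag_server_counts(building_blocks):
    """Return (primary, secondary, total) server counts for n three-server sets."""
    primary = 2 * building_blocks   # hosts the two high availability copies
    secondary = building_blocks     # hosts the site-resilience copy
    return primary, secondary, primary + secondary

rows = [dag_server_counts(n) for n in range(1, 5)]
print(rows)  # → [(2, 1, 3), (4, 2, 6), (6, 3, 9), (8, 4, 12)]
```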


    Estimate Mailbox Storage Capacity Requirements

    Many factors influence the storage capacity requirements for the Mailbox server role. For

    additional information, we recommend that you review Understanding Mailbox Database and Log

    Capacity Factors.

    The following steps outline how to calculate mailbox capacity requirements. These requirements

    will then be used to make decisions about which storage solution options meet the capacity

    requirements. A later section covers additional calculations required to properly design the

    storage layout on the chosen storage platform.

    Microsoft has created a Mailbox Server Role Requirements Calculator that will do most of this

    work for you. To download the calculator, see E2010 Mailbox Server Role Requirements

    Calculator. For additional information about using the calculator, see Exchange 2010 Mailbox

    Server Role Requirements Calculator.

    Step 1: Calculate mailbox size on disk

    Before attempting to determine what your total storage requirements are, you should know what

    the mailbox size on disk will be. A full mailbox with a 1-GB quota requires more than 1 GB of disk

    space because you have to account for the prohibit send/receive limit, the number of messages

    the user sends or receives per day, the Deleted Items folder retention window (with or without

    calendar version logging and single item recovery enabled), and the average database daily


    variations per mailbox. The Mailbox Server Role Requirements Calculator does these

    calculations for you. You can also use the following information to do the calculations manually.

    The following calculations are used to determine the mailbox size on disk for the three mailbox

    tiers in this solution:

    Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)

Whitespace = 100 messages per day × 75 ÷ 1024 MB = 7.3 MB

Dumpster = (100 messages per day × 75 ÷ 1024 MB × 14 days) + (512 MB × 0.012) + (512 MB × 0.058) = 138 MB

Mailbox size on disk = mailbox limit + whitespace + dumpster

= 512 MB + 7.3 MB + 138 MB

= 657 MB

    Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)

Whitespace = 100 messages per day × 75 ÷ 1024 MB = 7.3 MB

Dumpster = (100 messages per day × 75 ÷ 1024 MB × 14 days) + (1024 MB × 0.012) + (1024 MB × 0.058) = 174 MB

Mailbox size on disk = mailbox limit + whitespace + dumpster

= 1024 MB + 7.3 MB + 174 MB

= 1205 MB

    Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average

    message size)

Whitespace = 150 messages per day × 75 ÷ 1024 MB = 11 MB

Dumpster = (150 messages per day × 75 ÷ 1024 MB × 14 days) + (4096 MB × 0.012) + (4096 MB × 0.058) = 441 MB

Mailbox size on disk = mailbox limit + whitespace + dumpster

= 4096 MB + 11 MB + 441 MB

= 4548 MB

Average size on disk = [(657 × 7650) + (1205 × 900) + (4548 × 450)] ÷ 9000

= 907 MB
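The preceding tier calculations can be reproduced with a short Python sketch (illustrative only, not part of the tested solution; small differences from the text are due to intermediate rounding):

```python
# Illustrative sketch of the Step 1 mailbox-size arithmetic for the three
# tiers in this solution. The 0.012 and 0.058 factors are the calendar
# version logging and single item recovery overheads applied to the quota.

def mailbox_size_on_disk(quota_mb, msgs_per_day, avg_msg_kb=75):
    whitespace = msgs_per_day * avg_msg_kb / 1024   # daily churn, in MB
    dumpster = (whitespace * 14                     # 14-day retention window
                + quota_mb * 0.012                  # calendar version logging
                + quota_mb * 0.058)                 # single item recovery
    return quota_mb + whitespace + dumpster

tiers = {  # name: (quota MB, messages/day, mailbox count)
    "tier1": (512, 100, 7650),
    "tier2": (1024, 100, 900),
    "tier3": (4096, 150, 450),
}
sizes = {name: mailbox_size_on_disk(q, m) for name, (q, m, _) in tiers.items()}
users = sum(n for _, _, n in tiers.values())
average = sum(sizes[name] * n for name, (_, _, n) in tiers.items()) / users
# average works out to approximately 907 MB, matching the text
```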

    Step 2: Calculate database storage capacity requirements

    In this step, the high level storage capacity required for all mailbox databases is determined. The

    calculated capacity includes database size, catalog index size, and 20 percent free space.

    To determine the storage capacity required for all databases, use the following formulas:

    Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)


Database size = (number of mailboxes × mailbox size on disk × database overhead growth factor) × (20% data overhead)

= (7650 × 657 × 1) × 1.2

= 6031260 MB

= 5890 GB

Database index size = 10% of database size

= 589 GB

Total database capacity = (database size + index size) ÷ 0.80 to add 20% volume free space

= (5890 + 589) ÷ 0.8

= 8099 GB

    Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)

Database size = (number of mailboxes × mailbox size on disk × database overhead growth factor) × (20% data overhead)

= (900 × 1205 × 1) × 1.2

= 1301400 MB

= 1271 GB

Database index size = 10% of database size

= 127 GB

Total database capacity = (database size + index size) ÷ 0.80 to add 20% volume free space

= (1271 + 127) ÷ 0.8

= 1747 GB

    Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average

    message size)

Database size = (number of mailboxes × mailbox size on disk × database overhead growth factor) × (20% data overhead)

= (450 × 4548 × 1) × 1.2

= 2455920 MB

= 2400 GB

Database index size = 10% of database size

= 240 GB

Total database capacity = (database size + index size) ÷ 0.80 to add 20% volume free space

= (2400 + 240) ÷ 0.8

= 3301 GB

Total database capacity (all tiers) = 8099 + 1747 + 3301

= 13147 GB

= 12.8 terabytes
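For verification, the Step 2 arithmetic can be sketched as follows (illustrative only; the result differs from the text's 13147 GB by a few gigabytes because the text rounds at each intermediate step):

```python
# Illustrative sketch of the Step 2 database capacity arithmetic.
def database_capacity_gb(users, mailbox_mb):
    db_gb = users * mailbox_mb * 1.2 / 1024   # 20% data overhead growth factor
    index_gb = 0.10 * db_gb                   # content index, 10% of database size
    return (db_gb + index_gb) / 0.80          # divide by 0.8 for 20% free space

total_gb = (database_capacity_gb(7650, 657)      # tier 1
            + database_capacity_gb(900, 1205)    # tier 2
            + database_capacity_gb(450, 4548))   # tier 3
# total_gb is approximately 13144 GB; the text reports 13147 GB
```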

    Step 3: Calculate transaction log storage capacity requirements

    To ensure that the Mailbox server doesn't sustain any outages as a result of space allocation

    issues, the transaction logs also need to be sized to accommodate all of the logs that will be

    generated during the backup set. Provided that this architecture is leveraging the mailbox

    resiliency and single item recovery features as the backup architecture, the log capacity should

    allocate for three times the daily log generation rate in the event that a failed copy isn't repaired

    for three days. (Any failed copy prevents log truncation from occurring.) In the event that the

    server isn't back online within three days, you would want to temporarily remove the copy to allow

    truncation to occur.

    To determine the storage capacity required for all transaction logs, use the following formulas:

    Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)

Log files size = (log file size × number of logs per mailbox per day × number of days required to replace failed infrastructure × number of mailbox users) + (1% mailbox move overhead)

= (1 MB × 20 × 3 × 7650) + (7650 × 0.01 × 512)

= 498168 MB

= 487 GB

Total log capacity = log files size ÷ 0.80 to add 20% volume free space

= 487 ÷ 0.80

= 608 GB

    Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)

Log files size = (log file size × number of logs per mailbox per day × number of days required to replace failed infrastructure × number of mailbox users) + (1% mailbox move overhead)

= (1 MB × 20 × 3 × 900) + (900 × 0.01 × 1024)

= 63216 MB

= 62 GB

Total log capacity = log files size ÷ 0.80 to add 20% volume free space

= 62 ÷ 0.80

= 77 GB

    Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average

    message size)


Log files size = (log file size × number of logs per mailbox per day × number of days required to replace failed infrastructure × number of mailbox users) + (1% mailbox move overhead)

= (1 MB × 30 × 3 × 450) + (450 × 0.01 × 4096)

= 58932 MB

= 58 GB

Total log capacity = log files size ÷ 0.80 to add 20% volume free space

= 58 ÷ 0.80

= 72 GB

    Total log capacity (all tiers) = 608 + 77 + 72

    = 757 GB
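The Step 3 arithmetic can be sketched the same way (illustrative only; small differences from the per-tier figures in the text are due to rounding):

```python
# Illustrative sketch of the Step 3 transaction log capacity arithmetic.
def log_capacity_gb(users, quota_mb, logs_per_day, days_to_repair=3):
    log_mb = (1 * logs_per_day * days_to_repair * users   # 1 MB log files
              + users * 0.01 * quota_mb)                  # 1% mailbox move overhead
    return log_mb / 1024 / 0.80                           # 20% volume free space

total_gb = (log_capacity_gb(7650, 512, 20)    # tier 1
            + log_capacity_gb(900, 1024, 20)  # tier 2
            + log_capacity_gb(450, 4096, 30)) # tier 3, 150 messages/day profile
# total_gb is approximately 757 GB, matching the text
```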

    Step 4: Determine total storage capacity requirements

    The following table summarizes the high level storage capacity requirements for this solution. In a

    later step, you will use this information to make decisions about which storage solution to deploy.

    You will then take a closer look at specific storage requirements in later steps.

    Summary of storage capacity requirements

Disk space requirements                                     Value

Average mailbox size on disk (MB)                           907

Database space required (GB)                                13147

Log space required (GB)                                     757

Total space required (GB)                                   13904

Total space required for three database copies (GB)         41712

Total space required for three database copies (terabytes)  41

    Return to top

    Estimate Mailbox I/O Requirements

    When designing an Exchange environment, you need an understanding of database and log

    performance factors. We recommend that you review Understanding Database and Log

    Performance Factors.

    Calculate total mailbox I/O requirements

    Because it's one of the key transactional I/O metrics needed for adequately sizing storage, you

    should understand the amount of database I/O per second (IOPS) consumed by each mailbox

    user. Pure sequential I/O operations aren't factored in the IOPS per Mailbox server calculation


    because storage subsystems can handle sequential I/O much more efficiently than random I/O.

    These operations include background database maintenance, log transactional I/O, and log

    replication I/O. In this step, you calculate the total IOPS required to support all mailbox users,

    using the following:

    Note:

    To determine the IOPS profile for a different message profile, see the table "Database

    cache and estimated IOPS per mailbox based on message activity" in Understanding

    Database and Log Performance Factors.

Tier 1 (100 messages per day message profile): Total required IOPS = IOPS per mailbox user × number of mailboxes × I/O overhead factor

= 0.10 × 7650 × 1.2

= 918

Tier 2 (100 messages per day message profile): Total required IOPS = 0.10 × 900 × 1.2

= 108

Tier 3 (150 messages per day message profile): Total required IOPS = 0.15 × 450 × 1.2

= 81

Total required IOPS (all tiers) = 918 + 108 + 81 = 1107

Average IOPS per mailbox = 1107 ÷ 9000 = 0.123

    The high level storage IOPS requirements are approximately 1107. When choosing a storage

    solution, ensure that the solution meets this requirement.
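The IOPS estimate can be expressed compactly (illustrative sketch; the 0.10 and 0.15 IOPS-per-mailbox figures come from the table referenced in the note above):

```python
# Illustrative sketch of the total transactional IOPS estimate.
profile_iops = {100: 0.10, 150: 0.15}  # IOPS per mailbox by message profile
io_overhead = 1.2                      # 20% I/O overhead factor
tiers = [(100, 7650), (100, 900), (150, 450)]  # (profile, mailbox count)

total_iops = sum(profile_iops[p] * n * io_overhead for p, n in tiers)
avg_iops = total_iops / 9000
# total_iops works out to 1107; avg_iops to 0.123 per mailbox
```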

    Return to top

    Determine Storage Type

    Exchange 2010 includes improvements in performance, reliability, and high availability that

    enable organizations to run Exchange on a wide range of storage options.

    When examining the storage options available, being able to balance the performance, capacity,

    manageability, and cost requirements is essential to achieving a successful storage solution for

    Exchange.

    For more information about choosing a storage solution for Exchange 2010, see Mailbox Server

    Storage Design.

    Determine whether you prefer an internal or external storage solution

    A number of server models on the market today support from 8 through 16 internal disks. These

    servers are a fit for some Exchange deployments and provide a solid solution at a low price point.

If your storage capacity and I/O requirements are met with internal storage and you don't have a specific requirement to use external storage, you should consider using server models with internal disks for Exchange deployments. If your storage and I/O requirements are higher or your organization has an existing investment in SANs, you should examine larger external direct-attached storage (DAS) or SAN solutions.

    *Design Decision Point*

    In this example, the external storage solution is selected.

    Return to top


    Choose Storage Solution

    Use the following steps to choose a storage solution.

    Step 1: Identify preferred storage vendor

    In this solution, the preferred storage vendor is Dell.

    Dell, Inc. is a leading IT infrastructure and services company with a broad portfolio of servers,

    storage, networking products, and comprehensive service offerings. Dell also provides testing,

    best practices, and architecture guidance specifically for Exchange 2010 and other Microsoft-

    based solutions in the unified communications and collaboration stack such as Microsoft Office

    SharePoint Server and Office Communications Server.

    Dell offers a wide variety of storage solutions from Dell EqualLogic, Dell PowerVault, and

    Dell/EMC. Dell storage technologies help you minimize cost and complexity, increase

    performance and reliability, simplify storage management, and plan for future growth.

    Step 2: Review available options from preferred vendor

    There are a number of storage options that would be a good fit for this solution. The following

    options were considered:

Option 1: Dell EqualLogic PS6000 Series iSCSI SAN Array

    The Dell EqualLogic PS Series is fundamentally changing the way enterprises think about

    purchasing and managing storage. Built on breakthrough virtualized peer storage architecture,

    the EqualLogic PS Series simplifies the deployment and administration of consolidated storage

    environments. Its all-inclusive, intelligent feature set streamlines purchasing and delivers rapid

    SAN deployment, easy storage management, comprehensive data protection, enterprise-class

performance and reliability, and seamless pay-as-you-grow expansion. The PS6000 is a 3U chassis that holds sixteen 3.5-inch hard disk drives and has two iSCSI controllers with four 1 Gb Ethernet ports per controller. Up to 16 arrays can be included in a single managed unit known as a group.

Option 2: Dell EqualLogic PS6500 Series iSCSI SAN Array

    The Dell EqualLogic PS Series 6500 arrays also provide the same ease of use and intelligence

features. However, this array was built with maximum density in mind. Its 4U chassis holds up to 48 3.5-inch hard disk drives, making it highly space efficient. It also provides four 1 Gb Ethernet ports per controller. The PS6500 can be mixed with other PS Series arrays in the same group.

Dell EqualLogic PS Series 6000 and 6500 arrays

Components           Dell EqualLogic PS6000E, X, XV, and XVS      Dell EqualLogic PS6500E and X

Storage controllers  Dual controllers with a total of 4 GB        Dual controllers with a total of 4 GB
                     battery-backed memory. Battery-backed        battery-backed memory. Battery-backed
                     memory provides up to 72 hours of data       memory provides up to 72 hours of data
                     protection.                                  protection.

Hard disk drives     16x SATA, SAS, or SSD                        48x SATA or SAS

Volumes              Up to 1024                                   Up to 1024

RAID support         RAID-5, RAID-6, RAID-10, and RAID-50         RAID-5, RAID-6, RAID-10, and RAID-50

Network interfaces   4 copper per controller                      4 copper per controller

Reliability          Redundant, hot-swappable controllers,        Redundant, hot-swappable controllers,
                     power supplies, cooling fans, and disks.     power supplies, cooling fans, and disks.
                     Individual disk drive slot power control.    Individual disk drive slot power control.

    Option 3: Dell PowerVault MD3200i iSCSI SAN Array

    The PowerVault MD3200i is a high performance iSCSI SAN designed to deliver storage

    consolidation and data management capabilities in an easy to use, cost effective solution. Shared

    storage is required to enable VM mobility, which is the key benefit of a virtual environment. The

PowerVault MD3200i is a networked shared storage solution, providing the high availability, expandability, and ease of management desired in virtual environments. The PowerVault MD3200i leverages existing IP networks and offers small and medium businesses an easy-to-use iSCSI SAN without the need for extensive training or expensive new infrastructure.

    Step 3: Select an array

The arrays evaluated were the PS6000E and the PS6500E. PS6500E enclosures accommodate a total of 46 + 2 (hot spare) drives and are the densest storage solution offered. Therefore, the cost per gigabyte of deploying a PS6500E solution is lower than that of a PS6000E

    solution. The PS6500E array is also an intelligent solution that offers SAN configuration and

    monitoring features, auto-build of RAID sets, network sensing mechanisms, and continuous

    health monitoring. The MD3200i is a less expensive solution but lacks some of the management

    and deployment features in the PS series arrays.

In this example, the PS6500 series is selected because this storage enclosure addresses a comprehensive datacenter consolidation solution spread across multiple sites, as opposed to a small and medium business or branch-office storage need.

    Step 4: Select a disk type

The Exchange 2010 solution is optimized to use more sequential I/O and less random I/O with larger mailboxes. This implies less disk-intensive activity, even during peak usage hours, when compared to Exchange 2007. Therefore, high capacity SATA disks are used to save cost.


    For a list of supported disk types, see "Physical Disk Types" in Understanding Storage

    Configuration.

    To help determine which disk type to choose, see "Factors to Consider When Choosing Disk

    Types" in Understanding Storage Configuration.

    Return to top

    Determine Number of EqualLogic Arrays Required

    In a previous step, it was determined to deploy three copies of each database. One of the three

    copies will be located in the secondary datacenter. Therefore, to meet the site resiliency

    requirements, a minimum of one PS6500E in the primary datacenter and one PS6500E in the

    secondary datacenter is needed.

    Consider IOPS requirements. In a previous step it was determined that 1,107 IOPS were required

    to support the 9,000 mailboxes. For a RAID-10 configuration of SATA disks, this IOPS

    requirement can be met in a single PS 6500 array. In a failure event, a single PS6500E would

    have to support 100 percent of the IOPS requirement. Therefore, to meet the IOPS requirements,

    a minimum of one PS6500E in the primary datacenter and one PS6500E in the secondary

    datacenter is needed.

    Consider storage capacity requirements. In a previous step, it was determined that approximately

    26 terabytes were required to support two copies of each database in the primary datacenter and

    approximately 13 terabytes to support one copy of each database in the secondary datacenter. A

    single PS6500E configured with two spares and the remaining 46 disks in a RAID-10 disk group

provides approximately 20 terabytes. Therefore, two PS6500E arrays in the primary datacenter and one PS6500E in the secondary datacenter are required to support the capacity requirements.

Three PS6500E arrays will be deployed to support the capacity requirements of this solution.
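The array counts follow directly from these capacity figures, as this short sketch shows (illustrative; the 20 terabyte usable capacity per array is the approximate figure stated above):

```python
import math

# Illustrative check of the PS6500E array counts from the capacity figures.
usable_tb_per_array = 20     # one PS6500E, 46 disks in RAID-10 plus 2 spares
primary_required_tb = 26     # two database copies in the primary datacenter
secondary_required_tb = 13   # one database copy in the secondary datacenter

primary_arrays = math.ceil(primary_required_tb / usable_tb_per_array)
secondary_arrays = math.ceil(secondary_required_tb / usable_tb_per_array)
total_arrays = primary_arrays + secondary_arrays
# yields 2 primary arrays + 1 secondary array = 3 arrays total
```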

    Return to top

    Estimate Mailbox Memory Requirements

    Sizing memory correctly is an important step in designing a healthy Exchange environment. We

    recommend that you review Understanding Memory Configurations and Exchange Performance

    and Understanding the Mailbox Database Cache.

    Calculate required database cache

    The Extensible Storage Engine (ESE) uses database cache to reduce I/O operations. In general,

    the more database cache available, the less I/O generated on an Exchange 2010 Mailbox server.

    However, there's a point where adding additional database cache no longer results in a

    significant reduction in IOPS. Therefore, adding large amounts of physical memory to your

    Exchange server without determining the optimal amount of database cache required may result

    in higher costs with minimal performance benefit.

    The IOPS estimates that you completed in a previous step assume a minimum amount of

    database cache per mailbox. These minimum amounts are summarized in the table "Estimated


    IOPS per mailbox based on message activity and mailbox database cache" in Understanding the

    Mailbox Database Cache.

    The following table outlines the database cache per user for various message profiles.

    Database cache per user

Messages sent or received per mailbox per day     Database cache per user (MB)
(about 75 KB average message size)

50                                                3

100                                               6

150                                               9

200                                               12

    In this step, you determine high level memory requirements for the entire environment. In a later

    step, you use this result to determine the amount of physical memory needed for each Mailbox

    server. Use the following information:

    Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)

Database cache = profile specific database cache × number of mailbox users

= 6 MB × 7650

= 45900 MB

= 45 GB

    Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)

Database cache = profile specific database cache × number of mailbox users

= 6 MB × 900

= 5400 MB

= 6 GB

    Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average

    message size)

Database cache = profile specific database cache × number of mailbox users

= 9 MB × 450

= 4050 MB

= 4 GB

    Total database cache (all tiers) = 55 GB

Average per active mailbox = 55 GB ÷ 9000 × 1024 = 6.2 MB

    The total database cache requirements for the environment are 55 GB or 6.2 MB per mailbox

    user.
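The cache arithmetic can be sketched as follows (illustrative only; the text rounds each tier up to whole gigabytes, which yields the 55 GB total):

```python
# Illustrative sketch of the database cache arithmetic.
cache_per_user_mb = {100: 6, 150: 9}  # MB per mailbox, from the table above
tiers = [(100, 7650), (100, 900), (150, 450)]  # (profile, mailbox count)

total_cache_mb = sum(cache_per_user_mb[p] * n for p, n in tiers)
total_cache_gb = total_cache_mb / 1024
avg_cache_mb = total_cache_mb / 9000
# total_cache_mb = 45900 + 5400 + 4050 = 55350 MB; the text rounds the
# per-tier figures to reach 55 GB and 6.2 MB per mailbox
```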


    Return to top

    Estimate Mailbox CPU Requirements

    Mailbox server capacity planning has changed significantly from previous versions of Exchange

    due to the new mailbox database resiliency model provided in Exchange 2010. For additional

    information, see Mailbox Server Processor Capacity Planning.

    In the following steps, you calculate the high level megacycle requirements for active and passive

    database copies. These requirements will be used in a later step to determine the number of

    Mailbox servers needed to support the workload. Note that the number of Mailbox servers

    required also depends on the Mailbox server resiliency model and database copy layout.

    Using megacycle requirements to determine the number of mailbox users that an Exchange

    Mailbox server can support isn't an exact science. A number of factors can result in unexpected

    megacycle results in test and production environments. Megacycles should only be used to

    approximate the number of mailbox users that an Exchange Mailbox server can support. It's

    always better to be conservative rather than aggressive during the capacity planning portion of

    the design process.

    The following calculations are based on published megacycle estimates as summarized in the

    following table.

Megacycle estimates

Messages sent or      Megacycles per          Megacycles per          Megacycles per
received per mailbox  mailbox for active      mailbox for remote      mailbox for local
per day               mailbox database        passive mailbox         passive mailbox
                                              database                database

50                    1                       0.1                     0.15

100                   2                       0.2                     0.3

150                   3                       0.3                     0.45

200                   4                       0.4                     0.6

    Step 1: Calculate active mailbox CPU requirements

    In this step, you calculate the megacycles required to support the active database copies, using

    the following:

    Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)

Active mailbox megacycles required = profile specific megacycles × number of mailbox users

= 2 × 7650

= 15300


    Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)

Active mailbox megacycles required = profile specific megacycles × number of mailbox users

= 2 × 900

= 1800

    Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average

    message size)

Active mailbox megacycles required = profile specific megacycles × number of mailbox users

= 3 × 450

= 1350

    Total active mailbox megacycles required (all tiers) = 18450 megacycles

    Step 2: Calculate active mailbox remote database copy CPU requirements

    In a design with three copies of each database, there is processor overhead associated with

    shipping logs required to maintain database copies on the remote servers. This overhead is

    typically 10 percent of the active mailbox megacycles for each remote copy being serviced.

    Calculate the requirements, using the following:

    Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)

Remote copy megacycles required = profile specific megacycles × number of mailbox users × number of remote copies

= 0.2 × 7650 × 2

= 3060

    Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)

Remote copy megacycles required = profile specific megacycles × number of mailbox users × number of remote copies

= 0.2 × 900 × 2

= 360

    Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average

    message size)

Remote copy megacycles required = profile specific megacycles × number of mailbox users × number of remote copies

= 0.3 × 450 × 2

= 270

    Total remote copy megacycles required (all tiers) = 3690


    Step 3: Calculate local passive mailbox CPU requirements

    In a design with three copies of each database, there is processor overhead associated with

    maintaining the local passive copies of each database. In this step, the high level megacycles

    required to support local passive database copies will be calculated. These numbers will be

    refined in a later step so that they match the server resiliency strategy and database copy layout.

    Calculate the requirements, using the following:

    Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)

Passive mailbox megacycles required = profile specific megacycles × number of mailbox users × number of passive copies

= 0.3 × 7650 × 2

= 4590

    Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average

    message size)

Passive mailbox megacycles required = profile specific megacycles × number of mailbox users × number of passive copies

= 0.3 × 900 × 2

= 540

    Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average

    message size)

Passive mailbox megacycles required = profile specific megacycles × number of mailbox users × number of passive copies

= 0.45 × 450 × 2

= 405

    Total passive mailbox megacycles required (all tiers) = 5535

    Step 4: Calculate total CPU requirements

    Calculate the total requirements, using the following:

Total megacycles required = active mailbox + remote passive copies + local passive copies

= 18450 + 3690 + 5535

= 27675

Total megacycles per mailbox = 27675 ÷ 9000 = 3.08
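The totals from Steps 1 through 3 can be verified with a short sketch (illustrative; the per-profile megacycle figures come from the estimates table, with two remote and two local passive copies per the calculations above):

```python
# Illustrative sketch of the total megacycle arithmetic (Steps 1-3).
active = {100: 2, 150: 3}      # megacycles per active mailbox
remote = {100: 0.2, 150: 0.3}  # per mailbox, per remote passive copy
local = {100: 0.3, 150: 0.45}  # per mailbox, per local passive copy
tiers = [(100, 7650), (100, 900), (150, 450)]  # (profile, mailbox count)

total = sum((active[p] + 2 * remote[p] + 2 * local[p]) * n for p, n in tiers)
per_mailbox = total / 9000
# total works out to 27675 megacycles, about 3.08 per mailbox
```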

    Return to top

    Determine Whether Server Virtualization Will Be Used

    Several factors are important when considering server virtualization for Exchange. For more

    information about supported configurations for virtualization, see Exchange 2010 System

    Requirements.

    The main reasons customers use virtualization with Exchange are as follows:


    If you expect server capacity to be underutilized and anticipate better utilization, you may

    purchase fewer servers as a result of virtualization.

    You may want to use Windows Network Load Balancing when deploying Client Access, Hub

    Transport, and Mailbox server roles on the same physical server.

    If your organization is using virtualization in all server infrastructure, you may want to use

    virtualization with Exchange, to be in alignment with corporate standard policy.

    *Design Decision Point*

    In this solution, deploying additional physical hardware for Client Access servers and Hub

    Transport servers isn't wanted. Active/passive site resiliency design would require several

    Mailbox servers to support the DAG design and database copy layout, which may result in

    unused capacity on the Mailbox servers. Virtualization will be used to better utilize capacity

    across server roles.

    Return to top

    Determine Whether Client Access and Hub Transport Server Roles Will Be Deployed in Separate Virtual Machines

    When using virtualization for the Client Access and Hub Transport server roles, you may consider

    deploying both roles on the same VM. This approach reduces the number of VMs to manage, the

    number of server operating systems to update, and the number of Windows and Exchange

    licenses you need to purchase. Another benefit to combining the Client Access and Hub

    Transport server roles is to simplify the design process. When deploying roles in isolation, we

    recommend that you deploy one Hub Transport server logical processor for every four Mailbox

    server logical processors, and that you deploy three Client Access server logical processors for

    every four Mailbox server logical processors. This can be confusing, especially when you have to

    provide sufficient Client Access and Hub Transport servers during multiple VM or physical server

    failures or maintenance scenarios. When deploying Client Access, Hub Transport, and Mailbox

    servers on like physical servers or like VMs, you can deploy one server with the Client Access

    and Hub Transport server roles for every one Mailbox server in the site.

    *Design Decision Point*

    In this solution, co-locating the Hub Transport and Client Access server roles in the same VM is

    wanted. The Mailbox server role is deployed separately in a second VM. This will reduce the

    number of VMs and operating systems to manage as well as simplify planning for server

    resiliency.

    Return to top

    Determine Server Model for Hyper-V Root Server

    Step 1: Identify preferred server vendor

    In this solution, the preferred server vendor is Dell.


    The Dell eleventh generation PowerEdge servers offer industry leading performance and

    efficiency. Innovations include increased memory capacity and faster I/O rates, which help deliver

    the performance required by today's most demanding applications.

    Step 2: Review available options from preferred vendor

    Dell's server portfolio includes several models that were considered for this implementation.

    Option 1: Dell PowerEdge M610 Blade Server

    The decision to use iSCSI attached storage provides the potential for taking advantage of Dell

    blades, based on the M1000e chassis. The M610 combines two sockets and twelve DIMMs in a

    half-height blade for a dense and power efficient server.

    Dell PowerEdge M1000e blade chassis

Components              Description

Chassis/enclosure       Form factor: 10U modular enclosure holds up to sixteen half-height
                        blade servers
                        Dimensions: 44.0 cm (17.3") height, 44.7 cm (17.6") width,
                        75.4 cm (29.7") depth
                        Weight:
                        Empty chassis: 98 lbs
                        Chassis with all rear modules (IOMs, PSUs, CMCs, KVM): 176 lbs
                        Maximum fully loaded with blades and rear modules: 394 lbs

Power supplies          3 (non-redundant) or 6 (redundant) 2,360 watt hot-plug power
                        supplies

Cooling fans            M1000e chassis comes standard with 9 hot-pluggable, redundant fan
                        modules

Input device            Front control panel with interactive graphical LCD:
                        Supports initial configuration wizard
                        Local server blade, enclosure, and module information and
                        troubleshooting
                        Two USB keyboard/mouse connections and one video connection
                        (requires the optional Avocent iKVM switch to enable these ports)
                        for local front crash cart console connections that can be
                        switched between blades

Enclosure I/O modules   Up to six total I/O modules for three fully redundant fabrics,
                        featuring Ethernet FlexIO technology providing on-demand stacking
                        and uplink scalability. Dell FlexIO technology delivers a level of
                        I/O flexibility, bandwidth, investment protection, and capabilities
                        unrivaled in the blade server market.
                        FlexIO technologies include:
                        Completely passive, highly available midplane that can deliver
                        greater than 5 terabytes per second (TBps) of total I/O bandwidth
                        Support for up to two ports of up to 40 gigabits per second (Gbps)
                        from each I/O mezzanine card on the blade server

Management              1 (standard) or optional second (redundant) Chassis Management
                        Controller (CMC)
                        Optional integrated Avocent keyboard, video and mouse (iKVM)
                        switch
                        Dell OpenManage systems management

External storage        Dell EqualLogic PS series, Dell/EMC AX series, Dell/EMC CX series,
options                 Dell/EMC NS series, Dell PowerVault MD series, Dell PowerVault NX
                        series

    Dell PowerEdge M610 server

Components        Description

Processors (x2)   Latest quad-core or six-core Intel Xeon processors 5500 and 5600 series

Form factor       Blade/modular half-height slot in an M1000e blade chassis

Memory            12 DIMM slots
                  1 GB/2 GB/4 GB/8 GB/16 GB ECC DDR3
                  Support for up to 192 GB using 12 × 16 GB DIMMs

Drives            Internal hot-swappable drives:
                  2.5" SAS (10,000 rpm): 36 GB, 73 GB, 146 GB, 300 GB, 600 GB
                  2.5" SAS (15,000 rpm): 36 GB, 73 GB, 146 GB
                  Solid-state drives (SSD): 25 GB, 50 GB, 100 GB, 150 GB
                  Maximum internal storage: up to 1.2 terabytes via 4 × 300 GB SAS hard
                  disk drives
                  For external storage options, see the previous M1000e blade chassis
                  information

I/O slots         For details, see the previous M1000e blade chassis information

    Option 2: Dell PowerEdge M710 blade server

    The M710 provides two sockets in a blade form factor but extends the number of DIMMs to

eighteen, greatly expanding memory capacity. However, the M710 is a full-height blade. The extra RAM can make the M710 an attractive virtualization server.

Dell PowerEdge M710 server

Components        Description

Processors (x2)   Latest quad-core or six-core Intel Xeon processors 5500 and 5600 series

Form factor       Blade/modular full-height slot in an M1000e blade chassis

Memory            18 DIMM slots
                  1 GB/2 GB/4 GB/8 GB/16 GB ECC DDR3
                  Support for up to 192 GB using 12 × 16 GB DIMMs

Drives            Internal hot-swappable drives:
                  2.5" SAS (10,000 rpm): 36 GB, 73 GB, 146 GB, 300 GB, 600 GB
                  2.5" SAS (15,000 rpm): 36 GB, 73 GB, 146 GB
                  SSD: 25 GB, 50 GB, 100 GB, 150 GB
                  Maximum internal storage: up to 1.2 terabytes via 4 × 300 GB SAS hard
                  disk drives
                  For external storage options, see the previous M1000e blade chassis
                  information

I/O slots         For details, see the previous M1000e blade chassis information

    Option 3: Dell PowerEdge R710 rack-mounted server

    Another choice for this implementation could be the Dell PowerEdge R710. This Intel-based
    platform is a 2U rack-mounted server containing two sockets, eighteen DIMM slots, and the option
    of either eight 2.5" or six 3.5" internal hard disk drives. Although limited in internal disk capacity
    compared to the other server models presented, it scales beyond the R510 in memory (eighteen
    DIMMs compared to eight) and provides more I/O options. Storage capabilities may be expanded
    by using Dell PowerVault MD1200 or MD1220 direct attached storage arrays. The MD1200
    provides twelve 3.5" hard disk drives in a 2U rack-mounted form factor, while the MD1220
    provides twenty-five 2.5" hard disk drives in the same 2U rack-mounted form factor. These
    6 Gbps SAS-connected arrays can be daisy chained, up to four arrays per RAID controller, and
    also support redundant connections from the server. This storage option satisfies requirements
    for lower cost storage and simplicity while giving each node the ability to scale in the number of
    supported mailboxes.

    Dell PowerEdge R710 server

    Processors (x2): Latest quad-core or six-core Intel Xeon 5500 and 5600 series processors
    Form factor: 2U rack
    Memory: Up to 192 GB (18 DIMM slots); 1 GB/2 GB/4 GB/8 GB/16 GB DDR3 at
    800 megahertz (MHz), 1066 MHz, or 1333 MHz
    Drives: Up to six 3.5" drives with optional flex bay, or up to eight 2.5" SAS or SATA drives
    with optional flex bay; the flex bay expansion supports a half-height TBU. Peripheral bay
    options include a slim optical drive bay with a choice of DVD-ROM, combo CD-RW/DVD-ROM,
    or DVD+RW.
    I/O slots: 2 PCIe x8 + 2 PCIe x4 (Gen 2), or 1 PCIe x16 + 2 PCIe x4 (Gen 2)

    Option 4: Dell PowerEdge R810 rack mounted server

    The R810 is a two- or four-socket platform in a 2U form factor. It contains Dell patented FlexMem
    Bridge technology, which allows the server to take advantage of all thirty-two DIMM slots even
    with only two processors installed. This makes the R810 a virtualization platform providing great
    compute power in a dense package.

    Dell PowerEdge R810 server

    Processors (x4): Up to eight-core Intel Xeon 7500 and 6500 series processors
    Form factor: 2U rack
    Memory: Up to 512 GB (32 DIMM slots); 1 GB/2 GB/4 GB/8 GB/16 GB DDR3 1066 MHz
    Drives: Hot-swap option available with up to six 2.5" SAS or SATA drives, including SATA SSD
    I/O slots: 6 PCIe Gen 2 slots: five x8 slots, one x4 slot, and one storage x4 slot

    Step 3: Select a server model

    For this solution, the Dell PowerEdge M610 blade is selected. The goal is to standardize on
    blades in the datacenter to take advantage of their density and power efficiencies. Although the
    M710 may be able to support more VMs per server than the M610, the half-height M610 still
    saves more blade chassis capacity in this deployment than the full-height M710.

    In previous steps, the megacycles required to support the number of active mailbox users were
    calculated. In the following steps, the number of available megacycles that the selected server
    model and processor can deliver is determined, so that the number of active mailboxes each
    server can support can then be calculated.

    Step 4: Determine benchmark value for server and processor

    Because the megacycle requirements are based on a baseline server and processor model, you

    need to adjust the available megacycles for the server against the baseline. To do this,

    independent performance benchmarks maintained by Standard Performance Evaluation

    Corporation (SPEC) are used. SPEC is a non-profit corporation formed to establish, maintain,

    and endorse a standardized set of relevant benchmarks that can be applied to the newest

    generation of high-performance computers.


    To help simplify the process of obtaining the benchmark value for your server and processor, we
    recommend you use the Exchange Processor Query tool. This tool automates the manual steps to
    determine your planned processor's SPECint 2006 rate value. To run this tool, your computer
    must be connected to the Internet. The tool uses your planned processor model as input, and
    then runs a query against the Standard Performance Evaluation Corporation Web site, returning
    all test result data for that specific processor model. The tool also calculates an average
    SPECint 2006 rate value based on the number of processors planned to be used in each Mailbox
    server.

    Use the following calculations:

    Processor and server platform = Intel X5550 2.6 gigahertz (GHz) in a Dell M610

    SPECint_rate2006 value = 234

    SPECint_rate2006 value per processor core = 234 ÷ 8

    = 29.25

    Step 5: Calculate adjusted megacycles

    In previous steps, you calculated the required megacycles for the entire environment based on
    megacycle per mailbox estimates. Those estimates were measured on a baseline system (HP
    DL380 G5, Intel X5470 3.33 GHz, 8 cores) that has a SPECint_rate2006 value of 150 (for an
    8-core server), or 18.75 per core.

    In this step, you need to adjust the available megacycles for the chosen server and processor

    against the baseline processor so that the required megacycles can be used for capacity

    planning.

    To determine the megacycles of the Dell M610 Intel X5550 2.6 GHz platform, use the following

    formula:

    Adjusted megacycles per core = (new platform per core value) × (hertz per core of baseline
    platform) ÷ (baseline per core value)

    = (29.25 × 3330) ÷ 18.75

    = 5195

    Adjusted megacycles per server = adjusted megacycles per core × number of cores

    = 5195 × 8

    = 41558

    Step 6: Adjust available megacycles for virtualization overhead

    When deploying VMs on the root server, megacycles required to support the hypervisor and

    virtualization stack must be accounted for. This overhead varies from server to server and under

    different workloads. A conservative estimate of 10 percent of available megacycles will be used.

    Use the following calculation:

    Adjusted available megacycles = usable megacycles × 0.90

    = 41558 × 0.90

    = 37403

    So each server has a usable capacity for VMs of 37403 megacycles.


    The usable capacity per logical processor is 4675 megacycles.
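As a quick check, the arithmetic in Steps 4 through 6 can be sketched in Python. This is a minimal sketch using the values from this solution; the function name and overhead parameter are illustrative, not part of any Microsoft tool.

```python
# Baseline platform from the sizing guidance: HP DL380 G5, Intel X5470 3.33 GHz,
# 8 cores, SPECint_rate2006 = 150 (18.75 per core).
BASELINE_MHZ_PER_CORE = 3330
BASELINE_RATE_PER_CORE = 150 / 8

def usable_megacycles(spec_rate, cores, hypervisor_overhead=0.10):
    """Adjust a platform's SPECint_rate2006 score to baseline-equivalent
    megacycles, then reserve a fraction for the hypervisor and stack."""
    per_core = (spec_rate / cores) * BASELINE_MHZ_PER_CORE / BASELINE_RATE_PER_CORE
    per_server = per_core * cores
    return per_server * (1 - hypervisor_overhead)

# Dell M610 with two Intel X5550 processors: SPECint_rate2006 = 234 over 8 cores.
usable = usable_megacycles(234, 8)
print(round(usable), round(usable / 8))  # 37403 4675
```

Carrying the unrounded intermediate values through, as the worksheet does, reproduces the 37403 usable megacycles per server and 4675 per logical processor quoted above.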


    Determine the CPU Capacity of the Virtual Machines

    Now that we know the megacycles of the root server we can calculate the megacycles of each

    VM. These values will be used to determine how many VMs are required and how many

    mailboxes will be hosted by each VM.

    Step 1: Calculate available megacycles per virtual machine

    In this step, you determine how many megacycles are available for each VM deployed on the root

    server. Because the server has eight logical processors, plan to deploy two VMs per server, each

    with four virtual processors. Use the following calculation:

    Available megacycles per VM = adjusted available megacycles per server ÷ number of VMs

    = 37403 ÷ 2

    = 18701

    Step 2: Determine the target available megacycles per virtual machine

    Because the design assumptions state not to exceed 70 percent processor utilization, in this step,
    you adjust the available megacycles to reflect the 70 percent target. Use the following calculation:

    Target available megacycles = available megacycles × target maximum processor utilization

    = 18701 × 0.70

    = 13091
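The two steps above can be expressed directly; this is a sketch with the solution's values, where the 70 percent ceiling is the design assumption stated earlier:

```python
# Split the root server's usable megacycles across its VMs, then apply the
# 70 percent peak-utilization design target.
usable_per_server = 37403        # from the previous section (rounded)
vms_per_server = 2               # two VMs, four virtual processors each

available_per_vm = usable_per_server // vms_per_server     # 18701
target_per_vm = round(available_per_vm * 0.70)             # 13091
print(available_per_vm, target_per_vm)
```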


    Determine Number of Mailbox Server Virtual Machines Required

    You can use the following steps to determine the number of Mailbox server VMs required.

    Step 1: Determine the maximum number of mailboxes supported by the MBX virtual machine

    To determine the maximum number of mailboxes supported by the MBX VM, use the following

    calculation:

    Number of active mailboxes = available megacycles ÷ megacycles per mailbox

    = 13091 ÷ 3.08

    = 4250


    Step 2: Determine the minimum number of mailbox virtual machines required in the primary site

    To determine the minimum number of mailbox VMs required in the primary site, use the following

    calculation:

    Number of VMs required = total mailbox count in site ÷ active mailboxes per VM

    = 9000 ÷ 4250

    = 2.12

    Based on processor capacity, a minimum of three Mailbox server VMs is required to support the
    anticipated peak workload during normal operating conditions.
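Steps 1 and 2 above amount to a division and a ceiling; a minimal sketch under the solution's assumptions:

```python
import math

# Mailboxes one Mailbox VM can host at the 3.08 megacycles-per-mailbox estimate
# used earlier in this solution, and the resulting VM count for the site.
target_megacycles_per_vm = 13091
megacycles_per_mailbox = 3.08
site_mailboxes = 9000

mailboxes_per_vm = int(target_megacycles_per_vm / megacycles_per_mailbox)  # 4250
vms_required = math.ceil(site_mailboxes / mailboxes_per_vm)                # 3
print(mailboxes_per_vm, vms_required)
```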

    Step 3: Determine number of Mailbox server virtual machines required to support the mailbox resiliency strategy

    In the previous step, you determined that a minimum of three Mailbox server VMs are needed to
    support the target workload. In an active/passive database distribution model, you need a
    minimum of three Mailbox server VMs in the secondary datacenter to support the workload during
    a site failure event. The DAG design will have nine Mailbox server VMs, with six in the primary
    site and three in the secondary site.

    Datacenter vs. Mailbox server count

    Primary datacenter   Secondary datacenter   Total Mailbox server count
    2                    1                      3
    4                    2                      6
    6                    3                      9
    8                    4                      12


    Determine Number of Mailboxes per Mailbox Server

    You can use the following steps to determine the number of mailboxes per Mailbox server.

    Step 1: Determine number of active mailboxes per server during normal operation

    To determine the number of active mailboxes per server during normal operation, use the

    following calculation:

    Number of active mailboxes per server = total mailbox count ÷ server count

    = 9000 ÷ 6

    = 1500


    Step 2: Determine number of active mailboxes per server during the worst case failure event

    To determine the number of active mailboxes per server during the worst case failure event, use
    the following calculation:

    Number of active mailboxes per server = total mailbox count ÷ server count

    = 9000 ÷ 3

    = 3000
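Both cases reduce to dividing the site's mailboxes across the servers that remain active:

```python
# Active mailboxes per server: all six primary-site Mailbox VMs up (normal
# operation) versus only three up (worst case failure).
total_mailboxes = 9000
print(total_mailboxes // 6, total_mailboxes // 3)  # 1500 3000
```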


    Determine Memory Required Per Mailbox Server

    You can use the following steps to determine the memory required per Mailbox server.

    Step 1: Determine database cache requirements per server for the worst case failure scenario

    In a previous step, you determined that the database cache requirements for all mailboxes was

    55 GB and the average cache required per active mailbox was 6.2 MB.

    To design for the worst case failure scenario, you calculate based on active mailboxes residing

    on three of six Mailbox servers. Use the following calculation:

    Memory required for database cache = number of active mailboxes × average cache per
    mailbox

    = 3000 × 6.2 MB

    = 18600 MB

    = 18.2 GB

    Step 2: Determine total memory requirements per Mailbox server virtual machine for the worst case failure scenario

    In this step, reference the following table to determine the recommended memory configuration.

    Memory requirements

    Server physical memory (RAM)   Database cache size (Mailbox role only)
    24 GB                          17.6 GB
    32 GB                          24.4 GB
    48 GB                          39.2 GB

    The recommended memory configuration to support 18.2 GB of database cache for a mailbox

    role server is 32 GB.
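The table lookup can be sketched as picking the smallest supported memory configuration whose Mailbox-role cache capacity covers the worst-case requirement:

```python
# Server RAM (GB) -> database cache size (Mailbox role only), from the table above.
ram_to_cache_gb = {24: 17.6, 32: 24.4, 48: 39.2}

# Worst-case cache need: 3,000 active mailboxes x 6.2 MB average cache each.
cache_needed_gb = 3000 * 6.2 / 1024
ram_gb = min(r for r, cache in ram_to_cache_gb.items() if cache >= cache_needed_gb)
print(round(cache_needed_gb, 1), ram_gb)  # 18.2 32
```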



    Determine Number of Client Access and Hub Transport Server Combo Virtual Machines Required

    In a previous step, it was determined that nine Mailbox server VMs are required. We recommend

    that you deploy one Client Access and Hub Transport server combo VM for every MBX VM.

    Therefore, the design will have nine Client Access and Hub Transport server combo VMs.

    Number of Client Access and Hub Transport server combo VMs required

    Server role configuration: Mailbox server role to Client Access and Hub Transport combined
    server role
    Recommended processor core ratio: 1:1

    Determine Memory Required per Combined Client Access and Hub Transport Virtual Machines

    To determine the memory configuration for the combined Client Access and Hub Transport server

    role VM, reference the following table.

    Memory configurations for Exchange 2010 servers based on installed server roles

    Exchange 2010 server role                              Minimum supported   Recommended maximum
    Hub Transport server role                              4 GB                1 GB per core
    Client Access server role                              4 GB                2 GB per core
    Client Access and Hub Transport combined server role
    (both roles running on the same physical server)       4 GB                2 GB per core

    Based on the preceding table, each combined Client Access and Hub Transport server VM
    requires a minimum of 8 GB of memory (2 GB per core × 4 virtual processors).


    Determine Virtual Machine Distribution

    When deciding which VMs to host on which root server, your main goal should be to eliminate

    single points of failure. Don't locate both Client Access and Hub Transport server role VMs on the

    same root server, and don't locate both Mailbox server role VMs on the same root server.


    Virtual machine distribution (incorrect)

    The correct distribution is one Client Access and Hub Transport server role VM on each of the

    physical host servers and one Mailbox server role VM on each of the physical host servers. So in

    this solution there will be nine Hyper-V root servers each supporting one Client Access and Hub

    Transport server role VM and one Mailbox server role VM.

    Virtual machine distribution (correct)


    Determine Memory Required per Root Server

    To determine the memory required for each root server, use the following calculation:

    Root server memory = Client Access and Hub Transport server role VM memory + Mailbox

    server role VM memory

    = 8 GB + 32 GB

    = 40 GB

    The Hyper-V root server will require a minimum of 40 GB.


    Determine Minimum Number of Databases Required

    To determine the optimal number of Exchange databases to deploy, use the Exchange 2010

    Mailbox Role Calculator. Enter the appropriate information on the input tab and select Yes for

    Automatically Calculate Number of Unique Databases / DAG.


    Database configuration

    On the Role Requirements tab, the recommended number of databases appears.

    Recommended number of databases

    In this solution, a minimum of 12 databases will be used. The exact number of databases may be

    adjusted in future steps to accommodate the database copy layout.


    Identify Failure Domains Impacting Database Copy Layout

    Use the following steps to identify failure domains impacting database copy layout.

    Step 1: Identify failure domains associated with storage

    In a previous step, it was decided to deploy three Dell EqualLogic PS6500E arrays and to deploy

    three copies of each database. To provide maximum protection for each of those database

    copies, we recommend that no more than one copy of a single database be located on the same

    physical array. In this scenario, each PS6500E represents a failure domain that will impact the

    layout of database copies in the DAG.

    Dell EqualLogic PS6500E arrays

    Step 2: Identify failure domains associated with servers

    In a previous step, it was determined that nine physical blade servers will be deployed. Six of

    those servers will be deployed in the primary datacenter and three in the secondary datacenter.

    Blades are associated with blade enclosures. So to support the site resiliency requirements, a
    minimum of two blade enclosures is required.


    Failure domains associated with servers

    In the previous step, it was determined that the three PS6500E arrays represent three failure
    domains. Consider connecting all six blades in the first enclosure to the two PS6500Es in the
    primary datacenter. If an issue impacts that enclosure, there are no other servers in the
    primary datacenter, and you're forced to conduct a manual site switchover to the secondary

    datacenter. A better design is to deploy three blade enclosures, each with three of the nine server

    blades. Pair the servers in the first enclosure with the first PS6500E, the servers in the second

    enclosure with the second PS6500E, and the three servers in the secondary site with the

    PS6500E in the secondary site. By aligning the server and storage failure domains, the database

    copies are set in a manner that protects against issues with either the storage array or an entire

    blade enclosure.

    Failure domains associated with servers in two sites


    Design Database Copy Layout

    Use the following steps to design database copy layout.

    Step 1: Determine number of database copies per Mailbox server

    In a previous step, it was determined that the minimum number of unique databases that should

    be deployed is 12. In an active/passive configuration with three copies, we recommend that the

    number of databases equal the total number of Mailbox servers in the primary site multiplied by

    the number of Mailbox servers in a single failure domain and be greater than the minimum

    number of recommended databases. Use the following calculation:

    Unique database count = total number of Mailbox servers in primary datacenter × number of
    Mailbox servers in failure domain

    = 6 × 3

    = 18
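The sizing rule above can be checked in a couple of lines:

```python
# Unique databases = Mailbox servers in the primary datacenter x Mailbox servers
# per failure domain, and must meet the calculator's recommended minimum.
primary_servers = 6
servers_per_failure_domain = 3
recommended_minimum = 12

unique_databases = primary_servers * servers_per_failure_domain
assert unique_databases >= recommended_minimum
print(unique_databases)  # 18
```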

    Step 2: Determine database layout during normal operating conditions

    Consider equally distributing the C1 database copies (or the copies with an activation preference

    value of 1) to the servers in the primary datacenter. These are the copies that will be active during

    normal operating conditions.

    Database copy layout during normal operating conditions

    DB MBX1 MBX2 MBX3 MBX4 MBX5 MBX6

    DB1 C1

    DB2 C1

    DB3 C1

    DB4 C1

    DB5 C1

    DB6 C1

    DB7 C1

    DB8 C1

    DB9 C1

    DB10 C1

    DB11 C1

    DB12 C1

    DB13 C1

    DB14 C1

    DB15 C1

    DB16 C1

    DB17 C1

    DB18 C1

    In the preceding table, the following applies:

    C1 = active copy (activation preference value of 1) during normal operations

    Next distribute the C2 database copies (or the copies with an activation preference value of 2) to

    the servers in the second failure domain. During the distribution, you distribute the C2 copies

    across as many servers in the alternate failure domain as possible to ensure that a single server

    failure has a minimal impact on the servers in the alternate failure domain.


    Database copy layout with C2 database copies distributed

    DB MBX1 MBX2 MBX3 MBX4 MBX5 MBX6

    DB1 C1 C2

    DB2 C1 C2

    DB3 C1 C2

    DB4 C1 C2

    DB5 C1 C2

    DB6 C1 C2

    DB7 C1 C2

    DB8 C1 C2

    DB9 C1 C2

    In the preceding table, the following applies:

    C1 = active copy (activation preference value of 1) during normal operations

    C2 = passive copy (activation preference value of 2) during normal operations

    Consider the opposite configuration for the other failure domain. Again, you distribute the C2

    copies across as many servers in the alternate failure domain as possible to ensure that a single

    server failure has a minimal impact on the servers in the alternate failure domain.

    Database copy layout with C2 database copies distributed in the opposite configuration

    DB MBX1 MBX2 MBX3 MBX4 MBX5 MBX6

    DB10 C2 C1

    DB11 C2 C1

    DB12 C2 C1

    DB13 C2 C1

    DB14 C2 C1

    DB15 C2 C1

    DB16 C2 C1

    DB17 C2 C1

    DB18 C2 C1

    In the preceding table, the following applies:

    C1 = active copy (activation preference value of 1) during normal operations


    C2 = passive copy (activation preference value of 2) during normal operations

    Step 3: Determine database layout during server failure and maintenance conditions

    Before considering the secondary datacenter and distributing the C3 copies, examine the
    following server failure scenario. In the following example, if server MBX1 fails, the active
    database copies automatically move to servers MBX4, MBX5, and MBX6. Notice that each of
    the three servers in the alternate failure domain is now running four active databases, and the
    active databases are equally distributed across all three servers.

    Database copy layout during server maintenance or failure

    In the preceding table, the following applies:


    C1 = active copy (activation preference value of 1) during normal operations

    C2 = passive copy (activation preference value of 2) during normal operations

    In a maintenance scenario, you could move the active mailbox databases from the servers in the

    first failure domain (MBX1, MBX2, MBX3) to the servers in the second failure domain (MBX4,

    MBX5, MBX6), complete maintenance activities, and then move the active database copies back

    to the C1 copies on the servers in the first failure domain. You can conduct maintenance activities

    on all servers in the primary datacenter in two passes.

    Database copy layout during server maintenance

    In the preceding table, the following applies:

    C1 = active copy (activation preference value of 1) during normal operations

    C2 = passive copy (activation preference value of 2) during normal operations


    Step 4: Add database copies to secondary datacenter to support site resiliency

    The last step in the database copy layout is to add the C3 copies (or copies with an activation

    preference value of 3) to the servers in the secondary datacenter to provide