_Windows Server 2012 - Overview - Module 02 - Features and Enhancements (Detailed)


  • © 2012 Microsoft Corporation. Microsoft Confidential.


  • Introduction


    Objectives

    After completing this lesson, you will be able to:

    Identify the improvements made to Database Mirroring in SQL Server.
    Explain what Database Mirroring is and describe its architecture.
    Identify the hard and soft errors that result in a failover response.
    Identify the various Database Mirroring states.
    Explain the role of the witness server in Database Mirroring.
    Initiate and terminate a Database Mirroring session.


  • Microsoft uniquely delivers the Cloud OS as a consistent and comprehensive set of capabilities across your datacenter, ours, or someone else's, to support the world's apps and data anywhere.

    Introducing Windows Server 2012: Windows Server 2012 is at the heart of the Cloud OS and delivers on the promises of a modern data center to bring you the economics, agility, and innovation of cloud both on your premises and off. We've seen hundreds of thousands of downloads of the pre-release versions, thousands of engineers worked on this product, and we couldn't be more proud to share it with you.

    Let's take a closer look at how Windows Server 2012 can deliver value to your organization.

    Windows Server Management Marketing

    7/17/2013

    7

  • The packaging and licensing structure for Windows Server 2012 Datacenter edition and Windows Server 2012 Standard edition has been updated to simplify purchasing and reduce management requirements:

    Two editions differentiated only by virtualization rights: two virtual instances for Standard edition and unlimited virtual instances for Datacenter edition.

    A consistent processor-based licensing model that covers up to two physical processors on a server.

    Windows Server 2012 Essentials edition and Windows Server 2012 Foundation edition remain unchanged:

    A server-based licensing model: Foundation is for single-processor servers, and Essentials is for either one- or two-processor servers.

    CALs are not required for access: Foundation comes with 15 user accounts, and Essentials comes with 25 user accounts.

    8

  • Please note these are minimums. In brackets are the minimums I would recommend for a physical server. For a virtual server, you could go down to 1 vCPU and 1 GB of RAM.

    When you install Windows Server 2012, you can choose between the Server Core Installation and Server with a GUI options. The Server with a GUI option is the Windows Server 2012 equivalent of the Full installation option available in Windows Server 2008 R2. The Server Core Installation option reduces the space required on disk, the potential attack surface, and especially the servicing requirements, so we recommend that you choose the Server Core installation unless you have a particular need for the additional user interface elements and graphical management tools that are included in the Server with a GUI option. For this reason, the Server Core installation is now the default.

    Because you can freely switch between these options at any time later, one approach might be to initially install the Server with a GUI option, use the graphical tools to configure the server, and then later switch to the Server Core Installation option. An intermediate state is also possible: you start with a Server with a GUI installation and then remove the Server Graphical Shell, resulting in a server that comprises the Minimal Server Interface, Microsoft Management Console (MMC), Server Manager, and a subset of Control Panel. See the Minimal Server Interface section of this document for more information.

    In addition, after installation of either option is complete, you can completely remove the binary files for server roles and features that you do not need, thereby conserving disk space and reducing the attack surface still further. See the Features on Demand section of this document for more information. For the smallest possible installation footprint, start with a Server Core installation and remove any server roles or features you do not need by using Features on Demand.

    If you choose the Server Core Installation option, the standard user interface (the Server Graphical Shell) is not installed; you manage the server from the command line, with Windows PowerShell, or by remote methods.

    User interface: command prompt (the Server Graphical Shell is not installed).
    Install, configure, or uninstall server roles locally: at a command prompt with Windows PowerShell.
    Install, configure, or uninstall server roles remotely: with Server Manager, Remote Server Administration Tools (RSAT), or Windows PowerShell.
    Microsoft Management Console: not available locally.
    Desktop Experience: not available.
    Server roles available:

    Active Directory Certificate Services
    Active Directory Domain Services
    DHCP Server
    DNS Server
    File Services (including File Server Resource Manager)
    Active Directory Lightweight Directory Services (AD LDS)
    Hyper-V
    Print and Document Services
    Streaming Media Services
    Web Server (including a subset of ASP.NET)
    Windows Server Update Server
    Active Directory Rights Management Server
    Routing and Remote Access Server
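    The switching between installation options described above is driven by the role-management cmdlets. As a minimal sketch (feature names as in Windows Server 2012; run from an elevated PowerShell prompt, and expect a reboot):

    ```powershell
    # Convert Server with a GUI to Server Core by removing the shell components
    Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

    # Convert Server Core back to Server with a GUI
    Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart

    # Minimal Server Interface: remove only the graphical shell,
    # keeping MMC, Server Manager, and a subset of Control Panel
    Uninstall-WindowsFeature Server-Gui-Shell -Restart
    ```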

    9

  • 10

  • More Intuitive

    Enhanced ISE with Intellisense

    Simplified language syntax

    Updatable help system

    Easy command discovery and import

    Broader Coverage

    Over 2,300 cmdlets across Windows

    Support for thriving community

    Script Explorer & Script Library

    Greater Resiliency

    Robust session connectivity

    Integrated workflow

    Connect/disconnect remote sessions

    Scheduled jobs

    PowerShell 3.0 is a better 2.0

    Many suggestions addressed

    On-the-fly compilation allows scripts to run up to 6x faster

    Enhanced interactive console experience

    Core cmdlet and provider improvements

    11


  • Windows PowerShell 3.0 provides a comprehensive management platform for all aspects of the data center: servers, network, and storage. Windows PowerShell 3.0 includes 260 core cmdlets. Windows Server 2012 includes more than 2,300 total cmdlets in 85 available modules.
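    You can check the module and cmdlet coverage on any given installation yourself; a quick sketch (the counts vary with the roles and features installed):

    ```powershell
    # Modules available on this system
    Get-Module -ListAvailable | Measure-Object

    # All cmdlets currently discoverable
    # (PowerShell 3.0 auto-loads modules on demand)
    Get-Command -CommandType Cmdlet | Measure-Object
    ```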

    12


  • Value added to the PowerShell ecosystem:

    End users: remote management, access anywhere.

    Partners: return on their PowerShell investment.

    Requirements:

    Client: browser (HTML + Ajax).

    Gateway: Windows Server 2012 with PowerShell Web Access installed.

    Target: PowerShell Remoting.
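    Gateway setup can be sketched as follows, assuming a test environment (the test-certificate switch uses a self-signed certificate and is not for production; `CONTOSO\jdoe` and `srv01` are placeholder names):

    ```powershell
    # Install the PowerShell Web Access feature on the gateway server
    Install-WindowsFeature -Name WindowsPowerShellWebAccess -IncludeManagementTools

    # Publish the web application in IIS with a self-signed test certificate
    Install-PswaWebApplication -UseTestCertificate

    # Authorize a user to reach a target computer through the gateway
    Add-PswaAuthorizationRule -UserName CONTOSO\jdoe -ComputerName srv01 -ConfigurationName Microsoft.PowerShell
    ```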

    13


  • New Windows PowerShell ISE features. The Windows PowerShell Integrated Scripting Environment (ISE) 3.0 includes many new features to ease beginning users into Windows PowerShell and to provide advanced editing support for scripters. Some of the new features are:

    The Show-Command pane lets users find and run cmdlets in a dialog box.

    IntelliSense provides context-sensitive command completion for cmdlet and script names, parameter names and enumerated values, and property and method names. IntelliSense also supports paths, types, and variables.
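    Show-Command is itself a cmdlet, so the same pane can be invoked from the console; for example:

    ```powershell
    # Open a dialog listing every available command
    Show-Command

    # Open a dialog pre-populated for one cmdlet;
    # filling in the fields builds and runs the call
    Show-Command Get-EventLog
    ```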

    14


  • The ForEach-Object and Where-Object cmdlets have been updated to support an intuitive command structure that more closely models natural language. Users can construct commands without script blocks, braces, the current-object automatic variable ($_), or dot operators to get properties and methods. In short, the punctuation that plagued beginning users is no longer required.
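    For example (Get-Process is used here simply as a convenient pipeline source), the 2.0-style and 3.0-style forms are equivalent:

    ```powershell
    # Windows PowerShell 2.0 style: script block, braces, and $_
    Get-Process | Where-Object { $_.WorkingSet -gt 100MB }
    Get-Process | ForEach-Object { $_.Name }

    # Windows PowerShell 3.0 simplified syntax: no braces, no $_
    Get-Process | Where-Object WorkingSet -gt 100MB
    Get-Process | ForEach-Object Name
    ```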

    New Windows PowerShell ISE features. Windows PowerShell ISE 3.0 includes many other new features to ease beginning users into Windows PowerShell and provide advanced editing support for scripters. Some of the new features are:

    Windows PowerShell 3.0 helps IT pros by providing access to a library of Windows PowerShell code snippets within Windows PowerShell ISE. To access integrated script snippets, the user presses Ctrl+J, selects the appropriate template from a list of script templates, and has a partially completed script inserted into the editor.

    Collapsible regions in scripts and XML files make navigation in long scripts easier.

    15


  • With the new release of Windows PowerShell, sessions aren't just persistent; they are resilient. Robust Session Connectivity allows sessions to remain in a connected state even when network connectivity is briefly disrupted. With Robust Session Connectivity, remote sessions can remain in a connected state for up to 4 minutes, even if the client computer crashes or becomes inaccessible, and tasks on the managed nodes continue to run on their own, making the end-to-end system more reliable. If connectivity cannot be restored within 4 minutes, execution on the managed nodes is suspended with no loss of data, and remote sessions automatically transition to a disconnected state, allowing them to be reconnected after network connectivity is restored. Corruption of application and system state from premature termination of running tasks due to unexpected client disconnection is virtually eliminated.
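    The disconnect/reconnect behavior maps onto a handful of cmdlets; a sketch, with `srv01` as a placeholder server name:

    ```powershell
    # Start a remote session and a long-running job inside it
    $s = New-PSSession -ComputerName srv01
    Invoke-Command -Session $s -ScriptBlock { Start-Sleep -Seconds 600; 'done' } -AsJob

    # Deliberately disconnect; the work keeps running on srv01
    Disconnect-PSSession -Session $s

    # Later (even from another client machine), find and resume the session
    Get-PSSession -ComputerName srv01 | Connect-PSSession
    ```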

    16


  • Download required. Note: Script Explorer was a pre-release at the time this deck was created and will be updated.

    17


  • 18


  • Note to presenter: Feel free to unhide if you'd like to illustrate to the audience how much was added as part of this release.

    19

  • Glance-able

    Down-level-able: we can use this across Windows Server 2012, 2008 R2, and 2008 servers. We don't pull all the information, but we do provide a central view of server information.

    20


  • In Windows Server 2008 R2, roles and features are deployed by using the Add Roles Wizard or Add Features Wizard in Server Manager running on a local server. This requires either physical access to the server or Remote Desktop access by using RDP. Installing Remote Server Administration Tools lets you run Server Manager on a Windows-based client computer, but adding roles and features is disabled, because remote deployment isn't supported.

    In Windows Server 2012, the deployment capabilities are extended to support robust remote deployment of roles and features. Using Server Manager in Windows Server 2012, IT pros can provision servers from their desktops without requiring either physical access to the systems or the need to enable an RDP connection to each server. Windows Server 2012 with Server Manager can deploy both roles and features in a single session using the unified Add Roles and Features Wizard. The Add Roles and Features Wizard in Windows Server 2012 performs validation passes on a server you select for deployment as part of the installation process; there's no need to pre-verify that a server in your Server Manager server pool is properly configured to support a role. Jeffrey Snover provides additional perspective on multiserver management in Windows Server 2012 in the following blog post: http://blogs.technet.com/b/windowsserver/archive/2012/03/07/rocking-the-windows-server-8-administrative-experience.aspx

    Potential questions:

    Why can't I select multiple servers in the Add Roles and Features window and perform an operation in bulk?

    Unfortunately, this functionality couldn't be incorporated in the UI due to schedule/cost constraints. For Windows Server 2012, admins can write a short PowerShell script that uses a combination of the UI and the Install-WindowsFeature cmdlet to do a batch deployment. Let me know if you need more details; I can also provide you the sample script. Your feedback is noted for future versions, though.

    Microsoft merged Roles and Features into a single wizard. But why are there still separate wizards for adding and removing them?

    This has been considered as an option, but the complexity of merging the add and remove wizards outweighs the potential gains. Feedback is noted for future consideration.
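    The batch deployment mentioned in the answer above can be sketched in a few lines (server and role names are placeholders):

    ```powershell
    # Deploy the Web Server role, with management tools, to several remote servers
    $servers = 'srv01', 'srv02', 'srv03'
    foreach ($server in $servers) {
        Install-WindowsFeature -Name Web-Server -ComputerName $server -IncludeManagementTools
    }
    ```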

    21


  • Administrators can deploy roles and features to offline virtual hard disks from Server Manager. In a single session in the Add Roles and Features Wizard, you can add your desired roles and features to an offline virtual hard disk, allowing for faster and simpler repetition and consistency of desired configurations. As discussed earlier, deployment of both roles and features is combined into a single Add Roles and Features Wizard. While the process of installing roles is familiar, and consistent with the Add Roles Wizard in earlier Windows Server releases, there are changes. To support remote deployment and installations on offline virtual hard disks, some roles have moved some initial configuration (tasks formerly performed in the Add Roles Wizard during an installation) into post-installation configuration wizards. For some offline virtual hard disk deployments, installation tasks are scheduled to run the first time the machine is started.
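    Offline servicing is exposed through the same cmdlet via a -Vhd parameter; a sketch with placeholder paths and host name:

    ```powershell
    # Add a role directly to an offline virtual hard disk; configuration tasks
    # that need a running system are scheduled for the machine's first start
    Install-WindowsFeature -Name Web-Server -Vhd 'D:\VMs\web01.vhdx' -ComputerName hyperv01
    ```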

    22


  • In Windows Server 2008 and Windows Server 2008 R2, you connect to a server to view the roles running on that individual server.

    23


  • Note that Windows Server 2012 does not abandon the old management model; it simply expands upon it. Sometimes you need to manage a server and its roles. Sometimes you need to manage a role and its servers. A modern server operating system needs to provide this management flexibility.

    24


  • 25

  • Populate the demo title depending upon which demo you plan to deliver. If you don't plan to deliver demos, please hide this slide.

    26

  • 27

  • Before Windows Server 2012

    Hyper-V in Windows Server 2008 R2 supported configuring virtual machines with a maximum of four virtual processors and up to 64 GB of memory. However, IT organizations increasingly want to use virtualization when they deploy mission-critical, tier-1 business applications. Large, demanding workloads such as online transaction processing (OLTP) databases and online transaction analysis (OLTA) solutions typically run on systems with 16 or more processors and demand large amounts of memory. For this class of workloads, more virtual processors and larger amounts of virtual machine memory are a core requirement.

    Hyper-V in Windows Server 2012

    Hyper-V in Windows Server 2012 greatly expands support for host processors and memory. New features include support for up to 64 processors and 1 TB of memory for Hyper-V guests, a new VHDX virtual hard disk format with larger disk capacity of up to 64 TB (see the section, New virtual hard disk format), and additional resiliency. These features help ensure that your virtualization infrastructure can support the configuration of large, high-performance virtual machines to support workloads that might need to scale up significantly.


    Page 28

  • With the evolution of storage systems and the ever-increasing reliance on virtualized enterprise workloads, the VHD format of Windows Server needed to evolve as well. The new format is better suited to address the current and future requirements for running enterprise-class workloads, specifically:

    Where the size of the VHD is larger than 2,040 GB.

    To reliably protect against issues for dynamic and differencing disks during power failures.

    To prevent performance degradation issues on the new, large-sector physical disks.

    Hyper-V in Windows Server 2012 contains an update to the VHD format, called VHDX, that has much larger capacity and additional resiliency. VHDX supports up to 64 terabytes of storage. It also provides additional protection from corruption during power failures by logging updates to the VHDX metadata structures, and it prevents performance degradation on large-sector physical disks by optimizing structure alignment.

    Technical description

    The VHDX format's principal new features are:

    Support for virtual hard disk storage capacity of up to 64 terabytes.

    Protection against corruption during power failures by logging updates to the VHDX metadata structures. The format contains an internal log that is used to capture updates to the metadata of the virtual hard disk file before they are written to their final location. In case of a power failure, if the write to the final destination is corrupted, it is played back from the log to promote consistency of the virtual hard disk file.

    Optimal structure alignment of the virtual hard disk format to suit large-sector disks. If unaligned I/Os are issued to these disks, an associated performance penalty is caused by the read-modify-write cycles that are required to satisfy these I/Os. The structures in the format are aligned to help ensure that no unaligned I/Os exist.

    The VHDX format also provides the following features:

    Larger block sizes for dynamic and differencing disks, which lets these disks attune to the needs of the workload.

    A 4-KB logical sector virtual disk that results in increased performance when applications and workloads that are designed for 4-KB sectors use it.
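    With the Hyper-V role installed, the new format is what New-VHD produces whenever the file extension is .vhdx; a sketch with placeholder paths:

    ```powershell
    # A dynamically expanding VHDX well beyond the old 2,040 GB VHD ceiling
    New-VHD -Path 'D:\VMs\data01.vhdx' -SizeBytes 10TB -Dynamic

    # A VHDX with 4-KB logical sectors for workloads designed around 4-KB sectors
    New-VHD -Path 'D:\VMs\data02.vhdx' -SizeBytes 1TB -Dynamic -LogicalSectorSizeBytes 4096
    ```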


    Page 29

  • The ability to store custom metadata about the file that you might want to record, such as operating system version or patches applied.

    Efficiency (called trim) in representing data, which results in smaller files and lets the underlying physical storage device reclaim unused space. (Trim requires pass-through or SCSI disks and trim-compatible hardware.)

    The figure illustrates the VHDX hard disk format. As you can see in the figure, most of the structures are large allocations and are MB-aligned. This alleviates the alignment issue that is associated with virtual hard disks. The different regions of the VHDX format are as follows:

    Header region. The header region is the first region of the file and identifies the location of the other structures, including the log, block allocation table (BAT), and metadata region. The header region contains two headers, only one of which is active at a time, to increase resiliency to corruption.

    Intent log. The intent log is a circular ring buffer. Changes to the VHDX metastructures are written to the log before they are written to the final location. If corruption occurs during a power failure while an update is being written to the actual location, then on the subsequent open the change is applied again from the log, and the VHDX file is brought back to a consistent state. The log does not track changes to the payload blocks, so it does not protect data contained within them.

    Data region. The BAT contains entries that point to both the user data blocks and sector bitmap block locations within the VHDX file. This is an important difference from the VHD format, because sector bitmaps are aggregated into their own blocks instead of being appended in front of each payload block.

    Metadata region. The metadata region contains a table that points to both user-defined metadata and virtual hard disk file metadata such as block size, physical sector size, and logical sector size.

    Hyper-V in Windows Server 2012 also introduces support that lets VHDX files be more efficient in representing the data within them.

    29

  • Because VHDX files can be large, depending on the workload they are supporting, the space they consume can grow quickly. Previously, when applications deleted content within a virtual hard disk, the Windows storage stack in both the guest operating system and the Hyper-V host had limitations that prevented this information from being communicated to the virtual hard disk and the physical storage device. This constrained the Hyper-V storage stack from optimizing the space used and prevented the underlying storage device from reclaiming the space previously occupied by the deleted data.

    In Windows Server 2012, Hyper-V supports unmap notifications, which lets VHDX files be more efficient in representing the data within them. This results in smaller file sizes and lets the underlying physical storage device reclaim unused space.

    Benefits

    VHDX, which is designed to handle current and future workloads, has a much larger storage capacity than the earlier formats and addresses the technological demands of evolving enterprises. The VHDX performance-enhancing features make it easier to handle large workloads, protect data better during power outages, and optimize structure alignment of dynamic and differencing disks to prevent performance degradation on new, large-sector physical disks.

    Requirements

    To take advantage of the new VHDX format, you need the following:

    Windows Server 2012 or Windows 8

    The Hyper-V server role

    To take advantage of the trim feature, you need the following:

    VHDX-based virtual disks connected as virtual SCSI devices or as directly attached physical disks (sometimes referred to as pass-through disks). This optimization is also supported for natively attached VHDX-based virtual disks.

    Trim-capable hardware.

  • Note: This slide has 2 clicks for animation to describe how live migration works when you use Virtual Fibre Channel in the VM.

    Current situation: You need your virtualized workloads to connect to your existing storage arrays with as little trouble as possible. Many enterprises have already invested in Fibre Channel SANs, deploying them in their data centers to address their growing storage requirements. These customers often want the ability to use this storage from within their virtual machines instead of having it only accessible from and used by the Hyper-V host.

    Virtual Fibre Channel for Hyper-V, a new feature of Windows Server 2012, provides Fibre Channel ports within the guest operating system, which lets you connect to Fibre Channel directly from within virtual machines.

    With Windows Server 2012, Virtual Fibre Channel support includes the following:

    Unmediated access to a SAN. Virtual Fibre Channel for Hyper-V provides the guest operating system with unmediated access to a SAN by using a standard World Wide Name (WWN) associated with a virtual machine. Hyper-V lets you use Fibre Channel SANs to virtualize workloads that require direct access to SAN logical unit numbers (LUNs). Fibre Channel SANs also allow you to operate in new scenarios, such as running the Windows Failover Cluster Management feature inside the guest operating system of a virtual machine connected to shared Fibre Channel storage.

    A hardware-based I/O path to the Windows software virtual hard disk stack. Mid-range and high-end storage arrays include advanced storage functionality that helps offload certain management tasks from the hosts to the SANs. Virtual Fibre Channel presents an alternative, hardware-based I/O path to the Windows software virtual hard disk stack. This path lets you use the advanced functionality of your SANs directly from Hyper-V virtual machines. For example, Hyper-V users can offload storage functionality (such as taking a snapshot of a LUN) to the SAN hardware simply by using a hardware Volume Shadow Copy Service (VSS) provider from within a Hyper-V virtual machine.

    N_Port ID Virtualization (NPIV). NPIV is a Fibre Channel facility that allows multiple N_Port IDs to share a single physical N_Port. This allows multiple Fibre Channel initiators to occupy a single physical port, easing hardware requirements in SAN design, especially where virtual SANs are called for. Virtual Fibre Channel for Hyper-V guests uses NPIV (a T11 standard) to create multiple NPIV ports on top of the host's physical Fibre Channel ports. A new NPIV port is created on the host each time a virtual host bus adapter (HBA) is created inside a virtual machine. When the virtual machine stops running on the host, the NPIV port is removed.

    A single Hyper-V host connected to different SANs with multiple Fibre Channel ports. Hyper-V allows you to define virtual SANs on the host to accommodate scenarios where a single Hyper-V host is connected to different SANs via multiple Fibre Channel ports. A virtual SAN defines a named group of physical Fibre Channel ports that are connected to the same physical SAN. For example, assume a Hyper-V host is connected to two SANs: a production SAN and a test SAN. The host is connected to each SAN through two physical Fibre Channel ports. In this example, you might configure two virtual SANs: one named Production SAN that has the two physical Fibre Channel ports connected to the production SAN, and one named Test SAN that has the two physical Fibre Channel ports connected to the test SAN. You can use the same technique to name two separate paths to a single storage target.

    Up to four virtual Fibre Channel adapters on a virtual machine. You can configure as many as four virtual Fibre Channel adapters on a virtual machine and associate each one with a virtual SAN. Each virtual Fibre Channel adapter is associated with one WWN address, or two WWN addresses to support live migration. Each WWN address can be set automatically or manually.

    MPIO functionality. Hyper-V in Windows Server 2012 can use multipath I/O (MPIO) functionality to help ensure optimal connectivity to Fibre Channel storage from within a virtual machine. You can use MPIO functionality with Fibre Channel in the following ways:

    o Virtualize workloads that use MPIO. Install multiple Fibre Channel ports in a virtual machine, and use MPIO to provide highly available connectivity to the LUNs accessible by the host.


    Page 30

  • o Configure multiple virtual Fibre Channel adapters inside a virtual machine, and use a separate copy of MPIO within the guest operating system of the virtual machine to connect to the LUNs the virtual machine can access. This configuration can coexist with a host MPIO setup.

    o Use different device-specific modules (DSMs) for the host or each virtual machine. This approach allows live migration of the virtual machine configuration, including the configuration of DSM and connectivity between hosts, and compatibility with existing server configurations and DSMs.

    Live migration support with Virtual Fibre Channel in Hyper-V: To support live migration of virtual machines across hosts running Hyper-V while maintaining Fibre Channel connectivity, two WWNs are configured for each virtual Fibre Channel adapter: Set A and Set B. Hyper-V automatically alternates between the Set A and Set B WWN addresses during a live migration. This helps ensure that all LUNs are available on the destination host before the migration and that minimal downtime occurs during the migration.

    Requirements for Virtual Fibre Channel in Hyper-V:

    One or more installations of Windows Server 2012 with the Hyper-V role installed. Hyper-V requires a computer with processor support for hardware virtualization.

    A computer with one or more Fibre Channel HBAs, each with an updated HBA driver that supports Virtual Fibre Channel. Updated HBA drivers are included with the in-box HBA drivers for some models.

    Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 as the guest operating system.

    Connection only to data LUNs. Storage accessed through a virtual Fibre Channel connected to a LUN can't be used as boot media.
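    Assuming the requirements above are met, the virtual SAN and virtual HBA configuration described here can be sketched with the Hyper-V cmdlets. All names and WWN values below are placeholders; verify the parameter set against your environment:

    ```powershell
    # Define a virtual SAN grouping the host HBA ports on the production fabric
    # (the WWNN/WWPN values identify the host's physical ports)
    New-VMSan -Name 'Production' -WorldWideNodeName 'C003FF0000FFFF00' -WorldWidePortName 'C003FF5778E50002'

    # Give a virtual machine a virtual HBA connected to that virtual SAN
    Add-VMFibreChannelHba -VMName 'sql01' -SanName 'Production'

    # Inspect the two WWN address sets (Set A / Set B) used during live migration
    Get-VMFibreChannelHba -VMName 'sql01'
    ```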

    30

• NOTE: This slide is animated and has 5 clicks

    To maintain optimal use of physical resources and to add new virtual machines easily, you must be able to move virtual machines whenever necessary without disrupting your business. Windows Server 2008 R2 introduced live migration, which made it possible to move a running virtual machine from one physical computer to another with no downtime and no service interruption. However, this assumed that the virtual hard disk for the virtual machine remained consistent on a shared storage device such as a Fibre Channel or iSCSI SAN. In Windows Server 2012, live migrations are no longer limited to a cluster, and virtual machines can be migrated across cluster boundaries, including to any Hyper-V host server in your environment. Hyper-V builds on this feature, adding support for simultaneous live migrations, enabling you to move several virtual machines at the same time. When combined with features such as Network Virtualization, this feature even allows virtual machines to be moved between local and cloud hosts with ease.

    In this example, we are going to show how live migration works when connected to an SMB file share. With Windows Server 2012 and SMB3, you can store your virtual machine hard disk files and configuration files on an SMB share and live migrate the VM to another host, whether that host is part of a cluster or not.

    [Click] Live migration setup: During the live migration setup stage, the source host creates a TCP connection with the destination host. This connection transfers the virtual machine configuration data to the destination host. A skeleton virtual machine is set up on the destination host, and memory is allocated to the destination virtual machine.

    [Click] Memory page transfer: In the second stage of an SMB-based live migration, the memory that is assigned to the migrating virtual machine is copied over the network from the source host to the destination host. This memory is referred to as the working set of the migrating virtual machine. A page of memory is 4 KB. During this phase of the migration, the migrating virtual machine continues to run. Hyper-V iterates the memory copy process several times, with each iteration requiring a smaller number of modified pages to be copied. After the working set is copied to the destination host, the next stage of the live migration begins.

    [Click] Memory page copy process: This stage is a memory copy process that duplicates the remaining modified memory pages for Test VM to the destination host. The source host transfers the CPU and device state of the virtual machine to the destination host. During this stage, the available network bandwidth between the source and destination hosts is critical to the speed of the live migration; use of a 1-gigabit Ethernet (GbE) or faster connection is important. The faster the source host transfers the modified pages from the migrating virtual machine's working set, the more quickly the live migration completes. The number of pages transferred in this stage is determined by how actively the virtual machine accesses and modifies the memory pages: the more modified pages there are, the longer it takes to transfer them all to the destination host.

    [Click] Moving the storage handle from source to destination: During this stage of a live migration, control of the storage that is associated with Test VM, such as any virtual hard disk files or physical storage attached through a virtual Fibre Channel adapter, is transferred to the destination host. (Virtual Fibre Channel is also a new feature of Hyper-V. For more information, see Virtual Fibre Channel in Hyper-V.) The following figure shows this stage.

    [Click] Bringing the virtual machine online on the destination server: In this stage of a live migration, the destination server has the up-to-date working set for the virtual machine and access to any storage that the VM uses. At this time, the VM resumes operation.

    Network cleanup: In the final stage of a live migration, the migrated virtual machine runs on the destination server. At this time, a message is sent to the network switch, which causes the switch to obtain the new MAC addresses of the migrated virtual machine so that network traffic to and from the VM can use the correct switch port. The live migration process completes in less time than the TCP time-out interval for the virtual machine that is being migrated. TCP time-out intervals vary based on network topology and other factors.
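The migration walked through above can also be initiated with the Hyper-V module for Windows PowerShell instead of Hyper-V Manager. A minimal sketch, assuming a VM named "Test VM" whose files already live on an SMB3 share visible to both hosts; the host name and subnet below are hypothetical:

```powershell
# One-time setup on each host: allow incoming/outgoing live migrations
# and restrict them to a dedicated migration subnet (hypothetical).
Enable-VMMigration
Add-VMMigrationNetwork "192.168.1.0/24"

# Move the running VM to HV-HOST2. Because the VHDX and configuration
# files stay on the SMB3 share, only memory and device state cross the wire.
Move-VM -Name "Test VM" -DestinationHost "HV-HOST2"
```

The same cmdlet works for clustered and non-clustered hosts alike, which is the point the slide makes about migrations no longer being limited to a cluster.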

    31

  • NOTE: This slide is animated and has 3 clicks

Not only can we live migrate a virtual machine between two physical hosts; Hyper-V in Windows Server 2012 also introduces live storage migration, which lets you move virtual hard disks that are attached to a running virtual machine without downtime. Through this feature, you can transfer virtual hard disks, with no

    downtime, to a new location for upgrading or migrating storage, performing backend storage maintenance, or redistributing your storage load. You can perform this operation by using a new wizard in Hyper-V Manager or the new Hyper-V cmdlets for Windows PowerShell. Live storage migration is available for both

    storage area network (SAN)-based and file-based storage.

When you move a running virtual machine's virtual hard disks, Hyper-V performs the following steps to move storage:

    Throughout most of the move operation, disk reads and writes go to the source virtual hard disk.

    [Click]

    After live storage migration is initiated, a new virtual hard disk is created on the target storage device. While reads and writes occur on the source virtual hard disk, the disk contents are copied to the new destination virtual hard disk.

    [Click]

    After the initial disk copy is complete, disk writes are mirrored to both the source and destination virtual hard disks while outstanding disk changes are replicated.

    [Click]

    After the source and destination virtual hard disks are synchronized, the virtual machine switches over to using the destination virtual hard disk. The source virtual hard disk is deleted.

    Just as virtual machines might need to be dynamically moved in a cloud data center, allocated storage for running virtual hard disks might sometimes need to be moved for storage load distribution, storage device servicing, or other reasons.

    [Additional information]

Updating the physical storage that is available to Hyper-V is the most common reason for moving a virtual machine's storage. You also may want to move virtual machine storage between physical storage devices, at runtime, to take advantage of new, lower-cost storage that is supported in this version of Hyper-V, such

    as SMB-based storage, or to respond to reduced performance that can result from bottlenecks in the storage throughput. Windows Server 2012 provides the flexibility to move virtual hard disks both on shared storage subsystems and on non-shared storage as long as a Windows Server 2012 SMB3 network shared


folder is visible to both Hyper-V hosts.

    You can add physical storage to either a stand-alone system or to a Hyper-V cluster and then

move the virtual machine's virtual hard disks to the new physical storage while the virtual

    machines continue to run.

    Storage migration, combined with live migration, also lets you move a virtual machine

    between hosts on different servers that are not using the same storage. For example, if two

    Hyper-V servers are each configured to use different storage devices and a virtual machine must

    be migrated between these two servers, you can use storage migration to a shared folder on a

    file server that is accessible to both servers and then migrate the virtual machine between the

    servers (because they both have access to that share). Following the live migration, you can use

    another storage migration to move the virtual hard disk to the storage that is allocated for the

    target server.

    You can easily perform the live storage migration using a wizard in Hyper-V Manager or Hyper-V

    cmdlets for Windows PowerShell.
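The wizard's steps map to a single cmdlet in the Hyper-V PowerShell module. A sketch, assuming a VM named "Test VM" and a hypothetical destination path, which could be a local volume or an SMB3 share:

```powershell
# Move all of the running VM's files (virtual hard disks, configuration,
# snapshots, Smart Paging file) to new storage with no downtime; writes
# are mirrored to both locations during the copy, as described above.
Move-VMStorage -VMName "Test VM" -DestinationStoragePath "D:\VMStore\TestVM"
```

Individual virtual hard disks can also be placed on different destinations by using the cmdlet's per-item path parameters rather than a single destination path.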

    Benefits

    Hyper-V in Windows Server 2012 lets you manage the storage of your cloud environment with

    greater flexibility and control while you avoid disruption of user productivity. Storage migration

    with Hyper-V in Windows Server 2012 gives you the flexibility to perform maintenance on

    storage subsystems, upgrade storage appliance firmware and software, and balance loads as

    capacity is used without shutting down virtual machines.

    Requirements for live storage migration

    Windows Server 2012. The Hyper-V role. Virtual machines configured to use virtual hard disks for storage.


• NOTE: This slide is animated and has 4 clicks

    With Windows Server 2012 Hyper-V, you can also perform a Shared Nothing Live Migration, where you can move a virtual machine, live, from one physical system to another even if they don't have connectivity to the same shared storage. This is useful, for example, in a branch office where you may be storing the virtual machines on local disk and you want to move a VM from one node to another. It is also especially useful when you have two independent clusters and you want to move a virtual machine, live, between them without having to expose their shared storage to one another. You can also use Shared Nothing Live Migration to migrate a virtual machine from one datacenter to another, provided your bandwidth is large enough to transfer all of the data between the datacenters.

    As you can see in the animation, when you perform a live migration of a virtual machine between two computers that do not share an infrastructure, Hyper-V first performs a partial migration of the virtual machine's storage by creating a virtual machine on the remote system and creating the virtual hard disk on the target storage device.

    [Click] While reads and writes occur on the source virtual hard disk, the disk contents are copied over the network to the new destination virtual hard disk. This copy is performed by transferring the contents of the VHD between the two servers over the IP connection between the Hyper-V hosts.

    [Click] After the initial disk copy is complete, disk writes are mirrored to both the source and destination virtual hard disks while outstanding disk changes are replicated.

    [Click] After the source and destination virtual hard disks are synchronized, the virtual machine live migration process is initiated, following the same process that was used for live migration with shared storage. After the virtual machine's storage is migrated, the virtual machine migrates while it continues to run and provide network services.

    [Click] After the live migration is complete and the virtual machine is successfully running on the destination server, the files on the source server are deleted.
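The same Move-VM cmdlet performs a shared-nothing migration when you ask it to bring the storage along. A sketch with hypothetical VM, host, and path names:

```powershell
# Move a running VM to another stand-alone host, copying its virtual hard
# disks and configuration over the network, because no storage is shared.
Move-VM -Name "BranchVM" -DestinationHost "HV-HOST2" `
        -IncludeStorage -DestinationStoragePath "D:\VMStore\BranchVM"
```

Hyper-V handles the storage copy, the mirrored-write phase, and the final memory migration described above as one operation.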


  • Current situation

Business continuity is the ability to quickly recover business functions from a downtime event with minimal or no data loss. There are a number of reasons why businesses experience outages, including power failure, IT hardware failure, network outage, human error, IT software failures, and natural disasters. Depending

    on the type of outage, customers need a high availability solution that simply restores the service. However, some outages that impact the entire data center such as natural disaster or an extended power outage require a disaster recovery solution that restores data at a remote site in addition to bringing up the

    services and connectivity. Organizations need an affordable and reliable business continuity solution that helps them recover from a failure.

Before Windows Server 2012: Beginning with Windows Server 2008 R2, Hyper-V and Failover Clustering can be used together to make a virtual machine highly available and minimize disruptions. Administrators can seamlessly migrate their virtual machines to a different host in the cluster in the event of an outage, or to load balance their virtual machines without impacting virtualized applications. While this can protect virtualized workloads from a local host failure or scheduled maintenance of a host in a cluster, it does not protect businesses from the outage of an entire data center. While Failover Clustering can be used with hardware-based SAN replication across data centers, such solutions are typically expensive. Hyper-V Replica fills an important gap in the Windows Server Hyper-V offering by providing an affordable in-box disaster recovery solution.

    Windows Server 2012 Hyper-V Replica

    Windows Server 2012 introduces Hyper-V Replica, a built-in feature that provides asynchronous replication of virtual machines for the purposes of business continuity and disaster recovery. In the event of failures (such as power failure, fire, or natural disaster) at the primary site, the administrator can manually fail over

    the production virtual machines to the Hyper-V server at the recovery site. During failover, the virtual machines are brought back to a consistent point in time, and within minutes they can be accessed by the rest of the network with minimal impact to the business. Once the primary site comes back, the administrators

    can manually revert the virtual machines to the Hyper-V server at the primary site.

    Hyper-V Replica is a new feature in Windows Server 2012. It lets you replicate your Hyper-V virtual machines over a network link from one Hyper-V host at a primary site to another Hyper-V host at a Replica site without reliance on storage arrays or other software replication technologies. The figure shows secure

    replication of virtual machines from different systems and clusters to a remote site over a WAN.

    Benefits of Hyper-V Replica

    Hyper-V Replica fills an important gap in the Windows Server Hyper-V offering by providing an affordable in-box business continuity and disaster recovery solution.

    Failure recovery in minutes. In the event of an unplanned shutdown, Hyper-V Replica can restore your system in just minutes.

    More secure replication across the network. Hyper-V Replica tracks the write operations on the primary virtual machine and replicates these changes to the Replica server efficiently over a WAN. The network connection between the two servers uses the HTTP or HTTPS protocol and supports both

    integrated and certificate-based authentication. Connections configured to use integrated authentication are not encrypted; for an encrypted connection, you should choose certificate-based authentication. Hyper-V Replica is closely integrated with Windows failover clustering and provides easier replication

    across different migration scenarios in the primary and Replica servers.

Hyper-V Replica doesn't rely on storage arrays.

Hyper-V Replica doesn't rely on other software replication technologies.

    Hyper-V Replica automatically handles live migration.

    Configuration and management are simpler with Hyper-V Replica:

    o Integrated user interface (UI) with Hyper-V Manager.

    o Failover Cluster Manager snap-in for Microsoft Management Console (MMC).

    o Extensible WMI interface.

    o Windows PowerShell command-line interface scripting capability.
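Those management interfaces expose the same operations. A PowerShell sketch of enabling replication and performing a planned failover, with hypothetical server and VM names:

```powershell
# On the primary host: replicate SQLVM to the Replica server HV-DR1
# using Kerberos (integrated) authentication over HTTP, then seed it.
Enable-VMReplication -VMName "SQLVM" -ReplicaServerName "HV-DR1" `
                     -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "SQLVM"

# Planned failover: prepare on the primary site, then fail over and
# commit on the Replica site.
Start-VMFailover -VMName "SQLVM" -Prepare     # run on the primary server
Start-VMFailover -VMName "SQLVM"              # run on the Replica server
Complete-VMFailover -VMName "SQLVM"           # commit on the Replica server
```

As the slide notes, integrated authentication is not encrypted; switching `-AuthenticationType` to `Certificate` (with the appropriate certificate parameters) gives an encrypted HTTPS connection.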


  • Requirements

    To use Hyper-V Replica, you need two physical computers configured with:

    Windows Server 2012.

    Hyper-V server role.

    Hardware that supports the Hyper-V role.

    Sufficient storage to host the files that virtualized workloads use. Additional storage on the

    Replica server based on the replication configuration settings may be necessary.

    Sufficient network bandwidth among the locations that host the primary and Replica

    servers and sites.

    Firewall rules to permit replication between the primary and Replica servers and sites.

    Failover Clustering feature, if you want to use Hyper-V Replica on a clustered virtual

    machine.


  • Clustering has provided organizations with protection against:

    Application and service failure.

    System and hardware failure (such as CPUs, drives, memory, network adapters, and power

supplies).

    Site failure (which could be caused by natural disaster, power outages, or connectivity

    outages).

    Clustering enables high-availability solutions for many workloads, and has included Hyper-V

    support since its initial release. By clustering your virtualized platform, you can increase

availability and enable access to server-based applications in times of planned or unplanned

    downtime.

    Other Benefits

    Hyper-V and Windows Server 2012:

    Extend clustered environment features to a new level

    Support greater access to storage

    Provide faster failover and migration of nodes


• Support for guest clustering via Fibre Channel. Windows Server 2012 provides Fibre Channel ports within the guest operating system, allowing you to connect to Fibre Channel

    directly from within virtual machines. This feature lets you virtualize workloads that use direct

    access to Fibre Channel storage and cluster guest operating systems over Fibre Channel.

    Virtual Fibre Channel also allows guest multipathing for high link availability using standard

    MPIO and DSMs.

Clustered live migration enhancements. Live migrations in a clustered environment can now use higher network bandwidths (up to 10 gigabits per second) to complete migrations faster.

    Encrypted cluster volumes. BitLocker-encrypted cluster disks enhance physical security for deployments outside secure data centers, providing a critical safeguard for the cloud.

    Cluster Shared Volume (CSV) 2.0. The CSV feature, which simplifies the configuration and operation of virtual machines, has also been improved for greater security and performance.

    It also now integrates with storage arrays for replication and hardware snapshots out of the

    box.
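The virtual Fibre Channel feature above is configured by defining a virtual SAN on the host and then adding a synthetic HBA to the guest (while the VM is off). A sketch; all names and the WWN values are hypothetical placeholders:

```powershell
# Map a host Fibre Channel HBA port (identified by hypothetical WWNs)
# into a named virtual SAN on the Hyper-V host.
New-VMSan -Name "ProductionSAN" -WorldWideNodeName "C003FF0000FFFF00" `
          -WorldWidePortName "C003FF5778E50002"

# Give the guest a virtual Fibre Channel adapter connected to that SAN
# so it can reach FC LUNs directly, enabling guest clustering and MPIO.
Add-VMFibreChannelHba -VMName "ClusterNode1" -SanName "ProductionSAN"
```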


• Transparent failover. You can now more easily perform hardware or software maintenance of nodes in a File Server cluster (for example, store virtual machine files such as

    configuration files, virtual hard disk files, and snapshots in file shares over the SMB3 protocol)

    by moving file shares between nodes with minimal interruption of server applications that are

    storing data on these file shares. Also, if a hardware or software failure occurs on a cluster

    node, SMB3 transparent failover lets file shares fail over to another cluster node with minimal

    interruption of server applications that are storing data on these file shares.

    Hyper-V application monitoring. Hyper-V and failover clustering work together to bring higher availability to workloads that do not officially support clustering. By monitoring

    services and event logs inside the virtual machine, Hyper-V and failover clustering can detect

    whether the key services being provided by a virtual machine are healthy. If they are not

    healthy, automatic corrective action (restarting the virtual machine or moving it to a different

    Hyper-V server) can be taken.
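Application monitoring is switched on per service from a cluster node with the FailoverClusters module. A sketch, assuming a hypothetical clustered VM named "PrintVM" whose Print Spooler service should be watched:

```powershell
# If the Spooler service inside PrintVM fails repeatedly, the cluster
# takes corrective action (restarting the VM or moving it to another node).
Add-ClusterVMMonitoredItem -VirtualMachine "PrintVM" -Service "Spooler"

# Review what is currently being monitored for the VM.
Get-ClusterVMMonitoredItem -VirtualMachine "PrintVM"
```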


  • Virtual machine failover prioritization. Administrators can now configure virtual machine priorities to control the order in which virtual machines fail over or are started to help ensure

    that lower-priority virtual machines automatically release resources if they are needed for

    higher-priority virtual machines.

    In-box live migration queuing. Administrators can now perform large multiselect actions to queue live migrations of multiple virtual machines.

    Affinity (and anti-affinity) virtual machine rules. Administrators can now configure partnered virtual machines so that at failover the partnered machines are migrated together.

    For example, administrators can configure their SharePoint virtual machine and the partnered

    SQL Server virtual machine to fail over together to the same node. Administrators can also

    specify that two virtual machines cannot coexist on the same node in a failover scenario.

    Requirements: Windows Server 2012 with the Hyper-V role installed.
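Failover priority and anti-affinity are properties of the clustered VM groups. A sketch with hypothetical group names:

```powershell
# Make the SQL VM high priority (3000) and a test VM low priority (1000),
# so the lower-priority VM releases resources first under pressure.
(Get-ClusterGroup "SQLVM").Priority  = 3000
(Get-ClusterGroup "TestVM").Priority = 1000

# Keep two SQL VMs apart: groups that share an AntiAffinityClassNames
# value are placed on different nodes when possible at failover.
(Get-ClusterGroup "SQLVM1").AntiAffinityClassNames = "SQLTier"
(Get-ClusterGroup "SQLVM2").AntiAffinityClassNames = "SQLTier"
```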


  • Note: This slide is animated and has 1 click

    Dynamic Memory was introduced with Windows Server 2008 R2 SP1 and is used to reallocate

    memory between virtual machines that are running on a Hyper-V host. Improvements made

within Windows Server 2012 Hyper-V include:

Minimum memory setting: the ability to set a minimum value for the memory assigned to a virtual machine that is lower than the startup memory setting.

Hyper-V Smart Paging: paging that is used to enable a virtual machine to reboot while the Hyper-V host is under extreme memory pressure.

Memory ballooning: the technique used to reclaim unused memory from a virtual machine so that it can be given to another virtual machine that has memory needs.

Runtime configuration: the ability to adjust the minimum memory setting and the maximum memory configuration setting on the fly while the virtual machine is running, without

    requiring a reboot.

Because a memory upgrade requires shutting down the virtual machine, a common challenge for administrators is upgrading the maximum amount of memory for a virtual machine as demand increases. For example, consider a virtual machine running SQL Server and configured with a maximum of 8 GB of RAM. Because of an increase in the size of the databases, the virtual machine now requires more memory. In Windows Server 2008 R2 with SP1, you must shut down the virtual machine to perform the upgrade, which requires planning for downtime and decreases business productivity. With Windows Server 2012, you can apply that change while the virtual machine is running.

    [Click]

    As memory pressure on the virtual machine increases, an administrator can change the

    maximum memory value of the virtual machine, while it is running and without any downtime to

    the VM. Then, the Hot-Add memory process of the VM will ask for more memory and that

    memory is now available for the virtual machine to use.
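The runtime change described above is a single cmdlet call. A sketch, assuming a running VM named "SQLVM" (a hypothetical name) whose maximum is being raised from 8 GB to 16 GB:

```powershell
# Raise the Dynamic Memory maximum while the VM is running; no shutdown
# or reboot is needed, and the guest can hot-add the new memory on demand.
Set-VMMemory -VMName "SQLVM" -DynamicMemoryEnabled $true `
             -MinimumBytes 1GB -StartupBytes 2GB -MaximumBytes 16GB
```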


  • Note: This slide is animated and has 2 clicks

Hyper-V Smart Paging is a memory management technique that uses disk resources as additional, temporary memory when more memory is required to restart a virtual machine. This approach has both advantages and drawbacks. It provides a reliable way to keep the virtual machines running when no physical memory is available, but it can degrade virtual machine performance because disk access speeds are much slower than memory access speeds.

    To minimize the performance impact of Smart Paging, Hyper-V uses it only when all of the following occur:

    The virtual machine is being restarted.

    No physical memory is available.

    No memory can be reclaimed from other virtual machines that are running on the host.

    Hyper-V Smart Paging is not used when:

    A virtual machine is being started from an off state (instead of a restart).

    Oversubscribing memory for a running virtual machine would result.

    A virtual machine is failing over in Hyper-V clusters.

    Hyper-V continues to rely on internal guest paging when host memory is oversubscribed because it is more effective than Hyper-V Smart Paging. With internal guest paging, the paging operation inside virtual machines is performed by Windows Memory Manager. Windows Memory Manager has more information than the Hyper-V host does about memory use within the virtual machine, which means it can provide Hyper-V with better information to use when it chooses the memory to be paged. Because of this, internal guest paging incurs less overhead to the system than Hyper-V Smart Paging.

In this example, we have multiple VMs running, and we are restarting the last virtual machine. Normally, that VM would be using some amount of memory between the Minimum and Maximum values. In this case, the Hyper-V host is running fairly loaded and there isn't enough memory available to give the virtual machine all of the startup value needed to boot.

    [Click]

    When this occurs, a Hyper-V Smart Paging file is created for the VM to give it enough RAM to be able to start.

    [Click]

After some time, the Hyper-V host will use Dynamic Memory techniques such as ballooning to reclaim RAM from this or other virtual machines, freeing up enough physical memory to bring all of the Smart Paging contents back off of the disk.
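The location of the Smart Paging file is configurable per VM, which lets you point it at a fast or expendable local volume. A sketch with a hypothetical VM name and path:

```powershell
# Place the Smart Paging file on a dedicated local volume; it is used
# only briefly, during a restart under host memory pressure.
Set-VM -Name "LastVM" -SmartPagingFilePath "D:\SmartPaging"
```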


  • 2012 Microsoft Corporation Microsoft Confidential 41

  • 2012 Microsoft Corporation Microsoft Confidential 42

• These are what we consider the top features in Windows Server 2012 that address the challenges we discussed. We will dive into more detail over the next hour. This is by no means all the work we have done in WS 2012 in storage, but it should give you a flavor of some of the top investments we have made.

    43

• We have had virtualization at the Hyper-V layer over the last couple of releases of Windows Server. With Windows Server 2012, we give you the ability to virtualize your storage solution. Storage Spaces gives you the ability to take all your SAS- and SATA-connected disks, whether they are SSDs or traditional HDDs, and consolidate them together as storage pools. You can then assign these pools to different departments within your enterprise, or to customers, so that your data is isolated and administration is easy.

    Once you have created these pools, you can then create logical disks from them, called storage spaces. These logical disks, for the most part, look and act like regular disks, but they can be configured for different resiliency schemes, mirroring or parity, depending on the performance and space requirements. When you create a storage space, you can choose either thin or fixed provisioning. This lets you increase your storage investment only when you need to: you can create a logical disk, or space, that is bigger than your pool and add disks only when there is an actual need.

    Let's assume that your Hyper-V VMs are stored on logical disks created using Storage Spaces. With thin provisioning, when a large file gets deleted from one of the VMs, the VM communicates this to the host, the host passes it down to Storage Spaces, and Spaces will automatically reclaim this storage and make it available to other disks within the same pool or other pools. So you are optimizing storage utilization with on-demand provisioning and automated capacity reclamation.

    Storage Spaces is compatible with other Windows Server 2012 storage features, like SMB Direct and SMB Failover Clustering, so you can use simple, inexpensive storage devices to create powerful and resilient storage infrastructures on a limited budget. Storage Spaces enables you to deliver a new category of highly capable storage solutions to all Windows customer segments at a dramatically lower price point. At the same time, you can maximize your operations by leveraging commodity storage to supply high-performance and feature-rich storage to servers, clusters, and applications alike.
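The pool-then-space workflow above can be sketched with the Storage module cmdlets. The pool and space names are hypothetical, and the example assumes a set of blank, poolable SAS/SATA disks:

```powershell
# Gather all physical disks that are eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true

# Create a pool from those disks, then carve a thinly provisioned,
# mirrored 10 TB space from it. With thin provisioning the space can be
# larger than the pool today; add disks only as real demand grows.
New-StoragePool -FriendlyName "DeptPool" `
                -StorageSubSystemFriendlyName "Storage Spaces*" `
                -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "DeptPool" -FriendlyName "DeptSpace" `
                -Size 10TB -ProvisioningType Thin -ResiliencySettingName Mirror
```

The resulting virtual disk is then initialized, partitioned, and formatted like any other disk.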


    44

    2010 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries.

    The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it

    should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation.

    MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.

• To cluster Hyper-V in Windows Server 2008 R2, you typically needed a SAN on the back end, either an iSCSI or FC SAN, and setting up the SAN to support clustering has been challenging for some. We already have a solution for file-based storage in Windows: the SMB file share that most of you are familiar with. These were traditionally your file repository for office workers, storing PowerPoint decks, Word docs, videos, and so on. With WS 2012, we fully support using SMB file shares for storing your Hyper-V VMs. Not just the VM library or install ISOs: the live VHDs of your VMs. Not only that, we also enable live SQL databases to reside on an SMB file share now. What this provides you is an additional file-based storage option for your mission-critical applications that is very easy to provision at a fraction of the cost.

    You can now cluster your volumes together and expose them as a single file system namespace to Hyper-V as a location to store the VMs, or to your workload to store the SQL database. With SMB Transparent Failover, even if one of the nodes goes down, SMB transparently fails over to another node without downtime. As you will see over the course of this presentation, we have augmented this with a lot of other functionality to provide real enterprise-class performance and reliability, with features like SMB Direct, SMB Transparent Failover, SMB Multichannel, Cluster Shared Volumes, and so on. We also support encryption in SMB so that data transferred over the wire is safe and secure. And most importantly, we have now changed the Volume Shadow Copy Service (VSS) to support backup of remote file storage, so any third-party backup software that plugs into VSS can now back up your SMB file shares, backing up not only your documents but also your VMs and databases. Finally, since SMB uses your existing network infrastructure, the need for a dedicated storage network is eliminated, thereby reducing management costs.

    So no matter what your underlying storage subsystem is, SMB provides an easy and highly available repository for your SQL databases and Hyper-V VMs. Since all that SMB needs is volumes, it really doesn't matter how these volumes are created: they could come from a RAID-attached storage solution, through a SAN, or from Storage Spaces.
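Provisioning such a share is straightforward. A sketch with hypothetical share, path, and account names; for Hyper-V over SMB, the Hyper-V hosts' computer accounts need full control on the share:

```powershell
# Create an SMB share for VM storage and grant the Hyper-V host machine
# accounts and the admin group full control at the share level.
New-SmbShare -Name "VMShare" -Path "C:\ClusterStorage\VMs" `
             -FullAccess "CONTOSO\HV-HOST1$", "CONTOSO\HV-HOST2$", `
                         "CONTOSO\HyperVAdmins"
```

The matching NTFS permissions on the folder must grant the same accounts full control as well.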


    45


• Data deduplication is a new storage efficiency feature available with Windows Server 2012 that helps address the ever-growing demand for file storage. Instead of expanding the storage used to host the data, the amount of space used by that data is now reduced through the use of variable-size chunking and compression. What this means is that Windows will automatically scan through your disks, identify duplicate chunks in the data you have stored, and store these chunks only once. Since only one copy is stored for duplicate data, this not only lets you optimize your existing storage infrastructure, it also translates into even greater savings by postponing the need to purchase storage upgrades and extending the lifespan of current storage investments.

    The disk space savings we have seen with Data Dedup during internal testing have been phenomenal. Data deduplication can deliver storage savings of up to 2:1 for general file shares and 20:1 for virtual storage. This is far above what was possible with Single Instance Storage (SIS) or NTFS compression. Data deduplication also throttles CPU and memory usage to allow for implementation on large volumes without impacting server performance. Furthermore, compression routines can be scheduled to run at off-peak times to reduce any impact those operations might have on data access.

    Reliability and data integrity aren't problems for data deduplication, thanks to metadata redundancy that helps to prevent data loss due to unexpected power outages. Checksums, along with data integrity and consistency checks, also help prevent corruption for volumes configured to use data deduplication.

    Data deduplication is not for: live VMs, SQL DBs, ReFS file shares, client machines, boot data, or Cluster Shared Volumes.
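Deduplication is enabled per volume and runs as scheduled background jobs. A sketch for a hypothetical E: data volume:

```powershell
# Turn on deduplication for the volume, then kick off an optimization
# job immediately instead of waiting for the background schedule.
Enable-DedupVolume -Volume "E:"
Start-DedupJob -Volume "E:" -Type Optimization

# Inspect the savings once the job has run.
Get-DedupStatus -Volume "E:" | Format-List Volume, SavedSpace, SavingsRate
```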

    46

• Windows Server 2012 introduces a newly engineered file system called Resilient File System (ReFS) that is built on the foundations of NTFS to maintain compatibility with that highly popular file system while also being architected to support a new generation of storage technologies and scenarios. ReFS was designed with three key goals in mind:

    Maintain compatibility with the NTFS features that are widely adopted and successful, while replacing features that provide limited value.

    Maintain the highest levels of system availability and reliability possible, under the assumption that the underlying storage may be inherently unreliable.

    Provide a full end-to-end resilient architecture when used in conjunction with Storage Spaces, so that these two features magnify the capabilities and reliability of each other when used together.

    ReFS uses a copy-on-write method to update files, saving each file to a new location every time it is updated, thereby avoiding the corruption that a power outage during a disk write can cause. Checksums on all metadata in ReFS are performed at the tree level, and these checksums are stored independently from the tree page itself. This enables the detection of all forms of disk corruption, including degradation of data on media. Along with Storage Spaces, ReFS forms the storage foundation on Windows for the next decade and beyond, with features that enable significant storage stability, flexibility, scalability, and availability.

    IT pros typically have horror stories about Chkdsk running for hours. We have made tremendous progress with Chkdsk: fixing disk corruption issues now takes a matter of seconds as opposed to hours in the past. It is so good that the graph doesn't even do justice to it. We now isolate the corrupted section of the disk while the volume stays online and bring only that portion offline for a brief moment to fix the problem. This ensures that the entire scanning and fixing process is totally transparent to the application. If you combine this with Clustering, Chkdsk can now run with no downtime for the application even if the problem is with the system drive.

    47

  • Ultimately, all of these capabilities combined give you flexible storage and availability options that can be deployed in organizations of all sizes and at any scale. Smaller organizations and branch offices can deploy solutions leveraging either internal server storage or virtualized storage spaces solutions that use shared SAS JBODs or JBODs with integrated SAS switches. These can scale up from single node, with very limited high availability options, or a 2-node high-availability storage solution. Midsize businesses and larger departments within the enterprise market can use storage spaces or clustered PCI RAID solutions with shared SAS or SAS switch fabrics to scale up from 2-node clustered storage up to 8-node clusters with CSV v2 scale-out storage for larger virtualized environments. Finally, large enterprise environments or hosting providers can use new Windows Server 2012 storage management capabilities to scale up to 32-node high-availability storage solutions that can integrate with external storage arrays using Fibre Channel or IP SAN (10 GbE or IPoIB) fabrics for massive virtualized, private cloud, or multitenant public cloud storage solutions.

    48

  • 49

  • 50

  • 51

  • 52

  • Based on the needs and challenges, these are the top features we have built in Windows Server 2012 in networking. We will spend a lot of time over the next hour going over these features. This is not the entire set of features we have added in WS 2012, but what I consider the most important.

    53

  • Note to presenter: 3 clicks to complete build. Windows Server 2012 helps you provide fault tolerance on your network adapters without having to buy additional hardware and software. Windows Server 2012 includes NIC Teaming as a new feature, which allows multiple network interfaces to work together as a team, preventing connectivity loss if one network adapter fails. It allows a server to tolerate network adapter and port failure up to the first switch segment. NIC Teaming also allows you to aggregate bandwidth from multiple network adapters; for example, four 1-gigabit (Gb) network adapters can provide an aggregate of 4 Gb/second of throughput. The advantages of a Windows teaming solution are that it works with all network adapter vendors, spares you from most potential problems that proprietary solutions cause, provides a common set of management tools for all adapter types, and is fully supported by Microsoft. Teaming network adapters involves the following: NIC Teaming configurations. Two or more physical network adapters connect to the NIC Teaming solution's multiplexing unit and present one or more virtual adapters (team network adapters) to the operating system. Algorithms for traffic distribution. Several different algorithms distribute inbound and outbound traffic between the network adapters. Team network adapters exist in third-party NIC Teaming solutions to divide traffic by virtual local area network (VLAN) so that applications can connect to different VLANs simultaneously. Like other commercial implementations of NIC Teaming, Windows Server 2012 has this capability.
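In-box NIC Teaming can be configured with the new NetLbfo cmdlets; a sketch (the team and adapter names, and the chosen modes, are illustrative):

```powershell
# Team two physical adapters; outbound traffic is hashed across members by TCP/UDP ports
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Inspect team and member state, for example after a link failure
Get-NetLbfoTeam
Get-NetLbfoTeamMember -Team "Team1"
```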

    54

  • Note to presenter: 3 clicks to complete build. With SMB Multichannel, network path failures are automatically and transparently handled without application service disruption. Windows Server 2012 now detects, isolates, and responds to unexpected network problems, providing network fault tolerance when multiple paths are available between the SMB client and the SMB server. SMB Multichannel also aggregates network bandwidth from multiple network interfaces when multiple paths exist. Server applications can then take full advantage of all available network bandwidth and become resilient to a network failure. In the animation, you see data getting transferred between an SMB client and server. Notice the red ball. Now let's assume that there is a failure in the path in which the red ball/packet was travelling. Without any manual intervention, the red ball packet is sent again through a different route. So, as you can see, with SMB Multichannel server workloads are now resilient to underlying network changes and failures.
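SMB Multichannel is on by default in Windows Server 2012; you can verify that multiple paths are actually in use with the SMB cmdlets (a sketch, run on the SMB client):

```powershell
# List the interfaces SMB considers usable (link speed, RSS and RDMA capability)
Get-SmbClientNetworkInterface

# Show the active multichannel connections between client and server
Get-SmbMultichannelConnection
```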

    55

  • Note to presenter: 3 clicks to build.

    56

  • The figure shows the architecture of SR-IOV support in Hyper-V. Support for SR-IOV networking devices:

    Single Root I/O Virtualization (SR-IOV) is a standard introduced by the PCI-SIG, the special-interest group that owns and manages PCI specifications as open industry standards. SR-IOV works in conjunction with system chipset support for virtualization technologies that provide remapping of interrupts and Direct Memory Access (DMA), and allows SR-IOV-capable devices to be assigned directly to a virtual machine.

    Hyper-V in Windows Server 2012 enables support for SR-IOV-capable network devices and allows an SR-IOV virtual function of a physical network adapter to be assigned directly to a virtual machine. This increases network throughput and reduces network latency, while also reducing the host CPU overhead required for processing network traffic.

    Benefits

    These new Hyper-V features let enterprises take full advantage of the largest available host systems to deploy mission-critical, tier-1 business applications with large, demanding workloads. You can configure your systems to maximize the use of host system processors and memory to effectively handle the most demanding workloads.

    Requirements

    To take advantage of the new Hyper-V features for host scale and scale-up workload support, you need the following:

    One or more Windows Server 2012 installations with the Hyper-V role installed. Hyper-V requires a server that provides processor support for hardware virtualization.

    The number of virtual processors that may be configured in a virtual machine depends on the number of processors on the physical machine. You must have at least as many logical processors in the virtualization host as the number of virtual processors required in the virtual machine. For example, to configure a virtual machine with the maximum of 32 virtual processors, you must be running Hyper-V in Windows Server 2012 on a virtualization host that has 32 or more logical processors.

    57

  • SR-IOV networking requires the following:

    A host system that supports SR-IOV (such as Intel VT-d2), including chipset support for interrupt and DMA remapping, and proper firmware support to enable and describe the platform's SR-IOV capabilities to the operating system.

    An SR-IOV-capable network adapter and driver in both the management operating system (which runs the Hyper-V role) and each virtual machine where a virtual function is assigned.
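When the platform meets these requirements, SR-IOV is enabled on the virtual switch and then weighted onto a VM's network adapter; a sketch (the switch, adapter, and VM names are illustrative):

```powershell
# Create an external switch with SR-IOV enabled (this can only be set at creation time)
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "NIC1" -EnableIov $true

# Request a virtual function for the VM's network adapter
Set-VMNetworkAdapter -VMName "VM01" -IovWeight 100

# Confirm whether a virtual function was assigned
Get-VMNetworkAdapter -VMName "VM01" | Select-Object VMName, IovWeight, Status
```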

    57

  • Note to presenter: 3 clicks to build.

    RSS improvements. RSS spreads monitoring interrupts over multiple processors, so that a single processor isn't required to handle all I/O interrupts, which was common with earlier versions of Windows Server. Active load balancing between the processors tracks the load on the different CPUs and then transfers the interrupts as needed. You can select which processors will be used for handling RSS requests. RSS works with in-box NIC Teaming or Load Balancing and Failover (LBFO) to address an issue in previous versions of Windows Server, where a choice had to be made between using hardware drivers or RSS. RSS also works for User Datagram Protocol (UDP) traffic, and can be managed and debugged with Windows Management Instrumentation (WMI) and Windows PowerShell.
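The per-adapter RSS settings called out above map to the NetAdapter cmdlets; a sketch (the adapter name and processor numbers are illustrative):

```powershell
# Enable RSS on the adapter
Enable-NetAdapterRss -Name "NIC1"

# Pin RSS processing to a specific range of logical processors
Set-NetAdapterRss -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 4

# Review the resulting indirection table and processor usage
Get-NetAdapterRss -Name "NIC1"
```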

    58

  • Note to presenter: Let the third animation run until all the dots stop moving and until the dots can be seen on the screen.

    Dynamic Virtual Machine Queues (D-VMQs)

    Windows Server 2008 R2: Offload routing and filtering of network packets to the network adapter (enabled by hardware-based receive queues) to reduce host overhead.

    New in Windows Server 2012: Dynamically distribute incoming network traffic processing to host processors (based on processor usage and network load).
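VMQ support can be verified on the host and weighted per virtual machine; a sketch (the VM name is illustrative):

```powershell
# Check which adapters support VMQ and how many hardware queues are available
Get-NetAdapterVmq

# Allow the VM's network adapter to use a hardware receive queue
Set-VMNetworkAdapter -VMName "VM01" -VmqWeight 100
```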

    59

  • 60

  • Windows Server 2012 provides improved multitenant security for customers on a shared infrastructure as a service (IaaS) cloud through the new Hyper-V Extensible Switch. The Hyper-V Extensible Switch is a layer-2 virtual interface that provides programmatically managed and extensible capabilities to connect virtual machines to the physical network. Management features are built into the Hyper-V Extensible Switch that allow you to troubleshoot and resolve problems on Hyper-V Extensible Switch networks:

    Windows PowerShell and scripting support. Windows Server 2012 provides Windows PowerShell cmdlets for the Hyper-V Extensible Switch that allow you to build command-line tools or automated scripts for setup, configuration, monitoring, and troubleshooting. Windows PowerShell also enables third parties to build their own tools to manage the virtual switch.

    Unified tracing and enhanced diagnostics. Unified tracing has been extended into the Hyper-V Extensible Switch to allow you to trace packets and events through the Hyper-V Extensible Switch and its extensions.
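The extension-management cmdlets mentioned above look like this; a sketch (the switch name is illustrative, and "Contoso Monitoring Extension" is a hypothetical third-party extension):

```powershell
# List the extensions installed on a virtual switch
Get-VMSwitchExtension -VMSwitchName "Switch1"

# Enable a third-party monitoring, filtering, or forwarding extension
Enable-VMSwitchExtension -VMSwitchName "Switch1" -Name "Contoso Monitoring Extension"
```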

    61

  • Open platform to fuel plug-ins. The Hyper-V Extensible Switch is an open platform that allows plug-ins to sit in the virtual switch between all traffic, including virtual machine-to-virtual machine traffic. Extensions can provide traffic monitoring, firewall filters, and switch forwarding. To jump-start the ecosystem, several partners will announce extensions with the unveiling of the Hyper-V Extensible Switch; it is not a one-switch-only solution for Hyper-V.

    Core services are free. Core services are provided for extensions; for example, all extensions have live migration support by default, with no special coding for services required.

    Windows reliability/quality. Extensions gain a high level of reliability and quality from the strength of the Windows platform and the Windows Logo Program Certification, which sets a high bar for extension quality.

    Unified management. The management of extensions is integrated into Windows management through Windows PowerShell cmdlets and WMI scripting. One management story for all.

    Easier to support. Unified tracing means it's quicker and easier to diagnose issues when they arise. Less downtime increases availability of services.

    62

  • 63

  • Using Remote Desktop Services in the enterprise helps to address a number of important challenges in business scenarios where locally deployed client desktops would be costly, difficult to manage, or create other issues such as security concerns. Some of these scenarios include providing workstations for locked-down tasks and contractor desktops and provisioning office workers with specific security and compliance needs. More and more organizations are also seeing the need to centralize desktops used by workers acquired through acquisitions, employees bringing their own devices to work, and by people in remote offices and branch locations.

    64

  • The process of creating, assigning, and patching virtual machines has been simplified in Windows Server 2012 RC through the use of virtual machine templates and the wake-for-patches function. This uses the virtual machine BIOS to wake up the virtual machine, a process that also uses intelligent patching to reduce the load on the server that's running the virtual machine. The Remote Desktop Services framework is also extensible, giving third-party developers the ability to provide solutions that work with Remote Desktop Services.

    65

  • So, let's also look at what we've done with High Availability.

    What you're looking at is the high-level deployment architecture for all the components that go together in a VDI deployment. We looked at these just a couple of minutes ago. In order for a VDI deployment to scale and be highly available, each of these components needs to be highly available.

    In WS08 R2, this is how things worked.

    RDWeb: Can be scaled out. It's a web app, so it can scale out as a farm of web servers. Since WS08

    RDG: Also a web app, so it can scale out as a farm of web servers. Since WS08

    RDVH: A Hyper-V server, so it works as a Hyper-V cluster, with different nodes in the cluster. If one fails, the workloads in the cluster, such as the VMs, can migrate to another node in the cluster. Since WS08R2

    RDLS: Supported a cluster mode since WS08. RDVH and RDSH can access multiple servers in a farm.

    RDSH: TS farm. Since WS03, a very early version, it has supported a farm configuration.

    The key new thing in WS2012 in this area is the high availability and scalability of connection broker. In WS08R2, we only supported Active/Passive Clustering for connection broker. In WS2012 we changed to support Active/Active mode.

    Connection broker has an internal database to store the configuration and runtime data for the entire deployment: things like where the user is logged on, which VM is on which host, which apps are published, and so on.

    When Broker is configured in HA mode, you have multiple instances of the Broker, all of which run against a SQL DB cluster. All Broker instances are active: They are responding to load at the same time. Hence this configuration provides both availability and scale.

    All the key tasks that Connection broker manages, such as VM creation, user logon creating/mounting user VHDs, as well as incoming connections that get redirected through Broker, work seamlessly with a multi-instance, highly available Broker deployment.

    This config requires that you have a SQL server in your configuration to host all the data for your VDI deployment. We support a wide variety of SQL clustering modes and SQL versions, including, for example, SQL Server "Denali" and its AlwaysOn high-availability mode, the most recent innovation that SQL Server is bringing to this space.

    Addl notes:

    A wizard in the Admin UI walks you through the steps needed to set up a new broker instance. It automatically migrates configuration data from the source broker's data store to the shared SQL database. A Windows PowerShell cmdlet does the same.

    As in many other farm-type ha configurations, the broker instances need to be configured so they are at the same DNS name and authenticate under the same name. This is typically accomplished by using DNS Round Robin and a shared SSL certificate.
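The Active/Active broker configuration described here can also be driven from PowerShell; a sketch (the server names, DNS client access name, and connection string are illustrative):

```powershell
# Point the broker at a shared SQL database and a round-robin DNS name
Set-RDConnectionBrokerHighAvailability `
    -ConnectionBroker "rdcb1.contoso.com" `
    -DatabaseConnectionString "DRIVER=SQL Server Native Client 11.0;SERVER=sql1;Trusted_Connection=Yes;DATABASE=RDCB" `
    -ClientAccessName "rdbroker.contoso.com"

# Add a second active broker instance to the deployment
Add-RDServer -Server "rdcb2.contoso.com" -Role "RDS-CONNECTION-BROKER"
```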

    66

  • With Windows Server 2012, technologies such as RDS and Hyper-V provide the scalability and flexibility that enterprises demand from their virtual desktop platform.

    User Disks:

    One of the reasons that customers want to look at pooled desktops or sessions is to lower the cost of their VDI (since in both models, there are fewer images to manage and store). However, one of the biggest issues with pooled VMs and sessions is that users lose any changes made to their profiles (including setting changes) upon logout. In order to make pooled VMs and sessions a viable deployment model, the user's data (user and application settings, personal data such as documents and pictures, etc.) is stored on a separate .vhd file called a User Disk.

    When the user logs in, RDS combines the user's disk with a desktop, either from the VM or session pool, thereby providing the user with their data and settings.

    With User Disks, IT can provide a certain level of personalization to pooled VM or session-based deployments. However, it is important to note that User Disks cannot be used to roam across different pools or collections, or from a physical to a virtual environment. It is also important to note that user-installed applications cannot be persisted even with User Disks, and are lost upon logoff.

    FairShare

    Fairshare is a collection of technologies that ensure that no single VM or session hogs machine resources (memory, disk I/O and bandwidth), thereby reducing the impact to other users on the system.

    If a VM / Session starts to utilize more resources than deemed safe by the system, Fairshare will automatically throttle the resource in question, thereby dynamically distributing that resource across other VMs/ Sessions.

    RDS has Fairshare built in to manage resources for Sessions. Hyper-V has a collection of technologies to manage bandwidth, I/O and memory, collectively ensuring performance of VMs.

    Storage Options:

    RDS now supports various lower-cost storage options, such as SMB-based file shares or Direct Attached Storage (DAS), in addition to SAN

    Can separately configure the storage location for the parent VHD, individual VMs, and UserVHDs. Use different storage tiers for each to optimize

    Hence, storing VMs is now cheaper, as customers do not have to rely on expensive SANs anymore.

    Powerful Hyper-V platform

    RDS can now be configured to have multiple active nodes, thereby enabling the connection broker to scale up, while still ensuring DR.

    As always, the RDS platform provides APIs to help partners build on top of the Microsoft platform. Hence, partners such as Citrix can help scale out the Microsoft VDI platform to include the largest and most complex of deployments.

    H/A & Scalability

    Active/Active broker to improve high availability: Simple and intuitive setup of multiple broker instances. Uses SQL. Hyper-V is a truly enterprise-grade hypervisor platform that has been designed to host even the largest VDI deployments. As of Beta, Hyper-V supports 64 nodes and 8,000 VMs per cluster.

    Additionally, Hyper-V dynamic memory increases VM density, thereby further boosting the scale of the system while lowering costs.

    Sessions: Traditionally, a session host can provide up to twice the scalability of a similar-specification server that's hosting VMs. Hence, deploying sessions instead of VM-based desktops helps to increase density and lower costs. Additionally, session technology can also be used to deliver just a hosted application to a user, instead of hosting the entire desktop.

    Windows Server 2008 R2: 16 nodes, 1,000 VMs per cluster

    67

  • 68

  • With Windows Server 2012, technologies such as RDS and Hyper-V provide the scalability and flexibility that enterprises demand from their virtual desktop platform.

    User Disks:

    One of the reasons that customers want to look at pooled desktops or sessions is to lower the cost of their VDI (since in both models, there are fewer images to manage and store). However, one of the biggest issues with pooled VMs and sessions is that users lose any changes made to their profiles (including setting changes) upon logout. In order to make pooled VMs and sessions a viable deployment model, the user's data (user and application settings, personal data such as documents and pictures, etc.) is stored on a separate .vhd file called a User Disk.

    When the user logs in, RDS combines the user's disk with a desktop, either from the VM or session pool, thereby providing the user with their data and settings.

    With User Disks, IT can provide a certain level of personalization to pooled VM or session-based deployments. However, it is important to note that User Disks cannot be used to roam across different pools or collections, or from a physical to a virtual environment. It is also important to note that user-installed applications cannot be persisted even with User Disks, and are lost upon logoff.
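User Disks are enabled per collection. For a session collection, a sketch looks like this (the collection name, share path, and size cap are illustrative):

```powershell
# Enable user profile disks on a session collection, capped at 10 GB per user
Set-RDSessionCollectionConfiguration -CollectionName "Pool1" `
    -EnableUserProfileDisk -DiskPath "\\fs01\UserDisks" -MaxUserProfileDiskSizeGB 10
```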

    69

  • Personalization is a critical aspect of the user experience in virtualized desktop deployments. In a standard physical PC, the user's data and settings are intertwined with the apps and OS settings. This makes the desktop difficult to manage, and it reduces the benefits of virtualizing it. What we need is a way to assemble the desktop from ingredient components: a Windows composed of replaceable parts.

    User Profile Disk is a key technology we are unveiling with Server 2012 that takes the first step towards this vision.

    What is UserDisk?

    With UserDisk, each user of a collection is assigned a unique VHD that stores all of her settings and data. UserDisk can be configured for both RDSH collections and Pooled VM collections. As the user logs on to that collection, the user's UserDisk is mounted to the VM or the RDSH, and her profile and data folders are mapped to this mounted volume. As the user logs on to other VMs or RDSH servers within that collection, the UserDisk roams with her, making her data and settings available within the collection.

    UserDisk appears as a local disk; therefore it works better with applications that expect to have local data access. This improves app compat.

    There are other technologies, such as Roaming User Profiles (RUP), Folder Redirection (FR), and especially User Environment Virtualization (UE-V), which are designed for user data and settings isolation. UserDisk provides a container for all of these technologies. E.g.:

    The RUP profile is cached in the UserDisk at logon.

    When FR is configured with caching, the cache resides on the UserDisk.

    The per-application setting datasets used by UE-V are cached in the UserDisk.

    In all of these cases, it is important to recognize that UserDisk is scoped to the collection for which it is configured. It provides roamable access within the collection. RUP, FR, and UE-V enable roaming beyond the collection, and between different collections.

    So, what is the right way to deploy these technologies? We recommend that you deploy UserDisk with all Pooled VM collections and RDSH collections. There is really no downside! If you have multiple collections, or if you want user settings to roam between VDI and physical environments, then you should also use UE-V.

    Folder Redirection can be used in such a scenario to provide roaming access to user documents, e.g. the My Documents and My Pictures folders. FR is also a reliable way to centralize users' data to a file server, from where it can be more easily backed up and managed.

    70

  • 71

  • Now let's talk about storage.

    Storage is a key part of a VDI deployment. Customer experience indicates that VDI is easily the most challenging workload for storage infrastructure, both in terms of IOPS and storage volume. Thus, it is critical to have a wide range of options with which you can optimize the output from your storage dollars.

    You have probably heard about all that is new with storage in WS2012: Storage Spaces, SMB Hyper-V support, CSV, and new, highly scalable deployment models for File Server are some of the key new storage technologies in Server. In our VDI deployment, we support all of these.

    We also offer the ability to configure storage at a granular level, per collection:

    You can use direct-attached, central SMB, or central CSV SAN storage with a collection. There are three types of things you need to store: the parent VHD, each VM, and the UserDisks.

    You can configure a separate storage location for each. E.g. for your mission-critical Pooled VM collection, you can use a high-IOPS SSD array over SMB for the parent VHD, and a mirrored Spaces disk array for the UserVHDs to ensure data resiliency. Or you can equip your Hyper-V nodes with cheap, locally attached SSD drives along with normal hard disks, then assign the parent VHD to the SSD volumes and the VM instances to the hard disks, and host the UserDisks on a central file server over SMB.

    Ultimately, the ability to configure these options separately allows you to optimize for your IOPS and volume needs.

    Other Notes:

    There are some caveats.

    If Hyper-V supports the storage option, we support it for VDI. Some things ReFS does not support
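The per-collection storage choices above surface as parameters when creating a pooled collection; a sketch (the names and paths are illustrative, and the exact parameter set varies by deployment):

```powershell
# Create a pooled VM collection whose VMs and user disks live on separate SMB shares
New-RDVirtualDesktopCollection -CollectionName "Pool1" `
    -PooledManaged -VirtualDesktopTemplateName "Win8-Template" `
    -VirtualDesktopTemplateHostServer "hv01.contoso.com" `
    -StorageType CentralSmbShareStorage `
    -CentralStoragePath "\\fs01\VMs" `
    -UserProfileDiskPath "\\fs01\UserDisks" -MaxUserProfileDiskSizeGB 10 `
    -ConnectionBroker "rdcb1.contoso.com"
```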