Transcript of Module 12: Designing High Availability in Windows Server® 2008

Page 1: Module 12: Designing High Availability in Windows Server® 2008

Page 2: Module Overview

• Overview of High Availability

• Designing Network Load Balancing for High Availability

• Designing Failover Clustering for High Availability

• Geographically Dispersed Failover Clusters

Page 3: Service Level Agreements

An SLA includes:

• Requirements for availability

• Recovery times and processes

• Penalties for non-compliance

• Escalation procedures

SLAs can be formal or informal.
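
The availability requirement in an SLA translates directly into a downtime budget. The following minimal sketch shows that arithmetic; the targets used are common illustrative examples, not figures from this slide:

```python
# Convert an SLA availability target into an annual downtime budget.
# The example targets are illustrative.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_budget_minutes(availability_pct: float) -> float:
    """Allowed downtime per year, in minutes, for a given availability target."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99):
    print(f"{target}% availability allows {downtime_budget_minutes(target):,.0f} minutes of downtime per year")

# 99.0%  -> about 5,256 minutes (roughly 3.7 days)
# 99.9%  -> about 526 minutes   (roughly 8.8 hours)
# 99.99% -> about 53 minutes
```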

Page 4: High Availability Options in Windows Server 2008

Network load balancing:

• Distributes application requests among multiple nodes

Failover clustering:

• Migrates services from one server to another after a server failure

Virtual machine migration:

• Moves a virtual machine to a new host without shutting it down

• Quick migration requires the virtual machine to be paused

Page 5: Lesson 2: Designing Network Load Balancing for High Availability

• Overview of Network Load Balancing

• Considerations for Storing Application Data for NLB

• Host Priority and Affinity

• Selecting a Network Communication Method for NLB

Page 6: Overview of Network Load Balancing

NLB is a fully distributed, software-based load-balancing solution that does not require any specialized hardware.

NLB scalability:

• Scale an NLB cluster by adding servers

NLB availability:

• Server failure is detected by the other servers in the cluster

• Application failure is not detected

• Load is automatically distributed among the remaining servers
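
To make the availability behavior concrete, here is a minimal sketch of how load redistributes when cluster membership changes: each client is mapped onto the current set of live servers, so a failed server's share automatically spreads across the survivors. The host names are hypothetical, and this illustrates the idea only; it is not the actual NLB convergence algorithm.

```python
# Illustrative only: map each client onto the current set of live
# servers, so removing a failed server automatically spreads its
# load across the remaining ones. Host names are hypothetical.
import hashlib

def pick_server(client_ip: str, live_servers: list) -> str:
    digest = hashlib.sha1(client_ip.encode()).hexdigest()
    return live_servers[int(digest, 16) % len(live_servers)]

servers = ["nlb1", "nlb2", "nlb3"]
print(pick_server("192.0.2.10", servers))

# The surviving servers detect a failure through missed heartbeats and
# converge on the new membership; requests are then distributed among
# the remaining servers only.
servers.remove("nlb2")
print(pick_server("192.0.2.10", servers))
```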

Page 7: Considerations for Storing Application Data for NLB

All servers must have the same data:

• Data can be stored in a central location

• Data can be synchronized between servers

The suitability of an application for NLB depends on how data is stored.
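
A minimal sketch of the two storage approaches, using plain dictionaries to stand in for real data stores (all names are hypothetical):

```python
# Two ways to give every NLB node the same view of application data,
# sketched with dictionaries standing in for real stores.

# Option 1: central location -- every node reads and writes one shared
# store, so all nodes always see identical data.
central_store = {}

def write_central(key, value):
    central_store[key] = value  # immediately visible to every node

# Option 2: synchronized copies -- each node keeps local data, and a
# write must be copied to every node before the data is consistent.
node_stores = {"web1": {}, "web2": {}, "web3": {}}

def write_synchronized(key, value):
    for store in node_stores.values():
        store[key] = value  # replicate the change to each node
```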

Page 8: Host Priority and Affinity

Affinity:

• Determines which server receives subsequent incoming requests from a specific host

• Useful for applications that maintain user state information

Host Priority:

• Is used for failover, rather than load balancing

• Useful for applications that share a data store across servers
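
The effect of affinity can be sketched in a few lines. With Single affinity, NLB hashes only the client's source IP address, so the same client always reaches the same host and any user state held there stays valid; with affinity set to None, the source port also participates, so one client's connections can land on different hosts. The sketch below illustrates that contrast with hypothetical host names; it is not the real NLB filtering algorithm.

```python
# Illustrative contrast between NLB affinity modes (not the real
# filtering algorithm; host names are hypothetical).
import hashlib

hosts = ["web1", "web2", "web3"]

def pick(key: str) -> str:
    return hosts[int(hashlib.sha1(key.encode()).hexdigest(), 16) % len(hosts)]

client_ip = "203.0.113.7"

# Single affinity: only the source IP is hashed, so every request
# from this client reaches the same host.
print(pick(client_ip))
print(pick(client_ip))           # always the same host

# Affinity None: the source port joins the hash, so different
# connections from the same client may land on different hosts.
print(pick(client_ip + ":49152"))
print(pick(client_ip + ":49153"))
```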

Page 9: Selecting a Network Communication Method for NLB

Unicast:

• One NIC is dedicated to NLB communication

• Requires two NICs

• Allows segmentation of NLB communication

Multicast:

• Multicast is used for NLB communication

• Requires only one NIC

• All communication happens on a single network

Page 10: Lesson 3: Designing Failover Clustering for High Availability

• Overview of Failover Clustering

• Failover Clustering Scenarios

• Shared Storage for Failover Clustering

• Guidelines for Designing Hardware for Failover Clustering

• Guidelines for Failover Clustering Capacity Planning

• Quorum Configuration for Failover Clustering

Page 11: Overview of Failover Clustering

Failover clustering:

• Runs services on a virtual server

• Clients connect to services on the virtual server

• The virtual server can fail over from one cluster node to another

• Clients are reconnected to services on the new node

• Clients experience a short disruption of service

Failover clustering requires shared storage.
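
A minimal sketch of that flow, with hypothetical node and virtual server names: clients always address the virtual server, and the cluster decides which physical node currently owns it.

```python
# Sketch of failover: clients address the virtual server by name, and
# the cluster moves ownership to a surviving node when the current
# owner fails. All names are hypothetical.

class FailoverCluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.owner = self.nodes[0]  # node currently hosting the virtual server

    def heartbeat_lost(self, failed_node):
        self.nodes.remove(failed_node)
        if failed_node == self.owner:
            # Fail the virtual server over to another node; clients
            # reconnect to the same name after a short disruption.
            self.owner = self.nodes[0]

cluster = FailoverCluster(["node1", "node2"])
print("Virtual server FILESRV is on", cluster.owner)       # node1
cluster.heartbeat_lost("node1")
print("After failover, FILESRV is on", cluster.owner)      # node2
```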

Page 12: Failover Clustering Scenarios

Use failover clustering when:

• High availability is required

• Scalability is not required

• Application is stateful

• Client automatically reconnects

• Application uses IP-based protocols

Page 13: Shared Storage for Failover Clustering

• Shared serial attached SCSI (SAS)

• iSCSI

• Fibre Channel

Failover clusters require shared storage to provide consistent data to a virtual server after failover.

Page 14: Guidelines for Designing Hardware for Failover Clustering

Some guidelines for failover clustering hardware are:

• Use a 64-bit operating system and hardware to increase memory scalability

• Use multicore processors to increase scalability

• Use the Validation tool to verify correct configuration and to ensure support from Microsoft

• Use GPT disk partitioning to increase partition sizes up to 160 TB

Page 15: Guidelines for Failover Clustering Capacity Planning

• Plan failover to spread load evenly across the remaining nodes

• Ensure that nodes have sufficient capacity to support virtual servers that have failed over

• Use hardware with similar capacity for all nodes in a cluster

• Use standby servers to simplify capacity planning
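
The capacity requirement is simple arithmetic: the failed node's load must fit in the survivors' headroom. A minimal sketch, with hypothetical utilization figures:

```python
# If one node fails, its load is spread across the survivors; each
# survivor needs enough spare capacity to absorb its share. The
# utilization figures are hypothetical.

def load_after_failover(node_loads, failed_node):
    survivors = {n: load for n, load in node_loads.items() if n != failed_node}
    share = node_loads[failed_node] / len(survivors)
    return {n: load + share for n, load in survivors.items()}

loads = {"node1": 0.50, "node2": 0.50, "node3": 0.50}  # 50% utilization each
print(load_after_failover(loads, "node1"))
# {'node2': 0.75, 'node3': 0.75} -- each survivor must run at 75%,
# so similar hardware with at least 25% headroom is required.
```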

Page 16: Quorum Configuration for Failover Clustering

Node Majority:

• Use when there is an odd number of nodes

Node and Disk Majority:

• Use when there is an even number of nodes

Node and File Share Majority:

• Use when there is an even number of nodes but a shared disk is not required

No Majority: Disk Only:

• Not recommended, because the disk is a single point of failure
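
These recommendations follow from simple vote counting: the cluster runs only while more than half of the votes are reachable. A minimal sketch of that arithmetic:

```python
# Quorum arithmetic: the cluster stays up only while a majority of
# votes is reachable. Node Majority gives one vote per node; the disk
# and file share witness configurations add one extra vote, which is
# why they suit clusters with an even number of nodes.

def has_quorum(reachable_votes: int, total_votes: int) -> bool:
    return reachable_votes > total_votes // 2

# Node Majority with 4 nodes: losing 2 nodes leaves 2 of 4 votes.
print(has_quorum(2, 4))  # False -- the cluster stops

# 4 nodes plus a witness = 5 votes: the same failure leaves 3 of 5.
print(has_quorum(3, 5))  # True -- the cluster keeps running
```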

Page 17: Lesson 4: Geographically Dispersed Failover Clusters

• Overview of Geographically Dispersed Clusters

• Data Replication for Geographically Dispersed Clusters

• Quorum Configuration for Geographically Dispersed Clusters

Page 18: Overview of Geographically Dispersed Clusters

Geographically dispersed clusters:

• Are typically used as a disaster recovery hot site

• Have specialized concerns about data synchronization between locations

• Can span two subnets, by using an "OR" dependency between cluster IP address resources

• Require specific hardware for support

• Require careful consideration of the quorum configuration

Page 19: Data Replication for Geographically Dispersed Clusters

Asynchronous replication:

• A file change is complete in the first location, and then replicated to the second location

• Faster performance

• If the order of disk operations is preserved, the data at the second location remains consistent

Synchronous replication:

• A file change is not complete until replicated to both locations

• Ensures consistent data in both locations
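
A minimal sketch of the difference between the two modes, with lists standing in for the disks at the two locations (all names are hypothetical):

```python
# Contrast between the two replication modes, with lists standing in
# for the disks at each location. Names are hypothetical.
import threading

primary_site, secondary_site = [], []

def write_synchronous(change):
    # The write does not complete until both locations have it, so the
    # sites are always consistent, but every write waits for the
    # inter-site link.
    primary_site.append(change)
    secondary_site.append(change)
    return "complete"

def write_asynchronous(change):
    # The write completes as soon as the first location has it; the
    # copy to the second site happens in the background, which is
    # faster but can lose the most recent writes in a site failure.
    primary_site.append(change)
    threading.Thread(target=secondary_site.append, args=(change,)).start()
    return "complete"

print(write_synchronous("change-1"))
print(write_asynchronous("change-2"))
```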

Page 20: Quorum Configuration for Geographically Dispersed Clusters

When designing automatic failover for geographically dispersed clusters:

• Use the Node Majority or Node and File Share Majority quorum configuration

• Three locations must be used to allow automatic failover of a single virtual server

• All three locations must be linked directly to each other

• The third location can host only a file share witness, with no cluster nodes

For example, with two nodes in each of two sites and a file share witness in a third site, whichever site can still reach the witness holds a majority (three of five votes) and can fail over automatically if the other site is lost.