Leveraging NetScaler 10 Clusters to Improve Application Performance

Citrix Virtual Classroom
Self-paced exercise guide

Table of Contents

Lab Overview
Exercise 1: Migrating an HA pair to a 3-node cluster
Exercise 2: Initial cluster setup
Exercise 3: Cluster load balancing configuration
Exercise 4 (Optional): Packet capture analysis in a cluster system
Exercise 5 (Optional): ECMP configuration

Lab Overview

Hands-on Training Module

This training module has the following details:

Objective: This lab manual provides hands-on training on the clustering feature introduced in NetScaler 10. It covers the most common clustering deployment types, as well as step-by-step instructions on how to set up a NetScaler cluster. Upon completion of this lab, a 3-node cluster environment should be fully operational.

Audience: Primary: event attendees

Lab Environment Details

Machine Details

Machine                      Role
---------------------------  --------------------------------------------------------------
XenServer                    Hosts the lab virtual machines
AD.training.lab              Domain controller and DNS server
Apache_MySQL_1 (GENTOO_1)    Apache, PHP, MySQL server
Apache_MySQL_2 (GENTOO_2)    Apache, PHP, MySQL server
Apache_MySQL_3 (GENTOO_3)    Apache, PHP, MySQL server
EXT-Win_7                    Remote network Windows 7 workstation
Win7Client                   Internal network Windows 7 workstation
NS10_node1                   Virtual instance of a NetScaler appliance
NS10_node2                   Virtual instance of a NetScaler appliance
NS10_node3                   Virtual instance of a NetScaler appliance
vyatta-RTR                   Provides connectivity between the internal and remote networks

Lab Topology Diagram

[Diagram: the internal 192.168.10.0/24 network hosts Win7Client (.200), the DNS server AD.training.lab, the NetScaler nodes NS10_node1 (.110), NS10_node2 (.120), and NS10_node3 (.130), and the Apache servers Apache1 (.13), Apache2 (.14), and Apache3 (.15). The internal router vyatta-RTR (172.16.1.1 on the internal side, 172.16.2.1 on the remote side) connects to the remote 172.16.2.0/24 network, where EXT-Win_7 (.2) resides, via the public network.]

Required Lab Credentials

Below are the login credentials required to connect to the workshop system and complete the lab exercises.

Machine                      IP Address                                 Username                Password
---------------------------  -----------------------------------------  ----------------------  ---------
AD.training.lab              192.168.10.11                              TRAINING\Administrator  Citrix123
Apache_MySQL_1 (GENTOO_1)    192.168.10.13                              root                    Citrix123
Apache_MySQL_2 (GENTOO_2)    192.168.10.14                              root                    Citrix123
Apache_MySQL_3 (GENTOO_3)    192.168.10.15                              root                    Citrix123
EXT-Win_7                    172.16.2.2                                 TRAINING\Administrator  Citrix123
Win7Client                   192.168.10.200                             TRAINING\Administrator  Citrix123
NS10_node1                   NSIP 192.168.10.110 / SNIP 192.168.10.91   nsroot                  nsroot
NS10_node2                   NSIP 192.168.10.120 / SNIP 192.168.10.92   nsroot                  nsroot
NS10_node3                   NSIP 192.168.10.130 / SNIP 192.168.10.93   nsroot                  nsroot
vyatta-RTR                   eth0 172.16.2.1 / eth1 172.16.1.1          vyatta                  Vyatta

How to log into the lab environment

The self-paced lab environment is hosted on a cloud-based Citrix XenServer. Connecting to your server from the portal page is as easy as 1-2-3.

Step-by-step login instructions

1. Once logged in at the self-paced portal, click the Start lab button to launch a connection to the published XenCenter.

2. When XenCenter loads, right-click the XenCenter node and select Add...

3. On the Add New Server screen, enter the XenServer IP address provided on the portal and, in the Password field, the password provided on the portal. The user name will always be root.

Exercise 1: Migrating an HA pair to a 3-node cluster

Overview

Clustering is the centerpiece of NetScaler 10. As part of the TriScale architecture, it delivers a service delivery fabric intended to fundamentally change the ADC space. By allowing all nodes in the cluster to efficiently process traffic while also providing high availability, clustering provides the foundation to deploy the infrastructure necessary to meet virtually any performance and scalability requirement.

Existing active-active and HA deployments have shortcomings that clustering directly addresses:

- Scalability: in an HA pair, only one appliance processes traffic.
- Configuration management: in active/active setups, each appliance is configured independently.
- Resource utilization: one resource always sits idle.
- Heavy dependency on upstream device configuration.
- Not all L4-L7 features work across all nodes (max clients, persistence, session reuse, etc.).

With the new architecture offered by NetScaler 10, requirements for dynamic capacity, fault tolerance, high scalability, and performance are met. The first step is to plan the migration of existing HA deployments.

In this exercise, we will prepare an existing HA pair for a 3-node cluster deployment.

Step by step guidance

The lab environment required for this exercise is as follows:

1. NetScaler VPX appliance: NS10_node1 (existing HA node)
2. NetScaler VPX appliance: NS10_node2 (existing HA node)
3. NetScaler VPX appliance: NS10_node3 (independent node)

An HA pair has been pre-configured with a simple load balancing configuration:

enable ns feature WL SP LB

enable ns mode FR L3 Edge USNIP PMTUD

add ns ip 192.168.10.91 255.255.255.0 -vServer DISABLED

add HA node 1 192.168.10.120 (on NS10_node1) / add HA node 1 192.168.10.110 (on NS10_node2)

add server GENTOO_1 192.168.10.13

add server GENTOO_2 192.168.10.14

add service GENTOO_1_HTTP_TCP_80 GENTOO_1 HTTP 80

add service GENTOO_2_HTTP_TCP_80 GENTOO_2 HTTP 80

add lb vserver HTTP_vserver_cluster HTTP 192.168.10.18 80

bind lb vserver HTTP_vserver_cluster GENTOO_1_HTTP_TCP_80

bind lb vserver HTTP_vserver_cluster GENTOO_2_HTTP_TCP_80

Our goal is to migrate to a cluster setup without losing any configuration.
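Because enabling the cluster instance later clears the running configuration (as noted in this exercise's summary), it is also prudent to keep a copy of ns.conf off-box before starting. A minimal sketch using standard scp; the destination host and path are illustrative placeholders, not part of this lab:

   # Copy the configuration backup to an external host (destination is hypothetical).
   scp /nsconfig/ns.conf admin@backup-host:/backups/ns.conf_beforeCluster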

Estimated time to complete this lab: 15 minutes.

1. After XenCenter loads the available VMs, make sure only the following VMs are powered on, then proceed with the next step:

   AD.training.lab
   Apache_MySQL_1
   Apache_MySQL_2
   Apache_MySQL_3
   EXT-Win_7
   NS10_node1
   NS10_node2
   NS10_node3
   vyatta-RTR
   Win7Client

2. In order to migrate to a cluster configuration, we need to break the existing HA pair. Before removing the HA configuration, however, a few precautions need to be completed. Log in to the primary node (NS10_node1) and verify the HA status:

   NS10_node1> show ha node
   1) Node ID: 0
      IP: 192.168.10.110 (NS10_node1)
      Node State: UP
      Master State: Primary
      ...
   2) Node ID: 1
      IP: 192.168.10.120
      Node State: UP
      Master State: Secondary
      ...
   Done
   NS10_node1>

3. While on the primary node, back up the configuration:

   # shell
   # cp /nsconfig/ns.conf /nsconfig/ns.conf_beforeCluster

4. As part of this exercise, we will verify that the HA disassociation does not disrupt traffic. For this purpose we will use a simple utility to constantly send HTTP requests to the virtual server already configured. On Apache_MySQL_3 (GENTOO_3), use "wbox" to send 5 GET requests per second to the load balancing virtual server. Leave this utility running.

   gentoo_3 ~ # wbox 192.168.10.18 clients 5
   WBOX 192.168.10.18 (192.168.10.18) port 80
   0. 200 OK 319 bytes 3 ms
   0. 200 OK 319 bytes 3 ms
   0. 200 OK 319 bytes 4 ms
   0. 200 OK 319 bytes 3 ms
   0. 200 OK 319 bytes 4 ms
   1. 200 OK (318) bytes 2 ms
   1. 200 OK (318) bytes 2 ms
   1. 200 OK (318) bytes 2 ms
   1. 200 OK (318) bytes 2 ms
   1. 200 OK (318) bytes 2 ms
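If wbox is not available in your own environment, a plain shell loop with curl approximates the same constant probing. The curl flags below are standard, but the one-request-per-second cadence is an assumption, not wbox's exact behavior:

   # Send one GET per second to the VIP, printing the status code and response
   # time so that dropped or delayed requests stand out.
   while true; do
       curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" http://192.168.10.18/
       sleep 1
   done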

5. Next, set the primary node to STAYPRIMARY and disable command propagation and synchronization in order to avoid any configuration loss or IP conflicts:

   # exit
   NS10_node1> set ha node -haStatus STAYPRIMARY -haProp DISABLED -haSync DISABLED
   Done
   NS10_node1>

6. Log on to the secondary unit and clear the configuration, to avoid IP conflicts when we break the HA pair:

   NS10_node2> clear ns config full
   Are you sure you want to clear the configuration(Y/N)? [N]:y
   Done
   NS10_node2>

7. Log on to the primary and remove the secondary node from the HA configuration. On Apache_MySQL_3, inspect the output of the probes. Was any request lost? Verify the HA status on both units.

   NS10_node1> rm ha node 1
   Done
   NS10_node1>
   NS10_node1> show ha node
   1) Node ID: 0
      IP: 192.168.10.110 (NS10_node1)
      Node State: STAYPRIMARY
      Master State: Primary
      ...
   NS10_node1>

   NS10_node2> show ha node
   1) Node ID: 0
      IP: 192.168.10.120 (NS10_node2)
      Node State: UP
      Master State: Primary
      ...
   NS10_node2>

8. As a precaution, restore the original HA settings. (Since clustering does not use the HA configuration, this step might not be required; however, for completeness, we revert these settings to their defaults.)

   NS10_node1> set ha node -haProp ENABLED -haStatus ENABLED -haSync ENABLED
   Done
   NS10_node1>

9. At this stage, node 1 should have continued to serve traffic. We are ready to begin the initial cluster configuration. Continue with Exercise 2.

Summary

Key Takeaways

The key takeaways for this exercise are:

- A high availability pair can be migrated to a cluster setup. This involves breaking the HA pair and clearing the configuration on the secondary.
- No loss of connectivity should occur during the HA disassociation.
- Unfortunately, upon creating the cluster instance, the configuration will be cleared. We must back up the configuration before migrating to a cluster setup to avoid any loss.

NOTES: A cluster setup is only functional if at minimum (n/2)+1 nodes, where n is the number of cluster nodes, are able to serve traffic. If that formula were applied to a two-node cluster, the cluster would fail as soon as one node went down (2/2 + 1 = 2). A two-node cluster is an exception to the rule: it is fully functional even if only one node is able to serve traffic. Creating a two-node cluster is the same as creating any other cluster; you add one node as the configuration coordinator and the other node as an additional cluster node.
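As a worked example of the quorum formula, the shell arithmetic below (a sketch, not a NetScaler command) shows why this lab's 3-node cluster tolerates exactly one node failure:

   # Quorum: a cluster stays operational while active nodes >= floor(n/2) + 1
   # (two-node clusters are the documented exception).
   n=3                        # nodes in this lab's cluster
   quorum=$(( n / 2 + 1 ))    # integer division: 3/2 = 1, so quorum = 2
   echo "A $n-node cluster tolerates $(( n - quorum )) node failure(s)."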


Exercise 2: Initial cluster setup

Overview

A NetScaler cluster uses multiple mechanisms to efficiently distribute traffic across all nodes in the cluster. This can be achieved using the following available methods:

- ECMP: Equal Cost Multipath Routing
- CLAG: Cluster Link Aggregation Channels
- Link sets

Since we have a virtual lab infrastructure, in this exercise we will perform the necessary configuration to bring up a cluster using a link set deployment, which does not require switch or router configuration for the cluster to be fully operational.

Step by step guidance

The lab environment required for this exercise is as follows:

1. NetScaler VPX appliance: NS10_node1
2. NetScaler VPX appliance: NS10_node2
3. NetScaler VPX appliance: NS10_node3
4. Internal Windows 7 workstation: Win7Client

Estimated time to complete this lab: 15 minutes.

1. All three VPX appliances are already configured with an NSIP and a default route. Additionally, node 1 still contains the existing configuration from the previous HA setup. First, log in to each appliance and verify IP connectivity between them:

   > ping 192.168.10.110
   > ping 192.168.10.120
   > ping 192.168.10.130

2. A cluster topology introduces the concept of "communication planes". There are four logical traffic flows in a cluster system that identify the relevant information passed between the clients, the servers, and the nodes in the cluster:

   1. Client data plane
   2. Control plane
   3. Server data plane
   4. Back plane

   Our topology (link set) does not require separation of these planes, as they can all co-exist on the shared medium. However, for illustration purposes, a new virtual interface was configured in our environment to simulate the backplane. In XenCenter, verify that all 3 VPX appliances are connected to this logical segment, then continue with the next step.

3. The next step is to add the first node to the cluster. This node will act as the configuration coordinator (CCO). Go back to node 1 (NS10_node1) and define the cluster instance:

   NS10_node1> add cluster instance 1

4. Add node 1 to the cluster instance. In order to define spotted SNIPs for each node later in this exercise, set the state to PASSIVE. Make sure you select interface 1/2 as the backplane interface:

   NS10_node1> add cluster node 1 192.168.10.110 -state PASSIVE -backplane 1/2

5. Enable the cluster instance. After executing this command, the configuration must be saved and the appliance rebooted.

   Note: After enabling the cluster instance, the configuration is cleared internally using the "clear ns config" command. We will use the backup to restore the lost entities.

   NS10_node1> enable cluster instance 1
   Warning: The configuration must be saved and the system rebooted for these settings to take effect
   NS10_node1> save ns config
   NS10_node1> reboot -warm

6. When node 1 comes back online, add the cluster IP (CLIP):

   NS10_node1> add ns ip 192.168.10.140 255.255.255.255 -type CLIP

7. Verify the cluster instance and node:

   NS10_node1> show cluster instance
   1) Cluster ID: 1
      Dead Interval: 3 secs
      Hello Interval: 200 msecs
      Preemption: DISABLED
      Propagation: ENABLED
      Cluster Status: ENABLED(admin), ENABLED(operational), UP
      Member Nodes:
      Node ID   Node IP           Health   Admin State   Operational State
      -------   ---------------   ------   -----------   -----------------
      1)  1     192.168.10.110*   UP       PASSIVE       INACTIVE(CCO)
   Done

   NS10_node1> show cluster node
   1) Node ID: 1
      IP: 192.168.10.110*
      Backplane: 1/1/2
      Health: UP
      Admin State: PASSIVE
      Operational State: INACTIVE(CCO)
      Sync State: ENABLED
   Done
   NS10_node1>

8. From the internal Windows 7 workstation (Win7Client), SSH into the cluster IP (CLIP) using PuTTY (in our example 192.168.10.140, currently owned by node 1) and add node 2 to the cluster.

   IMPORTANT: Make sure the cluster commands are performed from the cluster IP; otherwise, they will not be replicated to the other nodes and issues will arise. Even when logging in to the NSIP of the configuration coordinator (CCO), the commands will not replicate correctly.

   Specify the state as PASSIVE and the backplane interface for node 2.

   NOTE: Cluster interface names are identified by a 3-tuple: NodeID / SlotID / InterfaceID.

   NS10_node1> add cluster node 2 192.168.10.120 -state PASSIVE -backplane 2/1/2
   Done
   NS10_node1>

9. While logged in to the cluster IP (CLIP) via SSH, perform the same step for node 3:

   NS10_node1> add cluster node 3 192.168.10.130 -state PASSIVE -backplane 3/1/2
   Done
   NS10_node1>

10. In the same SSH session, verify the cluster node status using the "show cluster node" command. Note that node 2 and node 3 should report their status as unknown, since we have not completed the required configuration on these nodes.

   NS10_node1> show cluster node
   1) Node ID: 1
      IP: 192.168.10.110*
      Backplane: 1/1/2
      Health: UP
      Admin State: PASSIVE
      Operational State: INACTIVE(CCO)
      Sync State: ENABLED
   2) Node ID: 2(Adding)
      IP: 192.168.10.120
      Backplane: 2/1/2
      Health: UNKNOWN
      Admin State: PASSIVE
      Operational State: UNKNOWN
      Sync State: UNKNOWN
   3) Node ID: 3(Adding)
      IP: 192.168.10.130
      Backplane: 3/1/2
      Health: UNKNOWN
      Admin State: PASSIVE
      Operational State: UNKNOWN
      Sync State: UNKNOWN
   Done
   NS10_node1>

11. While logged in to the cluster IP (CLIP), save the configuration:

   NS10_node1> save ns config

12. Next, log in to node 2 and join the cluster. Save the configuration and reboot:

   NS10_node2> join cluster -clip 192.168.10.140 -password nsroot
   Warning: The configuration must be saved and the system rebooted for these settings to take effect
   NS10_node2> save ns config
   Done
   NS10_node2> reboot -warm

13. Perform the same step on node 3:

   NS10_node3> join cluster -clip 192.168.10.140 -password nsroot
   Warning: The configuration must be saved and the system rebooted for these settings to take effect
   NS10_node3> save ns config
   Done
   NS10_node3> reboot -warm

14. After node 2 and node 3 are online, log in to the cluster IP (CLIP) via SSH from the Windows 7 workstation (Win7Client) and display the cluster status. Since all the nodes joined the cluster in a PASSIVE state, the Operational State for all nodes should be INACTIVE. Inspect this by issuing the "show cluster node" command from the CCO.

   NS10_node1> show cluster node
   1) Node ID: 1
      IP: 192.168.10.110*
      Backplane: 1/1/2
      Health: UP
      Admin State: PASSIVE
      Operational State: INACTIVE(CCO)
      Sync State: ENABLED
   2) Node ID: 2
      IP: 192.168.10.120
      Backplane: 2/1/2
      Health: UP
      Admin State: PASSIVE
      Operational State: INACTIVE
      Sync State: ENABLED
   3) Node ID: 3
      IP: 192.168.10.130
      Backplane: 3/1/2
      Health: UP
      Admin State: PASSIVE
      Operational State: INACTIVE
      Sync State: SUCCESS
   Done
   NS10_node1>

15. Go back to the cluster IP (CLIP) session and add spotted SNIPs for each of the nodes. Make sure you specify the -ownerNode parameter:

   NS10_node1> add ns ip 192.168.10.91 255.255.255.0 -type SNIP -ownerNode 1
   Done
   NS10_node1> add ns ip 192.168.10.92 255.255.255.0 -type SNIP -ownerNode 2
   Done
   NS10_node1> add ns ip 192.168.10.93 255.255.255.0 -type SNIP -ownerNode 3
   Done
   NS10_node1>

16. From the cluster IP (CLIP) SSH session, display the cluster system IPs.

   NOTE: Every node in the cluster should show all the IPs in the cluster system; however, all configuration should be performed through the cluster IP.

   NS10_node1> show ip

17. From the same SSH session, set the node state on all nodes to ACTIVE:

   NS10_node1> set cluster node 1 -state ACTIVE
   Done
   NS10_node1> set cluster node 2 -state ACTIVE
   Done
   NS10_node1> set cluster node 3 -state ACTIVE
   Done
   NS10_node1>

18. Verify that all nodes are ACTIVE:

   NS10_node1> show cluster node
   1) Node ID: 1
      IP: 192.168.10.110*
      Backplane: 1/1/2
      Health: UP
      Admin State: ACTIVE
      Operational State: ACTIVE(CCO)
      Sync State: ENABLED
   2) Node ID: 2
      IP: 192.168.10.120
      Backplane: 2/1/2
      Health: UP
      Admin State: ACTIVE
      Operational State: ACTIVE
      Sync State: ENABLED
   3) Node ID: 3
      IP: 192.168.10.130
      Backplane: 3/1/2
      Health: UP
      Admin State: ACTIVE
      Operational State: ACTIVE
      Sync State: SUCCESS
   Done
   NS10_node1>

19. OPTIONAL: If any of the nodes appears as NOT UP and is currently out of sync, remove it and re-add it to the cluster. Log in to the problematic node's NSIP (in our example, node 3) and remove the cluster instance. Then re-join the node to the cluster to force a full synchronization.

   NS10_node1> show cluster node
   1) Node ID: 1
      IP: 192.168.10.110*
      State: ACTIVE
      Backplane: 1/1/2
      Health: UP
      Master State: ACTIVE(CCO)
      Sync State: ENABLED
   2) Node ID: 2
      IP: 192.168.10.120*
      State: ACTIVE
      Backplane: 2/1/2
      Health: UP
      Master State: ACTIVE
      Sync State: SUCCESS
   3) Node ID: 3
      IP: 192.168.10.130
      State: ACTIVE
      Backplane: 3/1/2
      Health: NOT UP
      Reason(s): Service state is being synchronized with the cluster
      Master State: INACTIVE
      Sync State: ENABLED
   Done
   NS10_node1>

   NS10_node3> rm cluster instance 1
   Done
   NS10_node3> join cluster -clip 192.168.10.140 -password nsroot
   Warning: The configuration must be saved and the system rebooted for these settings to take effect
   NS10_node3> save ns config
   Done
   NS10_node3> reboot -warm
   Are you sure you want to restart NetScaler (Y/N)? [N]:y
   NS10_node3>

20. From the cluster IP (CLIP) SSH session, verify that all IPs defined so far are ACTIVE.

21. Next, we need to define the distribution method for the cluster system. Since this lab environment is completely virtualized, we will use the "link set" deployment type, as it does not require any router or switch configuration. First, while logged on to the cluster IP (CLIP), create the link set definition. Then, bind all the interfaces connected to the link set. In our example, each VPX appliance has interface n/1/1 connected to the upstream virtual switch (where n is the node ID); we select these interfaces in our link set configuration.

   NS10_node1> add linkset LS/1
   Done
   NS10_node1> bind linkset LS/1 -ifnum 1/1/1
   Done
   NS10_node1> bind linkset LS/1 -ifnum 2/1/1
   Done
   NS10_node1> bind linkset LS/1 -ifnum 3/1/1
   Done
   NS10_node1>

22. Verify that the link set binding is correct:

   NS10_node1> show linkset LS/1
   1) Id: LS/1
      Interfaces: 1/1/1 2/1/1 3/1/1
   Done
   NS10_node1>

23. While logged on to the cluster IP (CLIP), save the cluster configuration:

   NS10_node1> save ns config

24. The cluster is now fully operational.

Summary

Key Takeaways

The key takeaways for this exercise are:

- New logical and physical design requirements must be considered before deploying a cluster system. Communication traffic flows between the nodes in the cluster, the clients, and the servers have to be carefully planned in order to avoid issues during implementation.
- New entity types are available in a cluster system. Spotted entities are only active on a single node; striped entities are available on all nodes participating in the cluster.

NOTES: One example of a spotted entity is the NSIP of each system. Additionally, in our previous configuration we defined SNIPs as spotted entities, and each node in the cluster will use these IPs to contact the backend servers. Spotted SNIPs are recommended by Citrix in order to avoid ARP flux issues.

After enabling the cluster instance and joining a node to the cluster setup, the configuration is lost and all entities have to be redefined. It is very important that the configuration is backed up before proceeding to add a node to the cluster.
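To make the spotted/striped distinction concrete, compare the two forms of the add ns ip command; the addresses below are illustrative, and only the second form matches what we configured in step 15:

   add ns ip 192.168.10.94 255.255.255.0 -type SNIP                  (striped: active on every node)
   add ns ip 192.168.10.95 255.255.255.0 -type SNIP -ownerNode 2     (spotted: active on node 2 only)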

Exercise 3: Cluster load balancing configuration

Overview

In a cluster system, most of the core NetScaler features are supported. Virtual server entities are configured from the configuration coordinator, which is the system that owns the cluster IP. The election of the configuration coordinator is performed automatically by the Quorum Service Provider (cluster) protocol upon cluster setup. If the configuration coordinator fails, the QSP protocol elects a new CCO.

In this exercise we will configure a load balancing virtual server as a striped entity. We will explore the traffic flow between members of the cluster and force a CCO failure while serving traffic to illustrate how the system behaves.

Step by step guidance

The lab environment required for this exercise is as follows:

1. NetScaler VPX appliance: NS10_node1
2. NetScaler VPX appliance: NS10_node2
3. NetScaler VPX appliance: NS10_node3
4. Apache_MySQL_1 (GENTOO_1)
5. Apache_MySQL_2 (GENTOO_2)
6. Internal Windows 7 workstation: Win7Client

Estimated time to complete this lab: 30 minutes.

1. From the Windows 7 workstation (Win7Client), log in to the cluster IP (CLIP) and enable the load balancing feature and the USNIP mode:

   NS10_node1> enable ns feature LB
   NS10_node1> enable ns mode USNIP

2. While connected to the cluster IP (CLIP), configure HTTP and MySQL servers and services for servers 1 and 2 (Apache_MySQL_1 / Apache_MySQL_2).

   Note: The HTTP virtual servers and services must be redefined, as they were lost when the initial node joined the cluster. Use the initial configuration backup to restore these settings.

   NS10_node1> add server GENTOO_1 192.168.10.13
   Done
   NS10_node1> add server GENTOO_2 192.168.10.14
   Done
   NS10_node1> add service GENTOO_1_HTTP_TCP_80 GENTOO_1 HTTP 80
   Done
   NS10_node1> add service GENTOO_2_HTTP_TCP_80 GENTOO_2 HTTP 80
   Done
   NS10_node1> add service GENTOO_1_MYSQL_TCP_3306 GENTOO_1 MYSQL 3306
   Done
   NS10_node1> add service GENTOO_2_MYSQL_TCP_3306 GENTOO_2 MYSQL 3306
   Done

3. From the same SSH session to the cluster IP (CLIP), create new virtual servers for HTTP and MySQL.

   Note: Make sure the configuration commands are executed while connected to the cluster IP (CLIP). Even when connecting to the CCO's NSIP, the commands are not replicated to all members of the cluster. If entities are created locally by mistake, remove them and re-add them using the cluster IP (CLIP).

   NS10_node1> add lb vserver HTTP_vserver_cluster HTTP 192.168.10.18 80
   Done
   NS10_node1> bind lb vserver HTTP_vserver_cluster GENTOO_1_HTTP_TCP_80
   Done
   NS10_node1> bind lb vserver HTTP_vserver_cluster GENTOO_2_HTTP_TCP_80
   Done
   NS10_node1>
   NS10_node1> add lb vserver MYSQL_vserver_cluster MYSQL 192.168.10.18 3306
   Done
   NS10_node1> bind lb vserver MYSQL_vserver_cluster GENTOO_1_MYSQL_TCP_3306
   Done
   NS10_node1> bind lb vserver MYSQL_vserver_cluster GENTOO_2_MYSQL_TCP_3306
   Done
   NS10_node1>

4. Verify that the virtual server definitions and bindings were replicated to the other nodes: log in to each node's NSIP and execute a "show lb vserver".

5. The cluster system uses a distributed monitoring infrastructure. One node in the cluster is elected as the monitoring owner and is responsible for notifying the CCO of service health status; in turn, the CCO notifies the rest of the nodes in the cluster. To verify which node is the monitoring owner, inspect the output of the "show service <service_name>" command from the SSH session to the cluster IP (CLIP). The output from your environment may differ, as a new monitoring owner can be selected.
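For example, to check the HTTP service for server 1 (the output includes the monitor state reported by the owning node; treat the exact output layout as build-dependent):

   NS10_node1> show service GENTOO_1_HTTP_TCP_80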

6. Now, we will trace the traffic path when generating a GET request from the Windows 7 workstation (Win7Client). For simplicity, bind a simple PING monitor to the services; this keeps the TCPDUMP output from being cluttered with HTTP probe traffic.

   IMPORTANT: Make sure you perform all configuration operations from the SSH session to the cluster IP (CLIP).

   NS10_node1> bind service GENTOO_1_HTTP_TCP_80 -monitorName PING
   Done
   NS10_node1> bind service GENTOO_2_HTTP_TCP_80 -monitorName PING
   Done

7. Go back to each node's SSH session and exit to the shell. Then start a new TCPDUMP on each node using the capture filter provided:

   # shell
   # nstcpdump.sh -ne arp | grep "192.168.10.18"

8. From the Windows workstation (Win7Client), open the browser and navigate to the VIP previously configured (192.168.10.18). Observe the TCPDUMP output on each node.

9. When configuring a link set, the cluster designates an ARP owner for every striped IP configured. One of the connected nodes is responsible for replying to ARP requests for a VIP. Upon receiving a request, the ARP owner returns its own MAC address and receives the initial traffic flow. Inspect the TCPDUMP running on each of the nodes to find out which node is the ARP owner. In our example, NS10_node1 (CCO) is the ARP owner.

10. Verify which interface owns the MAC address given in the ARP reply. From the cluster IP (CLIP) SSH session, look at the output of the "show interface" command. Which interface owns it?

11. Next, we will verify the traffic flow between the client, the server, and the nodes in the cluster. From the SSH session to each node's NSIP, stop the previous capture and start a new TCPDUMP. This time, we will use a more specific filter to capture packets destined to the VIP or any of the backend servers on port 80, as well as ARP traffic. Additionally, with the -e flag, the output displays the corresponding source and destination MAC addresses.

   # shell
   # nstcpdump.sh -ne "((host 192.168.10.18 or host 192.168.10.13 or host 192.168.10.14) and tcp port 80) or arp"
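To isolate only the steered (node-to-node) frames discussed in the following steps, a byte-offset match on the shared cluster MAC prefix 02:00:6F can be added. This is ordinary pcap filter syntax passed through to tcpdump, shown here as an illustrative variant rather than part of the lab script; ether[6:4] reads the first four bytes of the source MAC, and the mask keeps the first three (02:00:6f):

   # nstcpdump.sh -ne "(ether[6:4] & 0xffffff00) = 0x02006f00"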

12. From the Win7Client workstation, refresh the page.

13. Observe the TCPDUMP output on each node. Which node receives the initial flow? Which node contacts the backend server? In a link set configuration, traffic flows from the client IP to the VIP through the MAC owner. The cluster then selects the node that will process the request (the flow processor) and forwards the request to that node over the backplane interface. Next, the flow processor contacts the backend server and fetches the response. Over the same backplane interface, the response is returned to the ARP owner, who in turn sends it back to the client IP.

   Note: During your testing, the traffic flow might differ depending on which node is selected as the flow receiver/processor. Inspect the source/destination MAC addresses to understand how traffic flows in your lab environment.

   Node1:
   Node2:
   Node3:

14. For node-to-node communication, the backplane interface is used, and a shared MAC address identifies steered traffic. This MAC has the following format:

   02:00:6F:<cluster_id>:<node_id>:<flags>

   In the TCPDUMP output from step 11, inspect the source and destination MAC addresses used between nodes.
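As a worked example of that format, the shell sketch below splits a hypothetical backplane MAC into its fields; the value is made up for illustration and was not captured in this lab:

   mac="02:00:6f:01:02:00"              # hypothetical: cluster_id 01, node_id 02, flags 00
   cluster=$(echo "$mac" | cut -d: -f4) # fourth colon-separated field
   node=$(echo "$mac" | cut -d: -f5)    # fifth field
   flags=$(echo "$mac" | cut -d: -f6)   # sixth field
   echo "cluster_id=0x$cluster node_id=0x$node flags=0x$flags"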

15. OPTIONAL: To further illustrate the traffic flow, we will send sequential GET requests to the VIP and observe the behavior. From Apache_MySQL_3 (GENTOO_3), start the wbox utility to send one request per second to the VIP:

   gentoo_3 ~ # wbox 192.168.10.18
   WBOX 192.168.10.18 (192.168.10.18) port 80
   0. 200 OK 318 bytes 2 ms
   1. 200 OK 318 bytes 2 ms

   From the Windows 7 workstation (Win7Client), open an SSH session to each node's NSIP and start a TCPDUMP using the same capture filter as in step 11:

   # nstcpdump.sh -ne "((host 192.168.10.18 or host 192.168.10.13 or host 192.168.10.14) and tcp port 80) or arp"

   Observe the traffic flow for the incoming requests.

   1. Is the ARP owner always the same? Why?
   2. Is the flow processor always the same?

16. Now, we will simulate a node failure by rebooting one of the nodes in the cluster. While leaving the wbox utility running to generate constant HTTP probes (refer to the previous step), reboot one of the nodes in the cluster:

   NS10_node3> reboot -warm
   Are you sure you want to restart NetScaler (Y/N)? [N]:y
   Done
   NS10_node3>

17. When the node selected for reboot goes offline, a small delay might be perceived from the client while connections are redirected to the other nodes. Inspect the probe output. Was any HTTP request lost during your testing?

   gentoo_3 ~ # wbox 192.168.10.18
   ...
   65. 200 OK 318 bytes 2 ms
   66. 200 OK 318 bytes 3 ms
   67. 200 OK 318 bytes 3 ms
   68. 200 OK 318 bytes 2 ms
   69. 200 OK 318 bytes 3 ms
   70. 200 OK 318 bytes 3 ms
   71. 200 OK 318 bytes 3295 ms
   72. 200 OK 318 bytes 2 ms
   73. 200 OK 318 bytes 2 ms
   74. 200 OK 318 bytes 1 ms
   75. 200 OK 318 bytes 1 ms

18. After node 3 comes back online, it rejoins the cluster automatically and connections start to flow through this node once again. From the Windows 7 workstation (Win7Client), go back to the SSH session to the cluster IP (CLIP) and review the output of "show cluster node". What is the status of the rebooted node? Did it re-join the cluster?

19. From the same workstation, go back to the SSH session to node 3. Stop any running trace and start a new TCPDUMP to inspect the traffic flow, using the command provided:

   # nstcpdump.sh -ne "((host 192.168.10.18 or host 192.168.10.13 or host 192.168.10.14) and tcp port 80) or arp"

   Is there any traffic flowing through this node?

20. Now, to simulate a configuration coordinator failure, we will reboot the CCO node while sending HTTP probes. From the SSH session to the cluster IP (CLIP), reboot the CCO while running the wbox utility on server 3 (GENTOO_3). The cluster system will immediately start the CCO election process and select a new configuration coordinator. A small delay might be perceived from the client while this is in progress.

   Note: Since our cluster deployment uses a link set, one of the nodes is the ARP owner. Earlier in this exercise (step 9), we found that node 1, the node being rebooted here, was the ARP owner; therefore, a new node is selected for this function as well.

   From the CCO node, execute:

   NS10_node1> reboot -warm
   Are you sure you want to restart NetScaler (Y/N)? [N]:y
   Done
   NS10_node1>

   From server 3 (GENTOO_3), observe the wbox output:

   395. 200 OK 318 bytes 2 ms
   396. 200 OK 318 bytes 2 ms
   397. 200 OK 318 bytes 3 ms
   398. 200 OK 318 bytes 3 ms
   399. 200 OK 318 bytes 2 ms
   400. 200 OK 318 bytes 7012 ms
   401. 200 OK 318 bytes 2 ms
   402. 200 OK (319) bytes 4 ms
   403. 200 OK (318) bytes 2 ms
   404. 200 OK 318 bytes 3 ms
   405. 200 OK 318 bytes 2 ms
   406. 200 OK 318 bytes 3 ms
   407. 200 OK 318 bytes 2 ms
   408. 200 OK 318 bytes 2 ms

21. From the Windows 7 workstation (Win7Client), reconnect to the cluster IP while the previous node is rebooting. Execute the "show cluster node" command. Was the connection successful? Was a new CCO elected?

22. So far, our HTTP vserver had no persistence settings, and the default least connection method was in use. Now, we will explore other load balancing algorithms and how persistence state is replicated in a cluster system. From the SSH session to the cluster IP (CLIP), change the LB method to round robin and select source IP persistence:

   NS10_node2> set lb vserver HTTP_vserver_cluster -lbMethod ROUNDROBIN -persistenceType SOURCEIP
   Done
   NS10_node2>

23. Go back to the SSH session to Apache_MySQL_3 (GENTOO_3). Using our probing tool, send HTTP requests to the VIP in order to generate constant traffic.

   Note: If you followed the previous steps, the wbox utility should still be running. Verify that this is the case.

   gentoo_3 ~ # wbox 192.168.10.18
   WBOX 192.168.10.18 (192.168.10.18) port 80
   0. 200 OK 318 bytes 1 ms
   1. 200 OK 318 bytes 1 ms

24. Go back to the SSH session to the cluster IP (CLIP) on the Windows 7 workstation (Win7Client). Inspect the persistence table by executing the "show persistentSessions" command:

   > show persistentSessions

   Perform the same command on the other two nodes. Are the persistence entries the same on all nodes in the cluster?

25. From the SSH session to the cluster IP (CLIP), look at the timeout for the persistence entry created. Is the timeout value the same across all nodes?

26. This concludes our load balancing configuration. Next, we will look into the new packet capture options available in a cluster system.

Summary

Key Takeaways

The key takeaways for this exercise are:

- A NetScaler cluster system provides a scalable and fault tolerant solution. If any of the components fail, another member of the cluster will take over the load or roles assigned.
- Load balancing persistence is accomplished through hash tables shared across all cluster nodes. Even though traffic does not flow through the same node for every incoming flow, the same destination server is selected.

NOTES: NetScaler clustering uses an extension of the existing persistence algorithms available in nCore technology. This extension applies to independent systems that process traffic independently, in the same fashion as packet engines.

Exercise 4 (Optional): Packet capture analysis in a cluster system

Overview

In a cluster system, traffic flows can be complex depending on the deployment type and environment. To help troubleshoot networking issues, additional markings are included in captured packets when traces are taken in native format. Native-format tracing was already available in previous releases; however, with the introduction of clustering, additional markings are needed to aid packet capture analysis. In this exercise, we will capture a trace in native format and examine these markings.

Keep in mind that a protocol analyzer capable of understanding the native NetScaler format is required. The latest WireShark release already includes the necessary changes. For this exercise, the required application version was preloaded on the workstation, so no additional download is required.

Step by step guidance

The lab environment required for this exercise is as follows:

1. NetScaler VPX appliance: NS10_node1
2. NetScaler VPX appliance: NS10_node2
3. NetScaler VPX appliance: NS10_node3
4. Apache_MySQL_1 (GENTOO_1)
5. Apache_MySQL_2 (GENTOO_2)
6. Apache_MySQL_3 (GENTOO_3)
7. Windows 7 workstation: Win7Client

Estimated time to complete this lab: 20 minutes.

1. From the Windows 7 workstation (Win7Client), log in to the cluster IP (CLIP) GUI using any browser installed on the workstation.

2. Navigate to Configuration -> System -> Diagnostics -> Technical Support Tools -> Start new trace.

3. Verify that no TCPDUMP sessions are running individually on any node; stop them if necessary. Then start a new trace with the following settings, leave the other settings at their defaults, and click Start:

   Packet size = 0
   Trace file format = nstrace
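The same trace can also be driven from the command line with the nstrace utility; a minimal sketch, assuming the NetScaler 10 CLI flag names (verify them with "help nstrace" on your build):

   > start nstrace -size 0
   ... generate traffic ...
   > stop nstrace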

4. From the same workstation (Win7Client), open a different browser and navigate to the HTTP virtual server in order to generate some traffic to be captured. Since there is a web application pre-installed on all servers bound to the VIP (Apache_MySQL_1 and Apache_MySQL_2), use the URL below to access it:

   http://192.168.10.18/phpmyadmin

   Log in as:
   U: root
   P: Citrix123

5. After logging in to the web application, go back to the NetScaler trace utility and stop the trace. Then click the Download button to open the file browser.

6. Whenever a trace is captured, a new directory is created containing the trace files for each node. Select the folder with the correct timestamp, click the zip utility to compress the archive, and then hit Select.

7. Download the compressed archive to the administrator's account desktop.

8. Close the trace utility. Extract the contents of the compressed archive to the workstation's desktop.

9. Inspect the contents of the new directory. You should see a subfolder for every node in the cluster. Open any of the trace files and filter by HTTP.

10. Expand any of the frames and inspect the NetScaler Packet Trace layer. Note the additional markings included in each packet. Some of these are:

   Source Node
   Destination Node
   Cluster Flags

11. In our example, frame 3267 was received by node 1 and steered to node 2 for further processing. Look at the source and destination MAC addresses to verify this. From the same workstation (Win7Client), open an SSH session to the cluster IP (CLIP) and compare them with the output of the "show interface" command.

12. As you can see, the source MAC address is the virtual MAC used by all nodes in the cluster. In order for the correct node to process the traffic, the packet must be "steered" to the corresponding owner node (in this case, node 2). The receiving node (flow receiver) forwards the packet to the destination node (flow processor) to serve the request.

13. The elegance of the cluster architecture lies in its active/active design: any node in the traffic path can receive traffic. This complicates analysis a bit, as each trace contains only chunks of the traffic relevant to a particular request, making networking issues very hard to troubleshoot across multiple trace files. To solve this problem, NetScaler 10 offers a new command line utility to merge all trace files into a single capture. The nstracemerge.sh utility is available from the shell.

14. Let's merge the previously captured trace files. From the Windows 7 workstation (Win7Client), log in to the cluster IP (CLIP) using SSH. Access the shell prompt and navigate to the directory where the trace was captured:

   /var/nstrace/DDMonthYYYY_hh_mm_ss

   NOTE: The folder is labeled with a timestamp. See the example below.

   > shell
   # cd /var/nstrace/19Sep2012_03_16_59/

15. From the SSH session, invoke the nstracemerge.sh utility with the default parameters. This uses the current directory as both the source and destination for the input/output files. The result is a single trace file named nstrace0.cap containing the contents of all the individual traces:

   root@NS10_node2# pwd
   /var/nstrace/17Apr2012_21_55_47
   root@NS10_node2# ls -alth
   drwxr-xr-x 5 root wheel 512B Apr 18 01:38 .
   drwxr-xr-x 4 root wheel 512B Apr 17 22:42 ..
   drwxr-xr-x 2 root wheel 512B Apr 17 21:56 node2
   drwxr-xr-x 2 root wheel 512B Apr 17 21:55 node3
   drwxr-xr-x 2 root wheel 512B Apr 17 21:55 node1
   root@NS10_node2# nstracemerge.sh
   nstrace merge
   source directory : .
   destination directory : .
   output file name : nstrace
   current working directory [.] :/var/nstrace/17Apr2012_21_55_47
   Files in source directory:
   ./node1/nstrace1_node1.cap
   ./node2/nstrace1_node2.cap
   ./node3/nstrace1_node3.cap
   root@NS10_node2# ls -alth
   total 2378
   drwxr-xr-x 5 root wheel 512B Apr 18 01:46 .
   -rwx------ 1 root wheel 2.3M Apr 18 01:46 nstrace0.cap
   drwxr-xr-x 4 root wheel 512B Apr 17 22:42 ..
   drwxr-xr-x 2 root wheel 512B Apr 17 21:56 node2
   drwxr-xr-x 2 root wheel 512B Apr 17 21:55 node3
   drwxr-xr-x 2 root wheel 512B Apr 17 21:55 node1
   root@NS10_node2#

16. From the Windows 7 workstation (Win7Client), use WinSCP to copy the merged trace file to the desktop.

17. From the Windows 7 workstation (Win7Client), open the merged file and verify its size. Is the trace larger than the individual pieces? Does it contain more packets?
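A quick way to answer both questions is capinfos, the capture-summary tool that ships with Wireshark (assuming it was installed alongside the preloaded WireShark build; adjust the paths to wherever you extracted the per-node folders):

   capinfos -c nstrace0.cap node1/nstrace1_node1.cap node2/nstrace1_node2.cap node3/nstrace1_node3.cap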

18. In WireShark, specify a display filter for the VIP and HTTP traffic. Use the filter provided below:

   ip.addr==192.168.10.18 && http

19. Still in WireShark, select the first packet. In the packet details, expand the "NetScaler Packet Trace" sub-node. Start from the first packet displayed and continue highlighting the next ones. Observe the Source / Destination node fields as well as the Flow Processor / Flow Receiver flags. Are all packets processed by the same node?

20. Additional display filters have been introduced in the new WireShark release. We now have the ability to filter packets based on cluster flags, source/destination node, and so on. In order to find out which packets were received/processed/steered by a certain cluster node, we can leverage some of the following filters:

   nstrace.snode
   nstrace.dnode
   nstrace.flags
   nstrace.flags.rssh
   nstrace.flags.srss
   nstrace.flags.dfd
   nstrace.flags.fr
   nstrace.flags.fp

   For example, to find the packets sent by the client and received by node 1, we can use the following filter:

   ip.addr==192.168.10.18 && ip.addr==192.168.10.233 && nstrace.snode==-1 && nstrace.dnode==1

   In WireShark, apply the filter provided to the merged trace. Were there any packets received by node 1?
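Similarly, to narrow the merged trace to packets that a specific node handled as the flow processor, the node and flag fields can be combined; treat the ==1 test as an assumption about how the dissector exposes the boolean flags:

   ip.addr==192.168.10.18 && nstrace.dnode==2 && nstrace.flags.fp==1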

21. Change the destination node filter to either node 2 or node 3. Was any packet received by either of these nodes? Why or why not?

   NOTE: Since we are using a link set configuration, one node is selected as the ARP owner. This means that all incoming packets for a single VIP are received by that node. If you select a different destination node in the filter, only 1 of the 3 nodes in the cluster should show received incoming packets.

22. This concludes this exercise. Next, we will explore other traffic distribution methods by extending our lab environment to a remote network and configuring ECMP.

Summary

Key Takeaways

The key takeaways for this exercise are:

- Traffic flowing through a cluster system adds complexity when troubleshooting network issues. NetScaler 10 introduces new options when capturing network traces in a cluster system to identify the nodes processing traffic, thus simplifying the analysis process.
- These additional packet markings are only available when capturing a trace in native format, and a protocol analyzer that understands them is required. Citrix has submitted the changes to the Wireshark organization, and the latest WireShark build contains the necessary changes to parse such captures.
- An additional utility is available in NetScaler 10 to merge the trace files generated by each node. This simplifies the analysis process.

NOTES: Administrators familiar with NetScaler traces should find the new options familiar, as they extend the existing tracing capabilities of a NetScaler appliance.

Exercise 5 (Optional): ECMP configuration

Overview

In a previous exercise, we used a link set configuration as the traffic distribution mechanism. A link set is easily implemented, as it does not require any changes on the upstream device, making it suitable for environments where administrative control of all network devices is not possible. This distribution method has one caveat: traffic destined to a single VIP passes only through the single node selected as the MAC address owner, potentially creating a bottleneck if a large amount of traffic surges to that IP.

In addition to link sets, other traffic distribution methods are available to mitigate this limitation:

- ECMP: Equal Cost Multipath Routing
- CLAG: Cluster Link Aggregation

In this lab, we will implement an ECMP configuration. We will use OSPF as the dynamic routing protocol to advertise the VIP host routes to an intermediate router, giving a remote network access to the virtual servers hosted by the cluster. The following diagram is used during this setup:

[Diagram: same lab topology, seen from the routing side. The cluster nodes NS10_node1 (.110), NS10_node2 (.120), and NS10_node3 (.130) sit with the internal workstation and DNS server on the internal 192.168.10.0/24 network; the internal router (172.16.1.1 / 172.16.2.1) links the 172.16.1.0/24 transit network to the remote 172.16.2.0/24 network, where EXT-Win_7 (.2) resides.]

Step by step guidance

The lab environment required for this exercise is as follows:

1. NetScaler VPX appliance: NS10_node1
2. NetScaler VPX appliance: NS10_node2
3. NetScaler VPX appliance: NS10_node3
4. Apache_MySQL_1 (GENTOO_1)
5. Apache_MySQL_2 (GENTOO_2)
6. Internal Windows 7 workstation: Win7Client
7. Remote network Windows 7 workstation: EXT-Win_7
8. Vyatta internal router: vyatta-RTR

Estimated time to complete this lab: 30 minutes.

1. The 3-node cluster created in a previous exercise should be fully operational; however, it still uses a link set configuration. Since we are going to configure a fully connected ECMP setup, from the Windows 7 workstation (Win7Client) establish an SSH session to the cluster IP (CLIP) and remove the link set definition:

   NS10_node2> rm linkset LS/1
   Done

2. To start our ECMP configuration, we must establish connectivity from the NetScaler cluster to the remote network through the internal Vyatta router. In the next step, we will create the necessary interfaces to establish this connectivity. Review the diagram provided to get familiar with the logical topology, then continue to the next step.

   [Diagram: each cluster node gains an interface on the 172.16.1.0/24 network (labeled .101 for node1, .102 for node2, and .103 for node3 in the figure); the internal router (172.16.1.1 / 172.16.2.1) connects that network to the remote 172.16.2.0/24 network, where EXT-Win_7 (.2) resides.]

3. Using XenCenter, add a new interface to nodes 1, 2, and 3. Bind this interface to the To_172-16-1-0 network on each node.

4. All three NetScaler cluster nodes should now have the following networks attached. The newly bound network provides the necessary connectivity toward the remote 172.16.2.0/24 subnet.

5. From the Windows 7 workstation (Win7Client), SSH into the cluster IP (CLIP) and enable OSPF:

   NS10_node2> enable ns feature OSPFRouting
   Done
   NS10_node2>

6. Next, we will create new subnet IPs in the new network space (172.16.1.0/24). We will turn on dynamic routing on these IP addresses in order to "peer up" with the Vyatta router and form an OSPF adjacency. From the SSH session to the cluster IP (CLIP), create 3 different SNIPs, one assigned to each node. Make sure you turn on the dynamic routing option. Use the following IPs:

   172.16.1.11 - node1
   172.16.1.12 - node2
   172.16.1.13 - node3

   NS10_node2> add ns ip 172.16.1.11 255.255.255.0 -ownerNode 1 -dynamicRouting ENABLED
   NS10_node2> add ns ip 172.16.1.12 255.255.255.0 -ownerNode 2 -dynamicRouting ENABLED
   NS10_node2> add ns ip 172.16.1.13 255.255.255.0 -ownerNode 3 -dynamicRouting ENABLED

7. From each node's SSH session (to the NSIP), return to the NetScaler CLI and verify connectivity to the vyatta router's eth1 interface (172.16.1.1) using PING.

   NOTE: Since we manipulated the network interfaces on the NetScaler VPX instances while they were running, it is possible that ICMP replies are not received. If this is the case, reboot each of the cluster nodes as well as the Vyatta router VM, then retry the connectivity test.

8. From the Windows 7 workstation (Win7Client), return to the SSH session to the cluster IP (CLIP) and enable the dynamic routing options on the VIP (created in a previous exercise) to inject the host route into OSPF. Additionally, set the vserver route health injection level to ONE_VSERVER, so that the route is withdrawn from the routing table once no virtual server on this VIP remains up:

   NS10_node2> set ns ip 192.168.10.18 -hostRoute ENABLED -vserverRHILevel ONE_VSERVER

Page 59: Leveraging Netscaler 10 Clusters to Improve Application Performance

Page 59

Citrix Virtual Classroom

Step Action

9. Next, we will configure the

intermediate router to run

OSPF and setup an adjacency

with the NetScalers in the

cluster.

Login to the vyatta router

console (vyatta-RTR) using

the default username and

password (vyatta/vyatta).

Switch to configuraton mode

and start the OSPF instance

on area 0.

Proceed to add all the new

subnet IPs (created on the

previous step) as OSPF

neighbors with the same

priority and polling interval.

Enter configuration mode:

vyatta@vyatta:~$ configure

Once in configuration mode, enable OSPF and configure the NetScaler nodes as neighbors:

vyatta@vyatta# set protocols ospf area 0 network 172.16.1.0/24
vyatta@vyatta# set protocols ospf neighbor 172.16.1.11 poll-interval 60
vyatta@vyatta# set protocols ospf neighbor 172.16.1.11 priority 0
vyatta@vyatta# set protocols ospf neighbor 172.16.1.12 poll-interval 60
vyatta@vyatta# set protocols ospf neighbor 172.16.1.12 priority 0
vyatta@vyatta# set protocols ospf neighbor 172.16.1.13 poll-interval 60
vyatta@vyatta# set protocols ospf neighbor 172.16.1.13 priority 0

10. Commit and save the configuration for the changes to become active.

vyatta@vyatta# commit

vyatta@vyatta# save
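As an optional sanity check, once back in operational mode you can confirm the OSPF process is running with area 0 configured ("show ip ospf" is a standard Vyatta operational command):

vyatta@vyatta:~$ show ip ospf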


11. The next step is to configure the OSPF instance on the NetScaler appliances. NetScaler uses the ZebOS protocol suite to manage the dynamic routing protocol configuration.

From the Windows 7 workstation (Win7Client), go back to the SSH session to the cluster IP (CLIP) and access the ZebOS shell.

NS10_node2> vtysh

12. Access the ZebOS configuration mode and start the OSPF instance. Add the network adjacent to the vyatta router on area 0.

NS10_node2# conf t
NS10_node2(config)# router ospf
NS10_node2(config-router)# network 172.16.1.0 0.0.0.255 area 0
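Optionally, you can review what was just entered. vtysh supports the standard "show running-config" command from the enable prompt (which you will reach after exiting configuration mode in the next step); the "router ospf" stanza should contain the network statement above:

NS10_node2# show running-config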

13. From the ZebOS shell, display the routing table to ensure the host route is being marked as a kernel route. The VIP/32 route should be displayed.

NS10_node2(config-router)#exit

NS10_node2(config)#exit

NS10_node2#show ip route


14. Next, show the OSPF neighbors to ensure the proper adjacencies were established. The vyatta router as well as the other two NetScalers in the cluster should show up as neighbors.

NS10_node2#show ip ospf neighbor

15. Verify the same adjacencies on the vyatta router. From the vyatta router console, display the OSPF neighbors.

NOTE: In order for the show commands to work correctly, make sure you exit out of configuration mode.

vyatta@vyatta# exit

exit

vyatta@vyatta:~$ show ip ospf neighbor


16. We are now ready to redistribute the host routes into OSPF. The idea is for the NetScaler to advertise these routes so that the Vyatta router includes them in its routing table. This way, hosts on the remote network will be able to access the striped VIP through the advertised route.

From the Windows 7 workstation (Win7Client), go back to the cluster IP (CLIP) SSH session. The ZebOS shell should be available. Redistribute the kernel routes into the OSPF instance.

node0#conf t

node0(config)#router ospf

node0(config-router)#redistribute kernel

node0(config-router)#exit

node0(config)#exit

node0#exit

NS10_node1>
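A note on scope: "redistribute kernel" injects every kernel route into OSPF. In this lab that is only the VIP host route, but if you needed to advertise selectively, ZebOS follows the familiar route-map syntax, so a filter along these lines should work (a hedged sketch; the access-list number and route-map name are arbitrary examples):

node0(config)#access-list 10 permit 192.168.10.18 0.0.0.0
node0(config)#route-map VIP_ONLY permit 10
node0(config-route-map)#match ip address 10
node0(config-route-map)#exit
node0(config)#router ospf
node0(config-router)#redistribute kernel route-map VIP_ONLY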

17. Go back to the vyatta router console and display the routing table.

Was the host route injected?

Are there multiple routes?

Do they have the same cost?

Does it show the proper next hop IP?

vyatta@vyatta:~$ show ip route


18. From the Windows 7 workstation at the remote network (EXT-Win_7), attempt to PING or access the virtual server (192.168.10.18) via HTTP.

Does the VIP respond with ICMP replies? Why not?

19. At this point, the vyatta router has specific routes to reach the virtual server, and traffic is being forwarded to one of the nodes in the cluster; however, the NetScaler cluster does not know how to route back to the remote network.

[Diagram: EXT-Win_7 (172.16.2.2) sends an ICMP echo request to the HTTP vserver / MySQL vserver at 192.168.10.18. The internal router forwards it across 172.16.1.0/24 to one of the cluster nodes (NS10_node1/.101, NS10_node2/.102, NS10_node3/.103), but the result is "Destination unreachable" because the cluster lacks a return route to the remote network.]


20. From the internal network Windows workstation (Win7Client), go back to the cluster (CLIP) SSH session and display the routing table.

Is there a route to the remote network (172.16.2.0/24)? Why not?

NS10_node2> show route

21. To get the correct return route into the NetScaler cluster routing table, we have two options: install a static route, or use OSPF and have the router redistribute its routes.

Since there is an OSPF adjacency already established, we can configure the vyatta router to redistribute its connected routes.

Login to the vyatta router console and execute the commands provided below for the router to advertise its routes:

vyatta@vyatta$ configure

vyatta@vyatta# set protocols ospf redistribute connected

vyatta@vyatta# commit

vyatta@vyatta# save

vyatta@vyatta# exit
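For reference, the static-route alternative mentioned above would be a single command on the cluster, using the standard NetScaler syntax (add route <network> <netmask> <gateway>); we use OSPF here instead so the cluster learns topology changes automatically:

NS10_node2> add route 172.16.2.0 255.255.255.0 172.16.1.1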

22. Go back to the cluster IP (CLIP) SSH session and inspect the NetScaler cluster routing table. Why is the route still NOT available?


23. In order to include any advertised routes via OSPF in the NetScaler routing table, we need to instruct the underlying routing layer (ZebOS) to make these routes available to the NetScaler kernel. Review the diagram below and then continue with the next step.

[Diagram: the internal router ADVERTISES ROUTES via OSPF to NS10_node1, NS10_node2 and NS10_node3; ZebOS then INSTALLS ROUTES into the NetScaler routing table. The three routing tables shown in the diagram are reproduced below.]

ZebOS routing table:

NS10_node2#show ip route
Codes: K - kernel, C - connected, S - static, R - RIP, B - BGP
       O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, I - Intranet
       * - candidate default

C    127.0.0.0/8 is directly connected, lo0
C    172.16.1.0/24 is directly connected, vlan0
O E2 172.16.2.0/24 [110/20] via 172.16.1.1, vlan0, 00:05:32
C    192.168.10.0/24 is directly connected, vlan0
K    192.168.10.18/32 via 0.0.0.0
C    192.168.10.140/32 is directly connected, vlan0

NetScaler routing table:

NS10_node2> show route
        Network         Netmask          Gateway/OwnedIP   State   Type
        -------         -------          ---------------   -----   ----
1)      192.168.10.0    255.255.255.0    192.168.10.120    UP      DIRECT
2)      172.16.1.0      255.255.255.0    172.16.1.11       UP      DIRECT
3)      0.0.0.0         0.0.0.0          192.168.10.1      UP      STATIC
4)      127.0.0.0       255.0.0.0        127.0.0.1         UP      PERMANENT

Vyatta router routing table:

vyatta@vyatta:~$ show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
       I - ISIS, B - BGP, > - selected route, * - FIB route

C>* 127.0.0.0/8 is directly connected, lo
O   172.16.1.0/24 [110/10] is directly connected, eth1, 02:44:30
C>* 172.16.1.0/24 is directly connected, eth1
C>* 172.16.2.0/24 is directly connected, eth0
O>* 192.168.10.18/32 [110/20] via 172.16.1.11, eth1, 02:32:04
  *                           via 172.16.1.13, eth1, 02:32:04
  *                           via 172.16.1.12, eth1, 02:32:04

24. From the Windows 7 workstation (Win7Client), go back to the cluster IP (CLIP) SSH session and access the ZebOS shell. Enter configuration mode and execute the command provided.

NS10_node1> vtysh

node0#conf t

node0(config)#ns route-install ospf

node0(config)#exit

node0#exit


25. From the same SSH session, and while on the NetScaler CLI, display the routing table. The OSPF route should be installed.

NS10_node2> show route

26. From the remote network Windows 7 workstation (EXT-Win_7), attempt to PING the VIP.

Did you receive an ECHO reply?

27. Attempt to access the VIP over HTTP. On the same workstation, open a browser and navigate to the VIP. Can you access the default site?


28. Next, we will verify that all nodes in the cluster are receiving traffic for the striped VIP. From the Windows 7 workstation (Win7Client), SSH into the NetScaler IP of each node and start a TCPDUMP to capture packets coming from the remote workstation's IP and destined to TCP port 80. Use the filter provided.

> shell

# nstcpdump.sh host 172.16.2.2 and tcp port 80
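nstcpdump.sh accepts standard tcpdump filter expressions, so the capture can be adjusted as needed; for example, a filter like the one below should also show the ICMP test traffic from the earlier steps:

# nstcpdump.sh host 172.16.2.2 and icmp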

29. From the remote Windows 7 workstation (EXT-Win_7), open a browser and connect to the VIP.

30. Go back to the TCPDUMP sessions started in step 28. Are multiple nodes receiving traffic?


31. From the remote network Windows 7 workstation (EXT-Win_7), login to the "phpmyadmin" web application hosted through the VIP.

http://192.168.10.18/phpmyadmin
U: root
P: Citrix123

Was the login successful?

32. Go back to the SSH sessions running on the internal Windows 7 workstation (Win7Client). Are multiple nodes receiving traffic?


33. Since we have defined source IP persistence on the HTTP virtual server configuration, all the requests from the same workstation should be directed to the same backend server.

From the internal Windows 7 workstation (Win7Client), go back to the cluster IP (CLIP) SSH session and display the statistics for the HTTP virtual server with the command provided below. Record the number of hits to each of the services bound.

NS10_node2> stat lb vserver HTTP_vserver_cluster
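You can also inspect the persistence entries directly; "show lb persistentSessions" is a standard NetScaler command and should list the remote workstation's source IP mapped to the selected service:

NS10_node2> show lb persistentSessions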

34. On the remote network Windows 7 workstation (EXT-Win_7), navigate through the webapp to generate some traffic.


35. Go back to the internal Windows 7 workstation (Win7Client). From the cluster IP (CLIP) SSH session, display the virtual server statistics. Compare the number of hits to each of the services bound.

Are both services receiving traffic?


36. To simulate a node failure, use XenCenter to force shutdown one of the nodes in the cluster. In our example, we will use node 3.

37. Go back to the vyatta router console. Display the routing table using the "show ip route" command.

NOTE: Remember to exit out of configuration mode, as the "show ip route" command will not work there. (The prompt should display a $ sign, which indicates operational rather than configuration mode.)

Was the host route through the node you shut down removed?

vyatta@vyatta:~$ show ip route


38. Go back to the remote network Windows 7 workstation (EXT-Win_7) and attempt to access the HTTP VIP.

http://192.168.10.18/phpmyadmin

Can you still browse to it?

Since the vyatta router still has two host routes to the VIP, connectivity through the remaining two nodes is still possible and the VIP should continue serving traffic. In this example, the upstream device is responsible for distributing traffic to each node in the cluster based on the L3 information contained in its routing table.
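Based on the routing table shown in step 23, after node 3 goes down the ECMP entry on the vyatta router should collapse to the two surviving next hops, along these lines (illustrative output; the timers and ordering will differ in your lab):

O>* 192.168.10.18/32 [110/20] via 172.16.1.11, eth1, 02:45:10
  *                           via 172.16.1.12, eth1, 02:45:10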

39. This is the end of the exercise.

40. IMPORTANT! This is the last exercise for this lab. Please go back to the Self-paced Learning Lab site and do the following before you go:

Provide us feedback by selecting the "Take web survey…" link.

After completing the survey, click the "Logout" link towards the bottom right to close out your lab session.


Summary

Key Takeaways

The key takeaways for this exercise are:

A NetScaler cluster can use multiple distribution mechanisms to distribute traffic across all active nodes in the cluster. ECMP provides the best distribution, as each node receives traffic directly from the upstream device. State sharing between all nodes in the cluster is crucial for the cluster to support additional features (persistence, session reuse, etc.).

Load balancing persistence is accomplished through hash tables shared across all cluster nodes. Even though traffic does not flow through the same node for every incoming flow, the same destination server is selected.

NOTES

An Equal Cost Multipath routing configuration is dependent upon the upstream device's configuration. It relies on dynamic routing protocols, and on support from other network devices, to distribute traffic evenly across all cluster nodes.

Please complete this survey

We value your feedback! Please take a moment to let us know about your self-paced lab experience by completing a brief survey on the web portal before you log out.


Revision History

Revision   Change Description   Updated By               Date
1.0        Original Version     WW Technical Readiness   October 2012

About Citrix

Citrix Systems, Inc. designs, develops and markets technology solutions that enable information technology (IT) services. The Enterprise division and the Online Services division constitute its two segments. Its revenues are derived from sales of Enterprise division products, which include its Desktop Solutions, Datacenter and Cloud Solutions, Cloud-based Data Solutions and related technical services, and from its Online Services division's Web collaboration, remote access and support services. It markets and licenses its products directly to enterprise customers, over the Web, and through systems integrators (SIs), in addition to indirectly through value-added resellers (VARs), value-added distributors (VADs) and original equipment manufacturers (OEMs). In July 2012, the Company acquired Bytemobile, a provider of data and video optimization solutions for mobile network operators.

http://www.citrix.com

© 2012 Citrix Systems, Inc. All rights reserved.