Chassis Cluster Configuration


Kashif Latif

What is a Chassis Cluster…?

Chassis clustering provides network node redundancy by grouping a pair of the same kind of supported J-series or SRX-series devices into a cluster. The devices must be running JUNOS software.

A chassis cluster takes the two SRX devices and represents them as a single device.

It supports two deployment models:

1. active/active
2. active/passive

Overview

The basic active/passive chassis cluster consists of two devices:

1. One device actively provides routing, firewall, NAT, VPN, and security services, along with maintaining control of the chassis cluster.

2. The other device passively maintains its state for cluster failover capabilities should the active device become inactive.

Hardware Setup for SRX-series Chassis Clusters

To create an SRX-series chassis cluster, you must physically connect a pair of the same kind of supported SRX-series devices back-to-back over a pair of Gigabit Ethernet connections or a pair of 10-Gigabit Ethernet connections.

The connection that serves as the control link must be the built-in controller port on each device. The fabric link connection can be a combination of any pair of Gigabit Ethernet interfaces on the devices.

Connecting SRX-series Devices in a Cluster

[Figure: two SRX-series devices connected back-to-back, showing the fabric link, the control ports, and the fabric link cable]

What Happens When You Enable Chassis Cluster

After wiring the two devices together, you use CLI operational mode commands to enable chassis clustering by assigning a cluster ID and node ID on each chassis in the cluster. The cluster ID is the same on both nodes.

To do this, you connect to the console port on the device that will be the primary, give it a node ID, identify the cluster it will belong to, and then reboot the system.

You then connect to the console port on the other device, give it a node ID, assign it the same cluster ID you gave to the first node, and reboot the system.

Management Interfaces on SRX-series Chassis Clusters

The fxp0 interfaces, when configured for active/active operations, function like standard management interfaces on SRX-series devices and allow network access to each node in the cluster.

You must, however, first connect to each node through the console port and assign a unique IP address to each fxp0 interface.

Fabric Interface

The fabric is the data link between the nodes and is used to forward traffic between the chassis.

Traffic arriving on a node that needs to be processed on the other is forwarded over the fabric data link. Similarly, traffic processed on a node that needs to exit through an interface on the other node is forwarded over the fabric.

The fabric also provides for synchronization of session state objects created by operations such as authentication, Network Address Translation (NAT), Application Layer Gateways (ALGs), and IP Security (IPsec) sessions. The fabric link can be any pair of Gigabit Ethernet interfaces spanning the cluster.

Control Interfaces

The control interfaces provide the control link between the two nodes in the cluster and are used for routing updates and for control plane signal traffic, such as heartbeat and threshold information that triggers node failover.

The control link is also used to synchronize the configuration between the nodes. When you submit configuration statements to the cluster, the configuration is automatically synchronized over the control link.

Creating an SRX-series Chassis Cluster

1. Physically connect a pair of SRX-series devices together
2. Connect the initial node to the console port
3. Configure the control ports (see the sketch after this list)
4. Use CLI operational mode commands to enable clustering
5. Repeat steps 2, 3 & 4 for the other device
6. Configure the management interfaces on the cluster
7. Configure the cluster
8. Initiate manual failover
9. Configure conditional route advertisement over redundant Ethernet interfaces
10. Verify the configuration
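Step 3 applies to the data-center SRX platforms, where the control ports sit on specific cards and must be declared in the configuration before the cluster can form; branch SRX devices use the fixed built-in control port, so there is nothing to configure. A minimal sketch, assuming control ports on FPC slots 1 and 13 (the slot numbers are placeholders for your own hardware layout):

user@host# set chassis cluster control-ports fpc 1 port 0
user@host# set chassis cluster control-ports fpc 13 port 0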

Setting the Node ID and Cluster ID

After connecting the two devices together, you configure a cluster ID and a node ID.

A cluster ID identifies the cluster that the two nodes belong to.

A node ID identifies a unique node within a cluster.

You can deploy up to 15 clusters in a Layer 2 domain. Each cluster is defined by a cluster-id value within the range of 1 through 15. A device can belong to only one cluster at any given time. Nodes in a cluster are numbered 0 and 1.

CLI Configuration

To set the node IDs and cluster IDs, connect to each device through the console port and enter the following operational commands, then reboot the system.

Enter the cluster ID and node ID information for the first node. If you want redundancy groups to be primary on this node when priority settings for both nodes are the same, make it node 0.

user@host> set chassis cluster cluster-id 1 node 0
warning: A reboot is required for chassis cluster to be enabled


Enter the cluster ID and node ID for the other node. If you want redundancy groups to be secondary on this node when priority settings for both nodes are the same, make it node 1.

user@host> set chassis cluster cluster-id 1 node 1 reboot
Successfully enabled chassis cluster. Going to reboot now.

View Node Status

Use the show chassis cluster status operational command to view node status.

user@host> show chassis cluster status
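On a freshly enabled cluster with default priorities, the output looks roughly like this (illustrative and abridged; the values will reflect your own setup):

Cluster ID: 1
Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 0
    node0                   1           primary        no       no
    node1                   1           secondary      no       no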

When you complete the chassis cluster basic configuration, any subsequent configuration changes you make are automatically synchronized on both nodes.

Configuring the Management Interface

You must assign a unique IP address to each node in the cluster to provide network management access. This configuration is not replicated across the two nodes.

In an SRX-series chassis cluster, the fxp0 interface is a port on the Routing Engine (RE) card.

CLI Configuration

From the console port connection to the device you want to designate as the primary node, in configuration mode enter the following commands to name the node node0-router and assign IP address 10.1.1.1/24 to it:

user@host# set groups node0 system host-name node0-router

user@host# set groups node0 interfaces fxp0 unit 0 family inet address 10.1.1.1/24

From the console port connection to the device you want to designate as the secondary node, in configuration mode enter the following commands to name the node node1-router and assign IP address 10.1.1.2/24 to it:

user@host# set groups node1 system host-name node1-router
user@host# set groups node1 interfaces fxp0 unit 0 family inet address 10.1.1.2/24

Enter the following command in configuration mode to apply these unique configurations to the appropriate node. This configuration is not replicated across the two nodes.

user@host# set apply-groups "${node}"

Configuring Chassis Cluster Information

For the chassis cluster configuration, you specify the number of redundant Ethernet interfaces that the cluster contains and the information used to monitor the “health” of the cluster.

You must configure the redundant Ethernet interfaces count for the cluster in order for the redundant Ethernet interfaces that you configure to be recognized. Use the following command in configuration mode to define the number of redundant Ethernet interfaces for the cluster:

user@host# set chassis cluster reth-count 3

Configuring the Fabric

The fabric is the back-to-back data connection between the nodes in a cluster. Traffic on one node that needs to be processed on the other node or to exit through an interface on the other node passes over the fabric. Session state information also passes over the fabric.

In an SRX-series chassis cluster, you can configure any pair of Gigabit Ethernet interfaces or any pair of 10-Gigabit interfaces to serve as the fabric between nodes.

You cannot configure filters, policies, or services on the fabric interface.

CLI Configuration

Enter the following commands to join ge-0/0/1 on one node in the cluster and ge-7/0/1 on the other to form the fabric:

{primary:node0}
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/1

{secondary:node1}
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/1

Configuring Redundancy Groups

A redundancy group is an abstract entity that includes and manages a collection of objects, such as interfaces. A redundancy group can be primary on only one node at a time.

Before you can create redundant Ethernet interfaces you must create their redundancy groups.

CLI Configuration

Use the following command in configuration mode to specify the number of gratuitous Address Resolution Protocol (ARP) requests that an interface can send to notify other network devices of its presence after the redundancy group it belongs to has failed over:

{primary:node1}
user@host# set chassis cluster redundancy-group 1 gratuitous-arp-count 4


Use the following command in configuration mode to identify an interface to be monitored by a specific redundancy group and give it a weight. You can configure a redundancy group to monitor any interfaces, not just those belonging to its redundant Ethernet interfaces.

{primary:node1}
user@host# set chassis cluster redundancy-group 1 interface-monitor fe-3/1/1 weight 100


Use the following commands in configuration mode to specify a redundancy group's priority for primacy on each node of the cluster. The higher number takes precedence.

{primary:node1}
user@host# set chassis cluster redundancy-group 1 node 1 priority 100

{secondary:node0}
user@host# set chassis cluster redundancy-group 1 node 0 priority 200


Use the following command in configuration mode to specify if a node with a better (higher) priority can initiate a failover to become primary for the redundancy group:

{primary:node1}
user@host# set chassis cluster redundancy-group 1 preempt

Configuring Redundant Ethernet Interfaces

A redundant Ethernet interface is a pseudo interface that contains two physical interfaces, one from each node of the cluster. To create a redundant Ethernet interface, you configure the two physical interfaces independently.

You configure the rest of the configuration that pertains to them at the level of the redundant Ethernet interface, and each of the child interfaces inherits this configuration.

CLI Configuration

Use the following commands to bind redundant child physical interfaces to reth1:

{primary:node1}
user@host# set interfaces ge-0/0/0 gigether-options redundant-parent reth1
user@host# set interfaces ge-7/0/0 gigether-options redundant-parent reth1
user@host# set interfaces fe-1/0/0 fastether-options redundant-parent reth1
user@host# set interfaces fe-8/0/0 fastether-options redundant-parent reth1

Use the following commands to:

1. Add reth1 to redundancy group 1
2. Set the MTU (Maximum Transmission Unit) size to 1500 bytes
3. Assign IP address 10.1.1.3/24 to reth1

{primary:node1}
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1
user@host# set interfaces reth1 unit 0 family inet mtu 1500
user@host# set interfaces reth1 unit 0 family inet address 10.1.1.3/24


Use the following command to associate reth1.0 with a security zone named Trust. Security zone configuration is the same for redundant Ethernet interfaces as for any other interface.

{primary:node1}
user@host# set security zones security-zone Trust interfaces reth1.0

Configuring Interface Monitoring

Redundancy group failover is triggered by the results from monitoring the health of interfaces that belong to the redundancy group. When you assign a weight to an interface to be monitored, the system monitors the interface for availability.

If a physical interface fails, the weight is deducted from the corresponding redundancy group's threshold. Every redundancy group has a threshold of 255. If the threshold hits 0, a failover is triggered. Failover is triggered even if the redundancy group is in manual failover mode and preempt is not enabled.
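As a worked example (the interface names and weights here are hypothetical), suppose redundancy group 1 monitors two interfaces:

{primary:node0}
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/2 weight 100
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3 weight 155

If ge-0/0/2 fails, the threshold drops from 255 to 155. If ge-0/0/3 then also fails, the threshold reaches 0 (255 - 100 - 155 = 0) and the redundancy group fails over to the other node. A single interface weighted 255 can therefore trigger failover on its own, while lower weights require multiple interface failures.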

CLI Configuration

Use the following command to set interface monitoring on ge-7/0/3:

{primary:node1}
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-7/0/3 weight 255

Initiating a Manual Redundancy Group Failover

You can initiate a failover manually with the request command. A manual failover bumps up the priority of the redundancy group for that member to 255.

After a manual failover, the new primary continues in that role until there is a failback. If there is a failback, the manual failover is lost and primacy is decided based on priority and preempt settings. A failback in manual failover mode can occur if the primary node fails or if the redundancy group's threshold reaches 0.

CLI Configuration

Use the show command to display the status of nodes in the cluster:

{primary:node0}
user@host> show chassis cluster status redundancy-group 0

Output to this command indicates that node 0 is primary.

Use the request command to trigger a failover and make node 1 the primary:

{primary:node1}
user@host> request chassis cluster failover redundancy-group 0 node 1

Use the show command to display the new status of nodes in the cluster.

{primary:node1}
user@host> show chassis cluster status redundancy-group 0

Output to this command shows that node 1 is now primary.
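Illustrative output after the manual failover (abridged; it assumes configured priorities of 200 and 100). The new primary's priority is reported as 255 and the Manual failover column is set:

Cluster ID: 1
Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0                 200         secondary      no       yes
    node1                 255         primary        no       yes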

You can reset the failover for redundancy groups using the request command. This change is propagated across the cluster.

{primary:node1}
user@host> request chassis cluster failover reset redundancy-group 0 node 0

Verifying the Chassis Cluster

Purpose: Display chassis cluster verification options.

Action: From the CLI, enter the show chassis cluster ? command:

{primary:node1}
user@host> show chassis cluster ?

What it Means…?

The output shows a list of all chassis cluster verification parameters. Verify the following information:

1. Interfaces—Displays information about chassis cluster interfaces.
2. Statistics—Displays information about chassis cluster services and interfaces.
3. Status—Displays failover status about nodes in a cluster.

Verifying Chassis Cluster Interfaces

Purpose: Display information about chassis cluster interfaces.

Action: From the CLI, enter the show chassis cluster interfaces command:

{primary:node1}
user@host> show chassis cluster interfaces

What it Means…?

The output shows the state of the control link between the nodes and information about that link, as well as the state of the fabric interface between the nodes and information about traffic on that link.
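A rough sketch of the output on a branch SRX pair (the interface names and states are illustrative, and the exact format varies by platform and release):

Control link 0 name: fxp1
Control link status: Up

Fabric interfaces:
    Name    Child-interface    Status
    fab0    ge-0/0/1           up
    fab1    ge-7/0/1           up
Fabric link status: Up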

Verifying Chassis Cluster Statistics

Purpose: Display information about chassis cluster services and interfaces.

Action: From the CLI, enter the show chassis cluster statistics command:

{primary:node1}
user@host> show chassis cluster statistics

What it Means…?

The output shows the control link statistics (heartbeats sent and received), the fabric link statistics (probes sent and received), and the number of RTOs (real-time objects, used to synchronize session state) sent and received for services.

Clear Services & Interfaces

Purpose: Clear displayed information about chassis cluster services and interfaces.

Action: From the CLI, enter the clear chassis cluster statistics command:

{primary:node1}
user@host> clear chassis cluster statistics

What it Means…?

Cleared control-plane statistics
Cleared data-plane statistics

Control-Plane Statistics

Purpose: Display chassis cluster control-plane statistics.

Action: From the CLI, enter the show chassis cluster control-plane statistics command:

{primary:node1}
user@host> show chassis cluster control-plane statistics

What it Means…?

The output shows the control link statistics (heartbeats sent and received) and the fabric link statistics (probes sent and received).

Clear Control-Plane Statistics

Purpose: Clear displayed chassis cluster control-plane statistics.

Action: From the CLI, enter the clear chassis cluster control-plane statistics command:

{primary:node1}
user@host> clear chassis cluster control-plane statistics

What it Means…?

Cleared control-plane statistics

Data Plane Statistics

Purpose: Display chassis cluster data plane statistics.

Action: From the CLI, enter the show chassis cluster data-plane statistics command:

{primary:node1}
user@host> show chassis cluster data-plane statistics

What it Means…?

The output shows the number of RTOs sent and received for services.

Clear Data Plane Statistics

Purpose: Clear displayed chassis cluster data plane statistics.

Action: From the CLI, enter the clear chassis cluster data-plane statistics command:

{primary:node1}
user@host> clear chassis cluster data-plane statistics

What it Means…?

Cleared data-plane statistics

Verifying Chassis Cluster Status

Purpose: Display the failover status of a chassis cluster.

Action: From the CLI, enter the show chassis cluster status command:

{primary:node1}
user@host> show chassis cluster status

What it Means…?

The output shows the failover status of the chassis cluster in addition to information about the chassis cluster redundancy groups.

Clear the Failover Status

Purpose: Clear the failover status of a chassis cluster.

Action: From the CLI, enter the clear chassis cluster failover-count command:

{primary:node1}
user@host> clear chassis cluster failover-count

What it Means…?

Cleared failover-count for all redundancy-groups

Verifying Chassis Cluster Redundancy Group Status

Purpose: Display the failover status of a chassis cluster redundancy group.

Action: From the CLI, enter the show chassis cluster status redundancy-group command:

{primary:node1}
user@host> show chassis cluster status redundancy-group 2

What it Means…?

The output shows the state and priority of both nodes in a cluster and indicates whether the primary has been preempted or whether there has been a manual failover.

Upgrading Chassis Cluster

To upgrade a chassis cluster:

1. Load the new image file on node 0.
2. Perform the image upgrade without rebooting the node by entering:
   user@host> request system software add <image_name>
3. Load the new image file on node 1.
4. Repeat step 2.
5. Reboot both nodes simultaneously (a sketch of the full sequence follows this list).
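A minimal sketch of the sequence on both nodes, assuming the image has already been copied to /var/tmp on each node (the path is an assumption and <image_name> is a placeholder):

{primary:node0}
user@host> request system software add /var/tmp/<image_name>

{secondary:node1}
user@host> request system software add /var/tmp/<image_name>

Then reboot both nodes at the same time so they come up running the same release:

user@host> request system reboot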

Disabling Chassis Cluster

To disable chassis cluster, enter the following command:

{primary:node1}
user@host> set chassis cluster disable reboot

Successfully disabled chassis cluster. Going to reboot now.

After the system reboots, the chassis cluster is disabled.

Thank You…!

Kashif Latif