Firefly Perimeter Cluster vSRX Setup on VMware ESX


As you might know, Firefly Perimeter, aka vSRX, is the virtual firewall running on VMware ESX and KVM, and an evaluation copy can be downloaded here. I believe it is great for testing most functionality, for example an SRX cluster (High Availability), which is the topic of this post. Here I will show:

- How you can install two Firefly instances and operate them in a cluster
- How you can set up redundancy groups

Installation of Firefly instances

First, download your evaluation copy. You must have a Juniper Networks account to download one. The current version is 12.1X46-D10.2. Once you have downloaded the OVA file to your disk, deploy it into your ESX server via File->Deploy OVF Template.

Give it a name, e.g. firefly00 for the first instance. Continue, and you can accept whatever the wizard suggests. Deploy a second instance the same way (e.g. firefly01). Now you should have two Firefly instances ready to be configured as below:

Configuring Firefly instances

After deploying the instances, we must configure them for clustering. A default Firefly instance requires 2GB RAM and two CPUs and comes with two Ethernet interfaces, but we will need more for clustering. This is because:

1. ge-0/0/0 is used for the management interface (can't be changed)
2. ge-0/0/1 is used for the control link (can't be changed)
3. ge-0/0/2 is going to be used for the fabric link (this is configurable)


Note: Although the minimum memory requirement is 2GB, I use 1GB for my testing purposes. It also works, but it isn't the recommended amount of memory.

As we lose three interfaces to these roles, we will add six more interfaces to the two already configured. (A maximum of 10 interfaces can be added; check the release notes for this limitation.)

Add Internal ESX vSwitch

We will need an internal vSwitch on the ESX platform for our HA and control links. It doesn't have to have any physical adapter. You can follow Configuration->Add Networking->Virtual Machine->Create a vSphere standard switch to add a new internal switch. You should have something like below after the addition:

In my case, the virtual switch's name is vSwitch5. Now add two normal interfaces with no VLAN assigned.

Both interfaces should be identical in their Port Group properties. Then we need to increase the MTU. To do this, in the vSwitch5 Properties window, click "vSwitch" under the Ports list and then click "Edit". Set the MTU to 9000 as below and apply. (Ignore the warning about no physical adapter being assigned; we don't need one for the HA interfaces.)
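If you prefer the ESXi shell over the vSphere Client, the same vSwitch and MTU settings can also be applied with esxcli. This is a minimal sketch of the equivalent steps; the port group names are placeholders of mine, not from the original setup:

# create the internal vSwitch (no uplink needed for the HA links)
esxcli network vswitch standard add --vswitch-name=vSwitch5
# raise the MTU so the fabric link can carry jumbo frames
esxcli network vswitch standard set --vswitch-name=vSwitch5 --mtu=9000
# add port groups for the control and fabric links (placeholder names)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch5 --portgroup-name=ha-control
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch5 --portgroup-name=ha-fabric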


Assign Cluster Interfaces

We have configured the internal vSwitch and HA interfaces; now it is time to assign these to the instances.


Note: Adapter 3 will be fab0 for node0 and fab1 for node1.

Below is a simple table showing how the interfaces are assigned in ESX and Firefly:

ESX to Firefly Interface Mapping

Network adapter 1 ---> ge-0/0/0
Network adapter 2 ---> ge-0/0/1
Network adapter 3 ---> ge-0/0/2
Network adapter 4 ---> ge-0/0/3
Network adapter 5 ---> ge-0/0/4
Network adapter 6 ---> ge-0/0/5
Network adapter 7 ---> ge-0/0/6
Network adapter 8 ---> ge-0/0/7

Pretty intuitive:)

My management interface VLAN is vlan4000_MGT. This is the VLAN through which I will connect to my VMs. We assigned adapters 2 and 3 to the Control and FAB port groups on vSwitch5.



Exactly the same port assignment must be done on the firefly01 VM too, as the two VMs will be in a cluster. Now it is time to boot the two instances.

After booting both Firefly VMs, you will see the Amnesiac prompt. There isn't any password yet, and you can log in with the root username. From now on, cluster configuration is the same as on any branch SRX. To configure the cluster smoothly, follow the steps below on both nodes.

firefly00 (node0)

>conf
#delete interfaces
#delete security
#set system root-authentication plain-text-password
#commit and-quit
>set chassis cluster cluster-id 2 node 0 reboot

Note: As I already have another Firefly cluster, I have chosen 2 as the cluster id.



firefly01 (node1)

>conf
#delete interfaces
#delete security
#set system root-authentication plain-text-password
#commit and-quit
>set chassis cluster cluster-id 2 node 1 reboot

Firefly Interface Configuration

At this point, you should have two Firefly instances running, one showing {primary:node0} and the other {secondary:node1} on the prompt, but we still don't have management connectivity. We will now do the cluster groups configuration and access the VMs via their IP addresses instead of the console:

firefly00 (node0)

set groups node0 system host-name firefly00-cl2
set groups node0 interfaces fxp0 unit 0 family inet address 100.100.100.203/24
set groups node1 system host-name firefly01-cl2
set groups node1 interfaces fxp0 unit 0 family inet address 100.100.100.204/24
set apply-groups ${node}
commit and-quit
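If you want to confirm that the node-specific group really is being applied, Junos can expand apply-groups for you. This is just an extra sanity check I'm adding here, not part of the original steps:

{primary:node0}
root@firefly00-cl2> show configuration | display inheritance

The output annotates each inherited statement (such as the host-name) with the group it came from, so you can verify that the ${node} expansion picked the right group.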

After this configuration, you should be able to reach your cluster nodes via their fxp0 interfaces. You don't need any security policy for these interfaces to connect. As you can see below, I could SSH from my management network to the firefly00 node.

root@srx100> ssh root@100.100.100.203



The authenticity of host '100.100.100.203 (100.100.100.203)' can't be established.
ECDSA key fingerprint is 68:26:63:11:6d:63:91:7e:e7:69:d6:6e:01:b7:7b:b3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '100.100.100.203' (ECDSA) to the list of known hosts.
Password:
--- JUNOS 12.1X46-D10.2 built 2013-12-18 02:43:42 UTC

root@firefly00-cl2%

Redundancy Group Configuration

The topology I am trying to achieve is below. The host debian1 will reach the Internet via the Firefly cluster. Its gateway is 10.12.1.20 on reth1, which belongs to redundancy group 1. There is only one traffic redundancy group, and once it fails, the cluster should fail over to node1. As you can see in the topology, the second node's interfaces start with ge-7/0/x once it is part of the cluster.

Chassis Cluster Config


set chassis cluster reth-count 2
set chassis cluster redundancy-group 0 node 0 priority 100
set chassis cluster redundancy-group 0 node 1 priority 99
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 99

Redundant Interface Config

set interfaces reth0.0 family inet address 10.11.1.10/24
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces ge-0/0/3 gigether-options redundant-parent reth0
set interfaces ge-7/0/3 gigether-options redundant-parent reth0

set interfaces reth1.0 family inet address 10.12.1.20/24
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces ge-0/0/4 gigether-options redundant-parent reth1
set interfaces ge-7/0/4 gigether-options redundant-parent reth1

set routing-options static route 0/0 next-hop 10.11.1.1

Note: Cluster’s default gateway is SRX100 device.

Security zone and Policy Config

set security zones security-zone external interfaces reth0.0
set security zones security-zone internal interfaces reth1.0
set security zones security-zone internal host-inbound-traffic system-services all

set security policies from-zone internal to-zone external policy allow-all-internal match source-address any
set security policies from-zone internal to-zone external policy allow-all-internal match destination-address any
set security policies from-zone internal to-zone external policy allow-all-internal match application any
set security policies from-zone internal to-zone external policy allow-all-internal then permit
commit and-quit
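Before moving on to traffic tests, you can quickly verify that the policy committed as intended. This check is my addition, not part of the original walkthrough:

{primary:node0}
root@firefly00-cl2> show security policies from-zone internal to-zone external

It should list allow-all-internal matching any source, destination and application, with action permit.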

Check the Interfaces and cluster status

{primary:node0}
root@firefly00-cl2> show interfaces terse
Interface               Admin Link Proto    Local                 Remote
gr-0/0/0                up    up
ip-0/0/0                up    up
ge-0/0/2                up    up
ge-0/0/3                up    up
ge-0/0/3.0              up    up   aenet    --> reth0.0
ge-0/0/4                up    up
ge-0/0/4.0              up    up   aenet    --> reth1.0
ge-0/0/5                up    up
ge-0/0/6                up    up
ge-0/0/7                up    up
ge-7/0/2                up    up
ge-7/0/3                up    up
ge-7/0/3.0              up    up   aenet    --> reth0.0
ge-7/0/4                up    up
ge-7/0/4.0              up    up   aenet    --> reth1.0
ge-7/0/5                up    up
ge-7/0/6                up    up
ge-7/0/7                up    up
dsc                     up    up
fab0                    up    down
fab0.0                  up    down inet     30.33.0.200/24
fab1                    up    down
fab1.0                  up    down inet     30.34.0.200/24
fxp0                    up    up
fxp0.0                  up    up   inet     100.100.100.203/24
fxp1                    up    up
fxp1.0                  up    up   inet     129.32.0.1/2



                                   tnp      0x1200001
gre                     up    up
ipip                    up    up
lo0                     up    up
lo0.16384               up    up   inet     127.0.0.1           --> 0/0
lo0.16385               up    up   inet     10.0.0.1            --> 0/0
                                            10.0.0.16           --> 0/0
                                            128.0.0.1           --> 0/0
                                            128.0.0.4           --> 0/0
                                            128.0.1.16          --> 0/0
lo0.32768               up    up
lsi                     up    up
mtun                    up    up
pimd                    up    up
pime                    up    up
pp0                     up    up
ppd0                    up    up
ppe0                    up    up
reth0                   up    up
reth0.0                 up    up   inet     10.11.1.10/24
reth1                   up    up
reth1.0                 up    up   inet     10.12.1.20/24
st0                     up    up
tap                     up    up

{primary:node0}
root@firefly00-cl2> show chassis cluster status
Cluster ID: 2
Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0                   100         primary        no       no
    node1                   99          secondary      no       no

Redundancy group: 1 , Failover count: 1
    node0                   0           primary        no       no
    node1                   0           secondary      no       no

Hmmm… there is something wrong. I don't see the priorities for RG1. Why? Let's check the cluster interfaces.



{primary:node0}
root@firefly00-cl2> show chassis cluster interfaces
Control link status: Up

Control interfaces:
    Index   Interface        Status   Internal-SA
    0       fxp1             Up       Disabled

Fabric link status: Down

Fabric interfaces:
    Name    Child-interface    Status
                               (Physical/Monitored)
    fab0
    fab0
    fab1
    fab1

Redundant-ethernet Information:
    Name         Status      Redundancy-group
    reth0        Up          1
    reth1        Up          1

Redundant-pseudo-interface Information:
    Name         Status      Redundancy-group
    lo0          Up          0

Aha… I forgot to configure the fabric links. I always forget to do something :) As you might remember from the beginning of the post, you can choose any interface you want for the fabric link, and I have chosen the ge-0/0/2 interfaces on both nodes.

Configure Fabric Link

set interfaces fab0 fabric-options member-interfaces ge-0/0/2
set interfaces fab1 fabric-options member-interfaces ge-7/0/2
commit and-quit

Check the cluster status again

{primary:node0}
root@firefly00-cl2> show chassis cluster status
Cluster ID: 2
Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0                   100         primary        no       no
    node1                   99          secondary      no       no

Redundancy group: 1 , Failover count: 2
    node0                   100         secondary      no       no
    node1                   99          primary        no       no

{primary:node0}
root@firefly00-cl2> show chassis cluster interfaces
Control link status: Up

Control interfaces:
    Index   Interface        Status   Internal-SA
    0       fxp1             Up       Disabled

Fabric link status: Up

Fabric interfaces:
    Name    Child-interface    Status
                               (Physical/Monitored)
    fab0    ge-0/0/2           Up   / Up
    fab0
    fab1    ge-7/0/2           Up   / Up
    fab1

Redundant-ethernet Information:
    Name         Status      Redundancy-group
    reth0        Up          1
    reth1        Up          1

Redundant-pseudo-interface Information:
    Name         Status      Redundancy-group
    lo0          Up          0

Yes, now we have fabric links configured and UP.
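If you want extra assurance that heartbeats are actually flowing over the control and fabric links, the statistics counters are worth a look. This is an additional check I like to run; it isn't required for the setup:

{primary:node0}
root@firefly00-cl2> show chassis cluster statistics

The heartbeat and probe counters for both links should keep increasing on successive runs.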

Now let's do a ping test from the debian1 host to the cluster's reth1 IP.

root@debian1:~# ping 10.12.1.20
PING 10.12.1.20 (10.12.1.20) 56(84) bytes of data.
From 10.12.1.10 icmp_seq=1 Destination Host Unreachable
From 10.12.1.10 icmp_seq=2 Destination Host Unreachable
From 10.12.1.10 icmp_seq=3 Destination Host Unreachable

Hmm, something isn’t working. What have I forgotten? I must tell you that the number one mistake I make when working with ESX is that port assignment. Since I haven’t assigned the interfaces to their respective VLAN ports, ping doesn’t work. Let assign.



As you can see I assigned Adapter 4 (ge-0/0/3) to vlan2001 and Adapter 5 (ge-0/0/4) to vlan2002. These are child links of reth0 and reth1 respectively.

Let’s try ping once again:

root@debian1:~# ping 10.12.1.20
PING 10.12.1.20 (10.12.1.20) 56(84) bytes of data.
64 bytes from 10.12.1.20: icmp_req=1 ttl=64 time=45.1 ms
64 bytes from 10.12.1.20: icmp_req=2 ttl=64 time=2.53 ms
64 bytes from 10.12.1.20: icmp_req=3 ttl=64 time=0.796 ms
^C
--- 10.12.1.20 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.796/16.160/45.154/20.514 ms

Heyyyy, it works!

Now ping 8.8.8.8 and see what the session table looks like:

{primary:node0}
root@firefly00-cl2> show security flow session protocol icmp
node0:
--------------------------------------------------------------------------
Total sessions: 0

node1:
--------------------------------------------------------------------------

Session ID: 1217, Policy name: allow-all-internal/4, State: Active, Timeout: 2, Valid
  In: 10.12.1.10/4 --> 8.8.8.8/12114;icmp, If: reth1.0, Pkts: 1, Bytes: 84
  Out: 8.8.8.8/12114 --> 10.12.1.10/4;icmp, If: reth0.0, Pkts: 1, Bytes: 84

Session ID: 1218, Policy name: allow-all-internal/4, State: Active, Timeout: 2, Valid
  In: 10.12.1.10/5 --> 8.8.8.8/12114;icmp, If: reth1.0, Pkts: 1, Bytes: 84
  Out: 8.8.8.8/12114 --> 10.12.1.10/5;icmp, If: reth0.0, Pkts: 1, Bytes: 84
Total sessions: 2

Hmm, sessions are flowing through node1, which isn't what I wanted (recall that RG1 became primary on node1, as the cluster status output above showed). Let's fail over RG1 to node0.

{primary:node0}
root@firefly00-cl2> request chassis cluster failover redundancy-group 1 node 0
node0:
--------------------------------------------------------------------------
Initiated manual failover for redundancy group 1

{primary:node0}


root@firefly00-cl2> show security flow session protocol icmp
node0:
--------------------------------------------------------------------------

Session ID: 288, Policy name: allow-all-internal/4, State: Active, Timeout: 2, Valid
  In: 10.12.1.10/179 --> 8.8.8.8/12114;icmp, If: reth1.0, Pkts: 1, Bytes: 84
  Out: 8.8.8.8/12114 --> 10.12.1.10/179;icmp, If: reth0.0, Pkts: 1, Bytes: 84

Session ID: 295, Policy name: allow-all-internal/4, State: Active, Timeout: 2, Valid
  In: 10.12.1.10/180 --> 8.8.8.8/12114;icmp, If: reth1.0, Pkts: 1, Bytes: 84
  Out: 8.8.8.8/12114 --> 10.12.1.10/180;icmp, If: reth0.0, Pkts: 1, Bytes: 84

Session ID: 296, Policy name: allow-all-internal/4, State: Active, Timeout: 4, Valid
  In: 10.12.1.10/181 --> 8.8.8.8/12114;icmp, If: reth1.0, Pkts: 1, Bytes: 84
  Out: 8.8.8.8/12114 --> 10.12.1.10/181;icmp, If: reth0.0, Pkts: 1, Bytes: 84
Total sessions: 3

node1:
--------------------------------------------------------------------------
Total sessions: 0

After the failover, we can see that packets are flowing through node0.
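One caveat about manual failovers: the redundancy group stays flagged as being in manual failover mode afterwards, and you have to clear that flag before you can initiate another manual failover. A minimal sketch of the reset, based on standard branch SRX behaviour:

{primary:node0}
root@firefly00-cl2> request chassis cluster failover reset redundancy-group 1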

In this post, I wanted to show how the virtual Firefly SRX can be used on an ESX host. I haven't configured interface monitoring, as there doesn't seem to be much point in monitoring a virtual port that should be UP at all times (or maybe there is a point, but I don't know it). You can also use IP monitoring to leverage the cluster's failover functionality. I hope this post helps you get up to speed on Firefly quickly. If you have any questions or anything to contribute to this post, don't hesitate!
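For completeness, here is roughly what such an IP monitoring configuration could look like, probing the upstream gateway 10.11.1.1 from this topology. Treat it as a hedged sketch rather than a tested config: the weights, the threshold and the secondary address 10.11.1.11 are illustrative values I picked, not something from my running cluster:

set chassis cluster redundancy-group 1 ip-monitoring global-weight 255
set chassis cluster redundancy-group 1 ip-monitoring global-threshold 100
set chassis cluster redundancy-group 1 ip-monitoring family inet 10.11.1.1 weight 255
set chassis cluster redundancy-group 1 ip-monitoring family inet 10.11.1.1 interface reth0.0 secondary-ip-address 10.11.1.11

If the probes towards 10.11.1.1 fail, the accumulated weight crosses the global threshold and RG1 fails over to the other node.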

Have a nice fireflying!