
Nexus Technology Labs: Virtual Port Channels (vPC)

Active/Active NIC Teaming with vPC
Last updated: April 11, 2013

Active/Active NIC Teaming with vPC Diagram

(/uploads/workbooks/images/diagrams/VQC9DEWKRSFGAxmOqpLE.png)

    Task

    Configure a vPC Domain between N5K1 and N5K2 as follows:

    N5K1 and N5K2 are the vPC Peers.

    Create vPC Domain 1 on the peers, and use the mgmt0 ports for the vPC Peer Keepalive

    Link.

    Configure all links between the vPC peers as Port-Channel 1, and use this as the vPC

    Peer Link.

    The vPC Peer Link should use LACP negotiation, be an 802.1q trunk link, and be an

    STP Network Port.

    Configure vPCs from N5K1 and N5K2 to Server 1 and Server 2 as follows:

    Configure N5K1 and N5K2's links to Server 1 as Port-Channel 101.

    Port-Channel 101 should be configured as an access port in VLAN 10, an STP Edge

    Port, and as vPC 101.


    Configure N5K1 and N5K2's links to Server 2 as Port-Channel 102.

    Port-Channel 102 should be configured as an access port in VLAN 10, an STP Edge

    Port, and as vPC 102.

    Configure Active/Active NIC Teaming on Server 1 and Server 2 as follows:

    Configure a NIC Team on Server 1 using 802.3ad (LACP); both links to N5K1 and N5K2

    should be in this team, and it should use the IP address 10.0.0.1/24.

    Configure a NIC Team on Server 2 using 802.3ad (LACP); both links to N5K1 and N5K2

    should be in this team, and it should use the IP address 10.0.0.2/24.

    When complete, ensure that Server 1 and Server 2 have IP connectivity to each other, and that

    traffic between them uses both uplinks to N5K1 and N5K2 simultaneously.

    Configuration

    N5K1:

    feature lacp

    feature vpc

    !

    vlan 10

    !

    vpc domain 1

    peer-keepalive destination 192.168.0.52

    !

    interface port-channel1

    switchport mode trunk

    spanning-tree port type network

vpc peer-link
!

    interface port-channel101

    switchport mode access

    switchport access vlan 10

    spanning-tree port type edge

    vpc 101

    !

    interface port-channel102

    switchport mode access

    switchport access vlan 10

    spanning-tree port type edge

    vpc 102

    !

    interface Ethernet1/1

    switchport mode access

    switchport access vlan 10

    channel-group 101 mode active

    speed 1000

    !

    interface Ethernet1/2

    switchport mode access

    switchport access vlan 10

    channel-group 102 mode active


    speed 1000

    !

    interface Ethernet1/3 - 5

    switchport mode trunk

    spanning-tree port type network

    channel-group 1 mode active

    N5K2:

    feature lacp

    feature vpc

    !

    vlan 10

    !

    vpc domain 1

    peer-keepalive destination 192.168.0.51

    !

    interface port-channel1

switchport mode trunk
spanning-tree port type network

    vpc peer-link

    !

    interface port-channel101

    switchport mode access

    switchport access vlan 10

    spanning-tree port type edge

    vpc 101

    !

    interface port-channel102

    switchport mode access

    switchport access vlan 10

    spanning-tree port type edge

    vpc 102

    !

    interface Ethernet1/1

    switchport mode access

    switchport access vlan 10

    channel-group 101 mode active

    speed 1000

    !

    interface Ethernet1/2

    switchport mode access

    switchport access vlan 10

    channel-group 102 mode active

    speed 1000

    !

interface Ethernet1/3 - 5
switchport mode trunk

    spanning-tree port type network

    channel-group 1 mode active
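Because vPC performs consistency checks between the peers, a Type-1 parameter mismatch (for example, a different trunking mode or STP port type on Port-Channel 1) will keep the Peer Link or a vPC from coming up. If anything stays down after entering the configuration above, the following commands can help isolate the mismatch (output omitted here):

N5K1# show vpc consistency-parameters global
N5K1# show vpc consistency-parameters interface port-channel 101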


    Verification

    In this design, the end servers are dual attached to separate access switches, N5K1 and N5K2.

    Additionally, N5K1 and N5K2 are configured for Virtual Port Channel (vPC), which is a type of

    Multi-Chassis EtherChannel (MEC). vPC means that the downstream devices, Server 1 and

    Server 2 in this case, see the upstream switches (the vPC Peers) as a single switch. In other

    words, while the physical topology is a triangle, the logical topology is a point-to-point port

    channel.

    vPC configuration is made up of three main components, the vPC Peer Keepalive Link, the vPC

    Peer Link, and the vPC Member Ports. The vPC Keepalive Link is any layer 3 interface, including

    the mgmt0 port, that is used to send UDP pings between the vPC peers. If the UDP ping is

    successful over the keepalive link, the peers are considered to be reachable. The second

    portion, the vPC Peer Link, is used to synchronize the control plane between the vPC Peers.

    The Peer Link is used for operations such as MAC address table synchronization, ARP table

    synchronization, IGMP Snooping synchronization, and so on. The Peer Link is a port channel

    made up of at least two 10Gbps links, and it should be configured as a layer 2 trunk link that

runs as STP port type network. The final component, the vPC Member Ports, consists of the port channel

interfaces that go down to the end hosts or downstream devices.
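As a quick recap, the three components map onto the N5K1 configuration from the Configuration section roughly as follows (condensed, with annotation comments added):

! 1. vPC Peer Keepalive Link: UDP probes between the peers (here sourced from
!    mgmt0, which typically lives in the management VRF)
vpc domain 1
peer-keepalive destination 192.168.0.52
! 2. vPC Peer Link: Port-Channel 1, synchronizes the control plane between the peers
interface port-channel1
vpc peer-link
! 3. vPC Member Port: Port-Channel 101 toward Server 1
interface port-channel101
vpc 101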

    The first step in vPC verification is to ensure that the vPC Peer Keepalive is up and that the vPC

    Peer Link is up, as shown below.

    N5K1# show vpc

    Legend:

    (*) - local vPC is down, forwarding via vPC peer-link

    vPC domain id : 1

    Peer status : peer adjacency formed ok

    vPC keep-alive status : peer is alive

    Configuration consistency status: success

    Per-vlan consistency status : success

    Type-2 consistency status : success

    vPC role : primary

    Number of vPCs configured : 2

    Peer Gateway : Disabled

    Dual-active excluded VLANs : -

    Graceful Consistency Check : Enabled

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ --------------------------------------------------
1    Po1    up     1,10
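Beyond the summary above, the keepalive and the role election can also be inspected directly on either peer; for example (output not captured in this transcript):

N5K1# show vpc peer-keepalive
N5K1# show vpc role

show vpc peer-keepalive reports the keepalive status, the source and destination addresses in use, and the time since the last keepalive was received, while show vpc role shows the vPC role, system priority, and vPC system MAC.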

Next, the vPC Member Ports are configured toward the end hosts. In the output below, Port-Channel101 to Server 1 shows its vPC as down, because the vPC has been configured on the switch side but not yet on the server side. The end result is that the link runs as a normal access port, as indicated by the Individual (I) flag in the show port-channel summary output.


N5K1# show vpc 101
vPC status
----------------------------------------------------------------------------
id     Port        Status Consistency Reason                     Active vlans
------ ----------- ------ ----------- -------------------------- ------------
101    Po101       down*  success     success                    -

N5K1# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/3(P)    Eth1/4(P)    Eth1/5(P)
101   Po101(SD)   Eth      LACP      Eth1/1(I)
102   Po102(SU)   Eth      LACP      Eth1/2(P)
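For more detail on why a member port is running as Individual, the LACP state of the physical interface can also be examined; for example (output omitted here):

N5K1# show port-channel database interface port-channel 101
N5K1# show lacp interface ethernet 1/1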

    Next, the end server is configured for NIC Teaming. In the case of the Intel ANS software, an

    LACP-based channel is called 802.3ad Dynamic Link Aggregation.
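The Intel ANS screenshots for this step are not included in the transcript. Purely as an illustration of the same idea on a generic Linux host (not the method used in this lab), an 802.3ad team covering both uplinks with Server 1's addressing could be created with iproute2 roughly as follows; the interface names eth0 and eth1 are assumed:

# Illustrative only: 802.3ad (LACP) bond, analogous to an Intel ANS
# "802.3ad Dynamic Link Aggregation" team; eth0/eth1 are assumed NIC names.
modprobe bonding                           # ensure the bonding driver is loaded
ip link add bond0 type bond mode 802.3ad   # create the LACP-capable bond
ip link set eth0 down
ip link set eth0 master bond0              # uplink toward N5K1
ip link set eth1 down
ip link set eth1 master bond0              # uplink toward N5K2
ip link set bond0 up
ip addr add 10.0.0.1/24 dev bond0          # IP lives on the logical team interface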


    After the server signals the switch with LACP, the channel can form and the vPC comes up, as

    shown below.

N5K1#
2013 Mar 3 18:58:39 N5K1 %ETHPORT-5-IF_DOWN_INITIALIZING: Interface Ethernet1/1 is down (Initializing)
2013 Mar 3 18:58:39 N5K1 %ETH_PORT_CHANNEL-5-PORT_INDIVIDUAL_DOWN: individual port Ethernet1/1 is down
2013 Mar 3 18:58:39 N5K1 %ETHPORT-5-SPEED: Interface port-channel101, operational speed changed to 1 Gbps
2013 Mar 3 18:58:39 N5K1 %ETHPORT-5-IF_DUPLEX: Interface port-channel101, operational duplex mode changed to Full
2013 Mar 3 18:58:39 N5K1 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface port-channel101, operational Receive Flow Control state changed to off
2013 Mar 3 18:58:39 N5K1 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface port-channel101, operational Transmit Flow Control state changed to off
2013 Mar 3 18:58:42 N5K1 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel101: Ethernet1/1 is up
N5K1# 2013 Mar 3 18:58:51 N5K1 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel101: first operational port changed from none to Ethernet1/1
2013 Mar 3 18:58:51 N5K1 %ETHPORT-5-IF_UP: Interface Ethernet1/1 is up in mode access
2013 Mar 3 18:58:51 N5K1 %ETHPORT-5-IF_UP: Interface port-channel101 is up in mode access

N5K1# show vpc 101
vPC status
----------------------------------------------------------------------------
id     Port        Status Consistency Reason                     Active vlans
------ ----------- ------ ----------- -------------------------- ------------
101    Po101       up     success     success                    10

    The IP configuration of the server goes on the logical NIC Team interface, similar to how NX-OS

    and IOS use the logical Port-Channel interface to reference the physical members of the

    channel.


Testing the traffic flows over the vPC in the data plane becomes a little difficult in this case. Each device that has a port channel configured ultimately controls how its own outbound traffic is hashed. For example, if a traffic flow is moving from Server 1 to Server 2, Server 1 first determines which link to send the flow out on, and then the upstream switches choose which outbound links to use, until the final destination is reached. This is an issue because you will not see an even distribution of traffic among the NIC Team and vPC Member Ports unless there is a sufficiently large number of flows with diverse source and destination addresses. Although the port-channel load-balancing method can be changed on the Nexus switches, it can't be changed in the Intel NIC drivers in this design. Therefore, to fully verify that Active/Active forwarding is working, we need more than one destination address to send to. This is achieved below by configuring a secondary IP address on the NIC Team of Server 1.
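Before generating traffic, the hash algorithm in use on the Nexus side can be checked, and changed globally if desired; for example (the exact list of hash options varies by platform and NX-OS release):

N5K1# show port-channel load-balance
N5K1# configure terminal
N5K1(config)# port-channel load-balance ethernet source-dest-ip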

    Next, Server 2 is configured to send separate UDP flows to each of the addresses on Server 1

    with the iPerf app, as shown below.
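The iPerf screenshots themselves are not part of this transcript. As an illustration, assuming the secondary address added to Server 1's NIC Team is 10.0.0.11 (hypothetical; the actual address is not shown), the two parallel UDP streams from Server 2 could be started roughly as follows:

On Server 1 (UDP listener):
iperf -s -u

On Server 2 (run in two separate sessions, one flow per destination address):
iperf -c 10.0.0.1 -u -b 5M -t 60
iperf -c 10.0.0.11 -u -b 5M -t 60

With two flows toward different destination IPs, the hashing on Server 2's team and on the switches has a chance to place the flows on different physical links.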


On the network side, the traffic flows in the data plane can be verified by looking at the interface counters of the vPC Member Ports. If the input bandwidth counter from Server 2 is split between both N5K1 and N5K2, we know that Server 2 is distributing the load across both members of its NIC Team in an Active/Active manner. Furthermore, if the output bandwidth counters from N5K1 and N5K2 toward Server 1 are split between them, we also know that the switches are doing Active/Active forwarding to the destination. This can be seen in the output below.


N5K1# show interface e1/1-2 | in rate|Ethernet
Ethernet1/1 is up
Hardware: 1000/10000 Ethernet, address: 000d.eca2.ed88 (bia 000d.eca2.ed88)
30 seconds input rate 946992 bits/sec, 198 packets/sec
30 seconds output rate 5899400 bits/sec, 926 packets/sec
input rate 946.99 Kbps, 198 pps; output rate 5.90 Mbps, 926 pps
Ethernet1/2 is up
Hardware: 1000/10000 Ethernet, address: 000d.eca2.ed89 (bia 000d.eca2.ed89)
30 seconds input rate 5899032 bits/sec, 926 packets/sec
30 seconds output rate 947384 bits/sec, 199 packets/sec
input rate 5.90 Mbps, 926 pps; output rate 947.38 Kbps, 199 pps

N5K2# show interface e1/1-2 | in rate|Ethernet
Ethernet1/1 is up
Hardware: 1000/10000 Ethernet, address: 000d.eca4.7408 (bia 000d.eca4.7408)
30 seconds input rate 40 bits/sec, 0 packets/sec
30 seconds output rate 6211424 bits/sec, 975 packets/sec
input rate 40 bps, 0 pps; output rate 6.21 Mbps, 975 pps
Ethernet1/2 is up
Hardware: 1000/10000 Ethernet, address: 000d.eca4.7409 (bia 000d.eca4.7409)
30 seconds input rate 6211216 bits/sec, 975 packets/sec
30 seconds output rate 144 bits/sec, 0 packets/sec
input rate 6.21 Mbps, 975 pps; output rate 144 bps, 0 pps

    Note that on N5K1 the input rate of E1/2, which connects to Server 2, matches the output rate of

    E1/1, which connects to Server 1. Likewise, on N5K2 the input rate of E1/2, which connects to

    Server 2, matches the output rate of E1/1, which connects to Server 1. Also note that these

    traffic flows do not cross the vPC Peer Link between the Nexus 5Ks, because this link is

excluded from the data plane under normal, correct operation. Verification of the counters of

Port-Channel1, the vPC Peer Link, shows little to no traffic being sent or received on the port.

    N5K1# show interface port-channel 1 | include rate

    30 seconds input rate 944 bits/sec, 1 packets/sec

    30 seconds output rate 1168 bits/sec, 1 packets/sec

    input rate 976 bps, 1 pps; output rate 1.07 Kbps, 1 pps

    The output shown above indicates the normal forwarding logic of vPC, which is that the vPC Peer

    will first attempt to forward traffic to a local vPC Member Port instead of crossing the vPC Peer

    Link. The only time that this rule is normally broken for known unicast traffic is if the local vPC

Member Port is down. For example, if a failure occurs between N5K1 and Server 1, traffic that N5K1 receives from Server 2 destined for Server 1 must be sent over the vPC Peer Link; otherwise it would be blackholed. This can be seen below.


Normally this detection is immediate based on link failure, but in this topology design Server 1 is a Virtual Machine that is not directly physically connected to N5K1. When the LACP timer expires, N5K1 detects that its LACP partner (Server 1) is gone, and the vPC Member Port goes down.


N5K1#
2013 Mar 3 22:54:34 N5K1 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel101: Ethernet1/1 is down
2013 Mar 3 22:54:34 N5K1 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel101: port-channel101 is down

    N5K1# show vpc

    Legend:

    (*) - local vPC is down, forwarding via vPC peer-link

    vPC domain id : 1

    Peer status : peer adjacency formed ok

    vPC keep-alive status : peer is alive

    Configuration consistency status: success

    Per-vlan consistency status : success

Type-2 consistency status : success

vPC role : primary

    Number of vPCs configured : 2

    Peer Gateway : Disabled

    Dual-active excluded VLANs : -

    Graceful Consistency Check : Enabled

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ --------------------------------------------------
1    Po1    up     1,10

vPC status
----------------------------------------------------------------------------
id     Port        Status Consistency Reason                     Active vlans
------ ----------- ------ ----------- -------------------------- ------------
101    Po101       down*  success     success                    -
102    Po102       up     success     success                    10

    Now any traffic that comes in on N5K1 from Server 2 that is going toward Server 1 must transit

    the vPC Peer Link.


    N5K1# show interface port-channel 1 | include rate

    30 seconds input rate 1784 bits/sec, 1 packets/sec

    30 seconds output rate 5520864 bits/sec, 862 packets/sec

input rate 992 bps, 1 pps; output rate 5.67 Mbps, 856 pps

This situation normally happens only during a failure event. It is highly undesirable in a vPC design, because the vPC Peer Link usually has much lower bandwidth (such as 20Gbps) than the aggregate of the vPC Member Ports (such as 400Gbps+, depending on port density), so the Peer Link can quickly become overwhelmed if it has to be used in the data plane.
