BRKSAN-2047


Transcript of BRKSAN-2047

  • 8/12/2019 BRKSAN-2047

    1/121

    © 2012 Cisco and/or its affiliates. All rights reserved. Cisco Public

    FCoE Design, Operations and Management Best Practices (BRKSAN-2047)


    Unified Fabric and FCoE: What?

    [Topology diagram: FCoE storage attached through Nexus 5000, Nexus 2000, Nexus 7000 and MDS 9500 switches, including a VNP-attached device and a B22 FEX for HP C7000/C3000 blade chassis in an enhanced vPC (EvPC) design. Legend: Ethernet, Fibre Channel, dedicated FCoE link, converged link.]


    Unified Fabric and FCoE: Why?

    FCoE benefits:
    - Encapsulation of FC frames over Ethernet
    - Enables FC to run on a lossless Ethernet network
    - Fewer cables: both block I/O and Ethernet traffic co-exist on the same cable
    - Fewer adapters needed; lower overall power
    - Interoperates with existing SANs; management of SANs remains constant
    - No gateway


    Unified Fabric: Why?

    Ethernet economic model: embedded on the motherboard; integrated into the OS; many suppliers; mainstream technology; widely understood; interoperability by design.

    FC economic model: always a stand-alone card; specialized drivers; few suppliers; specialized technology; special expertise required; interoperability by test.


    Unified Fabric: Why?

    [Stack diagram comparing five computer-system I/O paths: iSCSI appliance, iSCSI gateway to FC, NAS appliance, NAS gateway to FC, and FCoE SAN. Each stack runs Application / File System / Volume Manager down through the SCSI device driver with an iSCSI, NFS/CIFS or FCoE layer, then TCP/IP stack and NIC, FC HBA or CNA. The iSCSI and FCoE paths carry block I/O, the NAS paths carry file I/O, all over Ethernet.]

    - Ability to re-provision any compute unit to leverage any access method to the data stored on the spindle
    - Serialized re-use (e.g. boot from SAN and run from NAS)
    - Virtualization requires that the storage fabric exist everywhere the IP fabric does


    Unified Fabric: FCoE Ecosystem Is Gaining Momentum

    [Roadmap chart of adapters, servers, switches and storage, from current through Q4 CY'10, Q1 CY'11, Q2 CY'11 and 2H CY'11, including 10GE LOM with FCoE.]


    Unified Fabric and FCoE: When? FCoE projected growth (source: Infonetics)


    FCoE - Design, Operations and Management Best Practices: Agenda

    - Unified Fabric: What and When
    - FCoE Protocol Fundamentals
    - Nexus FCoE Capabilities
    - FCoE Network Requirements and Design Considerations
    - DCB & QoS: Ethernet Enhancements
    - Single Hop Design
    - Multi-Hop Design
    - Futures


    FCoE Protocol Fundamentals: Standards for I/O Consolidation

    - FCoE (FC-BB-5, www.T11.org): Fibre Channel on network media. Completed June 2009; published by ANSI in May 2010.
    - DCB (IEEE 802.1): completed March 2011; forwarded to RevCom for publication.
      - PFC (IEEE 802.1Qbb): Priority-based Flow Control
      - ETS (IEEE 802.1Qaz): Priority Grouping / Enhanced Transmission Selection
      - DCBx (IEEE 802.1Qaz): Configuration Verification


    FCoE Protocol Fundamentals: Fibre Channel over Ethernet (FCoE)

    [FCoE frame format, bytes 0-2197: Ethernet header (destination MAC address, source MAC address, optional IEEE 802.1Q tag, Ethertype = FCoE), FCoE header (version, reserved fields, SOF), encapsulated FC frame with CRC (FC header, FC payload, CRC), EOF, reserved, Ethernet FCS.]

    - Fibre Channel over Ethernet provides a high-capacity, lower-cost transport option for block-based storage
    - Two protocols are defined in the standard: the FCoE data plane protocol and the FIP control plane protocol
    - FCoE is a standard: on June 3rd, 2009, the FC-BB-5 working group of T11 completed its work and unanimously approved a final standard for FCoE
    - FCoE is Fibre Channel


    FCoE Protocol Fundamentals: Protocol Organization, Data and Control Plane

    FC-BB-5 defines two protocols required for an FCoE-enabled fabric.

    FCoE (data plane):
    - Carries most of the FC frames and all the SCSI traffic
    - Uses a fabric-assigned (dynamic) MAC address: FPMA
    - IEEE-assigned Ethertype for FCoE traffic is 0x8906

    FIP (FCoE Initialization Protocol, control plane):
    - Used to discover the FC entities connected to an Ethernet cloud
    - Also used to log in to and log out from the FC fabric
    - Uses the unique BIA on the CNA for its MAC
    - IEEE-assigned Ethertype for FIP traffic is 0x8914

    http://www.cisco.biz/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-560403.html
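Since the two protocols are distinguished purely by Ethertype, a receiver can demultiplex them with a simple header check. This is a minimal sketch, assuming untagged Ethernet II frames; the function name is illustrative:

```python
import struct

# IEEE-assigned Ethertypes from FC-BB-5
ETH_P_FCOE = 0x8906  # FCoE data plane
ETH_P_FIP = 0x8914   # FIP control plane

def classify_frame(frame: bytes) -> str:
    """Classify a raw Ethernet frame as FCoE data, FIP control, or other.

    Assumes an untagged Ethernet II frame: 6-byte destination MAC,
    6-byte source MAC, then a 2-byte Ethertype at offset 12.
    """
    if len(frame) < 14:
        raise ValueError("frame too short for an Ethernet header")
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype == ETH_P_FCOE:
        return "FCoE"
    if ethertype == ETH_P_FIP:
        return "FIP"
    return "other"

# A minimal frame with only the 14-byte header populated
hdr = bytes(6) + bytes(6) + struct.pack("!H", ETH_P_FIP)
print(classify_frame(hdr))  # FIP
```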


    FCoE Protocol Fundamentals: It's the Fibre Channel Control Plane + FIP

    - From a Fibre Channel standpoint, it's FC connectivity over a new type of cable called Ethernet
    - From an Ethernet standpoint, it's yet another ULP (Upper Layer Protocol) to be transported

    [Layer diagram: the native FC stack (FC-0 Physical Interface, FC-1 Encoding, FC-2 Framing & Flow Control, FC-3 Generic Services, FC-4 ULP Mapping) next to the FCoE stack, where the Ethernet physical layer, Ethernet Media Access Control and an FCoE Logical End Point replace FC-0/FC-1 beneath FC-2 through FC-4.]


    FCoE Protocol Fundamentals: FCoE Initialization Protocol (FIP)

    Neighbour discovery and configuration (VN to VF, and VE to VE):

    Step 1: FCoE VLAN discovery
    - FIP sends out a multicast to the ALL_FCF_MAC address looking for the FCoE VLAN
    - These FIP frames use the native VLAN

    Step 2: FCF discovery
    - FIP sends out a multicast to the ALL_FCF_MAC address on the FCoE VLAN to find the FCFs answering for that FCoE VLAN
    - FCFs respond back with their MAC address

    Step 3: Fabric login
    - FIP sends a FLOGI request to the FCF_MAC found in Step 2
    - Establishes a virtual link between host and FCF

    [Sequence diagram between the ENode (initiator) and the FCoE switch (FCF): VLAN discovery, FCF discovery solicitation/advertisement, and FLOGI/FDISC with accept are carried by the FCoE Initialization Protocol (FIP); subsequent FC commands and responses are carried by the FCoE protocol.]

    Note: FIP does not carry any Fibre Channel frames.
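The strict ordering of the three FIP steps can be sketched as a tiny state machine; the state and event names below are illustrative, not protocol terminology:

```python
from enum import Enum, auto

class FipState(Enum):
    """States of the simplified FIP login sequence described above."""
    START = auto()
    VLAN_DISCOVERED = auto()
    FCF_DISCOVERED = auto()
    LOGGED_IN = auto()

# Legal transitions: each step must complete before the next begins
TRANSITIONS = {
    (FipState.START, "vlan_discovery"): FipState.VLAN_DISCOVERED,
    (FipState.VLAN_DISCOVERED, "fcf_discovery"): FipState.FCF_DISCOVERED,
    (FipState.FCF_DISCOVERED, "flogi"): FipState.LOGGED_IN,
}

def run_login(events):
    """Drive the ENode through the FIP steps; reject out-of-order events."""
    state = FipState.START
    for ev in events:
        key = (state, ev)
        if key not in TRANSITIONS:
            raise RuntimeError(f"{ev!r} not valid in state {state.name}")
        state = TRANSITIONS[key]
    return state

print(run_login(["vlan_discovery", "fcf_discovery", "flogi"]).name)  # LOGGED_IN
```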


    FCoE Protocol Fundamentals: Fibre Channel Forwarder (FCF)

    - The FCF (Fibre Channel Forwarder) is the Fibre Channel switching element inside an FCoE switch
    - Fibre Channel logins (FLOGIs) happen at the FCF
    - Consumes a Domain ID
    - FCoE encapsulation/decapsulation happens within the FCF
    - Forwarding is based on FC information

    [Diagram: an FCoE switch (FC Domain ID 15) containing an Ethernet bridge with Ethernet ports and an FCF with FC ports.]


    FCoE Protocol Fundamentals: Explicit Roles Still Defined in the Fabric

    - FCoE does not change the explicit port-level relationships between devices (add a "V" to the port type when it is an Ethernet wire)
    - Servers (VN_Ports) connect to switches (VF_Ports)
    - Switches connect to switches via expansion ports (VE_Ports)

    [Diagram: end nodes attach VN_Port-to-VF_Port, either directly to an FCoE switch (FCF) or to an FCoE-NPV switch; the FCoE-NPV switch attaches via its VNP_Port to a VF_Port on the FCF; FCFs interconnect VE_Port-to-VE_Port.]


    FCoE Protocol Fundamentals: CNA, the Converged Network Adapter

    - A Converged Network Adapter (CNA) presents two PCI addresses to the operating system (OS)
    - The OS loads two unique sets of drivers and manages two unique application topologies
    - The server participates in both topologies, since it has two stacks and thus two views of the same unified wire:
      - SAN multi-pathing provides failover between the two fabrics (SAN A and SAN B)
      - NIC teaming provides failover within the same fabric (VLAN)
    - The Ethernet driver is bound to the Ethernet NIC PCI address; the FC driver is bound to the FC HBA PCI address
    - Data VLAN(s) are passed to the Ethernet driver; the FCoE VLAN terminates on the CNA
    - The Nexus edge participates in both distinct FC and IP core topologies
    - The operating system sees a dual-port 10 Gigabit Ethernet adapter and dual-port 4 Gbps Fibre Channel HBAs


    FCoE, Same Model as FC: Connecting to the Fabric

    - Same host-to-target communication:
      - Host has 2 CNAs (one per fabric)
      - Target has multiple ports to connect to the fabric
    - Connect to a capable switch:
      - Port type negotiation (the FC port type is handled by FIP)
      - Speed negotiation
      - DCBX negotiation
    - The access switch is a Fibre Channel Forwarder (FCF): a DCB-capable switch acting as an FCF on the unified wire
    - Dual fabrics are still deployed for redundancy


    FIP and FCoE Login Process: My port is up, can I talk now?

    Step 1: FIP discovery process
    - FCoE VLAN discovery
    - FCF discovery
    - Verifies that the lossless Ethernet network is capable of FCoE transmission

    Step 2: FIP login process
    - Similar to the existing Fibre Channel login process: the ENode sends a FLOGI to the upstream FCF
    - The FCF assigns the host an ENode MAC address to be used for FCoE forwarding (Fabric-Provided MAC Address, FPMA), built from the FC-MAP (0E-FC-xx) and the FC-ID (e.g. 10.00.01)

    [Diagram: a CNA-attached ENode logs in VN_Port-to-VF_Port at the FCF, which connects onward via E_Ports or VE_Ports to the FC or FCoE fabric and target.]


    FCoE Protocol Fundamentals: Fibre Channel over Ethernet Addressing Scheme

    - An ENode FCoE MAC is assigned for each FCID
    - The ENode FCoE MAC is composed of an FC-MAP and an FCID:
      - the FC-MAP is the upper 24 bits of the ENode's FCoE MAC
      - the FCID is the lower 24 bits of the ENode's MAC
    - FCoE forwarding decisions are still made based on FSPF and on the FCID within the ENode MAC
    - Across different physical networks the FC-MAP is used as a fabric identifier; FIP snooping uses it as a mechanism in realizing the ACLs put in place to prevent data corruption
    - Example: the FC-MAP (0E-FC-xx) plus the FCID 10.00.01 form the FC-MAC address
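As a concrete illustration of the scheme above, a short sketch that concatenates a 24-bit FC-MAP with a 24-bit FCID into an FPMA; the default FC-MAP value 0x0EFC00 is assumed:

```python
def fpma(fc_map: int, fcid: int) -> str:
    """Build a Fabric-Provided MAC Address (FPMA) from an FC-MAP and an FCID.

    Per FC-BB-5, the FPMA is the 24-bit FC-MAP (upper bits) concatenated
    with the 24-bit FCID (lower bits). The default FC-MAP is 0x0EFC00.
    """
    if not 0 <= fc_map < 2**24 or not 0 <= fcid < 2**24:
        raise ValueError("FC-MAP and FCID are 24-bit values")
    mac = (fc_map << 24) | fcid
    octets = mac.to_bytes(6, "big")
    return ":".join(f"{b:02x}" for b in octets)

# FCID 10.00.01 from the slide, with the default FC-MAP 0E.FC.00
print(fpma(0x0EFC00, 0x100001))  # 0e:fc:00:10:00:01
```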


    FIP and FCoE Login Process: My port is up, can I talk now? (continued)

    - The FCoE VLAN is manually configured on the Nexus 5K
    - The FCF-MAC address is configured on the Nexus 5K by default once feature fcoe has been configured
      - This is the MAC address returned in step 2 of the FIP exchange
      - This MAC is used by the host to log in to the FCoE fabric

    Note: FIP does not carry any Fibre Channel frames.


    Login complete, almost there: Fabric Zoning

    - Zoning is a feature of the fabric and is independent of the Ethernet transport
    - Zoning can be configured on the Nexus 5000/7000 using the CLI or Fabric Manager
    - If the Nexus 5000 is in NPV mode, zoning is configured on the upstream core switch and pushed to the Nexus 5000
    - Devices acting as Fibre Channel Forwarders participate in the Fibre Channel security (zoning) control
    - DCB-only bridges do not participate in zoning and require additional security mechanisms (ACLs applied along the forwarding path at a per-FLOGI level of granularity)

    Example members in the fabric (FCF with Domain ID 10):
    fcid 0x10.00.01 [pwwn 10:00:00:00:c9:76:fd:31] [initiator]
    fcid 0x11.00.01 [pwwn 50:06:01:61:3c:e0:1a:f6] [target]
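As a sketch of how the initiator and target above could be zoned together on an NX-OS fabric switch; the zone name, zoneset name and VSAN number are illustrative assumptions:

```
switch(config)# zone name z_host_array vsan 10
switch(config-zone)# member pwwn 10:00:00:00:c9:76:fd:31
switch(config-zone)# member pwwn 50:06:01:61:3c:e0:1a:f6
switch(config-zone)# exit
switch(config)# zoneset name zs_fabric_a vsan 10
switch(config-zoneset)# member z_host_array
switch(config-zoneset)# exit
switch(config)# zoneset activate name zs_fabric_a vsan 10
```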


    Login complete: the FLOGI and FCoE databases are populated

    - show flogi database and show fcoe database display the logins and the associated FCIDs, xWWNs and FCoE MAC addresses


    FCoE Protocol Fundamentals: Summary of Terminology (for your reference)

    - CE: Classical Ethernet (non-lossless)
    - DCB & DCBx: Data Center Bridging, Data Center Bridging Exchange
    - FCF: Fibre Channel Forwarder (Nexus 5000, Nexus 7000, MDS 9000)
    - FIP: FCoE Initialization Protocol
    - ENode: a Fibre Channel end node that is able to transmit FCoE frames using one or more ENode MACs
    - FIP snooping bridge
    - FCoE-NPV: Fibre Channel over Ethernet N_Port Virtualization
    - Single-hop FCoE: running FCoE between the host and the first-hop access-level switch
    - Multi-hop FCoE: the extension of FCoE beyond a single hop into the aggregation and core layers of the data centre network
    - Zoning: security method used in Storage Area Networks
    - FPMA: Fabric-Provided MAC Address




    Nexus 5500 Series: Fibre Channel, FCoE and Unified Ports

    - Nexus 5000 and 5500 are full-feature Fibre Channel fabric switches (no support for IVR, FCIP, DMM)
    - Any Unified Port can be configured as Ethernet or Fibre Channel and supports multiple transceiver types:
      - 1G Ethernet copper/fibre
      - 10G Ethernet copper/fibre
      - 10G DCB/FCoE copper/fibre
      - 1/2/4/8G Fibre Channel
    - Change the transceiver and connect evolving end devices:
      - Server 1G-to-10G NIC migration
      - FC-to-FCoE migration
      - FC-to-NAS migration
    - Any device in any rack (servers, FCoE-attached storage, FC-attached storage) can connect to the same edge infrastructure, carrying Ethernet or Fibre Channel traffic


    Nexus 5500 Series: 5548UP/5596UP UPC (Gen-2) and Unified Ports

    [Port layout: slot 1 plus GEM slots 2-4, each split into a contiguous range of Ethernet ports followed by a contiguous range of FC ports.]

    - With the 5.0(3)N1 and later releases, each module can define any number of ports as Fibre Channel (1/2/4/8 G) or Ethernet (either 1G or 10G)
    - Initial software releases support only a continuous set of ports configured as Ethernet or FC within each slot:
      - Ethernet ports have to be the first set, and they have to be one contiguous range
      - FC ports have to be the second set, and they have to be contiguous as well
    - A future software release will support per-port dynamic configuration

    n5k(config)# slot
    n5k(config-slot)# port type
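The slide shows only the command skeleton; a filled-in sketch might look like the following. The slot number and port range are illustrative assumptions, they must respect the contiguous-range rule above, and on the Nexus 5500 the new port types take effect only after a reload:

```
n5k(config)# slot 1
n5k(config-slot)# port 25-32 type fc
```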


    Nexus 2000 Series: FCoE Support

    - N2232PP: 32 1/10G FCoE host interfaces and 8 x 10G uplinks
      - T11-standard-based FIP/FCoE support on all ports
      - 8 10G/FCoE uplink ports for connections to the Nexus 5K
      - Support for DCBx
      - The Nexus 7000 will support FCoE on the 2232 (future)
    - N2232TM: 32 1/10GBASE-T host interfaces and 8 x 10G uplinks (module)
    - FCoE is not yet certified for 10GBASE-T:
      - Undesired coupling of signal between adjacent cables is the main electrical parameter limiting 10G performance, and it cannot be cancelled
      - Re-training is a major barrier to the use of 10GBASE-T for block-level storage (FCoE)


    Nexus 5500 + B22 (HP FEX): Blade Chassis Nexus B22 Series Fabric Extender

    - The B22 extends FEX connectivity into the HP blade chassis
    - The Cisco Nexus 5000 switch is a single management point for all the blade chassis I/O modules
    - 66% decrease in blade management points*
    - Blade and rack networking consistency
    - Interoperable with Nexus 2000 Fabric Extenders on the same Nexus parent switch
    - End-to-end FCoE support
    - Support for 1G & 10G, LOM and mezzanine adapters
    - Dell supports pass-through as an alternative option to directly attaching blade servers to FEX ports


    Nexus 7000 F-Series SFP+ Module: FCoE Support (Q2 CY11)

    - 32-port F1 and 48-port F2 Series, 1/10 GbE, for server access and aggregation
      - F1 supports FCoE; F2 support for FCoE targeted 1H CY12; FEX + FCoE support 2H CY12
    - 10 Gbps Ethernet supporting multiprotocol storage connectivity: FCoE, iSCSI and NAS
    - Lossless Ethernet: DCBX, PFC, ETS
    - Enables Cisco FabricPath for increased bisectional bandwidth for iSCSI and NAS traffic
    - FCoE license (N7K-FCOEF132XP): $10,000 Cisco list; one license per F1/F2 module
    - SAN Enterprise (N7K-SAN1K9): $15,000 Cisco list per chassis; IVR, VSAN-based access control, fabric binding


    Storage VDC on the Nexus 7000: Supported VDC Models

    - A separate VDC running only storage-related protocols: the storage VDC is a virtual MDS FC switch
    - Runs only FC-related processes; only one such VDC can be created; provides control-plane separation
    - Dedicated VE_Port model: FCoE and FIP live in the storage VDC, Ethernet in the LAN VDC
    - Shared converged port model (for host/target interfaces, not VE_Ports): ingress Ethernet traffic is split based on the frame Ethertype, and FCoE traffic is processed in the context of the storage VDC


    Creating the Storage VDC (N7K only)

    Create a VDC of type storage and allocate the non-shared interfaces:
    N7K-50(config)# vdc fcoe id 2 type storage
    N7K-50(config-vdc)# allocate interface Ethernet4/1-16, Ethernet4/19-22

    Allocate the FCoE VLAN range from the owner VDC to the storage VDC. This is a necessary step for sharing interfaces, to avoid VLAN overlap between the owner VDC and the storage VDC:
    N7K-50(config)# vdc fcoe id 2
    N7K-50(config-vdc)# allocate fcoe-vlan-range 10-100 from vdcs n7k-50

    Allocate the shared interfaces:
    N7K-50(config-vdc)# allocate shared interface Ethernet4/17-18

    Install the license for the FCoE module:
    n7k-50(config)# license fcoe module 4


    Storage VDC: F2 Line Cards

    - Some restrictions apply when using mixed line cards (F1/F2/M1)
    - F2 ports need to be in a dedicated VDC if using shared ports

    [Matrix of supported combinations: NX-OS 5.2 supports an F1-based storage VDC with dedicated and shared ports alongside an F1/M1 LAN VDC; NX-OS 6.1 (1H CY12) adds F2-based storage VDCs with dedicated ports alongside an F2 VDC or any non-F2 VDC.]


    MDS 9000 8-Port 10G FCoE Module: FCoE Support

    - Enables integration of existing FC infrastructure into the Unified Fabric
    - 8 FCoE ports at 10GE full rate in MDS 9506, 9509 and 9513; no FCoE license required
    - Standards support: T11 FCoE; IEEE DCBX, PFC, ETS
    - Connectivity: FCoE only, no LAN; VE to Nexus 5000, Nexus 7000, MDS 9500; VF to FCoE targets
    - Optics support: SFP+ SR/LR; SFP+ 1/3/5m passive and 7/10m active CX-1 (TwinAx)
    - Requirements: SUP2A; Fabric 2 modules for the backplane (applicable to the 9513 only)


    MDS 9000 8-Port 10G FCoE Module: FCoE Configuration

    There is no need to enable FCoE explicitly on the MDS switch. The following features are enabled once an FCoE-capable linecard is detected:
    install feature-set fcoe
    feature-set fcoe
    feature lldp
    feature vlan-vsan-mapping

    Create the VSAN and VLAN, and map the VLAN to the VSAN for FCoE:
    pod3-9513-71(config)# vsan database
    pod3-9513-71(config-vsan-db)# vsan 50
    pod3-9513-71(config-vsan-db)# vlan 50
    pod3-9513-71(config-vlan)# fcoe vsan 50

    Build the LACP port channel on the MDS, then create the VE port and assign it to the LACP port channel:
    pod3-9513-71(config-if-range)# interface vfc-port-channel 501
    pod3-9513-71(config-if)# switchport mode e
    pod3-9513-71(config-if)# switchport trunk allowed vsan 50
    pod3-9513-71(config-if)# no shut


    Network vs. Fabric: Differences and Similarities

    - Ethernet is non-deterministic: flow control is destination-based; relies on TCP drop-retransmission / sliding window
    - Fibre Channel is deterministic: flow control is source-based (B2B credits); services are fabric-integrated (no loop concept)

    Channels: connection service, physical circuits, reliable transfers, high speed, low latency, short distance, hardware-intensive.
    Networks: connectionless, logical circuits, unreliable transfers, high connectivity, higher latency, longer distance, software-intensive.


    Network vs. Fabric: Classical Ethernet

    - Ethernet/IP goal: provide any-to-any connectivity
    - Unaware of packet loss (lossy); relies on ULPs for retransmission and windowing
    - Provides the transport without worrying about the services; services are provided by upper layers
    - East-west vs. north-south traffic ratios are undefined
    - Network design has been optimized for high availability from a transport perspective, by connecting nodes in mesh architectures; service HA is implemented separately
    - Takes into account control-protocol interaction (STP, OSPF, EIGRP, L2/L3 boundary, etc.)

    [Diagram: a mesh of switches in which client/server relationships are not pre-defined; fabric topology and traffic flows are highly flexible.]


    Network vs. Fabric: LAN Design, Access/Aggregation/Core

    - Servers are typically dual-homed to two or more access switches
    - LAN switches have redundant connections to the next layer
    - Distribution and core can be collapsed into a single box
    - The L2/L3 boundary is typically deployed in the aggregation layer
    - Spanning tree or advanced L2 technologies (vPC) prevent loops within the L2 boundary
    - L3 routes are summarized to the core
    - Services are deployed at the L2/L3 boundary of the network (load balancing, firewall, NAM, etc.)

    [Diagram: access, aggregation and core tiers; virtual port channels (vPC) and STP operate inside the L2 domain, with L3 from the aggregation layer toward the cloud outside the data center.]


    Network vs. Fabric: Classical Fibre Channel

    - In a Fibre Channel SAN, transport and services are on the same layer, in the same devices
    - Well-defined end-device relationships (initiators and targets)
    - Does not tolerate packet drop: requires lossless transport
    - Only north-south traffic; east-west traffic is mostly irrelevant
    - Network designs are optimized for scale and availability
    - High availability of network services is provided through the dual-fabric architecture
    - Edge/core vs. edge/core/edge service deployment; client/server relationships are pre-defined

    [Diagram: initiators (I0-I5) and targets (T0-T2) attached to switches that each run the fabric services DNS, FSPF, zoning and RSCN; fabric topology, services and traffic flows are structured.]


    Network vs. Fabric: SAN Design, Two- or Three-Tier Topology

    - Edge-core or edge-core-edge topology
    - Servers connect to the edge switches; storage devices connect to one or more core switches
    - HA is achieved with two physically separate, but identical, redundant SAN fabrics
    - Very low oversubscription in the fabric (1:1 to 12:1)
    - FLOGI scaling considerations

    Example of a 10:1 oversubscription ratio: 60 servers with 4 Gb HBAs offer 240 G at the edge, carried over 24 G of ISL bandwidth toward each core.
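The arithmetic behind the example is simple enough to state as a one-line helper:

```python
def oversubscription(server_count: int, hba_gbps: float, uplink_gbps: float) -> float:
    """Edge oversubscription ratio: offered host bandwidth over uplink bandwidth."""
    return (server_count * hba_gbps) / uplink_gbps

# 60 servers x 4 Gb HBAs = 240 G, over 24 G of ISLs -> 10:1
print(oversubscription(60, 4, 24))  # 10.0
```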


    Network vs. Fabric: Converged and Dedicated Links

    - Converged link to the access switch: cost savings from the reduction in required equipment; cable once for all servers to have access to both the LAN and SAN networks
    - Dedicated link from access to aggregation: separate links for SAN and LAN traffic, with both links the same I/O (10GE); advanced Ethernet features can be applied to the LAN links; maintains fabric isolation

    [Diagram: hosts attach to Nexus access switches over converged FCoE links; from access to aggregation, vPC port channels carry LAN traffic toward the L3 core, while dedicated FCoE links/port channels carry storage traffic toward MDS FC SAN A and SAN B.]


    Dedicated vs. Converged ISLs

    Why support dedicated ISLs as opposed to converged ones? The two methods produce the same aggregate bandwidth (e.g. 40G aggregate: 20G FCoE + 20G Ethernet).

    Converged ISLs:
    - One wire for all traffic types; ETS (a QoS output feature) guarantees a minimum bandwidth allocation
    - No clear port ownership; desirable for DCI connections
    - HA: 4 links available
    - Available on Nexus 5x00; Nexus 7000 support under consideration

    Dedicated ISLs:
    - Dedicated wire per traffic type; no extra output-feature processing; distinct port ownership; complete storage traffic separation
    - Dedicated links provide additional isolation of storage traffic
    - HA: 2 links available
    - Available on Nexus 5x00; supported on Nexus 7000 at NX-OS 5.2(1)


    Converged Links and vPC

    Now that I have converged-link support, can I deploy vPC for my storage traffic? Do shared wires and vPC break basic SAN design fundamentals?

    - vPC with converged links provides an active-active connection for FCoE traffic
    - Seemingly more bandwidth to the core, but Ethernet forwarding behavior can break SAN A/B separation
    - Currently not supported on Nexus switches (the exception is the dual-homed FEX, EvPC)


    Fabric vs. Network, or Fabric & Network: SAN Dual-Fabric Design

    [Diagram: two physically separate edge/core FC fabrics contrasted with one meshed LAN.]

    - Will you migrate the SAN dual-fabric HA model into the LAN full-mesh HA model?
    - Is data-plane isolation required? (traffic engineering)
    - Is control-plane isolation required? (VDC, VSAN)


    Fabric vs. Network, or Fabric & Network: Hop-by-Hop or Transparent Forwarding Model

    A number of big design questions for you:
    - Do you want a routed topology or a bridged topology?
    - Is FCoE a layer-2 overlay or an integrated topology (ships in the night)?

    [Diagram: the two models compared by their VN, VF and VE port relationships.]


    Hop-by-Hop or Transparent Forwarding Model: 802.1Qau Congestion Notification (QCN)

    - Self-clocking control loop, derived from FCC (Fibre Channel Congestion Control)
    - Congestion control between layer-2 devices
    - Not passed through any device that changes MAC addresses

    [Diagram: hosts H1 and H2 sending traffic to H3 congest a QCN/DCB switch, which sends QCN messages back toward the sources.]


    FCoE - Design, Operations and Management Best Practices

    Agenda

    Unified Fabric What and When
    FCoE Protocol Fundamentals
    Nexus FCoE Capabilities
    FCoE Network Requirements and Design Considerations
    DCB & QoS - Ethernet Enhancements
    Single Hop Design
    Multi-Hop Design
    Futures

  • 8/12/2019 BRKSAN-2047

    49/121

    2012 Cisco and/or its affiliates. All rights reserved.Presentation_ID Cisco Public

    Ethernet Enhancements
    Can Ethernet Be Lossless? Yes, with Ethernet PAUSE Frame

    PAUSESTOP

    Ethernet Link

    Switch A Switch B

    Queue Full

    Defined in IEEE 802.3 Annex 31B

    The PAUSE operation is used to inhibit transmission of data frames for a specified period of time

    Ethernet PAUSE transforms Ethernet into a lossless fabric, a requirement for FCoE
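    As a hedged illustration (interface number is a placeholder), classic 802.3x link-level PAUSE can be enabled per interface on a Nexus switch. Note that in FCoE designs the per-priority variant (PFC, next slides) is used instead, since link-level PAUSE stops all traffic classes on the wire:

```
! Illustrative sketch only: enable IEEE 802.3x link-level pause on one port.
! With FCoE, Priority Flow Control (PFC) replaces link-level pause so that
! only the no-drop class is paused, not the whole link.
switch(config)# interface ethernet 1/5
switch(config-if)# flowcontrol receive on
switch(config-if)# flowcontrol send on
```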


    Ethernet Enhancements
    IEEE DCB

    Standard / Feature                                                              Status of the Standard
    IEEE 802.1Qbb  Priority-based Flow Control (PFC)                                Completed
    IEEE 802.3bd   Frame Format for PFC                                             Completed
    IEEE 802.1Qaz  Enhanced Transmission Selection (ETS) and
                   Data Center Bridging eXchange (DCBX)                             Completed
    IEEE 802.1Qau  Congestion Notification                                          Complete, published March 2010
    IEEE 802.1Qbh  Port Extender                                                    In its first task group ballot

    Developed by IEEE 802.1 Data Center Bridging Task Group (DCB) - All Standards Complete

    CEE (Converged Enhanced Ethernet) is an informal group of companies that submitted initial inputs to the DCB WGs.


    [Figure: voice packets are constant-size at fixed 20 msec intervals (audio samples); video frames produce variable-size packet bursts every 33 msec]

    NX-OS QoS Design Requirements
    Attributes of Voice and Video


    Access-Edge Switches

    Conditionally Trusted Endpoints - Example: IP Phone + PC

    Secure Endpoint - Example: Software-protected PC with centrally-administered QoS markings

    Unsecure Endpoint

    Trust Boundary

    NX-OS QoS Design Requirements
    Trust Boundaries - What have we trusted?


    PCP/CoS   Network priority   Acronym   Traffic characteristics

    1 0 (lowest) BK Background

    0 1 BE Best Effort

    2 2 EE Excellent Effort

    3 3 CA Critical Applications

    4 4 VI Video, < 100 ms latency

    5 5 VO Voice, < 10 ms latency

    6 6 IC Internetwork Control

    IEEE 802.1Q-2005

    NX-OS QoS Requirements
    CoS or DSCP?

    We have non-IP based traffic to consider again:
    FCoE - Fibre Channel over Ethernet
    RoCE - RDMA over Converged Ethernet

    DSCP is still marked but CoS will be required and used in Nexus Data Center designs


    Data Center Bridging Control Protocol
    DCBX Overview - 802.1Qaz

    DCBX Switch

    DCBX CNAAdapter

    Negotiates Ethernet capabilities: PFC, ETS, CoS values between DCB-capable peer devices

    Simplifies Management: allows for configuration and distribution of parameters from one node to another

    Responsible for Logical Link Up/Down signaling of Ethernet and Fibre Channel

    DCBX is LLDP with new TLV fields
    The original pre-standard CIN (Cisco, Intel, Nuova) DCBX utilized additional TLVs

    DCBX negotiation failures result in:
    per-priority-pause not enabled on CoS values
    vfc not coming up when DCBX is being used in an FCoE environment

    dc11-5020-3# sh lldp dcbx interface eth 1/40

    Local DCBXP Control information:

    Operation version: 00 Max version: 00 Seq no: 7 Ack no: 0

    Type/Subtype Version En/Will/Adv Config

    006/000 000 Y/N/Y 00

    https://www.cisco.com/en/US/netsol/ns783/index.html


    [Figure: offered traffic vs. realized utilization on a 10 GE link at t1/t2/t3 - 3G/s HPC traffic, 3G/s storage traffic, and LAN traffic offered at 4-6G/s; under congestion ETS holds each class to its share, and unused allocation is lent to other classes]

    Prevents a single traffic class from hogging all the bandwidth and starving other classes

    When a given load doesn't fully utilize its allocated bandwidth, it is available to other classes

    Helps accommodate classes of a bursty nature

    Enhanced Transmission Selection (ETS)
    Bandwidth Management 802.1Qaz


    Nexus QoS
    QoS Policy Types

    There are three QoS policy types used to define system behavior (qos, queuing, network-qos)

    There are three policy attachment points to apply these policies to:

    Ingress interface
    System as a whole (defines global behavior)
    Egress interface

    Policy Type   Function                                            Attach Point
    qos           Define traffic classification rules                 system qos, ingress interface
    queuing       Strict Priority queue,
                  Deficit Weight Round Robin                          system qos, egress interface, ingress interface
    network-qos   System class characteristics (drop or no-drop,
                  MTU), buffer size, marking                          system qos


    Configuring QoS on the Nexus 5500
    Create New System Class

    Step 1 Define qos Class-Map

    Step 2 Define qos Policy-Map

    Step 3 Apply qos Policy-Map under system qos or interface

    N5k(config)# ip access-list acl-1

    N5k(config-acl)# permit ip 100.1.1.0/24 any

    N5k(config-acl)# exit

    N5k(config)# ip access-list acl-2

    N5k(config-acl)# permit ip 200.1.1.0/24 any

    N5k(config)# class-map type qos class-1

    N5k(config-cmap-qos)# match access-group name acl-1

    N5k(config-cmap-qos)# class-map type qos class-2

    N5k(config-cmap-qos)# match access-group name acl-2

    N5k(config-cmap-qos)#

    N5k(config)# policy-map type qos policy-qos

    N5k(config-pmap-qos)# class type qos class-1

    N5k(config-pmap-c-qos)# set qos-group 2

    N5k(config-pmap-c-qos)# class type qos class-2

    N5k(config-pmap-c-qos)# set qos-group 3

    N5k(config)# system qos

    N5k(config-sys-qos)# service-policy type qos input policy-qos

    N5k(config)# interface e1/1-10

    N5k(config-sys-qos)# service-policy type qos input policy-qos

    Create two system classes for traffic with different source address range

    Supported matching criteria:
    N5k(config)# class-map type qos class-1

    N5k(config-cmap-qos)# match ?

    access-group Access group

    cos IEEE 802.1Q class of service

    dscp DSCP in IP(v4) and IPv6 packets

    ip IP

    precedence Precedence in IP(v4) and IPv6 packetsprotocol Protocol

    N5k(config-cmap-qos)# match

    Qos-group range for user-configured system class is 2-5

    Policy under system qos applied to all interfaces
    Policy under interface is preferred if same type of policy is applied under both system qos and interface


    Configuring QoS on the Nexus 5500
    Create New System Class (Continued)

    Step 4 Define network-qos Class-Map

    Step 5 Define network-qos Policy-Map

    Step 6 Apply network-qos policy-map under system qos context

    N5k(config)# class-map type network-qos class-1

    N5k(config-cmap-nq)# match qos-group 2

    N5k(config-cmap-nq)# class-map type network-qos class-2

    N5k(config-cmap-nq)# match qos-group 3

    N5k(config)# policy-map type network-qos policy-nq

    N5k(config-pmap-nq)# class type network-qos class-1

    N5k(config-pmap-nq-c)# class type network-qos class-2

    N5k(config-pmap-nq-c)# system qos

    N5k(config-sys-qos)# service-policy type network-qos policy-nq

    N5k(config-sys-qos)#

    No action tied to this class indicates default network-qos parameters.

    Policy-map type network-qos will be used to configure no-drop class, MTU, ingress buffer size and 802.1p marking

    Default network-qos parameters are listed in the table below

    Network-QoS Parameters   Default Value
    Class Type               Drop class
    MTU                      1538
    Ingress Buffer Size      20.4KB
    Marking                  No marking

    Match qos-group is the only option for a network-qos class-map

    Qos-group value is set by the qos policy-map in the previous slide


    Configuring QoS on the Nexus 5500
    Strict Priority and Bandwidth Sharing

    Create new system class by using policy-map qos and network-qos (previous two slides)
    Then define and apply policy-map type queuing to configure strict priority and bandwidth sharing

    Check the queuing or bandwidth allocation with command show queuing interface

    N5k(config)# class-map type queuing class-1
    N5k(config-cmap-que)# match qos-group 2

    N5k(config-cmap-que)# class-map type queuing class-2

    N5k(config-cmap-que)# match qos-group 3

    N5k(config-cmap-que)# exit

    N5k(config)# policy-map type queuing policy-BW

    N5k(config-pmap-que)# class type queuing class-1

    N5k(config-pmap-c-que)# priority

    N5k(config-pmap-c-que)# class type queuing class-2

    N5k(config-pmap-c-que)# bandwidth percent 40

    N5k(config-pmap-c-que)# class type queuing class-fcoe

    N5k(config-pmap-c-que)# bandwidth percent 40

    N5k(config-pmap-c-que)# class type queuing class-default

    N5k(config-pmap-c-que)# bandwidth percent 20

    N5k(config-pmap-c-que)# system qos

    N5k(config-sys-qos)#service-policy type queuing output policy-BW

    N5k(config-sys-qos)#

    Define queuing class-map

    Define queuing policy-map

    Apply queuing policy under

    system qos or egress interface


    Configuring QoS on the Nexus 5500
    Check System Classes

    N5k# show queuing interface ethernet 1/1

    Interface Ethernet1/1 TX Queuing

    qos-group sched-type oper-bandwidth

    0 WRR 20

    1 WRR 40

    2 priority 0

    3 WRR 40

    Interface Ethernet1/1 RX Queuing

    qos-group 0:

    q-size: 163840, MTU: 1538

    drop-type: drop, xon: 0, xoff: 1024
    Statistics:

    Pkts received over the port : 9802

    Ucast pkts sent to the cross-bar : 0

    Mcast pkts sent to the cross-bar : 9802

    Ucast pkts received from the cross-bar : 0

    Pkts sent to the port : 18558

    Pkts discarded on ingress : 0

    Per-priority-pause status : Rx (Inactive), Tx (Inactive)

    qos-group 1:

    q-size: 76800, MTU: 2240

    drop-type: no-drop, xon: 128, xoff: 240

    Statistics:

    Pkts received over the port : 0
    Ucast pkts sent to the cross-bar : 0

    Mcast pkts sent to the cross-bar : 0

    Ucast pkts received from the cross-bar : 0

    Pkts sent to the port : 0

    Pkts discarded on ingress : 0

    Per-priority-pause status : Rx (Inactive), Tx (Inactive)

    (Continued)

    qos-group 2:

    q-size: 20480, MTU: 1538

    drop-type: drop, xon: 0, xoff: 128

    Statistics:

    Pkts received over the port : 0

    Ucast pkts sent to the cross-bar : 0

    Mcast pkts sent to the cross-bar : 0

    Ucast pkts received from the cross-bar : 0

    Pkts sent to the port : 0

    Pkts discarded on ingress : 0

    Per-priority-pause status : Rx (Inactive), Tx (Inactive)

    qos-group 3:

    q-size: 20480, MTU: 1538

    drop-type: drop, xon: 0, xoff: 128

    Statistics:

    Pkts received over the port : 0

    Ucast pkts sent to the cross-bar : 0

    Mcast pkts sent to the cross-bar : 0

    Ucast pkts received from the cross-bar : 0

    Pkts sent to the port : 0

    Pkts discarded on ingress : 0

    Per-priority-pause status : Rx (Inactive), Tx (Inactive)

    Total Multicast crossbar statistics:

    Mcast pkts received from the cross-bar : 18558

    N5k#

    Strict priority and WRR configuration

    class-default

    class-fcoe

    User-configured system

    class: class-1

    User-configured system

    class: class-2

    Packet counter for

    each class

    Drop counter for

    each class

    Current PFC status


    On Nexus 5000, once feature fcoe is configured, 2 classes are made by default

    Priority Flow Control Nexus 5000/5500
    Operations - Configuration Switch Level

    FCoE DCB Switch

    DCB CNA Adapter

    class-fcoe is configured to be no-drop with an MTU of 2158

    policy-map type qos default-in-policy
      class type qos class-fcoe
        set qos-group 1
      class type qos class-default
        set qos-group 0

    policy-map type network-qos default-nq-policy
      class type network-qos class-fcoe
        pause no-drop
        mtu 2158

    system qos
      service-policy type qos input fcoe-default-in-policy
      service-policy type queuing input fcoe-default-in-policy
      service-policy type queuing output fcoe-default-out-policy
      service-policy type network-qos fcoe-default-nq-policy

    Enabling the FCoE feature on Nexus 5548/96 does not create no-drop policies automatically as on Nexus 5010/20

    Must add policies under system QoS:


    Configs for 3000m no-drop class:

             Buffer size     Pause Threshold (XOFF)   Resume Threshold (XON)
    N5020    143680 bytes    58860 bytes              38400 bytes
    N5548    152000 bytes    103360 bytes             83520 bytes

    Tuning of the lossless queues to support a variety of use cases

    Extended switch-to-switch no-drop traffic lanes
    Support for 3 km with Nexus 5000 and 5500

    Increased number of no-drop service lanes (4) for RDMA and other multi-queue HPC and compute applications

    Support for 3 km no-drop switch-to-switch links

    Inter-building DCB/FCoE links

    Nexus 5000/5500 QoS
    Priority Flow Control and No-Drop Queues

    5548-FCoE(config)# policy-map type network-qos 3km-FCoE
    5548-FCoE(config-pmap-nq)# class type network-qos 3km-FCoE
    5548-FCoE(config-pmap-nq-c)# pause no-drop buffer-size 152000 pause-threshold 103360 resume-threshold 83520

    Gen 2 UPC

    Unified Crossbar Fabric

    Gen 2 UPC


    Enhanced Transmission Selection - N5K

    Bandwidth Management

    When configuring FCoE, by default each class is given 50% of the available bandwidth

    Can be changed through QoS settings when higher demands for certain traffic exist (i.e. HPC traffic, more Ethernet NICs)

    1Gig FC HBAs

    1Gig Ethernet NICs

    Traditional Server

    Best Practice: Tune the FCoE queue to provide equivalent capacity to the HBA that would have been used (1G, 2G, ...)

    N5k-1# show queuing interface ethernet 1/18
    Ethernet1/18 queuing information:
      TX Queuing
        qos-group  sched-type  oper-bandwidth
            0         WRR           50
            1         WRR           50
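    The best practice above can be sketched as a queuing policy (a minimal illustration; the 20/80 split and the policy name are examples only, chosen to mirror a dedicated HBA's share rather than a recommendation):

```
! Illustrative sketch: guarantee class-fcoe 20% of egress bandwidth and
! leave the remainder to the default class, then apply system-wide.
policy-map type queuing fcoe-20pct
  class type queuing class-fcoe
    bandwidth percent 20
  class type queuing class-default
    bandwidth percent 80
system qos
  service-policy type queuing output fcoe-20pct
```

    ETS still lends class-fcoe any bandwidth the other classes leave unused, so the percentage is a guarantee under congestion, not a cap.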


    show policy-map system

    Type network-qos policy-maps
    =====================================
    policy-map type network-qos default-nq-7e-policy
      class type network-qos c-nq-7e-drop
        match cos 0-2,4-7
        congestion-control tail-drop
        mtu 1500
      class type network-qos c-nq-7e-ndrop-fcoe
        match cos 3
        match protocol fcoe
        pause
        mtu 2112

    Priority Flow Control Nexus 7K & MDS
    Operations - Configuration Switch Level

    N7K-50(config)# system qos
    N7K-50(config-sys-qos)# service-policy type network-qos default-nq-7e-policy

    Policy Template choices

    No-Drop PFC w/ MTU 2K set for Fibre Channel

    show class-map type network-qos c-nq-7e-ndrop-fcoe

    Type network-qos class-maps
    =============================================
    class-map type network-qos match-any c-nq-7e-ndrop-fcoe
      Description: 7E No-Drop FCoE CoS map
      match cos 3
      match protocol fcoe

    Template                Drop CoS (Priority)        NoDrop CoS (Priority)
    default-nq-8e-policy    0,1,2,3,4,5,6,7 (5,6,7)    -
    default-nq-7e-policy    0,1,2,4,5,6,7 (5,6,7)      3
    default-nq-6e-policy    0,1,2,5,6,7 (5,6,7)        3,4 (4)
    default-nq-4e-policy    0,5,6,7 (5,6,7)            1,2,3,4 (4)


    On the ingress either 2 or 4 queues (buffer pools) are carved out depending on the template.

    Ingress Queuing determines:

    - Amount of buffers to be allocated for a CoS - queue-limit percent
    - Priority Grouping and its Bandwidth allocation advertised using DCBXP - bandwidth percent
    - Untrusted port default CoS - set cos

    Each ingress queue can be assigned a bandwidth percentage

    Each queue's CoS values (priority group) and its bandwidth is relayed to the peer using DCBX. Nothing changes on the local port.

    Expected to be used by the peer as a guideline for sourcing traffic for each CoS. Can be configured on a port or a port-channel. It overrides the system queuing policy applied by default on it.

    Nexus 7000
    QoS ingress queuing policies


    MDS 9500
    QoS - ETS

    9513-71# sh policy-map interface ethernet 13/1

    Global statistics status : enabled

    Ethernet 13/1

      Service-policy (queuing) input: default-4q-7e-in-policy
        policy statistics status: enabled (current status: enabled)

        Class-map (queuing): 1q4t-7e-in-q-default (match-any)
          queue-limit percent 100
          bandwidth percent 100
          queue dropped pkts : 0

      Service-policy (queuing) output: default-4q-7e-out-policy
        policy statistics status: enabled (current status: enabled)

        Class-map (queuing): 1q1t-7e-out-q-default (match-any)
          bandwidth remaining percent 100
          queue dropped pkts : 0

    9513-71# sh policy-map system type queuing

      Service-policy (queuing) input: default-4q-7e-in-policy
        policy statistics status: disabled (current status: disabled)

        Class-map (queuing): 1q4t-7e-in-q-default (match-any)
          queue-limit percent 100
          bandwidth percent 100

      Service-policy (queuing) output: default-4q-7e-out-policy
        policy statistics status: disabled (current status: disabled)

        Class-map (queuing): 1q1t-7e-out-q-default (match-any)
          bandwidth remaining percent 100

    MDS FCoE linecard does not compete with the Fibre Channel bandwidth and receives 100% of the Ethernet bandwidth


    iSCSI

    2G

    1G 1G 1G 1G

    1G

    2G

    4G

    10G

    10G

    1. Steady state traffic is within end-to-end network capacity
    2. Burst traffic from a source
    3. No Drop traffic is queued
    4. Buffers begin to fill and PFC flow control is initiated
    5. All sources are eventually flow controlled

    DC Design Details
    No Drop Storage Considerations

    TCP not invoked immediately as frames are queued, not dropped

    Is the optimal


    Blocking - Impact on Design Performance

    Performance can be adversely affected across an entire multiswitch FC Fabric by a single blocking port

    HOL blocking is a transitory event (until some BB_Credits are returned on the blocked port)

    To help alleviate the blocking problem and enhance the design performance:

    Virtual Output Queuing (VoQ) on all ports

    DC Design Details
    HOLB is also a fundamental part of Fibre Channel SAN design


    FCoE - Design, Operations and Management Best Practices
    Agenda

    Unified Fabric What and When
    FCoE Protocol Fundamentals
    Nexus FCoE Capabilities
    FCoE Network Requirements and Design Considerations
    DCB & QoS - Ethernet Enhancements
    Single Hop Design
    Multi-Hop Design
    Futures


    N-Port Virtualizer (NPV) utilizes NPIV functionality to allow a switch to act like a server performing multiple logins through a single physical link

    Physical servers connected to the NPV switch login to the upstream NPIV core switch
    Physical uplink from NPV switch to FC NPIV core switch does actual FLOGI
    Subsequent logins are converted (proxy) to FDISC to login to the upstream FC switch

    No local switching is done on an FC switch in NPV mode
    FC edge switch in NPV mode does not take up a domain ID
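    A hedged sketch of the NPV/NPIV pairing (interface numbers are placeholders; note that enabling NPV mode erases the running configuration and reloads the edge switch):

```
! On the NPIV core switch: allow multiple FLOGI/FDISC logins per F_Port
core(config)# feature npiv

! On the edge switch: enable NPV mode (erases the config and reboots)
edge(config)# feature npv
edge(config)# interface fc1/1
edge(config-if)# switchport mode NP    ! NP uplink toward the NPIV core
edge(config-if)# no shutdown
```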

    FC NPIV Core Switch

    Eth1/1

    Eth1/2

    Eth1/3

    Server1

    N_Port_ID 1

    Server2

    N_Port_ID 2

    Server3

    N_Port_ID 3

    F_Port

    N-Port

    F-Port

    F-PortNP-Port

    FCoE Edge
    N-Port Virtualizer (NPV)


    VLAN 10,30

    VLAN 10,20

    Each FCoE VLAN and VSAN count as a VLAN HW resource, therefore a VLAN/VSAN mapping accounts for TWO VLAN resources

    FCoE VLANs are treated differently than native Ethernet VLANs: no flooding, broadcast, MAC learning, etc.

    BEST PRACTICE: use different FCoE VLANs/VSANs for SAN A and SAN B

    The FCoE VLAN must not be configured as a native VLAN

    Shared wires connecting to hosts must be configured as trunk ports and STP edge ports

    Note: STP does not run on FCoE VLANs between FCFs (VE_Ports) but does run on FCoE VLANs towards the host (VF_Ports)

    ! VLAN 20 is dedicated for VSAN 2 FCoE traffic
    (config)# vlan 20
    (config-vlan)# fcoe vsan 2

    VSAN 2

    STP Edge Trunk

    Fabric A Fabric BLAN Fabric

    Nexus 5000

    FCF

    Nexus 5000

    FCF

    VSAN 3

    Unified Fabric Design
    The FCoE VLAN


    With NX-OS release 4.2(1) Nexus 5000 supports F-Port Trunking and Channeling

    VSAN Trunking and Port-Channel on the links between an NPV device and upstream FC switch (NP port -> F port)

    F_Port Trunking: Better multiplexing of traffic using shared links (multiple VSANs on a common link)

    F_Port Channeling: Better resiliency between NPV edge and Director Core (avoids tearing down all FLOGIs on a failing link)

    Simplifies FC topology (single uplink from NPV device to FC director)
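    A hedged configuration sketch of an F_Port trunking channel between the NPV edge and the NPIV core (interface and channel numbers are placeholders):

```
! On the NPIV core (e.g. MDS director): allow trunking F_Port channels
core(config)# feature npiv
core(config)# feature fport-channel-trunk
core(config)# interface san-port-channel 10
core(config-if)# switchport mode F
core(config-if)# switchport trunk mode on

! On the NPV edge: bundle the NP uplinks into the same channel
edge(config)# interface fc1/1, fc1/2
edge(config-if)# channel-group 10
edge(config-if)# no shutdown
```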

    Fabric A Supporting

    VSAN 20 & 40

    F Port Trunking & Channeling

    Unified Fabric Design
    F_Port Trunking and Channeling

    VLAN 10,50

    VLAN 10,30

    VSAN 30,50

    Fabric B

    Supporting VSAN 30

    & 50

    VF

    VN

    TF

    TNP

    Server 1VSAN 20 & 30

    Server 2

    VSAN 40 & 50


    Nexus 2232 10GE FEX

    Fabric Extender - FEX
    Unified Fabric and FCoE

    Nexus 5000 as FCF or as NPV device

    Nexus 5000/5500

    Generation 2 CNAs

    Fabric A Fabric B

    FC

    FCoEFC

    Nexus 5000 access switches operating in NPV mode

    With NX-OS release 4.2(1) Nexus 5000 supports F-Port Trunking and Channeling on the links between an NPV device and upstream FC switch (NP port -> F port)

    F_Port Trunking: Better multiplexing of traffic using shared links (multiple VSANs on a common link)

    F_Port Channeling: Better resiliency between NPV edge and Director Core

    No host re-login needed per link failure

    No FSPF recalculation due to link failure

    Simplifies FC topology (single uplink from NPV device to FC director)


    Unified Fabric Design
    FCoE and vPC together

    Direct Attach vPC Topology

    VLAN 10,30

    VLAN 10,20

    STP Edge Trunk

    VLAN 10 ONLY HERE!

    Fabric A Fabric BLAN Fabric

    Nexus 5000

    FCF-A Nexus 5000

    FCF-B

    vPC contains only 2 x 10GE links, one to each Nexus 5X00

    vPC with FCoE is ONLY supported between hosts and N5k or N5k/2232 pairs, AND they must follow specific rules:

    A vfc interface can only be associated with a single-port port-channel

    While the port-channel configurations are the same on N5K-1 and N5K-2, the FCoE VLANs are different

    FCoE VLANs are not carried on the vPC peer-link (automatically pruned)

    FCoE and FIP ethertypes are not forwarded over the vPC peer link either

    vPC carrying FCoE between two FCFs is NOT supported

    Best Practice: Use static port channel configuration rather than LACP with vPC and Boot from SAN (this will change with future releases)
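    The rules above can be sketched as follows (VLAN/VSAN numbers follow the figure; all names are illustrative). The port-channel matches on both vPC peers, while the FCoE VLAN differs per fabric and is pruned from the peer-link:

```
! N5K-1 (Fabric A): FCoE VLAN 20 mapped to VSAN 2
vlan 20
  fcoe vsan 2
interface port-channel 1
  switchport mode trunk
  switchport trunk allowed vlan 10,20
interface vfc 1
  bind interface port-channel 1
  no shutdown

! N5K-2 (Fabric B): same Ethernet VLAN 10, different FCoE VLAN 30 / VSAN 3
vlan 30
  fcoe vsan 3
interface port-channel 1
  switchport mode trunk
  switchport trunk allowed vlan 10,30
```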


    EvPC & FEX
    Nexus 5500 Topologies starting with NX-OS 5.1(3)N1

    In an Enhanced vPC (EvPC) SAN A/B isolation is configured by associating each FEX with either the SAN A or SAN B Nexus 5500

    FCoE & FIP traffic is forwarded only over the links connected to the specific parent switch

    Ethernet is hashed over all FEX fabric links

    FCoE enabled server (dual CNA)

    FCoEFC

    N5K-A(config)# fex 100

    N5K-A(config-fex)# fcoe

    N5K-A(config)# fex 101

    N5K-B(config)# fex 101

    N5K-B(config-fex)# fcoe

    N5K-B(config)# fex 100

    FEX 101

    FEX 100

    N5K-BN5K-A


    VFC1 is bound to port-channel 1

    Port-channel 1 is using LACP to negotiate with the host
    The VFC/port-channel never comes up and the host isn't able to boot from SAN

    SAN BSAN A

    N5K(+N2K) as FCFor NPV DeviceN5K(+N2K) as FCFor NPV Device

    1/2/4/8G FC

    10G Ethernet

    10G Unified I/O10G FCoE

    LAN

    N5K(+N2K) as FCFor NPV DeviceN5K(+N2K) as FCFor NPV Devicevfc 1

    po-1 (vpc 1)

    Eth1/1 Eth1/1

    Configuration
    interface vfc 1
      bind interface po-1

    Configuration
    interface vfc 2
      bind interface po-1

    po 1

    vfc 2

    po 1

    vPC & Boot from SAN
    Pre 5.1(3)N1 Behaviour


    As of NX-OS Release 5.1(3)N1(1) for N5K, new VFC binding models are supported
    In this case, we now support VF_Port binding to a member port of a given port-channel
    Check the configuration guide and operations guide for additional VFC binding changes

    vPC & Boot from SAN
    5.1(3)N1 Behaviour
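    A hedged sketch of the member-port binding model (interface numbers are placeholders): the vfc is bound to the physical member of the vPC rather than to the port-channel itself, so LACP negotiation of the bundle no longer gates the FCoE login path at boot:

```
! On N5K-1: bind the vfc to the physical port that is a member of the vPC
interface ethernet 1/1
  channel-group 1 mode active    ! LACP member of vPC 1
interface vfc 1
  bind interface ethernet 1/1    ! member-port binding (5.1(3)N1 and later)
  no shutdown
```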


    Adapter-FEX presents standard PCIe virtual NICs (vNICs) to servers

    Adapter-FEX virtual NICs are configured and managed via Nexus 5500

    Forwarding, Queuing, and Policy enforcement for vNIC traffic by Nexus 5500

    Adapter-FEX connected to Nexus 2000 Fabric Extender - Cascaded FEX-Link deployment

    Forwarding, Queuing, and Policy enforcement for vNIC traffic still done by Nexus 5500

    vNIC vNIC vNIC

    vHBA vHBA

    PCIex16

    10GbE/FCoE

    User Definable vNICs

    Eth

    0

    FC

    1 2

    FC

    3

    Eth

    127

    Adapter FEX
    802.1BR


    Working Group Ballot of Bridge Port Extension (P802.1BR), the IEEE standard for VNLink, has reached 100% approval by the voting members of the IEEE 802.1 committee.

    On November 10, the IEEE 802.1 committee passed a motion to advance the draft standard to Sponsor Ballot. This is the final stage for ratification of the standard. The first Sponsor Ballot is expected to take place in late November.

    Ratification of the standard is currently predicted for March 2012

    The same is true for P802.1Qbg, which is the standard that includes some of the protocols that support Bridge Port Extension as well as the VEPA device being promoted by HP

    Both standards are expected to be ratified in March.


    Adapter FEX & FCoE

    CNA

    FCoE

    Nexus 5000(San B)

    Nexus 5000(San A)

    FCoEStorage

    vfc1

    veth1

    Bound to

    Bound to

    ethernet101/1/1

    vfc2

    veth2

    Bound to

    Bound to

    ethernet102/1/1

    Active Standby

    Active TrafficPath

    Let's look a little more


    interface vfc1
      bind interface veth1

    interface veth1
      bind interface eth101/1/1 channel 1

    interface eth101/1/1
      switchport mode vntag

    And in this case, to make sure we don't break SAN A/B separation, make sure we configure the FEX 2232:

    fex 101
      fcoe

    CNA

    FCoE

    Nexus 5000(San B)

    Nexus 5000(San A)

    FCoEStorage

    vfc1

    veth1

    Bound to

    Bound to

    ethernet101/1/1

    vfc2

    veth2

    Bound to

    Bound to

    ethernet102/1/1

    Active Standby

    Active TrafficPath

    Adapter FEX & FCoE


    Transparent Bridges?

    FIP Snooping

    ENode

    ENode MAC

    0E.FC.00.07.08.09

    FIP SnoopingSpoofed MAC0E.FC.00.DD.EE.FF

    FCF

    FCF MAC0E.FC.00.DD.EE.FF

    What does a FIP Snooping device do?
    FIP solicitations (VLAN Disc, FCF Disc and FLOGI) sent out from the CNA and FIP responses from the FCF are snooped

    How does a FIP Snooping device work?
    The FIP Snooping device will be able to know which FCFs hosts are logged into

    Will dynamically create an ACL to make sure that the host to FCF path is kept secure

    A FIP Snooping device has NO intelligence or impact on FCoE traffic/path selection/load balancing/login selection/etc

    Mentioned in the Annex of the FC-BB-5 (FCoE) standard as a way to provide security in FCoE environments

    Supported on Nexus 5000/5500 - 4.1(3)
    Supported on Nexus 7000 - 6.1(1) with F2, F1 cards

    VF

    VN


    Fibre Channel Aware Device
    FCoE NPV

    FCF

    What does an FCoE-NPV device do?
    An "FCoE NPV bridge" improves over a "FIP snooping bridge" by intelligently proxying FIP functions between a CNA and an FCF

    Active Fibre Channel forwarding and security element

    FCoE-NPV load balances logins from the CNAs evenly across the available FCF uplink ports

    FCoE NPV will take VSAN into account when mapping or pinning logins from a CNA to an FCF uplink

    Emulates existing Fibre Channel Topology (same mgmt, security, HA, ...)

    Avoids Flooded Discovery and Configuration (FIP & RIP)

    Fibre Channel Configuration and Control applied at the Edge Port

    Proxy FCoE VLAN Discovery

    Proxy FCoE FCF Discovery

    FCoENPV

    VF

    VNP
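    The proxy behavior above can be sketched as NX-OS CLI. A minimal, hedged example for a Nexus 5500 edge switch running in FCoE-NPV mode; the interface numbers and VLAN/VSAN IDs are illustrative assumptions, not taken from the slides:

```
! Enable FCoE-NPV on the edge switch (mutually exclusive with feature fcoe)
feature fcoe-npv

! Map the FCoE VLAN to its VSAN (IDs are illustrative)
vlan 50
  fcoe vsan 50

! VNP uplink toward the NPIV core FCF
interface vfc100
  bind interface ethernet1/1
  switchport mode NP
  no shutdown

! VF port toward the host CNA
interface vfc10
  bind interface ethernet1/10
  switchport mode F
  no shutdown

! Place both virtual interfaces in the VSAN
vsan database
  vsan 50 interface vfc100
  vsan 50 interface vfc10
```

    With this in place, FLOGIs arriving on the host-facing VF ports are proxied and pinned across the available VNP uplinks by the switch.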


    FCoE NPV: Fabric Login

    VF-port

    VF-port

    VN-port

    Initiator

    Target

    FC Switch

    FCoE-NPIV Core Switch

    VNP-port

    FC Link

    FCoE Link

    FCoE NPV Switch

    FCoE Target

    FLOGI

    FCID / FPMA

    FCoE NPV core and FCoE NPV edge switches go through the FIP negotiation process*

    (*explained in next slides)

    FCoE NPV edge switch does a fabric login (FLOGI) into the NPV core switch

    FCoE NPV core assigns FCID and Fabric Provided MAC Address (FPMA) to the FCoE NPV edge switch


    FCoE NPV: FIP VLAN Discovery

    VF-port

    VF-port

    VN-port

    Initiator

    Target

    FC Switch

    FCoE-NPIV Core Switch

    VNP-port

    FC Link

    FCoE Link

    FCoE NPV Switch

    FCoE Target

    FIP VLAN Discovery

    FIP VLAN Notification

    FCoE VLAN=5

    FCoE VLAN=5

    FCoE Initialization Protocol (FIP) is used to discover the FCoE VLAN between the Initiator and FCF. The Initiator CNA sends FIP VLAN Discovery packets, which get forwarded to the FCoE NPV core switch

    Initiator discovers FCoE VLAN (VLAN=5), which will be used for FCoE communication
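    The FCoE VLAN returned in the FIP VLAN Notification comes from the VLAN-to-VSAN mapping configured on the FCF. A minimal sketch, assuming VLAN 5 / VSAN 5 as in the figure and an illustrative host-facing port:

```
! On the FCF / NPIV core: declare VLAN 5 as the FCoE VLAN for VSAN 5
vlan 5
  fcoe vsan 5

! The host-facing Ethernet port must trunk the FCoE VLAN so that
! FIP VLAN Discovery can return it to the CNA
interface ethernet1/5
  switchport mode trunk
  switchport trunk allowed vlan 1,5
```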


    FCoE NPV: FIP FCF Discovery

    VF-port

    VF-port

    VN-port

    Initiator

    Target

    FC Switch

    FCoE-NPIV Core Switch

    VNP-port

    FC Link

    FCoE Link

    FCoE NPV Switch

    FCoE Target

    FIP FCF Solicitation

    FIP FCF Advertisement

    FCF Discovered: Name=Switch WWN, MAC=FCF-MAC

    The Initiator sends an FCF solicitation message with destination MAC address ALL-FCF-MAC; the FCoE NPV switch forwards it to the FCoE NPV core switch

    The NPV core switch responds with an FCF advertisement containing its own MAC address and fabric-related details

    Initiator detects FCF along with FCF-MAC and switch WWN


    FCoE-NPV Configuration Details

    n5k(config)# feature fcoe-npv

    Proper no-drop QoS needs to be applied to all NPIV VDCs and NPV switches, as shown in earlier slides

    N7K Storage VDC

    n7k-fcoe(config)# feature npiv

    N5Ks with release 5.2.x

    MDS w/

    release 5.2.x

    MDS Global command

    MDS9513-71# feature npiv

    LACP port-channels can be configured between switches for high availability.

    Becomes VNP to VF

    N7K w/

    release 5.2.x

    N5Ks with release 5.2.x
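    Pulling the commands above together, a hedged sketch of the edge/core pairing; the port numbers are illustrative assumptions, and the QoS policy names shown are the default FCoE policies that ship on the Nexus 5000/5500:

```
! N7K storage VDC (NPIV core)
n7k-fcoe(config)# feature npiv

! N5K edge in FCoE-NPV mode
n5k(config)# feature fcoe-npv

! No-drop QoS on the N5K using the shipped default FCoE policies
n5k(config)# system qos
n5k(config-sys-qos)# service-policy type network-qos fcoe-default-nq-policy
n5k(config-sys-qos)# service-policy type qos input fcoe-default-in-policy
n5k(config-sys-qos)# service-policy type queuing input fcoe-default-in-policy
n5k(config-sys-qos)# service-policy type queuing output fcoe-default-out-policy

! LACP port-channel between edge and core for high availability;
! the VNP-to-VF vfc is then bound to the port-channel
n5k(config)# interface ethernet1/1-2
n5k(config-if-range)# channel-group 100 mode active
n5k(config)# interface vfc100
n5k(config-if)# bind interface port-channel 100
n5k(config-if)# switchport mode NP
n5k(config-if)# no shutdown
```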


    Benefits compared across DCB, FIP Snooping, FCoE NPV and FCoE Switch modes:

    Scalability (server connectivity)

    Support for Lossless Ethernet

    FCoE Traffic Engineering

    Security (man-in-the-middle attack)

    FC to FCoE Migration (ease of FCoE device migration from FC fabric to FCoE network)

    FCoE Traffic Load Balancing

    SAN Administration (VSAN, VFC visibility for SAN Administration)

    FCoE NPV Edge Capabilities

    For Your Reference


    FCoE Multi-Tier Fabric Design Using VE_Ports

    With NX-OS 5.0(2)N2, VE_Ports are supported on/between the Nexus 5000 and Nexus 5500 Series Switches

    VE_Ports are run between switches acting as Fibre Channel Forwarders (FCFs)

    VE_Ports are bound to the underlying 10G infrastructure

    VE_Ports can be bound to a single 10GE port

    VE_Ports can be bound to a port-channel interface consisting of multiple 10GE links

    VN

    VE

    VF

    VE

    VF

    VN

    FCoEFC

    All above switches are Nexus 5X00 Series acting as an FCF

    VE

    VE


    What happens when FCFs are connected via VE_Ports

    A 10 Gigabit Ethernet ISL or LACP port-channel must first be established between the FCF switches, expanding the L2 Ethernet network

    LLDP frames with DCBX TLVs, sourced from the MAC addresses of each switch, are exchanged across the Ethernet link to determine capabilities

    FIP control exchange is done between the switches

    FSPF routing is established

    Fibre Channel protocol is exchanged between the FCFs, and a Fibre Channel merge of zones is accomplished, building out the FC SAN

    You now have established a VE_Port between two DCB switches: a Dedicated FCoE Link
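    The steps above can be sketched in CLI. An assumed, minimal example of a VE_Port bound to an LACP port-channel between two FCF switches (interface and VLAN/VSAN numbers are illustrative; both sides are configured symmetrically):

```
! Ethernet ISL: LACP port-channel trunking the FCoE VLAN
interface ethernet1/31-32
  channel-group 200 mode active
interface port-channel 200
  switchport mode trunk
  switchport trunk allowed vlan 1,50

! VE_Port bound to the port-channel
interface vfc200
  bind interface port-channel 200
  switchport mode E
  switchport trunk allowed vsan 50
  no shutdown
```

    Once the link comes up, DCBX, the FIP exchange and the FC fabric build-out (domain ID assignment, FSPF, zone merge) proceed as described above.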


    VE_Port FIP exchange

    A FIP ELP (Exchange Link Parameters) is sent on each VLAN by both the N5K, N7K or MDS. A FIP ACC is sent by the switch for each VLAN.

    Discovery Solicitations & Advertisements from the FCF are sent both ways across the VE_Port, one for each FCoE-mapped VLAN that is trunked on the interface.


    FCoE VE - Fibre Channel E_Port handshake

    Domain ID Assigned by Existing Principal Switch

    Request Domain ID from New Switch

    Exchange Fabric Parameters

    Zone Merge Request

    Enhanced Zoning Merge Request Resource Allocation

    Build Fabric

    FSPF exchanges


    Differences in Trunking VSANs with FCoE VE_Ports

    In FC on the MDS, trunking is used to carry multiple VSANs over the same physical FC link. With FCoE, a physical link is replaced by a virtual link, a pair of MAC addresses.

    FCoE uses assigned MAC addresses that are unique only in the context of a single FC fabric. Carrying multiple fabrics over a single VLAN would then mean having a strong possibility of duplicated MAC addresses.

    In FCoE there cannot be more than one VSAN mapped over a VLAN.

    The net result is that trunking is done at the Ethernet level, not at the FC level.

    FC trunking is not needed, and the Fibre Channel Exchange Switch Capabilities (ESC) & Exchange Port Parameters (EPP) processing is not required to be performed as on the MDS.
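    Because each FCoE VLAN maps to exactly one VSAN, carrying two fabrics over a VE_Port is expressed purely as Ethernet VLAN trunking plus 1:1 VLAN-to-VSAN mappings. A hedged sketch with illustrative VLAN/VSAN numbers:

```
! Each FCoE VLAN maps to exactly one VSAN - never more
vlan 10
  fcoe vsan 10
vlan 20
  fcoe vsan 20

! "Trunking" both VSANs is simply trunking both VLANs on the
! Ethernet side; the vfc then allows the corresponding VSANs
interface port-channel 200
  switchport mode trunk
  switchport trunk allowed vlan 10,20
interface vfc200
  bind interface port-channel 200
  switchport mode E
  switchport trunk allowed vsan 10,20
```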


    Multi-Hop Design: Extending FCoE to MDS SAN from Aggregation

    Ethernet

    Fibre Channel

    Dedicated FCoE Link

    Converged Link

    FCFCoE

    AGG

    Access

    CORE

    L3

    L2

    N7K

    MDS FC SAN A MDS FC SAN B

    N7K

    N7KN7K

    Converged Network into the existing SAN Core

    Leverage FCoE wires between the Fibre Channel SAN and Ethernet DCB switches in the Aggregation layer using dedicated ports

    Maintain the A/B SAN topology with Storage VDC and dedicated wires

    Using N7K director-class switches at the Access layer

    Dedicated FCoE ports between Access and Aggregation, vPCs for data

    Zoning controlled by Core A-B SAN


    Storage on MDS: Extending FCoE to MDS SAN from Access

    Ethernet

    Fibre Channel

    Dedicated FCoE Link

    Converged Link

    AGG

    Access

    CORE

    L3

    L2

    N5K

    MDS FC SAN A

    MDS FC SAN B

    FCFCoE

    N5K

    Converged Network capabilities to the existing SAN Core

    Leverage FCoE wires between the Fibre Channel SAN and Ethernet DCB switches (VE_Ports)

    N5K access switches can be in Fibre Channel switch mode and assigned a Domain ID, or N5K access switches can run in FCoE-NPV mode, with no FC services running locally.

    Zoning controlled by Core A-B SAN


    Migration of Storage to Aggregation

    Different requirements for LAN and SAN network designs

    Factors that will influence this use case: port density; operational roles and change management; storage device types

    Potentially viable for smaller environments

    Larger environments will need dedicated FCoE SAN devices providing target ports

    Use connections to a SAN, or use a storage edge of other FCoE/DCB-capable devices

    Ethernet

    Fibre Channel

    Dedicated FCoE Link

    Converged Link

    AGG

    Access

    CORE

    L3

    L2

    N5K

    SAN A SAN B

    FCoE

    N5K

    Multiple VDCs: FCoE SAN, LAN Agg, LAN Core

    SAN Admins manage Storage VDC

    Zoning, Login Services


    SAN B / SAN A

    Does passing FCoE traffic through a larger aggregation point make sense?

    Multiple links required to support the HA models

    1:1 ratio between access-to-aggregation and aggregation-to-SAN-core links is required

    Need to plan for appropriate capacity in any core VE_Port link

    When are direct Edge-to-Core links for FCoE more cost effective than adding another hop?

    Smaller Edge device more likely to be able to use under-provisioned uplinks

    1:1 ratio of links required unless FCoE-NPV

    FCoE uplink is over-provisioned

    CORE

    Congestion on Agg-Core links

    Requires proper sizing

    FCoE Deployment Considerations: Shared Aggregation/Core Devices

    Ethernet

    Fibre Channel

    Dedicated FCoE Link

    Converged Link


    Data Center Network Manager

    DCNM

    DCNM (Converged)

    DCNM-SAN DESKTOP CLIENT

    Fabric Manager (FMS)

    One converged product

    Single pane of glass (web 2.0 dashboard)

    Common operations (discovery, topology)

    Single installer, role-based access control

    Consistent licensing model (licenses on server)

    Integration with UCS Manager and other OSS tools

    DCNM-LAN

    DESKTOP CLIENT


    LAN/SAN Roles: Data Center Network Manager

    Collaborative management; defined roles & functions; FCoE Wizards

    Nexus 7000

    Tasks Tools

    LAN Admin

    Storage VDC provisioning; VLAN management; Ethernet config (L2, network security, vPC, QoS, etc.); DCB Configuration (VL, PFC, ETS Templates)

    DCNM-LAN

    SAN Admin

    Discovery of Storage VDCs

    VLAN-VSAN mapping (use reserved pool) Wizard; vFC provisioning Wizard; Zoning

    DCNM-SAN

    FCoE Wizards


    FCoE - Design, Operations and Management Best Practices

    Agenda

    Unified Fabric: What and When

    FCoE Protocol Fundamentals

    Nexus FCoE Capabilities

    FCoE Network Requirements and Design Considerations

    DCB & QoS - Ethernet Enhancements

    Single Hop Design

    Multi-Hop Design

    Futures


    Data Center Design with E-SAN

    Same topologies as existing networks, but using Nexus Unified Fabric Ethernet switches for SANs

    Physical and logical separation of LAN and SAN traffic

    Additional physical and logical separation of SAN fabrics

    Ethernet SAN fabric carries FC/FCoE & IP-based storage (iSCSI, NAS, ...)

    Common components: Ethernet capacity and cost

    Ethernet LAN and Ethernet SAN

    CNA

    L2

    L3

    NIC or CNA

    Isolation

    Fabric A Fabric B

    FCoE

    Nexus 7000 / Nexus 5000

    Nexus 7000 / Nexus 5000

    FC

    Ethernet


    Converged Access

    Shared physical, separate logical LAN and SAN traffic at the Access Layer

    Physical and logical separation of LAN and SAN traffic at the Aggregation Layer

    Additional physical and logical separation of SAN fabrics

    Storage VDC for additional management/operation separation

    Higher I/O, HA, fast re-convergence for host LAN traffic

    Sharing Access Layer for LAN and SAN

    L2

    L3

    CNA

    Fabric A Fabric B

    FCFCoE

    Nexus 7000 / Nexus 5000

    MDS 9000

    Converged FCoE link

    Dedicated FCoE link

    FC

    Ethernet


    LAN and SAN traffic share physical switches

    LAN and SAN traffic use dedicated links between switches

    All Access and Aggregation switches are FCoE FCF switches

    Dedicated links between switches are VE_Ports

    Storage VDC for additional operation separation at high-function agg/core

    Improved HA, load sharing and scale for LAN vs. traditional STP topologies

    SAN can utilize higher performance, higher density, lower cost Ethernet switches for the aggregation/core

    Maintaining Dual SAN fabrics with Overlay

    L2

    L3

    CNA, FC, FCoE

    Nexus 7000 / Nexus 5000

    Converged FCoE link

    Dedicated FCoE link

    FC

    Ethernet

    FCFFCF

    FCF

    VE

    Fabric A

    Fabric B

    LAN/SAN

    Converged Network Fabrics with Dedicated Links

    MDS 9500


    Converged Network with Dedicated Links

    FabricPath enabled for LAN traffic

    Dual switch core for SAN A & SAN B

    All Access and Aggregation switches are FCoE FCF switches

    Dedicated links between switches are VE_Ports

    Storage VDC for additional operation separation at high-function agg/core

    Improved HA and scale over vPC (IS-IS, RPF, and N+1 redundancy)

    SAN can utilize higher performance, higher density, lower cost Ethernet switches

    Maintaining Dual SAN fabrics with FabricPath

    L2

    L3

    CNA

    Convergence

    FCFCoE

    Nexus 7000 / Nexus 5000

    Fabric A

    Fabric B

    FCF

    FCF

    FCF

    FCF

    VE

    Converged FCoE link

    Dedicated FCoE link

    FC

    Ethernet

    FabricPath


    Converged FCoE link

    Dedicated FCoE link

    FC

    Ethernet

    FabricPath

    Looking forward: Converged Network

    Single Fabric: SAN Separation at the Access Switch

    L2

    L3

    FCoE

    Nexus 7000 / Nexus 5000

    FCFFCF

    CNA1 CNA2

    10,20 20,30 10 30

    Array1 Array2

    10,20 20,30 10 30

    Fabric A

    Fabric B

    LAN and SAN traffic share physical switches and links

    FabricPath enabled

    All Access switches are FCoE FCF switches

    VE_Ports to each neighbor Access switch

    Single process and database (FabricPath) for forwarding

    Improved (N+1) redundancy for LAN & SAN

    Sharing links increases fabric flexibility and scalability

    Distinct SAN A & B for zoning isolation and multipathing redundancy


    Converged FCoE link

    Dedicated FCoE link

    FC

    Ethernet

    FabricPath

    Looking forward: Converged Network

    Single Fabric FC-BB-6

    L2

    L3

    FCoE

    Nexus 7000 / Nexus 5000

    FCF / FDF

    CNA1 CNA2

    10,20 20,30 10 30

    Array1 Array2

    10,20 20,30 10 30

    Fabric A

    Fabric B

    LAN and SAN traffic share physical switches and links

    FabricPath enabled

    VA_Ports to each neighbor FCF switch

    Single Domain

    FDF to FCF transparent failover

    Single process and database (FabricPath) for forwarding

    Improved (N+1) redundancy for LAN & SAN

    Sharing links increases fabric flexibility and scalability

    Distinct SAN A & B for zoning isolation and multipathing redundancy

    VA_Ports


    Unified Multi-Tier Fabric Design: Current Model 2010 & 2011

    All devices in the server storage path are Fibre Channel aware

    Evolutionary approach to migration to Ethernet transport for block storage

    FC-BB-6 and FabricPath (L2MP) may provide enhancements, but you need to be aware of your ability to evolve

    Ethernet Fabric

    FC Fabric

    FC Domain 7   FC Domain 3   MAC A

    FCID 7.1.1

    FCID 1.1.1   MAC C

    D_ID = FC-ID (1.1.1), S_ID = FC-ID (7.1.1)

    FC Frame

    D_ID = FC-ID (1.1.1), S_ID = FC-ID (7.1.1)

    FC Frame

    Ethernet Fabric

    FC Domain 1MAC B

    FC Storage

    FCoE Frame

    D_ID = FC-ID (1.1.1), S_ID = FC-ID (7.1.1)

    Dest. = MAC B, Srce. = MAC A

    D_ID = FC-ID (1.1.1), S_ID = FC-ID (7.1.1)

    Dest. = MAC C, Srce. = MAC B

    FC link

    VE_port VE_port VF_port VN_port


    Recommended Reading


    Appendix : Larger scale FCoE Design References


    Converged Access Design

    Access (SAN Edge): 2 x Cisco Nexus 7018

    896 ports to Hosts @ 10G

    64 ports to SAN Core @ 10G

    Total 2 Host Edge Switches

    16 16

    Overall Oversubscription 7:1

    896 Host Ports

    64 Storage Ports

    16 16


    Populated w/ 16 line-cards, each card having:

    2 ports @ 10G to SAN Core (Dedicated FCoE)

    2 ports @ 10G to LAN Core (Ethernet)(1)

    28 ports @ 10G to Hosts

    Total of 448 ports @ 10G to Hosts

    SAN Core: 2 x Cisco MDS 9513

    Populated w/ 8 line-cards, each card having:

    4 Storage ports @ 10G

    4 Edge ports @ 10G (Dedicated FCoE)

    Total 64 Storage Ports

    Total 64 Edge Ports

    Analogous to a SAN Core-Edge Design

    Notes: 1. Classical Ethernet LAN connections not shown.

    2. This sample is only SAN A (or B) portion of the network.

    3. Shared Links to the hosts allocate 50% of bandwidth to FCoE traffic.


    Converged Network Design

    Notes: 1. Classical Ethernet LAN connections not shown.

    2. This sample is only SAN A (or B) portion of the network.

    3. Shared Links to the hosts allocate 50% of bandwidth to FCoE traffic.

    Analogous to a SAN Edge-Core-Edge Design

    Aggregation


    SAN Edge: 3 x Cisco MDS 9513

    128 Storage ports @ 10G

    128 Core ports @ 10G

    Total 3 Storage Edge Switches

    Access (SAN Edge): 4 x Cisco Nexus 7018

    1,792 ports to Hosts @ 10G, 128 ports to Core @ 10G

    Total 4 Host Edge Switches

    Overall Oversubscription 7:1

    1,792 Host Ports

    128 Storage Ports

    Populated w/ 16 line-cards, each card having:

    2 ports @ 10G to Core (Dedicated FCoE)

    2 ports @ 10G to Core (Ethernet)(1)

    28 ports @ 10G to Hosts

    Total of 448 ports @ 10G to Hosts

    Aggregation (SAN Core):

    Populated w/ 8 Line-cards:

    256 ports @ 10G

    Populated w/ 11 Line-cards:

    4 ports @ 10G to Storage

    4 ports @ 10G to Core

    Access

    SAN

    Edge


    Converged Network Design

    Access (SAN Edge): 4 x Cisco Nexus 7018, 1,792 ports to Hosts @ 10G, 128 ports to Core @ 10G

    Total 4 Host Edge Switches


    Overall Oversubscription 7:1

    1,792 Host Ports

    128 Storage Ports

    Notes: 1. Classical Ethernet LAN connections not shown.

    2. This sample is only SAN A (or B) portion of the network.

    3. Converged Links to the hosts allocate 50% of bandwidth to FCoE traffic.

    FCoE

    Populated w/ 16 line-cards, each card having:

    2 ports @ 10G to Core (Dedicated FCoE)

    2 ports @ 10G to Core (Ethernet)(1)

    28 ports @ 10G to Hosts

    Total of 448 ports @ 10G to Hosts

    Aggregation (SAN Core)

    Populated w/ 8 Line-cards:

    128 Storage ports @ 10G

    128 Core ports @ 10G

    Access

    Aggregation

    Analogous to a SAN Core-Edge Design


    Complete Your Online Session Evaluation

    Give us your feedback and you could win fabulous prizes. Winners announced daily.

    Receive 20 Passport points for each session evaluation you complete.

    Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center.

    Don't forget to activate your Cisco Live Virtual account for access to all session material, communities, and on-demand and live activities throughout the year. Activate your account at the Cisco booth in the World of Solutions or visit www.ciscolive.com.
