White Paper

Cisco Virtual Topology System: Data Center Automation for Next-Generation Cloud Architectures


Contents

What You Will Learn
Abstract
Trends and Challenges in Data Center Virtualization and NFV
Introduction to Cisco Virtual Topology System
    Policy Plane
    Control Plane
    Cisco Virtual Topology Forwarder
    Hardware Switches: ToR, Access, Spine, and Aggregation
    DCI Gateway
Implementing Tenant Overlay Networks Using Cisco Virtual Topology System
Cisco Virtual Topology System High Availability
Cisco Virtual Topology System EVPN-Based VXLAN Overlay Provisioning
    Prerequisites
    Device Discovery
    Using System Policies to Define Data Center Pods
    BGP EVPN Control-Plane Route Distribution
Cisco Virtual Topology System Use Cases
    Virtualized Infrastructure for Multitenant Data Centers
    Integration of Bare-Metal and Virtual Workloads
    Network Function Virtualization
For More Information


What You Will Learn

This document provides a technical introduction to Cisco® Virtual Topology System (VTS).

Abstract

Service providers and large enterprises are considering cloud architectures to achieve their desired business

outcomes of faster time to market, increased revenue, and lower costs. Flat, scalable cloud architectures increase

the need for robust overlays (virtual networks) to achieve greater agility and mobility and for a vastly simplified

operational model for the underlay physical networks. Software-defined networking (SDN) addresses these requirements by allowing networks, network functions, and services to be assembled programmatically in arbitrary combinations to produce unique, isolated, and secure virtual networks on demand and rapidly, without trading security, performance, or manageability for speed and agility.

Cisco Virtual Topology System, or VTS, is an open, scalable, SDN framework for data center virtual network

provisioning and management. It is designed to address the requirements of today's multi-tenant data centers for

cloud and network function virtualization (NFV) services without sacrificing the security, reliability, and performance

of traditional data center architectures.

Trends and Challenges in Data Center Virtualization and NFV

The fundamentals of the data center are changing, placing new demands on IT. Enterprise and IT workloads are

increasingly moving to the cloud and bring with them new requirements for a variety of flexible cloud services.

Automation, self-service, isolation, and scalability are main tenets of any such cloud architecture. To achieve

higher utilization and lower costs, IT departments are seeking ways to manage heterogeneous pools of servers,

storage, and network resources as a single system and to automate the tasks associated with the consumption of

the resources within these pools. These needs can be met with a highly scalable, policy-based overlay management solution that complements the virtualization and orchestration infrastructure, drastically simplifying the management and operation of overlays by abstracting them from the complex underlying hardware infrastructure.

The transformation and evolution of cloud architectures is applicable to both enterprises and service providers.

Service providers can use this unique opportunity to differentiate themselves from competitors by offering

guaranteed service-level agreements (SLAs) and scalable multi-tenancy to enable enterprises to move

business-critical workloads reliably into the service provider cloud. Enterprises can use this evolution to build a

highly scalable, secure, multitenant private or hybrid cloud, and to transparently move workloads to achieve greater productivity and efficiency while reducing total cost of ownership (TCO).

Traditional data center solutions encounter multiple challenges in trying to address these new requirements.

In addition to placing greater demands on IT staff, VLAN-based designs in the data center often are complex and

don’t scale to meet the requirements of a large multi-tenant data center. Existing automation and orchestration

systems lack the agility and declarative policy abstractions needed to deliver secure, virtualized network services

dynamically. Any solution that addresses the requirements of cloud architectures should focus on deployment and

delivery of applications and services with speed, agility, flexibility, and security, at scale.

Virtual Topology System addresses these requirements by delivering agility, scalable multi-tenancy, and

programmability to the cloud-enabled data center.


Introduction to Cisco Virtual Topology System

The Cisco® Virtual Topology System (VTS) is a standards-based, open, overlay management and provisioning

system for data center networks. It automates fabric provisioning for both physical and virtual workloads. It helps

service providers and large enterprises capitalize on next-generation cloud architectures through automation and

faster service delivery.

Figure 1. Main Attributes of Cisco Virtual Topology System Solution

Virtual Topology System enables the creation of a highly scalable, open, standards-based, multi-tenant solution for

service providers and large enterprises. It automates complex network overlay provisioning and management tasks

through integration with cloud orchestration systems such as OpenStack and VMware vCenter. The solution can

be managed from the embedded GUI or entirely by a set of northbound REST APIs that can be consumed by

orchestration and cloud management systems.
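Because the northbound interface is REST based, the same operations that the GUI performs can be driven by any orchestrator or script. The following minimal Python sketch illustrates the pattern; the endpoint path, payload fields, and credentials are illustrative assumptions, not the documented VTS API.

```python
import requests

# Hypothetical example only: the endpoint path, credentials, and payload fields
# are illustrative assumptions, not the documented VTS northbound API.
VTS_VIP = "https://vts.example.com:8888"   # virtual IP of the VTS cluster (assumed)
AUTH = ("admin", "admin-password")          # placeholder credentials

# Express intent northbound: ask VTS to create an overlay network for a tenant.
payload = {
    "tenant": "tenant-1",
    "network": {"name": "web-tier", "type": "vxlan"},
}

resp = requests.post(
    f"{VTS_VIP}/api/v1/tenants/tenant-1/networks",  # illustrative path
    json=payload,
    auth=AUTH,
    verify=False,   # lab only: skip TLS verification
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```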

Main attributes of the Virtual Topology System solution include:

● Fabric automation: The solution supports faster, more agile network provisioning of a wide range of hardware and software endpoints.

● Programmability: Provides an open, well-documented Representational State Transfer (REST)-based

northbound API, which allows integration with an external orchestration or cloud management system.

Offers extensive southbound integration through platform APIs (Cisco NX-API) or NETCONF/YANG.

● Open, scalable, standards-based solution: The standards-based Multiprotocol Border Gateway Protocol

(MP-BGP) Ethernet Virtual Private Network (EVPN) control plane helps enable flexible workload placement

and mobility in a high-scale multi-tenant environment without compromising performance.

● Investment protection: The solution supports the entire Cisco Nexus® Family portfolio (Cisco Nexus 2000 Series Fabric Extenders and Cisco Nexus 5000, 7000 through 9000 Series Switches).

● High-performance software forwarder: Cisco VTS includes a virtual forwarder known as the Virtual Topology

Forwarder (VTF). The VTF is a lightweight, multi-tenant software data plane designed for high performance

packet processing on x86 servers. It leverages Cisco Vector Packet Processing (VPP) technology and the Intel Data Plane Development Kit (DPDK) for high-performance Layer 2, Layer 3, and VXLAN packet forwarding. It allows

the Virtual Topology System to terminate VXLAN tunnels on host servers by using the VTF as a Software

VXLAN Tunnel Endpoint (VTEP). VTS also supports hybrid overlays by stitching together physical and virtual endpoints into a single VXLAN segment.


Virtual Topology System allows customers to achieve the full potential of their data center investments, providing

the capability to support multiple tenants, on demand, over the same underlying physical infrastructure. This

capability provides a scalable, high-performance network infrastructure for multi-tenant data centers that enables

simplicity, flexibility, and elasticity in both greenfield (new) and brownfield (existing underlay) deployments.

Cisco Virtual Topology System Architecture Overview

At the core of the Virtual Topology System architecture are two main components: the policy plane and the control

plane. These perform core functions such as SDN control, resource allocation, and management

(Figure 2).

Figure 2. Cisco Virtual Topology System Architecture

Policy Plane

Virtual Topology System implements a robust policy plane that enables it to implement a declarative policy model

designed to capture user intent and render it into specific device-level constructs. The solution exposes a

comprehensive set of modular policy constructs that can be flexibly organized into user-defined services for

multiple use cases across service provider and cloud environments. These policy constructs are exposed through

a set of REST APIs that can be consumed by a variety of orchestrators and applications to express user intent, or

that can be instantiated through the Virtual Topology System GUI. Policy models are exposed as system policies or

service policies.


System policies allow administrators to logically group devices into pods within or across data centers to define

administrative domains with common system parameters (for example, BGP-EVPN control plane with distributed

Layer 2 and 3 gateways). This capability provides flexibility and investment protection by supporting brownfield

deployments.

Service policies capture user and application intent, which is then translated by Virtual Topology System into

networking constructs, making complex network service chains and graphs easy to abstract and consume.

The inventory module maintains a database of the available physical entities (for example, data center interconnect

[DCI] routers and top-of-rack [ToR], spine, and border-leaf switches) and virtual entities (for example, VTFs) in the

Virtual Topology System domain. The database also includes interconnections between these entities and details

about all services instantiated within a Virtual Topology System domain.

The resource management module manages all available resource pools in the Virtual Topology System domain,

including VLANs, VXLAN Network Identifiers (VNIs), IP addresses, multicast groups, and Network Address Translation (NAT) IP address pools.
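Conceptually, the resource manager hands out identifiers from finite pools and records their owners. The short Python sketch below is an illustrative model of such a pool; the ranges and names are assumptions, not VTS internals.

```python
class ResourcePool:
    """Illustrative model of a finite identifier pool (for example, VLANs or VNIs)."""

    def __init__(self, name, start, end):
        self.name = name
        self.free = set(range(start, end + 1))
        self.allocated = {}  # identifier -> owner (tenant/network)

    def allocate(self, owner):
        if not self.free:
            raise RuntimeError(f"{self.name} pool exhausted")
        ident = self.free.pop()
        self.allocated[ident] = owner
        return ident

    def release(self, ident):
        owner = self.allocated.pop(ident)
        self.free.add(ident)
        return owner


# Example pools similar to those listed above (ranges are illustrative).
vlan_pool = ResourcePool("VLAN", 1000, 1999)
vni_pool = ResourcePool("VXLAN VNI", 5000, 5999)

vlan = vlan_pool.allocate("tenant-1/web-tier")
vni = vni_pool.allocate("tenant-1/web-tier")
print(f"Assigned VLAN {vlan} and VNI {vni}")
```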

Control Plane

The control plane module serves as the SDN control subsystem that programs the various data planes: the VTF

residing on the x86 servers, hardware ToR switches, DCI gateways, etc. The control plane hosts a full-function

Cisco IOS® XRv Software instance that provides route peering capabilities between the DCI gateways or to a BGP

route reflector. Cisco IOS XRv is the virtualized version of Cisco’s award-winning Cisco IOS XR Software, which is

among the most widely deployed and proven network operating systems, running in hundreds of service provider

networks for more than a decade. Cisco IOS XRv brings a feature-rich, mature, and stable BGP code base to the

Virtual Topology System solution, helping ensure scalable, optimal operation. The control plane enables an

MP-BGP EVPN-based control plane for VXLAN overlays originating from ToR switches or software VXLAN tunnel

endpoints (VTEPs).

The device management module enables robust device configuration and management capabilities within Virtual

Topology System, with multiprotocol support for multivendor environments.

Cisco Virtual Topology Forwarder

Virtual Topology System can be deployed with a Virtual Topology Forwarder (VTF). The VTF is a lightweight,

multi-tenant software data plane designed for high performance packet processing on x86 servers. VTF uses an

innovative technology from Cisco called Vector Packet Processing, or VPP. VPP is a full-featured networking stack

with a highly optimized software forwarding engine. VPP was recently open-sourced as FD.io, a project managed by the Linux Foundation. VTF leverages VPP technology and the Intel Data Plane Development Kit (DPDK) for high-performance Layer 2, Layer 3, and VXLAN packet forwarding, allowing up to 10 Gbps of throughput on a single CPU core. The VTF is

multithreaded, and customers can allocate additional CPU cores to scale its performance.

VTF allows the Virtual Topology System to terminate VXLAN tunnels on host servers by using the VTF as a

Software VXLAN Tunnel Endpoint (VTEP). VTS also supports hybrid overlays by stitching together physical and

virtual endpoints into a single VXLAN segment.

VTF provides full-featured networking stack functions, including Layer 2 forwarding and Layer 3 routing for IPv4,

IPv6, Policy-Based Routing (PBR), VXLAN, and Multiprotocol Label Switching over generic routing encapsulation

(MPLSoGRE) overlays. Intel DPDK and Cisco VPP technologies are complementary. VTF uses the benefits of

both to deliver the highest-performance multitenant software forwarder in the industry.


Hardware Switches: ToR, Access, Spine, and Aggregation

The Virtual Topology System extends a robust set of Software Defined Networking (SDN) capabilities to the entire

Cisco Nexus portfolio by bringing automation and programmability to the Cisco Nexus 2000 Series Fabric

Extenders and Cisco Nexus 5000, 7000, and 9000 Series Switches. VTS supports overlay provisioning, bare-metal

devices, and integration of physical and virtual workloads. The solution uses an MP-BGP EVPN control plane

between the Virtual Topology System control plane and the ToR switch, with VXLAN-based software overlays in

the data plane. The solution also supports a flood-and-learn mechanism for VXLAN, which enables hardware

switches that do not support BGP EVPN to be deployed as part of the virtual topology that VTS controls.

DCI Gateway

The DCI router provides connectivity between the external network, such as the WAN, and applications running on

virtual machines, containers, bare-metal workloads, and VNFs hosted in the data center. It implements a virtual

routing and forwarding (VRF) table for the tenant while performing the packet encapsulation and decapsulation

required between the DCI and the VTEPs. On the WAN, the DCI can be connected to a service provider MPLS

backbone or to Internet service providers providing connectivity to the public Internet.

Virtual Topology System delivers a highly scalable data center SDN solution to instantiate flexible virtual network

topology for a tenant network on demand. The solution can span multiple domains across single or multiple data

centers and across both virtual and physical workloads. The use of the solution across multiple domains or data

centers is enabled by the use of BGP federation (Figure 3).

Figure 3. Cisco Virtual Topology System Architecture in a Multiple-Data Center Deployment


Implementing Tenant Overlay Networks Using Cisco Virtual Topology System

Network automation and self-service are main tenets of the Virtual Topology System solution. Instant availability of

computing and application workloads in the virtualized data center has made network provisioning a major

bottleneck, leading to the belief that networks are impediments to a software-defined data center. Virtual Topology

System removes these barriers through the use of overlay connectivity orchestrated through an SDN-based control

plane.

The solution uses VXLAN to overcome scale limits in the data center and to better segment the network. VXLAN is

designed to provide the same Ethernet Layer 2 network services as VLAN does today, but with greater extensibility

and flexibility. VXLAN is a MAC address in User Datagram Protocol (MAC-in-UDP) encapsulation scheme that

allows flexible placement of multitenant segments throughout the data center and allows 16 million Layer 2

(VXLAN) segment identifiers to coexist in the same administrative domain. The dependence on a Layer 3 underlay

network allows VXLAN to take complete advantage of Layer 3 routing, equal-cost multipath (ECMP) routing, and

link aggregation protocols. Virtual Topology System supports hardware and software VTEPs to better segment the

data center network.
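The 24-bit VXLAN network identifier is what yields the roughly 16 million segments mentioned above. As a purely illustrative aside (not part of VTS), the short Python sketch below builds the 8-byte VXLAN header defined in RFC 7348 to make that arithmetic concrete.

```python
import struct

VXLAN_UDP_PORT = 4789   # IANA-assigned destination port for VXLAN
VNI_BITS = 24


def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348.

    Layout: 8 bits of flags (0x08 = 'VNI present'), 24 reserved bits,
    24-bit VNI, 8 reserved bits.
    """
    if not 0 <= vni < 2 ** VNI_BITS:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08 << 24              # I flag set, reserved bits zero
    return struct.pack("!II", flags, vni << 8)


print(f"Maximum segments: {2 ** VNI_BITS:,}")  # 16,777,216 (~16 million)
print(vxlan_header(5000).hex())                 # header for VNI 5000
```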

Early implementations of VXLAN use network-based flooding for MAC address resolution and learning.

The flood-and-learn model is often deemed not scalable. It also doesn’t take full advantage of the benefits of an

underlying IP network that could enable more efficient behavior in the underlay, including the capability to contain

failure domains and scope the network following a routed model. Early approaches adopted in the industry to

address this problem included the use of OpenFlow, coupled with extensions in Open vSwitch Database (OVSDB)

Protocol by SDN controllers to manage state in the overlay network. OVSDB is an OpenFlow configuration protocol

designed for managing Open vSwitch (OVS) deployments and can be used to create, configure, and delete ports

and tunnels for VXLAN. However, this capability requires that both the controller and the vSwitch understand the

same extensions in OVSDB, and interoperability may be difficult because these extensions may be proprietary.

A better approach, now validated by the industry, is the use of an MP-BGP EVPN-based control plane to manage the VXLAN overlay. A main advantage of the BGP model is that it provides a distributed network database, built to federate and proven to scale by the wide reach of the Internet, which is based on this model. This

approach contrasts with the centralized and administratively scoped approach of an OpenFlow controller, which

does not lend itself well to federation.

Virtual Topology System implements the highly scalable MP-BGP with the standards-based EVPN address family

as the overlay control plane to:

● Distribute attached host MAC and IP addresses and avoid the need for the flood-and-learn mechanism for

broadcast, unknown unicast, and multicast traffic

● Support multi-destination traffic by either using the multicast capabilities of the underlay or using unicast

ingress replication over a unicast network core (without multicast) for forwarding Layer 2 multicast and

broadcast packets

● Terminate Address Resolution Protocol (ARP) requests early and avoid flooding


Control-plane separation is also maintained among the interconnected VXLAN networks. As with large-scale BGP

implementations, capabilities such as route filtering and route reflection can be used to provide greater flexibility

and scalability in deployment. Virtual Topology System supports both VXLAN overlays using the BGP EVPN

control plane and VXLAN overlays using IP Multicast-based flood-and-learn techniques. The BGP EVPN solution is

the preferred option, and it can be flexibly implemented using the infrastructure policy constructs within the Virtual

Topology System environment.

Cisco Virtual Topology System High Availability

The Virtual Topology System solution is designed to support redundancy, with two solution instances running on

separate hosts in an active-standby configuration. During initial setup, each instance is configured with both an

underlay IP address and a virtual IP address. Virtual Router Redundancy Protocol (VRRP) is used between the

instances to determine which instance is active. REST API calls from northbound systems are performed on the

virtual IP address of the Virtual Topology System. The active-instance data is synchronized with the standby

instance after each transaction to help ensure consistency of the control-plane information and to accelerate failover

after a failure. BGP peering is established from both Virtual Topology System instances for the distribution of

tenant-specific routes. During the switchover, nonstop forwarding (NSF) and graceful restart help ensure that

services are not disrupted.

Cisco Virtual Topology System EVPN-Based VXLAN Overlay Provisioning

This example presents the steps for establishing a simple VXLAN overlay network with hardware and software

VTEPs using a BGP EVPN control plane.

Prerequisites

A certain amount of day-zero configuration is essential to prepare the physical environment to be managed by

Virtual Topology System to build virtual overlays. On the ToR switches, the following day-zero configuration is

considered essential (a sample automation sketch follows the list):

● Configure Layer 2 PortChannel and Layer 2 trunk port between ToR switches

● Configure virtual PortChannel (vPC) to server host

● Configure Layer 2 trunk ports on physical and vPC interfaces

● Configure loopback-0 interface with IP address

● Configure underlay control protocol on ToR switches (can be Interior Gateway Protocol [IGP] or BGP)

● Configure infrastructure VLAN and switch virtual interface (SVI) VLAN and allow Dynamic Host

Configuration Protocol (DHCP) relay

● Enable the VXLAN, EVPN, and vPC feature sets
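As a hedged illustration of how a few of these day-zero items could be pushed programmatically (VTS itself drives switches through NX-API or NETCONF/YANG southbound, as noted earlier), the following Python sketch sends some of the commands above to a ToR switch through the NX-API interface. The switch address, credentials, and exact CLI lines are lab assumptions.

```python
import requests

# Illustrative lab sketch only: the switch address, credentials, and CLI lines
# are assumptions; they are not configuration generated by VTS itself.
SWITCH_URL = "https://tor-1.example.com/ins"   # NX-API endpoint on the ToR switch (assumed)
AUTH = ("admin", "admin-password")              # placeholder credentials
HEADERS = {"content-type": "application/json"}

# A few of the day-zero items from the list above, as NX-OS configuration commands.
commands = " ;".join([
    "feature bgp",               # underlay/overlay control protocol
    "feature vpc",               # virtual PortChannel
    "feature nv overlay",        # VXLAN
    "interface loopback0",
    "ip address 10.0.0.11/32",   # loopback/VTEP source address (assumed)
])

payload = {
    "ins_api": {
        "version": "1.0",
        "type": "cli_conf",      # configuration-mode commands
        "chunk": "0",
        "sid": "1",
        "input": commands,
        "output_format": "json",
    }
}

resp = requests.post(SWITCH_URL, json=payload, headers=HEADERS, auth=AUTH,
                     verify=False, timeout=30)
resp.raise_for_status()
print(resp.json())
```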

Device Discovery

Virtual Topology System supports network topology and server host discovery through Link Layer Discovery

Protocol (LLDP). The solution automatically discovers the network topology in the data center and lets users export

the device details as a comma-separated values (CSV) file that contains the port-to-server mapping. Users can

modify and import this CSV file to add details to the inventory. After the file has been imported, users can use the

network inventory table in Virtual Topology System to view information about a device and the Host Inventory

section to view details about the hosts connected to the switches.
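The exported CSV can be adjusted with a short script before it is re-imported. The sketch below is one way to do that in Python; the column names are assumptions, since the actual export format depends on the VTS release, so check the header row of the exported file first.

```python
import csv

# Column names are illustrative assumptions; verify them against the header
# row of the file actually exported from VTS before re-importing it.
SRC = "vts_inventory_export.csv"
DST = "vts_inventory_import.csv"

with open(SRC, newline="") as src, open(DST, "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # Fill in a host name for ports that were discovered without one.
        if not row.get("host_name"):
            row["host_name"] = f"compute-{row.get('switch_port', 'unknown')}"
        writer.writerow(row)
```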


Using System Policies to Define Data Center Pods

After endpoints are added to the inventory, users can define data center pods to group hardware and software

VTEPs, Layer 3 gateways, and DCI gateways into administrative domains with similar properties. For example,

one data center pod could implement the EVPN control plane with distributed Layer 2 and 3 gateways, and another

pod could implement flood-and-learn mode with a centralized Layer 3 gateway.

Step 1. The user creates a custom system template: Template1. (The user could also use one of the predefined

system templates provided in Virtual Topology System.)

Step 2. The custom system template allows the user to select:

● BGP EVPN or flood-and-learn as the preferred learning mechanism

● Distributed Layer 2 and 3 or centralized Layer 3

● Replication mechanism (multicast or ingress replication)

Step 3. The user creates a new data center pod A and:

● Attaches the custom system template: Template1

● Selects and imports devices from the device inventory into the Layer 2 gateway (L2GW) group

● Selects and imports devices from the device inventory into the Layer 3 gateway (L3GW) group and

assigns additional attributes to each L3GW device: Layer 3 only, border leaf, or DCI

Step 4. The user commits the changes to the network group. Virtual Topology System then automatically pushes

all the relevant configuration information to the respective ToR switches, Cisco IOS XRv route reflectors,

and DCI gateways. At this point, the pod is ready to build overlay networks based on the intent defined by

the service policy or through a VMM or orchestration environment (Figure 4; a REST-based sketch of this flow follows the figure).

Figure 4. Sample Cisco Virtual Topology System Logical Groups (Data Center Pods)
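The same pod definition can also be driven through the northbound REST API rather than the GUI. The Python sketch below mirrors Steps 1 through 4; every endpoint path and field name is a hypothetical placeholder used only to illustrate the flow, not the documented VTS API.

```python
import requests

# All endpoint paths and field names below are hypothetical, used only to
# illustrate the Step 1-4 flow; they are not the documented VTS API.
VTS = "https://vts.example.com:8888/api/v1"
session = requests.Session()
session.auth = ("admin", "admin-password")
session.verify = False   # lab only

# Steps 1-2: create a system template selecting BGP EVPN, distributed
# Layer 2/3 gateways, and ingress replication.
template = {
    "name": "Template1",
    "learning": "bgp-evpn",             # vs. "flood-and-learn"
    "gateway_mode": "distributed-l2l3", # vs. centralized Layer 3
    "replication": "ingress",           # vs. "multicast"
}
session.post(f"{VTS}/system-templates", json=template, timeout=30).raise_for_status()

# Step 3: create pod A, attach the template, and place discovered devices
# into the L2GW and L3GW groups (device names come from the inventory).
pod = {
    "name": "pod-A",
    "system_template": "Template1",
    "l2gw_devices": ["tor-1", "tor-2"],
    "l3gw_devices": [{"name": "border-leaf-1", "role": "dci"}],
}
session.post(f"{VTS}/pods", json=pod, timeout=30).raise_for_status()

# Step 4: commit, which pushes configuration to the ToR switches,
# Cisco IOS XRv route reflectors, and DCI gateways.
session.post(f"{VTS}/pods/pod-A/commit", timeout=30).raise_for_status()
```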


For this example, assume that the networks, subnets, hosts, and routers are created through OpenStack. Also

assume that the user creates two networks, attaches one or more virtual machines to each network, and connects

those networks through a router. The routing element will be implemented as a distributed Layer 2 and 3 gateway

within the data center infrastructure, with anycast gateways provisioned on ToR switches and VTFs. Figure 5

shows the step sequence; a sketch of the OpenStack side of this workflow follows the numbered steps.

Figure 5. Sample Cisco Virtual Topology System Deployment to Understand the Data Plane

1. Tenant and tenant networks are created in OpenStack.

2. The OpenStack Neutron plug-in intercepts the request and creates tenant and tenant networks within Virtual

Topology System.

3. A VXLAN VNID is assigned to each network.

4. The OpenStack user attaches virtual machines to the networks.

5. Information about each new virtual machine is passed to VTS through the VTS Neutron plug-in.

6. Virtual Topology System provisions the VTEP on the respective ToR switches and configures VLANs to the computing host.

7. The Neutron agent on the host programs the vSwitch with the correct VLAN.


8. If Layer 3 connectivity is needed, the OpenStack user creates a router and attaches interfaces to the two networks. Virtual Topology System provisions a Layer 3 VXLAN that spans all ToR switches and VTFs supporting those networks. It also provisions the SVI with an anycast gateway configuration under the VLAN interface.
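From the OpenStack side, the workflow above is ordinary Neutron and Nova operations; the VTS Neutron plug-in handles the overlay provisioning behind the scenes. Below is a minimal sketch using the openstacksdk library, in which the cloud name, image, flavor, and addressing are illustrative assumptions.

```python
import openstack

# Cloud name, image, flavor, and addressing are assumptions for illustration.
conn = openstack.connect(cloud="mycloud")

# Steps 1-3: create two tenant networks (VTS assigns a VNID to each behind the scenes).
nets, subnets = [], []
for name, cidr in [("web", "10.10.1.0/24"), ("app", "10.10.2.0/24")]:
    net = conn.network.create_network(name=name)
    subnets.append(conn.network.create_subnet(
        network_id=net.id, name=f"{name}-subnet", ip_version=4, cidr=cidr))
    nets.append(net)

# Steps 4-7: attach a VM to each network; the VTS plug-in learns the placement
# and provisions the VTEP and VLAN on the corresponding ToR switch or VTF.
for net in nets:
    conn.compute.create_server(
        name=f"vm-{net.name}", image_id="IMAGE_ID", flavor_id="FLAVOR_ID",
        networks=[{"uuid": net.id}])

# Step 8: connect the two networks with a router; VTS renders this as a
# distributed anycast gateway on the ToR switches and VTFs.
router = conn.network.create_router(name="t1-router")
for subnet in subnets:
    conn.network.add_interface_to_router(router, subnet_id=subnet.id)
```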

BGP EVPN Control-Plane Route Distribution

Virtual Topology System implements a highly scalable MP-BGP extension called EVPN as the overlay control

plane to:

● Distribute attached host MAC and IP addresses and avoid the need for the flood-and-learn mechanism for

broadcast, unknown unicast, and multicast traffic

● Use a unicast network core (without multicast) and ingress replication for forwarding Layer 2 multicast and

broadcast packets

● Terminate ARP requests early and avoid flooding

The BGP route reflector could be implemented in the Virtual Topology System using the Cisco IOS XRv virtual

machine in the control plane or on the network spine. Figure 6 shows the steps for EVPN control-plane route

distribution.


Figure 6. BGP EVPN Control-Plane Route Distribution

Cisco Virtual Topology System Use Cases

Virtualized Infrastructure for Multitenant Data Centers

A primary use case addressed by the Virtual Topology System solution is traffic separation within a multi-tenant

data center (Figure 7). For example, large enterprises have traditionally built and maintained separate physical

infrastructures to meet compliance requirements. They maintain separate infrastructures for different departments’

networks within the organization. Although this approach provides the traffic separation that is needed for

compliance, it also generates vast resource waste, because departments cannot use physical resources that belong to other departments. This approach is contrary to the oversubscription model that is well established in the service

provider space.

The Virtual Topology System software overlay solution allows customers to tap into the unused computing

capacity, enabling greater utilization and better return on investment (ROI) while still meeting compliance

requirements. For example, a data center network may have two tenant networks: Tenant 1 and Tenant 2. Both

tenant networks terminate at the DCI, which allows MPLS, Cisco Locator/ID Separation Protocol (LISP), or plain IP

WAN connectivity to the rest of the network. On the VTFs that reside on the computing hosts, a mesh of VXLAN

tunnels is automatically created between all the VTFs and between the VTFs and the DCI gateway. The VTFs

host the tenant networks for one or multiple tenants and provide traffic separation for the traffic belonging to

different tenants, as well as encapsulation and decapsulation between the DCI gateway and the VTFs.

The DCI routers peer with Virtual Topology System, which also acts as the route reflector.


Figure 7. Data Center Virtualization Using Software Overlay with Cisco Virtual Topology System

Integration of Bare-Metal and Virtual Workloads

Bare-metal integration is the other main use case that the Virtual Topology System solution supports (Figure 8). This use case serves as a baseline for building network connectivity between physical and virtual workloads in the data center. An MP-BGP EVPN-based control plane between Virtual Topology System and ToR switches such as

Cisco Nexus 9000 Series Switches can be used for this scenario, and a VXLAN-based software overlay can be

used in the data plane. The VXLAN overlay solution allows physical VTEPs for both virtualized and bare-metal

servers through the use of physical and virtual integrated overlays and allows DCI and services integration.

The VXLAN-based software overlay supports two variants of the solution: a VXLAN overlay with a BGP EVPN control

plane and a VXLAN overlay with the IP Multicast flood-and-learn mechanism.

One topology supported for this solution deploys distributed Layer 2 and 3 gateways. In this case, the Layer 2 and

3 boundary for the server or virtual machines resides on the overlay network gateways that are directly attached to

the physical servers. In the physical topology, these reside on the ToR switches in each server rack. Each ToR

switch then becomes the Layer 2 and 3 gateway for the virtual machines that are directly attached to it. Virtual

machines belonging to the same subnet may also span racks, so the Layer 3 gateway functions for these subnets

will be distributed across the ToR switches (anycast gateway). This overlay network extends between the

distributed gateways across the spine and aggregation switches.

The ToR switches also provide VXLAN physical gateway termination. Examples of use cases for physical gateway

overlay termination include:

● Physical gateway for virtualized servers: In this case, the server has a Layer 2 vSwitch and uses VLANs to

segment traffic belonging to different tenants. Traffic for the different tenants is tagged with the required

VLANs and terminates on the physical gateway.

● Physical gateway for bare-metal servers: In this case, each VLAN or group of VLANs is assigned to a

specific bare-metal endpoint or access network.

● Physical gateway stitching: This case provides the functions that are needed to stitch the overlay into the

physical network for the Internet, VPNs, or services in scenarios such as a DCI or border-services leaf.


Another topology that the solution supports is the use of separate Layer 2 and 3 gateways. In this case, the ToR

switches that are physically connected to the servers may not support Layer 3 functions, providing distributed

Layer 2 gateway functions only. The aggregation layer in this case provides the Layer 3 gateway functions.

Virtual Topology System also provides software VTEP functions on the VTF running on the servers. The virtual

machines that reside on one server can communicate with virtual machines on the other server in this scenario

using the VXLAN-based software overlay. Similarly, Virtual Topology System supports VXLAN overlay from the

software VTEP (VTF) to the hardware VTEP (ToR switch such as the Cisco Nexus 9000 Series) so that physical

and virtual workloads can communicate transparently.

For a VXLAN-based solution, the DCI provides the interconnection of the virtual overlay within the data center to

the required external VPN or physical network. The solution supports an external VRF-based Layer 3 interconnect

model in which the virtual overlay segment terminates in a VRF instance on one or more DCIs, providing access to

the external network or VPN.

Figure 8. Integration of Bare-Metal and Virtualized Workloads in Cisco Virtual Topology System

Network Function Virtualization

Network function virtualization, or NFV, is another major use of the Virtual Topology System solution (Figure 9).

The solution plays the role of an SDN subsystem in the Cisco Network Function Virtualization orchestration

solution, to help the NFV orchestrator programmatically instantiate tenant networks and service chains along with

their associated policies.

In this architecture, Virtual Topology System performs the role of the virtualized infrastructure manager (VIM) for

the network, along with OpenStack or another VMM. The VTF performs the role of virtualized network layer as a

multitenant software forwarder running on the x86 servers. Additionally, underlay switches such as the Cisco

Nexus 9000 Series or others, along with the DCI gateway, may be a part of the solution in the NFV architecture to

deliver bare-metal integration and WAN integration capabilities. Virtual Topology System is fully integrated with the

model-based Cisco Network Services Orchestrator (NSO), powered by Tail-f Systems, to perform the role of NFV

orchestration. Figure 9 shows an NFV reference architecture in an OpenStack environment with VTS as

part of the Virtual Infrastructure Manager, delivering capabilities such as physical to virtual service chaining and

seamless integration with the WAN and the Internet.


Figure 9. Network Function Virtualization with Cisco Virtual Topology System

For More Information

Please contact your Cisco account team for more information about the Cisco Virtual Topology System solution.

For more information about the Cisco Evolved Services Platform, please visit

http://www.cisco.com/c/en/us/solutions/service-provider/evolved-services-platform/index.html.

© 2017 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.

Printed in USA C11-734904-01 04/17