SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)


Transcript of SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Page 1: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

SDN Fundamentals for NFV, OpenStack, and Containers

Nir Yechiel, Senior Technical Product Manager - OpenStack, Red Hat

Rashid Khan, Senior Development Manager - Platform Networking, Red Hat

Page 2: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Agenda

● Setting context: OpenStack, SDN, and NFV
  ○ OpenStack overview
  ○ NFV Infrastructure
  ○ Red Hat approach to SDN/NFV

● Key RHEL networking features and tools…
  ○ ...and how they apply to OpenStack Networking (Neutron)

● Accelerating the datapath for modern applications

● Q&A

Page 3: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

SETTING CONTEXT

OPENSTACK, SDN, AND NFV

Page 4: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

What is OpenStack?

● Fully open-source cloud “operating system”

● Comprised of several open source sub-projects

● Provides building blocks to create an IaaS cloud

● Governed by the vendor-agnostic OpenStack Foundation

● Enormous market momentum

Page 5: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Red Hat OpenStack Platform

February 2015: Red Hat OpenStack Platform 6 (Juno) on RHEL 7

August 2015: Red Hat OpenStack Platform 7 (Kilo) on RHEL 7

April 2016: Red Hat OpenStack Platform 8 (Liberty) on RHEL 7

Source: https://access.redhat.com/support/policy/updates/openstack/platform

Page 6: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Red Hat OpenStack Platform 8

Page 7: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

What is Neutron?

● Fully supported and integrated OpenStack project
● Exposes an API for defining rich network configuration
● Offers multi-tenancy with self-service
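As an illustration of the self-service model (not taken from the slides), a tenant could build the topology shown on the next slide with a handful of API calls. A minimal sketch using the 2016-era neutron CLI, with illustrative network and router names and assuming an external network called public-net already exists:

neutron net-create private-net                                    # tenant network
neutron subnet-create private-net 172.17.17.0/24 --name private-subnet
neutron router-create router1
neutron router-interface-add router1 private-subnet               # tenant default gateway
neutron router-gateway-set router1 public-net                     # uplink used for NAT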

Page 8: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Neutron - Tenant View

Page 9: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

What Neutron Is Not

● Neutron does not implement the networks
  ○ Uses the concept of plugins

[Diagram: Neutron Server architecture. The Core API exposes Network, Port, and Subnet; the Resource and Attribute Extension API adds Provider Network, Router, Quotas, Security Groups, LBaaS, FWaaS, and more. These are implemented by a Core Plugin (e.g. ML2) and Service Plugins (L3, FW, VPN, ...).]

Page 10: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Neutron Key Features

● L2 connectivity
● IP Address Management
● Security Groups
● L3 routing
● External gateway, NAT and floating IPs
● Load balancing, VPN and firewall

Page 11: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Red Hat Neutron Focus

● ML2 with Open vSwitch Mechanism Driver (today)*
  ○ Overlay networks with VXLAN

● ML2 with OpenDaylight Mechanism Driver (roadmap)

● Broad range of commercial partners
  ○ Through our certification program

*Focus of this presentation

Page 12: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Software Defined Networking

● Different things to different people
  ○ Separation of control plane and forwarding plane
  ○ Open standard protocols
  ○ Well-known, stable API
  ○ Application awareness
  ○ Programmability
  ○ Agility

Page 13: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Software Defined Networking

● OpenFlow != SDN

● We see the opportunity to push the virtual network edge to where it belongs – the hypervisor

Page 14: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Network Functions Virtualization

[Diagram: the classical approach vs. the NFV approach.
Classical: fragmented, non-commodity hardware appliances (SGSN, GGSN, PE Router, BRAS, RAN Nodes, DPI, Firewall, Carrier-Grade NAT, Tester/QoE, Router, CDN, Session Border Control, WAN Acceleration); vertical design; physical install per appliance, per site.
NFV: virtual appliances from Independent Software Vendors; orchestrated, automatic, remote install; running on standard high-volume servers, storage, and Ethernet switches.]

Page 15: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Network Functions Virtualization

Source: NFV ISG

Page 16: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Network Functions Virtualization

Source: NFV ISG

NFVI

Page 17: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Network Functions Virtualization

Source: NFV ISG

VNF

Page 18: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Network Functions Virtualization

Source: NFV ISG

MANO

Page 19: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

NFV Loves OpenStack

● OpenStack is the de facto choice for the VIM in early NFV deployments

● OpenStack's pluggable architecture is key for NFV
  ○ e.g. storage, networking

● Open standards and multi-vendor

● Many network equipment providers (NEPs) and large telco operators are already involved in the community

Page 20: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

NFV Infrastructure - Just OpenStack?

[Diagram: the NFV infrastructure is more than OpenStack alone; the stack includes OpenStack, libvirt, DPDK, Open vSwitch, and QEMU/KVM, all on top of Linux.]

Page 21: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Red Hat NFV Focus

● Our focus is on the infrastructure:
  ○ NFVI
  ○ VIM
  ○ Enablement for VNFs

● Partner with ISVs for MANO

● Partner with hardware providers

● No separate product for NFV
  ○ We don't make an OpenStack version for NFV, but rather make NFV features available in Red Hat OpenStack Platform and across the entire stack

Page 22: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

An SDN/NFV Stack

It's all about:

➔ Developing the right capabilities at the platform/operating system level
➔ Leveraging and exposing those capabilities across the entire stack

Single Root I/O Virtualization (SR-IOV)

Open vSwitch

Overlay networking technologies

Network namespaces

Data Plane Development Kit (DPDK)

Multiqueue support for Virtio-net

iptables conntrack

...

Page 23: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

KEY RHEL NETWORKING FEATURES

AND HOW THEY APPLY TO NEUTRON

Page 24: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

SR-IOV

Page 25: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Single Root I/O Virtualization (SR-IOV)

● Allows a device, such as a network adapter, to separate access to its resources among various PCIe hardware functions: physical function (PF) and one or more virtual functions (VFs)

● Enables network traffic to bypass the software layer of the hypervisor and flow directly between the VF and the virtual machine

● Near line-rate performance without the need to dedicate a separate NIC to each individual virtual machine

● Supported with RHEL 7, available starting with Red Hat OpenStack Platform 6
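As background (not on the slide), virtual functions are typically carved out of a physical function on a RHEL 7 host through sysfs; a minimal sketch, with an illustrative interface name:

cat /sys/class/net/ens1f0/device/sriov_totalvfs     # how many VFs the NIC can expose
echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs  # create four VFs on the PF
lspci | grep -i "virtual function"                  # the VFs show up as separate PCIe functions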

Page 26: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

SR-IOV Networking in OpenStack

● Implemented with a generic ML2 driver (sriovnicswitch); see the configuration sketch below
  ○ NIC vendor defined by the operator through vendor_id:product_id
  ○ New Neutron port type (vnic_type 'direct')
  ○ VLAN and flat networks are supported

● Optional agent for advanced capabilities (requires NIC support)
  ○ Reflect port status (Red Hat OpenStack Platform 6)
  ○ QoS rate-limiting (Red Hat OpenStack Platform 8)

● Supported network adapters are inherited from RHEL
  ○ See https://access.redhat.com/articles/1390483

● Coming soon with Red Hat OpenStack Platform: PF Passthrough
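Tying the bullets above together, a hedged configuration sketch of the sriovnicswitch driver as it looked around Red Hat OpenStack Platform 6-8; the vendor/product ID, device names, and network names are illustrative:

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch

# /etc/neutron/plugins/ml2/ml2_conf_sriov.ini
[ml2_sriov]
supported_pci_vendor_devs = 8086:10ed               # vendor_id:product_id of the VF

# nova.conf on the compute node: which PF maps to which physical network
pci_passthrough_whitelist = {"devname": "ens1f0", "physical_network": "physnet1"}

# create a direct (SR-IOV) port and boot an instance with it
neutron port-create sriov-net --name sriov-port --binding:vnic_type direct
nova boot --flavor m1.small --image rhel7 --nic port-id=<PORT_ID> vm-with-vf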

Page 27: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Open vSwitch

Page 28: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Open vSwitch (OVS)

● Multi-layer software switch designed to forward traffic between virtual machines and physical or logical networks

● Supports traffic isolation using overlay technologies (GRE, VXLAN) and 802.1Q VLANs

● Highlights:
  ○ Multi-threaded user space switching daemon for increased scalability
  ○ Support for wildcard flows in kernel datapath
  ○ Kernel based hardware offload for GRE and VXLAN
  ○ OpenFlow and OVSDB management protocols

Page 29: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Open vSwitch (OVS)

● The kernel module ships with RHEL, but the vSwitch itself is supported by Red Hat as part of Red Hat OpenStack Platform and OpenShift

● Integrated with OpenStack via a Neutron ML2 driver and associated agent

● New with Red Hat OpenStack Platform 8: technology preview offering of DPDK accelerated Open vSwitch (more later)

Page 30: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

OVS - L2 Connectivity in OpenStack

● Between VMs on the same Compute
  ○ Traffic is bridged locally using normal MAC learning
  ○ Each tenant gets a local VLAN ID
  ○ No need to leave 'br-int'

● Between VMs on different Computes
  ○ OVS acts as the VTEP
  ○ Flow rules are installed on 'br-tun' to encapsulate the traffic with the correct VXLAN VNI
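Not on the slide, but the resulting bridges and flow rules can be inspected directly on a compute node; a sketch (the tunnel port name is illustrative):

ovs-vsctl show                            # lists br-int, br-tun and their ports
ovs-ofctl dump-flows br-tun               # flow rules mapping local VLAN IDs to VXLAN VNIs
ovs-vsctl list Interface vxlan-0a000002   # remote_ip and other options of one tunnel port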

Page 31: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

OVS - Compute View

[Diagram: OVS compute node view. Each VM's eth interface attaches to a TAP device on a per-VM Linux bridge (qbr); a veth pair (qvb/qvo) connects that bridge to the integration bridge 'br-int', where each port carries an internal VLAN ID.]

Tenant flows are separated by internal, locally significant VLAN IDs. VMs that are connected to the same tenant network get the same VLAN tag.

OVS can't directly attach a TAP device where iptables rules are applied; the intermediate Linux bridge device is necessary and offers a route to the kernel for filtering.

Page 32: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

OVS - Compute View

[Diagram: the same compute node view, extended with the tunnel bridge. 'br-int' connects through a patch port to 'br-tun', which sends encapsulated traffic out of the physical eth interface.]

Tenant flows are separated by internal, locally significant VLAN IDs. VMs that are connected to the same tenant network get the same VLAN tag.

Internal VLANs are converted to tunnels with a unique GRE key or VXLAN VNI per network. The source interface is determined from the "local_ip" configuration through a routing lookup.

OVS can't directly attach a TAP device where iptables rules are applied; the intermediate Linux bridge device is necessary and offers a route to the kernel for filtering.

Page 33: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Overlay Networking

Page 34: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Overlay Networking Technologies

● Virtual Extensible LAN (VXLAN)
  ○ Common encapsulation protocol for running an overlay network using existing IP infrastructure
  ○ RHEL supports TCP/IP VXLAN offload and VXLAN GRO
  ○ Recommended encapsulation for the tenant traffic network in Red Hat OpenStack Platform

● Generic Routing Encapsulation (GRE)
  ○ Can be used as an alternative to VXLAN
  ○ Hardware checksum offload support using GSO/GRO

● Network adapter support is inherited from RHEL
  ○ See https://access.redhat.com/articles/1390483
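As a standalone illustration of the VXLAN encapsulation itself (outside OpenStack), a VXLAN device can be created with iproute2 on RHEL; the names, VNI, and addresses below are illustrative:

ip link add vxlan10 type vxlan id 10 dev eth0 dstport 4789 local 192.0.2.1 remote 192.0.2.2   # point-to-point tunnel, VNI 10
ip link set vxlan10 up
ip addr add 10.0.0.1/24 dev vxlan10        # overlay address on top of the tunnel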

Page 35: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Network Namespaces

Page 36: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Network Namespaces

● Lightweight, container-based virtualization that allows virtual network stacks to be associated with a process group

● Creates an isolated copy of the networking data structures, such as the interface list, sockets, routing table, port numbers, and so on

● Managed through the iproute2 (ip netns) interface
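A minimal sketch of the iproute2 workflow (names and addresses are illustrative), showing that the namespace gets its own interfaces and routing table:

ip netns add blue                                  # new, empty network namespace
ip link add veth0 type veth peer name veth1        # veth pair to connect it to the host
ip link set veth1 netns blue                       # move one end into the namespace
ip netns exec blue ip addr add 10.10.10.2/24 dev veth1
ip netns exec blue ip link set veth1 up
ip netns exec blue ip route                        # routing table visible only inside 'blue'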

Page 37: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Network Namespaces in OpenStack

● Fundamental technology which enables an isolated network space for different projects (aka tenants)

  ○ Allows overlapping IP ranges
  ○ Allows per-tenant network services

● Used extensively by the l3-agent and dhcp-agent
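On a node running these agents, the namespaces can be listed and entered with the same tooling; a sketch (the placeholders stand for the Neutron network and router IDs):

ip netns                                           # shows qdhcp-<network-id> and qrouter-<router-id>
ip netns exec qdhcp-<network-id> ip addr           # the dnsmasq-facing interface for that subnet
ip netns exec qrouter-<router-id> ip route         # the router's own routing table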

Page 38: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

DHCP Namespace in OpenStack

[Diagram: DHCP namespaces on the network node. A qdhcp namespace is created per tenant subnet and managed by the dhcp-agent; each runs a Dnsmasq instance behind a virtual interface on 'br-int'. Each service is separated by an internal VLAN ID per tenant. 'br-int' connects through a patch port to 'br-tun', where internal VLANs are converted to tunnels with a unique GRE key or VXLAN VNI per network; the source interface is determined from the "local_ip" configuration through a routing lookup.]

Page 39: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Routing Namespace in OpenStack

[Diagram: routing namespaces on the network node. A namespace (e.g. qrouter-ba605e63-3c54-402b-9bd7-3eba85d00080, with net.ipv4.ip_forward=1) is created per router and managed by the l3-agent. Its qr-xxxx interface is the tenant default gateway and attaches to 'br-int'; its qg-xxxx interface is the uplink used for NAT, with an IP assigned from the external pool, and reaches the external network through 'br-ex' via the int-br-ex/phy-br-ex veth pair. The external network should have an externally reachable IP pool. Each router interface is separated by an internal VLAN ID per tenant; 'br-int' connects to 'br-tun' through the patch-tun/patch-int ports, where internal VLANs are converted to tunnels with a unique GRE key or VXLAN VNI per network.]

Page 40: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Routing - Example

[Diagram: example router namespace with qr-xxxx (172.17.17.1) attached to 'br-int' and qg-xxxx (192.168.101.2) attached to 'br-ex'; 'br-tun' and the physical eth interfaces sit beneath the bridges.]

Page 41: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Routing - Example

qr-xxxx: 172.17.17.1, qg-xxxx: 192.168.101.2, floating IP: 192.168.101.3

Floating IP (1:1 NAT):
-A quantum-l3-agent-float-snat -s 172.17.17.2/32 -j SNAT --to-source 192.168.101.3
-A quantum-l3-agent-PREROUTING -d 192.168.101.3/32 -j DNAT --to-destination 172.17.17.2

Default SNAT:
-A quantum-l3-agent-snat -s 172.17.17.0/24 -j SNAT --to-source 192.168.101.2
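These rules live inside the router's namespace, so they can be listed there directly; a sketch using the router namespace shown earlier:

ip netns exec qrouter-ba605e63-3c54-402b-9bd7-3eba85d00080 iptables -t nat -S   # dump the NAT chains
ip netns exec qrouter-ba605e63-3c54-402b-9bd7-3eba85d00080 ip addr              # qr-/qg- interfaces and the floating IP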

Page 42: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

DPDK

Page 43: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

DPDK

● The Data Plane Development Kit (DPDK) consists of a set of libraries and user-space drivers for fast packet processing. It enables applications to perform their own packet processing directly from/to the NIC, delivering up to wire-speed performance in certain cases

● DPDK can be utilized with Red Hat OpenStack in two main use cases:
  ○ DPDK-accelerated VNFs (guest VM)
    ■ A new package (dpdk) is available in the RHEL 7 "Extras" channel
  ○ Accelerated Open vSwitch (Compute nodes)
    ■ A new package (openvswitch-dpdk) is available with Red Hat OpenStack
    ■ DPDK is bundled with OVS for better performance

Page 44: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

DPDK accelerated OVS (Tech Preview)

● DPDK accelerated Open vSwitch significantly improves the performance of OVS while maintaining its core functionality

[Diagram: standard OVS vs. OVS+DPDK. In standard OVS the vSwitch daemon runs in user space while the forwarding plane and driver sit in the kernel; with OVS+DPDK the forwarding plane and the poll mode driver move into user space alongside the vSwitch.]

Page 45: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

DPDK accelerated OVS (Tech Preview)

● Provided as a package installed separately from the standard OVS
  ○ It's not possible to run OVS and OVS+DPDK on the same host

● Device support is limited to in-tree PMDs only, which don't rely on third-party drivers and/or non-upstream modifications
  ○ Currently includes e1000, enic, i40e, and ixgbe

● The OVS agent/driver were enhanced to support the DPDK datapath
  ○ datapath_type=netdev
  ○ Resulting in correct allocation of VIF types and binding details
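For reference, and hedged against the OVS 2.4/2.5-era DPDK syntax in use at the time, creating a netdev-datapath bridge with a DPDK-bound NIC and a vhost-user port looked roughly like this (names are illustrative):

ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev       # user-space datapath
ovs-vsctl add-port br-dpdk dpdk0 -- set Interface dpdk0 type=dpdk         # first DPDK-bound NIC
ovs-vsctl add-port br-dpdk vhu0 -- set Interface vhu0 type=dpdkvhostuser  # socket for a guest vNIC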

Page 46: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

ACCELERATING THE DATAPATH

FOR MODERN APPLICATIONS

Page 47: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

[Diagram: three datapath options side by side, each connecting VM-1..VM-n to the IP cloud.
Kernel Networking: VMs attach through vhost-net/QEMU to Open vSwitch and the NIC driver in the host kernel.
DPDK + vhost-user: VMs attach through vhost-user/QEMU to a user-space Open vSwitch running on the DPDK PMD driver, bypassing the kernel networking stack.
Device Assignment (SR-IOV): VMs attach directly to NIC virtual functions, bypassing the host networking layers entirely.]

Page 48: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Comparison

OVS+kernel
Pros:
● Feature-rich / robust solution
● Ultra flexible
● Integration with SDN
● Supports live migration
● Supports overlay networking
● Full isolation support / namespaces / multi-tenancy
Cons:
● CPU consumption

OVS+DPDK
Pros:
● Packets directly sent to user space
● Line-rate performance with tiny packets
● Integration with SDN
Cons:
● CPU consumption (dedicated polling cores)
● More dependence on user-space packages
WIP:
● Live migration
● IOMMU support

SR-IOV
Pros:
● Line-rate performance
● Packets directly sent to VMs
● HW-based isolation
● No CPU burden
Cons:
● 10s of VMs or fewer
● Not as flexible
● No OVS
● Need VF driver in the guest
● Switching at ToR
● No live migration
● No overlays

Page 49: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

[Diagram: four datapath architectures compared side by side.
Kernel + vSwitch (virtio-net): the guest uses virtio-net; QEMU and vhost connect it to OVS, the TAP device, and the NIC driver, all in the host kernel.
DPDK Accelerated vSwitch (virtio-net): the guest keeps its kernel virtio-net driver; the host runs OVS with a DPDK PMD in user space and connects to the guest through vhost-user/QEMU.
DPDK Accelerated OVS (virtio-net DPDK PMD): the guest itself runs DPDK with a virtio-net PMD on top of the host's DPDK-accelerated OVS and vhost-user.
Hypervisor Bypass / Direct Pass-through (SR-IOV, with or without DPDK in the guest): the guest runs a VF driver and talks directly to a virtual function of the NIC; the PF driver and the NIC's virtual Ethernet bridge and classifier handle switching next to the physical function, bypassing the host datapath.]

Page 50: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Red Hat’s Value add

● State of the art open source technology

● Support / Debuggability
  ○ OVS + DPDK tcpdump
  ○ Command line simplification
  ○ OpenFlow table names
  ○ OpenFlow dump readability
  ○ Cross-layer debugging / tracing tools
  ○ plotnetcfg

● Integration and performance tuning
  ○ e.g. multi-queue, connection tracking, VirtIO expected performance increase of ~35%

Page 51: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Security Groups and OVS/conntrack

BEFORE conntrack (extra layer of Linux bridges)

AFTER conntrack (native OVS)

https://github.com/jbenc/plotnetcfg
Performance info in backup section
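For reference, in Neutron releases that ship the conntrack-based firewall, the native OVS driver is selected in place of the iptables hybrid driver in the OVS agent configuration; a hedged sketch:

# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[securitygroup]
firewall_driver = openvswitch        # native OVS/conntrack firewall (instead of iptables_hybrid)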

Page 52: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

vhost-user Multiple Queues

[Diagram: a 10GbE NIC uses RSS to spread traffic across four queues (queue#0-#3); each queue is polled by its own PMD core running an OVS datapath instance, and maps 1:1 to a vhost-user queue (q#0-q#3) and on to a virtio-net queue and vCPU (vCPU #0-#3) inside the VM. Result: 12 Mpps.]

Page 53: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

[Diagram and chart: OVS with DPDK and vhost-user/QEMU forwarding between two 10G NICs, the IP cloud, and VM-1..VM-n; the chart relates packet size (256, 512, 1024, 1500 bytes) to the number of cores needed.]

Line Rate Performance: upstream and RHEL experts working side-by-side to eliminate bottlenecks

Page 54: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

More Help is on the Way...

[Diagram: multiple NICs with HW offload connected to the main CPU through a common HW abstraction interface (switchdev). Red Hat works with HW vendors on this offload path.]

Page 55: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

[Diagram: VXLAN between RHEL hosts, with or without HW offload, each host running VM-1..VM-n. Red Hat works with NIC vendors to provide line-rate performance for 10G and 40G, with HW offload.]

Theoretical Max Performance - Host to Host
Intel ixgbe:
● 9.1 Gb/s with HW offload
● 4.8 Gb/s without HW offload

Page 56: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Containers Networking at Theoretical Max

[Diagram: containers (applications) on a single RHEL Atomic host communicating through kernel networking; maximum throughput is at the memory transfer rate within the host.]

Page 57: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Containers Networking at Theoretical Max

[Diagram: containers on two RHEL Atomic hosts, each using kernel networking and Open vSwitch, connected over VXLAN. An ultra-flexible solution for public, private, and hybrid clouds.]

Page 58: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Further Reading

● Red Hat Solution for Network Functions Virtualization (NFV)

● Co-Engineered Together: OpenStack Platform and Red Hat Enterprise Linux

● SR-IOV – Part I: Understanding the Basics

● SR-IOV – Part II: Walking Through the Implementation

● Driving in the Fast Lane: CPU Pinning and NUMA Topology

● Driving in the Fast Lane: Huge Pages

● Scaling NFV to 213 Million Packets per Second with RHEL, OpenStack, and DPDK

● Boosting the NFV datapath with Red Hat OpenStack Platform

● Troubleshooting Networking with Red Hat OpenStack Platform: meet ‘plotnetcfg’

Page 59: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Questions?

Don’t forget to submit feedback using the Red Hat Summit app.

[email protected] [email protected]

Page 60: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

BACKUP

Page 61: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)
Page 62: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Multiqueue Support for Virtio-net

● Enables packet sending/receiving processing to scale with the number of available virtual CPUs in a guest

● Each guest virtual CPU can have its own separate transmit or receive queue that can be used without influencing other virtual CPUs

● Provides better application scalability and improved network performance in many cases

● Supported with RHEL 7, available with Red Hat OpenStack Platform 8
  ○ hw_vif_multiqueue_enabled=true|false (default false)
  ○ Nova will match the number of queues to the number of vCPUs
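A hedged sketch of enabling it end to end: set the image property so Nova creates one queue pair per vCPU, then size the queues inside the guest (the image and interface names are illustrative):

openstack image set --property hw_vif_multiqueue_enabled=true rhel7-guest
# inside the guest, after booting on a 4-vCPU flavor:
ethtool -L eth0 combined 4           # activate four queue pairs on the virtio-net device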

Page 63: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Packet size vs Core background data

● Two-socket Intel Haswell Intel(R) Xeon(R) CPU E5-2690 v3
● 2.60 GHz
● 12 cores per socket for a total of 24 cores
● Hyperthreading disabled
● 256 GB RAM
● Two 10GbE interfaces, 82599-based NIC
● Up to 1% acceptable packet loss (smaller loss shows similar results)
● Unidirectional traffic
● RHEL 7.2 in the host and the guest

Page 64: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Security Groups iptables vs conntrack (1)

Page 65: SDN Fundamentals for NFV, OpenStack, and Containers (Red Hat Summit 2016)

Security Groups iptables vs conntrack (2)