Copyright 2014 Juniper Networks, Inc. 1
NFV, JNCIE-SP, Juniper Networks Russia and CIS, 18 2014
Virtualization strategy & Goals
Branch Office
HQ Carrier Ethernet Switch
Cell Site Router
Mobile & Packet GWs
Aggregation Router / Metro
Core
DC/CO Edge Router, Service Edge Router
Core
Enterprise Edge / Mobile Edge
Aggregation / Metro / Metro Core
Service Provider Edge/Core and EPC
vCPE, Virtual Branch; Virtual PE (vPE), Virtual BNG (vBNG)
Virtual Routing Engine (vRE), Virtual Route Reflector (vRR)
MX SDN Gateway
MX Virtualization strategy
Hardware Virtualization
SW
Control plane and OS: Virtual JunOS; Forwarding plane: Virtualized Trio
Leverage development effort and JunOS feature velocity across all virtualization initiatives
vBNG, vPE, vCPE
Data center Applications
Juniper Networks Carrier-Grade Virtual Router: vMX
VMX goals
Agile and Scalable
Orchestrated
Leverage JUNOS and Trio
Scale-out elasticity by spinning up new instances
Faster time to market; ability to add new services via service chaining
vMX treated similar to a cloud-based application
Leverages the forwarding feature set of Trio
Leverages the control plane features of JUNOS
VMX product overview
VMX a scale-out router
Scale-out (Virtual MX) vs. Scale-up (Physical MX)
Scale-up (Physical MX): Optimize for density in a single instance of the platform. Innovate in ASIC, power and cooling technologies to drive density and the most efficient power footprint.
Scale-out (Virtual MX): Virtualized platforms are not optimized to compete with physical routers on capacity per instance. Each instance is a router with its own dedicated control plane and data plane, allowing a smaller-footprint deployment with administrative separation per instance.
Virtual and Physical MX
[Diagram: control plane and data plane across platforms. The physical MX runs PFE microcode on the TRIO ASIC; the virtual MX runs vPFE microcode on x86.]
Virtualization techniques
[Diagram: guest VM #1 runs an application over virtual NICs with VirtIO drivers; device emulation in the hypervisor (KVM, VMware ESXi) connects to the physical NICs in the physical layer]
Para-virtualization
Guest and hypervisor work together to make emulation efficient
Offers flexibility for multi-tenancy, but with lower I/O performance
NIC resource is not tied to any one application and can be shared across multiple applications
vMotion-like functionality possible
PCI pass-through with SR-IOV
Device drivers exist in user space
Best for I/O performance, but has a dependency on the NIC type
Direct I/O path between NIC and user-space application, bypassing the hypervisor
vMotion-like functionality not possible
[Diagram: guest VMs #1 and #2 each run an application over virtual NICs; left, device emulation in the hypervisor (KVM, VMware ESXi); right, PCI pass-through with SR-IOV direct to the physical NICs in the physical layer]
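The two I/O paths above correspond to how the guest NIC is declared to the hypervisor. As an illustrative sketch only (the bridge name and PCI address are assumptions, not from this deck), libvirt domain XML for the two models might look like:

```xml
<!-- Para-virtualized path: a virtio NIC behind a host bridge.
     Shareable across guests; migration (vMotion-like) remains possible. -->
<interface type='bridge'>
  <source bridge='br0'/>     <!-- illustrative host bridge name -->
  <model type='virtio'/>     <!-- guest loads the VirtIO drivers -->
</interface>

<!-- SR-IOV path: PCI pass-through of a virtual function.
     Direct I/O between the NIC and the guest, bypassing hypervisor
     emulation; best performance, but tied to the NIC type, and
     migration is not possible. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
</hostdev>
```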
VMX Product
VCP (Virtualized Control Plane): Virtual JUNOS hosted on a VM. Follows standard JUNOS release cycles. Additional software licenses for different applications (vPE, vRR, vBNG).
VFP (Virtualized Forwarding Plane): Hosted on a VM, bare metal, or Linux containers. Multi-core; DPDK, SR-IOV, VirtIO.
VMX overview: efficient separation of control and data plane
Data packets are switched within vTRIO.
A multi-threaded SMP implementation allows core elasticity.
Only control packets are forwarded to JUNOS.
Feature parity with JUNOS (CLI, interface model, service configuration).
NIC interfaces (eth0) are mapped to JUNOS interfaces (ge-0/0/0).
[Diagram: the VCP guest OS (JUNOS) runs chassisd, rpd, dcd and snmpd over an LC kernel; the VFP guest OS (Linux) runs Virtual TRIO over Intel DPDK; both sit on a hypervisor over x86 hardware]
vMX Performance
VMX Environment
CPU assignments:
Packet Processing Engine in VFP: variable, based on desired performance
Packet I/O: one core per 10G port
VCP: RE/control plane
VCP-VFP communication; emulators
Memory: 20 GB total (16 GB for RIOT [VFP], 4 GB for RE [VCP])
6x 10G Intel NICs with DPDK
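The sizing rules above can be folded into a quick back-of-the-envelope calculator. This is only a sketch of the slide's guidance; the function name and parameters are invented for illustration:

```python
def vmx_resources(ports_10g, other_cores):
    """Estimate host resources for one vMX instance per the slide's rules.

    ports_10g   -- number of 10G ports (one packet-I/O core each)
    other_cores -- cores for packet processing, the VCP/RE and emulators
                   (variable, based on desired performance)
    """
    cores = ports_10g + other_cores   # one core per 10G port, plus the rest
    memory_gb = 16 + 4                # 16 GB for RIOT [VFP] + 4 GB for RE [VCP]
    return {"cores": cores, "memory_gb": memory_gb}

# The 6x10G baseline test setup in this deck uses 16 cores in total:
print(vmx_resources(6, 10))  # {'cores': 16, 'memory_gb': 20}
```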
VMX Baseline Performance
Test setup: a tester connected to a single instance of VMX with 6 ports of 10G (Port0-Port5) sending bidirectional traffic, roughly 8.9G per port.
16 cores total.
Up to 60G of bidirectional (120G unidirectional) performance per VMX instance (1 VCP instance and 1 VFP instance) @ 1500 bytes.
No packet loss.
IPv4 throughput testing only.
VMX Baseline Performance, part 2. VMX performance in Gbps:

2 x 10G ports (total # of cores):
Frame size (bytes)   5      6      8      10     12
256                  2      3.8    7.2    9.3    12.6
512                  3.7    7.3    13.5   18.4   19.8
1500                 10.7   20     20     20     20

4 x 10G ports (total # of cores):
Frame size (bytes)   7      8      10     12     14
256                  2.1    4.2    6.8    9.6    13.3
512                  4.0    7.9    13.8   18.6   26
1500                 11.3   22.5   39.1   40     40

6 x 10G ports (total # of cores):
Frame size (bytes)   9      10     12     14     16
256                  2.2    4.0    6.8    9.8
512                  4.1    8.1    14     19.0   27.5
1500                 11.5   22.9   40     53.2   60

The total # of cores includes cores for packet processing and all associated host functionality.
VMX performance improvements
[Chart: forwarding performance rising across Intel generations (Sandy Bridge, Ivy Bridge, Haswell (current gen), Broadwell (next gen)), driven by Intel architecture changes and the increase in # of cores/socket, plus incremental improvements in the virtualized forwarding plane and in forwarding efficiency]
VMX performance improvements will leverage the advancements in Intel architecture.
Generational changes happen every 2 years and provide about a 1.5x-2x improvement in performance.
Iterative process to optimize the efficiency of Trio microcode compiled as x86 instructions.
Streamline the forwarding plane with a reduced feature set to increase packet-per-second performance, i.e. Hypermode for vMX.
Scale-out VMX: a scale-out VMX deployment with multiple VMXs controlled by a single control plane (Virtual Routing Engine).
vMX use cases and deployment models
Service Provider VMX use case: virtual PE (vPE)
[Diagram: branch office and SMB CPE sites reach L2 PE and L3 PE devices via pseudowire, L3VPN and IPsec/overlay technology across the provider MPLS cloud; the DC/CO gateway fronts a DC/CO fabric hosting the vPE; peering toward the Internet]
Market Requirement: scale-out deployment scenarios; low-bandwidth, high control-plane-scale customers; a dedicated PE for new services and faster time to market.
VMX Value Proposition: VMX is a virtual extension of a physical MX PE; orchestration and management capabilities inherent to any virtualized application apply.
Example VMX connectivity model, option 1
[Diagram: the CPE connects via pseudowire to an L2PE; LDP+IGP runs inside the provider MPLS cloud and inside the DC/CO fabric; BGP-LU sessions through RRs (NHS / no NHS) stitch the provider MPLS cloud, the DC/CO GW & ASBR, the DC/CO fabric ASBR and the vPE; an L2/L3 overlay rides MPLS to the vPE]
Extend the SP MPLS cloud to the vPE.
L2 backhaul from the CPE.
Scale the number of vPEs within the DC/CO by using concepts from Seamless MPLS.
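The inter-domain glue in this model is plain Junos BGP labeled-unicast configuration. A minimal, hedged sketch (the group name, addresses and the use of `resolve-vpn` are illustrative assumptions, not taken from the deck):

```
protocols {
    bgp {
        group seamless-mpls-rr {
            type internal;
            local-address 10.0.0.1;      /* illustrative loopback */
            family inet {
                labeled-unicast {
                    resolve-vpn;         /* use labeled routes for VPN resolution */
                }
            }
            neighbor 10.255.0.1;         /* illustrative route reflector */
        }
    }
}
```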
VMX as a DC Gateway
[Diagram: in the data center/central office, virtualized servers host VMs on virtual networks A and B behind VTEPs, with ToR (IP) and ToR (L2) switches and a non-virtualized L2 environment; the VMX acts as VXLAN gateway (VTEP) and VPN gateway (L3VPN), holding VRF A and VRF B toward the MPLS cloud for VPN customers A and B]
Market Requirement: Service providers need a gateway router to connect the virtual networks to the physical network. The gateway should support the different DC overlay, DC interconnect and L2 technologies in the DC, such as GRE, VXLAN, VPLS and EVPN.
VMX Value Proposition: VMX supports all the overlay, DCI and L2 technologies available on MX, with a scale-out control plane to scale up VRF instances and the number of VPN routes.
VMX to offer managed CPE / centralized CPE
Market Requirement: Service providers want to offer a managed CPE service and centralize CPE functionality to avoid truck rolls. Large enterprises want a centralized CPE offering to manage all their branch sites. Both SPs and enterprises want the ability to offer new services without changing the CPE device.
VMX Value Proposition: VMX with service chaining can offer best-of-breed routing and L4-L7 functionality. Service chaining offers the flexibility to add new services in a scale-out manner.
[Diagrams: (left) service provider managed virtual CPE: branch-office switches connect through L2 PEs and the provider MPLS cloud to a DC/CO gateway and a DC/CO fabric with Contrail overlay, where vMX as vCPE (IPsec, NAT), vSRX (firewall) and vMX as vPE are chained under a Contrail controller, with a PE toward the Internet; (right) large enterprise centralized virtual CPE: branch-office switches reach the enterprise HQ and enterprise data center (storage & compute) over a private MPLS cloud and the Internet, with vMX as vCPE (IPsec, NAT), vSRX (firewall) and vMX as WAN router under a Contrail controller]
Example VMX connectivity model, option 2
[Diagram: the CPE connects via pseudowire over VLAN/MPLS to an L2PE in the provider MPLS cloud (LDP+IGP); the DC/CO GW fronts the DC/CO fabric hosting the vPE]
Terminate the L2 connection from the CPE on the DC/CO GW.
Create an NNI connection from the DC/CO GW to the vPE instances.
vMX FRS
VMX FRS product
Official FRS for VMX Phase 1 is targeted for Q1 2015, with JUNOS release 14.1R5.
High-level overview of the FRS product:
DPDK integration; minimum 60G throughput per VMX instance
OpenStack integration
1:1 mapping between VFP and VCP
Hypervisor support: KVM, VMWare ESXi, Xen
High-level feature support for FRS:
Full IP capabilities
MPLS: LDP, RSVP
MPLS applications: L3VPN, L2VPN, L2Circuit
IP and MPLS multicast
Tunneling: GRE, LT
OAM: BFD
QoS: Intel DPDK QoS feature set
[Stack: VFP and VCP (the Juniper deliverable) run over a hypervisor/Linux with NIC drivers and DPDK on a customer-defined server, CPU and NIC]
vMX Roadmap
VMX QoS model
[Diagram: WAN traffic enters the VFP through the physical NICs; a QoS scheduler sits on each virtual NIC]
Utilize the Intel DPDK QoS toolkit to implement the scheduler.
Existing JUNOS QoS configuration applies.
Destination queue + forwarding class are used to determine the scheduler queue.
One scheduler instance per virtual NIC (the QoS scheduler is implemented per vNIC instance).
Copyright 2014 Juniper Networks, Inc. 27
Port: shaping-rate.
VLAN: shaping-rate; 4k per IFD.
Queues: 6 queues, 3 priorities (1 high, 1 medium, 4 low).
Priority-group scheduling follows strict priority for a given VLAN.
Queues of the same priority for a given VLAN use WRR.
High and medium queues are capped at their transmit rate.
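In Junos terms, the hierarchy above is expressed with ordinary class-of-service configuration. The sketch below is illustrative only; the scheduler names, rates and forwarding-class mapping are assumptions, while the queue/priority split follows the slide:

```
class-of-service {
    schedulers {
        sched-high   { priority high;   transmit-rate percent 20; }  /* capped at transmit-rate */
        sched-medium { priority medium; transmit-rate percent 20; }  /* capped at transmit-rate */
        sched-low    { priority low;    transmit-rate percent 15; }  /* one of the 4 low queues, WRR */
    }
    scheduler-maps {
        smap-vmx {
            forwarding-class network-control    scheduler sched-high;
            forwarding-class assured-forwarding scheduler sched-medium;
            forwarding-class best-effort        scheduler sched-low;
        }
    }
    traffic-control-profiles {
        tcp-vlan { shaping-rate 100m; scheduler-map smap-vmx; }      /* per-VLAN shaper */
    }
    interfaces {
        ge-0/0/0 {
            shaping-rate 1g;                                         /* port shaper */
            unit 100 { output-traffic-control-profile tcp-vlan; }
        }
    }
}
```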
[Diagram: per-VLAN scheduling: queues 0-5 feed VLAN-1 and then the port; high- and medium-priority queues pass through rate limiters, low-priority queues share the remainder]
VMX with vRouter and Orchestration
vMX with vRouter integration: VirtIO is utilized for the para-virtualized drivers.
Contrail OpenStack for VM management and for setting up the overlay network.
An NFV orchestrator (potentially OpenStack Heat templates) is utilized to easily create and replicate VMX instances.
Utilize OpenStack Ceilometer to determine VMX instance utilization for billing.
[Diagram: the VFP guest VM (Linux + DPDK, with cores and memory) and the VCP guest VM (FreeBSD) attach via VirtIO to the Contrail vRouter (vRouter agents) on the physical layer, reaching the physical NICs (WAN traffic) and OOB management; the Contrail controller and NFV orchestrator push template-based config: BW per instance, memory, # of WAN ports]
Physical & Virtual MX
Offer a scale-out model across both physical and virtual resources.
Depending on the type of customer and service offering, the NFV orchestrator decides whether to provision the customer on a physical or a virtual resource.
[Diagram: a Virtual Routing Engine controls VMX1 and VMX2 (virtual forwarding resources) and physical forwarding resources over an L2 interconnect; the Contrail controller and NFV orchestrator push template-based config: BW per instance, memory, # of WAN ports]
VMX development milestones/roadmap (items in development or still in planning)
The target is for VMX to follow the same release cycles as JUNOS. VMX will FRS with 14.1R5.

VMX FRS (JUNOS 14.1R5), 1H2015:
Min 60G per VMX instance @ 1500 bytes
1:1 mapping between VFP and VCP
Hypervisor support: KVM, VMWare ESXi, Xen
Linux container support (LXC); para-virtualized drivers: VirtIO, VMXNET3
Test-only VMX (VMX-T), capable of running on a single core with up to 100 Mbps of forwarding capacity
High-level features: defined on the FRS product slide
Target applications: virtual PE

VMX post-FRS features (target release 15.1R1), 2H2015-2016:
L2: bridging & IRB, VPLS, VXLAN, EVPN
Inline services: jflow, IPFIX; OAM: CFM, LFM
VMX as a vRR solution
VMX with Contrail vRouter and integration into Contrail OpenStack
VMX on the Amazon AWS cloud as a Virtual Private Cloud (VPC) gateway router
Inline IPsec
Target applications: Virtual Private Cloud gateway router, virtual DC gateway router
More MS-DPC/MS-MPC services integration into VMX
HA capabilities for VCP
vBNG (Virtual Unified Edge Solution)
vBNG, what is it?
Runs on x86 inside a virtual machine. Two virtual machines are needed: one for forwarding and one for the control plane.
The first iteration supports KVM as the hypervisor and OpenStack for orchestration; VMware support is planned.
Based on the same code base and architecture as Juniper's VMX; runs Junos.
Full featured and constantly improving. Some features, scale and performance of vBNG will differ from pBNG. Easy migration from pBNG.
Supports multiple BB models: vLNS; BNG based on PPP, DHCP, C-VLAN and PWHT connection types.
vBNG Value proposition
Assumptions: Highly utilized physical BNGs (pBNG) cost less (capex) than x86-based BNGs (vBNG). Installation (rack and stack) of a pBNG costs more (opex) than installation of vBNGs. The capex cost of the cloud infrastructure (switches, servers and software) is spread over multiple applications (vRouters and other applications).
vBNG is a candidate when a single pBNG serves 12,000 or fewer subscribers, or pBNG peak utilization is about 20 Gb/s or less, or BNG utilization and subscriber count fluctuate significantly over time, or the application has many subscribers and small bandwidth.
pBNG is the best answer when BNGs are centralized and serve more than 12,000 subscribers or more than 20 Gb/s.
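The placement guidance above reduces to a couple of thresholds, which can be sketched as a toy decision helper (the function and its inputs are illustrative, not Juniper tooling):

```python
def recommend_bng(subscribers, peak_gbps, fluctuating_load=False):
    """Return 'vBNG' or 'pBNG' per the rule-of-thumb thresholds on the slide:
    vBNG when a single BNG serves <= 12,000 subscribers, or peak utilization
    is about 20 Gb/s or less, or load fluctuates significantly over time;
    otherwise a centralized pBNG is the better fit."""
    if subscribers <= 12_000 or peak_gbps <= 20 or fluctuating_load:
        return "vBNG"
    return "pBNG"

print(recommend_bng(8_000, 12))         # vBNG: small subscriber count
print(recommend_bng(40_000, 60))        # pBNG: large, centralized, steady
print(recommend_bng(40_000, 60, True))  # vBNG: load fluctuates significantly
```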
Target use cases for vBNG
vBNG for BNG near the CO
vLNS for business
vBNG for lab testing of new features or new releases
vLNS for applications where the subscriber count fluctuates
vBNG for BNG near CO
vBNG Deployment Model
[Diagram: CPE in broadband homes connect over a DSL or fiber last mile to OLTs/DSLAMs (1-10 per vBNG), aggregated by L2 switches into a central office with cloud infrastructure hosting the vBNG, which connects to the SP core and the Internet]
The business case is strongest when a vBNG aggregates 12K or fewer subscribers.
vRR (Virtual Route Reflector)
Route Reflector pain points addressed by vRR
Route reflectors are characterized by RIB scale (available memory) and BGP performance (policy computation, route resolution, network I/O, all determined by CPU speed).
Memory drives route reflector scaling: larger memory means that RRs can hold more RIB routes, and with more memory an RR can control a larger network segment, so fewer RRs are required in a network.
CPU speed drives BGP performance: a faster CPU clock means faster convergence, and faster RR CPUs allow larger network segments to be controlled by one RR, again lowering the number of RRs required in a network.
The vRR product addresses these pain points by running a Junos image as an RR application on faster CPUs, and with more memory, on standard servers/appliances.
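To illustrate the memory argument with arbitrary, hypothetical numbers (the per-route cost and network size below are NOT Juniper figures): if each RIB route costs a fixed amount of memory, the routes one RR can hold, and therefore the RR count a network needs, scales directly with RAM.

```python
import math

# Hypothetical figures, for illustration only.
BYTES_PER_RIB_ROUTE = 1_000        # assumed per-route memory cost
ROUTES_IN_NETWORK = 100_000_000    # total RIB routes to be reflected

def rr_instances_needed(ram_gb):
    """How many RRs of a given RAM size cover the network's RIB."""
    routes_per_rr = (ram_gb * 2**30) // BYTES_PER_RIB_ROUTE
    return math.ceil(ROUTES_IN_NETWORK / routes_per_rr)

# Doubling the memory per RR roughly halves the number of RRs required:
print(rr_instances_needed(16))  # 6
print(rr_instances_needed(32))  # 3
```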
Juniper vRR development strategy
vRR development follows a three-pronged approach:
1. Evolve platform capabilities using virtualization technologies: allow instantiation of a Junos image on non-RE hardware (any Intel Architecture blade server / server).
2. Evolve Junos OS and RPD capabilities: 64-bit Junos kernel; 64-bit RPD improvements for increased scale; RPD modularity / multi-threading for better convergence performance.
3. Evolve Junos BGP capabilities for the RR application: BGP resilience and reliability improvements; the BGP Monitoring Protocol; BGP-driven application control; DDoS prevention via FlowSpec.
Virtual Route Reflector: what it delivers
Supports network-based as well as data-center-based RR designs.
Easy deployment, since scaling and flexibility are built into the virtualization technology, while maintaining all essential product functionality.
[Diagram: the Junos software image runs as a virtual RR on a generic x86 platform; any Intel server can instantiate the vRR]
A JunosXXXX.img software image as the vRR; no hardware is included.
Includes ALL currently supported address families (IPv4/IPv6, VPN, L2, multicast AFs), as today's product does.
Exact same RR functionality as MX; no forwarding plane.
Software SKUs for primary and standby RR. Customers can choose any x86 platform, and can choose CPU and memory size per their scaling needs.
vRR: First implementation
Junos Virtual RR official release: 13.3R3. 64-bit kernel; 64-bit RPD.
Scaling: driven by the memory allocated to the vRR instance.
Virtualization technology: QEMU-KVM. Linux distributions: CentOS 6.4, Ubuntu 14.04 LTS.
Orchestration platforms: libvirt 0.9.8, OpenStack (Icehouse), ESXi 5.5.
vRR scaling
vRR: Reference hardware
Juniper is testing vRR on the following reference hardware:
CPU: 16-core Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
Available RAM: 128G (only 32G per VM instance is being tested)
On-chip cache memory: L1 I-cache 32KB, D-cache 32KB; L2 cache 256KB; L3 cache 12MB
Linux distribution: CentOS release 6.4, KVM/QEMU
Juniper will provide scaling guidance based on these hardware specs; performance behavior might differ if different hardware is chosen.
64-bit FreeBSD does not work on Ivy Bridge due to known software bugs; please refrain from using Ivy Bridge.
vRR Scaling Results
Tested with a 32G vRR instance. The convergence numbers also improve with a higher-clock CPU.

Address family | # advertising peers | Active routes | Total routes | Memory util. (all routes received) | Time to receive all routes | # receiving peers | Time to advertise routes (mem. util.)
IPv4  | 600 | 4.2M | 42M (10 paths) | 60% | 11 min | 600 | 20 min (62%)
IPv4  | 600 | 2M   | 20M (10 paths) | 33% | 6 min  | 600 | 6 min (33%)
IPv6  | 600 | 4M   | 40M (10 paths) | 68% | 26 min | 600 | 26 min (68%)
VPNv4 | 600 | 2M   | 4M (2 paths)   | 13% | 3 min  | 600 | 3 min (13%)
VPNv4 | 600 | 4.2M | 8.4M (2 paths) | 19% | 5 min  | 600 | 23 min (24%)
VPNv4 | 600 | 6M   | 12M (2 paths)  | 24% | 8 min  | 600 | 36 min (32%)
VPNv6 | 600 | 6M   | 12M (2 paths)  | 30% | 11 min | 600 | 11 min (30%)
VPNv6 | 600 | 4.2M | 8.4M (2 paths) | 22% | 8 min  | 600 | 8 min (22%)
vRR FRS
vRR: Feature support

vRR Feature | Support Status
Support for all BGP address families | Supported today; same as the chassis-based implementation
L3 unicast address families (IPv4, IPv6, VPNv4, VPNv6, BGP-LU) | Supported today; same as the chassis-based implementation
L3 multicast address families (IPv4, IPv6, VPNv4, VPNv6) | Supported today; same as the chassis-based implementation
L2VPN address families (RFC 4761, RFC 6074) | Supported today; same as the chassis-based implementation
Route Target address family (RFC 4684) | Supported today; same as the chassis-based implementation
BGP ADD_PATH feature | Supported starting in 12.3 (IPv4, IPv6, labeled-unicast v4 and v6)
Support for 4-byte AS numbers | Supported today
BGP neighbors | Supported today; same as the chassis-based implementation
OSPF adjacencies | Supported today; same as the chassis-based implementation
ISIS adjacencies | Supported today; same as the chassis-based implementation
LDP adjacencies | Not supported at FRS
vRR: Feature support, part 2

vRR Feature | Support Status
Control of BGP learning and advertising of routes based on any combination of prefix, prefix length, AS path and community | Supported today; same as the chassis-based implementation
802.1Q VLAN encapsulation on interfaces | Supported
802.1ad (QinQ) VLAN encapsulation on interfaces | Supported
Ability to run at least two route reflectors as virtual routers in one physical router | Yes, by spawning different route reflector instances on different cores
Non-stop routing for all routing protocols and address families | Not at FRS; needs to be scheduled
Graceful restart for all routing protocols and address families | Supported today; same as the chassis-based implementation
BFD for BGPv4 | Supported today (control-plane BFD implementation)
BFD for BGPv6 | Supported today (control-plane BFD implementation)
Multihop BFD for both BGPv4 and BGPv6 | Supported today
vRR Use cases and deployment models
Network-based Virtual Route Reflector design
vRRs can be deployed in the same locations in the network.
The connectivity paradigm between vRRs and clients is the same as between today's RRs and clients.
vRR instantiation and connectivity (underlay) are provided by OpenStack.
[Diagram: clients 1 through n peer via iBGP with Junos vRRs running on VMs on standard servers]
Cloud-based Virtual Route Reflector design: solving the best-path selection problem for a cloud virtual route reflector
The vRR is an application hosted in the DC.
A GRE tunnel is originated from gre.X (the control-plane interface).
The vRR behaves as if it were locally attached to R1 (this requires resolution RIB config).
[Diagram: vRR 1 (region 1) and vRR 2 (region 2) run in the data center; GRE tunnels and the IGP run over the cloud backbone to regional networks 1 and 2, so vRR 2 can select paths based on R1's or R2's view; clients 1-3 peer over iBGP; cloud overlay with Contrail or VMware]
vRR Roadmap
vRR: Platform enhancements

Attribute | FRS product (target 13.3R3) | Junos 14.1R3+, 14.2R1+ | Future (in planning)
CPU | x86 32/64-bit, 1 vCPU (single core per VM) | x86 32/64-bit, 1 vCPU (single core per VM) | x86 32/64-bit, vCPU > 1
Memory | Max 32G per vRR (the server can have more memory) | Max 32G per vRR instance | 64G+ per vRR instance (if required)
Disk | Min 4G, no max | Min 4G, no max | Min 4G, no max; virtio block based
Interfaces | 1GE and 10GE | 1GE and 10GE; virtio-net support for untagged interfaces | virtio-net support; 1GE and 10GE
Host OS | CentOS 6.4, Ubuntu 14.04 LTS, ESXi 5.5 | CentOS 6.4, Ubuntu 14.04 LTS, ESXi 5.5 | Additional OSes will be considered per customer demand
Hypervisor technology | QEMU-KVM, ESXi 5.5 | QEMU-KVM, ESXi 5.5 | Per customer demand
Orchestration | libvirt 0.9.8 and OpenStack (Icehouse), ESXi 5.5 | libvirt 0.9.8 and OpenStack (Icehouse), ESXi 5.5 | Per customer demand
Virtual CPE
There is an App for That
Evolving service delivery to bring cloud properties to managed business services:
30 Mbps firewall
Application acceleration
Remote access for 40 employees
Application reporting
The concept of cloud-based CPE
A simplified CPE: remove CPE barriers to service innovation; lower complexity and cost.
Typical CPE functions: DHCP, firewall, routing / IP forwarding, NAT, modem/ONT, switch, access point, voice, MoCA/HPAV/HPNA3.
Simplified L2 CPE: keeps the modem/ONT, switch, access point, voice and MoCA/HPAV/HPNA3 on site, while DHCP, firewall, routing / IP forwarding and NAT move into the network (BNG/PE in the SP network).
In-network CPE functions: leverage and integrate with other network services; centralize and consolidate; integrate seamlessly with mobile and cloud-based services.
Direct connect: extend reach and visibility into the home; per-device awareness and state; simplified user experience.
Simplify the device required on the customer premises; centralize key CPE functions and integrate them into the network edge.
Cloud CPE architecture: high-level components
Customer site (onsite CPE): a simplified CPE device with switching, access point, upstream QoS and WAN interfaces; optional L3 and tunneling.
Edge router: a CPE-specific context in the router; Layer 3 services (addressing and routing); basic value services (NAT, firewall, ...).
Services: advanced security services (UTM, IPS, firewall, ...); extensible to other value-added services (M2M, WLAN, hosting, business apps, ...).
Management: cloud-based provisioning, monitoring and customer self-care.
Access and aggregation connect the customer site to the edge router.
Virtual CPE use cases
vCPE models, scenario A: integrated v-branch router
[Diagram: an Ethernet NID or a switch with smart SFP at the site connects to the edge router, which hosts the cloud CPE context: DHCP, routing, NAT, FW, VPN]
L2 CPE (optionally with L3 awareness for QoS and assurance).
Edge router provides: LAG, VRRP, OAM, L2 filters, ...; statistics and monitoring per vCPE; addressing, routing, Internet and VPN, QoS; NAT, firewall, IDP.
vCPE instance = VPN routing instance.
Pros: simplest onsite CPE; limited investments; LAN extension; device visibility.
Cons: access network impact; limited services; management impact.
Components: Juniper MX, JS self-care app, NID partners.
vCPE models, scenario B: overlay v-branch router
[Diagram: lightweight L3 CPEs connect over (un)secure tunnels and L2 or L3 transport to VMs hosting the vCPE instances; a VM can be shared across sites]
vCPE instance = VR on a VM.
Pros: no domain constraint; operational isolation; VM flexibility; transparent to the existing network.
Cons: prerequisites on the CPE; a blindsided edge; virtualization tax.
Components: Juniper Firefly, Virtual Director.
Broadband device visibility example: parental control based on device policies
[Diagram: a home network with a laptop, a tablet and Little Jimmy's desktop behind an L2 bridge]
Activity reporting: volumes and content (Facebook.com, Twitter.com, Hulu.com, Wikipedia.com, Iwishiwere21.com) shown in a self-care and reporting portal / mobile app.
Content filter: "You have tried to access www.iwishiwere21.com. This site is filtered in order to protect you."
Time of day: "Internet access from this device is not permitted between 7pm and 7am. Try again tomorrow."
Cloud CPE applications: current status

Business Cloud CPE
Market: benefits are well understood; existing demand at the edge and in the data center; NSP and government projects; driven by product, architecture and planning departments.
Requirements: extension to MPLS VPN requirements; initial focus on routing and security services; L2 CPE initially, L3 CPE coming; concern about redundancy.
Technology: all individual components are available and can run separately; MX vCPE context based on a routing instance, with core CPE features and basic services on MS-DPC, L2 based; system integration work in progress, plus a roadmap for next steps.

Residential Cloud CPE
Market: an emerging concept, with use cases under definition (which cloud services?); no short-term commercial demand; standardization at BBF (NERG); driven by NSP R&D departments.
Requirements: extension to the BNG; focus on transforming complex CPE features into services; key areas: DHCP, NAT, UPnP, management, self-care; requires very high scale.
Technology: evangelization; working on a marketing demo (focused on use cases and benefits); involvement in NSP proofs of concept; standardization design in progress.
How to get this presentation?
Scan it or download it! Join us on Facebook: Juniper.CIS.SE (Juniper techpubs ru)