OpenStack Deployment in the Enterprise & Service Provider
Shannon McFarland – CCIE #5245
Principal Engineer
@eyepv6
BRKDCT-2367
Agenda
• Cloud Trends
• What is OpenStack?
• OpenStack Participation
• What are Enterprises & SPs doing with OpenStack?
• OpenStack Deployment
• Cisco Product Integration
• Conclusion
Cloud Trends
Enterprise Trends – Cloud
• Virtualization (Server, Storage, App, etc.)
• Public/Hybrid Cloud
• Public Cloud Retraction – Cost driven (a mistake); missed expectations: cost, HA, performance, ops
• Private Cloud – Cloud done their way: self-service, reset cost expectations, elastic, understand cloud HA, multi-tenancy, IT meets DevOps
• Some skip the public cloud step
Legacy IT Change Control <> Diametrically Opposed to Cloud
• Cool and exciting technologies are borderline useless if IT process & change control don’t adapt
• Elastic, self-service, FastIT, are all the enemy of legacy IT models
Operational Process for an Upgradeable OpenStack
• The biggest issue with OpenStack in the Enterprise is actually not ‘OpenStack in the Enterprise’ but the operational processes that surround it
• DevOps – Learn it, Live it, Love it: http://www.jedi.be/blog/2012/05/12/codifying-devops-area-practices/
• CI/CD – The make or break process that your customer has to understand
• Build the processes BEFORE building the OpenStack environment
• Remember, OpenStack was built for modern-day distributed web applications that are driven by developers
High-Level CI/CD Overview
Pipeline: Revision Control System → Code Review Tool (Gerrit/Git pull request) → Code Repo (GitHub) → Integration Server → Test Jobs (Tempest/Rally/etc.) → Artifact Creation (rpmbuild/Jenkins/etc.) → Artifact Repo Mgr → Deployment Jobs. Continuous Integration covers the stages through testing; Continuous Deployment covers artifact handling and deployment jobs.
• RCS: Subversion, Mercurial, CVS, Bazaar, Perforce, ClearCase, etc.
• Code Review: Gerrit, Git pull request, Phabricator, Barkeep, GitLab, etc.
• Code Repo: GitHub, BitBucket, BitKeeper, Gitorious, etc.
• Integration Server: Jenkins/Hudson, Zuul, CloudBees, Go, Maven, etc.
• Test Jobs: Tempest, Rally, puppet-rspec, tox, etc.
• Artifacts: rpmbuild, Jenkins, Artifactory, Apache Archiva, etc.
*See notes for logo credits
What is OpenStack?
“OpenStack is a collection of open source technologies delivering a massively scalable
cloud operating system” - openstack.org
OpenStack Cloud Computing Software
• Freely available, open source software allowing anyone to build their own private or public clouds
• Open source and open APIs allow the customer to avoid being locked in to a single vendor
• Built by a growing community of contributors
• Opportunities for vendors to develop their own solutions and services
http://www.openstack.org/assets/welcome-guide/OpenStackWelcomeGuide.pdf
OpenStack Releases
• Austin – October 2010
• Bexar – February 2011
• Cactus – April 2011
• Diablo – September 2011
• Essex – April 2012
• Folsom – September 2012
• Grizzly – April 2013
• Havana – October 2013
• Icehouse – April 2014
• Juno – November 2014
• Kilo – April 2015
OpenStack is “Project” Based – Core Projects Shown
Compute
“Nova”
- Houses VMs
- API driven
- Support for multi-hypervisors
Storage – Image, Object, Block
“Glance, Swift, Cinder”
- Instance/VM image storage
- Cloud object storage
- Persistent block level storage
Dashboard
“Horizon”
- Web app for controlling OpenStack resources
- Self-service portal
Identity
“Keystone”
- Centralized policies
- Tenant mgmt.
- RBAC
- Ext. integration (LDAP)
Networking
“Neutron”
- Networking as a service
- Multiple models
- IP address mgmt.
- Plugins to external HW
Telemetry
“Ceilometer”
- Central collection point
- Metering and monitoring
Orchestration
“Heat”
- Template-based orchestration engine
- More rapid deployment of applications
Database
“Trove”
- DBaaS
- Single-tenant DB within instance
Data Processing
“Sahara”
- Fast provisioning of Hadoop clusters
New projects are added over time…
http://governance.openstack.org/reference/projects/index.html
OpenStack Participation
Why Does OpenStack Matter?
• Choice
• There is no one-size-fits-all option for cloud computing
• There is no single vendor who can fill all needs of a cloud stack – You will likely engage with multiple partners
• Community
• Open Source
• Community driven – Individual, organizational
• Better time-to-market and faster feature velocity
• Commercialization
• Start with the ‘baseline’ OpenStack components
• Vendor opportunities for value-add integration on top of the OpenStack baseline
• Design, deployment, automation, operation, high-availability, applications, etc.
Who is Involved in OpenStack?
• You name it – Compute, Storage, Networking vendors, Universities, Gov’t, massive pile of OpenStack-specific startups
• Traditional HW vendors – Cisco, HP, Dell, etc…
• Providers – Rackspace, AT&T, Comcast, etc…
• Startups – PistonCloud, SwiftStack and many, many more…
• Distributions & Support – Red Hat, Canonical, SUSE
• Some are focused on only small parts of OpenStack such as driving object storage features (SwiftStack) or high-performance block storage (SolidFire)
Cisco’s Focus on OpenStack - Today
• Start simple, build from there – Focus on automation and HA
• Baseline + Cisco Integration
• General education of what Cisco is doing - Thought Leadership – Help customers know What, When, Where & How
Engineering · Customers · Community
• Nexus/ACI integration
• UCS/UCSM
• CSR/ASR
• Cisco Prime Network Registrar
• Co-developed solutions (Red Hat, Canonical, SUSE)
• Metacloud/Cisco Private Cloud
• Neutron – Network Service
• Horizon – Dashboard
• Keystone – Identity
• Swift – Object Storage
• Ceph/Cinder – Block Storage
• Automation
• Design/Deployment
Cisco References
• http://www.cisco.com/go/openstack
• http://docwiki.cisco.com/wiki/OpenStack
• https://developer.cisco.com/site/openstack/
• https://github.com/cisco-openstack/
• https://launchpad.net/openstack-cisco
Reference
Cisco + Other Distributions/Vendors
• Cisco.com OpenStack: http://www.cisco.com/web/solutions/openstack/index.html
• Red Hat:
• UCSO: http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/OpenStack/UCSO/Starter/1-0/UCSO.pdf
• http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/OpenStack/RHEL-UCS/Red-Hat-Openstack-Platform-UCS.pdf
• http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/UCS_CVDs/ucs_rhos.pdf
• http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-7000-series-switches/wp_openstack.pdf
• http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/unified-fabric/solution-brief-c22-729865.pdf
• Ubuntu: http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/UCS_CVDs/ucs_ubuntu.pdf
Reference
To Automate or Not and How Much to Automate
• Single Shot – Manually setup everything:
• Deep appreciation for what installers do
• Best way to learn how the components of OpenStack communicate
• Semi-Automatic – Use automation for ‘some’ of the setup and maintain/modify manually:
• See slide on installers
• Automatic – Install > Operate > Upgrade
• CI/CD a huge part of this flow
Distro/Vendor Supported Installers
• Red Hat OpenStack (RHOS/RDO) – PackStack and Foreman:
http://www.redhat.com/openstack/
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/
Spinal Stack: http://spinal-stack.readthedocs.org/en/latest/index.html
• Canonical/Ubuntu – MAAS and JuJu: http://www.ubuntu.com/cloud
• Others …
Reference
Red Hat – Packstack
• Meant for single/few host deployments in NON-production deployments:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/6/html/Deploying_OpenStack_Proof_of_Concept_Environments/index.html
• Install Packstack: yum install -y openstack-packstack
• Generate SSH keys (or let Packstack do it): ssh-keygen
• Generate an answer file (or just run ‘packstack’ and follow the prompts): packstack --gen-answer-file=~/answers.cfg
• Run the answer file: packstack --answer-file=~/answers.cfg
Reference
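The answer-file flow above can be sketched end to end. This is a hedged example, not from the slides: CONFIG_PROVISION_DEMO and CONFIG_HEAT_INSTALL are real Packstack answer-file keys in recent releases, but verify key names against the file your Packstack version generates.

```shell
# Sketch: customize a generated Packstack answer file before deployment.
# A tiny sample file stands in for the real 'packstack --gen-answer-file'
# output so the edit can be shown self-contained.
cat > answers.cfg <<'EOF'
CONFIG_PROVISION_DEMO=y
CONFIG_HEAT_INSTALL=n
EOF

# Disable demo-tenant provisioning so Packstack deploys a clean environment.
sed -i 's/^CONFIG_PROVISION_DEMO=y/CONFIG_PROVISION_DEMO=n/' answers.cfg

# Then apply with: packstack --answer-file=answers.cfg
grep CONFIG_PROVISION_DEMO answers.cfg
```

Editing the answer file before the run is the usual way to keep a Packstack proof-of-concept repeatable instead of answering interactive prompts.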
What are Enterprises/SPs doing with OpenStack?
Common Enterprise Use Cases
• OpenStack, at least today, is targeted at hosting modern day distributed applications written for the cloud – This isn’t your grandpa’s server virtualization platform built for individual VM HA/Mobility
• Sandbox environments
• A place to research, learn and test CI/CD processes
• PoC web applications along with ‘practicing’ the new DevOps methodology
• A place to learn the whole cloud deployment framework, document, train, move to production
• Development environments
• Using the lessons learned in the sandbox phase:
• Build Dev, QA and production environments
• Apply CI/CD processes
• Slow-roll Web application deployment either on ‘standard’ OpenStack or in conjunction with a PaaS deployment
• Data Processing environments – Big Data clusters, etc..
• Training systems – Cheap and fast to build and tear down for each class
• Revenue generating applications – Vertical applications
Telcos are Turning to OpenStack for NFV
[Figure: OpenStack compute/network/storage/identity frameworks (Nova, Neutron, Swift, Glance, Cinder, Keystone, Ceilometer APIs) with vendor plugins (ESXi, Linux), a Cloud Manager, OSS, and NFV/Enterprise application domains plus support functions]
NFV requirements:
› Resource Allocation & Optimisation
› Resource Isolation
› Real Time Response – Interrupt servicing, OVS latency
› Networking – WAN orchestration, VNF provisioning
› Carrier Grade Security – Multi-tenancy with end-to-end isolation
› Software Management and Upgrade Support – Hitless & automated upgrades
› Backup and Restore – Automatic backup
› Audit and Troubleshooting – Audit log, monitor
› Assurance: High Availability – Mitigation of failures, fault monitoring and health check
Example VNFs: Firewall, DPI, CDN, WAN Acceleration, DNS, Carrier-Grade NAT, Session Border Controller, PE Router, EPC
https://wiki.openstack.org/wiki/Teams/NFV
Shock-and-Awe: Dashboard is not where tenants do their work
Cloud Apps Deployment – Automate it
Boot the Instance → Config Management → App is Deployed → Rinse & Repeat
- Cloud-init for Puppet/Chef/etc.
- Image already has agent/script
http://docs.openstack.org/user-guide/content/user-data.html

# Nodes for web server instances
node 'sales-web-01' {
  include lamp
}

root@build-server:~# tree /etc/puppet/modules/lamp/
/etc/puppet/modules/lamp/
├── files
│   ├── apache2.conf
│   ├── index.php
│   └── php5.conf
└── manifests
    └── init.pp

nova boot --user-data ./cloud-config-puppet.txt --image precise-x86_64 --flavor m1.tiny --key_name ctrl-key --nic net-id=42823c88-bb86-4e9a-9f7b-ef1c0631ee5e sales-web-01
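The cloud-config-puppet.txt file passed to nova boot could look like the sketch below. cloud-init's 'puppet' module is real, but the puppet master hostname is a placeholder and the exact keys should be checked against your cloud-init version.

```shell
# Hypothetical user-data for the 'nova boot --user-data' call above.
# cloud-init's 'puppet' module installs the agent on first boot and
# points it at the named server; the hostname below is a placeholder.
cat > cloud-config-puppet.txt <<'EOF'
#cloud-config
puppet:
  conf:
    agent:
      server: "puppet-master.example.com"
      certname: "%i.%f"
EOF

# The file must begin with '#cloud-config' or cloud-init ignores it.
head -1 cloud-config-puppet.txt
```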
Cloud Apps Deployment - Heat
• Growing interest in Heat-backed deployments
• Today, Heat orchestrates resources inside a tenant space
• https://wiki.openstack.org/wiki/Heat
• http://docwiki.cisco.com/wiki/OpenShift_Origin_Heat_Deployment_Guide
• http://blog.scottlowe.org/2014/05/01/an-introduction-to-openstack-heat/
• https://github.com/shmcfarl/my-heat-templates
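For comparison with the cloud-init flow, a minimal HOT template sketch: OS::Nova::Server is standard Heat resource syntax, while the image and flavor names are assumptions carried over from the earlier nova boot example.

```shell
# Minimal Heat (HOT) template sketch; image/flavor names are assumptions.
cat > web-stack.yaml <<'EOF'
heat_template_version: 2013-05-23
description: Single web server, illustrative only
resources:
  web01:
    type: OS::Nova::Server
    properties:
      image: precise-x86_64
      flavor: m1.tiny
EOF

# Launch against a real cloud with:
#   heat stack-create -f web-stack.yaml sales-web
grep 'type:' web-stack.yaml
```

The advantage over per-instance user-data is that Heat tracks the whole stack, so the same template can be updated or deleted as one unit.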
Baseline vs. Premium OpenStack Deployments

Common Baseline Components - Example
OpenStack Platform:
• Network: Neutron with ML2 (OVS, Linux Bridge)
• Infrastructure: HAProxy/Keepalived
• Compute: Nova with KVM or Xen
• Storage: Swift (Ceph Object GW), Cinder (Ceph Block/RBD), Glance
• Orchestration, etc.

Common Premium Components - Example
OpenStack Platform:
• Network: Neutron with ML2 (OVS, Cisco Nexus, Linux Bridge)
• Infrastructure
• Compute: Nova with KVM or Xen
• Storage: Swift, Cinder (Ceph Block/RBD), Glance
• Orchestration, etc.
OpenStack Deployment Overview – Rack/Node Scale
Scaling progression: a single AIO node → AIO controller plus combined Compute/Storage nodes → AIO controller plus dedicated Compute and Storage nodes.
AIO Controller runs:
- MySQL, MariaDB, etc.
- RabbitMQ, Qpid, etc.
- API Endpoints: Keystone, Glance, Nova, Neutron, Cinder, Heat, Swift

All-in-One (AIO) – Getting Started
[Figure: Data Center Infrastructure with redundant Spine/Agg layers, ToRs, and OOB management; each rack hosts an AIO controller, compute nodes, network node(s), block storage, and object storage]
AIO Controllers run:
- Galera/MySQL
- RabbitMQ
- API Endpoints: Keystone, Glance, Nova, Neutron, Cinder, Heat, Swift
Infrastructure Services: Build/PXE, Automation, DNS, DHCP, NTP, Logging, SLB
All-in-One (AIO) Compressed HA
[Figure: Data Center Infrastructure with redundant Spine/Agg layers, ToR pairs, and OOB management; three racks each run RabbitMQ, API endpoints, and Galera on a controller alongside compute, block storage, and object storage nodes; Swift proxies front the object storage, and network node(s) serve the compute pods]
Service Cloud + Tenant Cloud
What’s a Service Cloud?
• It’s the ‘under cloud’
• Used as a hosting platform for tenant cloud services – usually in a large cloud (1000s of instances with 100-1000s of tenants)
• It is an OpenStack deployment that will host (virtually) the OpenStack control functions used by each tenant
[Figure: a Service Cloud of compute nodes hosting three virtualized AIO controllers for Tenant 1 and three for Tenant 2]
OpenStack Deployment Overview – Network
What Really Changes in my Data Center?
• OpenStack components live South of the Top-of-Rack switch
• Your existing DC, Internet Edge and BN architecture stays the same
• It’s about the compute, storage and orchestration/management tiers
• Your apps go largely unchanged
[Figure: Core Layer, Agg Layer, and Access Layer with services connecting to the Enterprise/Internet; UCS C-Series and B-Series servers sit below the access layer – “OpenStack Lives Here”]
Network Decisions
• OpenStack Networking
• http://docs.openstack.org/admin-guide-cloud/content/section_networking-scenarios.html
• Many vendor plugins
• ML2/OVS, ML2/Linux Bridge
• Cisco Nexus Mechanism driver for VLAN and VXLAN - http://docwiki.cisco.com/wiki/OpenStack/ML2NexusMechanismDriver
• Cisco Nexus Mechanism driver for APIC - https://techzone.cisco.com/t5/Application-Centric/APIC-OpenStack-Driver-Installation/ta-p/764781
• Cisco Nexus 1000v - http://www.cisco.com/c/en/us/products/switches/nexus-1000v-kvm/index.html
• VLAN Trunking, GRE, VXLAN
• Overlay/Underlay MTU sizing/testing
• Scale
• VLAN number limitations for large tenant + networking environments
• GRE/VXLAN – Throughput impact, especially on older releases
• Service scale – i.e. VPNaaS mesh
• Network Tuning – Linux kernel, networking and vSwitch-specific (OVS) tuning is critical:
• vhost-net (‘modprobe vhost-net’): http://www.linux-kvm.com/content/how-maximize-virtio-net-performance-vhost-net and https://ask.openstack.org/en/question/6140/quantum-neutron-gre-slow-performance/
• Test offload settings: ‘ethtool -K eth1 gro off’ - http://www.linuxcommand.org/man_pages/ethtool8.html
BRKDCT-2445 - Agile OpenStack Networking
BRKACI-3456 – Mastering OpenStack and ACI
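The overlay/underlay MTU sizing bullet can be made concrete: VXLAN adds roughly 50 bytes of encapsulation overhead, so the guest-facing MTU must shrink or the underlay MTU must grow. The numbers below are the common rule of thumb, not figures from the slides.

```shell
# Rule-of-thumb VXLAN MTU math: the tenant (instance) MTU must leave room
# for ~50 bytes of VXLAN+UDP+IP encapsulation on the underlay.
UNDERLAY_MTU=1500
VXLAN_OVERHEAD=50
TENANT_MTU=$((UNDERLAY_MTU - VXLAN_OVERHEAD))
echo "underlay=$UNDERLAY_MTU overhead=$VXLAN_OVERHEAD tenant=$TENANT_MTU"

# With a jumbo-frame underlay the guests can keep a standard 1500-byte MTU:
JUMBO=9000
echo "jumbo underlay leaves $((JUMBO - VXLAN_OVERHEAD)) bytes for tenants"
```

Mismatched MTUs show up as hung TLS handshakes and stalled large transfers rather than outright failures, which is why the slides call out testing and not just sizing.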
Nexus Plugin Example Topology - VLANs
• Trunk links from each compute node to ToR/Access Layer
• Each tenant uses one or more VLANs for tenant isolation
• Very basic and very fast
• Cisco Nexus ML2 driver allows for auto-configuration of ToR trunk links facing the compute nodes
[Figure: compute-server01 and compute-server02 connect via eth0/eth1 trunk links (VLANs 500-600) to Agg Layer switch ports e1/8 and e1/9; control-server sits on the Mgmt Network. Provider network(s): VLAN500: 192.168.250.0/24, VLAN501: 192.168.251.0/24, …]
Spine/Leaf – VXLAN Examples
[Figure: two topologies – Host-based VXLAN: VXLAN host-to-host between Compute 1 and Compute 2 across the leaf/spine fabric; Mixed VLAN/VXLAN: VLANs host-to-leaf with VXLAN leaf-to-leaf]
The Hard Stuff – IPv6 + Cloud
• Inside of a private cloud stack you have a lot of moving parts and they all ride on IP:
• API endpoints
• Provisioning, Orchestration and Management services
• Boatload of protocols and databases and high-availability components
• Virtual networking services <> Physical networking
• IPv6 has been available with OpenStack for a while but it has depended on a lot of backports and custom patches to be functional
• Kilo offers the best ‘out-of-box’ support yet – but still needs more work
• Tenant IPv6 Address Assignment via:
• SLAAC, Stateful DHCPv6, Stateless DHCPv6
• Two common approaches for IPv6 support:
• Dual-Stack everything (Service Tier + Tenant Access Tier [Tenant management interface along with VM network access])
• Conditional Dual stack (Tenant Access Tier only – API endpoints & DBs are still IPv4)
Cloud Stack – IP Version Options

Dual-Stack Everything:
• Service Tier – API endpoints, Database(s), Automation Interface (GUI, CLI): IPv4/IPv6
• Tenant Access Tier – VM Operating System, Virtual Networking (L2/L3), Virtual Network Services (SLB/FW), Tenant Interface (GUI, CLI): IPv4/IPv6

Conditional Dual-Stack:
• Service Tier – API endpoints, Database(s), Automation Interface (GUI, CLI): IPv4 only
• Tenant 1 Access Tier – VM Operating System, Virtual Networking (L2/L3), Virtual Network Services (SLB/FW), Tenant Interface (GUI, CLI): IPv4/IPv6
• Tenant 2 Access Tier – same components: IPv6 only
Tenant IPv6 Address Options
• Option 1 – Cloud Provider-assigned Addressing: tenants draw /64s from the provider block 2001:420::/32 (e.g. Tenant 1: :BAD:BEEF::/64 and :BAD:FACE::/64; Tenant 2: :DEAD:BEEF::/64 and :DEAD:FACE::/64) for their web/app server segments
• Option 2 – Tenant Brings Addressing: Tenant 1 = 2001:DB8:1::/48, Tenant 2 = 2001:DB8:2::/48; each tenant carves its own /64s (e.g. :1000::/64 and :1001::/64 for Tenant 1; :2000::/64 and :2001::/64 for Tenant 2)
• Option 3 – Prefix Translation: Tenant 1 = 2001:DB8:1::/48, Tenant 2 = 2001:DB8:2::/48 externally, while tenants use ULA /48 blocks internally (e.g. FD9C:58ED:7D73:1::/64, FDDE:50EE:79DA:1::/64) behind an XLATE/Proxy – Don’t do this
IPv6 in OpenStack – Example Topology
Reference
Create the Public Network/Subnet
neutron net-create public --router:external
neutron subnet-create --name public-subnet --allocation-pool start=172.16.12.5,end=172.16.12.254 public 172.16.12.0/24
neutron subnet-create --ip-version=6 --name=public-v6-subnet --allocation-pool start=2001:db8:cafe:d::5,end=2001:db8:cafe:d:ffff:ffff:ffff:fffe --disable-dhcp public 2001:db8:cafe:d::/64
SLAAC Mode
[Figure: tenant router with public network 172.16.12.0/24 / 2001:db8:cafe:d::/64 (.5 / ::5) facing the DC router, and private network 10.0.0.0/24 / 2001:db8:cafe:0::/64 (.1 / ::1) with an instance at 10.0.0.9 / 2001:db8:cafe:0:f816:3eff:fe79:5acc; DNS server at 2001:db8:cafe:a::e]
neutron net-create private
neutron subnet-create --ip-version=6 --name=private_v6_subnet --ipv6-address-mode=slaac --ipv6-ra-mode=slaac private 2001:db8:cafe::/64
+-------------------+-----------------------------------------------------------------------------+
| Field | Value |
+-------------------+-----------------------------------------------------------------------------+
| allocation_pools | {"start": "2001:db8:cafe::2", "end": "2001:db8:cafe:0:ffff:ffff:ffff:fffe"} |
| cidr | 2001:db8:cafe::/64 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 2001:db8:cafe::1 |
| host_routes | |
| id | 42cc3dbc-938b-4ad6-b12e-59aef7618477 |
| ip_version | 6 |
| ipv6_address_mode | slaac |
| ipv6_ra_mode | slaac |
| name | private_v6_subnet |
| network_id | 7166ce15-c581-4195-9479-ad2283193d06 |
| subnetpool_id | |
| tenant_id | f057804eb39b4618b40e06196e16265b |
+-------------------+-----------------------------------------------------------------------------+
Reference
SLAAC Mode Info
• OpenStack will not inject the IPv6 DNS entry from the subnet dns_nameservers entry
• Options
• Manually setting the IPv6 DNS server entry in the resolv.conf file allows for correct IPv6-based name resolution
• Bake DNS settings into your image
• Cloud-init to inject the DNS configuration
• You do get A and AAAA records back over IPv4 transport
• Basically, it works as it should
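The cloud-init workaround from the bullets above might look like the sketch below. The runcmd approach is generic and the DNS address reuses the one from the example topology; treat it as an assumption, since resolv.conf handling differs per distro (resolvconf may overwrite the file).

```shell
# Sketch of a cloud-init user-data file that injects the IPv6 DNS server,
# since SLAAC mode does not deliver it. Address matches the example
# topology; distros managing resolv.conf dynamically need a different hook.
cat > slaac-dns-userdata.txt <<'EOF'
#cloud-config
runcmd:
  - echo "nameserver 2001:db8:cafe:a::e" >> /etc/resolv.conf
EOF
grep nameserver slaac-dns-userdata.txt
```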
SLAAC Mode – Sniffer Capture
15:08:01.520353 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 16) fe80::f816:3eff:fe79:5acc > ff02::2: [icmp6 sum ok] ICMP6, router solicitation, length 16
source link-address option (1), length 8 (1): fa:16:3e:79:5a:cc
0x0000: fa16 3e79 5acc
15:08:01.520667 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 56) fe80::f816:3eff:fec3:17b4 > ff02::1: [icmp6 sum ok] ICMP6, router advertisement, length 56
hop limit 64, Flags [none], pref medium, router lifetime 30s, reachable time 0s, retrans time 0s
prefix info option (3), length 32 (4): 2001:db8:cafe::/64, Flags [onlink, auto], valid time 86400s, pref. time 14400s
0x0000: 40c0 0001 5180 0000 3840 0000 0000 2001
0x0010: 0db8 cafe 0000 0000 0000 0000 0000
source link-address option (1), length 8 (1): fa:16:3e:c3:17:b4
0x0000: fa16 3ec3 17b4
15:08:02.256004 IP6 (hlim 1, next-header Options (0) payload length: 36) fe80::f816:3eff:fe79:5acc > ff02::16: HBH (rtalert: 0x0000) (padn) [icmp6 sum ok] ICMP6, multicast listener report v2, 1 group record(s) [gaddr ff02::1:ff79:5acc is_ex { }]
15:08:02.484047 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 24) :: > ff02::1:ff79:5acc: [icmp6 sum ok] ICMP6, neighbor solicitation, length 24, who has 2001:db8:cafe:0:f816:3eff:fe79:5acc
Reference
Stateful DHCPv6 Mode
[Figure: tenant router with public network 172.16.12.0/24 / 2001:db8:cafe:d::/64 (.5 / ::5) facing the DC router, and private network 10.0.1.0/24 / 2001:db8:cafe:1::/64 (.1 / ::1) with an instance at 10.0.1.4 / 2001:db8:cafe:1::4; DNS server at 2001:db8:cafe:a::e]
neutron net-create private-dhcpv6
neutron subnet-create --ip-version=6 --name=private_dhcpv6_subnet --ipv6-address-mode=dhcpv6-stateful --ipv6-ra-mode=dhcpv6-stateful private-dhcpv6 2001:db8:cafe:1::/64 --dns-nameserver 2001:db8:cafe:a::e
+-------------------+-------------------------------------------------------------------------------+
| Field | Value |
+-------------------+-------------------------------------------------------------------------------+
| allocation_pools | {"start": "2001:db8:cafe:1::2", "end": "2001:db8:cafe:1:ffff:ffff:ffff:fffe"} |
| cidr | 2001:db8:cafe:1::/64 |
| dns_nameservers | 2001:db8:cafe:a::e |
| enable_dhcp | True |
| gateway_ip | 2001:db8:cafe:1::1 |
| host_routes | |
| id | 545ea206-9d14-4dca-8bae-7940719bdab5 |
| ip_version | 6 |
| ipv6_address_mode | dhcpv6-stateful |
| ipv6_ra_mode | dhcpv6-stateful |
| name | private_dhcpv6_subnet |
| network_id | 55ed8333-2876-400a-92c1-ef49bc10aa2b |
| subnetpool_id | |
| tenant_id | f057804eb39b4618b40e06196e16265b |
+-------------------+-------------------------------------------------------------------------------+
Reference
DHCPv6 Stateful Mode Info
• Enable the client for DHCPv6:

Ubuntu – /etc/network/interfaces:
auto eth0
iface eth0 inet dhcp
iface eth0 inet6 dhcp

CentOS/RHEL/Fedora – /etc/sysconfig/network-scripts/ifcfg-xxxx:
IPV6INIT="yes"
DHCPV6C="yes"

• Then you get addressing and options:
ubuntu@dhcpv6-1:~$ more /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 172.16.10.14
nameserver 2001:db8:cafe:a::e
search openstacklocal
Reference
DHCPv6 Stateful Mode – Sniffer Capture
14:56:02.671930 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 24) fe80::f816:3eff:fe77:e5a0 > ff02::1: [icmp6 sum ok] ICMP6, router advertisement, length 24
hop limit 64, Flags [managed], pref medium, router lifetime 30s, reachable time 0s, retrans time 0s
source link-address option (1), length 8 (1): fa:16:3e:77:e5:a0
0x0000: fa16 3e77 e5a0
14:56:08.042878 IP6 (hlim 1, next-header UDP (17) payload length: 64) fe80::f816:3eff:fe22:386b.546 > ff02::1:2.547: [udp sum ok] dhcp6 solicit (xid=85680b (client-ID hwaddr/time type 1 time 482446373 fa163e22386b) (option-request DNS-server DNS-search-list Client-FQDN SNTP-servers) (elapsed-time 101) (IA_NA IAID:1042430059 T1:3600 T2:5400))
14:56:08.143267 IP6 (class 0xc0, hlim 64, next-header UDP (17) payload length: 175) fe80::f816:3eff:fe06:176f.547 > fe80::f816:3eff:fe22:386b.546: [udp sum ok] dhcp6 advertise (xid=85680b (client-ID hwaddr/time type 1 time 482446373 fa163e22386b) (server-ID hwaddr type 1 fa163e06176f) (IA_NA IAID:1042430059 T1:43200 T2:75600 (IA_ADDR 2001:db8:cafe:1::4 pltime:86400 vltime:86400)) (status-code success) (preference 255) (DNS-search-list openstacklocal.) (DNS-server 2001:db8:cafe:a::e) (Client-FQDN))
14:56:08.143719 IP6 (hlim 1, next-header UDP (17) payload length: 106) fe80::f816:3eff:fe22:386b.546 > ff02::1:2.547: [udp sum ok] dhcp6 request (xid=9cb172 (client-ID hwaddr/time type 1 time 482446373 fa163e22386b) (server-ID hwaddr type 1 fa163e06176f) (option-request DNS-server DNS-search-list Client-FQDN SNTP-servers) (elapsed-time 0) (IA_NA IAID:1042430059 T1:3600 T2:5400 (IA_ADDR 2001:db8:cafe:1::4 pltime:7200 vltime:7500)))
14:56:08.143897 IP6 (class 0xc0, hlim 64, next-header UDP (17) payload length: 186) fe80::f816:3eff:fe06:176f.547 > fe80::f816:3eff:fe22:386b.546: [udp sum ok] dhcp6 reply (xid=9cb172 (client-ID hwaddr/time type 1 time 482446373 fa163e22386b) (server-ID hwaddr type 1 fa163e06176f) (IA_NA IAID:1042430059 T1:3600 T2:6300 (IA_ADDR 2001:db8:cafe:1::4 pltime:7200 vltime:7500)) (status-code success) (DNS-search-list openstacklocal.) (DNS-server 2001:db8:cafe:a::e) (Client-FQDN))
Reference
Stateless DHCPv6 Mode
[Figure: tenant router with public network 172.16.12.0/24 / 2001:db8:cafe:d::/64 (.5 / ::5) facing the DC router, and private network 10.0.2.0/24 / 2001:db8:cafe:2::/64 (.1 / ::1) with an instance at 10.0.2.4 / 2001:db8:cafe:2:f816:3eff:fefe:d157; DNS server at 2001:db8:cafe:a::e]
neutron net-create private-dhcpv6-stateless
neutron subnet-create --ip-version=6 --name=private_dhcpv6_stateless_subnet --ipv6-address-mode=dhcpv6-stateless --ipv6-ra-mode=dhcpv6-stateless private-dhcpv6-stateless 2001:db8:cafe:2::/64 --dns-nameserver 2001:db8:cafe:a::e
+-------------------+-------------------------------------------------------------------------------+
| Field | Value |
+-------------------+-------------------------------------------------------------------------------+
| allocation_pools | {"start": "2001:db8:cafe:2::2", "end": "2001:db8:cafe:2:ffff:ffff:ffff:fffe"} |
| cidr | 2001:db8:cafe:2::/64 |
| dns_nameservers | 2001:db8:cafe:a::e |
| enable_dhcp | True |
| gateway_ip | 2001:db8:cafe:2::1 |
| host_routes | |
| id | e63e72d5-493a-4a49-8f7d-8846c2bc7a8f |
| ip_version | 6 |
| ipv6_address_mode | dhcpv6-stateless |
| ipv6_ra_mode | dhcpv6-stateless |
| name | private_dhcpv6_stateless_subnet |
| network_id | 27618d5e-318c-46a4-b6a2-a155beed9643 |
| subnetpool_id | |
| tenant_id | f057804eb39b4618b40e06196e16265b |
+-------------------+-------------------------------------------------------------------------------+
Reference
DHCPv6 Stateless Mode Info
• Enable the client for DHCPv6 Stateless:

Ubuntu – /etc/network/interfaces:
auto eth0
iface eth0 inet dhcp
iface eth0 inet6 auto
    dhcp 1

CentOS/RHEL/Fedora – /etc/sysconfig/network-scripts/ifcfg-xxxx:
IPV6INIT="yes"
DHCPV6C="yes"
DHCPV6C_OPTIONS="-S"

• Then you get addressing and options:
ubuntu@dhcpv6-stateless-4:~$ more /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 172.16.10.14
nameserver 2001:db8:cafe:a::e
search openstacklocal
Reference
DHCPv6 Stateless Mode – Sniffer Capture
15:43:23.911172 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 56) fe80::f816:3eff:fec1:bc52 > ff02::1: [icmp6 sum ok] ICMP6, router advertisement, length 56
hop limit 64, Flags [other stateful], pref medium, router lifetime 30s, reachable time 0s, retrans time 0s
prefix info option (3), length 32 (4): 2001:db8:cafe:2::/64, Flags [onlink, auto], valid time 86400s, pref. time 14400s
0x0000: 40c0 0001 5180 0000 3840 0000 0000 2001
0x0010: 0db8 cafe 0002 0000 0000 0000 0000
source link-address option (1), length 8 (1): fa:16:3e:c1:bc:52
0x0000: fa16 3ec1 bc52
15:43:25.353331 IP6 (hlim 1, next-header UDP (17) payload length: 44) fe80::f816:3eff:fefe:d157.546 > ff02::1:2.547: [udp sum ok] dhcp6 inf-req (xid=d2dbc8 (client-ID hwaddr type 1 fa163efed157) (option-request DNS-server DNS-search-list Client-FQDN SNTP-servers) (elapsed-time 94))
15:43:25.353578 IP6 (class 0xc0, hlim 64, next-header UDP (17) payload length: 88) fe80::f816:3eff:fe2d:a6de.547 > fe80::f816:3eff:fefe:d157.546: [udp sum ok] dhcp6 reply (xid=d2dbc8 (client-ID hwaddr type 1 fa163efed157) (server-ID hwaddr type 1 fa163e2da6de) (DNS-search-list openstacklocal.) (DNS-server 2001:db8:cafe:a::e) (lifetime 86400))
Reference
OpenStack Deployment Overview – High Availability
High Availability Decisions
• Know what you don’t know
• Pick your release – Major changes in HA across all parts of OpenStack have progressed on each release
• HA options for the major components:
• Databases: Options include MySQL-WSREP and Galera
• Message Queue: RabbitMQ Clustering and RabbitMQ Mirrored Queues
• API/Web services: HAProxy, Keepalived, traditional SLB
• Swift proxy nodes: HAProxy, Keepalived, traditional SLB
• Swift nodes: Architecturally designed to be available (i.e. multiple copies of objects)
• Compute node: Nothing directly HA, but can use Migration for planned maintenance windows
• Puppet HA: Search “puppet master redundancy” or “masterless puppet” – you will find plenty of reading choices ;-)
L3 High-Availability (HA)
• New in Juno release: https://wiki.openstack.org/wiki/ReleaseNotes/Juno
• Helps resolve issue of single tenant router going down and isolating the tenant instances
• Can be configured manually via neutron client by an Admin:
‘neutron router-create --ha True|False’
• Or set as a system default within the neutron.conf/l3_agent.ini files
• Uses Keepalived for VRRP between L3 agents
• Existing non-HA enabled routers can be updated to HA:
‘neutron router-update <router-name> --ha=True’
• In Juno, Distributed Virtual Router (DVR) and L3 HA cannot be enabled at the same time
• Requires a minimum of two network nodes (or controllers) that have L3 agents running
L3 HA – Tenant View
• Tenant sees one router with a single gateway IP address
• non-Admin users cannot control if the router is HA or non-HA
• From the tenant’s perspective the router behaves the same in HA or non-HA mode
L3 HA – Routing View
[Figure: two L3 agents bridged to the external networks and sharing the internal tenant network 10.10.30.x/24; a dedicated HA network carries VRRP with VIP 169.254.0.1 and the tenant GW 10.10.30.1; the left agent is master, the right is backup]
• Tenant network has 10.10.30.x/24 assigned
• VRRP uses 169.254.x.x over a dedicated HA-only network that traverses the same tenant network type
• Router (L3 agent) on the left is the VRRP master and is the tenant GW (10.10.30.1)
L3 HA – Host View
[Figure: a compute node (VMs on br-int, patched to br-tun on eth0) plus two network nodes; each network node runs a qrouter namespace with keepalived and qr-xxxx (tenant), ha-xxxx (HA network), and qg-xxxx (public) interfaces across br-int, br-eth1, and br-tun bridges; eth0 carries the management/underlay network, eth1 the public network]
L3 HA – Traffic Flow
[Figure: same topology as the host view; keepalived in the master qrouter on Network Node 1 sends VRRPv2 advertisements from 169.254.192.x to 224.0.0.18 over the HA network, with Network Node 2 standing by as backup]
Enabling L3 HA – Neutron Server
• On node running Neutron server – Edit the /etc/neutron/neutron.conf file:
• Restart neutron server (i.e. systemctl restart neutron-server.service)
router_distributed = False
# =========== items for l3 extension ==============
# Enable high availability for virtual routers.
l3_ha = True
#
# Maximum number of l3 agents which a HA router will be scheduled on. If it
# is set to 0 the router will be scheduled on every agent.
max_l3_agents_per_router = 3
#
# Minimum number of l3 agents which a HA router will be scheduled on. The
# default value is 2.
min_l3_agents_per_router = 2
#
# CIDR of the administrative network if HA mode is enabled
l3_ha_net_cidr = 169.254.192.0/18
Reference
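A minimal sketch of sanity-checking these HA settings with Python's `configparser`; the file content is inlined here for illustration (this is not a Neutron tool, just a consistency check of the values above):

```python
import configparser

# Hypothetical inline copy of the HA-related neutron.conf settings above.
conf_text = """
[DEFAULT]
l3_ha = True
max_l3_agents_per_router = 3
min_l3_agents_per_router = 2
l3_ha_net_cidr = 169.254.192.0/18
"""

cfg = configparser.ConfigParser()
cfg.read_string(conf_text)
d = cfg["DEFAULT"]

# HA must be on, and the min/max agent counts must make sense together.
assert d.getboolean("l3_ha")
assert 0 < d.getint("min_l3_agents_per_router") <= d.getint("max_l3_agents_per_router")
print("HA settings look consistent")
```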
Enabling L3 HA – L3 Agents (on each network node)
• On nodes running the L3 agent – edit the /etc/neutron/l3_agent.ini file:
• Restart the L3 agent service on each node (e.g. systemctl restart neutron-l3-agent.service)
# Location to store keepalived and all HA configurations
ha_confs_path = $state_path/ha_confs
# VRRP authentication type AH/PASS
ha_vrrp_auth_type = PASS
# VRRP authentication password
ha_vrrp_auth_password = cisco123
# The advertisement interval in seconds
ha_vrrp_advert_int = 2
Reference
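The `ha_vrrp_advert_int` value drives how fast a failover is detected. A sketch of the VRRPv2 (RFC 3768) worst-case detection math, using the 2-second interval configured above and the priority of 50 that keepalived assigns these routers (visible in keepalived.conf later in this section):

```python
# VRRPv2 failover timing (RFC 3768): the backup declares the master dead
# after Master_Down_Interval = 3 * Advertisement_Interval + Skew_Time.
advert_int = 2    # ha_vrrp_advert_int, seconds between advertisements
priority = 50     # priority keepalived assigns to the HA router

skew_time = (256 - priority) / 256            # 0.8046875 s
master_down_interval = 3 * advert_int + skew_time

print(round(master_down_interval, 4))         # ~6.8 s until the backup takes over
```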
Example: neutron router-create
• neutron client with --ha flag:
• Note: Once the neutron.conf and l3_agent.ini configs are done you no longer need to use the --ha True flag to enable HA – it is enabled by default
• If you want to create a non-HA enabled router, use --ha False
• Remember that admins are the only ones who can use the flags
[root@net1 ~]# neutron router-create --ha True test1
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| distributed | False |
| external_gateway_info | |
| ha | True |
| id | 1fe9e406-2bb5-42c4-af62-3daef314e181 |
| name | test1 |
| routes | |
| status | ACTIVE |
| tenant_id | 45e1c2a0b3a244a3a9fad48f67e28ef4 |
+-----------------------+--------------------------------------+
Reference
Keepalived.conf
[root@net1 ~]# cat /var/lib/neutron/ha_confs/719b853f-539e-420b-a76b-0440146f05de/keepalived.conf
. . . output abbreviated
vrrp_instance VR_1 {
state BACKUP
interface ha-0d655b16-c6
virtual_router_id 1
priority 50
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass cisco123
}
track_interface {
ha-0d655b16-c6
}
virtual_ipaddress {
169.254.0.1/24 dev ha-0d655b16-c6
}
virtual_ipaddress_excluded {
10.10.30.1/24 dev qr-c3090bd6-1b
192.168.81.13/24 dev qg-4f163e63-c4
}
virtual_routes {
0.0.0.0/0 via 192.168.81.2 dev qg-4f163e63-c4
}
Callouts: ha-0d655b16-c6 is the L3 HA interface and is tracked; 169.254.0.1 is the VRRP VIP; virtual_ipaddress_excluded carries the VIP addresses from the ‘real’ networks (tenant GW and external); virtual_routes carries the default route.
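A minimal sketch (not keepalived itself) of the VRRP behavior this configuration sets up: the live router with the highest priority becomes master, and `nopreempt` keeps the current master in place when a peer recovers. The `elect` helper and agent names are illustrative only:

```python
# Hypothetical model of the VRRP election configured above.
def elect(current_master, routers, nopreempt=True):
    """routers maps agent name -> (priority, is_alive)."""
    alive = {name: prio for name, (prio, up) in routers.items() if up}
    if not alive:
        return None
    # With nopreempt, a healthy master keeps the VIP even if peers return.
    if nopreempt and current_master in alive:
        return current_master
    return max(alive, key=alive.get)

routers = {"net1": (50, True), "net2": (50, True)}
assert elect("net1", routers) == "net1"   # master keeps the VIP
routers["net1"] = (50, False)             # net1's tracked HA interface fails
assert elect("net1", routers) == "net2"   # net2 is promoted
print("election sketch OK")
```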
VRRPv2 Advertisement
[root@net1 ~]# ip netns exec qrouter-719b853f-539e-420b-a76b-0440146f05de tcpdump -n -i ha-0d655b16-c6
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ha-0d655b16-c6, link-type EN10MB (Ethernet), capture size 65535 bytes
14:00:03.123895 IP 169.254.192.33 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
14:00:05.125386 IP 169.254.192.33 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
14:00:07.128133 IP 169.254.192.33 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
14:00:09.129421 IP 169.254.192.33 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
14:00:11.130814 IP 169.254.192.33 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
14:00:13.131529 IP 169.254.192.33 > 224.0.0.18: VRRPv2, Advertisement, vrid 1, prio 50, authtype simple, intvl 2s, length 20
Reference
Testing a failure
[root@net1 ~]# cat /var/lib/neutron/ha_confs/719b853f-539e-420b-a76b-0440146f05de/state
master
[root@net2 ~]# cat /var/lib/neutron/ha_confs/719b853f-539e-420b-a76b-0440146f05de/state
backup
[root@net1 ~]# ip netns exec qrouter-719b853f-539e-420b-a76b-0440146f05de ifconfig ha-0d655b16-c6 down
[root@net1 ~]# cat /var/lib/neutron/ha_confs/719b853f-539e-420b-a76b-0440146f05de/state
fault
[root@net2 ~]# cat /var/lib/neutron/ha_confs/719b853f-539e-420b-a76b-0440146f05de/state
master
ubuntu@server1:~$ ping 8.8.8.8
64 bytes from 8.8.8.8: icmp_seq=20 ttl=127 time=65.4 ms
64 bytes from 8.8.8.8: icmp_seq=21 ttl=127 time=107 ms
64 bytes from 8.8.8.8: icmp_seq=22 ttl=127 time=64.5 ms
64 bytes from 8.8.8.8: icmp_seq=23 ttl=127 time=67.6 ms
Check who is master:
Simulate a failure by shutting down the HA interface (remember this was in the ‘track’ list):
Check that VRRP switched to the other node as master:
Ping – the ultimate test of HA :
Increased delay but no loss
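The failover check above can be sketched in a few lines of Python: read each node's keepalived state file and confirm exactly one node reports ‘master’. The `states` dict here simply stands in for the /var/lib/neutron/ha_confs/<router-id>/state files:

```python
# Stand-in for reading each node's keepalived state file after the test.
states = {"net1": "fault", "net2": "master"}   # as observed above

masters = [node for node, s in states.items() if s == "master"]
assert len(masters) == 1, "expected exactly one master"
print("master is now", masters[0])
```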
Storage
References for Storage Info
• OpenStack Storage: https://www.openstack.org/software/openstack-storage/
• Block Storage: http://docs.openstack.org/havana/config-reference/content/ch_configuring-openstack-block-storage.html
• Object Storage: http://docs.openstack.org/havana/config-reference/content/ch_configuring-object-storage.html
• Cinder How-to: http://docwiki.cisco.com/wiki/OpenStack:Havana:Cinder-Volume-Test
• CEPH Storage: http://ceph.com/docs/master/rados/
• http://www.redhat.com/en/technologies/storage/ceph
• http://www.slideshare.net/Inktank_Ceph/scaling-ceph-at-cern
Reference
Cisco Integration
Product Integration Overview
• Nexus 1000v: http://www.cisco.com/c/en/us/products/switches/nexus-1000v-kvm/index.html
• Nexus 3000 and Higher: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps11541/data_sheet_c78-727737.html
• Cisco Nexus + OpenStack Deployment: http://docwiki.cisco.com/wiki/OpenStack:_Havana:_2-Role_Nexus
• Cisco CSR 1000v: http://www.cisco.com/c/en/us/td/docs/routers/csr1000/software/configuration/csr1000Vswcfg/installkvm.html
• Cisco ACI with OpenStack: http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/unified-fabric/solution-brief-c22-729865.pdf
• Cisco APIC driver for OpenStack Neutron ML2: http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/guide-c07-732454.html
• UCS Mechanism Driver for ML2 – Kilo: http://docwiki.cisco.com/wiki/OpenStack/UCS_Mechanism_Driver_for_ML2_Plugin_Kilo
OpenStack Integration with Cisco Solutions – Summary

| Purpose | Using | Cisco Product | Code Availability | Status |
| Network Layer 2 | Virtual Switch | Nexus 1000v | OpenStack Juno | Preview |
| Network Layer 2 | SR-IOV | UCS Fabric Interconnect | Cisco OpenStack Juno Plus Tech Preview | Preview |
| Network Layer 2 | Physical Switch | Nexus | OpenStack Juno | Preview |
| DHCP | IPAM | Prime Network Registrar | Not upstream yet | Preview |
| Network Layer 3 | Virtual Router | Cloud Services Router 1000v | OpenStack Juno | Preview |
| Network Layer 3 | Physical Router | ASR 1000 | Not upstream yet | Preview |
| Network Services | Virtual Firewall and VPN | Cloud Services Router 1000v | Firewall: Cisco OpenStack Juno Tech Preview; VPN: Cisco OpenStack Juno Plus Tech Preview | Preview |
| Network L2/L3/Services | Controller | Application Policy Infrastructure Controller (APIC) | APIC L2: OpenStack Juno; APIC L3: OpenStack Juno | Released |
| Network L2/L3/Services | Declarative Policy Model | Group Based Policy Framework | Group Based Policy (StackForge) Juno | Released |
Support
• Community model is like any other open source community support model:
• http://docs.openstack.org/grizzly/openstack-compute/admin/content/community-support.html
• http://ask.openstack.org
• Cisco AS - Assessments, plans, design, implement, support & optimize
• Cisco + Partnerships
• Channel Partners - Build a practice NOW!
Conclusion
• Next time: Scale, HA, apps, network design impact and some new breakouts (OpenStack storage session)
• OpenStack is for real and maturing at a rapid pace
• Many different players involved and it is evolving rapidly
• Align yourself with market leaders who have strong partnerships
• There is still a lot of focus on getting OpenStack Deployed, but we are progressing rapidly towards true operational issues:
• Scale
• Application deployment
• Upgrades
• Start now!
• Get involved in the community – open source enjoys the major advantage of feature velocity
Participate in the “My Favorite Speaker” Contest
• Promote your favorite speaker through Twitter and you could win $200 of Cisco Press products (@CiscoPress)
• Send a tweet and include
• Your favorite speaker’s Twitter handle @eyepv6
• Two hashtags: #CLUS #MyFavoriteSpeaker
• You can submit an entry for more than one of your “favorite” speakers
• Don’t forget to follow @CiscoLive and @CiscoPress
• View the official rules at http://bit.ly/CLUSwin
Promote Your Favorite Speaker and You Could Be a Winner
Complete Your Online Session Evaluation
Don’t forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online
• Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.
• Complete your session surveys though the Cisco Live mobile app or your computer on Cisco Live Connect.
Continue Your Education
• Demos in the Cisco campus
• Walk-in Self-Paced Labs
• Table Topics
• Meet the Engineer 1:1 meetings
• Related sessions
https://developer.cisco.com/site/openstack/
http://www.cisco.com/go/openstack
Thank you
Reference Slides
Understanding Neutron + OVS for example.com
Example – Network Layout: Host View
[Figure: control-server and compute-server01 share the 10.121.13.x management network (eth0). control-server hosts br-ex (eth1 to the public 192.168.238.x/24 network), br-int, br-tun, and the qrouter namespace; compute-server01 hosts br-int and br-tun with three VMs attached]
Example – OVS Bridge/Neutron Router: “br-int” view
root@control-server:~# ovs-vsctl list-ports br-int
int-br-ex
patch-tun
qr-024a0619-71
qr-10f02a4b-ab
qr-b37e1034-06
qr-ef7c1e0c-79
tap2340872e-68
tap271689cd-23
tap3fe91abf-c8
tap60a25081-14
tap6d3911a5-44
Example – OVS Bridge/Neutron Router: “br-int” view – qr-xx & tap-xx
root@control-server:~# ovs-vsctl list-ports br-int
int-br-ex
patch-tun
qr-024a0619-71
qr-10f02a4b-ab
qr-b37e1034-06
qr-ef7c1e0c-79
tap2340872e-68
tap271689cd-23
tap3fe91abf-c8
tap60a25081-14
tap6d3911a5-44
A tap interface exists for each network's DHCP service (port suffix → DHCP IP):
tap…-68 → 10.10.10.2
tap…-23 → 10.10.15.2
tap…-c8 → 192.168.238.5
tap…-14 → 10.10.20.2
tap…-44 → 10.10.25.2
Example – OVS Bridge/Neutron Router: “br-ex” & br-tun view
root@control-server:~# ovs-vsctl list-ports br-ex
eth1
phy-br-ex
qg-8a8db076-b3
root@control-server:~# ovs-vsctl list-ports br-tun
gre-1
gre-3
patch-int
Example – OVS Bridge/Neutron Router: compute-server01 “br-int” view
root@compute-server01:~# ovs-vsctl list-ports br-int
patch-tun
qvo180f8458-7b
qvo3e60deda-cc
qvo92774056-da
Example – OVS Bridge/Neutron Router
root@compute-server01:~# brctl show
bridge name bridge id STP enabled interfaces
br-int 0000.5e15d719a548 no int-br-ex
qvo180f8458-7b
qvo3e60deda-cc
qvo92774056-da
br-tun 0000.febc48d02540 no
qbr180f8458-7b 8000.1a425eeda354 no qvb180f8458-7b
vnet0
qbr3e60deda-cc 8000.8a70b498c8ce no qvb3e60deda-cc
vnet2
qbr92774056-da 8000.3e21bdf7dd5b no qvb92774056-da
vnet1
*Thanks to Etsuji Nakai for the original detailed overview of OVS/Neutron ports: http://www.slideshare.net/enakai/how-quantum-configures-virtual-networks-under-the-hood
Example – OVS Bridge/Neutron Router
root@compute-server01:~# ovs-vsctl show
ac44a899-5f10-4ff9-8dad-902fa7c10e5e
...
Bridge br-tun
Port "gre-2"
Interface "gre-2"
type: gre
options: {in_key=flow, out_key=flow, remote_ip="10.121.13.50"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "gre-3"
Interface "gre-3"
type: gre
options: {in_key=flow, out_key=flow, remote_ip="10.121.13.52"}
Port br-tun
Interface br-tun
type: internal
(gre-2 peers with control-server at 10.121.13.50; gre-3 peers with compute-server02 at 10.121.13.52)
Example – Basic VM Traffic Flow: High-Level Walk-Thru
[Figure walk-through: a VM boots on compute-server01 (vnet0 → tap); its DHCP request rides the GRE tunnel (gre-1, 10.121.13.51 → 10.121.13.50) to the DHCP tap on control-server and the VM receives 10.10.10.2; outbound traffic is then NATed (iptables/floating IP) through the qrouter namespace to the public 192.168.238.x network]
Monitoring
Basic Monitoring is Available – Nagios/Graphite/Collectd
• http://<build-server>/nagios3 – Health monitoring of OpenStack nodes
• http://<build-server>:8190 – Main Graphite performance console
• http://<build-server>:8190/dashboard/ – User/self-service performance console
• http://www.nagios.org/
• http://graphite.wikidot.com/
• http://collectd.org/
Services
LBaaS
• A service to provide basic load-balancing of VMs/Instances within the OpenStack cluster
• The default LB provider is HAProxy
• Can leverage plugins for LBaaS to control external virtual or physical load-balancers (e.g. F5, A10, Citrix)
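A minimal sketch of the round-robin scheduling that HAProxy, the default LBaaS provider, applies across pool members; the member addresses here are made up for illustration:

```python
from itertools import cycle

# Hypothetical pool of VM member addresses behind the load balancer.
members = ["10.10.10.3", "10.10.10.4", "10.10.10.5"]
scheduler = cycle(members)   # round-robin: each request goes to the next member

picks = [next(scheduler) for _ in range(6)]
print(picks)   # each member is picked twice, in order
```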
VPNaaS
• A service to provide IPsec VPN connectivity on a per-router/per-tenant basis
• Manual configuration via CLI or OpenStack Dashboard
• As with any IPsec site-to-site VPN, large deployments with lots of sites/tenants will require a lot of configuration due to mesh-type connectivity
• Cisco provides CSR as a means of deploying VPN
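The mesh-connectivity burden mentioned above grows quadratically: full-mesh site-to-site IPsec needs one connection per pair of sites. A quick illustration:

```python
# Tunnels needed for a full mesh of n sites: n choose 2.
def mesh_tunnels(sites: int) -> int:
    return sites * (sites - 1) // 2

assert mesh_tunnels(2) == 1    # the simple two-branch case
assert mesh_tunnels(10) == 45
print(mesh_tunnels(50))        # 1225 site connections to configure
```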
VPN Topology
[Figure: Branch 1 (all-in-one-br1) and Branch 2 (all-in-one-br2 / compute-server01-br2) joined over a shared Internet network. Tenant 1: Net10 10.10.10.0/24 at Branch 1, Net20 10.10.20.0/24 at Branch 2. Tenant 2: Net15 10.10.15.0/24 at Branch 1, Net25 10.10.25.0/24 at Branch 2. Each tenant has its own Neutron router and VM at each branch; router IPs are 192.168.3.5 (Branch 1) and 192.168.5.30 (Branch 2)]
Setup IKE Policies – Dashboard Example
Add name
These are all defaults
Setup IPsec Policies – Dashboard Example
Add name
These are all defaults
Add IPsec Site Connection – Branch 1 / Branch 2
Match all from previous steps
Add name
Match from network info
Launch Instances
• Note: Depending on your firewall/security group setup, you may need to open up UDP/500 for ISAKMP and IP Protocol 50 for ESP
• Boot an instance at each site within the tenant you created the VPNaaS policies:
# Boot an instance using ID for the “net10” network
root@all-in-one-br1:~# nova boot --image cirros-x86_64 --flavor m1.tiny --key_name ctrl-key --nic net-id=c57e37d2-4a49-4056-b119-1351906ebaf4 --security-groups Branch1 br1-instance-1
# Boot an instance using ID for the “net20” network
root@all-in-one-br2:~# nova boot --image cirros-x86_64 --flavor m1.tiny --key_name ctrl-key --nic net-id=719e1969-ee29-4a11-af3c-60fa018d9af0 --security-groups Branch2 br2-instance-1
Initiate Connections Between Sites
• Connect to an instance and ping an instance at the other site:
• After the first instance has launched and initiates a connection, the VPN service should show ‘ACTIVE’
root@all-in-one:~# ip netns exec qrouter-e81de88d-d339-47c2-ae0b-37f2948d1147 ssh [email protected]
The authenticity of host '10.10.20.3 (10.10.20.3)' can't be established.
RSA key fingerprint is 5e:1f:9f:11:da:d4:5c:24:ed:d7:2c:37:5d:5c:40:be.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.20.3' (RSA) to the list of known hosts.
$ ping 10.10.10.2
PING 10.10.10.2 (10.10.10.2): 56 data bytes
64 bytes from 10.10.10.2: seq=0 ttl=62 time=4.112 ms
64 bytes from 10.10.10.2: seq=1 ttl=62 time=1.689 ms
Is it really a VPN?
• Setup:
• Connection traffic between instances:
09:55:52.617054 IP 192.168.5.30.isakmp > 192.168.3.5.isakmp: isakmp: phase 1 ? ident
09:55:52.623361 IP 192.168.5.30.isakmp > 192.168.3.5.isakmp: isakmp: phase 1 ? ident
09:55:52.626899 IP 192.168.5.30.isakmp > 192.168.3.5.isakmp: isakmp: phase 1 ? ident[E]
09:55:52.635606 IP 192.168.5.30.isakmp > 192.168.3.5.isakmp: isakmp: phase 2/others ? oakley-quick[E]
09:56:18.255468 IP 192.168.5.30.isakmp > 192.168.3.5.isakmp: isakmp: phase 2/others ? inf[E]
09:56:18.255557 IP 192.168.5.30.isakmp > 192.168.3.5.isakmp: isakmp: phase 2/others ? inf[E]
09:59:05.002332 IP 192.168.5.30 > 192.168.3.5: ESP(spi=0xea65be5d,seq=0x7), length 132
09:59:06.002231 IP 192.168.5.30 > 192.168.3.5: ESP(spi=0xea65be5d,seq=0x8), length 132
09:59:07.002531 IP 192.168.5.30 > 192.168.3.5: ESP(spi=0xea65be5d,seq=0x9), length 132
Reference
Setup Tenant 2
• Repeat previous steps:
• Setup new Project for Tenant 2
• Create/associate relevant Project users/groups
• Create new private network/subnet
• Create new Neutron Router (set gateway/add interface to private network)
• Add VPN service using new private network and router for Tenant 2
• Add IPsec site connection using new peer router info
Reference
Branch 1 –Tenant 2 Example
Branch 2 – Tenant 2 Neutron
Router IP
Branch 2 – Tenant 2 Private
Network
Reference
Testing Instances Between Sites – Tenant 2
• Check that you can connect between sites for Tenant 2 instances, BUT not Tenant 2-to-Tenant 1
root@all-in-one-br1:~# neutron router-list
+--------------------------------------+--------------+---------------------------------------------------------------+
| id | name | external_gateway_info |
+--------------------------------------+--------------+---------------------------------------------------------------+
| 689df3c4-e5ae-42e5-99f4-70e9dcb8993f | os-router-1 | {"network_id": "3964c2bd-9562-408c-9489-f1a643c128f8", "enable_snat": true} |
| aa3e1b12-524a-460e-901f-3f9b7c8fb0e0 | os-router-t2 | {"network_id": "3964c2bd-9562-408c-9489-f1a643c128f8", "enable_snat": true} |
+--------------------------------------+--------------+---------------------------------------------------------------+
root@all-in-one-br1:~# ip netns exec qrouter-aa3e1b12-524a-460e-901f-3f9b7c8fb0e0 ssh [email protected]
$ ping 10.10.25.2
PING 10.10.25.2 (10.10.25.2): 56 data bytes
64 bytes from 10.10.25.2: seq=0 ttl=62 time=3.452 ms
64 bytes from 10.10.25.2: seq=1 ttl=62 time=1.766 ms
$ ping 10.10.20.2
PING 10.10.20.2 (10.10.20.2): 56 data bytes
--- 10.10.20.2 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
Callouts: the ping to the Tenant 2 instance at Branch 2 (10.10.25.2) succeeds; the ping to the Tenant 1 instance at Branch 2 (10.10.20.2) fails with 100% loss.
Reference
Running Applications
Multiple Paths to Managing Images/Apps
• Docker:
• http://www.docker.io/
• https://wiki.openstack.org/wiki/Docker
• VMBuilder:
• http://docwiki.cisco.com/wiki/OpenStack:VM_Build
• https://launchpad.net/vmbuilder
• https://help.ubuntu.com/12.04/serverguide/jeos-and-vmbuilder.html
• Disk Image Builder:
• https://github.com/stackforge/diskimage-builder
• Heat – Template-based orchestration engine:
• https://wiki.openstack.org/wiki/Heat
• https://github.com/openstack/heat
• Salt Cloud
• https://github.com/saltstack/salt-cloud
• Baseline images + automated application deployment (scripts, Puppet, Chef)
• Template images – Prebuilt with apps installed and deployed from Glance
Reference
VMBuilder
• https://help.ubuntu.com/12.04/serverguide/jeos-and-vmbuilder.html
• Build images from KVM installed machine
• Create a configuration file and run vmbuilder:
• Or via full CLI with options:
root@builder:~# more /etc/vmbuilder.cfg
[DEFAULT]
tmpfs = suid,dev,size=2G
arch = amd64
domain = example.com
ip = 10.121.13.77
mask = 255.255.255.0
net = 10.121.13.0
bcast = 10.121.13.255
gw = 10.121.13.1
dns = 10.121.12.10
user = localadmin
name = localadmin
pass = ubuntu
firstboot = /etc/vmbuilder/firstscripts/firstboot.sh
[kvm]
libvirt = qemu:///system
bridge = virbr0
virtio_net = true
mem = 2048
cpus = 2
[ubuntu]
proxy = http://10.129.16.14:8080
suite = precise
flavour = virtual
#install-mirror = http://10.121.13.17:3142/
components = main,universe
addpkg = openssh-server, unattended-upgrades, git, vim, puppet
# vmbuilder kvm ubuntu --hostname=base2 \
> --destdir=/var/lib/libvirt/images/base2
vmbuilder kvm ubuntu --suite precise --flavour virtual \
--arch amd64 -o --libvirt qemu:///system --ip 10.121.13.77 \
--hostname base2 --part vmbuilder.partition \
--user localadmin --name localadmin --pass ubuntu \
-m 2048 --cpus 1 --addpkg unattended-upgrades \
--addpkg openssh-server --addpkg puppet --addpkg git
Reference
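A hypothetical helper (not part of vmbuilder) that renders the full command line from the same settings shown in /etc/vmbuilder.cfg; the flag names follow the CLI example above:

```python
# Build a vmbuilder command string from the values used in vmbuilder.cfg.
def vmbuilder_cmd(hostname, ip, user, passwd, mem=2048, cpus=1, addpkgs=()):
    parts = [
        "vmbuilder", "kvm", "ubuntu",
        "--suite", "precise", "--flavour", "virtual", "--arch", "amd64",
        "-o", "--libvirt", "qemu:///system",
        "--ip", ip, "--hostname", hostname,
        "--user", user, "--name", user, "--pass", passwd,
        "-m", str(mem), "--cpus", str(cpus),
    ]
    for pkg in addpkgs:                     # one --addpkg flag per package
        parts += ["--addpkg", pkg]
    return " ".join(parts)

cmd = vmbuilder_cmd("base2", "10.121.13.77", "localadmin", "ubuntu",
                    cpus=1, addpkgs=("openssh-server", "puppet", "git"))
print(cmd)
```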
Puppet on Baseline Instances
• Puppet is installed via baseline image or manually installed
• Puppet master or local puppet (masterless) is built and manifests are defined
• Use the same Puppet master as the OpenStack build, or your production Puppet master(s), for apps
• Puppet agent runs (or a local ‘puppet apply’) and the apps for that instance are installed and configured
• Alternatively, install via Puppet modules: http://forge.puppetlabs.com/
• Test apps
Puppet Agent Run – Example w/LAMP
• On Puppet Master - /etc/puppet/manifests/site.pp
• LAMP layout on PM:
Info: Applying configuration version '1363712915'
Debug: /Stage[main]/Lamp/Exec[mysqlpasswd]/require: requires Package[mysql-server]
Debug: /Stage[main]/Lamp/Exec[mysqlpasswd]/require: requires Package[apache2]
Debug: /Stage[main]/Lamp/Exec[mysqlpasswd]/notify: subscribes to Service[mysql]
Debug: /Stage[main]/Lamp/Exec[mysqlpasswd]/notify: subscribes to Service[apache2]
Debug: /Stage[main]/Lamp/Service[apache2]/require: requires Package[apache2]
Debug: /Stage[main]/Lamp/Exec[userdir]/require: requires Package[apache2]
# Nodes for web server instances
node 'sales-web-01' {
include lamp
}
root@build-server:~# tree /etc/puppet/modules/lamp/
/etc/puppet/modules/lamp/
├── files
│ ├── apache2.conf
│ ├── index.php
│ └── php5.conf
└── manifests
└── init.pp
Reference