Installation runbook for Nuage Networks Virtualized Services Platform
Test Result:
Test Date:
Partner Name: Nuage Networks
Product Name: Nuage VSP
Product Version: 3.0.R5
MOS Version: 6.0
MOX Version (if applicable):
OpenStack Version: Juno
Product Type: SDN Platform
Partner Contact Name:
Partner Contact Phone:
Partner Contact Email: [email protected]
Partner Main Support Number:
Partner Main Support Email:
Certification Lab Location: Raleigh, NC
Contents
Reviewers
Document History
Support contacts
Introduction
Target Audience
Product Overview
Joint Reference Architecture
Networking
Installation and Configuration
Overview of MOS installation steps
MOS Installation in detail
Nuage VSP Integration
Installation
Automation
Nuage VRS-G Integration
Testing
Reviewers
Date Name e-mail
Document History
Version  Revision Date  Description
0.2      04-08-2015     Addressed review comments received on 4/8/2015
Support contacts
Name:
Phone:
E-mail: [email protected]
Introduction
This document serves as a detailed deployment guide for Mirantis OpenStack with the Nuage
Networks Virtualized Services Platform (VSP). The Nuage Networks VSP offers an SDN solution
that can be used by Mirantis OpenStack to implement the OpenStack networking service.
This document describes a reference architecture along with detailed installation steps for
integrating Nuage VSP with Mirantis OpenStack. In addition, the document describes in detail
the tests that need to be run to verify the integrated setup.
Target Audience
This guide is intended for OpenStack administrators who are deploying Mirantis OpenStack
using OpenStack Networking (neutron) with Nuage VSP. The administrator should be familiar
with the OpenStack compute and networking services, as well as with Nuage VSP capabilities
and configuration as documented in the Nuage VSP User's Guide.
Product Overview
Nuage VSP consists of the following three components as shown in Figure 1.
• VSD (Virtual Services Directory), the policy and analytics engine.
• VSC (Virtual Services Controller), the control plane.
• VRS (Virtual Routing and Switching), the forwarding plane.
Figure 1: Nuage Architecture Overview
OpenStack Neutron connects to the Nuage VSP via the Nuage Neutron Plugin, which uses the
standard VSD REST APIs for all configuration.
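For orientation, the same REST interface can be exercised directly, for example to verify connectivity to the VSD. The sketch below is hypothetical: the API version path (v3_0), the port (8443) and the csproot credentials are assumptions, so take the exact values from the Nuage VSD API documentation for your release.
# Hypothetical connectivity check against the VSD REST API.
# Version path, port and credentials are assumptions for this sketch.
VSD_IP=10.0.0.6
curl -k -u csproot:csproot -H "X-Nuage-Organization: csp" \
    https://$VSD_IP:8443/nuage/api/v3_0/me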
Unlike the default Neutron architecture, no centralized agents are required to support L3
routing or DHCP. Routing and DHCP address distribution are handled by the VRS agent on
each Nova Compute node based on policy distributed from the VSD.
The Nuage VRS agent on the Nova Compute node monitors VM events and configures
network topology based on the VMs provisioned on the node.
Figure 2: Nuage Neutron Plugin Overview
VM configuration includes tunnel configuration for traffic between hypervisors and other
components. This traffic is carried via VXLAN between VSP components, or via MPLS over
GRE for compatibility with Provider Edge (PE) routers.
Figure 3: Openstack Nuage Architecture Datapath
Since the tenant networks are implemented as VXLAN overlay networks, a gateway is required
to reach a tenant VM from an external (underlay) network. This can be achieved using either a
hardware VXLAN gateway such as the Nuage VSG or Cisco ASR, or a software gateway such
as the Nuage VRSG. The solution presented in this document uses a software-based gateway
(Nuage VRSG).
Joint Reference Architecture
Figure 4: Mirantis Openstack with Nuage VSP
The above diagram shows the logical topology that will be used to get Mirantis OpenStack
working with Nuage VSP. The topology requires two interfaces on each of the OpenStack
nodes. The details of how these interfaces are connected are described in the Networking
section of this document. Note that in the above diagram the VSD, VSC and VRSG are
operating in standalone mode; for a production system these components need to be
deployed in redundant mode as explained in the Nuage Installation Guide.
The Nuage VSP will be used to implement tenant networks as VXLAN overlay networks. The
VRSG (Virtualized Routing and Services Gateway) will enable hosts on the public network to
access the tenant VMs using floating IP addresses.
Networking
Figure 5: Openstack node interfaces
The screenshot above shows the network interfaces on each of the Mirantis OpenStack
nodes. As mentioned in the Joint Reference Architecture section, two networks are needed for
the Mirantis OpenStack and Nuage VSP integration. The first network (blue) will be used for
the inter-VM traffic and for traffic between the different components of VSP (VSD, VSC and
VRS). The second network (green) is the public network, which will be used to access the
tenant VMs using floating IPs.
Installation and Configuration
Overview of MOS installation steps
Please follow the Mirantis user guide to create and install an OpenStack cluster using Fuel.
Make sure you choose Nova Network during the OpenStack installation. This prevents the
neutron service from being set up during the installation process; the neutron service will be
set up as a part of the integration process. The following section highlights some of the
important steps that need to be followed while setting up a Mirantis OpenStack cluster with
Nuage VSP.
MOS Installation in detail
While creating the new OpenStack environment, please make sure to choose Juno on Ubuntu
12.04.4 as shown below.
While choosing the networking setup, make sure to choose Nova Network. We will later
update the configuration to neutron while integrating with the Nuage VSP.
Once the OpenStack cluster has been set up, the network settings for the cluster should
resemble the settings shown below.
Nuage VSP Integration
Nuage Software packages
Please contact the Nuage Networks support team to obtain the URL to download the Nuage
software packages and installation guides.
Installation
Please follow the Nuage Installation Guide to install the VSP. The Mirantis OpenStack nodes
and the VSP nodes will communicate with each other over a tagged VLAN (103) network (as
shown in the reference architecture and the Networking section above). In the reference
architecture above, the VSD (Virtual Services Directory) and the VSC (Virtual Services
Controller) are two separate Linux hosts. The VSD is a CentOS host and can be installed on a
bare metal server using the ISO image provided by Nuage. The VSD host acts as a policy
engine for policy-based network configuration and communicates with the VSC via the
Extensible Messaging and Presence Protocol (XMPP). The VSC software is provided as a
qcow2 VM image and can be run on a dedicated Linux server using the KVM hypervisor. The
VSC host acts as the control plane for the Nuage VSP product and communicates with the
VRS on the compute nodes via the OpenFlow protocol. In addition to these requirements, the
following points need to be noted (a sketch of the checks in points 1 and 3 follows the list).
1. Since the Fuel node has an NTP server running, both the VSC and the VSD must use the
IP address of the Fuel node as their NTP server. Check and make sure the NTP server on
the Fuel node has been synchronized with the external NTP server. This can be done
using the ntpstat command.
2. The Fuel node has a DHCP server running, so the VSC and VSD can get IP addresses
assigned by the DHCP server on the Fuel node. The IP addresses for the VSC and VSD
can also be assigned manually, as long as these IP addresses are on the same subnet as
the OpenStack network and do not conflict with already assigned IP addresses.
3. For the VRS installation on the compute nodes, the software packages first need to be
copied over to the Fuel node, since this is the only node that has access to the public
network. Once the VRS packages are on the Fuel node, they can be copied over to the
compute nodes.
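For example, the checks in points 1 and 3 might look as follows (a minimal sketch; the package file name and the compute node addresses are placeholders for your environment):
# On the Fuel node: verify that NTP is synchronized (point 1).
ntpstat
# Copy the VRS packages from the Fuel node to each compute node (point 3).
# The tarball name and the 10.20.0.x addresses are placeholders.
scp nuage-vrs-packages.tar.gz root@10.20.0.4:/root/
scp nuage-vrs-packages.tar.gz root@10.20.0.5:/root/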
VSP Integration with Mirantis OpenStack
The integration involves the following steps:
1. Installing, configuring and running the neutron service on the controller node
2. Configuring the nova service on the controller and compute nodes
3. Installing the Nuage neutron plugin
4. Restarting the neutron and nova services
5. Enabling access to the VXLAN port on the compute nodes
Once these integration steps are complete, an OpenStack administrator should be able to
create VXLAN-based tenant networks. The VMs on a tenant network should be able to
access each other; however, these VMs will remain inaccessible from the public network.
Prerequisites
The following pieces of information are needed for the integration (a sketch that collects them
follows):
1. The rabbitmq password
2. The nova password
3. The nova admin tenant ID
The rabbitmq password can be obtained by logging into the OpenStack controller node and
running the following command:
grep default_pass /etc/rabbitmq/rabbitmq.config
The nova password can be obtained using:
grep admin_password /etc/nova/nova.conf
The nova admin tenant ID can be obtained using:
keystone tenant-get services
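For convenience, the three values can be captured into shell variables on the controller node for use in the later steps (a minimal sketch; the parsing assumes the default file layouts and may need adjusting in your deployment):
# Run on the OpenStack controller node.
RABBITMQ_PW=$(grep default_pass /etc/rabbitmq/rabbitmq.config | sed 's/.*"\(.*\)".*/\1/')
NOVA_PW=$(grep admin_password /etc/nova/nova.conf | cut -d= -f2 | tr -d ' ')
NOVA_ADMIN_TENANT_ID=$(keystone tenant-get services | awk '/ id /{print $4}')
echo "$RABBITMQ_PW $NOVA_PW $NOVA_ADMIN_TENANT_ID"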
Neutron Service
Please follow the Juno Neutron install guide in order to set up the neutron service on the
OpenStack controller node. The following variations are required to get the neutron service
working with Mirantis OpenStack (see the sketch after this list):
1. Choose a password for the neutron service. The password value maps to
NEUTRON_DBPASS in the install guide.
2. For Step 2 in the Juno Neutron install guide, you will need to source openrc.
3. For Step 3b in the Juno Neutron install guide, the tenant name should be services
instead of service.
4. Skip the ML2 plugin installation section.
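With these variations, the service-user creation in Step 3b would look roughly as follows (a sketch assuming the Juno-era keystone CLI; the password is the one chosen in point 1):
# Source the admin credentials first (point 2).
source openrc
# Create the neutron user in the "services" tenant (point 3).
keystone user-create --name neutron --pass $NEUTRON_DBPASS
keystone user-role-add --user neutron --tenant services --role admin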
Configuring Nova
On all of the OpenStack nodes (controller + compute), remove the following entries from
/etc/nova/nova.conf:
network_manager
network_api_class
libvirt_vif_driver
firewall_driver
The above entries need to be replaced with the following (a scripted version is sketched after
this list):
neutron_admin_username=neutron
neutron_admin_password=$NEUTRON_DBPASS
neutron_url=http://$CONTROLLER_IP:9696
neutron_admin_tenant_name=services
neutron_auth_strategy=keystone
neutron_url_timeout=90
neutron_region_name=RegionOne
neutron_admin_auth_url=http://$CONTROLLER_IP:35357/v2.0
neutron_default_tenant_id=default
firewall_driver=nova.virt.firewall.NoopFirewallDriver
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
neutron_ovs_bridge=alubr0
network_api_class=nova.network.neutronv2.api.API
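One way to apply this edit consistently on every node is a small shell sketch like the one below (a convenience under stated assumptions, not part of the official procedure; set NEUTRON_DBPASS and CONTROLLER_IP to your own values first):
# Run on each OpenStack node; back up nova.conf first.
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
# Remove the old nova-network entries.
sed -i -e '/^network_manager/d' -e '/^network_api_class/d' \
       -e '/^libvirt_vif_driver/d' -e '/^firewall_driver/d' /etc/nova/nova.conf
# Insert the neutron entries directly under [DEFAULT] so they do not
# land in a later section of the file.
for opt in \
    "neutron_admin_username=neutron" \
    "neutron_admin_password=$NEUTRON_DBPASS" \
    "neutron_url=http://$CONTROLLER_IP:9696" \
    "neutron_admin_tenant_name=services" \
    "neutron_auth_strategy=keystone" \
    "neutron_url_timeout=90" \
    "neutron_region_name=RegionOne" \
    "neutron_admin_auth_url=http://$CONTROLLER_IP:35357/v2.0" \
    "neutron_default_tenant_id=default" \
    "firewall_driver=nova.virt.firewall.NoopFirewallDriver" \
    "libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver" \
    "neutron_ovs_bridge=alubr0" \
    "network_api_class=nova.network.neutronv2.api.API"; do
  sed -i "/^\[DEFAULT\]/a $opt" /etc/nova/nova.conf
done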
Nuage Neutron Plugin
Please follow the Plugin Installation and Configuration section of the Nuage OpenStack
(Juno) Installation Guide to install the Nuage neutron plugin.
Restart services
On the Controller node, restart the following services:
nova-api
nova-scheduler
nova-conductor
neutron-server
On the compute nodes, restart the following services:
nova-api
nova-compute
After restarting the services, please make sure that you do not see any errors in the log files
corresponding to the services (a scripted restart-and-check is sketched below). The log files
can be found under:
/var/log/nova
/var/log/neutron
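On Ubuntu 12.04 these are managed as system services, so the restarts can be scripted roughly as follows (a sketch; adjust the lists if your node roles differ):
# On the controller node:
for svc in nova-api nova-scheduler nova-conductor neutron-server; do
    service $svc restart
done
# On each compute node:
for svc in nova-api nova-compute; do
    service $svc restart
done
# Quick scan for errors in the service logs:
grep -ri error /var/log/nova /var/log/neutron | tail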
Enable VXLAN Port
Since the OpenStack install was done using Nova Network, the VXLAN port (UDP 4789) is
blocked on the compute nodes by default. The VXLAN port needs to be unblocked in order for
the tunneled traffic to flow freely between the VMs on the different compute nodes. This can
be done by running the following commands on each of the compute nodes:
lineno=$(iptables -nvL INPUT --line-numbers | grep "state RELATED,ESTABLISHED" | awk '{print $1}')
iptables -I INPUT $lineno -s 0.0.0.0/0 -p udp -m multiport --dports 4789 -m comment --comment "001 vxlan incoming" -j ACCEPT
iptables-save > /etc/iptables/rules.v4
The iptables-save command saves the iptables rules to the /etc/iptables/rules.v4 file on each
compute node to make them persistent across compute node reboots.
Automation
All of the integration steps mentioned above have been automated using Ansible scripts
provided along with this document. The Ansible scripts can be executed on a Linux host that
is attached to the management network of the OpenStack cluster. The automation requires
passwordless ssh access to the Fuel node using the root login (see the sketch below). All of
the parameters for the OpenStack and VSP setup are present in the three files described
below. Please modify the parameters in these files as per your environment and use the
"run_all.sh" script to set up the entire Mirantis-VSP integrated environment.
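Passwordless root ssh to the Fuel node can be set up from the automation host roughly as follows (a sketch; 10.20.0.2 is a placeholder for your Fuel node's IP address):
# On the Linux host that will run the Ansible scripts:
ssh-keygen -t rsa               # accept the defaults if no key exists yet
ssh-copy-id root@10.20.0.2      # placeholder Fuel node IP
ssh root@10.20.0.2 hostname     # should now log in without a password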
mirantis_nodes: This configuration file needs to specify the management IP addresses for all
of the VSP components, namely the VSD, VSC and VRSG nodes, the Fuel node, and the
OpenStack nodes, i.e. the controller and the two compute nodes. The administrator also
needs to specify parameters pertaining to OpenStack components, such as the rabbitmq
password (rabbitmq_pw), the nova password (nova_pw), the neutron service authentication
URL (neutron_url) and the nova admin tenant ID (nova_admin_tenant_id). The parameters
listed in the table below need to be modified before running the Ansible automation script
(a sample inventory is sketched after the table):
Parameter               Description
vrs_package             URL for the Nuage VRS package
openstack_package       URL for the Nuage OpenStack (neutron plugin) package
controller_ip           Controller node IP address
neutron_pw              neutron password
Under [compute] section Compute node IP addresses
rabbitmq_pw             rabbitmq password
nova_pw                 nova password
nova_admin_tenant_id    nova admin tenant ID
neutron_url             neutron URL (modify the controller IP in the neutron URL)
vsc_ip                  VSC IP address
Under [vsd] section     VSD IP address
Under [vrsg] section    VRSG IP address
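A hypothetical mirantis_nodes file for the reference architecture might look like this (every value is a placeholder; take the exact file layout from the sample shipped with the scripts):
# Hypothetical sample; all values are placeholders for your environment.
vrs_package=http://example.com/nuage/nuage-vrs-package.tar.gz
openstack_package=http://example.com/nuage/nuage-openstack-neutron.tar.gz
controller_ip=10.20.0.3
neutron_pw=NEUTRON_DBPASS
rabbitmq_pw=RABBITMQ_PW
nova_pw=NOVA_PW
nova_admin_tenant_id=NOVA_ADMIN_TENANT_ID
neutron_url=http://10.20.0.3:9696
vsc_ip=10.0.0.5

[compute]
10.20.0.4
10.20.0.5

[vsd]
10.0.0.6

[vrsg]
10.0.0.7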
plugin.ini: This configuration file is required by the Nuage neutron plugin to communicate with
the Nuage VSP. Please refer to the Nuage OpenStack (Juno) Installation Guide to understand
how to specify the parameters in this file as per your OpenStack environment.
vrsinstall.yml: This is the Ansible configuration file which is used to remove the default Open
vSwitch and install the Nuage VRS (Virtual Routing and Switching) service on all of the
compute nodes. The administrator needs to modify the sample management IP addresses
(10.20.0.*) for the controller and compute nodes as per their environment.
NOTE: When the Ansible scripts prompt for a sudo password, just press enter.
When the Ansible automation script completes successfully, the following console message is
displayed along with the IP addresses of the different nodes, namely the Fuel, controller and
compute nodes. The absence of any failures (failed=0) implies that the integration completed
successfully.
Nuage VRS-G Integration
With the steps mentioned in the Nuage VSP Integration section, VMs on the tenant network
can access each other; however, these VMs remain inaccessible from the public network. In
order to access the VMs from the public network, they need to be assigned floating IPs and a
VRSG (Virtualized Routing and Services Gateway) node needs to be added to the OpenStack
cluster. The VRSG is a software gateway that acts as a VXLAN tunnel endpoint for the tenant
networks. The VRSG node is a simple Linux node running the Nuage VRS component in
gateway mode. With the VRSG added, the OpenStack controller in the reference architecture
will behave like an L3 router and will be used to route the traffic from the public network to the
tenant VMs using floating IPs. It is also possible to have a separate Linux node behave as the
L3 router instead of the OpenStack controller. The following sections describe the steps to get
the VRSG up and running:
Floating IP subnet
On the OpenStack controller, use the OpenStack neutron commands to create a floating IP
network and subnet (see the sketch below). Once the floating IP subnet is created
successfully, it should appear in the VSD as shown below. Note down this floating IP subnet
ID.
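With the Juno neutron CLI this can be done roughly as follows (a sketch; the network and subnet names are placeholders, and the CIDR is taken from the sample addressing used later in this document):
# On the OpenStack controller, with admin credentials sourced:
neutron net-create fip-net --router:external True
neutron subnet-create fip-net 172.16.0.128/25 --name fip-subnet --disable-dhcp
# Note the subnet ID printed by the second command.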
VRSG
1. Follow the Nuage Installation Guide to install the Nuage VRS software on a Linux
server (Ubuntu 12.04). During installation, make sure you set the parameter
"PERSONALITY" to "vrsg" in the "/etc/default/openvswitch" configuration file (see the
sketch after this list).
2. Change the eth1 interface on the VRSG node to be an access port that will accept
untagged traffic from the public OpenStack network:
nuage-vlan-config mod eth1 access 0
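Step 1 can be scripted roughly as follows (a sketch; it assumes PERSONALITY already appears as a key in /etc/default/openvswitch, and the service name may differ on your Nuage VRS installation):
# On the VRSG node: set the gateway personality and restart the service.
sed -i 's/^PERSONALITY=.*/PERSONALITY=vrsg/' /etc/default/openvswitch
service openvswitch restart                 # service name is an assumption
grep PERSONALITY /etc/default/openvswitch   # should print PERSONALITY=vrsg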
Gateway Configuration using VSD
1. If the VRSG is configured properly, it should be autodiscovered by the VSD and
should appear as a Pending Gateway. Please refer to the Discovering Gateways >
Autodiscovery > Gateway Not Already Configured in VSD section of the Nuage
User Guide to activate the newly discovered gateway.
2. Once the gateway is online, make sure you see the port eth0 as the network port and
eth1 as the access port.
3. Add VLAN 0 to the eth1 port.
4. Click the discovered gateway and assign permission to the required enterprise. Please
refer to the Gateway Provisioning > Assigning Permissions > Use Permissions
section of the Nuage User Guide.
Security Policies
Using the VSD, enable IP forwarding for both the ingress as well as the egress security
policies, as shown below.
Uplink host vport
The floating IP network now needs to be connected to the public network via the VRSG. This
is done by creating an uplink host vport using the setup_fip_port script provided along with
this document. The script can be run from any Linux host that can reach the VSD and that
has the open source jq tool installed. Please change the following parameters in the script
before running it from the Fuel node or any other node that is on the Fuel network.
Parameter       Value
VLAN            The VLAN tag for the public network traffic; 0 for untagged traffic
VSD_IP          IP address of the VSD
FIP_NAME        Floating IP subnet name
GW_NAME         The gateway name (can be obtained from the VSD)
PORT_NAME       The access port on the VRSG
UPLINK_SUBNET   The public subnet
UPLINK_MASK     The public subnet mask
UPLINK_IP       The IP address of eth1 on the controller node
UPLINK_IP_MAC   The MAC address of eth1 on the controller
UPLINK_GW       An IP address on the public network which will be used as the gateway.
                No other host on the public network should be using this IP address.
Sample parameters for the reference architecture are shown below:
VLAN="0"
VSD_IP="10.0.0.6"
FIP_NAME="72665124-95dd-482a-85b1-ee49f8a1565a"
GW_NAME="10.0.0.7"
PORT_NAME="eth1"
UPLINK_SUBNET="172.16.0.0"
UPLINK_MASK="255.255.255.128"
UPLINK_IP="172.16.0.2"
UPLINK_GW="172.16.0.100"
UPLINK_GW_MAC="52:54:00:fd:6b:fc"
Setting Routes
On the OpenStack controller (the L3 router), add a route to the floating IP subnet as follows:
ip route add "floating ip subnet" via "uplink gateway"
For example,
ip route add 172.16.0.128/25 via 172.16.0.100
Once this route has been added, you should be able to ping the tenant VMs over their floating
IPs from the OpenStack controller.
The other nodes in the public subnet can reach the tenant VMs if they add a route via the
OpenStack controller as follows:
ip route add "floating ip subnet" via "controller ip"
For example,
ip route add 172.16.0.128/25 via 172.16.0.2
You will also need to add the following iptables rule on the controller node:
iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source "controller ip"
For example,
iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 172.16.0.2
Testing
Once the Mirantis OpenStack cluster is set up along with Nuage VSP, the setup can be
validated by running the Nuage health check tool provided along with this document.
The following steps are needed to validate the integration by running the health check tool on
the controller node.
1. ./nuage_neutron_health_check --setup
As shown below, this step creates multiple tenant networks, subnets and a floating IP
network. VMs are then spawned on each of the hypervisor nodes and attached to a
private subnet. In addition, a floating IP is associated with each of the VMs.
2. Once step 1 succeeds, attach the floating IP network created in step 1 to the
public network by following the VRSG integration section.
3. ./nuage_neutron_health_check --run
As shown below, this will run a test that pings the VM on each of the hypervisors,
making sure the VM is reachable from the public network using the floating IPs. In
addition, the tool logs into each of the test VMs using ssh and pings all the other test
VMs on their private IP addresses, thereby exercising all of the reachable paths from
the public as well as the private networks.
4. ./nuage_neutron_health_check --cleanup
As shown below, this will delete the setup created in step 1.