
LTRCLD-1451 - Openstack for Network Engineers

Table of Contents:

1. Readme.1st
2. Objective
3. Lab setup
4. Lab access
5. Openstack overview
6. Create flavor
7. Create image
8. Create volume
9. Create network
10. Launch CSR1Kv instance
11. Launch XRv9K instance
12. Console access
13. Floating IP
14. Telnet access
15. Orchestration using HEAT templates
16. Launching VM with startup config using config-drive
17. Troubleshooting packet drops
18. Allowing address-pairs on ports
19. Acknowledgements
20. Appendix

1. Readme.1st

1.1. Please take the one-question pre-lab survey: PRE-LAB SURVEY

We request that you take the POST-LAB SURVEY after doing this lab.

1.2. Sudo: You have sudo privilege. Simply do "sudo [linux command]" for tasks that require root privilege.

1.3. Some sections are marked FYI. Breeze through these sections if you have limited time. You don't want to miss the later sections, which are more important.


2. Objective

By doing this lab, you will learn the following topics at a basic level (FYI):

• Nova flavor
• Glance image
• Neutron network, subnet, port
• Cinder volume
• Instantiating CSR1Kv with startup config
• Instantiating XRv9K with startup config
• Assigning floating IP to router instances
• Troubleshooting packet drops

We will build the following topology in this lab:

Remember that the hypervisor here is Linux KVM; Openstack is used to manage the virtual infrastructure.


3. Lab setup (FYI only)

This lab setup is hosted in a UCS-B chassis located in the DMZ lab in RTP. The UCS-B chassis is behind a dedicated firewall, and this network is not part of the Cisco enterprise network. So you need to VPN into the firewall even if you are in the office. Go to the next section after a quick look through the topology diagram.



4. Lab access

To access the lab, you need to VPN into the lab firewall (ASAv).

4.1. Lab access topology

From your lab's point of view, you will use two VLANs: VLAN-10 for accessing Linux and Openstack, and VLAN-XX for accessing VMs created within your space (you as a tenant). Here is a topology to represent this:


4.2. VPN in

Use the following details to VPN into the ASAv firewall:

• Server address: <Will be provided> • Username: <Will be provided> • Password: <Will be provided>


5. Openstack overview

FYI. The objective of this section is to expose you to the general Openstack lab environment. You can breeze through this section and focus more on the Openstack topics. [NOTE: Do not spend more than 5 min. on this section]

5.1. Enabled components and status

Observe:

$ openstack-status
$ openstack-service list

5.2. Answers file


Observe:

$ sudo ls -l /root
$ sudo more /root/{answer file name from above output}
$ sudo grep ^[^#] /root/{answer file name}
$ sudo grep NOVA /root/{answer file name}
$ sudo grep NEUTRON /root/{answer file name}
$ sudo grep CINDER /root/{answer file name}
$ sudo grep HEAT /root/{answer file name}
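The `grep ^[^#]` pattern above keeps only lines that begin with a non-comment character, which is a quick way to see the active settings in a large answers file. A minimal sketch against a hypothetical answers-file fragment (the real file and its settings live under /root on the lab server):

```shell
# Hypothetical answers-file fragment for illustration only; on the lab
# server you would run the grep against the real file in /root.
cat > /tmp/sample-answers.txt <<'EOF'
# Packstack answers file
CONFIG_NOVA_INSTALL=y
# Comment line
CONFIG_NEUTRON_INSTALL=y

CONFIG_CINDER_VOLUMES_SIZE=350G
EOF

# Keep only lines whose first character is not '#' (blank lines also drop out)
grep '^[^#]' /tmp/sample-answers.txt
```

With the fragment above, only the three CONFIG_ lines survive the filter.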

5.3. Logging directory

Observe: Openstack log files are saved in the /var/log directory.

$ sudo ls -l /var/log/
$ sudo ls -l /var/log/nova
$ sudo ls -l /var/log/neutron/
$ sudo ls -l /var/log/glance/
$ sudo ls -l /var/log/cinder
$ sudo ls -l /var/log/heat

5.4. Configuration files

Observe: Openstack configuration files are saved in the /etc directory.

$ sudo ls -l /etc | egrep "nova|neutron|heat|cinder"

5.5. Cinder volumes

Observe: The disk is partitioned like this: 1 GB for boot, 200 GB for /root, 16 GB for swap, and the rest for Cinder volumes (~350 GB or ~800 GB depending on your server).

$ sudo fdisk -l
$ sudo pvs
$ sudo lvs
$ sudo vgs
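If you want just the free space left in the Cinder volume group, you can parse the `vgs` output with awk. A sketch against hypothetical output (the names and sizes below are made up for illustration; the real numbers come from `sudo vgs` on your server):

```shell
# Hypothetical `vgs` output captured to a file; on the lab server you
# would pipe the real command, e.g.:  sudo vgs | awk ...
cat > /tmp/vgs-sample.txt <<'EOF'
  centos          1   3   0 wz--n- 216.00g   4.00g
  cinder-volumes  1   2   0 wz--n- 350.00g 295.00g
EOF

# Match the cinder-volumes row and print its last column (VFree)
free=$(awk '$1 == "cinder-volumes" {print $NF}' /tmp/vgs-sample.txt)
echo "cinder-volumes free: $free"
```

The free-space number shrinks as you create Cinder volumes in the next sections, so this is a handy check before and after volume creation.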



6. Create flavor

A flavor is a template that defines the size of various parameters. These templates are used when launching virtual machine (VM) instances. The admin user can create flavors to be used by tenants. Openstack comes with five predefined flavors. For example, the flavor named "m1.tiny" assigns RAM=512MB, disk=1 GB, vCPU=1 to a VM created with that flavor.

6.1. Syntax

$ nova flavor-create FLAVOR_NAME FLAVOR_ID RAM_IN_MB ROOT_DISK_IN_GB NUMBER_OF_VCPUS

For a description of the parameters, go to the flavor section in the Appendix.

6.2. Find existing flavors

$ nova flavor-list
$ nova flavor-show {flavor-name}

6.3. Monitor the log messages. FYI.

Open three terminal windows. Window-1 is where you execute lab instructions; use Window-2 and Window-3 for observation. Important: there are four labs per Openstack server, and the log files are common to all four labs. So you will see messages pertaining to other labs; pay attention to the flavor name and timestamp to distinguish your log entries from others. [NOTE: you may skip the monitoring steps if you are uninterested]

6.3.1. On Window-2: $ sudo tail -f /var/log/nova/nova-api.log
6.3.2. On Window-3: $ sudo tail -f /var/log/nova/nova-compute.log

6.4. In Window-1, create a new flavor (XX = your lab ID)

After every flavor-create command, monitor the "tail -f" windows.

Flavor to be used for the CSR1Kv router:

$ nova flavor-create csr-flav-XX 1XXXX 4096 0 1
$ nova flavor-show csr-flav-XX
$ sudo grep 1XXXX /var/log/nova/*

Flavor to be used for the XRv9K router:


$ nova flavor-create xrv9k-flav-XX 2XXXX 16384 55 4
$ nova flavor-show xrv9k-flav-XX
$ sudo grep 2XXXX /var/log/nova/nova-api.log
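The flavor IDs above follow a simple convention: a leading 1 (CSR) or 2 (XRv9K) followed by your two-digit lab ID repeated twice, so user 24 gets 12424 and 22424 (you can confirm this against the flavor-list sample output later in this guide). A sketch of that derivation:

```shell
XX=24                          # your two-digit lab ID
CSR_FLAVOR_ID="1${XX}${XX}"    # 1XXXX -> 12424 for lab 24
XRV_FLAVOR_ID="2${XX}${XX}"    # 2XXXX -> 22424 for lab 24

echo "csr-flav-${XX}   id=${CSR_FLAVOR_ID}"
echo "xrv9k-flav-${XX} id=${XRV_FLAVOR_ID}"
```

Picking an ID scheme like this keeps each user's flavors unique on a shared server, since flavor IDs must not collide.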

6.5. Create a flavor with XRv9K specs via Horizon dashboard

6.5.1. Login to Horizon dashboard

URL=http://172.31.56.23X (your Openstack server)
Username=userXX
Password=cisco

6.5.2. Go to Admin → System → Flavors → Create Flavor

Use the following parameters and create two flavors: sample-csr-flav-XX

• name=sample-csr-flav-XX
• ID=auto
• VCPUs=1
• RAM=4096MB
• Root disk=0GB
• Ephemeral=0
• Swap=0

sample-xrv9k-flav-XX

• name=sample-xrv9k-flav-XX
• ID=auto
• VCPUs=4
• RAM=16384MB
• Root disk=55GB
• Ephemeral=0
• Swap=0

Verify the parameters of the flavors with "flavor-show":

$ nova flavor-show sample-csr-flav-XX
$ nova flavor-show sample-xrv9k-flav-XX

6.6. Cleanup

$ nova flavor-list
$ nova flavor-delete sample-csr-flav-XX
$ nova flavor-list
$ nova flavor-delete sample-xrv9k-flav-XX
$ nova flavor-list

At this point, the output of "nova flavor-list" should still show the csr-flav-XX and xrv9k-flav-XX flavors that you created. Terminate the tail -f commands.

6.7. Sample output


7. Create Glance image


FYI. Virtual machine images are managed by the Glance service. Images must be catalogued by Glance before they can be used for launching VM instances. Go through the following steps to create the CSR1Kv and XRv9K images.

Usage:

glance --os-image-api-version 2 image-create
    [--architecture <ARCHITECTURE>] [--protected [True|False]]
    [--name <NAME>] [--instance-uuid <INSTANCE_UUID>]
    [--min-disk <MIN_DISK>] [--visibility <VISIBILITY>]
    [--kernel-id <KERNEL_ID>] [--tags <TAGS> [<TAGS> ...]]
    [--os-version <OS_VERSION>] [--disk-format <DISK_FORMAT>]
    [--self <SELF>] [--os-distro <OS_DISTRO>] [--id <ID>]
    [--owner <OWNER>] [--ramdisk-id <RAMDISK_ID>]
    [--min-ram <MIN_RAM>] [--container-format <CONTAINER_FORMAT>]
    [--property <key=value>] [--file <FILE>] [--progress]

For a description of the arguments used in "glance image-create", refer to the IMAGE section in the appendix.

7.1. FYI. Monitor glance-image log [NOTE: It is safe to skip this step if you are running short of time]

In Window-2: $ sudo tail -f /var/log/glance/api.log
In Window-3: $ sudo tail -f /var/log/glance/registry.log

7.2. Create CSR1Kv image

After every image-create command, monitor the "tail -f" windows. Correlate the timestamp and image name in the log to ensure you are watching the right entries. Remember that four users work on every Openstack server and all logs are saved in the same file(s).

$ glance image-list
$ glance image-create --name csr-img-XX --disk-format qcow2 --container-format bare --file /var/lib/libvirt/images/csr1000v-universalk9.03.16.01a.S.155-3.S1a-ext.qcow2
$ glance image-list
$ glance image-show csr-img-XX

7.3. Create XRv9K image

While monitoring the tail -f windows, create the XRv9K image.

$ glance image-list


$ glance image-create --name xrv9k-img-XX --disk-format iso --container-format bare --file /var/lib/libvirt/images/xrv9k-fullk9_vga-x.iso-5.4.0
$ glance image-list
$ glance image-show xrv9k-img-XX

7.4. Cleanup

7.4.1. Terminate the tail -f commands

7.5. Sample output


8. Create volume

XRv9K requires a disk volume for booting; uncompressed IOS XR OS files are saved in this disk space. In this section we will create a volume, which will be used in later sections for launching XRv9K.

8.1. Cinder create syntax

$ cinder create SIZE_IN_GB --display-name NAME

For more info on cinder, visit the CINDER section in the appendix.
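Note the argument order: size first, then the name flag. A dry-run sketch that wraps the two arguments in a tiny helper and only prints the command it would run (drop the `echo` on the lab server to actually execute it; the helper name is ours, not a cinder feature):

```shell
# Dry-run wrapper: prints the cinder command instead of executing it,
# so the argument order can be checked before touching the server.
make_volume() {
  local name=$1 size_gb=$2
  echo cinder create "$size_gb" --display-name "$name"
}

make_volume xrv9k-vol-24 55
```

Running the sketch prints `cinder create 55 --display-name xrv9k-vol-24`, which matches the syntax above.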

8.2. FYI. Observe the disk partitions

Find sizes of each partition.

$ sudo fdisk -l
$ sudo vgs

8.3. FYI. Check which volume group the cinder service is going to use

$ sudo grep volume_group /etc/cinder/cinder.conf

Cinder is configured to use the volume group named "cinder-volumes". Do we have a volume group called "cinder-volumes"?


8.4. Observe existing cinder volumes

$ cinder list

8.5. FYI. Monitor log

Keep the below tail -f commands running in the observation windows.

$ sudo tail -f /var/log/cinder/scheduler.log

$ sudo tail -f /var/log/cinder/volume.log

8.6. Create a volume for XRv9K instance

$ cinder create --display-name xrv9k-vol-XX 55
$ cinder set-bootable xrv9k-vol-XX true
$ cinder list
$ cinder show xrv9k-vol-XX

8.7. Create and delete a sample volume

$ cinder create --display-name sample-vol-XX 1
$ cinder list
$ cinder show sample-vol-XX
$ cinder delete sample-vol-XX

8.8. Cleanup

8.8.1. Terminate the tail -f commands

8.8.2. Verify the volume that you created. Keep the volume xrv9k-vol-XX, as it is needed later in the lab. Delete sample or any other volumes you created for experimental purposes.

$ cinder list

8.9. Sample output



9. Create network

Neutron networking implementation and terminology can confuse engineers with a strong networking background, so be patient when dealing with Neutron initially. It will soon make sense.

FYI. Network: There are three parts to a network: network, subnet, and port. Use neutron net-show, neutron subnet-show, and neutron port-show to see details.

• Network defines type (VLAN, VXLAN, FLAT, Local etc), corresponding type ID, internal/external etc.

• Subnet defines CIDR (IP subnet), pool of IP addresses to be allocated, DHCP agent enable/disable, DNS enable/disable, gateway enable/disable, IPv4/IPv6 etc.

• Port defines a specific IP address, the device which owns it, etc. When a VM (or other device) is launched and attached to a network, a port is automatically created.

Neutron agents: Neutron provides certain built-in devices that provide inter-VM networking functions such as routing, load balancing, DHCP, etc. We can create a Neutron router (called a q-router). In addition to the router, Neutron also provides DHCP and L2 switch (OVS) devices. Going over these is not the focus of this section, but you will come across them; we would need another lab to look at them closely. In this section you will create the network and subnets that you will use in later sections.
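The network → subnet → port layering described above maps to three CLI calls that always run in that order: the network first, then addressing inside it, then concrete attachment points. A dry-run sketch of the sequence (the `echo`s only print the commands; remove them on the lab server):

```shell
NET=b2b-net-24
CIDR=1.1.1.0/24

# 1. Network: the container (type, segmentation ID, shared/external flags)
echo neutron net-create "$NET"
# 2. Subnet: the addressing (CIDR, allocation pool, DHCP) inside that network
echo neutron subnet-create "$NET" "$CIDR" --name "b2b-subnet-24"
# 3. Port: a concrete attachment point with a fixed IP on that subnet
echo neutron port-create "$NET" --fixed-ip ip_address=1.1.1.10
```

Deletion runs in the reverse order (port, then subnet, then network), as the cleanup steps later in this section show.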

9.1. FYI. Command syntax

At the end of this session, you will have a network and subnet created.

Usage: (for more details on command options, refer to the NEUTRON section in the appendix)

neutron net-create [-h] [-f {html,json,json,shell,table,value,yaml,yaml}]
    [-c COLUMN] [--max-width <integer>] [--noindent]
    [--prefix PREFIX] [--request-format {json}]
    [--tenant-id TENANT_ID] [--admin-state-down] [--shared]
    [--provider:network_type <network_type>]
    [--provider:physical_network <physical_network_name>]
    [--provider:segmentation_id <segmentation_id>]
    [--vlan-transparent {True,False}] [--qos-policy QOS_POLICY]
    [--availability-zone-hint AVAILABILITY_ZONE]
    NAME

neutron subnet-create [-h] [-f {html,json,json,shell,table,value,yaml,yaml}]
    [-c COLUMN] [--max-width <integer>] [--noindent]
    [--prefix PREFIX] [--request-format {json}]
    [--tenant-id TENANT_ID] [--name NAME]
    [--gateway GATEWAY_IP | --no-gateway]
    [--allocation-pool start=IP_ADDR,end=IP_ADDR]
    [--host-route destination=CIDR,nexthop=IP_ADDR]
    [--dns-nameserver DNS_NAMESERVER]
    [--disable-dhcp] [--enable-dhcp] [--ip-version {4,6}]
    [--ipv6-ra-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}]
    [--ipv6-address-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}]
    [--subnetpool SUBNETPOOL] [--prefixlen PREFIX_LENGTH]
    NETWORK [CIDR]

neutron port-create [-h] [-f {html,json,json,shell,table,value,yaml,yaml}]
    [-c COLUMN] [--max-width <integer>] [--noindent]
    [--prefix PREFIX] [--request-format {json}]
    [--tenant-id TENANT_ID] [--name NAME]
    [--fixed-ip subnet_id=SUBNET,ip_address=IP_ADDR]
    [--device-id DEVICE_ID] [--device-owner DEVICE_OWNER]
    [--admin-state-down] [--mac-address MAC_ADDRESS]
    [--vnic-type <direct | direct-physical | macvtap | normal | baremetal>]
    [--binding-profile BINDING_PROFILE]
    [--security-group SECURITY_GROUP | --no-security-groups]
    [--extra-dhcp-opt EXTRA_DHCP_OPTS] [--qos-policy QOS_POLICY]
    [--allowed-address-pair ip_address=IP_ADDR[,mac_address=MAC_ADDR] | --no-allowed-address-pairs]
    NETWORK

An example of a simple CLI:

$ neutron net-create {net name}
$ neutron subnet-create {net-name} {IP subnet} --name {subnet name}
$ neutron port-create {net name} --fixed-ip ip_address={IP address}

9.2. FYI. Monitor log

Keep the below tail -f commands running in the observation windows.

$ sudo tail -f /var/log/neutron/server.log

$ sudo tail -f /var/log/neutron/l3-agent.log

9.3. Create network

$ neutron net-list
$ neutron net-create b2b-net-XX
$ neutron net-show b2b-net-XX

9.4. Create subnet

$ neutron subnet-list
$ neutron subnet-create b2b-net-XX 1.1.1.0/24 --name b2b-subnet-XX
$ neutron subnet-list
$ neutron subnet-show b2b-subnet-XX

9.5. FYI. Create and delete sample net, subnet, port

We will be using b2b-net-XX and b2b-subnet-XX in later sections, so do not delete them. Create some sample net, subnet, and port, and then delete them.

$ neutron net-list
$ neutron net-create sample-net-XX
$ neutron net-list
$ neutron net-show sample-net-XX
$ neutron subnet-list
$ neutron subnet-create sample-net-XX 10.254.254.0/24 --name sample-subnet-XX
$ neutron subnet-list
$ neutron subnet-show sample-subnet-XX
$ neutron port-list
$ neutron port-create sample-net-XX
$ neutron port-list
$ neutron port-show {id of the port taken from the above output}

Now, delete the port, sample subnet, and sample net. Do not delete b2b-net-XX and b2b-subnet-XX:

$ neutron port-delete {id of the port}
$ neutron subnet-delete sample-subnet-XX
$ neutron net-delete sample-net-XX

9.6. Cleanup

9.6.1. Terminate the tail -f commands

9.6.2. Verify the network and subnet that you created. Keep the network b2b-net-XX and subnet b2b-subnet-XX. Delete any other sample network, subnet, or port that you created.

$ neutron net-list
$ neutron subnet-list
$ neutron port-list

9.7. Sample output



10. Launch CSR1Kv instance

FYI. To launch an instance we need a flavor, image, and network defined. The service that handles instance launching is Nova. Below is the syntax for "nova boot", which is used to launch an instance. A detailed description of the parameters is in the NOVA subsection of the appendix. Launching a Nova instance involves a host of interactions among Nova, Neutron, and other projects; a high-level flow is given in the appendix: Instance Provisioning Flow.

nova boot [--flavor <flavor>] [--image <image>]
    [--image-with <key=value>] [--boot-volume <volume_id>]
    [--snapshot <snapshot_id>] [--min-count <number>]
    [--max-count <number>] [--meta <key=value>]
    [--file <dst-path=src-path>] [--key-name <key-name>]
    [--user-data <user-data>]
    [--availability-zone <availability-zone>]
    [--security-groups <security-groups>]
    [--block-device-mapping <dev-name=mapping>]
    [--block-device key1=value1[,key2=value2...]]
    [--swap <swap_size>]
    [--ephemeral size=<size>[,format=<format>]]
    [--hint <key=value>]
    [--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>]
    [--config-drive <value>] [--poll] [--admin-pass <value>]
    [--access-ip-v4 <value>] [--access-ip-v6 <value>]
    <name>

An example of a simple CLI is:

$ nova boot csr-rtr-1 --image csr-image --flavor csr-flavor --nic net-id={ID of the net to connect}
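"nova boot" wants network UUIDs, not names, so in practice you first fish the IDs out of `neutron net-list`. A sketch that does this against a hypothetical listing (the UUIDs below are made up; on the server, pipe the real `neutron net-list` output instead of reading from a file, and drop the `echo` to actually boot):

```shell
# Hypothetical `neutron net-list` output for illustration only.
cat > /tmp/net-list.txt <<'EOF'
+--------------------------------------+---------------+
| id                                   | name          |
+--------------------------------------+---------------+
| 2ae9f9bb-df07-49a8-ab35-a6f5da17cbda | b2b-net-24    |
| e46f10d4-a114-43c9-ab81-6422f9c5dd05 | tenant-net-24 |
+--------------------------------------+---------------+
EOF

# net_id <name>: print the UUID of the network with exactly that name
net_id() {
  awk -F'|' -v n="$1" '$3 ~ "^ *"n" *$" {gsub(/ /,"",$2); print $2}' /tmp/net-list.txt
}

TENANT_ID=$(net_id tenant-net-24)
B2B_ID=$(net_id b2b-net-24)

# NIC order matters: the first --nic becomes Gig1, the second Gig2
echo nova boot csr-24 --image csr-img-24 --flavor csr-flav-24 \
  --nic net-id="$TENANT_ID" --nic net-id="$B2B_ID"
```

Keeping the IDs in variables makes the NIC-ordering rule (tenant net first, b2b net second, as the note below explains) easy to get right.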

10.1. FYI. Monitor log

Keep the below tail -f commands running in the observation windows.

$ sudo tail -f /var/log/nova/nova-scheduler.log

$ sudo tail -f /var/log/nova/nova-compute.log

10.2. Verify availability of resources

Make sure you have the right flavor, image, and network. You need csr-flav-XX, csr-img-XX, tenant-net-XX, and b2b-net-XX.

$ nova flavor-list
$ glance image-list
$ neutron net-list

10.3. Instantiate CSR router instance

$ nova list
$ nova boot csr-XX --image csr-img-XX --flavor csr-flav-XX --nic net-id={id of tenant-net-XX} --nic net-id={id of b2b-net-XX}
$ nova list    // pay attention to: status, power state, networks and IP addresses associated, etc.
$ nova show csr-XX

Note that the order of the net-id arguments in the CLI is significant. In CSR, the first net-id will be attached to the Gig1 interface and the second net-id to Gig2. To match the topology, use the tenant-net ID for the first NIC in the CLI and the b2b-net ID for the second.

10.4. Cleanup

10.4.1. Terminate the tail -f commands

10.4.2. Verify that CSR instance is running well:


Indicators in the "nova list" output should be:

• Status = Active
• Power state = Running
• Attached to the tenant-net-XX and b2b-net-XX networks, with IP addresses assigned for each net

10.5. Sample output


[user24@almas-2 ~(keystone_user24)]$ nova boot csr-24 --image csr-img-24 --flavor csr-flav-24 --nic net-id=e46f10d4-a114-43c9-ab81-6422f9c5dd05 --nic net-id=2ae9f9bb-df07-49a8-ab35-a6f5da17cbda
+--------------------------------------+---------------------------------------------------+
| Property                             | Value                                             |
+--------------------------------------+---------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                            |
| OS-EXT-AZ:availability_zone          |                                                   |
| OS-EXT-SRV-ATTR:host                 | -                                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000024                                 |
| OS-EXT-STS:power_state               | 0                                                 |
| OS-EXT-STS:task_state                | scheduling                                        |
| OS-EXT-STS:vm_state                  | building                                          |
| OS-SRV-USG:launched_at               | -                                                 |
| OS-SRV-USG:terminated_at             | -                                                 |
| accessIPv4                           |                                                   |
| accessIPv6                           |                                                   |
| adminPass                            | 8FqVUoEP3tmi                                      |
| config_drive                         |                                                   |
| created                              | 2016-03-07T17:56:22Z                              |
| flavor                               | csr-flav-24 (12424)                               |
| hostId                               |                                                   |
| id                                   | 4711bc14-9427-4da7-982b-4c6e05280437              |
| image                                | csr-img-24 (e2802f98-83a0-4f17-8485-1dda55fc22c9) |
| key_name                             | -                                                 |
| metadata                             | {}                                                |
| name                                 | csr-24                                            |
| os-extended-volumes:volumes_attached | []                                                |
| progress                             | 0                                                 |
| security_groups                      | default                                           |
| status                               | BUILD                                             |
| tenant_id                            | 0bfdbace7047444388068ecf7dce42f8                  |
| updated                              | 2016-03-07T17:56:22Z                              |
| user_id                              | 032308082b1a47479578669fc86c2416                  |
+--------------------------------------+---------------------------------------------------+
[user24@almas-2 ~(keystone_user24)]$
[user24@almas-2 ~(keystone_user24)]$ nova list
+--------------------------------------+--------+--------+------------+-------------+----------------------------------------------+
| ID                                   | Name   | Status | Task State | Power State | Networks                                     |
+--------------------------------------+--------+--------+------------+-------------+----------------------------------------------+
| 4711bc14-9427-4da7-982b-4c6e05280437 | csr-24 | BUILD  | spawning   | NOSTATE     | b2b-net-24=1.1.1.3; tenant-net-24=10.30.24.7 |
+--------------------------------------+--------+--------+------------+-------------+----------------------------------------------+
[user24@almas-2 ~(keystone_user24)]$



11. Launch XRv9K instance

XRv9K instantiation differs from CSR1Kv instantiation: XRv9K requires a disk volume. It expands the ISO file and installs the OS onto the disk volume.

11.1. FYI. Monitor log

Keep the below tail -f commands running in the observation windows.

$ sudo tail -f /var/log/nova/nova-scheduler.log

$ sudo tail -f /var/log/nova/nova-compute.log

11.2. Verify availability of resources

Make sure you have the right flavor, image, networks, and disk volume. You need: xrv9k-flav-XX, xrv9k-img-XX, tenant-net-XX, mgmt-other, mgmt-host, b2b-net-XX, and xrv9k-vol-XX. The IDs in the below output are needed for the command that instantiates the XRv9K instance.

$ nova flavor-list
$ glance image-list
$ neutron net-list
$ cinder list
$ sudo vgs

11.3. Instantiate XRv9K router instance

Go over the CLI syntax:

nova boot xrv9k-XX --flavor {xrv9k-flavor-name} \
  --nic net-id={tenant-net-id} \
  --nic net-id={mgmt-other-net-id} \
  --nic net-id={mgmt-host-net-id} \
  --nic net-id={b2b-net-XX} \
  --block-device id={glance id of xrv9k ISO image},source=image,dest=volume,bus=ide,device=/dev/hdc,size=1,type=cdrom,bootindex=1 \
  --block-device source=volume,id={cinder ID of xrv9k disk volume},dest=volume,size=55,bootindex=0
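The two --block-device arguments in the syntax above are the easiest place to slip in a typo, so it can help to assemble them in shell variables first and eyeball the result before booting. A sketch with hypothetical IDs (substitute the real glance and cinder IDs from step 11.2, and drop the `echo` to actually run the boot):

```shell
ISO_IMG_ID=cc29748d-bc0e-43b5-b28a-7c962e9743f0   # glance ID of the XRv9K ISO (example value)
DISK_VOL_ID=ea6e35cf-8126-4079-bd61-785d476ce3ca  # cinder ID of xrv9k-vol-XX (example value)

# CD-ROM device carrying the install ISO (bootindex=1, as in the lab CLI)
BD_CDROM="id=${ISO_IMG_ID},source=image,dest=volume,bus=ide,device=/dev/hdc,size=1,type=cdrom,bootindex=1"
# 55 GB disk volume the OS gets installed onto (bootindex=0)
BD_DISK="source=volume,id=${DISK_VOL_ID},dest=volume,size=55,bootindex=0"

echo nova boot xrv9k-24 --flavor xrv9k-flav-24 \
  --block-device "$BD_CDROM" --block-device "$BD_DISK"
```

The key=value pairs inside each --block-device argument must be comma-separated with no spaces, which is exactly why the guide warns that this CLI is sensitive to white space.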

Example CLI


nova boot xrv9k-84 --flavor xrv9k-flav-84 \
  --nic net-id=e053d464-a8e0-4537-a8d1-d5d13fe0e39e \
  --nic net-id=2f92588e-4321-4018-9e40-9236e4200ee6 \
  --nic net-id=5293dac3-2219-45c5-a54c-26469a5f096f \
  --nic net-id=b3ac3cd8-fd18-4bd6-b502-9385ee8c95b2 \
  --block-device id=02feb47c-b6fc-45e0-a2f1-2d77d60b3fa8,source=image,dest=volume,bus=ide,device=/dev/hdc,size=1,type=cdrom,bootindex=1 \
  --block-device source=volume,id=6ad553d8-a3e4-47a0-a3ee-a1790d91cdff,dest=volume,size=55,bootindex=0

Use the below CLI to instantiate the XRv9K instance. This CLI is sensitive to white space, so copy/paste or type with care. Use the output from the commands in step 11.2 for the respective ID values.

$ nova list
$ nova boot xrv9k-XX --flavor xrv9k-flav-XX \
  --nic net-id={tenant-net-XX id} \
  --nic net-id={mgmt-other id} \
  --nic net-id={mgmt-host id} \
  --nic net-id={b2b-net-XX id} \
  --block-device id={xrv9k-img-XX id},source=image,dest=volume,bus=ide,device=/dev/hdc,size=1,type=cdrom,bootindex=1 \
  --block-device source=volume,id={xrv9k-vol-XX id},dest=volume,size=55,bootindex=0
$ nova list
$ nova show xrv9k-XX
$ cinder list
$ sudo vgs

11.4. Cleanup

11.5. Terminate the tail -f commands

11.6. Verify that the XRv9K instance is running well

Using the CLI given above (nova list, cinder list, etc.), ensure the VM is in an operational state.

11.7. Sample output


[user24@almas-2 ~(keystone_user24)]$ nova flavor-list | grep -e flav-24 -e ID -e +
+-------+---------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID    | Name          | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-------+---------------+-----------+------+-----------+------+-------+-------------+-----------+
| 12424 | csr-flav-24   | 4096      | 0    | 0         |      | 1     | 1.0         | True      |
| 22424 | xrv9k-flav-24 | 16384     | 55   | 0         |      | 4     | 1.0         | True      |
+-------+---------------+-----------+------+-----------+------+-------+-------------+-----------+
[user24@almas-2 ~(keystone_user24)]$
[user24@almas-2 ~(keystone_user24)]$ neutron net-list | grep -e net-24 -e mgmt-other -e mgmt-host -e id -e +
+--------------------------------------+---------------+-----------------------------------------------------+
| id                                   | name          | subnets                                             |
+--------------------------------------+---------------+-----------------------------------------------------+
| 2ae9f9bb-df07-49a8-ab35-a6f5da17cbda | b2b-net-24    | 44310bcf-a79e-4297-8c1b-80a7c42d9671 1.1.1.0/24     |
| b7d88dc8-bac2-48f0-8a8c-ddf8efa06b6a | mgmt-host     | 7192d57b-732d-4444-8397-77c1a90227c0 221.1.2.0/24   |
| c34f522a-657b-4437-a1da-db69d23b6cbb | mgmt-net-24   | 3c041262-fecb-4559-8b75-a08ba0e4eff2 172.30.24.0/24 |
| e46f10d4-a114-43c9-ab81-6422f9c5dd05 | tenant-net-24 | 00af729f-8a92-42c4-9f73-731721ad3b05 10.30.24.0/24  |
| e8ef2abb-a125-418d-8647-ebe25ec22b57 | mgmt-other    | 145047f3-0874-4890-9fd4-2d5ef54c26fd 221.1.1.0/24   |
+--------------------------------------+---------------+-----------------------------------------------------+
[user24@almas-2 ~(keystone_user24)]$
[user24@almas-2 ~(keystone_user24)]$ glance image-list | grep -e img-24 -e ID -e +
+--------------------------------------+--------------+-------------+------------------+------------+--------+
| ID                                   | Name         | Disk Format | Container Format | Size       | Status |
+--------------------------------------+--------------+-------------+------------------+------------+--------+
| e2802f98-83a0-4f17-8485-1dda55fc22c9 | csr-img-24   | qcow2       | bare             | 1401552896 | active |
| cc29748d-bc0e-43b5-b28a-7c962e9743f0 | xrv9k-img-24 | iso         | bare             | 722710528  | active |
+--------------------------------------+--------------+-------------+------------------+------------+--------+
[user24@almas-2 ~(keystone_user24)]$
[user24@almas-2 ~(keystone_user24)]$ cinder list | grep -e vol-24 -e ID -e +
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ea6e35cf-8126-4079-bd61-785d476ce3ca | available | xrv9k-vol-24 | 55   | -           | true     |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[user24@almas-2 ~(keystone_user24)]$
[user24@almas-2 ~(keystone_user24)]$ nova boot xrv9k-24 --flavor xrv9k-flav-24 \
> --nic net-id=e46f10d4-a114-43c9-ab81-6422f9c5dd05 \
> --nic net-id=e8ef2abb-a125-418d-8647-ebe25ec22b57 \
> --nic net-id=b7d88dc8-bac2-48f0-8a8c-ddf8efa06b6a \
> --nic net-id=2ae9f9bb-df07-49a8-ab35-a6f5da17cbda \
> --block-device id=cc29748d-bc0e-43b5-b28a-7c962e9743f0,source=image,dest=volume,bus=ide,device=/dev/hdc,size=1,type=cdrom,bootindex=1 \
> --block-device source=volume,id=ea6e35cf-8126-4079-bd61-785d476ce3ca,dest=volume,size=55,bootindex=0
+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          |                                                  |
| OS-EXT-SRV-ATTR:host                 | -                                                |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000025                                |
| OS-EXT-STS:power_state               | 0                                                |
| OS-EXT-STS:task_state                | scheduling                                       |
| OS-EXT-STS:vm_state                  | building                                         |
| OS-SRV-USG:launched_at               | -                                                |
| OS-SRV-USG:terminated_at             | -                                                |
| accessIPv4                           |                                                  |
| accessIPv6                           |                                                  |
| adminPass                            | DLRrb9zuXB3y                                     |
| config_drive                         |                                                  |
| created                              | 2016-03-07T18:06:35Z                             |
| flavor                               | xrv9k-flav-24 (22424)                            |
| hostId                               |                                                  |
| id                                   | c0713685-db8f-43c0-9b66-8887cb34b78d             |
| image                                | Attempt to boot from volume - no image supplied  |
| key_name                             | -                                                |
| metadata                             | {}                                               |
| name                                 | xrv9k-24                                         |
| os-extended-volumes:volumes_attached | [{"id": "ea6e35cf-8126-4079-bd61-785d476ce3ca"}] |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tenant_id                            | 0bfdbace7047444388068ecf7dce42f8                 |
| updated                              | 2016-03-07T18:06:36Z                             |
| user_id                              | 032308082b1a47479578669fc86c2416                 |
+--------------------------------------+--------------------------------------------------+
[user24@almas-2 ~(keystone_user24)]$ nova list
+--------------------------------------+----------+--------+------------+-------------+-------------------------------------------------------------------------------------------+
| ID                                   | Name     | Status | Task State | Power State | Networks                                                                                  |
+--------------------------------------+----------+--------+------------+-------------+-------------------------------------------------------------------------------------------+
| 4711bc14-9427-4da7-982b-4c6e05280437 | csr-24   | ACTIVE | -          | Running     | b2b-net-24=1.1.1.3; tenant-net-24=10.30.24.7                                              |
| c0713685-db8f-43c0-9b66-8887cb34b78d | xrv9k-24 | ACTIVE | -          | Running     | mgmt-other=221.1.1.10; b2b-net-24=1.1.1.4; mgmt-host=221.1.2.10; tenant-net-24=10.30.24.8 |
+--------------------------------------+----------+--------+------------+-------------+-------------------------------------------------------------------------------------------+



12. Console access

Now that the CSR and XRv9K instances are launched, you can access their consoles via the noVNC HTML UI. noVNC is an in-browser VNC client implemented in HTML5. The console URL can be acquired through the Horizon dashboard or through the OpenStack Nova CLI.

12.1. Access csr-XX console

12.1.1. Acquire URL to csr-XX console

Syntax: $ nova get-vnc-console INSTANCE_NAME VNC_TYPE Do: $ nova get-vnc-console csr-XX novnc

An output with an HTML URL is printed. Copy this URL. Note that the URL has a 5-minute timeout; simply request a new URL whenever the console gets disconnected. Sample output: $ nova get-vnc-console wisp-ce2 novnc +-------+----------------------------------------------------------------------------------+ | Type | Url | +-------+----------------------------------------------------------------------------------+ | novnc | http://172.31.56.2:6080/vnc_auto.html?token=c2c98095-6b0e-4b4c-8bca-6e4c032a4b68 | +-------+----------------------------------------------------------------------------------+
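If you want to script console retrieval, the one-time token can be pulled out of the CLI output with standard text tools. A minimal sketch against the sample row above (the table layout is assumed to match the output shown; in the lab you would pipe `nova get-vnc-console csr-XX novnc` instead of the canned string):

```shell
#!/bin/sh
# Parse the novnc row of `nova get-vnc-console` output.
# The canned line below is the sample row printed above.
row='| novnc | http://172.31.56.2:6080/vnc_auto.html?token=c2c98095-6b0e-4b4c-8bca-6e4c032a4b68 |'

# Column 3 (split on '|') holds the URL; strip the padding spaces.
url=$(printf '%s\n' "$row" | awk -F'|' '{gsub(/ /, "", $3); print $3}')
token=${url##*token=}          # everything after "token="

echo "$url"
echo "$token"
```

Remember the token expires after 5 minutes, so fetch a fresh URL right before opening the browser.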

12.1.2. Access csr-XX console from your browser

Access the console at the URL given in the above step.

12.2. Access xrv9k-XX console

In this step, let us use the Horizon dashboard to access the console. Access the Horizon dashboard of your OpenStack server at http://172.31.56.23X (.238 for users 81–84). The password for all users is cisco. Go to Project → Compute → Instances


Right-click on xrv9k-XX → Open link in new tab → go to the newly opened tab → go to the Console sub-tab → click on "click here to show only console"

Then,


You should see the console screen. Use cisco for username/password.


13. Out-of-band access using Floating IP

As given in the diagram below, out-of-band management access to the routers is through the gig1 interface on the CSR and the MgmtEth0/RP0/CPU0/0 interface on the XRv9K router. In the first step, we need to un-shut these interfaces and assign them IP addresses. OOB access topology:


The IP subnet 10.30.XX.0/24 is in tenant space; the Cisco corp network has no route to it. Connectivity is achieved through network address translation (NAT), using Neutron's built-in q-router agent. The q-router's placement in the topology is: CSR-gig1---tenant-net-XX---Q.router----oob-external-net---server-vNIC---UCS infra----DMZ switch--Internet. In the second step, we will configure static NAT on the q-router by assigning a "floating IP".

13.1. Basic connectivity on tenant-net-XX

Command to get console URL is: nova get-vnc-console <instance-name or ID> novnc Do the following from router console:

13.1.1. On the csr-XX router, verify that interface gig1 is UP and an IP address is assigned. If not, do the following:

Un-shut gig1
Config: "ip address dhcp"
Exit and save the config

You should now get an IP address for gig1 interface.


Verify connectivity (from the csr-XX router console): 1) ping the gig1 interface IP 2) ping 10.30.XX.1

13.1.2. On the xrv9k-XX router, verify that interface MgmtEth0/RP0/CPU0/0 is UP and an IP address is assigned. If not, do the following:

Un-shut MgmtEth0/RP0/CPU0/0
Configure: ipv4 address dhcp
Exit and save the config

You should now get an IP address on the MgmtEth0/RP0/CPU0/0 interface. Verify connectivity (from the xrv9k-XX router console): 1) ping the MgmtEth0/RP0/CPU0/0 interface IP 2) ping 10.30.XX.1 3) ping the gig1 IP address of csr-XX.

From your PC (not from Openstack server):

ping csr-XX gig1 IP ping xrv9k-XX MgmtEth0/RP0/CPU0/0

The pings from your PC should fail, as there is no route to 10.30.XX.0/24 from your PC (via the lab FW). You may check the routing table on your PC (netstat -nr, ip route, or route print, etc.). Floating IP using Q.router:


13.2. Create floating IP address using CLI

The q-router, which is not discussed in depth in this lab, has two interfaces: the first is connected to the tenant-net-XX network; the second is connected to mgmt-net-XX, which has connectivity to the external network via the server's vNIC. The q-router provides NAT functionality, with tenant-net-XX as the NAT inside interface and mgmt-net-XX as the NAT outside interface. A floating IP assigns a static NAT mapping between IP addresses on tenant-net-XX and mgmt-net-XX.

$ neutron router-list //Observe different routers $ nova floating-ip-pool-list $ nova floating-ip-create mgmt-net-XX $ nova floating-ip-list Sample output:

13.3. Assign floating IP with CLI

In this step you will map floating IP to fixed IP on csr-XX (Syntax: $ nova floating-ip-associate {VM name or ID} {floating IP addr}) Assign floating IP to csr-XX $ nova list // pick floating IP addr from this output

$ nova floating-ip-associate csr-XX 172.30.XX.Z

$ nova list Example:

[user84@merman -8 ~(keystone_user84)]$ nova floating-ip-pool-list +-------------+ | name | +-------------+ | mgmt-net-84 | | mgmt-net-81 | | mgmt-net-82 | | mgmt-net-83 | +-------------+ [user84@merman -8 ~(keystone_user84)]$ nova floating-ip-create mgmt-net-84 +--------------------------------------+-------------+-----------+----------+-------------+ | Id | IP | Server Id | Fixed IP | Pool | +--------------------------------------+-------------+-----------+----------+-------------+ | 1d96acea-6983-4711-9cf7-86516fcf7172 | 172.30.84.2 | - | - | mgmt-net-84 | +--------------------------------------+-------------+-----------+----------+-------------+ [user84@merman -8 ~(keystone_user84)]$ nova floating-ip-list +--------------------------------------+-------------+--------------------------------------+------------+-------------+ | Id | IP | Server Id | Fixed IP | Pool | +--------------------------------------+-------------+--------------------------------------+------------+-------------+ | 1d96acea-6983-4711-9cf7-86516fcf7172 | 172.30.84.2 | - | - | mgmt-net-84 | +--------------------------------------+-------------+--------------------------------------+------------+-------------+ [user84@merman -8 ~(keystone_user84)]$
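When scripting the next step, the unassociated address can be picked out of the `nova floating-ip-list` table: it is the row whose Fixed IP column is still `-`. A sketch against the sample row above (in the lab you would pipe the live command output instead of the canned string):

```shell
#!/bin/sh
# Pick a floating IP that is not yet associated (Fixed IP column == "-").
# The canned line is the data row from the sample output above.
row='| 1d96acea-6983-4711-9cf7-86516fcf7172 | 172.30.84.2 | - | - | mgmt-net-84 |'

free_ip=$(printf '%s\n' "$row" |
    awk -F'|' '{gsub(/ /, "")} $5 == "-" { print $3 }')

echo "$free_ip"
```

The printed address is what you feed to `nova floating-ip-associate` in the next step.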


13.4. Create floating IP address from Horizon dashboard

Log in to the Horizon dashboard (http://172.31.56.23X, userXX/cisco). Go to: Project → Compute → Access&Security → Floating IP → Allocate IP to project. Then: Pool → mgmt-net-XX → Allocate IP. Sample output:

$ nova floating-ip-associate csr-84 172.30.84.2 [user84@merman -8 ~(keystone_user84)]$ nova floating-ip-list +--------------------------------------+-------------+--------------------------------------+------------+-------------+ | Id | IP | Server Id | Fixed IP | Pool | +--------------------------------------+-------------+--------------------------------------+------------+-------------+ | 1d96acea-6983-4711-9cf7-86516fcf7172 | 172.30.84.2 | c35ae7a9-2336-43f4-8275-791982cb4bd2 | 10.30.84.4 | mgmt-net-84 | +--------------------------------------+-------------+--------------------------------------+------------+-------------+ [user84@merman -8 ~(keystone_user84)]$


13.5. Assign floating IP from Horizon dashboard

Assign the newly created floating IP to xrv9k-XX. Go to: Project → Compute → Access&Security → Floating IP. Then: select the floating IP → Associate.


The above steps should create and assign floating IPs to both csr-XX and xrv9k-XX in tenant-net-XX segment. Now the routers should be reachable from your PC.



14. Telnet access

In this section, you will telnet to csr-XX and xrv9k-XX from your PC.

14.1. Config router to accept telnet

From router console, config csr-XX as below:

enable password cisco
!
line vty 0 4
 password cisco
 login
!
ip route 0.0.0.0 0.0.0.0 10.30.XX.1
!

14.2. telnet into csr-XX

Check connectivity to the csr-XX OOB IP. The floating IP of csr-XX is the NAT outside address, so use the floating IP here.

$ nova list //pick the floating IP associated with tenant-net-XX

From your PC: ping {csr-XX gig-1 floating IP}
From your PC: telnet {csr-XX gig-1 floating IP}

Sample output:

[user24@almas -2 ~(keystone_user24)]$ nova list +--------------------------------------+----------+--------+------------+-------------+--------------------------------------------------------------------------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+----------+--------+------------+-------------+--------------------------------------------------------------------------------------------------------+ | 4711bc14-9427-4da7-982b-4c6e05280437 | csr-24 | ACTIVE | - | Running | b2b-net-24=1.1.1.3; tenant-net-24=10.30.24.7, 172.30.24.3 | | c0713685-db8f-43c0-9b66-8887cb34b78d | xrv9k-24 | ACTIVE | - | Running | mgmt-other=221.1.1.10; b2b-net-24=1.1.1.4; mgmt-host=221.1.2.10; tenant-net-24=10.30.24.8, 172.30.24.4 | +--------------------------------------+----------+--------+------------+-------------+--------------------------------------------------------------------------------------------------------+ [user24@almas -2 ~(keystone_user24)]$


14.3. Config router to accept telnet

From the router console, config xrv9k-XX as below:

telnet ipv4 server max-servers 10
!
router static
 address-family ipv4 unicast
  0.0.0.0/0 10.30.XX.1
!

14.4. telnet into xrv9k-XX

Check connectivity to the xrv9k-XX OOB IP. The floating IP of xrv9k-XX is the NAT outside address, so use the floating IP here.

$ nova list //pick the floating IP (the 172.30.XX.x address) associated with tenant-net-XX

[user84@merman -8 ~(keystone_user84)]$ nova list +--------------------------------------+----------+--------+------------+-------------+------------------------------------------------------------------------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+----------+--------+------------+-------------+------------------------------------------------------------------------------------------------------+ | c35ae7a9-2336-43f4-8275-791982cb4bd2 | csr-84 | ACTIVE | - | Running | b2b-net-84=1.1.1.2; tenant-net-84=10.30.84.4, 172.30.84.2 | | 2fbb9688-9290-4430-b304-555b0dceb844 | xrv9k-84 | ACTIVE | - | Running | tenant-net-84=10.30.84.3, 172.30.84.4; b2b-net-84=1.1.1.1; mgmt-other=221.8.1.7; mgmt-host=221.8.2.7 | +--------------------------------------+----------+--------+------------+-------------+------------------------------------------------------------------------------------------------------+ [user84@merman -8 ~(keystone_user84)]$ Note that the prompt below is from my laptop (not Openstack server): GNAGANAB-M-J0A4:trash gnaganab$ ping 172.30.84.2 PING 172.30.84.2 (172.30.84.2): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 64 bytes from 172.30.84.2: icmp_seq=2 ttl=253 time=25.191 ms 64 bytes from 172.30.84.2: icmp_seq=3 ttl=253 time=25.318 ms 64 bytes from 172.30.84.2: icmp_seq=4 ttl=253 time=26.551 ms 64 bytes from 172.30.84.2: icmp_seq=5 ttl=253 time=26.107 ms ^C GNAGANAB-M-J0A4:trash gnaganab$ telnet 172.30.84.2 Trying 172.30.84.2... Connected to iqrams1-us-wan-gw1-gig0-0-10.cisco.com. Escape character is '^]'. User Access Verification Password: Kerberos: No default realm defined for Kerberos! host-10-30-84-4>en Password: host-10-30-84-4#


From your PC: ping {xrv9k-XX mgmt-eth-0 floating IP} From your PC: telnet {xrv9k-XX mgmt-eth-0 floating IP} Sample output:


15. Orchestrating VM launching using HEAT templates

Heat is the OpenStack orchestration service. It orchestrates the lifecycle of infrastructure and applications in OpenStack clouds. Heat uses templates, text files written in human-friendly YAML syntax, to describe cloud infrastructure and applications and to predefine the long list of parameters that make them up. Heat templates are written in the following structure:

heat_template_version: 2015-04-30  # for Kilo
description:       # a description of the template
parameter_groups:  # a declaration of input parameter groups and order

[user84@merman -8 ~(keystone_user84)]$ nova list +--------------------------------------+----------+--------+------------+-------------+------------------------------------------------------------------------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+----------+--------+------------+-------------+------------------------------------------------------------------------------------------------------+ | c35ae7a9-2336-43f4-8275-791982cb4bd2 | csr-84 | ACTIVE | - | Running | b2b-net-84=1.1.1.2; tenant-net-84=10.30.84.4, 172.30.84.2 | | 2fbb9688-9290-4430-b304-555b0dceb844 | xrv9k-84 | ACTIVE | - | Running | tenant-net-84=10.30.84.3, 172.30.84.4; b2b-net-84=1.1.1.1; mgmt-other=221.8.1.7; mgmt-host=221.8.2.7 | +--------------------------------------+----------+--------+------------+-------------+------------------------------------------------------------------------------------------------------+ [user84@merman -8 ~(keystone_user84)]$ From laptop prompt: GNAGANAB-M-J0A4:trash gnaganab$ telnet 172.30.84.4 Trying 172.30.84.4... Connected to iqrams1-us-sw1.cisco.com. Escape character is '^]'. User Access Verification Username: cisco Password: RP/0/RP0/CPU0:ios#


parameters:  # declaration of input parameters
resources:   # declaration of template resources
outputs:     # declaration of output parameters

Cisco Live Berlin 2016

See the HOT specification: http://docs.openstack.org/developer/heat/template_guide/hot_spec.html

In this section we use simple templates to describe the parameters for the virtual devices, CSR1000v and XRv9000, then use the Heat CLI and the Horizon dashboard to instantiate router instances. Because all the parameters of the "nova boot" CLI are defined in the Heat templates, launching becomes simpler.
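As a concrete illustration of this structure, a minimal HOT template for a single server might look like the sketch below. This is not the lab's actual csr1kv-XX.yml; the resource name, parameter defaults, and network name are illustrative placeholders:

```yaml
heat_template_version: 2015-04-30

description: Minimal single-VM template (illustrative only)

parameters:
  flavor_name:
    type: string
    default: csr-flav-XX
  image_name:
    type: string
    default: csr-img-XX

resources:
  csr_server:
    type: OS::Nova::Server
    properties:
      flavor: { get_param: flavor_name }
      image: { get_param: image_name }
      networks:
        - network: tenant-net-XX

outputs:
  server_ip:
    description: First IP assigned to the instance
    value: { get_attr: [csr_server, first_address] }
```

OS::Nova::Server is the standard Heat resource type for a Nova instance; the lab templates additionally carry the extra NICs and block-device mappings seen in the earlier `nova boot` commands.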

15.1. Kill csr-XX and xrv9k-XX instances

$ nova list $ cinder list $ nova delete csr-XX $ nova list $ nova delete xrv9k-XX $ nova list $ cinder list

15.2. Verify Heat template of csr-XX

We saved the csr Heat template at: /var/lib/libvirt/heat-templates/

$ sudo more /var/lib/libvirt/heat-templates/csr1kv-XX.yml

Make sure the references (names of networks, flavor, etc.) made in the Heat template are correct.

15.3. Instantiate csr-XX by creating a Heat stack

$ heat stack-create -f /var/lib/libvirt/heat-templates/csr1kv-XX.yml csr-XX
$ heat stack-list
$ heat stack-show csr-XX
$ nova list

You will find csr-XX with a Heat-added extension to its name.


15.4. Verify Heat template of xrv9k-XX

We saved the xrv9k Heat template at: /var/lib/libvirt/heat-templates/

$ sudo more /var/lib/libvirt/heat-templates/xrv9k-XX.yml

Make sure the references (names of networks, flavor, etc.) made in the Heat template are correct.

15.5. Instantiate xrv9k-XX by creating a Heat stack

$ heat stack-create -f /var/lib/libvirt/heat-templates/xrv9k-XX.yml xrv9k-XX
$ heat stack-list
$ heat stack-show xrv9k-XX
$ nova list
$ cinder list

You will find xrv9k-XX with a Heat-added extension to its name.

15.6. Clean up

$ heat stack-list
$ heat stack-delete csr-XX
$ heat stack-delete xrv9k-XX
$ nova list


16. Config drive

In the exercises above, the routers booted without any config (except the DHCP IP and static route). It is possible to boot a router with a predefined configuration: OpenStack can write metadata to a special configuration drive, and the router instance mounts this drive and reads the data from it at boot. One use case for this feature is booting a router instance with some interfaces un-shut and pre-determined IP addresses assigned where no DHCP service is available; another is supplying IPsec config for a VM reached across the Internet. The objective of this section is to boot csr-XX and xrv9k-XX with a predefined configuration.

16.1. Kill csr-XX and xrv9k-XX instances


$ nova list $ cinder list $ nova delete csr-XX $ nova list $ nova delete xrv9k-XX $ nova list $ cinder list

16.2. Check the predefined config for csr-XX

We saved a sample config at: /var/lib/libvirt/cfg-drive/ Study the preconfig: $ sudo more /var/lib/libvirt/cfg-drive/csr-cfg-XX.txt

16.3. Launch csr-XX with “config drive” option

$ neutron net-list
$ nova flavor-list
$ glance image-list
$ nova boot csr-XX --flavor csr-flav-XX --image csr-img-XX \
  --nic net-id={tenant-net-XX id} \
  --nic net-id={b2b-net-XX id} \
  --config-drive=true \
  --file iosxe_config.txt=/var/lib/libvirt/cfg-drive/csr-cfg-XX.txt
$ nova list
$ nova show csr-XX
$ nova get-vnc-console csr-XX novnc // you may access the console and monitor the boot log here
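Copying UUIDs by hand is error-prone; the IDs in the boot command can also be captured with a small helper. A sketch, using a canned `neutron net-list`-style row (the UUID below is taken from the earlier sample output purely as test data; in the lab you would pipe the live `neutron net-list` output into the function):

```shell
#!/bin/sh
# uuid_for NAME : print the UUID column of the row whose name column matches.
# Reads `neutron net-list`-style table rows on stdin.
uuid_for() {
    awk -F'|' -v name="$1" '{gsub(/ /, "")} $3 == name { print $2 }'
}

# Canned sample row standing in for real `neutron net-list` output.
sample='| e46f10d4-a114-43c9-ab81-6422f9c5dd05 | tenant-net-24 | ... |'

tenant_net=$(printf '%s\n' "$sample" | uuid_for tenant-net-24)
echo "$tenant_net"
```

In the lab: `tenant_net=$(neutron net-list | uuid_for tenant-net-XX)`, then pass `--nic net-id=$tenant_net` to the boot command.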

16.4. Associate floating IP

$ nova floating-ip-list
$ nova floating-ip-associate csr-XX {any available floating IP from floating-ip-list}
$ nova floating-ip-list
$ nova list //floating IP should be associated on tenant-net-XX
$ nova show csr-XX //floating IP should be associated on tenant-net-XX

16.5. Log in to csr-XX from your PC

From your PC: telnet {floating IP associated to csr-XX}


16.6. Verify router config on csr-XX

Compare the config on csr-XX with the config-drive file (/var/lib/libvirt/cfg-drive/csr-cfg-XX.txt).

16.7. Check the predefined config for xrv9k-XX

We saved a sample config at: /var/lib/libvirt/cfg-drive/ Study the preconfig: $ sudo more /var/lib/libvirt/cfg-drive/xrv-cfg-XX.txt

16.8. Launch xrv9k-XX with “config drive” option

$ neutron net-list
$ nova flavor-list
$ glance image-list
$ cinder list
$ nova boot xrv9k-XX --flavor xrv9k-flav-XX \
  --nic net-id={tenant-net-id} \
  --nic net-id={mgmt-other-net-id} \
  --nic net-id={mgmt-host-net-id} \
  --nic net-id={b2b-net-XX id} \
  --block-device id={glance ID of xrv9k ISO image},source=image,dest=volume,bus=ide,device=/dev/hdc,size=1,type=cdrom,bootindex=1 \
  --block-device source=volume,id={cinder ID of xrv9k disk volume},dest=volume,size=55,bootindex=0 \
  --config-drive=true --user-data /var/lib/libvirt/cfg-drive/xrv9k-cfg-XX.txt \
  --file /root/iosxr_config.txt=/var/lib/libvirt/cfg-drive/xrv9k-cfg-XX.txt

$ nova list
$ nova show xrv9k-XX
$ nova get-vnc-console xrv9k-XX novnc // monitor console

16.9. Associate floating IP

$ nova floating-ip-list
$ nova floating-ip-associate xrv9k-XX {any available IP address from the floating-ip-list output above}
$ nova floating-ip-list

16.10. Log in to xrv9k-XX from your PC and config gig0/0/0/0

From your PC: telnet {floating IP associated to xrv9k-XX}


16.11. Verify router config on xrv9k-XX

Compare the config on xrv9k-XX with the config-drive file (/var/lib/libvirt/cfg-drive/xrv-cfg-XX.txt).

DHCP is not available on the gig interfaces of the xrv9k, so you need to configure the address manually. Find the Nova-assigned IP address and use the same.

16.12. Configure gig0/0/0/0

$ nova list //find the IP address assignment on b2b-net-XX

From the router config prompt:
interface gig0/0/0/0
 ipv4 address 1.1.1.Z/24

Make sure you can ping csr-XX from xrv9k-XX on b2b-net-XX.

• Ping each other router’s b2b-net-XX IP address. • Ensure that OSPF adjacency is up • Is BGP adjacency up? If not, find out which router is dropping the packets.


17. Troubleshooting packet drops

In our topology, csr-XX and xrv9k-XX are connected by b2b-net-XX. We may draw this connection as a single segment, but multiple sub-segments make up b2b-net-XX; to trace packets on it we need to trace multiple segments and devices. The following diagrams show the transit segments and devices. The end-to-end path of b2b-net-XX transits two KVM (Linux) bridges and one Open vSwitch (OVS) switch. Refer to https://www.rdoproject.org/networking/networking-in-too-much-detail/ for more info. After completion of the above step:

• Ping between the routers to b2b-net-XX interface should be successful. • OSPF adjacency should be up • Routers should learn each other’s loopback address • Ping to loopback address of the other router should fail.

Make sure the above observations before proceeding. As part of troubleshooting we will do the following:

• From csr-XX ping xrv9k-XX b2b-net-XX interface address • Monitor successful packet flow at 6 locations • From csr-XX ping xrv9k-XX loopback-0 interface address • Monitor successful packet flow at 6 locations • Make observations on where the packets are dropped

17.1. Topology


17.2. Find port-IDs

Find port-ID of each sub-segment on the transit devices and note them down for use in later steps.

17.2.1. Find port-IDs

$ neutron port-list | grep 1.1.1 // note the port-ID connected to csr-XX
$ brctl show // note the two associated ports connected to the qbr (Q.bridge); the qbr name contains the first 11 characters of the port-ID
$ sudo ovs-vsctl show | grep Bridge // note that there are multiple OVS switches
$ sudo ovs-vsctl show // note the port-ID connected to the BR-INT switch/bridge

$ neutron port-list | grep 1.1.1 // note the port-ID connected to xrv9k-XX

$ brctl show // note the two associated ports connected to the qbr (Q.bridge); the qbr name contains the first 11 characters of the port-ID
$ sudo ovs-vsctl show | grep Bridge // note that there are multiple OVS switches


$ sudo ovs-vsctl show // Note the port-ID connected to BR-INT switch/bridge. Sample output:


[user84@merman -8 ~(keystone_user84)]$ neutron port-list | grep 1.1.1 | 287ded09-e75e-4efd-a993-b71a685d0e79 | | fa:16:3e:a7:3f:33 | {"subnet_id": "e10cc82e-ce00-4d03-8bc5-781d848957d6", "ip_address": "1.1.1.4"} | | 987d44dd-66fd-449b-abe2-8e51b41caf8e | | fa:16:3e:a1:80:dd | {"subnet_id": "e10cc82e-ce00-4d03-8bc5-781d848957d6", "ip_address": "1.1.1.3"} |

[user84@merman -8 ~(keystone_user84)]$ brctl show bridge name bridge id STP enabled interfaces qbr287ded09-e7 8000.5e216428ba1f no qvb287ded09-e7 tap287ded09-e7 qbr4ef1ae09-a2 8000.f6ee12687f64 no qvb4ef1ae09-a2 tap4ef1ae09-a2 qbr86b144c7-ce 8000.6ad4255047f3 no qvb86b144c7-ce tap86b144c7-ce qbr91729cf7-5e 8000.1e0febf406ad no qvb91729cf7-5e tap91729cf7-5e qbr987d44dd-66 8000.0e95752ef548 no qvb987d44dd-66 tap987d44dd-66 qbra232f01c-f4 8000.3af40f888081 no qvba232f01c-f4 tapa232f01c-f4 [user84@merman -8 ~(keystone_user84)]$ sudo ovs-vsctl show | grep Bridge Bridge "br-ex-81" Bridge br-int Bridge "br-ex-82" Bridge "br-ex-84" Bridge "br-ex-83" Bridge br-tun [user84@merman -8 ~(keystone_user84)]$ [user84@merman -8 ~(keystone_user84)]$ sudo ovs-vsctl show a3dba6fb-1327-47ab-8eec-072c73d89a3a : Bridge br-int fail_mode: secure Port "qg-d4dd09e8-75" tag: 8 Interface "qg-d4dd09e8-75" type: internal

: Port "qvo287ded09-e7" tag: 11 Interface "qvo287ded09-e7" Port "qvo987d44dd-66" tag: 11 Interface "qvo987d44dd-66" : ovs_version: "2.4.0" [user84@merman -8 ~(keystone_user84)]$
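The naming convention visible in the sample output can also be derived mechanically: Nova builds the qbr bridge name, and its qvb/qvo/tap legs, from a fixed-length prefix of the Neutron port UUID. A sketch (prefix length inferred from the sample output above):

```shell
#!/bin/sh
# Derive the per-port Linux device names from a Neutron port UUID.
# The qbr/qvb/qvo/tap names use the first 11 characters of the UUID
# (10 hex digits plus one hyphen), as seen in the brctl output above.
port_id='287ded09-e75e-4efd-a993-b71a685d0e79'   # csr-XX side, from the sample

stub=$(printf '%.11s' "$port_id")

for prefix in qbr qvb qvo tap; do
    echo "$prefix$stub"
done
```

This reproduces qbr287ded09-e7, qvb287ded09-e7, qvo287ded09-e7, and tap287ded09-e7 from the sample, which are exactly the interface names you will hand to tcpdump in the next steps.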


17.3. Monitoring successful ICMP packet flow from csr-XX port to xrv9k-XX

In the steps below, we will monitor packets port by port, from the csr-XX gig-2 interface to the xrv9k-XX gig0/0/0/0 interface. First, check the left Q.bridge port facing csr-XX. Next, the left Q.bridge port facing the BR-INT OVS switch. Next, the BR-INT port facing the left Q.bridge. Next, the BR-INT port facing the right Q.bridge. Next, the right Q.bridge port facing BR-INT. Finally, the right Q.bridge port facing xrv9k-XX.

Mark the port-IDs of the above ports before proceeding to the following steps.

17.3.1. In a new window monitor traffic on left-Qbridge port facing csr-XX

$ sudo tcpdump -i {port-ID of Qbr going to csr-XX}

17.3.1.1. From csr-XX router start pings to xrv9k-XX

$ ping {b2b-net-XX addr on xrv9k-XX} repeat 2 You should see this traffic in tcpdump output. If you don’t see it verify port-ID used in tcpdump command.

17.3.2. In another window monitor traffic on left-Qbridge port facing BR-INT

$ sudo tcpdump -i {port-ID of left-Qbr facing BR-INT}

17.3.2.1. From csr-XX router start pings to xrv9k-XX

$ ping {b2b-net-XX addr on xrv9k-XX} repeat 2


You should see this traffic in tcpdump output. If you don’t see it verify port-ID used in tcpdump command.

17.3.3. In another window monitor traffic on BR-INT port facing left-Qbridge

$ sudo tcpdump -i {port-ID of BR-INT facing left-Qbr}

17.3.3.1. From csr-XX router start pings to xrv9k-XX

$ ping {b2b-net-XX addr on xrv9k-XX} repeat 2 You should see this traffic in tcpdump output. If you don’t see it verify port-ID used in tcpdump command.

17.3.4. In another window monitor traffic on BR-INT port facing right-Qbridge

$ sudo tcpdump -i {port-ID of BR-INT facing right-Qbr}

17.3.4.1. From csr-XX router start pings to xrv9k-XX

$ ping {b2b-net-XX addr on xrv9k-XX} repeat 2 You should see this traffic in tcpdump output. If you don’t see it verify port-ID used in tcpdump command.

17.3.5. In another window monitor traffic on port on right-Qbridge facing BR-INT

$ sudo tcpdump -i {port-ID of port on right-Qbr facing BR-INT}

17.3.5.1. From csr-XX router start pings to xrv9k-XX

$ ping {b2b-net-XX addr on xrv9k-XX} repeat 2 You should see this traffic in tcpdump output. If you don’t see it verify port-ID used in tcpdump command.

17.3.6. In another window monitor traffic on right-Qbridge port facing xrv9k-XX

$ sudo tcpdump -i {port-ID of port on right-Qbr facing xrv9k-XX}

17.3.6.1. From csr-XX router start pings to xrv9k-XX

$ ping {b2b-net-XX addr on xrv9k-XX} repeat 2 You should see this traffic in tcpdump output. If you don’t see it verify port-ID used in tcpdump command.

Other useful commands: $ brctl showmacs {bridge ID} $ brctl --help


$ sudo ovs-vsctl --help

Sample output:


[user84@merman -8 ~(keystone_user84)]$ sudo tcpdump -i tap287ded09-e7 tcpdump: WARNING: tap287ded09-e7: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on tap287ded09-e7, link-type EN10MB (Ethernet), capture size 65535 bytes 17:31:16.469999 IP 1.1.1.3 > ospf-all.mcast.net: OSPFv2, Hello, length 60 17:31:18.904091 IP 1.1.1.4 > 1.1.1.3: ICMP echo request, id 2, seq 0, length 80 17:31:18.905599 IP 1.1.1.3 > 1.1.1.4: ICMP echo reply, id 2, seq 0, length 80 17:31:18.907146 IP 1.1.1.4 > 1.1.1.3: ICMP echo request, id 2, seq 1, length 80 [user84@merman -8 ~(keystone_user84)]$ sudo tcpdump -i qvb287ded09-e7 tcpdump: WARNING: qvb287ded09-e7: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on qvb287ded09-e7, link-type EN10MB (Ethernet), capture size 65535 bytes 17:28:11.942163 IP 1.1.1.4 > 1.1.1.3: ICMP echo request, id 1, seq 6629, length 80 17:28:11.942719 IP 1.1.1.3 > 1.1.1.4: ICMP echo reply, id 1, seq 6629, length 80 17:28:11.944172 IP 1.1.1.4 > 1.1.1.3: ICMP echo request, id 1, seq 6630, length 80 [user84@merman -8 ~(keystone_user84)]$ sudo tcpdump -i qvo287ded09-e7 tcpdump: WARNING: qvo287ded09-e7: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on qvo287ded09-e7, link-type EN10MB (Ethernet), capture size 65535 bytes 17:34:10.826924 IP 1.1.1.4 > 1.1.1.3: ICMP echo request, id 3, seq 0, length 80 17:34:10.828341 IP 1.1.1.3 > 1.1.1.4: ICMP echo reply, id 3, seq 0, length 80 17:34:10.830145 IP 1.1.1.4 > 1.1.1.3: ICMP echo request, id 3, seq 1, length 80 [user84@merman -8 ~(keystone_user84)]$ sudo tcpdump -i qvo987d44dd-66 tcpdump: WARNING: qvo987d44dd-66: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on qvo987d44dd-66, link-type EN10MB (Ethernet), capture size 65535 bytes 17:36:02.776061 IP 1.1.1.4 > 1.1.1.3: ICMP echo 
request, id 4, seq 0, length 80 17:36:02.777251 IP 1.1.1.3 > 1.1.1.4: ICMP echo reply, id 4, seq 0, length 80 17:36:02.779223 IP 1.1.1.4 > 1.1.1.3: ICMP echo request, id 4, seq 1, length 80 [user84@merman -8 ~(keystone_user84)]$ sudo tcpdump -i qvb987d44dd-66 tcpdump: WARNING: qvb987d44dd-66: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on qvb987d44dd-66, link-type EN10MB (Ethernet), capture size 65535 bytes 17:37:03.458969 IP 1.1.1.4 > 1.1.1.3: ICMP echo request, id 5, seq 0, length 80 17:37:03.460069 IP 1.1.1.3 > 1.1.1.4: ICMP echo reply, id 5, seq 0, length 80 17:37:03.462166 IP 1.1.1.4 > 1.1.1.3: ICMP echo request, id 5, seq 1, length 80 [user84@merman -8 ~(keystone_user84)]$ sudo tcpdump -i tap987d44dd-66 tcpdump: WARNING: tap987d44dd-66: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on tap987d44dd-66, link-type EN10MB (Ethernet), capture size 65535


17.4. Locating ICMP packet drops. Observe both echo-request and echo-reply.

17.5. From csr-XX, ping the loopback-0 address of xrv9k-XX

# ping 10.0.XX.3

17.6. Monitor left-Qbridge port facing csr-XX (step-1)

$ sudo tcpdump -i {port ID of left-Qbr facing csr-XX}

17.7. Monitor left-Qbridge port facing BR-INT (step-2)

$ sudo tcpdump -i {port ID of left-Qbr facing BR-INT}

17.8. Monitor BR-INT port facing left-Qbridge (step-3)

$ sudo tcpdump -i {port ID of BR-INT facing left-Qbr}

17.9. Monitor BR-INT port facing right-Qbridge (step-4)

$ sudo tcpdump -i {port ID of BR-INT facing right-Qbr}

17.10. Monitor right-Qbridge port facing BR-INT (step-5)

$ sudo tcpdump -i {port ID of right-Qbr facing BR-INT}

17.11. Monitor right-Qbridge port facing xrv9k-XX (step-6)

$ sudo tcpdump -i {port ID of right-Qbr facing xrv9k-XX}

Observe both echo-request and echo-reply. Determine the location of the packet drop.
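Finding the interface name for each tcpdump is easier once you know that Nova's hybrid VIF plugging derives the tap/qvb/qvo names from the first 11 characters of the Neutron port UUID. A minimal sketch (the UUID below is illustrative; substitute one from your own `neutron port-list` output):

```shell
# Hypothetical port UUID for illustration only -- substitute the UUID
# shown in your own `neutron port-list` output.
PORT_UUID="287ded09-e7ab-4c29-a0b6-3f41c1d2e5aa"

# Nova names each plumbing interface after the first 11 characters
# of the Neutron port UUID.
PREFIX=$(echo "$PORT_UUID" | cut -c1-11)

echo "VM-facing tap device:       tap${PREFIX}"
echo "Qbridge side of veth pair:  qvb${PREFIX}"
echo "br-int side of veth pair:   qvo${PREFIX}"
```

The Qbridge itself is named qbr plus the same prefix; `brctl show` on the compute node lists which tap/qvb pair each bridge holds.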


18. Allowing address-pairs on ports

In the previous section you should have observed that the right-QBridge drops echo-reply packets. Echo-reply packets carry xrv9k-XX's loopback-0 address as the source address. The KVM QBridge won't forward packets whose source address is outside the subnet range assigned to the segment's network (b2b-net-XX in the above section). This is Neutron's anti-spoofing security mechanism. We need to explicitly allow source addresses outside the subnet range.

18.1. Allow all address-pairs on xrv9k-XX port

Syntax:

$ neutron port-update PORT_UUID --allowed-address-pairs type=dict list=true mac_address=MAC_ADDRESS,ip_address=IP_CIDR


Find port-ID of xrv9k-XX on b2b-net-XX:

$ neutron port-list | grep 1.1.1
$ neutron port-update <port-ID connected to xrv9k-XX> --allowed-address-pairs type=dict list=true ip_address=0.0.0.0/0
$ neutron port-show <port-ID>

From csr-XX, ping xrv9k-XX loopback-0 IP address.

• Is the ping successful?
• Monitor packets on right-QBridge and other locations using tcpdump.

From xrv9k-XX, ping csr-XX loopback-0 IP address.

• Is the ping successful?
• Monitor packets on right-QBridge (both xrv9k-XX facing and BR-INT facing) and other locations using tcpdump.

18.2. Allow all address-pairs on csr-XX port

Find port-ID of csr-XX on b2b-net-XX:

$ neutron port-list | grep 1.1.1
$ neutron port-update <port-ID connected to csr-XX> --allowed-address-pairs type=dict list=true ip_address=0.0.0.0/0
$ neutron port-show <port-ID>

From xrv9k-XX, ping csr-XX loopback-0 IP address.

• Is the ping successful?
• Monitor packets on right-QBridge (both xrv9k-XX facing and BR-INT facing) and other locations using tcpdump.

With both ports updated to allow all address pairs (0.0.0.0/0), pings to the other router's loopback address should succeed and the iBGP session should come up.

You have completed this lab. Congratulations!

Post-lab survey has 3 questions and may take about a minute. Please take post-lab survey HERE


19. Acknowledgements

• http://docs.openstack.org/



Appendix

I. Post-lab Survey

Post-lab survey has 3 questions and may take about a minute. Please take post-lab survey HERE

II. Flavor

A flavor consists of the following parameters:

Flavor ID
    Unique ID (integer or UUID) for the new flavor. If specifying 'auto', a UUID will be automatically generated.
Name
    Name for the new flavor.
VCPUs
    Number of virtual CPUs to use.
Memory MB
    Amount of RAM to use (in megabytes).
Root Disk GB
    Amount of disk space (in gigabytes) to use for the root (/) partition.
Ephemeral Disk GB
    Amount of disk space (in gigabytes) to use for the ephemeral partition. If unspecified, the value is 0 by default. Ephemeral disks offer machine local disk storage linked to the lifecycle of a VM instance. When a VM is terminated, all data on the ephemeral disk is lost. Ephemeral disks are not included in any snapshots.
Swap
    Amount of swap space (in megabytes) to use. If unspecified, the value is 0 by default.
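These parameters map directly onto `nova flavor-create`. A sketch that assembles the call from illustrative values (the flavor name and sizes are assumptions, not lab requirements); in the lab you would run the echoed command directly:

```shell
# Illustrative flavor parameters -- adjust to your own requirements.
FLAVOR_NAME="csr1kv.medium"   # Name
FLAVOR_ID="auto"              # Flavor ID: 'auto' generates a UUID
RAM_MB=4096                   # Memory MB
ROOT_GB=8                     # Root Disk GB
VCPUS=2                       # VCPUs

# nova flavor-create positional order is NAME ID RAM DISK VCPUS;
# ephemeral disk and swap default to 0 when omitted.
CMD="nova flavor-create $FLAVOR_NAME $FLAVOR_ID $RAM_MB $ROOT_GB $VCPUS"
echo "$CMD"
```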


III. Image

Following are arguments that you can use with the create and update commands to modify image properties.

--name NAME
    The name of the image.
--disk-format DISK_FORMAT
    The disk format of the image. Acceptable formats are ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso.
--container-format CONTAINER_FORMAT
    The container format of the image. Acceptable formats are ami, ari, aki, bare, docker, and ovf.
--owner TENANT_ID
    The tenant who should own the image.
--size SIZE
    The size of image data, in bytes.
--min-disk DISK_GB
    The minimum size of the disk needed to boot the image, in gigabytes.
--min-ram DISK_RAM
    The minimum amount of RAM needed to boot the image, in megabytes.
--location IMAGE_URL
    The URL where the data for this image resides. For example, if the image data is stored in swift, you could specify swift://account:[email protected]/container/obj.
--file FILE
    Local file that contains the disk image to be uploaded during the update. Alternatively, you can pass images to the client through stdin.
--checksum CHECKSUM
    Hash of image data to use for verification.
--copy-from IMAGE_URL
    Similar to --location in usage, but indicates that the image server should immediately copy the data and store it in its configured image store.
--is-public [True|False]
    Makes an image accessible for all the tenants (admin-only by default).
--is-protected [True|False]
    Prevents an image from being deleted.
--property KEY=VALUE
    Arbitrary property to associate with image. This option can be used multiple times.
--purge-props
    Deletes all image properties that are not explicitly set in the update request. Otherwise, those properties not referenced are preserved.
--human-readable
    Prints the image size in a human-friendly format.
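Combining the common arguments above, a hedged sketch of registering a qcow2 router image (the image name and file name are illustrative, not lab requirements); in the lab you would run the echoed command directly:

```shell
# Illustrative names -- substitute your own image file and image name.
IMAGE_NAME="csr1kv-image"
IMAGE_FILE="csr1000v.qcow2"

# qcow2 disk images are registered with the 'bare' container format.
CMD="glance image-create --name $IMAGE_NAME --disk-format qcow2 --container-format bare --is-public True --file $IMAGE_FILE"
echo "$CMD"
```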


IV. Cinder http://docs.openstack.org/user-guide/common/cli_manage_volumes.html


V. Neutron

Create a network for a given tenant.

Positional arguments:
NAME
    Name of network to create.

Optional arguments:
-h, --help
    Show this help message and exit.
--request-format {json}
    DEPRECATED! Only JSON request format is supported.
--tenant-id TENANT_ID
    The owner tenant ID.
--admin-state-down
    Set admin state up to false.
--shared
    Set the network as shared.
--provider:network_type <network_type>
    The physical mechanism by which the virtual network is implemented.
--provider:physical_network <physical_network_name>
    Name of the physical network over which the virtual network is implemented.
--provider:segmentation_id <segmentation_id>
    VLAN ID for VLAN networks or tunnel-id for GRE/VXLAN networks.
--vlan-transparent {True,False}
    Create a vlan transparent network.
--qos-policy QOS_POLICY
    Attach QoS policy ID or name to the resource.
--availability-zone-hint AVAILABILITY_ZONE
    Availability Zone for the network (requires availability zone extension, this option can be repeated).

Create a subnet for a given tenant.

Positional arguments:
NETWORK
    Network ID or name this subnet belongs to.
CIDR
    CIDR of subnet to create.

Optional arguments:
-h, --help
    Show this help message and exit.
--request-format {json}
    DEPRECATED! Only JSON request format is supported.
--tenant-id TENANT_ID
    The owner tenant ID.
--name NAME
    Name of this subnet.
--gateway GATEWAY_IP
    Gateway IP of this subnet.
--no-gateway
    No distribution of gateway.
--allocation-pool start=IP_ADDR,end=IP_ADDR
    Allocation pool IP addresses for this subnet (this option can be repeated).
--host-route destination=CIDR,nexthop=IP_ADDR
    Additional route (this option can be repeated).
--dns-nameserver DNS_NAMESERVER
    DNS name server for this subnet (this option can be repeated).
--disable-dhcp
    Disable DHCP for this subnet.
--enable-dhcp
    Enable DHCP for this subnet.
--ip-version {4,6}
    IP version to use, default is 4. Note that when subnetpool is specified, IP version is determined from the subnetpool and this option is ignored.
--ipv6-ra-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}
    IPv6 RA (Router Advertisement) mode.
--ipv6-address-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}
    IPv6 address mode.
--subnetpool SUBNETPOOL
    ID or name of subnetpool from which this subnet will obtain a CIDR.
--prefixlen PREFIX_LENGTH
    Prefix length for subnet allocation from subnetpool.
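Put together, creating the back-to-back network used earlier in this lab might look like the following sketch (the names and the 1.1.1.0/24 CIDR follow the lab's convention; the allocation-pool range is an assumption, and XX stands for your pod number). In the lab you would run the echoed commands directly:

```shell
POD="XX"   # replace with your pod number

# Create the network, then a subnet on it; subnet-create takes the
# network and CIDR as positional arguments.
NET_CMD="neutron net-create b2b-net-$POD"
SUBNET_CMD="neutron subnet-create --name b2b-subnet-$POD --allocation-pool start=1.1.1.3,end=1.1.1.10 b2b-net-$POD 1.1.1.0/24"

echo "$NET_CMD"
echo "$SUBNET_CMD"
```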


Create a port for a given tenant.

Positional arguments:
NETWORK
    Network ID or name this port belongs to.

Optional arguments:
-h, --help
    Show this help message and exit.
--request-format {json}
    DEPRECATED! Only JSON request format is supported.
--tenant-id TENANT_ID
    The owner tenant ID.
--name NAME
    Name of this port.
--fixed-ip subnet_id=SUBNET,ip_address=IP_ADDR
    Desired IP and/or subnet for this port: subnet_id=<name_or_id>,ip_address=<ip>. You can repeat this option.
--device-id DEVICE_ID
    Device ID of this port.
--device-owner DEVICE_OWNER
    Device owner of this port.
--admin-state-down
    Set admin state up to false.
--mac-address MAC_ADDRESS
    MAC address of this port.
--vnic-type <direct | direct-physical | macvtap | normal | baremetal>
    VNIC type for this port.
--binding-profile BINDING_PROFILE
    Custom data to be passed as binding:profile.
--security-group SECURITY_GROUP
    Security group associated with the port. You can repeat this option.
--no-security-groups
    Associate no security groups with the port.
--extra-dhcp-opt EXTRA_DHCP_OPTS
    Extra dhcp options to be assigned to this port: opt_name=<dhcp_option_name>,opt_value=<value>,ip_version={4,6}. You can repeat this option.
--qos-policy QOS_POLICY
    Attach QoS policy ID or name to the resource.
--allowed-address-pair ip_address=IP_ADDR[,mac_address=MAC_ADDR]
    Allowed address pair associated with the port. You can repeat this option.
--no-allowed-address-pairs
    Associate no allowed address pairs with the port.
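Note that `--allowed-address-pair` can be given at port-creation time, which avoids the separate `port-update` step from section 18. A sketch (the port name and fixed IP are illustrative; XX stands for your pod number); in the lab you would run the echoed command directly:

```shell
POD="XX"   # replace with your pod number

# Create the xrv9k-facing port with anti-spoofing relaxed up front,
# so echo-replies sourced from loopback-0 are not dropped.
CMD="neutron port-create --name xrv9k-b2b-port-$POD --fixed-ip ip_address=1.1.1.3 --allowed-address-pair ip_address=0.0.0.0/0 b2b-net-$POD"
echo "$CMD"
```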


VI. Nova

Boot a new server.

Positional arguments:
<name>
    Name for the new server.


Optional arguments:
--flavor <flavor>
    Name or ID of flavor (see 'nova flavor-list').
--image <image>
    Name or ID of image (see 'nova image-list').
--image-with <key=value>
    Image metadata property (see 'nova image-show').
--boot-volume <volume_id>
    Volume ID to boot from.
--snapshot <snapshot_id>
    Snapshot ID to boot from (will create a volume).
--min-count <number>
    Boot at least <number> servers (limited by quota).
--max-count <number>
    Boot up to <number> servers (limited by quota).
--meta <key=value>
    Record arbitrary key/value metadata to /meta_data.json on the metadata server. Can be specified multiple times.
--file <dst-path=src-path>
    Store arbitrary files from <src-path> locally to <dst-path> on the new server. Limited by the injected_files quota value.
--key-name <key-name>
    Key name of keypair that should be created earlier with the command keypair-add.
--user-data <user-data>
    User data file to pass to be exposed by the metadata server.
--availability-zone <availability-zone>
    The availability zone for server placement.
--security-groups <security-groups>
    Comma separated list of security group names.
--block-device-mapping <dev-name=mapping>
    Block device mapping in the format <dev-name>=<id>:<type>:<size(GB)>:<delete-on-terminate>.
--block-device key1=value1[,key2=value2...]
    Block device mapping with the keys: id=UUID (image_id, snapshot_id or volume_id only if using source image, snapshot or volume), source=source type (image, snapshot, volume or blank), dest=destination type of the block device (volume or local), bus=device's bus (e.g. uml, lxc, virtio, ...; if omitted, hypervisor driver chooses a suitable default, honoured only if device type is supplied), type=device type (e.g. disk, cdrom, ...; defaults to 'disk'), device=name of the device (e.g. vda, xda, ...; if omitted, hypervisor driver chooses a suitable device depending on selected bus; note the libvirt driver always uses default device names), size=size of the block device in MB (for swap) and in GB (for other formats) (if omitted, hypervisor driver calculates size), format=device will be formatted (e.g. swap, ntfs, ...; optional), bootindex=integer used for ordering the boot disks (for image backed instances it is equal to 0, for others it needs to be specified) and shutdown=shutdown behaviour (either preserve or remove; for local destination set to remove).
--swap <swap_size>
    Create and attach a local swap block device of <swap_size> MB.
--ephemeral size=<size>[,format=<format>]
    Create and attach a local ephemeral block device of <size> GB and format it to <format>.
--hint <key=value>
    Send arbitrary key/value pairs to the scheduler for custom use.
--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>
    Create a NIC on the server. Specify option multiple times to create multiple NICs. net-id: attach NIC to network with this UUID (either port-id or net-id must be provided), v4-fixed-ip: IPv4 fixed address for NIC (optional), v6-fixed-ip: IPv6 fixed address for NIC (optional), port-id: attach NIC to port with this UUID (either port-id or net-id must be provided).


--config-drive <value>
    Enable config drive.
--poll
    Report the new server boot progress until it completes.
--admin-pass <value>
    Admin password for the instance.
--access-ip-v4 <value>
    Alternative access IPv4 of the instance.
--access-ip-v6 <value>
    Alternative access IPv6 of the instance.
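Tying the common options together, a sketch of booting a router VM onto a pre-created port with a config drive (the flavor and image names are illustrative, and the port UUID is a placeholder; XX stands for your pod number). In the lab you would run the echoed command directly:

```shell
POD="XX"                                       # replace with your pod number
PORT_ID="<port-UUID from neutron port-list>"   # placeholder, not a real UUID

# --nic port-id attaches the instance to an existing Neutron port;
# --config-drive true exposes metadata via an attached config drive.
CMD="nova boot --flavor csr1kv.medium --image csr1kv-image --nic port-id=$PORT_ID --config-drive true --poll csr-$POD"
echo "$CMD"
```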

Nova Instance Provisioning Flow

Source: http://docs.openstack.org/icehouse/training-guides/content/operator-computer-node.html


The request flow for provisioning an instance goes like this:

1. The dashboard or CLI gets the user credentials and authenticates with the Identity Service via REST API.

The Identity Service authenticates the user with the user credentials, and then generates and sends back an auth-token which will be used for sending the request to other components through REST-call.


2. The dashboard or CLI converts the new instance request specified in launch instance or nova-boot form to a REST API request and sends it to nova-api.

3. nova-api receives the request and sends a request to the Identity Service for validation of the auth-token and access permission.

The Identity Service validates the token and sends updated authentication headers with roles and permissions.

4. nova-api checks for conflicts with nova-database.

nova-api creates initial database entry for a new instance.

5. nova-api sends the rpc.call request to nova-scheduler expecting to get updated instance entry with host ID specified.

6. nova-scheduler picks up the request from the queue.

7. nova-scheduler interacts with nova-database to find an appropriate host via filtering and weighing.

nova-scheduler returns the updated instance entry with the appropriate host ID after filtering and weighing.

nova-scheduler sends the rpc.cast request to nova-compute for launching an instance on the appropriate host.

8. nova-compute picks up the request from the queue.

9. nova-compute sends the rpc.call request to nova-conductor to fetch the instance information such as host ID and flavor (RAM, CPU, Disk).

10. nova-conductor picks up the request from the queue.

11. nova-conductor interacts with nova-database.

nova-conductor returns the instance information.

nova-compute picks up the instance information from the queue.

12. nova-compute performs the REST call by passing the auth-token to glance-api. Then, nova-compute uses the Image ID to retrieve the Image URI from the Image Service, and loads the image from the image storage.

13. glance-api validates the auth-token with keystone.

nova-compute gets the image metadata.

14. nova-compute performs the REST-call by passing the auth-token to Network API to allocate and configure the network so that the instance gets the IP address.

15. neutron-server validates the auth-token with keystone.

nova-compute retrieves the network info.


16. nova-compute performs the REST call by passing the auth-token to Volume API to attach volumes to the instance.

17. cinder-api validates the auth-token with keystone.

nova-compute retrieves the block storage info.

18. nova-compute generates data for the hypervisor driver and executes the request on the hypervisor (via libvirt or API).


END OF LAB