
Red Hat Enterprise Linux OpenStack Platform 7 VMware Integration Guide

Integrate VMware vSphere with Red Hat Enterprise Linux OpenStack Platform

OpenStack Team

Legal Notice

Copyright © 2016 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution-Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.

Java ® is a registered trademark of Oracle and/or its affiliates.

XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack ® Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries, and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract

A guide for integrating VMware vSphere and VMware NSX with Red Hat Enterprise Linux OpenStack Platform.


Table of Contents

PREFACE

CHAPTER 1. INTEGRATE WITH VMWARE VCENTER
1.1. ARCHITECTURE OVERVIEW
1.2. SYSTEM REQUIREMENTS
1.3. POST-DEPLOYMENT TASKS
1.4. USE HOST AGGREGATES
1.5. KNOWN ISSUES

CHAPTER 2. IMPORT VMWARE GUESTS
2.1. CONFIGURE THE V2V HOST

CHAPTER 3. INTEGRATE VMWARE NSX WITH OPENSTACK NETWORKING
3.1. SERVICE NODE CONCEPTS
3.2. DISABLE NEUTRON-OPENVSWITCH-AGENT
3.3. INSTALL AND ENABLE THE VMWARE NSX PLUGIN
3.4. CONFIGURE THE VMWARE NSX PLUGIN
3.5. CONFIGURE THE DHCP AGENT
3.6. CONFIGURE THE NETWORK NODE


PREFACE

This guide describes how to integrate RHEL OpenStack Platform with your existing VMware vSphere environment, and includes coverage of the following topics:

Integrating with the VMware vCenter server.

Importing VMware guests into Red Hat Enterprise Linux OpenStack Platform.

Integrating OpenStack Networking (neutron) with VMware NSX.


CHAPTER 1. INTEGRATE WITH VMWARE VCENTER

This chapter describes the steps required to add a VMware hypervisor to your RHEL OpenStack Platform environment.

Portions adapted from the OpenStack Configuration Reference. See Configuration Reference in Red Hat OpenStack Platform Licenses for Documentation.

1.1. ARCHITECTURE OVERVIEW

RHEL OpenStack Platform uses the Compute (nova) API to integrate with ESXi hypervisors. The VMware vCenter server is central to this integration, using the vSphere driver in your OpenStack environment to grant openstack-nova-compute access to resources hosted in vSphere, including hypervisors and datastores. You can then use the OpenStack Dashboard to schedule instances to ESXi hypervisors while still retaining vSphere features such as HA and DRS.

Note

Refer to Known Issues at the end of this guide for details on current integration issues.

The diagram below illustrates how the VMware vCenter driver allows integration between the RHEL OpenStack Platform and vSphere environments:


1.2. SYSTEM REQUIREMENTS

RHEL OpenStack Platform 4 or higher.

VMware vSphere 5.1 or higher

For DRS, only fully-automated placement is supported.

OpenStack integration requires a vCenter service account with the following minimum permissions. Apply the permissions to the Datacenter root object, and select the Propagate to Child Objects option.


The currently supported configuration is RHEL OpenStack Platform with vCenter, with networking provided by either Neutron/NSX or Neutron/Nuage. For more information on Neutron/Nuage, see https://access.redhat.com/articles/2172831

Datastore
    Allocate space
    Browse datastore
    Low level file operation
    Remove file

Extension
    Register extension

Folder
    Create folder

Host
    Configuration
        Maintenance
        Network configuration
        Storage partition configuration

Network
    Assign network

Resource
    Assign virtual machine to resource pool
    Migrate powered off virtual machine
    Migrate powered on virtual machine

Virtual Machine
    Configuration
        Add existing disk
        Add new disk
        Add or remove device
        Advanced
        Change CPU count
        Disk change tracking
        Host USB device
        Memory
        Raw device
        Remove disk
        Rename
        Swapfile placement
    Interaction
        Configure CD media
        Power Off
        Power On
        Reset
        Suspend
    Inventory
        Create from existing
        Create new
        Move
        Remove
        Unregister
    Provisioning
        Clone virtual machine
        Customize
    Snapshot management
        Create snapshot
        Remove snapshot

Sessions
    Validate session
    View and stop sessions

vApp
    Export
    Import

1.2.1. Packstack Deployment

Packstack supports integration with an existing vSphere deployment. Specify the options below to configure the vCenter driver:

CONFIG_VCENTER_HOST: The IP address or hostname of the VMware vCenter server.

CONFIG_VCENTER_USER: The VMware vCenter service account to use.

CONFIG_VCENTER_PASSWORD: The password for the VMware vCenter service account specified above.

CONFIG_VCENTER_CLUSTER_NAME: The name of the vSphere cluster that will host the instances created by Compute.

CONFIG_CINDER_BACKEND=vmdk: Configure Block Storage (cinder) to use VMDK as the storage backend.

CONFIG_VMWARE_BACKEND=y: Set to y to use VMware vCenter as the hypervisor and storage backend.
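For example, a Packstack answer file configured against an existing vCenter deployment might contain entries similar to the following; the vCenter address, service account, password, and cluster name shown here are placeholders for your own values:

CONFIG_VMWARE_BACKEND=y
CONFIG_VCENTER_HOST=vcenter1.example.com
CONFIG_VCENTER_USER=svc-openstack
CONFIG_VCENTER_PASSWORD=changeme
CONFIG_VCENTER_CLUSTER_NAME=Cluster1
CONFIG_CINDER_BACKEND=vmdk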

For more information on Packstack configuration, refer to Deploying OpenStack: Proof-of-Concept Environments.

1.2.2. Foreman Deployment

The use of Foreman to deploy OpenStack Compute and connect it to a vSphere installation using the vCenter driver is not supported.

1.2.3. Manual Deployment

1. Enable the VMware vCenter driver in nova.conf:

[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver

2. Enter the details of your vSphere environment in the [vmware] section of nova.conf:


[vmware]
host_ip - Hostname or IP address of the vCenter server.
host_port - Port to use for connecting to the vCenter server. Default is 443.
host_username - Username of the service account to use for logging in to the vCenter server.
host_password - Password of the service account to use for logging in to the vCenter server.
cluster_name - Name of the vSphere cluster to use. This will be the cluster containing the ESXi hypervisors to integrate with Compute.
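For example, a completed [vmware] section might look like the following; the host name, service account, password, and cluster name shown here are placeholders for your own values:

[vmware]
host_ip = vcenter1.example.com
host_port = 443
host_username = svc-openstack
host_password = changeme
cluster_name = Cluster1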

3. Review the management options in the [vmware] section of nova.conf:

api_retry_count: Number of times to retry an action after a failure occurs. Default is 10. This value should never be changed unless advised by support.

datastore_regex: Regular expression to match the datastore name.

maximum_objects: Limit the maximum number of objects to return in a single result. Default is 100. This value should never be changed unless advised by support.

task_poll_interval: The interval used for polling remote tasks. Default is 0.5. This value should never be changed unless advised by support.

use_linked_clone: Determine whether a linked clone should be used. Enter true or false (boolean).

vnc_port: The starting port to use for VNC connections. Default is 5900. Refer to the VMware documentation for procedures allowing VNC access through to guests.

wsdl_location: Enter the URL of the optional VIM service WSDL location. This value should never be changed unless advised by support.

1.3. POST-DEPLOYMENT TASKS


1.3.1. Create networks using nova-network VlanManager

If you are using nova-network, this step creates a network that allocates DHCP server addresses:

# nova network-create novanetwork --fixed-range-v4 192.168.36.0/22 --bridge-interface trunk0 --vlan 103

This example network uses VLAN ID 103 on the trunk0 interface. nova-network creates the corresponding vlan103@trunk0 interface and attaches it to the bridge br103. This bridge is then used for connectivity to the VM vifs. In addition, the vCenter driver searches for the br103 port group in vCenter; if it is not present, it creates a standard switch port group. You can pre-create the Distributed Switch (dvSwitch) port group; however, ensure it is configured to tag the packets with the correct VLAN tag.

Note

You cannot change the name of the bridge with the --bridge parameter.

1.3.2. Configure VNC access

To enable VNC console access to a VMware guest running on an ESXi host, edit the .vmx file of the powered-off guest:

RemoteDisplay.vnc.enabled = true
RemoteDisplay.vnc.port = 5900
RemoteDisplay.vnc.password = [Enter a complex password here]

Each guest requires a unique VNC port number. You may need to review your ESXi host's firewall configuration for the incoming connections.

1.3.3. Image uploads

Images can be uploaded using the glance image-create command:

--name: A descriptive name for the image. For easier file management, it can be the same name as the .vmdk file.

--file: The path to the VMDK file.

--property vmware_adaptertype: Specifies the storage adapter type for the VM. Examples include scsi and lsilogic.

--property vmware_disktype: Specifies the disk provisioning for the VM. Examples include sparse, preallocated, and streamOptimized.

--property vmware_ostype: The VMware GuestID of the image. Examples include rhel7_64Guest and windows8Server64Guest.

--disk-format: The disk format of the VMware image. Likely to be vmdk.

--container-format: Defines the container format for the Image Service (glance). Examples include bare and ovf.

--property hypervisor_type: Tags the image to a particular hypervisor type. Useful for multi-hypervisor environments.

For example:

# glance image-create --name rhel-guest-image-7.1-x86_64.vmdk \
  --file rhel-guest-image-7.1.x86_64.vmdk \
  --property vmware_adaptertype=scsi \
  --property vmware_disktype=sparse \
  --property vmware_ostype=rhel7_64Guest \
  --disk-format vmdk \
  --container-format bare \
  --property hypervisor_type=vmware

1.4. USE HOST AGGREGATES

If you are running a heterogeneous environment, it is recommended that you aggregate the hosts by hypervisor. This allows greater control over where an instance of a particular type (flavor) is scheduled.

1. Create two host aggregates for kvm and vcenter:

# nova aggregate-create kvm-hosts
# nova aggregate-set-metadata kvm-hosts hypervisor=kvm
# nova aggregate-create vcenter-hosts
# nova aggregate-set-metadata vcenter-hosts hypervisor=vcenter

2. Add the Compute nodes to the kvm aggregate, and the Controller to the vcenter aggregate. Replace the values in [square brackets] with those suiting your deployment:

# nova aggregate-add-host kvm-hosts [compute0 name]


# nova aggregate-add-host kvm-hosts [compute1 name]
# nova aggregate-add-host kvm-hosts [compute2 name]
# nova aggregate-add-host kvm-hosts [compute3 name]
# nova aggregate-add-host vcenter-hosts [controller name]

3. Set properties on the flavors to control which aggregate the flavor is scheduled to. Replace the values in [square brackets] with those suiting your deployment:

# nova flavor-key m1.medium set hypervisor=kvm
# nova flavor-key [vcenter flavor] set hypervisor=vcenter

1.5. KNOWN ISSUES

Important - BZ#1190706 - When running more than one Compute agent configured for the same ESXi cluster (not a recommended configuration), rebooting one of the Compute agents would delete all running instances. A new configuration key named destroy_after_evacuate has been added to the workarounds configuration group. It is recommended that users of the vCenter driver set the value of this configuration key to False, preventing automatic deletion of instances when encountering what Compute assumes is an evacuation event. This will prevent data loss and, in the event that an evacuation was truly intended, allow for manual cleanup of the instance(s) to occur.
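To apply this workaround, set the key in the workarounds group of nova.conf on each Compute agent that uses the vCenter driver, for example:

[workarounds]
destroy_after_evacuate = False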

BZ#1133566 - Uploads to the Image Service can fail after about 45 minutes of running due to a session expiration bug.


CHAPTER 2. IMPORT VMWARE GUESTS

This chapter demonstrates how to use the virt-v2v command-line tool to retrieve an existing VMware (vCenter or ESX/ESXi) guest, and then convert and upload it directly to the Image Service (glance) in RHEL OpenStack Platform.

While this chapter demonstrates the procedure in detail, you must first ensure you have (or can access) the following prerequisites before continuing:

A Red Hat Enterprise Linux 7.1 host (virtual or physical), with:

Access to the RHEL OpenStack Platform package repository.

Access to the Supplementary RPMs packages repository.

Sufficient Red Hat subscriptions to allow installation of the virt-v2v packages and their dependencies.

Sufficient disk space and network access to retrieve and temporarily store the VMware guest(s).

Administrative credentials and network access to a RHEL OpenStack Platform (5 or higher) installation.

Administrative credentials and network access to a VMware vCenter + ESX(i) installation.

The name of the VMware datacenter and host.

The name of the guest(s) to be migrated to the Image Service.

Reliable and verified backups.

System and network recommendations for this procedure are as follows:

Reliable, low-latency network bandwidth of Gigabit speeds or better.

Higher amounts of memory on VMware and Red Hat Enterprise Linux hosts will alleviate slow copying issues.

For multiple parallel conversions, consider allocating one CPU core and at least 1 GB of memory for each running conversion instance.

Note

Your system names and IP addresses will differ from those shown in this chapter, and you must modify the example commands to match those in your environment. Example systems in this chapter are named and addressed as follows:

Red Hat Enterprise Linux 7.1 system - v2vsystem1.example.com

VMware vCenter host - vcenterhost1.example.com

VMware ESX(i) host - esxihost1.example.com

Guest virtual machine targeted for conversion and migration - guestvm1

RHEL OpenStack Platform cloud - 192.0.2.1


Note

Although the virt-v2v tool reads the VMware guest, it does not make changes to the target VM itself until the conversion is completed. The target VM should be shut down (offline) before beginning this process.

2.1. CONFIGURE THE V2V HOST

1. Log in to the host system v2vsystem1.example.com as the root user.

2. Register the system to Red Hat Subscription Management:

# subscription-manager register

Enter your Red Hat Subscription Management credentials to continue.

3. To retrieve and store a list of your available pools, run the following command:

# subscription-manager list --available > /tmp/pools_available

Open the /tmp/pools_available file and take note of the Pool ID for the subscription that provides access to the following package groups:

Red Hat OpenStack RPMs

Red Hat Enterprise Linux Server RPMs

Red Hat Enterprise Linux Supplementary RPMs

4. Take that Pool ID value and run a command similar to the following. For example:

# subscription-manager subscribe --pool=8a85f4923cf494ef013ddc6045a541c8

5. Now that your system is subscribed, disable all repositories, and re-enable only those required for this procedure:

# subscription-manager repos --disable=*
# subscription-manager repos --enable rhel-7-server-rpms
# subscription-manager repos --enable rhel-7-server-supplementary-rpms
# subscription-manager repos --enable rhel-7-server-v2vwin-1-rpms

6. Enable the RHEL OpenStack Platform repositories.

For RHEL OpenStack Platform 5, use:

# subscription-manager repos --enable rhel-7-server-openstack-5.0-rpms

For RHEL OpenStack Platform 6, use:

# subscription-manager repos --enable=rhel-7-server-rh-common-rpms
# subscription-manager repos --enable=rhel-7-server-openstack-6.0-rpms

For RHEL OpenStack Platform 7, use:


# subscription-manager repos --enable=rhel-7-server-rh-common-rpms
# subscription-manager repos --enable=rhel-7-server-openstack-7.0-rpms

7. Run a system update:

# yum update

8. Reboot the system into the new kernel version if one was provided in the system update, and log in again as the root user.

Install the virt-v2v package and its dependencies:

# yum install virt-v2v

9. Install the Image Service client binary and tools:

# yum install python-glanceclient

10. Log in to your RHEL OpenStack Platform machine (or your OpenStack Dashboard host) as the root user.

Ensure TCP port 5000 is open:

# firewall-cmd --permanent --add-port=5000/tcp
# firewall-cmd --reload

11. Copy the following variables and their values from the /root/keystonerc_admin file:

OS_USERNAME
OS_TENANT_NAME
OS_PASSWORD
OS_AUTH_URL

Log out of the machine.

12. Log in to the v2vsystem1.example.com machine, and add those variables and their values to the /root/.bashrc file, substituting your settings. For example:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=97ea7049b1e94a65
export OS_AUTH_URL=http://192.0.2.1:5000/v2.0/

13. Add the following to /root/.bashrc also:

export LIBGUESTFS_BACKEND=direct

Log out and back in to the machine. You are now ready to run the virt-v2v command to retrieve, convert, and upload the guest to the Image Service.

2.1.1. Connect to VMware vCenter and convert the target guest

1. Log in to your Red Hat Enterprise Linux 7.1 system as the root user.


Optionally, connect to your VMware vCenter server and list the available guests with the following command, substituting your VMware host names:

# virsh -c 'vpx://vcenter1.example.com/datacentername/esxihost1.example.com?no_verify=1' list --all

Enter your vCenter administrator password when prompted.

Note

The no_verify=1 string is only required if you have invalid/self-signed SSL configured on your VMware host(s).

2. Once you have chosen the guest name to be used, retrieve, convert, and upload it with the following command, substituting your VMware host names and guest name:

# virt-v2v -ic 'vpx://vcenter1.example.com/datacentername/esxihost1.example.com?no_verify=1' "guestvm1" -o glance

Enter your vCenter administrator password when prompted.

Note

You can specify a different guest name to be used after conversion with the -on different_name option.
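For example, to upload the converted guest to the Image Service under the name guestvm1-new (a placeholder), you might run:

# virt-v2v -ic 'vpx://vcenter1.example.com/datacentername/esxihost1.example.com?no_verify=1' "guestvm1" -o glance -on "guestvm1-new"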

This process can take a long time. See the following for an example of the output shown:

[ 14.0] Creating an overlay to protect the source from being modified
[ 16.0] Opening the overlay
[ 77.0] Initializing the target -o glance
[ 77.0] Inspecting the overlay
[ 677.0] Checking for sufficient free disk space in the guest
[ 677.0] Estimating space required on target for each disk
[ 677.0] Converting Red Hat Enterprise Linux release 6 (Update 6) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[1139.0] Mapping filesystem data to avoid copying unused and blank areas
[1158.0] Closing the overlay
[1158.0] Copying disk 1/1 to /var/tmp/glance.9jLwLB/sda (raw)
(0.00/100%)
(1.03/100%)
(2.05/100%)
(3.08/100%)
[...output truncated...]
(100.00/100%)
[49643.0] Creating output metadata
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 675bab2875fc027f33b2f881afa86272     |
| container_format | bare                                 |
| created_at       | 2014-11-08T13:29:24                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | raw                                  |
| id               | 98b0f3c4-86a0-4daa-baef-878cb7610cb4 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | guestvm1                             |
| owner            | b33d23bd5ab94a1fa35522e0c0677381     |
| protected        | False                                |
| size             | 8589934592                           |
| status           | active                               |
| updated_at       | 2014-11-08T13:33:24                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| Property 'architecture'    | x86_64                               |
| Property 'hw_disk_bus'     | virtio                               |
| Property 'hw_vif_model'    | virtio                               |
| Property 'hypervisor_type' | kvm                                  |
| Property 'os_distro'       | rhel                                 |
| Property 'os_type'         | linux                                |
| Property 'os_version'      | 6.6                                  |
| Property 'vm_mode'         | hvm                                  |
| checksum                   | 675bab2875fc027f33b2f881afa86272     |
| container_format           | bare                                 |
| created_at                 | 2014-11-08T13:29:24                  |
| deleted                    | False                                |
| deleted_at                 | None                                 |
| disk_format                | raw                                  |
| id                         | 98b0f3c4-86a0-4daa-baef-878cb7610cb4 |
| is_public                  | False                                |
| min_disk                   | 0                                    |
| min_ram                    | 512                                  |
| name                       | guestvm1                             |
| owner                      | b33d23bd5ab94a1fa35522e0c0677381     |
| protected                  | False                                |
| size                       | 8589934592                           |
| status                     | active                               |
| updated_at                 | 2014-11-08T13:33:26                  |
| virtual_size               | None                                 |
+----------------------------+--------------------------------------+
[9886.0] Finishing off

3. Log in to your OpenStack Dashboard and confirm that the image has been uploaded in Project > Compute > Images.

The guest has been successfully converted, and the image can now be used to launch instances.
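For example, an instance could be launched from the converted image with a command similar to the following; the flavor and instance names are placeholders, and depending on your network configuration you may also need additional options such as --nic:

# nova boot --flavor m1.small --image guestvm1 guestvm1-instance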

See Manage Images in the Instances and Images Guide for further details on managing images in OpenStack.


CHAPTER 3. INTEGRATE VMWARE NSX WITH OPENSTACK NETWORKING

This chapter describes the steps required to integrate OpenStack Networking with VMware NSX using the NSX plugin for Neutron. OpenStack Networking uses the NSX plugin for Neutron to integrate with an existing VMware vCenter deployment. When installed on the network nodes, the NSX plugin enables an NSX controller to centrally manage configuration settings and push them to managed network nodes. Network nodes are considered managed when they are added as hypervisors to the NSX controller. The diagram below depicts an example NSX deployment and illustrates the route East-West traffic takes between subnets on separate Compute nodes:

3.1. SERVICE NODE CONCEPTS

This integration work conceptually introduces a number of new node types, although these nodes would host services that are likely already running in your environment:

Controller Node

Runs neutron-server

Runs VMware NSX plugin for Neutron

Network Node

Runs dhcp-agent

Runs metadata-agent

Runs Open vSwitch (OVS)

3.2. DISABLE NEUTRON-OPENVSWITCH-AGENT

The neutron-openvswitch-agent must be disabled for environments integrating with VMware NSX.The required steps will differ for HA and non-HA environments:

3.2.1. Disable neutron-openvswitch-agent for HA environments

1. Add the new constraints first, to be as non-disruptive as possible:


# pcs constraint order neutron-netns-cleanup-clone then neutron-dhcp-agent-clone
# pcs constraint colocation add neutron-dhcp-agent-clone with neutron-netns-cleanup-clone

2. Remove older Ordering constraints:

# pcs constraint remove order-neutron-openvswitch-agent-clone-neutron-dhcp-agent-clone-mandatory order-neutron-netns-cleanup-clone-neutron-openvswitch-agent-clone-mandatory

3. Remove older Colocation constraints:

# pcs constraint remove colocation-neutron-dhcp-agent-clone-neutron-openvswitch-agent-clone-INFINITY colocation-neutron-openvswitch-agent-clone-neutron-netns-cleanup-clone-INFINITY

4. Disable the resource:

# pcs resource disable neutron-openvswitch-agent

5. Remove the resource completely:

# pcs resource delete neutron-openvswitch-agent

3.2.2. Disable neutron-openvswitch-agent for non-HA environments

1. Disable the neutron-openvswitch-agent:

# systemctl disable neutron-openvswitch-agent

2. Stop the neutron-openvswitch-agent:

# systemctl stop neutron-openvswitch-agent

3. Mask the neutron-openvswitch-agent:

# systemctl mask neutron-openvswitch-agent

3.3. INSTALL AND ENABLE THE VMWARE NSX PLUGIN

1. On the Controller node, install the VMware NSX plugin for Neutron:

# yum install openstack-neutron-vmware.noarch

2. Restart the openvswitch service:

# systemctl restart openvswitch

3. Symlink /etc/neutron/plugin.ini to plugins/vmware/nsx.ini:


# rm -f /etc/neutron/plugin.ini
# ln -s plugins/vmware/nsx.ini /etc/neutron/plugin.ini

4. Configure these settings in the plugin.ini and neutron.conf files:

Note

Replace controllerList with a comma-separated string that lists the applicable NSX controllers. Replace tzuuid with the UUID of the default NSX Transport Zone.

# crudini --set /etc/neutron/plugin.ini DEFAULT nsx_controllers controllerList
# crudini --set /etc/neutron/plugin.ini DEFAULT default_tz_uuid tzuuid

5. Set the core_plugin to vmware:

# crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin vmware

6. Delete and recreate the ovs_neutron database:

# mysql <<< 'DROP DATABASE ovs_neutron'
# mysql <<< 'CREATE DATABASE ovs_neutron'

Note

Although this configuration does not use the Open vSwitch plugin, reusing the same database name will ease configuration and privilege assignment.

7. Populate the ovs_neutron database with the necessary tables:

# neutron-db-manage upgrade head

8. Restart the OpenStack Networking services:

# openstack-service restart neutron

9. Add the node to NSX management. Replace nsx_controller with the IP address of your NSX controller.

# ovs-vsctl set-manager ssl:nsx_controller

3.4. CONFIGURE THE VMWARE NSX PLUGIN

1. Edit the /etc/neutron/plugins/vmware/nsx.ini file and review the following configuration options:


nsx_user: Enter a user account with administrative privileges.

nsx_password: The password for the above user account.

req_timeout: The NSX timeout duration.

http_timeout: HTTP timeout period.

retries: Number of HTTP retries to attempt.

redirects: The maximum number of permitted HTTP redirects.

nsx_controllers: Enter the IP address of the NSX controller. Specify multiple entries here for load-balancing.

default_tz_uuid: The UUID of the default NSX Transport Zone.

default_l3_gw_service_uuid: The UUID of the gateway service.
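For example, an nsx.ini configured against two NSX controllers might contain entries similar to the following; all values shown are placeholders for your own environment:

[DEFAULT]
nsx_user = admin
nsx_password = changeme
nsx_controllers = 192.0.2.10,192.0.2.11
default_tz_uuid = 11111111-2222-3333-4444-555555555555
default_l3_gw_service_uuid = 66666666-7777-8888-9999-000000000000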

2. Restart the neutron-server service to apply the new settings:

# systemctl restart neutron-server

3.5. CONFIGURE THE DHCP AGENT

To enable the DHCP agent on your Network node, add these options to the dhcp_agent.ini file. See the following for an explanation of each value.

enable_isolated_metadata = True
enable_metadata_network = True
ovs_use_veth = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver


enable_isolated_metadata: Enable DHCP to respond to instance requests for specific host routes using Option 121.

enable_metadata_network: Allow the distribution of DHCP metadata to instances using the APIPA address 169.254.169.254.

ovs_use_veth: Enable veth for an Open vSwitch (OVS) interface.

interface_driver: Configure the DHCP agent to use OVSInterfaceDriver.

3.6. CONFIGURE THE NETWORK NODE

Perform these steps on each node that is running the NSX plugin:

Configure an IP address on both the management and data networks.

Add the network node as a managed hypervisor to the NSX controller.

Review the hypervisor in the NSX management tools to ensure that the br-int bridge is accessible.

On the ESXi host:

1. Import the security certificate:

cat > /etc/nsxvswitch/cacert.pem

2. Connect the uplink to vmnic1:

nsxcli uplink/connect vmnic1

3. Configure the IP address of the endpoint. Replace tun_if_ip with the IP address:

nsxcli uplink/set-ip vmnic1 tun_if_ip 255.255.255.0

4. Configure the default gateway. In this example, the IP address is 192.168.110.2:

nsxcli gw/set tunneling 192.168.110.2

5. Create a bridge for instance connectivity. In this case, the bridge name is NSX-Bridge:

nsxcli network/add NSX-Bridge NSX-Bridge nsx.network manual


6. Enable SSL on the nsx_controller, using its IP address as the nsx_controller value:

nsxcli manager/set ssl:nsx_controller

7. Enable custom vifs:

nsxd --allow-custom-vifs

For further details on these steps, refer to VMware’s NSX Administrator Guide.
