EMC VNX OpenStack Juno Cinder Driver Best Practices

EMC Core Technologies Division, VNX BU

Abstract

This applied best practices guide provides recommended best practices for installing and configuring the EMC® VNX™ Cinder Driver with the OpenStack Juno release.

June 2015

Applied Best Practices Guide
VNX Cinder Driver Version 4.2.0
VNX OE for Block 05.32 or above


Copyright © 2015 EMC Corporation. All rights reserved. Published in the USA.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on EMC Online Support.

EMC VNX OpenStack Juno Cinder Driver Best Practices

Applied Best Practices Guide

Part Number H14268.3


Contents

Chapter 1 OpenStack Overview
    Introduction
    OpenStack Architecture
    Cinder Architecture

Chapter 2 VNX Cinder Driver
    General Considerations
        Driver History
        Obtaining the VNX Cinder Driver
    Installation
        NaviSecCLI Installation
        Cinder-Volume Installation
        Host Registration
        MPIO Configuration
        Cinder.conf Configuration
    Advanced Functionality
        Multiple Authentication Type Support
        Security File Support
        Creating a Volume with a Different Provisioning Type
        Creating a Volume with a Different Tiering Policy
        Create a Volume with FAST Cache
        iSCSI Target Connectivity Check
        Force Deleting LUNs in Storage Groups
        LUN Number Threshold Check
        SP Toggle for HA

Chapter 3 Additional Information
    Miscellaneous Topics
        HA Deployment
        Volume Migration
        Volume Retyping
        Instance Migration
        Upgrades
    Troubleshooting
        Timeout Settings
        iSCSI Multipath Faulty Device Handling
        Logging

Conclusion

Appendices
    Appendix A: VNX Cinder Driver History
    Appendix B: Cinder.conf Configuration List


Preface

As part of an effort to improve and enhance the performance and capabilities of its product line, EMC from time to time releases revisions of its hardware and software. Therefore, some functions described in this guide may not be supported by all revisions of the hardware or software currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, please contact your EMC representative.

Note: This document was accurate as of the time of publication. However, as information is added, new versions of this document may be released to EMC Online Support. Check the website to ensure that you are using the latest version of this document.

Purpose

The Applied Best Practices Guide delivers straightforward guidance to the majority of customers using the VNX Cinder Driver in a mixed business environment. The focus is on environment performance and maximizing the functionality of the advanced storage features, while avoiding mismatches of technology. Some exception cases are addressed in this guide; however, less commonly encountered edge cases are not covered by general guidelines and are addressed in use-case-specific white papers.

Guidelines can and will be broken, appropriately, owing to differing circumstances or requirements. Guidelines must adapt to:

• Different sensitivities toward data integrity
• Different economic sensitivities
• Different problem sets

Audience

This document is intended for EMC customers, partners, and employees who are installing and/or configuring the VNX Cinder Driver with a VNX or VNX2 system. Some familiarity with EMC storage systems is assumed.


Related Documents

The following documents provide additional, relevant information. Access to these documents is based on your logon credentials. All of the documents can be found on http://support.emc.com. If you do not have access to the following content, contact your EMC representative.

• EMC OpenStack Github Repository: https://github.com/emc-openstack/
• Host Connectivity Guide
• Introduction to the EMC VNX2 Series - A Detailed Review
• Introduction to EMC VNX2 Storage Efficiency Technologies
• Virtual Provisioning for the VNX2 Series - Applied Technology
• VNX FAST Cache – A Detailed Review
• VNX FAST VP – A Detailed Review
• VNX Replication Technologies – An Overview
• VNX Snapshots
• VNX2 Deduplication and Compression – Maximizing Effective Capacity Utilization
• VNX2 Multicore FAST Cache - A Detailed Review
• VNX2 FAST VP - A Detailed Review


Chapter 1 OpenStack Overview

This chapter presents the following topics:

Introduction
OpenStack Architecture
Cinder Architecture


Introduction

OpenStack is an emerging open source cloud computing solution implemented as an Infrastructure as a Service (IaaS) to control large pools of compute, storage, and networking resources throughout a datacenter.

OpenStack Block Storage, also known as Cinder, provides persistent block-level storage devices for use with OpenStack compute instances. The block storage system manages the creation, attachment, and detachment of block devices to servers along with attendant snapshot, replication, and deduplication services.

The VNX Cinder Driver for OpenStack is a Python-based software plugin that integrates VNX Block components into an OpenStack Cloud. OpenStack integration represents a tremendous growth opportunity by making VNX reliability, performance, and support available to customers engineering an OpenStack hosted domain or private cloud solution.

This guide introduces specific configuration recommendations that enable good performance with the VNX Cinder Driver.

OpenStack Architecture

Figure 1 – OpenStack Architecture Overview

The OpenStack architecture, shown in Figure 1, ties together assorted compute, networking, and storage nodes through an interdependent set of services.

Current OpenStack services and their code names are listed in Table 1:

Table 1 – OpenStack Services

OpenStack Service    Code/Project Name
Compute              Nova
Networking           Neutron
Object Storage       Swift
Block Storage        Cinder
File Storage         Manila
Identity             Keystone
Image                Glance
Dashboard            Horizon
Orchestration        Heat
Database             Trove
Data Processing      Sahara

Cinder Architecture

Figure 2 – Cinder Application Flow

The major components of Cinder are the Cinder-API, Cinder-Scheduler, and Cinder-Volume (Figure 2). Components outside of Cinder send requests through a REST (Representational State Transfer) protocol. Within Cinder, AMQP (Advanced Message Queuing Protocol) is used for communication.


The Cinder-API defines a RESTful API for volume operations such as Create Volume, Delete Volume, Create Snapshot, and Delete Snapshot. It will ask Cinder-Scheduler to dispatch the request if there is not enough information to decide which Cinder-Volume instance can serve the request. If there is sufficient information, the request will be sent directly to the specific Cinder-Volume instance.

Cinder-Scheduler is the decision maker that decides which Cinder-Volume instance will be the request worker. The policy used by Cinder-Scheduler depends on its scheduler driver. Several schedulers are already implemented in OpenStack. For example, FilterScheduler attempts to determine a host using filtering and weighing, and is typically the default selection for Cinder.

Cinder-Volumes are workers with a Volume Driver loaded. Each worker invokes methods in its specific driver to serve requests from the Cinder-API or Cinder-Scheduler.

The VNX Cinder Driver is an EMC implementation of the Volume Driver. It serves each method call with a sequence of Navisphere CLI requests to the VNX.


Chapter 2 VNX Cinder Driver

This chapter presents the following topics:

General Considerations
Installation
Advanced Functionality


General Considerations

The VNX Cinder Driver is supported on VNX1 systems running Block OE 5.32 or above. The driver is also supported on all VNX2 systems. For both systems, the Snapshot and Thin Provisioning features must be enabled.

Driver History

The VNX Cinder Driver's version number is formatted as <Major Number>.<Middle Number>.<Minor Number>. For example, the current version of the VNX Cinder Driver for OpenStack Juno is 4.2.0. The numbers change as follows:

• The driver's Major Number is incremented upon the release of a new OpenStack version. Backward compatibility with earlier major versions is not guaranteed.
  o An exception to this rule is the 2.*.* and 3.*.* releases, which were both targeted at the OpenStack Icehouse release.
• The driver's Middle Number is incremented when new features are added to the VNX Cinder Driver; backward compatibility is preserved.
• The driver's Minor Number is incremented when a bug fix or minor enhancement has been added; backward compatibility is preserved.

Appendix A lists the full feature history of the VNX Cinder Driver.

Obtaining the VNX Cinder Driver

EMC has been contributing the VNX Cinder Driver to the official OpenStack project since the Havana release. Any OpenStack release from Havana onward includes the VNX Cinder Driver.

New features are published to the upstream OpenStack Cinder repository. In the event that critical issues are found with the VNX Cinder Driver, fixes will be produced as quickly as possible, and may fall outside of OpenStack’s release cycle. In this case, the critical bug fixes will be published to EMC’s OpenStack Github repository. A link to EMC’s OpenStack Github repository can be found in the Additional References section.

As of this publication, EMC pledges to support the latest generally available Cinder release (known as the N release), as well as the Cinder version from the previous OpenStack release (known as the N-1 release). Updates for the N-1 Cinder release are uploaded to the corresponding release branch of the EMC OpenStack Github repository, or to the master branch if that branch is not being used by the N release.


Installation

NaviSecCLI Installation

For Ubuntu x64, the Debian package is available on the EMC OpenStack Github under the EMC Freeware License. It can be found at the following location:

• https://github.com/emc-openstack/naviseccli/raw/master/navicli-linux-64-x86-en-us_7.33.2.0.51-1_all.deb

For all other variants of Linux, the Navisphere CLI is available at the following locations. If possible, it is recommended to install the same CLI version as the VNX Operating Environment on the VNX array.

• VNX2 Series: https://support.emc.com/downloads/36656_VNX2-Series
• VNX1 Series: https://support.emc.com/downloads/12781_VNX1-Series

Install Navisphere CLI on the nodes that will run Cinder, before the Cinder-Volume services are started. After installation, issue the following command to lower the security level of Navisphere CLI, and then start the Cinder-Volume services.

/opt/Navisphere/bin/naviseccli security -certificate -setLevel low
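
As a concrete example, the following commands sketch the installation on an Ubuntu x64 node using the package linked above (the use of wget and the Ubuntu service name are assumptions about your environment):

# Download and install the Debian package
wget https://github.com/emc-openstack/naviseccli/raw/master/navicli-linux-64-x86-en-us_7.33.2.0.51-1_all.deb
sudo dpkg -i navicli-linux-64-x86-en-us_7.33.2.0.51-1_all.deb
# Lower the Navisphere CLI security level, then start Cinder-Volume
sudo /opt/Navisphere/bin/naviseccli security -certificate -setLevel low
sudo service cinder-volume start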

Cinder-Volume Installation

The VNX Cinder Driver is loaded by the Cinder-Volume service. Please refer to the Cinder-Volume installation steps for your particular deployment platform, available on the official OpenStack website (http://www.openstack.org).

Host Registration

Before a server or host is able to access the block storage on a VNX array, its initiators need to be registered to the VNX target ports. A server or host may be a node running the Nova-Compute service or the Cinder-Volume service.

If you allow initiators to access through any of the VNX target ports, you can simply set initiator_auto_registration to True in the VNX Cinder Driver section of /etc/cinder/cinder.conf. If you prefer that only certain VNX target ports be used, manual registration is required.

The following steps describe the host registration process:
• For Fibre Channel, the physical FC connection and zoning must be done first.
• For iSCSI, an iSCSI initiator utility such as iscsiadm must be executed by the user on the host to establish the iSCSI session (see the sketch following this list).
• After the initiators have logged in to the VNX, they must be registered to a host entry. Use a corresponding Host Agent application, or in Unisphere go to Host > Initiators and select each initiator path. Click the Register button to register the initiators to a host entry. The following information can be provided:
  o Initiator Type: CLARiiON/VNX
  o Failover Mode: ALUA
  o Host Name: Hostname of your server/host
  o IP Address: IP address assigned to the NIC of the server/host
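
A minimal sketch of establishing the iSCSI session with iscsiadm, assuming 10.10.72.50 is one of the VNX iSCSI target portal IPs in your environment:

# Discover the iSCSI targets presented by the VNX portal
sudo iscsiadm -m discovery -t sendtargets -p 10.10.72.50:3260
# Log in to all discovered targets to establish the sessions
sudo iscsiadm -m node -L all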

In some OpenStack deployments, such as FC NPIV (N_Port ID Virtualization) with Auto Zoning, each virtual instance may have its own set of initiators. This can result in a large and constantly changing number of registered initiators, so it is resource efficient to deregister initiators when a VM is no longer using them. To accomplish this, enable Initiator Automatic Deregistration by setting the initiator_auto_deregistration and destroy_empty_storage_group parameters to True in the VNX Cinder Driver section of /etc/cinder/cinder.conf.

MPIO Configuration

Enabling multipath volume access is recommended for robust data access. Consider the following actions when configuring MPIO:

• Install multipath-tools, sysfsutils, and sg3-utils on nodes that are running the Nova-Compute and Cinder-Volume services.
• Set use_multipath_for_image_xfer to True in /etc/cinder/cinder.conf for each node.
• Set iscsi_use_multipath to True in /etc/nova/nova.conf for any nodes using iSCSI.
• Finally, modify the configuration file of your multipathing software to accommodate the VNX multipathing model. An example configuration of /etc/multipath.conf is provided in Figure 3.

blacklist {
    # Skip the files under /dev that are definitely not FC/iSCSI devices
    # Different systems may need different customization
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
    # Skip the LUNZ device from VNX
    device {
        vendor "DGC"
        product "LUNZ"
    }
}
defaults {
    user_friendly_names no
    flush_on_last_del yes
}
devices {
    # Device attributes for EMC CLARiiON and VNX series ALUA
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker emc_clariion
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio alua
        failback immediate
    }
}

Figure 3 – Example Multipath Configuration

Cinder.conf Configuration

In addition to the host and MPIO-related parameters mentioned earlier in this guide, the /etc/cinder/cinder.conf file must include information on the VNX system. A full list of parameters, including whether each is mandatory or optional, is documented in Appendix B.


The VNX Cinder Driver utilizes the default FilterScheduler offered by Cinder-Scheduler. To ensure compatibility, verify that scheduler_driver is either not specified in /etc/cinder/cinder.conf or has the value cinder.scheduler.filter_scheduler.FilterScheduler.

Note that when changes are made to /etc/cinder/cinder.conf, the Cinder-Volume service must be restarted for the changes to take effect. To do this, stop the cinder-volume service on the node and then start it.
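
On a node using standard Ubuntu packaging, for example, the restart can be performed as follows (the service name may differ on other distributions):

sudo service cinder-volume stop
sudo service cinder-volume start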

Example /etc/cinder/cinder.conf configurations are provided for a single pool setup using iSCSI (Figure 4), a single pool setup using Fibre Channel (Figure 5), and a multi-pool setup using iSCSI (Figure 6).

[DEFAULT]
# Other Content
# ...
# Storage Pool Name
storage_vnx_pool_name = Pool_01_SAS
# Typically SPA IP address
san_ip = 10.10.72.41
# Typically SPB IP address
san_secondary_ip = 10.10.72.42
# Global account username is used
san_login = username
san_password = password
storage_vnx_authentication_type = Global
# NaviSecCLI is installed in /opt/Navisphere/bin
naviseccli_path = /opt/Navisphere/bin/naviseccli
# VNX Cinder Driver EMCCLIISCSIDriver is used
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
# Do not remove empty storage groups
destroy_empty_storage_group = False
# Register initiators to all iSCSI target portals
initiator_auto_registration = True
# Choose backendA-Pool_01_SAS as the name
volume_backend_name = backendA-Pool_01_SAS
# Node node1hostname will use 10.0.0.1 and 10.0.0.2 to connect to the VNX's
# iSCSI target portals; node node2hostname will use 10.0.0.3
iscsi_initiators = {"node1hostname":["10.0.0.1", "10.0.0.2"],"node2hostname":["10.0.0.3"]}
# Disable multipath (set it to true if you need multipath for HA)
use_multipath_for_image_xfer = false
# Other Content
# ...

Figure 4 – Example cinder.conf for a Single Pool Based iSCSI Backend

EMC VNX OpenStack Juno Cinder Driver Best Practices Applied Best Practices Guide

16

[DEFAULT]
# Other Content
# ...
storage_vnx_pool_name = Pool_01_SAS
san_ip = 10.10.72.41
san_secondary_ip = 10.10.72.42
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
default_timeout = 10
# VNX Cinder Driver EMCCLIFCDriver is used
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
destroy_empty_storage_group = False
initiator_auto_registration = True
volume_backend_name = backendA-Pool_01_SAS
# Enable multipath
use_multipath_for_image_xfer = true
# Other Content
# ...

Figure 5 – Example cinder.conf for a Single Pool Based FC Backend

[DEFAULT]
# Other Content
# ...
# Give the section names corresponding to the different back ends
# Here two back ends, backendA and backendB, are enabled
enabled_backends = backendA,backendB

# Configuration for backendA, which is similar to the single back-end examples
[backendA]
storage_vnx_pool_name = Pool_01_SAS
san_ip = 10.10.72.41
san_secondary_ip = 10.10.72.42
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
initiator_auto_registration = True
volume_backend_name = backendA-Pool_01_SAS

# Configuration for backendB, which is similar to the single back-end examples
[backendB]
storage_vnx_pool_name = Pool_02_SSD
san_ip = 10.10.72.41
san_secondary_ip = 10.10.72.42
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
initiator_auto_registration = True
volume_backend_name = backendB-Pool_02_SSD
# Other Content
# ...

Figure 6 – Example cinder.conf for a Multi-Pool based iSCSI Backend


Advanced Functionality

Multiple Authentication Type Support

The VNX Cinder Driver uses Navisphere CLI to communicate with the underlying VNX array. Navisphere CLI requires credentials to be authenticated by the array. The VNX array supports credentials from three scopes: Global, LDAP, and Local. An account from any of these scopes can be used. Note that the appropriate scope type must be specified in /etc/cinder/cinder.conf.
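
For example, the scope is selected with the storage_vnx_authentication_type parameter in the back-end section of /etc/cinder/cinder.conf; the LDAP credentials below are illustrative:

# Use an LDAP-scope account instead of the default Global scope
san_login = ldapuser
san_password = ldappassword
storage_vnx_authentication_type = LDAP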

Security File Support

While Navisphere CLI credentials are accepted in plaintext, security best practices involve providing the credentials in the form of a security file. The process for leveraging security file support in Navisphere CLI is detailed below.

1. Find the Linux user account that owns the Cinder-Volume processes. In this example, the user cinder is the owner of the Cinder-Volume processes.
2. Switch to the root account with the sudo command:
   o sudo su
3. Change the default shell of the cinder user by modifying its line in /etc/passwd:
   o Before: cinder:x:113:120::/var/lib/cinder:/bin/false
   o After: cinder:x:113:120::/var/lib/cinder:/bin/bash
4. Save the Navisphere CLI credentials on behalf of the cinder user to a security file. The example provided uses admin/admin as the credentials and saves the security file under /etc/secfile/array1.
   o su -l cinder -c '/opt/Navisphere/bin/naviseccli -AddUserSecurity -user admin -password admin -scope 0 -secfilepath /etc/secfile/array1'
5. Repeat step 4 to create a separate security file for each of the VNX storage systems you will be using.
6. Revert the changes made in step 3 by modifying /etc/passwd:
   o Before: cinder:x:113:120::/var/lib/cinder:/bin/bash
   o After: cinder:x:113:120::/var/lib/cinder:/bin/false
7. If plaintext credentials were previously added to /etc/cinder/cinder.conf, remove them now.
8. In /etc/cinder/cinder.conf, add the storage_vnx_security_file_dir parameter. Assign it the security file location you created in step 4:
   o storage_vnx_security_file_dir = /etc/secfile/array1
9. Restart the Cinder-Volume service.

Creating a Volume with a Different Provisioning Type

The VNX Cinder Driver supports the provisioning of LUNs with the Thick Provisioning, Thin Provisioning, Compression, and Deduplication features. The VNX array must be licensed for these features in order for the VNX Cinder Driver to utilize them.

These provisioning types are specified using the storagetype:provisioning key in the Extra Specs field of a Volume Type. If the key is not included, volumes default to Thick Provisioning, with Compression and Deduplication disabled. Examples of using the various provisioning types are provided in Figure 7.


cinder --os-username admin --os-tenant-name admin type-create "ThickOrOtherName"
cinder --os-username admin --os-tenant-name admin type-key "ThickOrOtherName" set storagetype:provisioning=thick
cinder --os-username admin --os-tenant-name admin type-create "ThinOrOtherName"
cinder --os-username admin --os-tenant-name admin type-key "ThinOrOtherName" set storagetype:provisioning=thin
cinder --os-username admin --os-tenant-name admin type-create "CompressedOrOtherName"
cinder --os-username admin --os-tenant-name admin type-key "CompressedOrOtherName" set storagetype:provisioning=compressed compression_support=True
cinder --os-username admin --os-tenant-name admin type-create "DeduplicatedOrOtherName"
cinder --os-username admin --os-tenant-name admin type-key "DeduplicatedOrOtherName" set storagetype:provisioning=deduplicated deduplication_support=True

Figure 7 – Volume Creation with Different Provisioning Types
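
Once a Volume Type exists, a volume can be created with it. A minimal sketch using the Thin type defined above and a 10 GB volume (the display name is illustrative):

cinder --os-username admin --os-tenant-name admin create --volume-type "ThinOrOtherName" --display-name "thin_vol_01" 10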

Creating a Volume with a Different Tiering Policy

The VNX Cinder Driver can interface with the FAST VP capabilities of the underlying VNX storage array. FAST VP (Fully Automated Storage Tiering for Virtual Pools) enables a VNX array to measure, analyze, and implement a dynamic storage tiering policy across a Storage Pool with varying levels of drives. For more information on FAST VP, please refer to the white paper for your VNX system: VNX2 FAST VP – A Detailed Review or VNX FAST VP – A Detailed Review.

A different storage tiering policy can be specified for the LUN backing an OpenStack volume by using the storagetype:tiering key in the Extra Specs field of the Volume Type specification. Five tiering policies are supported by the VNX Cinder Driver:

• StartHighThenAuto
• Auto
• HighestAvailable
• LowestAvailable
• NoMovement

Note that if the key is not specified, the LUN will be created with the StartHighThenAuto tiering policy. Examples of using the various tiering policies for a volume type are provided in Figure 8.


cinder --os-username admin --os-tenant-name admin type-create "StartHighThenAutoOrOtherName"
cinder --os-username admin --os-tenant-name admin type-key "StartHighThenAutoOrOtherName" set storagetype:tiering=StartHighThenAuto fast_support='<is> True'
cinder --os-username admin --os-tenant-name admin type-create "AutoOrOtherName"
cinder --os-username admin --os-tenant-name admin type-key "AutoOrOtherName" set storagetype:tiering=Auto fast_support='<is> True'
cinder --os-username admin --os-tenant-name admin type-create "HighestAvailableOrOtherName"
cinder --os-username admin --os-tenant-name admin type-key "HighestAvailableOrOtherName" set storagetype:tiering=HighestAvailable fast_support='<is> True'
cinder --os-username admin --os-tenant-name admin type-create "LowestAvailableOrOtherName"
cinder --os-username admin --os-tenant-name admin type-key "LowestAvailableOrOtherName" set storagetype:tiering=LowestAvailable fast_support='<is> True'
cinder --os-username admin --os-tenant-name admin type-create "NoMovementOrOtherName"
cinder --os-username admin --os-tenant-name admin type-key "NoMovementOrOtherName" set storagetype:tiering=NoMovement fast_support='<is> True'

Figure 8 – Volume Creation with Different Tiering Policies

Create a Volume with FAST Cache

The VNX Cinder Driver can be configured to create a backing LUN from a Storage Pool that has FAST Cache enabled or disabled. FAST Cache is a VNX software feature that extends the storage system's existing caching capacity by copying frequently accessed data to FAST Cache Optimized Flash drives. For more information on FAST Cache, refer to the white paper for your VNX system: VNX2 Multicore FAST Cache – A Detailed Review or VNX FAST Cache – A Detailed Review.

Managing FAST Cache utilization is accomplished via the fast_cache_enabled parameter in the Extra Specs field for Volume Type.

If the value is set to True, the volume will be created from a Storage Pool that has FAST Cache enabled, if available. If the value is set to False, the volume will be created from a Storage Pool that has FAST Cache disabled, if available. Examples for setting the FAST Cache parameter are found in Figure 9.

cinder --os-username admin --os-tenant-name admin type-create "FASTCacheEnabledOrOtherName"
cinder --os-username admin --os-tenant-name admin type-key "FASTCacheEnabledOrOtherName" set fast_cache_enabled='<is> True'
cinder --os-username admin --os-tenant-name admin type-create "FASTCacheDisabledOrOtherName"
cinder --os-username admin --os-tenant-name admin type-key "FASTCacheDisabledOrOtherName" set fast_cache_enabled='<is> False'

Figure 9 – Volume Creation with FAST Cache

iSCSI Target Connectivity Check

When a node attempts to attach a VNX LUN-backed volume through iSCSI, the node needs Cinder to specify an iSCSI portal to use. By default, the VNX Cinder Driver returns a random target portal from the SP that owns the LUN. But under certain circumstances, such as during network issues or hardware maintenance, the node may only be able to connect to certain target portals of the VNX. This leaves the potential for the driver to choose a target portal that the node cannot reach.


To deal with this difficulty, the iscsi_initiators parameter in /etc/cinder/cinder.conf can be modified to specify the initiators from which the VNX Cinder Driver should choose. The value for the iscsi_initiators parameter is provided in JSON (JavaScript Object Notation) format, whose keys are hostnames and whose values are lists of initiator IP addresses. Figure 10 shows an example of the format.

{"<$(sudo hostname) on host1>": ["<iSCSI initiator IP 1.1>", "<iSCSI initiator IP 1.2>"], "<$(sudo hostname) on host2>": ["<iSCSI initiator IP 2.1>", "<iSCSI initiator IP 2.2>"]}

Figure 10 – JSON Format for iscsi_initiators

Note that the value for iscsi_initiators should be specified on a single line in /etc/cinder/cinder.conf. All nodes that need to attach volumes managed by the back-end through iSCSI will need to be specified.

Force Deleting LUNs in Storage Groups

When deleting a volume backed by a LUN on a VNX, the operation may fail, placing the volume into an error_deleting state. This can occur because a timeout was reached within OpenStack, or because of an interruption during the deletion task. Consequently, the corresponding LUN may remain assigned to a Storage Group on the VNX, which will prevent it from being deleted.

To address this issue, the force_delete_lun_in_storagegroup parameter can be assigned to True in /etc/cinder/cinder.conf so that the VNX Cinder Driver will check and remove the LUN from its Storage Group before forcing the deletion.

LUN Number Threshold Check

The VNX has a hard limit on the number of Pool LUNs that can be created on the system. When this limit is reached, no more Pool LUNs can be created, even if there is free storage space.

To make Cinder-Scheduler aware of this limit, set the check_max_pool_luns_threshold parameter to True in /etc/cinder/cinder.conf. Once the limit is reached, Cinder-Volume will report zero free capacity to Cinder-Scheduler for new volume creation requests, so a different back end is chosen.

SP Toggle for HA

The VNX provides an Active-Active control path through its two Storage Processors (SPs). If one of the SPs is down, the VNX Cinder Driver will send Navisphere CLI commands to the surviving SP. As long as one SP is alive, the system can serve VNX Cinder Driver requests properly.

To ensure this behavior is operating properly, the san_ip and san_secondary_ip parameters in /etc/cinder/cinder.conf should be set to SPA’s and SPB’s IP addresses, respectively.
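
The options discussed in the last three sections are all one-line settings in the back-end section of /etc/cinder/cinder.conf. A minimal excerpt combining them (the IP addresses are illustrative):

# Remove a LUN from its Storage Group before a forced delete
force_delete_lun_in_storagegroup = True
# Report zero free capacity once the Pool LUN limit is reached
check_max_pool_luns_threshold = True
# SPA and SPB management IPs, respectively, for SP toggling
san_ip = 10.10.72.41
san_secondary_ip = 10.10.72.42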


Chapter 3 Additional Information

This chapter presents the following topics:

Miscellaneous Topics
Troubleshooting


Miscellaneous Topics

HA Deployment

Deploying Cinder in a highly available (HA) configuration involves considerations for all three of the Cinder services:

• Cinder-API: The OpenStack recommendation is to use HAProxy for an Active/Active setup. Refer to the official OpenStack High Availability Guide for more information and an example configuration.
• Cinder-Scheduler: Because Cinder-Scheduler services listen on the same AMQP topic, Active/Active HA is supported automatically as long as more than one node is running the Cinder-Scheduler service.
• Cinder-Volume: There are current limitations in OpenStack that prevent an effective Active/Active HA solution. However, other software applications such as Pacemaker can be used to deliver an Active/Passive HA configuration for Cinder-Volume. In the event the Cinder-Volume service needs to fail over to another node, the new node will be considered a completely new service rather than a replacement for the failed one. To avoid this behavior, specify the replacement node's hostname as the value for the parameter backend_host in /etc/cinder/cinder.conf (see the sketch following this list).
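
A minimal /etc/cinder/cinder.conf sketch of pinning the Cinder-Volume service identity for an Active/Passive pair (the hostname value is illustrative):

# Keep the service identity stable so a failover node is treated as the
# same Cinder-Volume service rather than a new one
backend_host = cinder-volume-ha-node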

Volume Migration

OpenStack allows an administrator to migrate volumes between back-ends. There are two types of volume migration: Host-Assisted and Storage-Assisted. Storage-Assisted Volume Migration is available when the source and destination back-ends are managing the same VNX array. Host-Assisted Volume Migration will be used if Storage-Assisted Volume Migration is not available. Example executions of Host-Assisted and Storage-Assisted Volume Migration are provided in Figure 11 and Figure 12.

# Check os-vol-host-attr:host in the output to find the volume's current back end
cinder show <volume id>
# List available back ends and pick the destination
cinder service-list | grep cinder-volume
# Start migration
cinder migrate --force-host-copy True <volume id> <destination back end>

Figure 11 – Performing a Host-Assisted Volume Migration

# Check os-vol-host-attr:host in the output to find the volume's current back end
cinder show <volume id>
# List available back ends and pick the destination
cinder service-list | grep cinder-volume
# Start migration (--force-host-copy is False by default)
cinder migrate <volume id> <destination back end>

Figure 12 – Performing a Storage-Assisted Volume Migration

Volume Retyping

OpenStack allows a user to change the Volume Type of a volume after it has been created. The retype operation can be done in two different ways: Host-Assisted and Storage-Assisted. The method chosen depends on the details of the retype:


• If the original back-end supports the new Volume Type, the Storage-Assisted Retype method will be used. The VNX Cinder Driver will modify the properties of the LUN backing the volume to complete the retype operation.
• If the original back-end is incapable of supporting the new Volume Type, a new back-end will be chosen. If the destination back-end is managing the same VNX using the VNX Cinder Driver, Storage-Assisted retyping will be used.
• If the original back-end is incapable of supporting the new Volume Type, and the new back-end chosen is not managing the same VNX, Host-Assisted Volume Migration and retyping will be performed.

Examples of performing a volume retype operation are provided in Figure 13.

# Reject the retype if Host-Assisted Volume Migration would be involved
cinder retype --migration-policy never <volume id> <new volume-type id>
# Change the volume type even if Host-Assisted Volume Migration is needed
cinder retype --migration-policy on-demand <volume id> <new volume-type id>

Figure 13 – Performing a Volume Retype Operation

Instance Migration

OpenStack supports the migration of a virtual instance from one Compute node to another. This is useful for load balancing across multiple Compute nodes. Two types of instance migration are supported: Cold Migration and Live Migration.

Cold Migration will shut down a running instance before migrating it to another Compute node, and then boot it on the destination. The steps for performing an instance cold migration are shown in Figure 14.

# Check OS-EXT-SRV-ATTR:host in the output to find the current compute node
nova show <vm instance id>
# Start the migration (the destination will be decided by Nova automatically)
nova migrate <vm instance id>
# Check the status of the VM and find the change in OS-EXT-SRV-ATTR:host
nova show <vm instance id>

Figure 14 – Instance Cold Migration

Live Migration will migrate a running instance to another Compute node while preserving its state. The instance does not have to be shut down while the Live Migration occurs. There are additional requirements for Live Migration to be possible for an instance; these are shown in Table 2.

Table 2 – Live Migration Criteria

Storage Configuration                    Storage Configuration of VM
of Compute Node                          E       E+V     V       R+Any
/var/lib/nova/instances is shared        lm      lm      lm      lm
/var/lib/nova/instances is NOT shared    lm-b    X       lm      X

Legend:
E = Ephemeral Storage
R = Read-only storage like CD-ROMs and Configuration Drive (config_drive)
V = Volume Storage
lm = nova live-migration <vm instance id> <destination node hostname>
lm-b = nova live-migration --block-migrate <vm instance id> <destination node hostname>
X = Not Supported

If the instance undergoing a Live Migration has volume storage from a VNX, additional requirements must be met:

• The source and destination Compute nodes must have MPIO enabled.
• The OpenStack release version must be Juno or later.
• If iSCSI is used, iscsi_use_multipath must be set to True in /etc/nova/nova.conf for each Compute node.

Upgrades

Currently, Cinder does not offer a non-disruptive upgrade path. To minimize the impact, consider following these steps when performing an upgrade (a command-level sketch follows the list):

• Define an outage window and announce it in advance.
• Make sure there are no volumes in the attaching state. If this cannot be avoided, choose a time when there are as few attaching volumes as possible.
• Stop the Cinder-API service to make sure no further requests are accepted.
• Stop the Cinder-Scheduler and Cinder-Volume services after all ongoing volume operations have completed.
• Upgrade the packages for Cinder-API, Cinder-Scheduler, and Cinder-Volume. If Cinder-API and Cinder-Scheduler need to be changed, the DB schema must also be updated.
• Once the upgrade has completed, start the Cinder-Scheduler and Cinder-Volume services first. Then, start the Cinder-API service.
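
A command-level sketch of this ordering, assuming standard Ubuntu service names (package upgrade steps vary by distribution; cinder-manage db sync is the standard DB schema update command):

sudo service cinder-api stop
# Wait for in-flight volume operations to complete, then:
sudo service cinder-scheduler stop
sudo service cinder-volume stop
# Upgrade the Cinder packages here; update the DB schema if
# Cinder-API or Cinder-Scheduler changed
cinder-manage db sync
sudo service cinder-scheduler start
sudo service cinder-volume start
sudo service cinder-api start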

When considering the upgrade of a VNX system, know that during the non-disruptive process, the SPs of the VNX array will be rebooted one by one. Due to a limitation in Nova, if a volume is attached during this reboot time window, iSCSI traffic to the volume may be interrupted even if MPIO is used. To elaborate on this sequence of events:

• During the upgrade, a reboot of the secondary SP occurs.
• While the reboot occurs, a user begins to attach a volume to an instance.
• Because the secondary SP is rebooting, the Nova service will only see iSCSI paths to the primary SP. Access to the volume will only be provided over these paths.
• After the secondary SP completes its reboot, the primary SP will be rebooted. While the primary SP reboots, service is handled by the secondary SP.
• Because the iSCSI paths to the secondary SP have not been exposed to the instance, I/O to the volume is interrupted.

Because of this limitation, there is no solution to ensure that I/O is not interrupted during a VNX upgrade. As a workaround, when the VNX is being upgraded, shut down the Cinder-Volume service on the corresponding back-end to prevent any volume attaching during this time period. After the VNX upgrade has completed, the Cinder-Volume service on the back-end may be started again.


Troubleshooting

Timeout Settings

By default, Cinder-API will time out a request if a response is not received within 30 seconds. When the Cinder-Volume service or the VNX is under high load, 30 seconds may be too short for proper operation. The recommendation is to increase this timeout setting to a higher value. The value itself will vary by deployment, but starting with a value of 600 and adjusting it based on outcome is the recommended approach.

To alter the timeout value, modify rpc_response_timeout in /etc/cinder/cinder.conf. If the rpc_response_timeout parameter does not exist, the default value of 30 seconds will be chosen.

Similarly, the rpc_response_timeout parameter in /etc/nova/nova.conf should be raised to accommodate high-activity situations. This is necessary because certain functionality of Nova involves requests to Cinder-API.

Back-ends periodically send updates to the Cinder-Scheduler service as a check-in. The service_down_time parameter in /etc/cinder/cinder.conf defines the maximum number of seconds since the last check-in for the Cinder-Scheduler service to consider a service alive. The value of service_down_time, which is 60 seconds by default, should be raised to match the value of rpc_response_timeout.
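
A minimal /etc/cinder/cinder.conf excerpt reflecting these recommendations (600 is the suggested starting point, not a universal value):

[DEFAULT]
rpc_response_timeout = 600
service_down_time = 600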

As mentioned in the HA Deployment section, HAProxy is a fast and reliable solution that offers high availability, load balancing, and proxying for TCP and HTTP-based applications. When used with the Cinder-API service, HAProxy has its own default timeout while waiting for a response from Cinder-API. To ensure proper functionality, raise the timeout server parameter so that it is larger than the rpc_response_timeout set for Cinder.
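
A hedged haproxy.cfg excerpt (the 610-second value is illustrative; the only requirement is that timeout server exceed Cinder's rpc_response_timeout):

defaults
    # Must be larger than rpc_response_timeout (600 seconds in the sketch above)
    timeout server 610s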

Finally, OpenStack components rely on the system clock for timeout calculation. If different nodes have varying amounts of time drift, the cooperation of components may become unpredictable. It is suggested that all nodes have NTP configured in order to synchronize system times.

iSCSI Multipath Faulty Device Handling

When iSCSI-based MPIO is used in OpenStack, faulty multipath devices may occur over time due to various OpenStack issues. There is currently no complete solution provided by the OpenStack community for avoiding faulty devices.

When VNX iSCSI storage is used, the faulty_device_cleanup.py script can be executed to mitigate this issue. The script will query the multipath devices interacting with the VNX and remove any faulted devices, while keeping any healthy devices. It is suggested that the script is deployed on all nodes running the Nova-Compute service and configured to run periodically via a CRON job. The script can be obtained from EMC’s OpenStack Github page (https://github.com/emc-openstack/vnx-faulty-device-cleanup).
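
A minimal crontab sketch for a Nova-Compute node (the installation path and the 10-minute interval are illustrative assumptions):

# Run the VNX faulty-device cleanup script every 10 minutes
*/10 * * * * /usr/local/bin/faulty_device_cleanup.py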

Logging

When errors occur in OpenStack, the most accessible method of root-causing the issues is analyzing the OpenStack logs. To enhance the logging provided by OpenStack, set the verbose and debug parameters to True in /etc/cinder/cinder.conf. The same parameters can be specified in /etc/nova/nova.conf, /etc/glance/glance-api.conf, and /etc/glance/glance-registry.conf to produce additional logging for the Nova and Glance services. Note that the services need to be restarted before the enhanced logging will take effect.
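
A minimal excerpt, applicable to each of the configuration files named above:

[DEFAULT]
# Enable enhanced logging; restart the service for the change to take effect
verbose = True
debug = True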

The log files can be found in the /var/log directory, under a directory specific to the service. For example, the logs for Cinder can be found in /var/log/cinder. Use searching tools like grep to locate traces of the error in the log files, and review the individual log files to help deduce the sequence that resulted in the failure.


Conclusion

This best practices guide provides configuration and usage recommendations for the VNX Cinder Driver in general usage cases.

For a detailed discussion of the reasoning or methodology behind these recommendations, or for additional guidance around more specific use cases, see the documents in the Related Documents section.


Appendices

Appendix A: VNX Cinder Driver History

Version 1.0.0 (Base: N/A; GA Date: N/A; GA Repository: N/A; Supported OpenStack Release: Havana)
• Initial minimum iSCSI support

Version 2.0.0 (Base: 1.0.0; GA Date: 2014-04-17; GA Repository: OpenStack Github; Supported OpenStack Releases: Havana & Icehouse)
• Advanced LUN Features
  o Thin/Thick Support
• Robustness enhancement

Version 3.0.0 (Base: 2.0.0; GA Date: 2014-05-04; GA Repository: EMC Github; Supported OpenStack Releases: Havana & Icehouse)
• Array-based Back End Support
• FC Basic Support
• Target Port Selection for MPIO
• Initiator Auto Registration
• Storage Group Auto Deletion
• Multiple Authentication Support
• Storage-Assisted Volume Migration
• SP Toggle for HA

Version 3.0.1 (Base: 3.0.0; GA Date: 2014-05-23; GA Repository: EMC Github; Supported OpenStack Releases: Havana & Icehouse)
• Security File Support

Version 3.0.3 (Base: 3.0.1; GA Date: 2014-11-17; GA Repository: EMC Github; Supported OpenStack Release: Icehouse)
• Performance Improvement by Code Tuning
• Performance Improvement by Batch Processing

Version 3.0.4 (Base: 3.0.3; GA Date: 2014-12-26; GA Repository: EMC Github; Supported OpenStack Release: Icehouse)
• Force delete LUN in Storage Groups

Version 4.0.0 (Base: 3.0.1; GA Date: N/A; GA Repository: OpenStack Github; Supported OpenStack Release: Juno)
• Advanced LUN Features
  o Compression Support
  o Deduplication Support
  o FAST VP Support
  o FAST Cache Support
• Storage-assisted Retype
• External Volume Management
• Read-only Volume
• FC Auto Zoning

Version 4.1.0 (Base: 4.0.0; GA Date: 2014-10-16; GA Repository: OpenStack Github; Supported OpenStack Release: Juno)
• Initial Consistency Group Support

Version 4.2.0 (Base: 4.1.0; GA Date: 2015-03-13; GA Repository: EMC Github; Supported OpenStack Release: Juno)
• Performance Improvement by Code Tuning
• LUN Number Threshold Support
• Force delete LUN in Storage Groups
• Initiator Automatic Deregistration
• Pool-aware Scheduler Support
• iSCSI Multipath Enhancement
• Performance Improvement by Batch Processing


Appendix B: Cinder.conf Configuration List

storage_vnx_pool_name (Optional; default: None)
  Name of the storage pool managed by the back-end.

san_ip (Required)
  SP IP address of the VNX array (usually SPA's IP address).

san_login (Required)
  Username for the VNX array.

san_password (Required)
  User password for the VNX array.

storage_vnx_authentication_type (Optional; default: Global)
  Scope of the user account.

storage_vnx_security_file_dir (Optional)
  Directory path that contains the VNX security file. Can be used in place of san_password.

naviseccli_path (Required)
  File path of the NaviSecCLI binary.

san_secondary_ip (Optional)
  The other SP IP address of the VNX array (usually SPB's IP address).

default_timeout (Optional)
  Default timeout for CLI operations, in minutes. For example, LUN migration is a typical long-running operation whose duration depends on the LUN size and the load of the array. An upper bound appropriate to the specific deployment can be set to avoid an unnecessarily long wait.

volume_driver (Required)
  Driver class: cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver for iSCSI, or cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver for FC.

volume_backend_name (Optional)
  Name of this back-end. If it is not specified, the driver class name will be used.

initiator_auto_registration (Optional; default: False)
  If enabled, the driver will register the initiator node to the VNX as a side effect of volume attaching.

destroy_empty_storage_group (Optional; default: False)
  If enabled, empty storage groups will be deleted as a side effect of volume detaching.

initiator_auto_deregistration (Optional; default: False)
  If enabled along with destroy_empty_storage_group, the driver will deregister the corresponding initiators after the storage group is deleted.

max_luns_per_storage_group (Optional; default: 255)
  The maximum number of LUNs supported by each storage group.

iscsi_initiators (Optional; default: None)
  Mapping between hostnames and their iSCSI initiator IP addresses.

iscsi_ip_address (Deprecated; default: None)
  One of the iSCSI target IP addresses of the VNX array managed by the back-end. This option was needed due to the limited functionality of the 2.0.0 driver; post-2.0.0 drivers no longer need it.

use_multipath_for_image_xfer (Optional; default: False)
  If true, multipath will be used when Cinder loads an image to a volume, uploads a volume as an image, or performs a host-assisted volume migration.

force_delete_lun_in_storagegroup (Optional; default: False)
  If true, the driver will move LUNs out of storage groups and then delete them when the user tries to delete volumes whose corresponding LUNs remain in a storage group on the VNX array.

check_max_pool_luns_threshold (Optional; default: False)
  If true, the pool-based back end will check the Pool LUN limit and report 0 free capacity to the scheduler if the limit is reached, so the scheduler can skip pools that have run out of Pool LUN numbers.