
WHITE PAPER

XtremIO Integration with Mirantis OpenStack

Deployment, configuration and integration of Mirantis

OpenStack Mitaka with Dell EMC XtremIO all-flash array

ABSTRACT

This white paper describes the process and considerations involved in deploying a Mirantis™ OpenStack® environment on the Dell EMC™ XtremIO® all-flash storage array. It also explains how XtremIO's unique features (such as Inline Data Reduction techniques, scale-out architecture, data protection, etc.) provide high performance, simplicity and space-saving benefits when deploying it as primary storage for OpenStack environments.

May, 2017


The information in this publication is provided "as is." Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright © 2016 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA: 05/17. Dell EMC White Paper, H16028.

Dell EMC believes the information in this document is accurate as of its publication date. The information is subject to change without notice.


TABLE OF CONTENTS

EXECUTIVE SUMMARY ..... 5
    Business Case ..... 5
    Audience ..... 5
INTRODUCING XTREMIO ..... 6
    Architecture ..... 7
    XtremIO’s Benefits for Cloud Environments ..... 8
        Scale-Out ..... 8
        Data Reduction ..... 9
        Copy Data Services ..... 9
OPENSTACK OVERVIEW ..... 10
    OpenStack Modules ..... 11
    OpenStack Nodes ..... 12
    Storage in OpenStack ..... 13
        XtremIO Cinder Driver ..... 14
    Mirantis Fuel ..... 15
DEPLOYING OPENSTACK WITH XTREMIO ..... 16
    Overview ..... 16
    Using the XtremIO Cinder Driver on OpenStack ..... 16
        XtremIO Cinder Driver Configuration using the Fuel Plugin ..... 16
        Manual Configuration of the XtremIO Cinder Driver ..... 20
OPENSTACK BLOCK STORAGE OPERATIONS WITH XTREMIO ..... 23
    Cinder Operations when using XtremIO ..... 23
        Verifying XtremIO Volume Type ..... 23
        Creating and Mapping XtremIO Volumes in OpenStack ..... 23
        OpenStack Snapshots and Clones of XtremIO Volumes ..... 28
        Manage and Unmanage Volumes ..... 33
        Consistency Groups ..... 35
BOOTING INSTANCES USING XTREMIO ..... 40
    OpenStack Boot Options ..... 40
        Boot from Image Using Local Storage ..... 40
        Boot from Image Using Cinder Volumes ..... 41
        Boot from an Instance’s Snapshot ..... 43
        Boot a Bootable Cinder Volume ..... 45
        Boot from a Bootable Cinder Volume Snapshot ..... 49
    Summary ..... 50
APPENDIX A – CONFIGURATION SUMMARY ..... 51
    cinder.conf File ..... 51
        Basic Parameters ..... 51
        Image-Cache Parameters ..... 51
        Image Upload to Cinder Parameters ..... 51
        Over-Subscription Parameters ..... 52
        Multipathing Parameters ..... 52
        SSL Certification Parameters ..... 52
        cinder.conf File Example ..... 52
    Cinder’s policy.json File ..... 53
        Consistency Group Parameters ..... 53
    glance-api.conf File ..... 53
        Store in Cinder Parameters ..... 53
    nova.conf File ..... 53
        Thin Provisioning Parameters ..... 53
APPENDIX B – CLI COMMANDS ..... 54
    Fuel CLI Commands ..... 54
        Fuel Plugin Commands ..... 54
        Fuel Environment Commands ..... 54
        Fuel Deployment Commands ..... 54
    OpenStack CLI Commands ..... 54
        Volume Type Commands ..... 54
        Basic Volume Commands ..... 55
        Snapshots and Clones Commands ..... 55
        Volumes Manage and Unmanage Commands ..... 55
        Consistency Group Commands ..... 56
        Bootable Volumes, Images and Instance Launch Commands ..... 56
REFERENCES ..... 58
    OpenStack ..... 58
    Mirantis ..... 58
    XtremIO ..... 58


EXECUTIVE SUMMARY

This white paper reviews the integration of Mirantis™ OpenStack® cloud environments with the Dell EMC™ XtremIO® all-flash array as the storage backend. It describes how to deploy such an environment and how to work with it. The paper also discusses the benefits of using XtremIO in this role, and how its unique features can be leveraged to create a cloud environment that is easy to operate, fast at deploying resources and delivers excellent I/O performance, while saving organizational resources: physical footprint, storage capacity and operations budget.

BUSINESS CASE

OpenStack has been widely adopted around the world by users and organizations as a private and public cloud solution for controlling and utilizing large pools of compute, storage and network resources in their datacenters. This open-source Infrastructure-as-a-Service (IaaS) software is used by a growing number of companies for quick deployment of virtual machines, networks, storage and other services, meeting the industry's demand for an easy, inexpensive and fast solution for dynamic, scalable compute environments.

While the amount of data being stored everywhere grows larger, Information Technology (IT) departments are also being asked to cut costs and deliver quickly, both in the time it takes to deploy new services and in those services' response times. Adopting OpenStack as a cloud solution gives IT departments a software-based management layer with which to oversee and orchestrate their datacenter resources. OpenStack provides fast deployment of services, but the need remains to store the massive amount of data produced in the cloud somewhere it can be saved efficiently and accessed quickly.

Dell EMC's XtremIO All-Flash Array complements that need perfectly. Its unique scale-out architecture maintains a dynamic amount of data at any scale, and its Inline Data Reduction capabilities (such as thin provisioning, deduplication and compression) cut the physical storage required for the logical data saved by a factor of several times at least, saving customers both space and cost. As a 100% flash-based array, XtremIO was built specifically to utilize its flash disks optimally, which allows it to deliver ultra-high performance at very low latency to its storage clients, over both FC and iSCSI connections. XtremIO's Copy Data Services are also a great benefit for cloud environments, as entire projects and tenants in the cloud can be copied to new test, development and analytics environments for almost no extra space on the storage system and with no performance degradation to either environment. XtremIO comes with an easy-to-use interface that gives storage administrators a quick and convenient way to set up enterprise-class storage environments, provision storage to client hosts and applications, and monitor performance.

This paper discusses:

The XtremIO features and added value for OpenStack environments.

OpenStack’s cloud architecture and storage implementations.

Guidance, considerations and best practices for deploying, configuring and operating OpenStack with XtremIO.

AUDIENCE

This paper is intended for:

IT administrators

Storage administrators

Storage/Data center architects

Technical managers

Any other IT personnel who are responsible for designing, deploying, and managing cloud or storage infrastructures for their

companies


INTRODUCING XTREMIO

Dell EMC's XtremIO is an enterprise-class, scalable all-flash storage array that provides rich data services with high performance. It is designed from the ground up to utilize flash disks to their full potential, and uses advanced inline data reduction methods to reduce the amount of physical data that actually needs to be stored on the disks. XtremIO comes with a simple, easy-to-use interface for storage administrators and fits a wide variety of use cases for customers who need a fast and efficient storage system for their datacenters.

Figure 1. Dell EMC XtremIO All-Flash Storage Array


ARCHITECTURE

The XtremIO All-Flash Storage Array is based on a scale-out architecture and is comprised of building blocks, called X-Bricks, which can be clustered together to grow performance and capacity as required. Each X-Brick is a highly available, high-performance unit that consists of dual Active-Active Storage Controllers and a set of twenty-five SSDs. Communication across the cluster is performed using high-speed, low-latency Remote Direct Memory Access (RDMA) over InfiniBand switches. X-Bricks are available in configurations of 5, 10, 20 or 40 TB per X-Brick.

Figure 2. XtremIO Building Block – The X-Brick

XtremIO employs a content-aware storage engine. Data streams that enter the system are broken down into data blocks, which are fingerprinted with a unique signature based on the content of the block. Those fingerprints are then checked against a mapping table maintained by the system to detect duplicate blocks. Only unique data blocks are further compressed and written to the SSDs. The data is distributed between the X-Bricks and Storage Controllers in a manner that ensures parallelism, for optimal performance and balanced resource utilization. The system chooses the Storage Controller for each block based on its fingerprint value, and the mathematical process that calculates those fingerprints results in a uniform distribution across all Storage Controllers, both in theory and in practice, which allows the system to utilize its resources in an ideal manner.
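The content-aware write path described above can be sketched in a few lines of code. The following Python sketch is purely illustrative, not XtremIO's implementation: the block size, the SHA-1 fingerprint function and the in-memory dictionary standing in for the mapping table are all assumptions made for the example.

```python
import hashlib
import zlib

BLOCK_SIZE = 8192  # assumed block size for this sketch

class ContentAwareStore:
    """Toy model of a content-aware write path: fingerprint each block,
    store only unique blocks (compressed), and reference duplicates by
    adjusting a pointer table instead of writing the data again."""

    def __init__(self):
        self.blocks = {}    # fingerprint -> compressed unique block
        self.volumes = {}   # volume name -> list of fingerprints (pointers)

    def write(self, volume, data):
        pointers = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha1(block).hexdigest()   # content fingerprint
            if fp not in self.blocks:              # unique: compress and store
                self.blocks[fp] = zlib.compress(block)
            pointers.append(fp)                    # duplicate: pointer update only
        self.volumes[volume] = pointers

    def physical_blocks(self):
        return len(self.blocks)

store = ContentAwareStore()
store.write("vol1", b"A" * BLOCK_SIZE * 4)   # four identical blocks
store.write("vol2", b"A" * BLOCK_SIZE * 4)   # a full duplicate of vol1
# Eight logical blocks were written, but only one unique block is stored.
```

Because the fingerprint is derived from content, a second copy of the same data costs only pointer entries, which is the mechanism behind the dedup-friendly behavior described in the Data Reduction section below.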

With its intelligent architecture, XtremIO provides a storage system that is easy to set up and requires neither tuning by the client nor complex capacity or data protection planning, as the system is built to take care of these on its own. The system also comes with a simple user interface for creating and mapping volumes to hosts, and presents the customer with data metrics and graphs for monitoring usage and performance.


XTREMIO’S BENEFITS FOR CLOUD ENVIRONMENTS

The XtremIO storage system serves many use cases in the IT world, thanks to its high performance and advanced capabilities. One major use case is virtualized environments and cloud computing. Below we detail some of XtremIO's key advantages for cloud environments such as OpenStack's:

SCALE-OUT

Cloud environments like OpenStack's tend to be very dynamic in their size and performance needs. Many configurations start small and grow larger and larger as their use increases over time. Such environments need a storage system that can start small and grow in both capacity and performance as required. XtremIO's storage system satisfies this use case perfectly. It can be installed with an initial number of X-Bricks chosen to satisfy the environment's current needs, and can grow to up to eight X-Bricks (the currently supported maximum) as those needs increase. Each X-Brick added to the system provides extra capacity, as well as additional connections, compute cores and RAM that add to the system's overall performance capabilities. As opposed to storage systems that offer only scale-up capabilities to increase capacity, XtremIO's architecture offers a solution for environments whose performance needs also grow, as they often do. The performance of an XtremIO storage system grows linearly with each X-Brick added.

Each X-Brick is comprised of, amongst other things:

A disk array enclosure containing 25 SSDs (of 400GB, 800GB or 1.6TB each)

Two Storage Controllers, each of which includes:

o 2 processors (8/10 cores each)

o 256/512 GB RAM

o Two 8Gb/s FC ports

o Two 10GbE iSCSI ports

And so, each additional X-Brick contributes extra capacity from its SSDs, extra compute and memory resources from each Storage Controller, and extra bandwidth for incoming and outgoing traffic from each Storage Controller's additional connections, all for the cloud services to utilize.
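As a rough illustration of this linear growth, the per-X-Brick figures above can be tallied for a given cluster size. This is a back-of-the-envelope sketch only; it assumes the larger controller configuration listed above (10 cores per processor, 512 GB RAM per controller) and simply multiplies the per-brick resources:

```python
def cluster_resources(x_bricks):
    """Aggregate raw resources for a cluster of the given size, using the
    per-X-Brick figures listed above (larger controller config assumed)."""
    per_brick = {
        "ssds": 25,               # disks per enclosure
        "cpu_cores": 2 * 2 * 10,  # 2 controllers x 2 processors x 10 cores
        "ram_gb": 2 * 512,        # 2 controllers x 512 GB
        "fc_ports": 2 * 2,        # 2 controllers x two 8Gb/s FC ports
        "iscsi_ports": 2 * 2,     # 2 controllers x two 10GbE iSCSI ports
    }
    return {name: value * x_bricks for name, value in per_brick.items()}

print(cluster_resources(1))  # a single X-Brick
print(cluster_resources(8))  # the maximum supported cluster size
```

Every resource scales by the same factor as the brick count, which is the sense in which capacity, connectivity and compute grow together.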

Figure 3. XtremIO Scale-Out capabilities


DATA REDUCTION

With XtremIO, the logical usable amount of storage capacity available is substantially higher than the physical flash storage in place

due to the array’s always-on in-line data reduction technologies:

With XtremIO, LUNs are always thin-provisioned, meaning space on disk is allocated only when data is actually written, and only for the amount written. This allows cloud and storage administrators to provision more space than the system physically provides (i.e. over-provisioning), on the assumption that hosts and applications will not use their entire allotted capacity immediately, or at all, so capacity planning can be based on usable space alone. This enables a more flexible storage provisioning approach, which is important in the cloud, and saves costs, as there is no need to purchase extra storage while the existing amount is not being used to the fullest.
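In OpenStack, this over-provisioning behavior is exposed through Cinder's over-subscription settings (summarized in Appendix A). As a hedged illustration only, a backend section in cinder.conf might cap the ratio like this; the backend section name and the value 10.0 are examples, and `max_over_subscription_ratio` is a standard Cinder option whose accepted values may vary by release:

```ini
[xtremio-iscsi]
# Example only: allow provisioning up to 10x the physical capacity
# reported by the thin-provisioned backend.
max_over_subscription_ratio = 10.0
```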

XtremIO uses a highly effective data deduplication technique, which divides the data into blocks and writes only the unique blocks (blocks that do not already exist in the system) to the storage array. This is an in-line operation: not only is duplicated data never saved more than once in the storage, it also stops being processed by the system as soon as it is marked a duplicate, with only in-memory pointer values being adjusted. This makes writes to the system both fast and space-efficient, and reduces latency and storage cost. It comes in especially handy in virtualized and cloud environments, as those tend to contain a large number of duplicated systems holding duplicated data in order to achieve highly available, high-performance services. The combination of a highly duplicated data set with XtremIO's data services yields a high deduplication ratio for the data being saved.

Another in-line mechanism used by XtremIO as part of the data flow is data compression. XtremIO uses a "buckets" compression method, which tries to compress every block written (after deduplication is calculated) and places it in a "bucket" of half a block, a quarter of a block or an entire block. This increases the data reduction rate in the system and helps save more space.
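The bucketing idea can be sketched as follows. This is an illustrative Python sketch, not the array's actual algorithm: the block size, the zlib compressor and the three bucket sizes are assumptions chosen to mirror the description above.

```python
import zlib

BLOCK_SIZE = 8192  # assumed block size for this sketch
BUCKETS = [BLOCK_SIZE // 4, BLOCK_SIZE // 2, BLOCK_SIZE]  # quarter, half, full

def bucket_for(block):
    """Compress a block and return (bucket_size, payload): the smallest
    bucket the compressed block fits into, or a full raw block when
    compression does not help enough."""
    compressed = zlib.compress(block)
    for bucket in BUCKETS:
        if len(compressed) <= bucket:
            return bucket, compressed
    return BLOCK_SIZE, block  # incompressible: store the raw block

# A run of zeros compresses far below a quarter block,
# so it lands in the smallest bucket.
bucket, payload = bucket_for(b"\x00" * BLOCK_SIZE)
```

Packing compressed blocks into a small set of fixed bucket sizes keeps space accounting simple while still capturing most of the compression gain.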

COPY DATA SERVICES

One of the most valuable technologies XtremIO offers is its agile copy data services. XtremIO elevates snapshots and clones beyond simple data protection: its copy data services can instantly create full-size, full-performance volume copies for non-production use. Whether customers need multiple instances of their data for test, analytics or other purposes, XtremIO provides a way to efficiently deploy all of an application's environments on a single array, with no loss of performance and minimal space consumption. Copies of volumes are made using XtremIO Virtual Copies (XVCs), which can be writable or read-only, and are created instantly with no actual data copied; only new pointers to the existing data are created. Latency remains low and steady for every copy, as the data is kept in the same place, and only the changes made in each copy after the snapshot was taken are written to the system. This means there is no need to plan in advance for the number of copies a project will require throughout its course: whenever a new copy of the project is needed, with its entire data set, it can be generated quickly and will consume almost no extra space on the array. This gives application and infrastructure teams breakthrough workflow and business-process agility by eliminating time-wasting, performance-sapping, capacity-hungry brute-force copies, and provides the flexibility to work on more value-added projects and innovations.

Figure 4. XtremIO Data Services


OPENSTACK OVERVIEW

OpenStack is a cloud operating system that controls large pools of compute, storage and network resources throughout a datacenter. Its job is to allow IT administrators to distribute those resources, through a dashboard, to tenants and projects, and to give each tenant administrator the power to provision their resources as they see fit. OpenStack lets users deploy virtual machines and other instances that handle different tasks in their environments. It makes horizontal scaling of applications easy and dynamic, simply by adding more compute instances to the scope.

OpenStack is considered an Infrastructure-as-a-Service (IaaS) platform, as it provides an easy way for users to quickly add new instances or virtual machines, upon which services and applications can run. OpenStack appeals to IT managers as an inexpensive and easy solution for managing cloud environments of different and dynamic sizes.


OPENSTACK MODULES

OpenStack is comprised of different software components called modules. Being open source, anyone can add additional components to OpenStack for their own use, but the OpenStack community has identified several key components as the "core" of the application, and those are distributed as part of any basic OpenStack system.

OpenStack’s modules include:

Nova – OpenStack's primary compute engine. It is used for deploying and managing large numbers of virtual machines and instances to handle a project's computing tasks. It can work with a wide variety of virtualization technologies, such as KVM, VMware, Xen, Hyper-V and containers.

Glance – The Image repository service of OpenStack that acts as a registry for virtual disk images. It provides discovery,

registration and delivery of operating system images, and allows users to add new images or take snapshots of existing servers to

create images from them.

Swift – OpenStack’s object storage service that provides HTTP accessible containers for large amounts of data including static

entities such as videos, images, email messages, files or VM images. Objects are stored as binaries on the underlying storage

along with metadata attributes. It supports horizontal scaling and high-availability. Containers have no file hierarchy and files are

stored as blobs of data.

Neutron – OpenStack’s networking service, which handles the management of virtual networks in the cloud. It includes

management of virtual networks, subnets, routers, switches, firewalls, VPNs and load balancers. This module helps to ensure that

different components of an OpenStack deployment can communicate with one another, and with the outside world, efficiently.

Cinder – The block storage service of OpenStack that provides persistent block storage management. Its purpose is to virtualize the management of block storage and enable end users to consume storage resources without needing to know where the storage is actually deployed. It also provides an API that allows vendors to write drivers for their storage arrays, integrating them with the module.

Heat – An orchestration and automation framework for OpenStack's cloud services. It consumes the APIs of all other OpenStack modules and allows users to configure a template for their static or dynamic environments. It includes stack monitoring and auto-scaling of the components within the template.

Ceilometer – OpenStack's telemetry service, which collects data across all of OpenStack's core components to gather statistics, provide customer billing, track resources and send alarms when necessary. It receives data from the other components of the cloud and is in charge of normalizing and transforming that data.

Keystone – The identity service module of OpenStack which provides user authentication and authorization to all of OpenStack’s

components. It is a central component of the software as all other modules need to contact it for their task authorization.

Horizon – The OpenStack dashboard: a Python-based, web-based self-service UI that interacts with the underlying OpenStack services and allows users to manage their cloud through it.
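The vendor driver API that Cinder exposes (mentioned under the Cinder module above) boils down to a class implementing a set of volume operations. The following Python skeleton is an illustrative sketch of that shape only; the class name, the stand-in backend client and the method bodies are assumptions made for the example, not the actual XtremIO driver that ships with Cinder.

```python
class ExampleVolumeDriver:
    """Minimal sketch of the shape of a Cinder volume driver: each method
    translates a Cinder request into a management call against the array."""

    def __init__(self, backend_client):
        # backend_client stands in for the vendor's management API client.
        self.client = backend_client

    def create_volume(self, volume):
        # Create the volume on the array with the requested name and size.
        self.client.create_volume(volume["name"], volume["size"])

    def delete_volume(self, volume):
        self.client.delete_volume(volume["name"])

    def create_snapshot(self, snapshot):
        self.client.snapshot_volume(snapshot["volume_name"], snapshot["name"])

    def initialize_connection(self, volume, connector):
        # Map the volume to the host's initiator so Nova can attach it.
        lun = self.client.map_volume(volume["name"], connector["initiator"])
        return {"driver_volume_type": "iscsi", "data": {"target_lun": lun}}
```

Cinder's scheduler and volume service call these methods on behalf of the end user, which is how "consume storage without knowing where it is deployed" works in practice.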

Figure 5. OpenStack Modules


OPENSTACK NODES

In practical terms, the OpenStack platform provides its services via OpenStack Nodes. A node is any kind of host, physical or virtual, on which the OpenStack services run and whose resources they use. Each OpenStack Node is assigned a role that determines which services run on it and how its resources are used. Some roles for OpenStack nodes are:

Controller Node – a primary node in an OpenStack environment, on which core services like Glance, Keystone, Horizon, Heat

and other modules and module schedulers are running. Its disks will hold primary logs, the system’s databases, Horizon services

and more.

Compute Node – the Compute Node is also a basic node in an OpenStack environment. The environment's virtual machines run on these nodes, which provide them with CPU and memory resources (and optionally virtual storage).

Cinder Node – an optional node to provide block storage to virtual machines. It uses the space on its disks to provision volumes

through an iSCSI network. (Block storage can also be provided through external storage arrays, as we will see throughout this

document.)

* Some roles can co-exist on the same node, and some cannot.

Figure 6. OpenStack Nodes


STORAGE IN OPENSTACK

Let us take a deeper look at the different types of storage provided in OpenStack:

Ephemeral Storage – Virtual on-instance volumes that exist only inside an instance and are created by the Nova module. They are analogous to internal disks in a physical server. They are created on the environment's compute nodes, in a section of the disk dedicated to virtual storage for instances. These volumes are dependent on their instances and can be accessed only through them. Once an instance is terminated, its ephemeral storage is deleted along with it.

Block Storage – Persistent volumes that exist until they are deleted. These volumes are created and managed by the Cinder module. They can be used as additional storage for virtual machines to store application data, or as their root volumes. A Cinder volume can be moved from one instance to another, but can generally be attached to only one instance at a time. Cinder volumes can be backed by one of these two options:

o A Cinder Node – An OpenStack node on which a section of the disk is dedicated to provision and provide block storage to

instances. This section on the disk is formatted with Linux LVM and each volume provisioned is a logical volume on the

disk. The volumes are attached from the Cinder Node to the instance on the Compute Node via iSCSI connectivity.

o An external storage system – Volumes provided by a storage array outside of the OpenStack nodes. These volumes inherit the features and properties of their storage system, which the cloud environment can leverage for better storage efficiency and performance. The integration between the OpenStack software and the storage array is made possible by a Cinder driver, written by the storage array’s vendor, that implements the Cinder module’s operations against the array’s API. External volumes can be attached to an instance via FC or iSCSI connectivity.

Object Storage – Containers provided by the Swift module to store all sorts of objects required by the system or clients, such as virtual machine images, email messages, media objects or any other data that needs to be shared in or across a project. The data and the containers persist until they are deleted and are not owned by any instance in the cloud. They are available from anywhere and can be set to be private or public. Containers are easily scalable for future growth. Swift’s configuration allows setting how and where objects are saved within the environment (number and type of nodes, devices on nodes, object replication for availability, etc.). Objects can be saved on any device, including external storage mapped to the nodes.

File Storage – File sharing provided to clients in the cloud by the Manila module. These are shared file systems that service

projects via several file sharing protocols such as NFS, CIFS, GlusterFS, HDFS etc. (via the Ethernet network), and are used by

projects to share required data and files. Shares do not belong to a single instance in the cloud, and persist until they are deleted.

They are scalable and, as with Cinder, can be provided either through a Manila Node or through external storage using the right

driver.

The next table compares the properties of the four storage options in OpenStack:

                   Ephemeral Storage       Block Storage           Object Storage           File Storage

Providing module   Nova                    Cinder                  Swift                    Manila

Physical location  Compute Nodes           Cinder Nodes /          Any device on            Manila Nodes /
                                           External Storage        selected nodes           External Storage

Used for           Mainly operating        Operating systems &     Virtual machine images   Shared files across
                   systems                 additional VM storage   and other global         a project
                                                                   project data

Accessed from      Linked instance only    Attached instance only  Anywhere;                Any instance with
                                                                   simultaneously           access; simultaneously

Persistency        Until associated VM     Until deleted           Until deleted            Until deleted
                   terminated

Protocol used      Filesystems underlying  FC / iSCSI              REST API                 NFS / CIFS /
                   Compute Nodes                                                            GlusterFS / HDFS

Table 1. OpenStack’s storage options


XTREMIO CINDER DRIVER

The XtremIO high-performance all-flash array implements its own Cinder driver to communicate with OpenStack’s Cinder module and offer block storage services to the cloud. Using the driver, an OpenStack environment can connect to an XtremIO storage cluster and use it as its backend storage, leveraging the array’s capabilities for a more storage-efficient environment.

One of the top advantages of XtremIO utilized by its Cinder driver is XtremIO Virtual Copies (XVCs), which enable cloning volumes inside the storage array instantaneously, with no extra space consumed on the array. The driver uses XVCs for all of OpenStack’s volume copy operations (snapshots, clones from a snapshot or a volume, consistency group snapshots, clones from a consistency group snapshot or a consistency group, and creating bootable volumes from Glance using the Glance cache). This makes Cinder operations extremely quick and contributes to storage efficiency, as much more data and many more entities can be created in the cloud while consuming very little space on the storage cluster.

XtremIO Cinder Driver is supported starting with OpenStack Juno.

Figure 7. Cinder Driver Integration within

OpenStack’s Cinder Module

The supported operations that are implemented in the driver for the Mitaka distribution are:

- Create, delete, clone, extend, attach and detach volumes.
- Create and delete volume snapshots.
- Create a volume from a snapshot.
- Copy an image to a volume.
- Create an image from a volume.
- Manage and unmanage a volume.
- Get volume statistics.
- Create, modify, delete, and list consistency groups.
- Create, modify, delete, and list consistency group snapshots.
- Create a consistency group from another consistency group or from a consistency group snapshot.

Additional operations implemented in the driver from the Newton version onwards:

- Manage and unmanage a snapshot.
- Volume migration (host assisted).


MIRANTIS FUEL

One of OpenStack’s top contributors, Mirantis develops and supports its own distribution of OpenStack. It provides software for automated deployment and management of OpenStack environments called Fuel, which can be managed via a web-based GUI or a CLI. Using Fuel, the cloud administrator can create new OpenStack environments; configure their network, storage, compute and other settings prior to deployment; monitor high-level alerts for the environment; create deployment workflows; run health checks; and view the OpenStack logs in a convenient interface. This white paper was written using a demo OpenStack environment deployed by Mirantis Fuel, and is primarily directed at Mirantis OpenStack environments.

Figure 8. Mirantis Fuel Web-based GUI


DEPLOYING OPENSTACK WITH XTREMIO

OVERVIEW

This document is specific to the Mirantis distribution of OpenStack Mitaka. It was written using a demo environment of Mirantis OpenStack 9.2 with Mitaka, and some of the actions and comments detailed here may be specific to this distribution or version, even though most are common to multiple versions and distributions of OpenStack.

This paper assumes an XtremIO version 4.x and an XMS version 4.2.x are installed as the backend storage of the environment and

that the connections used for the deployment (Ethernet, iSCSI and FC) are all up and healthy (and ideally configured with multipathing

for performance and redundancy).

All actions and operations described here were performed, and are to be performed, in Mirantis’ Fuel WebUI and OpenStack’s Horizon dashboard, unless stated otherwise (some operations can only be run through the Fuel/OpenStack CLI). A list of the CLI commands corresponding to the actions performed in the Fuel UI and Horizon in this paper can be found in Appendix B. A few read-only actions in the WebUI of the XtremIO Management Server (XMS) are also detailed here, for verifying actions performed on the backend storage; these are stated explicitly as well.

USING THE XTREMIO CINDER DRIVER ON OPENSTACK

To deploy an OpenStack environment with an XtremIO array as its storage backend, one must enable the usage of the XtremIO Cinder

Driver in the environment. There are two possible ways to do that in a Mirantis OpenStack environment:

1. Installing the XtremIO Plugin for Fuel on the Fuel Master Server – allows the configuration of XtremIO parameters through Mirantis Fuel UI prior to deployment, which automatically creates the XtremIO volume type during the deployment process.

2. Manually editing the cinder.conf file with the required information and creating the XtremIO volume type after the deployment of the environment.

We will elaborate on both methods.

XTREMIO CINDER DRIVER CONFIGURATION USING THE FUEL PLUGIN

Using the XtremIO Plugin for Fuel will add the option to configure basic XtremIO connection information in the Fuel UI prior to the

deployment of a new environment, which will also create the XtremIO volume type during the deployment itself.

The XtremIO Plugin for Fuel comes as an RPM file to be installed on the Fuel Master Node. For the 9.x and Mitaka distribution, the RPM’s version is 3.0-3.0.2-1 (the XtremIO Plugin version for Mirantis Fuel should not be confused with the XtremIO Cinder Driver version, which is 1.0.7 for the Mitaka release). The following table lists the XtremIO Fuel Plugin and XtremIO Cinder Driver versions that correspond to Mirantis/OpenStack releases up to Mitaka:

OpenStack Release   Mirantis Distribution   XtremIO XMS Version   XtremIO Fuel Plugin Version   XtremIO Cinder Driver Version
Juno                6.x                     3.0.x                 -                             1.0.4
Kilo                7.0                     3.0.x & 4.0.x         1.0-1.0.1-1                   1.07K.3
Liberty             8.0                     4.0.x                 2.0-2.0.1-1                   1.0.7L1
Mitaka              9.x                     4.2.x                 3.0-3.0.2-1                   1.0.7

Table 2. XtremIO Fuel Plugin version and XtremIO Cinder Driver version corresponding to OpenStack release / Mirantis distribution

To install the XtremIO Fuel plugin:

1. Move the XtremIO Plugin RPM to the Fuel Master Node.

2. Run the fuel plugin install command in the Fuel Master Node (use the RPM version according to your Mirantis distribution):

# fuel plugins --install emc_xtremio-3.0-3.0.2-1.noarch.rpm


To verify the installation of the Plugin:

1. Go to the PLUGINS tab in the Fuel UI:

Figure 9. XtremIO Fuel Plugin installed in a Mirantis distribution – Fuel UI

Note: It is recommended to install the XtremIO plugin in the Fuel Master prior to setting up any OpenStack environment, so that new

environments can be configured to use the XtremIO storage at initialization.


Next, after installing the XtremIO Fuel Plugin, we will need to initialize and configure our XtremIO cluster as the Storage Backend for

our new OpenStack environment through Fuel.

To initialize XtremIO for a new OpenStack environment:

1. During the creation of a new environment:

i. Select “EMC” as the Block Storage option in the Storage Backends section when creating a new environment through the Fuel UI:

Figure 10. Selecting XtremIO as a Storage Backend option for a new OpenStack environment


2. After a new environment is created:

i. In the new environment’s menus in Fuel, go to Settings → Storage.

ii. Tick the “EMC XtremIO driver for Cinder” box to enable it as a valid storage backend for the environment and fill in the details of the XtremIO cluster to be used by the new environment:

o XMS username

o XMS password

o XMS IP

o XtremIO cluster name

iii. Save the new settings:

Figure 11. Configuring connection information to the XtremIO Cluster from the Fuel UI

Note 1: It is recommended to initialize the XtremIO Cinder Driver prior to deploying any nodes in the environment so that connection

settings will be automatically set to the environment during deployment.

Note 2: Make sure to configure this step with a privileged XMS user (and its correct password) that can view and create all required entities in the storage array.

The next step after creating an OpenStack environment is deploying new nodes into it. The node deployment procedure remains the same with the XtremIO Plugin, though if there is no intention to use Cinder LVM as an additional storage backend alongside XtremIO, there is also no need to provision Cinder disks in the environment, since all of the environment’s volumes will be created on the XtremIO cluster rather than on the Cinder Nodes. If the user does choose to deploy a Cinder Node in the environment, it is recommended to deploy it as an additional role on a controller node.

During the deployment of the new environment, Fuel will create the [XtremIO] section in the cinder.conf file and set its parameters

according to the information set in the Fuel UI. It will also create and associate the XtremIO-backend volume type for use by the

OpenStack environment.


MANUAL CONFIGURATION OF THE XTREMIO CINDER DRIVER

The XtremIO Cinder Driver can also be configured post-deployment (without the Fuel Plugin) by editing the cinder.conf file on the controller nodes and manually creating an XtremIO volume type in the environment.

These are the basic required parameters to set in the cinder.conf file to apply a connection to an XtremIO cluster:

Under the [DEFAULT] stanza, set:

o enabled_backends = XTREMIO

Create a new stanza at the bottom of the file titled [XTREMIO]. Under it, set:

o volume_driver = XTREMIO_CINDER_DRIVER

o san_ip = XMS_IP

o san_login = XMS_USER

o san_password = XMS_USER_PASSWORD

o volume_backend_name = XTREMIO_BACKEND

o xtremio_cluster_name = CLUSTER_NAME

o xtremio_array_busy_retry_count = RETRY_COUNT

o xtremio_array_busy_retry_interval = RETRY_INTERVAL

Where:

The enabled_backends parameter value under the [DEFAULT] stanza should be the same string as the title of the new stanza.

- In order to enable several storage backends, use a comma-separated list in the enabled_backends parameter.

The volume_driver parameter should be one of the following two values, according to the protocol used to attach volumes in the environment, FC or iSCSI:

o cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver

o cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver

san_ip is the IP address or DNS name of the XMS. san_login is the XMS user designated for OpenStack operations (with sufficient privileges to view and create all storage components addressed by OpenStack). san_password is the password of the XMS user set in the san_login parameter. xtremio_cluster_name is the name of the XtremIO cluster appointed to this OpenStack environment (optional if the XMS manages only a single cluster).

The volume_backend_name parameter is any meaningful value to later relate to when creating the XtremIO volume type.

The xtremio_array_busy_retry_count and xtremio_array_busy_retry_interval parameters determine the retry behavior when the storage array is busy or not responding (the number of retries, and the number of seconds between retries, respectively).
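The effect of these two parameters can be sketched as a simple retry loop. This is an illustrative sketch only, not the driver's actual implementation; RuntimeError stands in for an "array busy" error:

```python
import time

# Sketch of the semantics of xtremio_array_busy_retry_count and
# xtremio_array_busy_retry_interval: try an operation up to `count`
# times, sleeping `interval` seconds between attempts.
def retry_when_busy(op, count, interval, sleep=time.sleep):
    last_exc = None
    for attempt in range(count):
        try:
            return op()
        except RuntimeError as exc:  # stands in for an "array busy" error
            last_exc = exc
            if attempt < count - 1:
                sleep(interval)
    # All attempts failed: surface the last "busy" error to the caller.
    raise last_exc
```

With count=5 and interval=5, a busy array is retried five times, five seconds apart, before the operation finally fails.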

Here is an example of setting these configurations in the cinder.conf file:

[DEFAULT]
enabled_backends = XtremIO

[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
san_ip = 10.0.0.1
san_login = username
san_password = password
volume_backend_name = XtremIOAFA
xtremio_cluster_name = XtremIO-Cluster
xtremio_array_busy_retry_count = 5
xtremio_array_busy_retry_interval = 5

Figure 12. Example XtremIO settings in the cinder.conf file

A configuration summary for all storage-related parameters can be found in Appendix A.

After configuring the cinder.conf file, restart the Cinder services for the changes to take effect.
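A mismatch between the enabled_backends value and the stanza title is an easy mistake to make. The consistency of a cinder.conf fragment can be checked with a short script; this is an illustrative sketch using Python's standard configparser, not part of OpenStack, and the missing_backend_stanzas helper name is hypothetical:

```python
import configparser

# A trimmed cinder.conf fragment for demonstration.
SAMPLE = """
[DEFAULT]
enabled_backends = XtremIO

[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
san_ip = 10.0.0.1
volume_backend_name = XtremIOAFA
"""

def missing_backend_stanzas(conf_text):
    """Return backends named in enabled_backends that have no stanza."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    enabled = [b.strip() for b in cfg["DEFAULT"]["enabled_backends"].split(",")]
    return [b for b in enabled if b not in cfg.sections()]

print(missing_backend_stanzas(SAMPLE))  # [] -> every backend has a stanza
```

An empty list means every enabled backend has a matching stanza; any name in the result points at a typo in either the enabled_backends value or a stanza title.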


Now we can create and associate an XtremIO Volume Type to the environment:

1. On Horizon go to Admin → System → Volumes → Volume Types and click on the +Create Volume Type button at the top:

2. In the Create Volume Type dialog box, give a name and a description (optional) to the Volume Type and click Create Volume Type:

Figure 13. Create an XtremIO Volume Type on OpenStack Horizon

3. After creating the XtremIO volume type, in the list of Volume Types, open the dropdown list under the ACTIONS column next to the newly created XtremIO volume type and click View Extra Specs:

Figure 14. XtremIO Volume Type View Extra Specs

4. In the Volume Type Extra Specs dialog box, click +Create to create a new Extra Spec.


5. In the Create Volume Type Extra Spec dialog box, type volume_backend_name in the Key field and, in the Value field, type the value that corresponds to that key in the XtremIO stanza of the cinder.conf file, then click Create:

Figure 15. XtremIO Volume Type Create Extra Spec

6. Back in the Volume Type Extra Specs box, ensure the new spec was created and Close the box:

Figure 16. XtremIO Volume Type Extra Specs

Now we are ready to start working with our configured XtremIO cluster in our OpenStack environment.


OPENSTACK BLOCK STORAGE OPERATIONS WITH XTREMIO

CINDER OPERATIONS WHEN USING XTREMIO

After an environment is deployed and both the configuration of the cinder.conf file and the creation and association of a new volume type are done (either through the Fuel Plugin or manually), we can start working in the environment to launch instances, run Heat stacks etc., all while using our XtremIO cluster as the storage backend for our Cinder volumes.

VERIFYING XTREMIO VOLUME TYPE

If the XtremIO Plugin for Fuel was used to configure the XtremIO cluster in the environment, we can start by verifying the creation of an XtremIO volume type in the new OpenStack environment: go to Admin → System → Volumes → Volume Types on the Horizon dashboard and review the XtremIO volume type and its Extra Specs.

CREATING AND MAPPING XTREMIO VOLUMES IN OPENSTACK

We can start by creating XtremIO volumes from the OpenStack interface and using them in our environment.

To create an XtremIO Cinder Volume:

1. Go to Project → Compute → Volumes → Volumes.

2. Click +Create Volume.

3. In the Create Volume dialog box, fill in all required information and select XtremIO as the Type of the volume (or the Volume Type Name that is defined for XtremIO Volumes in your system) and click Create Volume:

Figure 17. Create an XtremIO Volume in OpenStack


After creation, we will be able to see the new volume in the Volumes tab and get additional information on it when clicking its name:

Figure 18. List of Volumes

Figure 19. Volume details

We can also see the new volume created in the XMS UI, under Configuration → Volumes:

Figure 20. List of Volumes on the XMS

Note that the Name of the volume on the XMS is actually its ID in OpenStack.


To attach a Volume to an existing Instance:

1. Go to Project → Compute → Volumes → Volumes to view the list of existing volumes.

2. Choose a volume, and click on the dropdown list under the ACTIONS column.

3. Choose Manage Attachments:

Figure 21. Managing Attachments of a volume

4. In the Manage Volume Attachments dialog box choose an instance to attach the volume to and click Attach Volume:

Figure 22. Attaching a Volume to an Instance

Note that in order to properly attach an XtremIO volume to an instance, there should be a working iSCSI or FC connection (depending on the driver chosen) between all Compute Nodes in the environment and the XtremIO cluster, configured with multipathing.


After attaching a volume to an instance we will be able to see the attachment through both the Volumes list and the instance’s details:

Figure 23. An Attached Volume in the Volumes list

Figure 24. An Attached Volume in an

Instance’s details window


We can further connect to the instance itself and find the new volume attached to it, get its size, format it and start using it.

On the XMS we will see that the volume is now mapped and has an NAA Identifier (some data was already written to the new volume

here):

Figure 25. An Attached Volume on the XMS

Volume detach and volume delete are done similarly through the Project → Compute → Volumes → Volumes menu, using the Manage Attachments and Delete Volume options.

Volume extend can also be done, using the Extend Volume option. Volumes can be extended only if they are available and not attached to an instance.


OPENSTACK SNAPSHOTS AND CLONES OF XTREMIO VOLUMES

The XtremIO Cinder Driver supports OpenStack’s snapshots and clones of volumes. XtremIO uses a unique redirect-on-write technology (XVC) to provide a fast, space-saving and efficient way to create and maintain snapshots and clones. No data copy is involved in the snapshot/clone creation process, which makes it instantaneous, and snapshots and clones are accessed the same way as regular volumes, with no performance hit. This makes them usable for all multi-copy purposes, including test and development environments, data analysis, backups and more. The XtremIO Cinder Driver uses XtremIO Virtual Copies for all copy-related operations in OpenStack, as we will see throughout this document (snapshots, clones, consistency group snapshots, image caching, volume-backed images, etc.), making storage-related operations quick and non-costly for the client. (Storage administrators should pay attention to the number of snapshots in a single XtremIO Volume Snapshot Group, as they are currently limited to 512 snapshots per volume.)

The volume created in the last section is used here as well.

To create a Snapshot of an existing Volume:

1. Go to Project → Compute → Volumes → Volumes to see all existing volumes.

2. Choose a volume, and click on the drop down list under the ACTIONS column.

3. Choose Create Snapshot:

Figure 26. Create a Volume Snapshot


4. In the Create Volume Snapshot dialog box give a name and a description to the snapshot and click Create Volume Snapshot (the (Force) notation will appear when trying to take a snapshot of an attached volume. Users should know whether it is safe to take a snapshot of a volume that is attached to an instance, depending on the content of the volume and the application using it):

Figure 27. Create Volume Snapshot dialog box

After creation, we will be able to see the new Snapshot in the Volume Snapshots tab and get additional information on it when clicking

its name:

Figure 28. List of Snapshots

Figure 29. Snapshot details


On the XMS we will see the snapshot as a read-only copy in the Volumes list, and we can also see its hierarchy (what it was copied from) in the Volume Snapshot Groups tab below:

Figure 30. Volumes and Snapshots on the XMS – Read Only snapshot

Figure 31. Volume Snapshot Groups hierarchy on the XMS – Read Only snapshot

Note that again, the Name of the snapshot on the XMS is actually its ID in OpenStack.

The main action that can be performed on an OpenStack snapshot is creating a new volume from it, for any reuse or restore purposes.

We will see how this is done and how it looks on XtremIO.

To create a Volume from an existing Snapshot:

1. Go to Project → Compute → Volumes → Volume Snapshots, choose a snapshot and click the Create Volume button under the ACTIONS column:

2. In the Create Volume dialog box, fill in the required information (Type should remain XtremIO for the volume to be created on the XtremIO cluster) and click Create Volume (the new volume can be of a size equal to or greater than that of the snapshot it is created from):

Figure 32. Create Volume from a Snapshot

The new volume will be visible and usable as a regular volume in OpenStack (and will hold the data written to the original volume). When viewing the volume through the cinder show command in the CLI, the ID of the snapshot it was created from is mentioned.


On the XMS we will see a new copy (this time a writable one) in the list of volumes and in the Volume Snapshot Groups hierarchy:

Figure 33. Volumes and Snapshots on the XMS – Writable clone

Figure 34. Volume Snapshot Groups hierarchy on the XMS – Writable clone

Notice how read-only and writable copies are non-costly in XtremIO: for both entities created from the same source volume, no additional data is written to disk, and the Logical Space In Use for the entire VSG (Volume Snapshot Group) remains the same as the size of the original volume (extra data is written to disk only when changes are made to either copy in the VSG).

Snapshot deletion is done similarly through the Project → Compute → Volumes → Volume Snapshots menu, using the Delete Volume Snapshot button.

An OpenStack snapshot is not tied to the volume it was created from, and can be deleted at any time. The volume it was created from,

however, is tied to its snapshots, and cannot be deleted until all of its snapshots are deleted.

Footnote: Due to a current OpenStack bug, volumes created from an existing snapshot are automatically created with the size of the source snapshot, even when a larger size is specified. A simple workaround is to extend the volume to the desired size after its creation. The bug will be fixed in OpenStack’s Pike release, and a fix has already been pushed to current XtremIO Cinder Driver versions (it can be obtained separately for the Mitaka version).


Next we will take a look at volume clones.

To create an XtremIO volume clone, open the Create Volume dialog box, select Volume as the Volume Source of the new volume and, in the Use a volume as a source dropdown list, select the volume to clone from. Fill in the other required information and click Create Volume (the new volume can be of a size equal to or greater than that of the volume it is cloned from):

Figure 35. Create a Volume clone

Note that attached volumes might not be listed as a possible volume source for new volumes, hence won’t be available to be cloned

from Horizon. In such a case, running the command through CLI should still work.

The new volume will be seen and can be used as a regular volume on OpenStack and will hold the data of the volume it was cloned

from. When viewing the volume through the cinder show command in the CLI, the ID of the volume it was cloned from will be

mentioned.

On the XMS we will see the cloned volume as a writable copy of the volume from which it was cloned.

The original and cloned volumes are not tied to each other and can be deleted without regard to one another.

Footnote: As with creating a volume from a snapshot, the bug regarding the size of a volume created from a source also affects volume

clones. The same fixes are applied (for future OpenStack versions and current XtremIO Cinder Driver versions), and the same

workaround (manual resize) can be used to bypass this temporary bug.


MANAGE AND UNMANAGE VOLUMES

The XtremIO Cinder Driver gives OpenStack the ability to take control of existing volumes in the storage array, and also to release volumes that are to be removed from the OpenStack environment but should not be deleted. These actions are called Manage and Unmanage, respectively.

To Unmanage a Volume in OpenStack:

1. Go to Admin → System → Volumes → Volumes.

2. Choose a volume, and click on the drop down list under the ACTIONS column.

3. Choose Unmanage Volume:

Figure 36. Unmanage a Volume

4. In the Confirm Unmanage Volume dialog box validate the information of the volume to unmanage and click Unmanage:

Figure 37. Confirm Unmanage Volume dialog box

Note: OpenStack volumes can only be unmanaged if they are available and unattached.

After unmanaging a volume, it will no longer appear in the list of volumes in OpenStack. On the XMS we will see the same volume with

its name wrapped with “volume-<volume_id>-unmanaged”.


To Manage an existing XtremIO Volume by OpenStack:

1. Go to Admin → System → Volumes → Volumes.

2. Click the +Manage Volume button.

3. Fill in the volume’s Identifier (its name on the XMS), specify that the Identifier Type is a Name, specify the Host* to take the volume from, specify the Volume Type to be the XtremIO type, fill in other required information and click Manage.

Figure 38. Manage a Volume

* Note: the Host parameter is of the form host@volume_type#volume_backend_name. Potential Hosts can be viewed with the cinder get-pools CLI command (a Host can also be seen under the Host column in the Volumes list in the Admin → System → Volumes menu, if that Host has a volume in the system, as seen in the example in Figure 36). In our case, this value is openstack-controller-cinder1.xiodrm.lab.emc.com@XtremIO#XtremIOAFA.
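The structure of the Host string can be illustrated with a small parsing sketch. This is illustrative only; the parse_cinder_host helper name is hypothetical, and the hostname is the one from this paper's demo environment:

```python
# Sketch: split a Cinder host string of the form
# host@volume_type#volume_backend_name into its three parts.
def parse_cinder_host(host_string):
    host, _, rest = host_string.partition("@")
    volume_type, _, backend_name = rest.partition("#")
    return host, volume_type, backend_name

parts = parse_cinder_host(
    "openstack-controller-cinder1.xiodrm.lab.emc.com@XtremIO#XtremIOAFA")
print(parts)
# ('openstack-controller-cinder1.xiodrm.lab.emc.com', 'XtremIO', 'XtremIOAFA')
```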

After completing the Manage action and taking control of a volume, the volume retains all of its previous data and gets a new ID, as if it had just been created (even if the volume was previously created by OpenStack and was unmanaged and managed again). It will appear normally in all volume lists.
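The Manage action can be sketched through the Cinder CLI as well, using the example Host value from above; the volume name is hypothetical and identifies the volume by its name on the XMS:

```shell
# List the potential Hosts (pools) to take a volume from
cinder get-pools

# Take control of an existing XtremIO volume by its XMS name,
# assigning it the XtremIO volume type
cinder manage --id-type source-name --volume-type XtremIO \
    openstack-controller-cinder1.xiodrm.lab.emc.com@XtremIO#XtremIOAFA \
    XtremVol
```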


CONSISTENCY GROUPS

OpenStack Consistency Groups allow us to group several volumes together so that when taking snapshots of them, for a backup or any other reason, the snapshots are taken on all volumes at the same time and are consistent across them all. The XtremIO Cinder Driver supports all of OpenStack’s Consistency Group operations.

Note: It is common for OpenStack distributions to disable consistency groups by default by disallowing access through Cinder’s policy.json file. To enable the usage of consistency groups in the system, the administrator should change the existing permissions for all consistency group actions in the /etc/cinder/policy.json file on the OpenStack Controller Nodes from "group:nobody" to more permissive permissions ("rule:admin_or_owner", for example). The edited consistency group actions in the policy.json file should look like the following:

Figure 39. Consistency Group actions permissions in Cinder’s policy.json file

For the purpose of this section we created 2 XtremIO volumes, attached them to an instance and wrote some data to them.

"consistencygroup:create": "rule:admin_or_owner",
"consistencygroup:delete": "rule:admin_or_owner",
"consistencygroup:update": "rule:admin_or_owner",
"consistencygroup:get": "rule:admin_or_owner",
"consistencygroup:get_all": "rule:admin_or_owner",
"consistencygroup:create_cgsnapshot": "rule:admin_or_owner",
"consistencygroup:delete_cgsnapshot": "rule:admin_or_owner",
"consistencygroup:get_cgsnapshot": "rule:admin_or_owner",
"consistencygroup:get_all_cgsnapshots": "rule:admin_or_owner",


To create a new Consistency Group in OpenStack:

1. Go to Project > Compute > Volumes > Volume Consistency Groups.

2. Click the +Create Consistency Group button.

3. In the Consistency Group Information tab of the Create Consistency Group dialog box, give a Name and a Description to the Consistency Group and choose its Availability Zone.

Figure 40. Create a Consistency Group dialog box – Consistency Group Information

4. In the Manage Volume Types tab of the Create Consistency Group dialog box, add the XtremIO type to the Selected volume types list to be added to the Consistency Group and click Create Consistency Group:

Figure 41. Create a Consistency Group dialog box – Manage Volume Types


After creation, we will be able to see the new Consistency Group in the Volume Consistency Groups tab and get additional information on it by clicking its name:

Figure 42. List of Consistency Groups

Figure 43. Consistency Group details

On the XMS we will see the consistency group as a Read-only consistency group in the Consistency Groups tab under

Configuration:

Figure 44. Consistency Groups on the XMS

Note that again, the Name of the Consistency Group on the XMS is actually its ID in OpenStack.
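Creating a consistency group can also be sketched through the CLI, by specifying the volume types it will hold (the group name follows the example used in this section):

```shell
# Create a consistency group for volumes of the XtremIO volume type
cinder consisgroup-create --name XtremCG XtremIO
```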


Next, we need to choose the volumes that will be part of this consistency group and add them to it.

To add Volumes to a Consistency Group:

1. Go to Project > Compute > Volumes > Volume Consistency Groups and click the Manage Volumes button in the ACTIONS column next to the relevant Consistency Group.

2. In the Add/Remove Consistency Group Volumes dialog box, add the required volumes to the Selected volumes list of this consistency group and click Edit Consistency Group:

Figure 45. Adding Volumes to a Consistency Group

Note: Due to a bug in some OpenStack versions, volumes with an underscore (“_”) in their names cannot be added to a consistency group in Horizon (error message: Unable to edit consistency group.). If this occurs, either edit the volumes’ names or use the CLI to add the volumes to the consistency group.
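A minimal CLI sketch of adding volumes to the consistency group (the volume names are hypothetical):

```shell
# Add two existing volumes to the consistency group, by name or ID
cinder consisgroup-update --add-volumes XtremVol1,XtremVol2 XtremCG

# Volumes can be removed from the group the same way
cinder consisgroup-update --remove-volumes XtremVol2 XtremCG
```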

After adding volumes to a consistency group we will see them in the detailed page of the consistency group (the one from Figure 43) in

the Volumes section.

On the XMS we will see that the consistency group now has 2 Volumes, along with their total and used sizes. We can also see the volumes that belong to the consistency group in the Volumes tab below:

Figure 46. Consistency Groups’ Volumes on the XMS

Through the CLI, the relationship between a volume and its consistency group can be seen with the cinder show command used to view a volume (the consistencygroup_id of its consistency group, if there is one, will be stated; a volume cannot belong to more than one consistency group in OpenStack).

To delete or edit a consistency group, or to remove volumes from it, we can similarly go to the Project > Compute > Volumes > Volume Consistency Groups menu and use the Delete Consistency Group, Edit Consistency Group and Manage Volumes options.

An OpenStack Consistency Group is tied to its volumes, and its volumes are tied to it. A Consistency Group cannot be deleted if it has volumes belonging to it, unless done through the CLI with the “force” flag (in which case, the volumes belonging to it will be deleted together with it). Volumes also cannot be deleted while they belong to a consistency group, and should first be removed from the consistency group in order to be deleted independently.


Now that we have volumes in our new consistency group, we can take a Consistency Group Snapshot, which will take a snapshot of all

volumes in that consistency group at the same time.

To take a Consistency Group Snapshot:

1. Run the cinder cgsnapshot-create command through the CLI:

# cinder cgsnapshot-create --name XtremCGsnap XtremCG

After creating the consistency group snapshot, we will see two new snapshots in the snapshots list in Horizon, one for each volume,

both named after the consistency group snapshot name that we chose.

On the XMS we will see that the consistency group now has 1 Snapshot Set, which we can see in the Snapshot Sets tab below. We will see the new snapshots in the volumes list as well. We can also go to Configuration > Snapshot Sets and see the consistency group snapshot that was created there:

Figure 47. A Consistency Group Snapshot Set on the XMS

To delete a consistency group snapshot we would use the cinder cgsnapshot-delete command.

Snapshots created from a consistency group snapshot are tied to it and cannot be deleted individually. To delete them, one must delete the entire consistency group snapshot, which deletes all the snapshots created with it.

A consistency group cannot be deleted if it has existing consistency group snapshots that were created from it; those should be deleted first in order to delete the consistency group.

Footnote: Due to a bug in the 1.0.7 Cinder Driver, the creation of a consistency group snapshot does not fully complete, and consistency group snapshots cannot be used with this version of the driver. This issue is fixed in subsequent versions of the Cinder Driver, and the fix was also pushed to older Cinder Driver versions (to be obtained separately as of this writing).

To use a consistency group snapshot, we need to create a new consistency group from it, in a way that resembles creating a volume from a snapshot: the consistency group snapshot, along with its snapshots, is cloned to a new consistency group with new volumes, using XtremIO Virtual Copies, holding the same data as the snapshots.

To create a Consistency Group from a Consistency Group’s Snapshot:

1. Run the cinder consisgroup-create-from-src command specifying the source consistency group snapshot to use:

# cinder consisgroup-create-from-src --cgsnapshot XtremCGsnap --name XtremCGsnapcopy

In addition to creating a consistency group from a consistency group snapshot, we can also create a consistency group from an existing consistency group, in a way that resembles cloning a volume to a new volume: the consistency group is cloned, together with its volumes, to a new consistency group with new volumes, using XtremIO Virtual Copies, holding the same data as the volumes of the source consistency group.

To create a Consistency Group from another Consistency Group:

1. Run the cinder consisgroup-create-from-src command specifying the source consistency group to use:

# cinder consisgroup-create-from-src --source-cg XtremCG --name XtremCGcopy

On the XMS, both clone actions (from a consistency group or from a consistency group snapshot) will result in a new consistency group

with new volumes, which are writable snapshots of their source volumes or snapshots.

Note: a consistency group can be cloned to a new consistency group only if its volumes are available and unattached to an instance.


BOOTING INSTANCES USING XTREMIO

OPENSTACK BOOT OPTIONS

In OpenStack, there are five different ways of launching new instances, using five different boot sources:

Boot from Image using local storage on Compute Nodes

Boot from Image using Cinder Volumes

Boot from an Instance’s Snapshot

Boot a bootable Cinder Volume

Boot from a bootable Cinder Volume Snapshot

We will examine the differences between the boot sources and how XtremIO’s all-flash array and the XtremIO Cinder Driver can be utilized in some of them for a faster, more efficient OpenStack deployment.

BOOT FROM IMAGE USING LOCAL STORAGE

This is the “regular” boot option. It takes the image stored in Glance and deploys it on the local storage of the Compute Node on which the instance will run. This is a rather slow method of deployment, as the entire image is copied and transferred across the network between the Controller Node (where Glance images are stored) and the Compute Node every time an instance is launched. No external storage is used with this method, and XtremIO’s capabilities are not utilized.

To boot from an Image using local storage:

1. Go to Project > Compute > Instances and click the Launch Instance button at the top.

2. In the Source section of the Launch Instance form, select Image from the Select Boot Source dropdown list and specify No for the Create New Volume option. Continue by selecting the image to use for the instance and complete the rest of the form to launch the new instance:

Figure 48. Booting an Instance from an Image using local storage


BOOT FROM IMAGE USING CINDER VOLUMES

This option operates the same way as the regular boot from image option, only here the root volume of the new instance will be created

on a Cinder Volume, rather than a Compute Node’s local storage. This method may seem virtually the same regarding the data copied,

though the Glance image cache optimization implemented in the XtremIO Cinder Driver makes this option much more fast and efficient.

Glance image cache optimization works as follows: the first time an image is used as the source of a Cinder Volume, a volume copy is created for it in the storage backend, to be used as a source volume for future uses of that image. From then on, each time a request is made to use that image as the source of a new Cinder Volume (with or without booting a new instance from it), the storage backend clones that source volume and does not need to copy the image from Glance all over again. With XtremIO’s instantaneous, space-efficient volume copy capabilities, this action is not only extremely fast, it also costs very little disk space on the storage array, since no extra data blocks are written to disk in an XtremIO Virtual Copy operation; only unique changes to the volumes will consume space later.

To enable the Glance image cache optimization in the system some additional parameters need to be set in the cinder.conf file:

Under the [DEFAULT] stanza, set:

o cinder_internal_tenant_project_id = PROJECT_ID

o cinder_internal_tenant_user_id = USER_ID

Under the [XtremIO] stanza, set:

o image_volume_cache_enabled = True

o image_volume_cache_max_size_gb = MAX_SIZE_GB

o image_volume_cache_max_count = MAX_COUNT

o xtremio_volumes_per_glance_cache = VOLUMES_PER_CACHE

Where:

The cinder_internal_tenant_project_id and the cinder_internal_tenant_user_id are the project and user that will own the cached image-volumes created in the backend storage. This project and user require no special privileges and can be almost any value. They can be set to the services’ project ID and cinder’s user ID, for example.

The image_volume_cache_max_size_gb is the maximum size allowed for the entire cache and the image_volume_cache_max_count is the maximum number of cache entries allowed in it. Both can be set to 0 for unlimited.

The xtremio_volumes_per_glance_cache limits the number of volumes the storage backend will create from a single cached image. After that limit is reached for a single cached image, a new cache entry will be created for the same image and subsequent volumes will be created from it. The recommended default value is 100 (and can be changed according to the environment’s snapshot needs).

Here is an example for setting these configurations in the cinder.conf file:

Figure 49. Glance Image Cache settings in the cinder.conf file

After configuring the cinder.conf file, restart cinder services for changes to take effect.

Also, for the image to be downloaded to an XtremIO volume (with or without using the image cache), there should be a working iSCSI

or FC connection (depending on the Cinder driver used) between all Controller Nodes and the XtremIO cluster, defined with multipath,

since the destination volume is attached to a Controller Node for the purpose of the image download operation.

[DEFAULT]
cinder_internal_tenant_project_id = 62c2d95d88974bc395d14b32ffcfafbb
cinder_internal_tenant_user_id = e503c1842087417e90ca5697cbcf57a3
[XtremIO]
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 0
image_volume_cache_max_count = 0
xtremio_volumes_per_glance_cache = 100


Footnote: As with cloning a volume from a snapshot or another volume, the bug regarding the size of a volume created from a source also affects volumes created from the image cache. The same fixes apply (for future OpenStack versions and current XtremIO Cinder Driver versions), and possible workarounds are either resizing the created volumes manually, or resizing the cached image volume manually to fit a size acceptable for all new volumes created from it.

To boot from an Image using a Cinder Volume, choose the Image option in the Select Boot Source dropdown list in the Source

section of the Launch Instance form, and choose Yes under the Create New Volume option. Specify the Volume Size (GB) required

and whether to Delete Volume on Instance Delete and choose whether to change the default (vda) Device Name of the volume in the

instance. Continue by selecting the Image to use for the instance and complete the rest of the form to launch the new instance:

Figure 50. Booting an Instance from an Image using Cinder Volumes
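The same launch can be sketched with the nova CLI, using a block device mapping that creates a new volume from the image; the flavor, instance name and image ID below are placeholders:

```shell
# Boot a new instance whose root disk is a new 10 GB Cinder Volume
# created from the given Glance image; shutdown=remove deletes the
# volume when the instance is deleted
nova boot --flavor m1.small \
    --block-device source=image,id=<image-uuid>,dest=volume,size=10,shutdown=remove,bootindex=0 \
    XtremInstance
```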


BOOT FROM AN INSTANCE’S SNAPSHOT

In OpenStack we can create snapshots of existing instances to save their current state in order to use them later as a source for future

instances if necessary.

To create an Instance Snapshot:

1. Go to Project > Compute > Instances and click the Create Snapshot button under the ACTIONS column.

2. In the Create Snapshot dialog box give a name to the new instance snapshot and click Create Snapshot.

Figure 51. Create an Instance Snapshot dialog box
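Taking an instance snapshot can also be sketched through the CLI; the instance and snapshot names are hypothetical:

```shell
# Create a snapshot (a Glance image) of a running instance
nova image-create XtremInstance XtremInstanceSnap
```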

When a snapshot of an existing instance is taken, it is saved in the Glance repository to be used as a potential source for future instances. When taking a snapshot of a “regular” instance (whose root volume resides on local storage), it is presented in Glance as a Snapshot; to use it as the source of a new instance, select the Instance Snapshot option from the Select Boot Source dropdown list in the Source section of the Launch Instance form, and choose the desired instance snapshot:

Figure 52. Booting an Instance from an Instance Snapshot using local storage


However, when taking a snapshot of an instance that runs on an XtremIO Cinder Volume, it is presented in Glance as a regular image, and launching a new instance from it is done the same way as launching a new instance from any other image. This also creates a snapshot entry of the original instance’s root volume in the Volume Snapshots list of OpenStack. The benefit of using XtremIO Volumes as root volumes for instances becomes even greater when taking an instance snapshot: both taking the snapshot and creating a new instance from it are extremely quick when using XtremIO Volumes, since in both cases the storage array takes a copy of existing volumes/snapshots, which is an instantaneous operation on XtremIO that costs no extra space. The same procedure using local storage costs more, both in creation time and in storage space used.


BOOT A BOOTABLE CINDER VOLUME

This option allows us to take an existing unused bootable Cinder Volume and use it as the root volume of a new instance.

To boot a new Instance using an existing bootable Cinder Volume, choose the Volume option in the Select Boot Source dropdown list in the Source section of the Launch Instance form, and specify whether to Delete Volume on Instance Delete. Continue by selecting the Volume to use for the instance and complete the rest of the form to launch the new instance:

Figure 53. Booting an Instance using a bootable Cinder Volume
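A CLI sketch of booting an instance from an existing bootable volume (the flavor, instance name and volume ID are placeholders):

```shell
# Boot a new instance using an existing bootable Cinder Volume
# as its root volume
nova boot --flavor m1.small --boot-volume <volume-uuid> XtremInstance
```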


A bootable volume can be created either as a clone of another bootable volume or snapshot, or by creating a volume from an Image. The latter case can be seen as a two-step version of the previous “Boot from Image Using Cinder Volumes” option: we first create a bootable volume from an image, and later use that volume to launch a new instance, as opposed to doing both with a single instruction. To create a bootable Volume from an existing Image, choose Image as the Volume Source in the Create Volume dialog box and specify the image to use in the Use image as a source dropdown list:

Figure 54. Create a bootable Volume from an Image
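Through the CLI, the same bootable volume can be created from an image as follows (the image ID, name and size are placeholders):

```shell
# Create a 10 GB bootable Cinder Volume from a Glance image
cinder create --image-id <image-uuid> --name XtremBootVol 10
```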


A bootable volume can also be used to create a new image for Glance.

To create an Image from a bootable Cinder Volume:

1. Go to Project > Compute > Volumes > Volumes and click the Upload to Image button in the dropdown list under the ACTIONS column next to the relevant bootable volume.

2. In the Upload Volume to Image dialog box give a name to the new image and specify the Disk Format, and then click Upload:

Figure 55. Create an Image from a Bootable Cinder Volume

With this specific action and with default configurations, the image will be created in the default Glance store (usually Swift), meaning the process uploads the volume to a new image, transferring all of its data from the XtremIO cluster to a Controller Node. This process can be significantly shortened by making Cinder (i.e. XtremIO) a viable store option for Glance images, which enables creating an image from a volume using XtremIO’s instantaneous snapshots alone, storing the image on the storage array itself.

To enable this option, we need to configure Glance to use Cinder as a potential store and configure Cinder to perform the image upload to itself. The following parameters should be set:

In the Glance configuration file (/etc/glance/glance-api.conf) (mandatory parameters in bold):

Figure 56. Setting Cinder as a store option for Glance in the glance-api.conf file

- In order to enable several Glance stores, use a comma-separated list in the stores parameter (you can add the cinder store in addition to the existing stores).

[DEFAULT]
show_multiple_locations = True
[glance_store]
stores = glance.store.cinder.Store


In cinder.conf (mandatory parameters in bold):

Figure 57. Setting Image Upload to Cinder in the cinder.conf File

- The enable_force_upload parameter, when set to true, will allow forcing the image creation for an attached volume.

- The image_upload_use_internal_tenant parameter, when set to true, will create the volume-backed images with the cinder_internal_tenant and cinder_internal_user configured in the DEFAULT stanza of the cinder.conf file, an example of which is shown in Figure 49.

After configuring the cinder.conf and glance-api.conf files, restart cinder and glance services for changes to take effect.

Now we can either create an image that points directly to a bootable Cinder volume, or “upload” a bootable volume to a new image that will reside on Cinder, in which case the “uploading” process is actually the creation of an XtremIO Virtual Copy and the use of that copy as the new image.

To create an Image from a bootable Cinder Volume, use the same method used to upload a volume to an image, but now specify the

Disk Format to be RAW.

To create a new Image and point it directly to a bootable Cinder Volume:

1. Run the glance image-create command with a raw disk format and a bare container format:

# glance image-create --disk-format raw --container-format bare --name XtremImage

2. Run the glance location-add command with the newly created image and the requested volume to point to (IDs only):

# glance location-add --url cinder://<volume-uuid> <image-uuid>

Both options will result in an image that points to an XtremIO Volume, and creating volumes or booting instances from it will be done

quickly using XtremIO Virtual Copies on the volume the image points to.

Deleting an image that points to a volume also deletes the volume. Deleting the volume independently without deleting the image is possible, but would cause inconsistency if the image still points to it. It is better to first remove the association between the image and the volume prior to deleting the volume.

[DEFAULT]
glance_api_version = 2
allowed_direct_url_schemes = cinder
enable_force_upload = True
[XtremIO]
image_upload_use_cinder_backend = True
image_upload_use_internal_tenant = True


BOOT FROM A BOOTABLE CINDER VOLUME SNAPSHOT

This option allows us to boot a new instance using an existing bootable Snapshot as a source. It first creates a volume from the snapshot and then uses that volume as the root volume for the new instance. This operation is also very efficient when using XtremIO volumes, as creating a volume from a snapshot in fact creates a copy of that snapshot on XtremIO, an instantaneous, non-costly operation.

To boot an Instance using an existing bootable Snapshot as a source, choose the Volume Snapshot option in the Select Boot Source dropdown list in the Source section of the Launch Instance form, and specify whether to Delete Volume on Instance Delete. Continue by selecting the snapshot to use as a source for the instance and complete the rest of the form to launch the new instance:

Figure 58. Booting an Instance from a bootable Snapshot
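A CLI sketch of the same operation (the flavor, instance name and snapshot ID are placeholders):

```shell
# Boot a new instance from a bootable volume snapshot; a new volume
# is first created from the snapshot and used as the root volume
nova boot --flavor m1.small --snapshot <snapshot-uuid> XtremInstance
```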

A bootable snapshot can be created either as a snapshot of a bootable volume, or when taking a snapshot of an instance that uses a

Cinder Volume as its root volume.


SUMMARY

Here is a summary diagram of the differences between each instance boot option and other image related operations when using

XtremIO for Cinder Volumes:

Figure 59. Instance boot options diagram

We can see from the diagram that roughly all operations performed on XtremIO Cinder Volumes involve no data copies and are instantaneous, regardless of the size of the images used, and those volumes benefit from the XtremIO features; operations performed through the OpenStack nodes, on the other hand, all involve data copies, which prolongs their completion significantly (the larger the amount of data to be copied, the longer the operation takes).


APPENDIX A – CONFIGURATION SUMMARY

Here are basic and recommended settings to configure in OpenStack’s configuration files to enable an XtremIO cluster as the backend storage of your cloud environment, utilizing its benefits, where:

Bold parameters indicate mandatory parameters for each section.

ITALIC_CAPPED values indicate parameters to be replaced with environment values / user’s choice.

CINDER.CONF FILE

Parameters to set for /etc/cinder/cinder.conf file:

BASIC PARAMETERS

Basic parameters to set for basic configuration of an XtremIO cluster for OpenStack:

IMAGE-CACHE PARAMETERS

Parameters to set for the Image-Cache feature functionality for an XtremIO cluster:

IMAGE UPLOAD TO CINDER PARAMETERS

Parameters to set to upload images to the Cinder backend:

[DEFAULT]
enabled_backends = XTREMIO
[XtremIO]
volume_driver = XTREMIO_CINDER_DRIVER
san_ip = XMS_IP
san_login = XMS_LOGIN
san_password = XMS_PASSWORD
volume_backend_name = BACKEND_NAME
xtremio_cluster_name = CLUSTER_NAME
xtremio_array_busy_retry_count = RETRY_COUNT
xtremio_array_busy_retry_interval = RETRY_INTERVAL

[DEFAULT]
cinder_internal_tenant_project_id = PROJECT_ID
cinder_internal_tenant_user_id = USER_ID
[XtremIO]
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = MAX_SIZE
image_volume_cache_max_count = MAX_COUNT
xtremio_volumes_per_glance_cache = CACHE_VOLUME_LIMIT

[DEFAULT]
glance_api_version = 2
allowed_direct_url_schemes = cinder
enable_force_upload = True / False
[XtremIO]
image_upload_use_cinder_backend = True
image_upload_use_internal_tenant = True / False


OVER-SUBSCRIPTION PARAMETERS

Parameters to set for Over-Subscription of XtremIO Volumes:

MULTIPATHING PARAMETERS

Parameters to set for Multipathing of XtremIO Volumes:

SSL CERTIFICATION PARAMETERS

Parameters to set for SSL certification with an XtremIO array:

CINDER.CONF FILE EXAMPLE

An example cinder.conf file with the above parameters:

* This is not an entire cinder.conf file. Parameters not mentioned here were left with their default values.

[XtremIO]
max_over_subscription_ratio = RATIO

[XtremIO]
use_multipath_for_image_xfer = True
enforce_multipath_for_image_xfer = True / False

[XtremIO]
driver_ssl_cert_verify = True
driver_ssl_cert_path = CERTIFICATE_PATH

[DEFAULT]
enabled_backends = XtremIO
cinder_internal_tenant_project_id = 62c2d95d88974bc395d14b32ffcfafbb
cinder_internal_tenant_user_id = e503c1842087417e90ca5697cbcf57a3
glance_api_version = 2
allowed_direct_url_schemes = cinder
enable_force_upload = True
[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
san_ip = 10.0.0.1
san_login = username
san_password = password
volume_backend_name = XtremIOAFA
xtremio_cluster_name = Cluster01
xtremio_array_busy_retry_count = 5
xtremio_array_busy_retry_interval = 5
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 0
image_volume_cache_max_count = 0
xtremio_volumes_per_glance_cache = 100
image_upload_use_cinder_backend = True
image_upload_use_internal_tenant = True
use_multipath_for_image_xfer = True


CINDER’S POLICY.JSON FILE

Parameters to set for /etc/cinder/policy.json file:

CONSISTENCY GROUP PARAMETERS

Parameters to set to enable consistency group operations in an OpenStack environment:

* The rule:admin_or_owner value is recommended to fit other permissions in the policy.json file. Any other required permissions can

be set instead.

GLANCE-API.CONF FILE

Parameters to set for /etc/glance/glance-api.conf file:

STORE IN CINDER PARAMETERS

Parameters to set to enable Cinder as a store option for Glance:

NOVA.CONF FILE

Parameters to set for /etc/nova/nova.conf file:

THIN PROVISIONING PARAMETERS

Parameters to set to make sure images are not written out at their full size, which would effectively disable thin provisioning for volumes created from images:

"consistencygroup:create": "rule:admin_or_owner",
"consistencygroup:delete": "rule:admin_or_owner",
"consistencygroup:update": "rule:admin_or_owner",
"consistencygroup:get": "rule:admin_or_owner",
"consistencygroup:get_all": "rule:admin_or_owner",
"consistencygroup:create_cgsnapshot": "rule:admin_or_owner",
"consistencygroup:delete_cgsnapshot": "rule:admin_or_owner",
"consistencygroup:get_cgsnapshot": "rule:admin_or_owner",
"consistencygroup:get_all_cgsnapshots": "rule:admin_or_owner",

[DEFAULT]
show_multiple_locations = True
[glance_store]
stores = glance.store.cinder.Store

[DEFAULT]
use_cow_images = False


APPENDIX B – CLI COMMANDS

The CLI commands listed here are examples of tasks equivalent to those shown throughout this paper in the Fuel UI and OpenStack Horizon, as performed by the author in a demo environment. They are written as generally as possible, though most of them can be run in different ways using various flags. For full command options and documentation, refer to the OpenStack Docs (Command-Line Interface Reference) or use the help option in the CLI.

FUEL CLI COMMANDS

To run on the Fuel Master Node.

FUEL PLUGIN COMMANDS

Installing a Fuel Plugin:

# fuel plugins --install <RPM>

Verifying installation of Fuel Plugins:

# fuel plugins

Figure 60. XtremIO Fuel Plugin Installed in a Mirantis Distribution – CLI

FUEL ENVIRONMENT COMMANDS

Creating a new OpenStack environment:

# fuel env create --name <environment_name> --rel <release_id>

Downloading an OpenStack environment attributes:

# fuel env --env <environment_id> --attributes --download

Uploading attributes to an OpenStack environment:

# fuel env --env <environment_id> --attributes --upload

FUEL DEPLOYMENT COMMANDS

Deploying changes to an OpenStack environment:

# fuel deploy-changes --env <environment_id>
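The individual Fuel commands above can be chained into a single workflow. The sketch below writes the sequence to a plan file for review instead of executing it; the environment name, release ID and environment ID are placeholders for a real deployment on the Fuel Master node.

```shell
# Build a reviewable plan of the Fuel environment workflow (dry run).
ENV_ID=1
PLAN=$(mktemp)
cat > "$PLAN" <<EOF
fuel env create --name XtremIO-MOS --rel 2
fuel env --env $ENV_ID --attributes --download
# edit the downloaded attributes YAML here, then:
fuel env --env $ENV_ID --attributes --upload
fuel deploy-changes --env $ENV_ID
EOF
cat "$PLAN"
```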

OPENSTACK CLI COMMANDS

To run on an OpenStack Node.

VOLUME TYPE COMMANDS

Creating a new Volume Type on Cinder:

# cinder type-create <new_volume_type_name> [--description <volume_type_description>]

Setting Extra Specs (key values) for a volume type:

# cinder type-key <volume_type> set <key=value> [<key=value> … ]

Unsetting Extra Specs for a volume type:

# cinder type-key <volume_type> unset <key> [<key> … ]

Viewing a list of Volume Types:

# cinder type-list
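Where several Volume Types are needed (for example, one per service tier), the type-create and type-key commands above can be scripted. The sketch below only prints the commands (a dry run); the tier names are hypothetical, and volume_backend_name matches the cinder.conf example in Appendix A. Pipe the output to sh to execute.

```shell
# Dry run: print one type-create/type-key pair per hypothetical tier.
# volume_backend_name ties each type to the [XtremIO] backend stanza.
TYPE_CMDS=$(for tier in gold silver bronze; do
    printf 'cinder type-create %s-xtremio\n' "$tier"
    printf 'cinder type-key %s-xtremio set volume_backend_name=XtremIOAFA\n' "$tier"
done)
printf '%s\n' "$TYPE_CMDS"
```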


BASIC VOLUME COMMANDS

Creating a Cinder Volume:

# cinder create --name <new_volume_name> --volume-type <volume_type> <volume_size>

Viewing a list of Volumes (including attachment status):

# cinder list - Volumes in the current project

# cinder list --all-tenants - all Volumes in the system (for administrators only)

Viewing extended details of a single Volume:

# cinder show <volume_name>

Attaching a Volume to an Instance:

# nova volume-attach <instance_name> <volume_id>

Deleting a Volume:

# cinder delete <volume_name>

Detaching a Volume from an Instance:

# nova volume-detach <instance_name> <volume_id>

Extending a Volume:

# cinder extend <volume_name> <new_volume_size>

SNAPSHOTS AND CLONES COMMANDS

Creating a Volume Snapshot:

# cinder snapshot-create [--force True] --name <new_snapshot_name> <volume_name>

Viewing a list of Snapshots:

# cinder snapshot-list - Snapshots in the current project

# cinder snapshot-list --all-tenants - all Snapshots in the system (for administrators only)

Viewing extended details of a single Volume Snapshot:

# cinder snapshot-show <snapshot_name>

Creating a Volume from an existing Snapshot:

# cinder create --snapshot-id <snapshot_id> --name <new_volume_name> --volume-type <volume_type>

Deleting a Snapshot:

# cinder snapshot-delete <snapshot_name>

Creating a Volume Clone:

# cinder create --source-volid <volume_id> --name <new_volume_name> --volume-type <volume_type>

VOLUMES MANAGE AND UNMANAGE COMMANDS

Unmanaging a Cinder Volume:

# cinder unmanage <volume_name>

Managing an existing Volume:

# cinder manage --name <new_volume_name> --volume-type <volume_type> <host> <identifier>

Viewing Cinder’s host/service list:

# cinder service-list


CONSISTENCY GROUP COMMANDS

Creating a Consistency Group:

# cinder consisgroup-create --name <new_consistency_group_name> <volume_type>

Viewing a list of Consistency Groups:

# cinder consisgroup-list - Consistency Groups in the current project

# cinder consisgroup-list --all-tenants - all Consistency Groups in the system (for administrators only)

Viewing extended details of a single Consistency Group:

# cinder consisgroup-show <consistency_group_name>

Deleting a Consistency Group:

# cinder consisgroup-delete [--force] <consistency_group_name>

Adding Volumes to a Consistency Group:

# cinder consisgroup-update --add-volumes <volume1_id>,<volume2_id>,… <consistency_group_name>

Removing Volumes from a Consistency Group:

# cinder consisgroup-update --remove-volumes <volume1_id>,<volume2_id>,… <consistency_group_name>

Creating a Consistency Group Snapshot:

# cinder cgsnapshot-create --name <new_consistency_group_snapshot_name> <consistency_group_name>

Viewing a list of Consistency Group Snapshots:

# cinder cgsnapshot-list - Consistency Group Snapshots in the current project

# cinder cgsnapshot-list --consistencygroup-id <consistencygroup_id> - Consistency Group Snapshots of a specified Consistency Group

# cinder cgsnapshot-list --all-tenants - all Consistency Group Snapshots in the system (for administrators only)

Viewing extended details of a single Consistency Group Snapshot:

# cinder cgsnapshot-show <consistency_group_snapshot_name>

Deleting a Consistency Group Snapshot:

# cinder cgsnapshot-delete <consistency_group_snapshot_name>

Creating a Consistency Group as a copy of an existing Consistency Group:

# cinder consisgroup-create-from-src --source-cg <source_consistency_group> --name <new_consistency_group>

Creating a Consistency Group as a copy of a Consistency Group Snapshot:

# cinder consisgroup-create-from-src --cgsnapshot <source_consistency_group_snapshot> --name <new_consistency_group>
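The Consistency Group commands above combine naturally into a protect-and-recover cycle. The sketch below only assembles and prints the sequence (a dry run); the group and volume names are examples, and <volume_type> must be a Volume Type backed by XtremIO.

```shell
# Dry run: a consistency-group snapshot-and-restore cycle as one sequence.
CG=app-cg
CG_PLAN=$(cat <<EOF
cinder consisgroup-create --name $CG <volume_type>
cinder consisgroup-update --add-volumes <volume1_id>,<volume2_id> $CG
cinder cgsnapshot-create --name ${CG}-snap1 $CG
cinder consisgroup-create-from-src --cgsnapshot ${CG}-snap1 --name ${CG}-restored
EOF
)
printf '%s\n' "$CG_PLAN"
```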

BOOTABLE VOLUMES, IMAGES AND INSTANCE LAUNCH COMMANDS

Booting an Instance from an Image/Instance Snapshot (both referred to as images in the CLI) using local storage:

# nova boot <new_instance_name> --image <image> --flavor <flavor> \
    --nic <network_identifier_type=network_identifier> \
    --security-groups <security_groups> --key-name <key_name> \
    --availability-zone <availability_zone>

Booting an Instance from an Image using a Cinder Volume:

# nova boot <new_instance_name> \
    --block-device "source=image,id=<image_id>,dest=volume,size=<new_volume_size>,device=vda,shutdown=<preserve|remove>,bootindex=0" \
    --flavor <flavor> --nic <network_identifier_type=network_identifier> \
    --security-groups <security_groups> --key-name <key_name> \
    --availability-zone <availability_zone>

Creating an Instance Snapshot (to be created in the Glance Image Repository):

# nova image-create <instance_name> <new_snapshot_name>


Booting an Instance from an existing bootable Volume:

# nova boot <new_instance_name> --boot-volume <bootable_volume_id> --flavor <flavor> \
    --nic <network_identifier_type=network_identifier> \
    --security-groups <security_groups> --key-name <key_name> \
    --availability-zone <availability_zone>

Creating a bootable Volume from an existing Image:

# cinder create --image <image> --name <new_volume_name> --volume-type <volume_type> <volume_size>

Creating an Image from a bootable Volume:

# cinder upload-to-image --disk-format <disk_format> <bootable_volume> <new_image_name>

Creating an empty Glance Image:

# glance image-create --disk-format <disk_format> --container-format <container_format> --name <new_image_name>

Setting a Glance Image to use a Cinder Volume:

# glance location-add --url cinder://<volume_id> <image_id>

Booting an Instance using an existing bootable Snapshot as a source:

# nova boot <new_instance_name> --snapshot <bootable_snapshot_id> --flavor <flavor> \
    --nic <network_identifier_type=network_identifier> \
    --security-groups <security_groups> --key-name <key_name> \
    --availability-zone <availability_zone>
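Putting the commands in this section together: a typical flow creates a bootable Volume from a Glance Image and then boots an Instance from it. The sketch below only prints the two steps (a dry run); the image, flavor, network and size are placeholders, and the Volume Type is the example from Appendix A.

```shell
# Dry run: boot-from-volume in two steps (print only; run manually).
BOOT_PLAN=$(cat <<'EOF'
cinder create --image <image> --name boot-vol --volume-type XtremIOAFA <volume_size>
nova boot demo-instance --boot-volume <boot_vol_id> --flavor <flavor> --nic <network_identifier_type=network_identifier>
EOF
)
printf '%s\n' "$BOOT_PLAN"
```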


REFERENCES

OPENSTACK

1. https://www.openstack.org/

i. OpenStack Main Site

2. https://docs.openstack.org/

i. OpenStack Docs

3. https://www.slideshare.net/openstack

i. OpenStack Presentations on slideshare.net

MIRANTIS

1. https://www.mirantis.com/

i. Mirantis Main Site

2. https://www.mirantis.com/software/openstack/

i. Mirantis OpenStack Software

3. https://www.mirantis.com/openstack-resources/

i. Mirantis Documents

4. https://www.mirantis.com/openstack-case-studies/

i. Mirantis OpenStack Case Studies

XTREMIO

1. https://support.emc.com/products/31111_XtremIO

i. Dell EMC XtremIO Product Page

2. https://www.emc.com/collateral/white-papers/h11752-intro-to-XtremIO-array-wp.pdf

i. Introduction to the Dell EMC XtremIO All-Flash Storage Platform

3. https://support.emc.com/docu71055_XtremIO_XIOS_4.0.2_and_4.0.4_and_4.0.10_and_4.0.15_with_XMS_4.2.0_and_4.2.1_Storage_Array_User_Guide.pdf?language=en_US

i. XtremIO User Guide

4. http://xtremioblog.emc.com/

i. XtremIO blog on DellEMC.com

5. https://xtremio.me/

i. XtremIO related product announcements and technology deep dives from the office of XtremIO CTO