INN694-2014-OpenStack installation process V5

INN694 – Project: OpenStack

Semester 2, 2014

STUDENT NAME: Fabien Chastel
STUDENT ID: n8745064
SUPERVISOR: Dr Vicky Liu


OpenStack – Final report

Executive Summary

The purpose of this documentation is to provide a step-by-step installation procedure for OpenStack. It covers the configuration of the primary environment, including network time synchronisation, the database that stores the data the OpenStack services need, the OpenStack packages and the messaging service the OpenStack services use to communicate. It then explains how to install the core components needed to run a basic instance: the Identity service, the Image service, the Compute service, a compute node and the Networking services.


TABLE OF CONTENTS

EXECUTIVE SUMMARY

1 - INTRODUCTION

2 - OPENSTACK ENVIRONMENT
2.1 - HARDWARE REQUIREMENTS
2.2 - SOFTWARE REQUIREMENTS

3 - INSTALLATION OF OPENSTACK
3.1 - PRIMARY ENVIRONMENT CONFIGURATION
3.2 - IDENTITY SERVICE
3.3 - IMAGE SERVICE: GLANCE
3.4 - COMPUTE SERVICE: NOVA
3.5 - NETWORKING SERVICE: NEUTRON
3.6 - DASHBOARD: HORIZON
3.7 - BLOCK STORAGE: CINDER

4 - TROUBLESHOOTING

5 - USEFUL COMMANDS
5.1 - GENERAL COMMANDS
5.2 - GLANCE

6 - TUTORIALS
6.1 - LAUNCH AN INSTANCE
6.2 - PROVIDE PUBLIC ADDRESSES TO INSTANCES
6.3 - CREATE A NEW SECURITY GROUP (ACL)
6.4 - FORCE THE MTU
6.5 - CREATE A VIRTUAL MACHINE USING "VIRTINST"

7 - TABLE OF FIGURES

8 - TABLE OF TABLES

9 - REFERENCES


1 - Introduction

OpenStack is a worldwide association of developers and cloud computing technologists, managed by the OpenStack Foundation, that produces the omnipresent open-source computing platform for public and private clouds [1]. Cloud computing is about sharing resources such as RAM, CPU and others among several machines. For instance, with two computers, one with a 2-core CPU, 4 GB of RAM and 100 GB of storage and the other with a 4-core CPU, 16 GB of RAM and 500 GB of storage, the resources are aggregated and the user perceives them as a single server with a 6-core CPU, 20 GB of RAM and 600 GB of storage (in theory). In this case, OpenStack was installed on three computers provided by QUT; the specifications of those computers are listed later in the report.

OpenStack is an open-source cloud computing software platform mainly focused on IaaS (Infrastructure as a Service). It can control large pools of compute, storage and networking resources across an entire datacentre through a single web-based dashboard.

Figure 1 - OpenStack overview [1]

2 - OpenStack environment

The environment of OpenStack is highly scalable and depends on the needs of each company. OpenStack's scalability will probably always differ from one company to another according to the needs of that company, and "no one solution meets everyone's scalability goals" [2]. For instance, some companies will need a plethora of big instances requiring a lot of vCPUs and RAM but little storage, whereas other companies will need only small instances using few vCPUs and little RAM but a huge amount of storage. OpenStack has been designed to be horizontally scalable in order to suit the cloud paradigm [2]. This means that after the initial installation of OpenStack, it is possible to add more compute power or storage simply by adding another server to the cloud.


OpenStack can be installed in a virtual machine using software like VMware or VirtualBox for experimental purposes, in order to run a few small instances, just as it can be installed in a multinational enterprise running thousands of instances, small or big, such as Amazon with its Amazon cloud services.


2.1 - Hardware requirements

A basic environment does not need a huge amount of resources to be functional. However, there are minimum requirements in order to support several minimal CirrOS instances. These minimums are as below:

Node        Processor   Memory   Storage
Controller  1           2 GB     5 GB
Network     1           512 MB   5 GB
Compute     1           2 GB     10 GB

Table 1 - Hardware requirements

2.2 - Software requirements

OpenStack needs to be installed on top of a Linux distribution; the list of compatible distributions is as follows:

Debian
openSUSE and SUSE Linux Enterprise Server
Red Hat Enterprise Linux
CentOS
Fedora
Ubuntu

Before installing OpenStack, it is necessary to have a good base in order to install the OpenStack services with few or no problems. Firstly, it is strongly recommended to use a minimal installation of the Linux distribution, in order to leave more resources for OpenStack and reduce confusion, and it is highly recommended to use a 64-bit version of Linux for a number of reasons, such as the RAM limit of 32-bit systems (3-4 GB) and the increased capability of the processor. It also allows the creation of 64-bit instances as well as 32-bit ones.

Secondly, the network topology should reflect the needs of the company and the IP addressing should be chosen quite carefully. All automatic IP assignment should be disabled and the addresses manually configured on each node. In addition to the addressing, it is better to have a DNS server with a record for every node, but it is not compulsory, as the same result can be achieved using the "hosts" file located in the "/etc" folder; however, this becomes complicated to manage as the network grows. Finally, the time should be synchronised among all nodes from the controller using an application like NTP.
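As a sketch of the "hosts"-file approach, using the management-network addresses that appear later in this document and hypothetical host names (controller, network and compute1 are illustrative, not names mandated by this report), each node's "/etc/hosts" could contain:

```
127.0.0.1      localhost
192.168.1.11   controller
192.168.1.12   network
192.168.1.13   compute1
```

The same file should be replicated identically on every node so each machine resolves its peers consistently.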

The following step is to install a database, as the majority of OpenStack services need one to store their information. A database must be installed, preferably on the controller, as well as the Python library related to the chosen database. The Python library also needs to be installed on every additional node that accesses this database through the OpenStack APIs. The recommended database software and Python library for OpenStack are MySQL and the MySQL Python library.


Then the OpenStack packages need to be installed on each server/node. They can be installed by adding a specific repository and using the normal install command, such as "apt-get install" or "yum install". Some recent Linux distributions, such as Ubuntu 14.04, include these packages in their repositories.

The final step before installing the main services is to install a message broker. Indeed, to coordinate operations and status information among services, OpenStack uses a message broker. Several message brokers are compatible with OpenStack, yet the most commonly used is RabbitMQ. As with the database, it is preferable to install the message broker on the main controller.

3 - Installation of OpenStack

The first stage of the installation of OpenStack is to install the operating system that will host the cloud. This documentation assumes that a basic installation of Linux was done using Ubuntu 14.04, where the system was updated (apt-get update) and upgraded (apt-get upgrade) and a DNS server was implemented to resolve all the IPs. In addition, it is important to test the configuration by following the verification in each section, or by checking the respective log after restarting a service; if a problem occurs, it is recommended to fix it before continuing. After following Sections 3.1 to 3.5, all the core components necessary to launch a basic instance will be installed. The rest is optional.

All the commands in red need to be changed according to the actual network. For instance, most passwords were generated using a command ("openssl rand -hex 10"), also shown in section 3.2.1.5. Table 2 shows the list of passwords that will be needed during the installation. The passwords that need to be remembered, such as the system and MySQL root passwords, were chosen carefully to be easy to remember, like "osuc@123456", whereas all other passwords created for the OpenStack services in Keystone or MySQL were generated with the command line for security reasons.
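The per-service secrets in Table 2 can be produced in one pass with the same command; a minimal sketch (the service list below is illustrative and should be adjusted to match Table 2):

```shell
# Generate one 20-hex-character secret per service using openssl,
# the same command used for the generated passwords in Table 2
for svc in keystone glance nova neutron cinder heat horizon; do
  printf '%s: %s\n' "$svc" "$(openssl rand -hex 10)"
done
```

Each run prints a fresh random value per service, which can then be copied into the configuration files and Keystone/MySQL accounts described below.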

Location   Username       Password                     Description
System     root           root_password                Password for Ubuntu
           YourUsername   Your_Username_Password       Password for Ubuntu
MySQL      root           MySQL_Root_Password          Root password for the database
           dbu_keystone   MySQL_Keystone_Password      Database user for the Identity service
           dbu_glance     MySQL_Glance_Password        Database user for the Image service
           dbu_nova       MySQL_Nova_Password          Database user for the Compute service
           dbu_horizon    MySQL_Horizon_Password       Database user for the dashboard
           dbu_cinder     MySQL_Cinder_Password        Database user for the Block Storage service
           dbu_neutron    MySQL_Neutron_Password       Database user for the Networking service
           dbu_heat       MySQL_Heat_Password          Database user for the Orchestration service
RabbitMQ   guest          Rabbit_guest_password        Guest user of RabbitMQ
           YourUsername   Rabbit_Strong_Password       Additional RabbitMQ account
Keystone   admin          Keystone_Admin_Password      Main user
           glance         Keystone_Glance_Password     User for the Image service
           nova           Keystone_Nova_Password       User for the Compute service
           cinder         Keystone_Cinder_Password     User for the Block Storage service
           neutron        Keystone_Neutron_Password    User for the Networking service
           heat           Keystone_Heat_Password       User for the Orchestration service

Table 2 - List of passwords


Figure 2 - Network topology


3.1 - Primary environment configuration

3.1.1 - Network configuration

3.1.1.1 - Controller node

vi /etc/network/interfaces

auto eth0

iface eth0 inet static

address 192.168.1.11

netmask 255.255.255.0

network 192.168.1.0

broadcast 192.168.1.255

gateway 192.168.1.12

dns-nameservers 192.168.1.12

dns-search labqut-osuc.com

iface eth0 inet6 static

pre-up modprobe ipv6

address 2402:ec00:face:1::11

netmask 64

gateway 2402:ec00:face:1::1

3.1.1.2 - Compute node

vi /etc/network/interfaces

# The primary network interface

auto em1

iface em1 inet static

address 192.168.1.13

netmask 255.255.255.0

gateway 192.168.1.12

dns-nameservers 192.168.1.12

dns-search labqut-osuc.com

iface em1 inet6 static

pre-up modprobe ipv6

address 2402:ec00:face:1::13

netmask 64

gateway 2402:ec00:face:1::1

auto p4p1

iface p4p1 inet static

address 192.168.2.13

netmask 255.255.255.0

iface p4p1 inet6 static

pre-up modprobe ipv6

address 2402:ec00:face:2::13

netmask 64


3.1.1.3 - Network node

vi /etc/network/interfaces

# The network interface that will be created after the section 3.5.2.9 - Step 8: Setup the Open vSwitch (OVS) service

auto br-ex

iface br-ex inet static

address 10.0.0.2

netmask 255.255.255.0

gateway 10.0.0.1

dns-nameservers 192.168.1.12

auto eth0

iface eth0 inet static

address 192.168.1.12

netmask 255.255.255.0

iface eth0 inet6 static

pre-up modprobe ipv6

address 2402:ec00:face:1::12

netmask 64

#This interface can be on DHCP for the beginning of the installation

auto eth1

iface eth1 inet manual

up ip link set $IFACE up

down ip link set $IFACE down

auto eth2

iface eth2 inet static

address 192.168.2.12

netmask 255.255.255.0

iface eth2 inet6 static

pre-up modprobe ipv6

address 2402:ec00:face:2::12

netmask 64


3.1.1.4 - Test the connectivity

The simplest way to verify that the network configuration has been done correctly is to use the command ping. It is important to make sure that all nodes are able to ping the nodes that are on the same network. For instance, the network node must be able to ping both interfaces of the compute node in IPv4 and IPv6, whereas the controller node, which sits only on the management network, should ping only one interface of the network and compute nodes:

From the controller node:
ping 192.168.1.12

ping 192.168.1.13

ping6 2402:ec00:face:1::12

ping6 2402:ec00:face:1::13

From the network node:
ping 192.168.1.11

ping 192.168.1.13

ping 192.168.2.13

ping6 2402:ec00:face:1::11

ping6 2402:ec00:face:1::13

ping6 2402:ec00:face:2::13

From the compute node:
ping 192.168.1.11

ping 192.168.1.12

ping 192.168.2.12

ping6 2402:ec00:face:1::11

ping6 2402:ec00:face:1::12

ping6 2402:ec00:face:2::12

3.1.2 - Network Time Protocol

It is important to synchronise the time on all machines, and NTP does this automatically. It is suggested to synchronise the time of all additional nodes from the controller.

3.1.2.1 - Step 1: Install the package

sudo apt-get install ntp

3.1.2.2 - Step 2: Remove the deprecated package ntpdate

sudo apt-get remove ntpdate

3.1.2.3 - Step 3: Setup the server

sudo vi /etc/ntp.conf

#Add iburst at the end of line for your favourite server

server 0.ubuntu.pool.ntp.org iburst

server 1.ubuntu.pool.ntp.org

server 2.ubuntu.pool.ntp.org


server 3.ubuntu.pool.ntp.org

#Ubuntu's ntp server

server ntp.ubuntu.com

# ...

# Authorise your own network to communicate with the server.

restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

3.1.2.4 - Step 4: Setup the client(s)

sudo vi /etc/ntp.conf

#comment the line that start with "server"

#server 0.ubuntu.pool.ntp.org

#server 1.ubuntu.pool.ntp.org

#server 2.ubuntu.pool.ntp.org

#server 3.ubuntu.pool.ntp.org

server IP/HostnameOfController iburst

#Leave the fallback, which is Ubuntu's ntp server, in case your server breaks down

server ntp.ubuntu.com

3.1.2.5 - Test

The synchronisation of the time can be tested by running the command "date" on all servers; the results should match within one or two seconds.
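A quick way to compare clocks without eyeballing full "date" output is to print seconds since the epoch on each machine and compare the numbers (the "controller" host name below is an assumption; substitute your own):

```shell
# Print the local clock as seconds since the epoch; run the same
# command on the controller and compare the two values. A difference
# of more than one or two seconds suggests NTP is not synchronising.
local_epoch=$(date -u +%s)
echo "local epoch: ${local_epoch}"
# ssh controller 'date -u +%s'   # hypothetical remote comparison
```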

3.1.3 - Database

3.1.3.1 - Step 1: Install the packages

sudo apt-get install python-mysqldb mysql-server

3.1.3.2 - Step 2: Adapt MySQL to work with OpenStack

sudo vi /etc/mysql/my.cnf

[mysqld]

...

#Allow other nodes to connect to the local database
bind-address = IP/HostnameOfMySQLServer

...

#enable InnoDB, UTF-8 character set, and UTF-8 collation by default

default-storage-engine = innodb
innodb_file_per_table

collation-server = utf8_general_ci

init-connect = 'SET NAMES utf8'

character-set-server = utf8

3.1.3.3 - Step 3: Restart MySQL

sudo service mysql restart

3.1.3.4 - Step 4: Delete the anonymous users (some connection problems might happen if they are still present)

sudo mysql_install_db (optional: to be used if the next command fails)
sudo mysql_secure_installation (answer "Yes" to all questions unless you have a good reason to answer no)
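As a hedged verification, the anonymous accounts can be checked from the MySQL prompt; after "mysql_secure_installation" has run, the following query should return an empty set:

```
mysql> SELECT User, Host FROM mysql.user WHERE User = '';
```

Any remaining rows with an empty User column are anonymous accounts and can cause the connection problems mentioned above.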


3.1.3.5 - Step 5: Install the MySQL Python library on the additional nodes (Optional)

sudo apt-get install python-mysqldb

3.1.4 - OpenStack packages

The latest version of OpenStack packages can be downloaded through the Ubuntu Cloud Archive, which is a special repository.

3.1.4.1 - Step 1: Install python software

sudo apt-get install python-software-properties

Remark: The following steps are not required for Ubuntu 14.04.

3.1.4.2 - Step 2: Add the Ubuntu Cloud Archive for Icehouse (optional)

sudo add-apt-repository cloud-archive:icehouse

3.1.4.3 - Step 3: Update the packages list and upgrade the system (optional)

sudo apt-get update

sudo apt-get dist-upgrade

3.1.4.4 - Step 4: Install the "Backported Linux Kernel" (only for Ubuntu 12.04: improves stability)

sudo apt-get install linux-image-generic-lts-saucy linux-headers-generic-lts-saucy

3.1.4.5 - Step 5: Restart the system

sudo reboot

3.1.5 - Messaging server

This documentation does not explain all the options of RabbitMQ; for more information about RabbitMQ access control, please see the website provided in reference [7].

3.1.5.1 - Step 1: Install the package

sudo apt-get install rabbitmq-server

3.1.5.2 - Step 2: Change the default password of the existing user (guest/guest)

Remark: it is strongly recommended to change the password of the guest user for security purposes.

sudo rabbitmqctl change_password guest Rabbit_Guest_Password

3.1.5.3 - Step 3: Create a unique account

Remark: it is possible to use the guest user name and password for each OpenStack service, but it is not recommended.

sudo rabbitmqctl add_user YourUserName StrongPassword

3.1.5.4 - Step 4: Set up the access control 

sudo rabbitmqctl add_vhost NameVHost

sudo rabbitmqctl set_user_tags YourUserName administrator

sudo rabbitmqctl set_permissions -p NameVHost YourUserName ".*" ".*" ".*"
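To sanity-check the access control just configured, the users and per-vhost permissions can be listed. A minimal sketch, wrapped in a function so it degrades gracefully on machines where rabbitmqctl is absent (NameVHost is the placeholder from the steps above):

```shell
# List RabbitMQ users and the permissions on the given virtual host;
# prints a notice instead of failing where rabbitmqctl is not installed.
check_rabbit() {
  if command -v rabbitmqctl >/dev/null 2>&1; then
    sudo rabbitmqctl list_users
    sudo rabbitmqctl list_permissions -p "$1"
  else
    echo "rabbitmqctl not installed on this machine"
  fi
}
check_rabbit NameVHost
```

The new account should appear in both listings, tagged administrator and holding ".*" configure/write/read permissions on the virtual host.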


3.2 - Identity service

3.2.1 - Installation of Keystone

3.2.1.1 - Step 1: Install the package

sudo apt-get install keystone

3.2.1.2 - Step 2: Connect keystone to the MySQL database

sudo vi /etc/keystone/keystone.conf

[database]

# The SQLAlchemy connection string used to connect to the database

connection = mysql://dbu_keystone:MySQL_Keystone_Password@IP/HostnameOfController/keystone

3.2.1.3 - Step 3: Create the  database and the user with MySQL

mysql -u root -p

mysql> CREATE DATABASE keystone;

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'dbu_keystone'@'localhost'

IDENTIFIED BY 'MySQL_Keystone_Password';

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'dbu_keystone'@'%'

IDENTIFIED BY 'MySQL_Keystone_Password';

mysql> exit

3.2.1.4 - Step 4: Create the tables

su -s /bin/sh -c "keystone-manage db_sync" keystone

3.2.1.5 - Step 5: Define the authorization token to communicate between the Identity Service and other OpenStack services and the log

The following command generates a random shared key and should be used to generate all the passwords used for the OpenStack services:

openssl rand -hex 10

Result: 3856bdace7abac9cfc78

sudo vi /etc/keystone/keystone.conf

[DEFAULT]

admin_token = 3856bdace7abac9cfc78

log_dir = /var/log/keystone

3.2.1.6 - Step 6: Restart the service

sudo service keystone restart

3.2.1.7 - Step 7: Purge the expired tokens every hour

The Identity service saves all expired tokens in the local database without ever erasing them. This can be helpful for auditing in production environments, but it will increase the size of the database and affect the performance of other services. It is recommended to purge the expired tokens every hour using cron.

(crontab -l -u keystone 2>&1 | grep -q token_flush) || \

echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/crontabs/keystone
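For reference, the one-liner above simply appends the following entry to keystone's crontab file ("/var/spool/cron/crontabs/keystone"); it can be inspected afterwards with "crontab -l -u keystone":

```
@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1
```

The guard in front of the echo makes the command idempotent: the entry is only appended if no token_flush line is already present.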


3.2.2 - Set environment variables

In order to use the OpenStack command-line clients such as "keystone" and "neutron", you need to provide the address of the Identity service (--os-auth-url), a username (--os-username) and a password (--os-password). However, the first time you use the Identity service there is no user yet; therefore, you need to connect to the service using the token generated before and export the following variables:


3.2.2.1 - Export Variable for initial installation of keystone

export OS_SERVICE_TOKEN=3856bdace7abac9cfc78

export OS_SERVICE_ENDPOINT=http://IP/HostnameOfController:35357/v2.0

After creating the username and password needed to connect to the Identity service, it is possible to set the environment variables using an OpenStack RC file ("openrc.sh"), a project-specific environment file that contains the credentials needed by all OpenStack services. To use these variables, run the command "source" followed by the name of the file. However, this solution is only possible once at least one user, one tenant and the administration endpoint have been created.

3.2.2.2 - Step 1: Create the file and add the information for the authentication

sudo vi /home/NameOfProject-openrc.sh

export OS_USERNAME=Username

export OS_PASSWORD=Keystone_Username_Password

export OS_TENANT_NAME=NameOfProject

export OS_AUTH_URL=http://IP/HostnameOfController:35357/v2.0

3.2.2.3 - Step 2: Export the variables using the command "source"

source /home/NameOfProject-openrc.sh

3.2.3 - Users, tenants and roles

3.2.3.1 - Step 1: Create an "admin" user (within 1 line)

sudo keystone user-create --name=admin --pass=Keystone_Admin_Password [email protected]

3.2.3.2 - Step 2: Create an “admin” role

sudo keystone role-create --name=admin

3.2.3.3 - Step 3: Create an “admin” tenant

sudo keystone tenant-create --name=admin --description="Description of Admin Tenant"

3.2.3.4 - Step 4: Link the user, the role and the tenant together

sudo keystone user-role-add --user=admin --tenant=admin --role=admin

3.2.3.5 - Step 5: Link the user, the _member_ role and the tenant together

sudo keystone user-role-add --user=admin --role=_member_ --tenant=admin

3.2.3.6 - Step 6: Create a common user (you can create as many users as you want) (within 1 line)

sudo keystone user-create --name=Username --pass=StrongPassword [email protected]

3.2.3.7 - Step 7: Create a normal tenant (one tenant can have many users) (within 1 line)

sudo keystone tenant-create --name=NameOfTenant --description="The description of your tenant"

3.2.3.8 - Step 8: Link them together

sudo keystone user-role-add --user=Username --role=_member_ --tenant=NameOfTenant


3.2.3.9 - Step 9: Create a tenant called "service" to access the OpenStack services

sudo keystone tenant-create --name=service --description="Service Tenant"


3.2.4 - Service and endpoints

3.2.4.1 - Step 1: Create a service entry for the Identity Service (within 1 line)

sudo keystone service-create --name=keystone --type=identity --description="OpenStack Identity"

3.2.4.2 - Step 2: Specify an API endpoint for the Identity service

sudo keystone endpoint-create \

--service-id=$(keystone service-list | awk '/ identity / {print $2}') \

--publicurl=http://IP/HostnameOfController:5000/v2.0 \

--internalurl=http://IP/HostnameOfController:5000/v2.0 \

--adminurl=http://IP/HostnameOfController:35357/v2.0
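As a quick check that the registration succeeded, the services and endpoints can be listed. A minimal sketch, wrapped in a function so it is a no-op on machines without the keystone client (the token variables from section 3.2.2 must still be exported):

```shell
# List registered services and endpoints; the identity service created
# above should appear in both tables. Prints a notice where the
# keystone client is not installed.
check_keystone() {
  if command -v keystone >/dev/null 2>&1; then
    keystone service-list
    keystone endpoint-list
  else
    echo "keystone client not installed on this machine"
  fi
}
check_keystone
```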

3.3 - Image service: Glance

3.3.1 - Step 1: Install the packages

sudo apt-get install glance python-glanceclient

3.3.2 - Step 2: Add the database section in the files glance-api.conf and glance-registry.conf

sudo vi /etc/glance/glance-api.conf

...

[database]

connection = mysql://dbu_glance:MySQL_Glance_Password@IP/HostnameOfController/glance

sudo vi /etc/glance/glance-registry.conf

...

[database]

connection = mysql://dbu_glance:MySQL_Glance_Password@IP/HostnameOfController/glance

3.3.3 - Step 3: Add the information about the message broker in the file glance-api.conf

sudo vi /etc/glance/glance-api.conf

[DEFAULT]

rpc_backend = rabbit

rabbit_host = IP/HostnameOfController

rabbit_port = 5672

rabbit_use_ssl = false

rabbit_userid = YourUserName

rabbit_password = StrongPassword

rabbit_virtual_host = NameVHost

rabbit_notification_exchange = glance

rabbit_notification_topic = notifications

rabbit_durable_queues = False

3.3.4 - Step 4: Delete the default database

sudo rm /var/lib/glance/glance.sqlite


3.3.5 - Step 5: Create the Database and the user using MySQL

sudo mysql -u root -p

mysql> CREATE DATABASE glance;

mysql> GRANT ALL PRIVILEGES ON glance.* TO 'dbu_glance'@'localhost'

IDENTIFIED BY 'MySQL_Glance_Password';

mysql> GRANT ALL PRIVILEGES ON glance.* TO 'dbu_glance'@'%'

IDENTIFIED BY 'MySQL_Glance_Password';

3.3.6 - Step 6: Create tables

su -s /bin/sh -c "glance-manage db_sync" glance

3.3.7 - Step 7: Create the user “glance” in the Identity service

sudo keystone user-create --name=glance --pass=Keystone_Glance_Password [email protected]

sudo keystone user-role-add --user=glance --tenant=service --role=admin

3.3.8 - Step 8: Add the authentication for the Identity service in the files glance-api.conf and glance-registry.conf

sudo vi /etc/glance/glance-api.conf

[keystone_authtoken]

auth_uri = http://IP/HostnameOfController:5000

auth_host =IP/HostnameOfController

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = glance

admin_password = Keystone_Glance_Password

...

[paste_deploy]

...

flavor = keystone

sudo vi /etc/glance/glance-registry.conf

[keystone_authtoken]

auth_uri = http://IP/HostnameOfController:5000

auth_host =IP/HostnameOfController

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = glance

admin_password = Keystone_Glance_Password

...

[paste_deploy]

...

flavor = keystone


3.3.9 - Step 9: Register the Image Service with the Identity service (within 1 line for the first command)

sudo keystone service-create --name=glance --type=image --description="OpenStack Image Service"

sudo keystone endpoint-create \

--service-id=$(keystone service-list | awk '/ image / {print $2}') \

--publicurl=http://IP/HostnameOfController:9292 \

--internalurl=http://IP/HostnameOfController:9292 \

--adminurl=http://IP/HostnameOfController:9292

3.3.10 - Step 10: Restart the services

sudo service glance-registry restart

sudo service glance-api restart

3.3.11 - Verify the installation

In order to verify the installation of Glance, it is necessary to download at least one virtual machine image to the server, using any method such as "wget" or "scp". This example assumes that the server has an Internet connection and downloads a CirrOS image.

3.3.11.1 - Step 1: create a temporary folder

mkdir /home/iso

3.3.11.2 - Step 2: change the directory

cd /home/iso

3.3.11.3 - Step 3: Download the image

wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

3.3.11.4 - Step 4: Source the OpenStack RC file

source /home/NameOfProject-openrc.sh

3.3.11.5 - Step 5: Add the image into Glance

sudo glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \

--container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img

3.3.11.6 - Step 6: Check that the image has been successfully added to Glance

sudo glance image-list

3.4 - Compute service: Nova

3.4.1 - Service

3.4.1.1 - Step 1: Install the packages (within 1 line)

sudo apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient

3.4.1.2 - Step 2: Add the MySQL connection on the file “nova.conf” as well as setup RabbitMQ

sudo vi /etc/nova/nova.conf


[DEFAULT]

#Use the Identity service (keystone) for authentication

auth_strategy = keystone

#Set up the message broker
rpc_backend = rabbit

rabbit_host = IP/HostnameOfController

rabbit_userid = YourUserName

rabbit_password = StrongPassword

rabbit_virtual_host = NameVHost


my_ip = IP/HostnameOfController

vncserver_listen = IP/HostnameOfController

vncserver_proxyclient_address = IP/HostnameOfController

[database]

connection = mysql://dbu_nova:MySQL_Nova_Password@IP/HostnameOfController/nova

[keystone_authtoken]

auth_uri = http://IP/HostnameOfController:5000

auth_host = IP/HostnameOfController

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = Keystone_Nova_Password

3.4.1.3 - Step 3: Create the database and user

mysql -u root -p

mysql> CREATE DATABASE nova;

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'dbu_nova'@'localhost' \

IDENTIFIED BY 'MySQL_Nova_Password';

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'dbu_nova'@'%' \

IDENTIFIED BY 'MySQL_Nova_Password';

3.4.1.4 - Step 4: Create the tables

su -s /bin/sh -c "nova-manage db sync" nova

3.4.1.5 - Step 5: Create user on the Identity service

sudo keystone user-create --name=nova --pass=Keystone_Nova_Password [email protected]

sudo keystone user-role-add --user=nova --tenant=service --role=admin

3.4.1.6 - Step 6: Create the service and endpoint

sudo keystone service-create --name=nova --type=compute --description="OpenStack Compute"

sudo keystone endpoint-create \

--service-id=$(keystone service-list | awk '/ compute / {print $2}') \

--publicurl=http://IP/HostnameOfController:8774/v2/%\(tenant_id\)s \

--internalurl=http://IP/HostnameOfController:8774/v2/%\(tenant_id\)s \

--adminurl=http://IP/HostnameOfController:8774/v2/%\(tenant_id\)s
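The `--service-id=$(…)` argument extracts the ID column from keystone’s ASCII table with awk. The filter can be checked offline against a sample table (the ID below is made up for illustration):

```shell
# Sample of `keystone service-list` output (illustrative values only)
sample_table='+----------------------------------+------+---------+-------------------+
| id                               | name | type    | description       |
+----------------------------------+------+---------+-------------------+
| 1b6c4a3e6e0f4d4b9a1d2e3f4a5b6c7d | nova | compute | OpenStack Compute |
+----------------------------------+------+---------+-------------------+'

# Same filter as the endpoint-create command above: match the row whose
# type column is "compute" and print the second whitespace-separated
# field, which is the id
service_id=$(printf '%s\n' "$sample_table" | awk '/ compute / {print $2}')
echo "$service_id"
```

The header and separator rows never contain the string “ compute ”, so only the data row matches.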


3.4.1.7 - Step 7: Restart all Nova services

sudo service nova-api restart

sudo service nova-cert restart

sudo service nova-consoleauth restart

sudo service nova-scheduler restart

sudo service nova-conductor restart

sudo service nova-novncproxy restart

3.4.2 - Compute node

The compute node can be installed on the same server as the controller; however, it is recommended to run it on a separate server.

3.4.2.1 - Step 1: Install the packages

sudo apt-get install nova-compute-kvm python-guestfs libguestfs-tools qemu-system

3.4.2.2 - Step 2: Make the kernel readable by hypervisor services such as qemu and libguestfs

For security reasons, the kernel image is not readable by unprivileged users by default, but hypervisor services need read access to it in order to work properly.

sudo dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)

The command above makes the current kernel readable, but the change is not permanent: the override is lost whenever the kernel is updated. Therefore, create the following hook script so that the override is reapplied to every future kernel:

sudo vi /etc/kernel/postinst.d/statoverride

#!/bin/sh

version="$1"

# passing the kernel version is required

[ -z "${version}" ] && exit 0

dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}

Then make the hook executable:

sudo chmod +x /etc/kernel/postinst.d/statoverride
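The hook’s guard logic (skip when no kernel version is passed) can be exercised without touching the real kernel by stubbing out dpkg-statoverride; the stub and function names below are illustrative, not part of the real script:

```shell
# Stand-in for dpkg-statoverride so the hook logic can be run safely
dpkg_statoverride_stub() { echo "override applied to $6"; }

# Same control flow as /etc/kernel/postinst.d/statoverride
statoverride_hook() {
    version="$1"
    # passing the kernel version is required; skip silently otherwise
    [ -z "${version}" ] && return 0
    dpkg_statoverride_stub --update --add root root 0644 "/boot/vmlinuz-${version}"
}

statoverride_hook ""            # no version: prints nothing
statoverride_hook "3.13.0-24"   # prints the stubbed override line
```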

3.4.2.3 - Step 3: Edit the nova.conf

vi /etc/nova/nova.conf

[DEFAULT]

#Use the Identity service (keystone) for authentication

auth_strategy = keystone

#Set up the message broker

rpc_backend = rabbit

rabbit_host = IP/HostnameOfController

rabbit_userid = YourUserName

rabbit_password = StrongPassword

rabbit_virtual_host = NameVHost


#Interface for the console

my_ip = IP/HostnameOfController

vnc_enabled = True

vncserver_listen = 0.0.0.0


vncserver_proxyclient_address = IP/HostnameOfController

novncproxy_base_url = http://IP/HostnameOfController:6080/vnc_auto.html

#Location of the image service (Glance)

glance_host = IP/HostnameOfController

[database]

connection = mysql://dbu_nova:MySQL_Nova_Password@IP/HostnameOfController/nova

[keystone_authtoken]

auth_uri = http://IP/HostnameOfController:5000

auth_host = IP/HostnameOfController

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = Keystone_Nova_Password

3.4.2.4 - Step 4: Check if your system supports hardware acceleration

sudo egrep -c '(vmx|svm)' /proc/cpuinfo

If the result is greater than 0 (1 or more), your system supports hardware acceleration and you can skip the following step.
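The egrep test simply counts the /proc/cpuinfo lines whose flags contain vmx (Intel) or svm (AMD). It can be reproduced against a saved sample; the file name and flags line below are illustrative:

```shell
# Illustrative /proc/cpuinfo flags line (one CPU with vmx)
cat > /tmp/cpuinfo.sample <<'EOF'
flags : fpu vme de pse tsc msr pae mce cx8 apic sep vmx ssse3
EOF

# Same test as the step above: count lines containing vmx or svm
count=$(egrep -c '(vmx|svm)' /tmp/cpuinfo.sample)
echo "$count"

rm -f /tmp/cpuinfo.sample
```

A count of 1 or more here would mean hardware acceleration is available.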

3.4.2.5 - Step 5: Only if the result of the previous command is “0”

Configure Nova to use QEMU software emulation instead of KVM:

vi /etc/nova/nova-compute.conf

[libvirt]

...

virt_type = qemu

3.4.2.6 - Step 6: Restart the service

sudo service nova-compute restart

3.5 - Networking service: Neutron

3.5.1 - Controller node

3.5.1.1 - Step 1: Create the database

mysql -u root -p

mysql> CREATE DATABASE neutron;

mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'dbu_neutron'@'localhost' \

IDENTIFIED BY 'MySQL_Neutron_Password';

mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'dbu_neutron'@'%' \

IDENTIFIED BY 'MySQL_Neutron_Password';

3.5.1.2 - Step 2: Create the user in the Identity service (within 1 line)

sudo keystone user-create --name neutron --pass Keystone_Neutron_Password --email [email protected]

3.5.1.3 - Step 3: Link the user to the service tenant and admin role

sudo keystone user-role-add --user neutron --tenant service --role admin

3.5.1.4 - Step 4: Create the service for Neutron in the identity service (within 1 line)

sudo keystone service-create --name neutron --type network --description "OpenStack Networking"

3.5.1.5 - Step 5: Create the service endpoint in the identity service

sudo keystone endpoint-create \

--service-id $(keystone service-list | awk '/ network / {print $2}') \


--publicurl http://IP/HostnameOfController:9696 \

--adminurl http://IP/HostnameOfController:9696 \

--internalurl http://IP/HostnameOfController:9696

3.5.1.6 - Step 6: Install the networking components (packages)

sudo apt-get install neutron-server neutron-plugin-ml2


3.5.1.7 - Step 7: Get the service tenant identifier (SERVICE_TENANT_ID)

sudo keystone tenant-get service

Example:

+-------------+----------------------------------+

| Property | Value |

+-------------+----------------------------------+

| description | Service Tenant |

| enabled | True |

| id | 032ff6f1056a4d82b51a87ff106c8185 |

| name | service |

+-------------+----------------------------------+

3.5.1.8 - Step 8: Edit neutron configuration file (neutron.conf)

sudo vi /etc/neutron/neutron.conf

[DEFAULT]

#Rabbit information

rabbit_host = IP/HostnameOfController

rabbit_port = 5672

rabbit_use_ssl = false

rabbit_userid = YourUserName

rabbit_password = StrongPassword

rabbit_virtual_host = /NameVHost

rabbit_notification_exchange = neutron

rabbit_notification_topic = notifications

#type of authentication

auth_strategy = keystone

#communication with the service nova

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

nova_url = http://IP/HostnameOfController:8774/v2

nova_admin_username = nova

nova_admin_tenant_id = SERVICE_TENANT_ID

nova_admin_password = Keystone_Nova_Password

nova_admin_auth_url = http://IP/HostnameOfController:35357/v2.0

#Configuration of the Modular Layer 2 (ML2)

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

#connection to the database

[database]

connection = mysql://dbu_neutron:MySQL_Neutron_Password@IP/HostnameOfController/neutron

[keystone_authtoken]

#Authentication information:

auth_uri = http://IP/HostnameOfController:5000


auth_host = IP/HostnameOfController

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = neutron

admin_password = Keystone_Neutron_Password

3.5.1.9 - Step 9: Edit the nova.conf to configure compute to use Networking

sudo vi /etc/nova/nova.conf

[DEFAULT]

...

network_api_class = nova.network.neutronv2.api.API

neutron_url = http://IP/HostnameOfController:9696

neutron_auth_strategy = keystone

neutron_admin_tenant_name = service

neutron_admin_username = neutron

neutron_admin_password = Keystone_Neutron_Password

neutron_admin_auth_url = http://IP/HostnameOfController:35357/v2.0

linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver

firewall_driver = nova.virt.firewall.NoopFirewallDriver

security_group_api = neutron

3.5.1.10 - Step 10: Restart the necessary services

sudo service nova-api restart

sudo service nova-scheduler restart

sudo service nova-conductor restart

sudo service neutron-server restart

3.5.2 - Network node

3.5.2.1 - Pre-step: Enable a few networking functions

sudo vi /etc/sysctl.conf

net.ipv4.ip_forward=1

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

Update the changes:

sudo sysctl -p

3.5.2.2 - Step 1: Install the networking components (packages) (within 1 line)

apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent openvswitch-datapath-dkms neutron-l3-agent neutron-dhcp-agent

3.5.2.3 - Step 2: Edit neutron configuration file (neutron.conf)

sudo vi /etc/neutron/neutron.conf

[DEFAULT]

#Rabbit information

rabbit_host = IP/HostnameOfController


rabbit_port = 5672

rabbit_use_ssl = false

rabbit_userid = YourUserName

rabbit_password = StrongPassword

rabbit_virtual_host = /NameVHost

rabbit_notification_exchange = neutron

rabbit_notification_topic = notifications

#type of authentication

auth_strategy = keystone

#Configuration of the Modular Layer 2 (ML2)

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

[keystone_authtoken]

auth_uri = http://IP/HostnameOfController:5000

auth_host = IP/HostnameOfController

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = neutron

admin_password = Keystone_Neutron_Password

3.5.2.4 - Step 3: Setup the Layer-3 (L3) Agent

vi /etc/neutron/l3_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

use_namespaces = True

3.5.2.5 - Step 4: Setup the DHCP Agent

vi /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

use_namespaces = True

3.5.2.6 - Step 5: Setup the metadata Agent

vi /etc/neutron/metadata_agent.ini

[DEFAULT]

auth_url = http://IP/HostnameOfController:5000/v2.0

auth_region = regionOne

admin_tenant_name = service

admin_user = neutron

admin_password = Keystone_Neutron_Password

nova_metadata_ip = IP/HostnameOfController

metadata_proxy_shared_secret = Metadata_Secret_Key

#Uncomment the next line for troubleshooting


#verbose = True

3.5.2.7 - Step 6: Configure the Nova service with the metadata proxy information

Remark: this part needs to be done on the controller node, and the Metadata_Secret_Key must be the same as the one set in the file “metadata_agent.ini” in the previous step.

sudo vi /etc/nova/nova.conf

[DEFAULT]

...

#Metadata proxy information between Neutron and Nova

service_neutron_metadata_proxy = true

neutron_metadata_proxy_shared_secret = Metadata_Secret_Key
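Metadata_Secret_Key is an arbitrary shared string, not a value issued by OpenStack. One way to generate it is the random-password generator already listed in the “Useful command” section of this document:

```shell
# Generate a random 20-hex-character shared secret for the metadata proxy
Metadata_Secret_Key=$(openssl rand -hex 10)
echo "$Metadata_Secret_Key"
```

The same value must then be placed in both metadata_agent.ini (on the network node) and nova.conf (on the controller).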

3.5.2.8 - Step 7: Setup the Modular Layer 2 (ML2) plug-in

sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = gre

tenant_network_types = gre

mechanism_drivers = openvswitch

[ml2_type_gre]

tunnel_id_ranges = 1:1000

[securitygroup]

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group = True

[ovs]

local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS #the IP address (or resolvable hostname) of this node’s interface on the instance tunnels network

tunnel_type = gre

enable_tunneling = True

3.5.2.9 - Step 8: Setup the Open vSwitch (OVS) service

The OVS service provides the virtual networking framework for instances. It creates a virtual bridge between the external network (e.g. the Internet) and the internal network used by the instances. The external bridge (named br-ex in this tutorial) needs to be connected to a physical network interface in order to communicate with the external network.

Restart the service:

sudo service openvswitch-switch restart

Add the integration bridge:

sudo ovs-vsctl add-br br-int

Add the external bridge:

sudo ovs-vsctl add-br br-ex

Add a physical network interface to the external bridge (Ex: eth0, eth1 …)


sudo ovs-vsctl add-port br-ex INTERFACE_NAME
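Before running add-port, it can help to check that the interface name actually exists on this host; a minimal sketch, using `lo` as a stand-in for the real external-facing NIC:

```shell
# Hypothetical interface name; replace with the NIC wired to the external network
INTERFACE_NAME=lo

# Refuse to bridge an interface that does not exist on this host
if [ -d "/sys/class/net/${INTERFACE_NAME}" ]; then
    echo "ok: ${INTERFACE_NAME} exists"
    # safe to run: sudo ovs-vsctl add-port br-ex "${INTERFACE_NAME}"
else
    echo "error: no such interface ${INTERFACE_NAME}" >&2
    exit 1
fi
```

Bridging a typo’d interface name would otherwise leave br-ex silently disconnected from the external network.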


3.5.2.10 - Step 9: Restart the necessary services

sudo service neutron-plugin-openvswitch-agent restart

sudo service neutron-l3-agent restart

sudo service neutron-dhcp-agent restart

sudo service neutron-metadata-agent restart

3.5.3 - Compute node

3.5.3.1 - Pre-step: Enable a few networking functions

sudo vi /etc/sysctl.conf

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

Update the changes:

sudo sysctl -p

3.5.3.2 - Step 1: Install the networking components (packages) (within 1 line)

sudo apt-get install neutron-common neutron-plugin-ml2 neutron-plugin-openvswitch-agent openvswitch-datapath-dkms

3.5.3.3 - Step 2: Edit neutron configuration file (neutron.conf)

sudo vi /etc/neutron/neutron.conf

[DEFAULT]

#Rabbit information

rabbit_host = IP/HostnameOfController

rabbit_port = 5672

rabbit_use_ssl = false

rabbit_userid = YourUserName

rabbit_password = StrongPassword

rabbit_virtual_host = /NameVHost

rabbit_notification_exchange = neutron

rabbit_notification_topic = notifications

#type of authentication

auth_strategy = keystone

#Configuration of the Modular Layer 2 (ML2)

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

[keystone_authtoken]

#Authentication information:

auth_uri = http://IP/HostnameOfController:5000

auth_host = IP/HostnameOfController

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = neutron

admin_password = Keystone_Neutron_Password


3.5.3.4 - Step 3: Edit the nova.conf to configure compute to use Networking

sudo vi /etc/nova/nova.conf

[DEFAULT]

...

network_api_class = nova.network.neutronv2.api.API

neutron_url = http://IP/HostnameOfController:9696

neutron_auth_strategy = keystone

neutron_admin_tenant_name = service

neutron_admin_username = neutron

neutron_admin_password = Keystone_Neutron_Password

neutron_admin_auth_url = http://IP/HostnameOfController:35357/v2.0

linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver

firewall_driver = nova.virt.firewall.NoopFirewallDriver

security_group_api = neutron

3.5.3.5 - Step 4: Setup the Modular Layer 2 (ML2) plug-in

sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = gre

tenant_network_types = gre

mechanism_drivers = openvswitch

[ml2_type_gre]

tunnel_id_ranges = 1:1000

[securitygroup]

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group = True

[ovs]

local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS #the IP address (or resolvable hostname) of this node’s interface on the instance tunnels network

tunnel_type = gre

enable_tunneling = True

3.5.3.6 - Step 5: Setup the Open vSwitch (OVS) service

sudo service openvswitch-switch restart

sudo ovs-vsctl add-br br-int

3.5.3.7 - Step 6: Restart the necessary services

sudo service nova-compute restart

sudo service neutron-plugin-openvswitch-agent restart


3.5.4 - Create an initial network

3.5.4.1 - Source OpenStack RC file

source /home/NameOfProject-openrc.sh

3.5.4.2 - Create the external network

neutron net-create NameOfExternalNetwork --shared --router:external=True

3.5.4.3 - Create a subnet for the external network

The external subnet needs to be chosen carefully: it must belong to the actual external network, but its allocation pool must not overlap with the addresses already handed out there. For instance, if the network is 10.0.0.0/24 and the DHCP pool of the external router runs from 10.0.0.2 to 10.0.0.99, the allocation pool on OpenStack could run from 10.0.0.100 to 10.0.0.150.

neutron subnet-create NameOfExternalNetwork --name NameOfExternalSubnet \

--allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END \

--disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY EXTERNAL_NETWORK_CIDR

For example:

neutron subnet-create ext-net --name ext-subnet \

--allocation-pool start=10.0.0.100,end=10.0.0.150 \

--disable-dhcp --gateway 10.0.0.1 10.0.0.0/24
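A quick sanity check, in plain POSIX shell, that a planned allocation pool actually falls inside the external CIDR (the helper names to_int and in_cidr are my own, not OpenStack commands):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer
to_int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Succeed when the given address falls inside the given CIDR block
in_cidr() {
    ip_n=$(to_int "$1")
    net_n=$(to_int "${2%/*}")
    bits=${2#*/}
    mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( ip_n & mask )) -eq $(( net_n & mask )) ]
}

# Both ends of the example pool sit inside 10.0.0.0/24
in_cidr 10.0.0.100 10.0.0.0/24 && echo "pool start ok"
in_cidr 10.0.0.150 10.0.0.0/24 && echo "pool end ok"
```

Whether the pool also avoids the router’s own DHCP range still has to be checked against the real router configuration.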

3.5.4.4 - Create the internal network

neutron net-create NameOfInternalNetwork

3.5.4.5 - Create a subnet for the internal network

neutron subnet-create NameOfInternalNetwork --name NameOfInternalSubnet \

--gateway TENANT_NETWORK_GATEWAY TENANT_NETWORK_CIDR

For example:

neutron subnet-create demo-net --name demo-subnet \

--gateway 192.168.1.1 192.168.1.0/24

3.5.4.6 - Create a virtual router

neutron router-create MyRouter

3.5.4.7 - Attach the router to the internal network

neutron router-interface-add MyRouter NameOfInternalNetwork

3.5.4.8 - Attach the router to the external network by specifying it as the gateway

neutron router-gateway-set MyRouter NameOfExternalNetwork

3.6 - Dashboard: Horizon

3.6.1 - Step 1: Install the packages

sudo apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard

When the dashboard is installed from the Ubuntu repositories, it comes with an Ubuntu theme that changes its appearance. To remove it, use the following command:


apt-get remove --purge openstack-dashboard-ubuntu-theme

3.6.2 - Step 2: Change the “LOCATION” value to match the one on the file /etc/memcached.conf

sudo vi /etc/openstack-dashboard/local_settings.py

CACHES = {

'default': {

'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',

'LOCATION' : '127.0.0.1:11211'

}

}

3.6.3 - Step 3: Update ALLOWED_HOSTS with the machines allowed to access the dashboard, and set OPENSTACK_HOST to the address of the controller

sudo vi /etc/openstack-dashboard/local_settings.py

ALLOWED_HOSTS = ['localhost', 'Your-computer']

OPENSTACK_HOST = "IP/HostnameOfController"

3.6.4 - Step 4: Restart the service

service apache2 restart

service memcached restart

3.6.5 - Step 5: Access the dashboard with your favourite web browser

“http://IP/HostnameOfController/horizon”

Figure 3 - Dashboard login


3.7 - Block storage: Cinder

3.7.1 - On the controller

3.7.1.1 - Step 1: Install the packages

sudo apt-get install cinder-api cinder-scheduler

3.7.1.2 - Step 2: Set up the connection to the database

vi /etc/cinder/cinder.conf

[database]

connection = mysql://dbu_cinder:MySQL_Cinder_Password@IP/HostnameOfController/cinder

3.7.1.3 - Step 3: Create the database and user

mysql -u root -p

mysql> CREATE DATABASE cinder;

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'dbu_cinder'@'localhost' \

IDENTIFIED BY 'MySQL_Cinder_Password';

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'dbu_cinder'@'%' \

IDENTIFIED BY 'MySQL_Cinder_Password';

3.7.1.4 - Step 4: Create the tables

su -s /bin/sh -c "cinder-manage db sync" cinder

3.7.1.5 - Step 5: Create user on the Identity service

keystone user-create --name=cinder --pass=Keystone_Cinder_Password [email protected]

keystone user-role-add --user=cinder --tenant=service --role=admin

3.7.1.6 - Step 6: Add information about the identity service and the message broker

vi /etc/cinder/cinder.conf

[DEFAULT]

#Set up the message broker

rpc_backend = rabbit

rabbit_host = IP/HostnameOfController

rabbit_userid = YourUserName

rabbit_password = StrongPassword

rabbit_virtual_host = NameVHost

#Add the permission to connect to the identity service

[keystone_authtoken]

auth_uri = http://IP/HostnameOfController:5000

auth_host = IP/HostnameOfController

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = cinder

admin_password = Keystone_Cinder_Password


3.7.1.7 - Step 7: Create the service and endpoint

keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"

keystone endpoint-create \

--service-id=$(keystone service-list | awk '/ volume / {print $2}') \

--publicurl=http://IP/HostnameOfController:8776/v1/%\(tenant_id\)s \

--internalurl=http://IP/HostnameOfController:8776/v1/%\(tenant_id\)s \

--adminurl=http://IP/HostnameOfController:8776/v1/%\(tenant_id\)s

3.7.1.8 - Step 8: Create the service and endpoint for the version 2

keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"

keystone endpoint-create \

--service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') \

--publicurl=http://IP/HostnameOfController:8776/v2/%\(tenant_id\)s \

--internalurl=http://IP/HostnameOfController:8776/v2/%\(tenant_id\)s \

--adminurl=http://IP/HostnameOfController:8776/v2/%\(tenant_id\)s
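The `%\(tenant_id\)s` part of each URL is not expanded by the shell (the backslashes prevent that); keystone stores it literally and substitutes the caller’s tenant ID into it per request. The substitution can be mimicked offline (host and tenant ID below are illustrative):

```shell
# Endpoint URL as stored by keystone (after the shell strips the backslashes)
template='http://controller:8776/v1/%(tenant_id)s'

# Hypothetical tenant ID, as keystone would fill in per request
tenant_id='032ff6f1056a4d82b51a87ff106c8185'

# Replace the placeholder with the concrete tenant ID
url=$(printf '%s\n' "$template" | sed "s/%(tenant_id)s/${tenant_id}/")
echo "$url"
```

This is why the v1 and v2 Block Storage endpoints differ only in the path prefix: the tenant-specific part is generated at request time.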

3.7.1.9 - Step 9: Restart all necessary services

service cinder-scheduler restart

service cinder-api restart

3.7.2 - On a storage node (can be done on any machine)

This part assumes that the partition sda3 is of type LVM.

3.7.2.1 - Step 1: Install the LVM package

apt-get install lvm2

3.7.2.2 - Step 2: Create a physical volume

pvcreate /dev/sda3

3.7.2.3 - Step 3: Create a volume group called “cinder-volumes”

Note: if cinder-volume is installed on more than one host, the name of the volume group should be different on every host.

vgcreate cinder-volumes /dev/sda3

3.7.2.4 - Step 4: Change the configuration of LVM

vi /etc/lvm/lvm.conf

devices {

...

filter = [ "a/sda1/", "a/sda3/", "r/.*/"]

...

}

3.7.2.5 - Step 5: Test the configuration

pvdisplay

3.7.2.6 - Step 6: Install the packages

apt-get install cinder-volume


3.7.2.7 - Step 7: Add information about the identity service and the message broker

vi /etc/cinder/cinder.conf

[DEFAULT]

#Set up the message broker

rpc_backend = rabbit

rabbit_host = IP/HostnameOfController

rabbit_userid = YourUserName

rabbit_password = StrongPassword

rabbit_virtual_host = NameVHost

#Configure the LVM back end

enabled_backends = lvmdriver-NameOfDriver

[lvmdriver-NameOfDriver]

volume_group = NameOfVolumeGroup

volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver

volume_backend_name = NameOfBackEnd

[keystone_authtoken]

auth_uri = http://IP/HostnameOfController:5000

auth_host = IP/HostnameOfController

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = cinder

admin_password = Keystone_Cinder_Password

glance_host = IP/HostnameOfController

[database]

connection = mysql://dbu_cinder:MySQL_Cinder_Password@IP/HostnameOfController/cinder

3.7.2.8 - Step 8: Restart the Cinder volume services

service cinder-volume restart

service tgt restart

3.8 - Orchestration: Heat

3.8.1 - Step 1: Install the packages

apt-get install heat-api heat-api-cfn heat-engine

3.8.2 - Step 2: Set up the connection to the database

vi /etc/heat/heat.conf

[database]

connection = mysql://dbu_heat:MySQL_Heat_Password@IP/HostnameOfController/heat

3.8.3 - Step 3: Create the database and user

mysql -u root -p

mysql> CREATE DATABASE heat;

mysql> GRANT ALL PRIVILEGES ON heat.* TO 'dbu_heat'@'localhost' \

IDENTIFIED BY 'MySQL_Heat_Password';

mysql> GRANT ALL PRIVILEGES ON heat.* TO 'dbu_heat'@'%' \

IDENTIFIED BY 'MySQL_Heat_Password';

3.8.4 - Step 4: Create the tables

su -s /bin/sh -c "heat-manage db_sync" heat

3.8.5 - Step 5: Change the configuration file

vi /etc/heat/heat.conf

[DEFAULT]

#logging

verbose = True

log_dir=/var/log/heat

#Rabbit information

rpc_backend = rabbit

rabbit_host = IP/HostnameOfController

rabbit_userid = YourUserName

rabbit_password = StrongPassword

rabbit_virtual_host = NameVHost

[keystone_authtoken]

auth_host = IP/HostnameOfController

auth_port = 35357

auth_protocol = http

auth_uri = http://IP/HostnameOfController:5000/v2.0

admin_tenant_name = service

admin_user = heat

admin_password = Keystone_Heat_Password

[ec2authtoken]

auth_uri = http://IP/HostnameOfController:5000/v2.0

3.8.6 - Step 6: Create user on the Identity service

keystone user-create --name=heat --pass=Keystone_Heat_Password [email protected]

keystone user-role-add --user=heat --tenant=service --role=admin

3.8.7 - Step 7: Create the service and endpoint

keystone service-create --name=heat --type=orchestration --description="Orchestration"

keystone endpoint-create \

--service-id=$(keystone service-list | awk '/ orchestration / {print $2}') \

--publicurl=http://IP/HostnameOfController:8004/v1/%\(tenant_id\)s \

--internalurl=http://IP/HostnameOfController:8004/v1/%\(tenant_id\)s \

--adminurl=http://IP/HostnameOfController:8004/v1/%\(tenant_id\)s

keystone service-create --name=heat-cfn --type=cloudformation \

--description="Orchestration CloudFormation"

keystone endpoint-create \

--service-id=$(keystone service-list | awk '/ cloudformation / {print $2}') \

--publicurl=http://IP/HostnameOfController:8000/v1 \


--internalurl=http://IP/HostnameOfController:8000/v1 \

--adminurl=http://IP/HostnameOfController:8000/v1

3.8.8 - Step 8: Create the heat_stack_user role.

This role is used as the default role for users created by the Orchestration module.

keystone role-create --name heat_stack_user

3.8.9 - Step 9: Setup the URL of the metadata server

vi /etc/heat/heat.conf

[DEFAULT]

...

# URL of the Heat metadata server. (string value)

heat_metadata_server_url = http://IP/HostnameOfController:8000

# URL of the Heat waitcondition server. (string value)

heat_waitcondition_server_url = http://IP/HostnameOfController:8000/v1/waitcondition

3.8.10 - Step 10: Restart all necessary services

service heat-api restart

service heat-api-cfn restart

service heat-engine restart


4 - Troubleshooting

When a problem occurs, it is important to remember that all errors are recorded in the logs (under /var/log), in a separate folder for each service. For instance, the logs for Nova are located in “/var/log/nova”, the logs for Neutron in “/var/log/neutron”, and so on. Most of the time the logs are quite explicit and the problem can be fixed quickly.

Error: Host not found
Possible solution: The host of the requested service cannot be found; check that the hostname can be resolved.

Error: AMQP server on controller:5672 is unreachable: Socket closed
Possible solution: The RabbitMQ information is not correct; check in the configuration file: the username, the password, the virtual host, and whether the hostname of the controller can be resolved.

Error: Cannot ping or ssh the instance
Possible solution: Check that the security group has an ICMP (respectively SSH) rule.

Error: I have installed a web server in the instance; I can ping it and ssh to it. A nmap of the instance shows port 80 open, but when I try to access the website it takes a long time and nothing happens.
Possible solution: Check the MTU of the virtual machine and make sure that you followed the section “6.4 - Force the MTU of the virtual machine”.


5 - Useful command

5.1 - General command

openssl rand -hex 10: generate a random password

ping [-s NUMBER]: test the connectivity; -s sets the size of the packet (useful to test the MTU)

tail -f: show the end of a file and any upcoming update

rabbitmqctl list_users: RabbitMQ, list the users

5.2 - Keystone

user-create --name NameOfUser --pass Password --email Email: create a user

endpoint-list: list the endpoints

endpoint-get NameOfEndpoint: information about one endpoint

role-list: list the roles

role-get NameOfRole: information about one role

service-list: list the services

service-get NameOfService: information about one service

tenant-list: list the tenants

tenant-get NameOfTenant: information about one tenant

user-list: list the users

user-get NameOfUser: information about one user

5.3 - Glance

The “glance” command is used to manage virtual images from the command line. The possible options and arguments are listed below:

image-create: create a new image. Options:

--name: name of the image for OpenStack

--disk-format: format of the image file: qcow2, raw, vhd, vmdk, vdi, iso, aki, ari, and ami

--container-format: format of the container (1): bare, ovf, aki and ami

--is-public: make the image available to all tenants

< LocationOfTheImage: read the image file from the given location

(1) Specify bare to indicate that the image file is not in a file format that contains metadata about the virtual machine. Although this field is currently required, it is not actually used by any of the OpenStack services and has no effect on system behaviour. Because the value is not used anywhere, it is safe to always specify bare as the container format [8].

Example:

sudo glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \

--container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img

6 - Tutorial

6.1 - Launch an instance

After starting an instance and assigning it a public IP address, the instance will still not be accessible from the outside network: the security group needs to be configured and the required ports opened.

6.1.1 - Command line

6.1.1.1 - Source OpenStack RC file

source /home/NameOfProject-openrc.sh

6.1.1.2 - Generate the key

ssh-keygen

6.1.1.3 - Add the public key to Nova

nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key

6.1.1.4 - Verify if the key has been added

nova keypair-list

6.1.1.5 - Check the list of flavours

nova flavor-list

6.1.1.6 - Check the list of images

nova image-list

6.1.1.7 - Check the list of networks

neutron net-list

6.1.1.8 - Check the list of security groups

nova secgroup-list

6.1.1.9 - Start the instance according to the previous information gathered

nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=IDofNetwork \

--security-group default --key-name demo-key NameOfInstance
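IDofNetwork can be filled in non-interactively by filtering the `neutron net-list` output with awk, just as the keystone endpoint commands earlier in this document do; the sample table and UUID below are made up for illustration:

```shell
# Illustrative `neutron net-list` output
net_table='+--------------------------------------+----------+----------------+
| id                                   | name     | subnets        |
+--------------------------------------+----------+----------------+
| 7c3c1d5e-9a2b-4f6d-8e1a-2b3c4d5e6f70 | demo-net | 192.168.1.0/24 |
+--------------------------------------+----------+----------------+'

# Extract the id of the row whose name column is demo-net
NET_ID=$(printf '%s\n' "$net_table" | awk '/ demo-net / {print $2}')
echo "$NET_ID"

# The boot command then becomes:
# nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=$NET_ID \
#     --security-group default --key-name demo-key NameOfInstance
```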

6.1.1.10 - Verify if the instance is started

nova list

6.1.1.11 - Get the URL of the VNC console

nova get-vnc-console NameOfInstance novnc


6.1.2 - Dashboard

Click on Project, then Compute, then Instances

Click on “Launch Instance”

On the “Details” tab:
- Set the availability zone (if more than one)
- Write the name of the instance
- Select the flavor (the resources that are given to the instance)
- Set the number of instances
- Select “Boot from image”
- Select your image

On the “Access and security” tab:
- If the image was made specifically for the cloud, select a key pair; if the image has a default username and password, you do not need to select any key
- If the key does not exist, click on the “+”
- Select the security group

On the “Networking” tab:
- Select the internal network

Click on the launch button

To see the virtual machine’s console, click on “More” and then “Console”.

The console does not behave well unless it is shown in full screen; therefore click on “Click here to show only console”.


6.2 - Provide public address to instances

6.2.1 - Command line

6.2.1.1 - Create a floating IP

neutron floatingip-create NameOfExternalNetwork

6.2.1.2 - Associate the floating IP to the instance

nova floating-ip-associate MyInstance IPAddress

6.2.2 - Dashboard

Assign an external IP address:
- Click on the arrow next to “More”
- Select “Associate Floating IP”

If no IP is available, click on the “+”.

Select the external network and click on “Allocate IP”.


Select the floating IP that was allocated in the previous step and the port to which the IP needs to be associated, then click on “Associate”.

6.3 - Create a new security group (ACL)

6.3.1 - Command line

6.3.1.1 - Create a security group

nova secgroup-create NameOfSecurityGroup "Description of the security group"

6.3.1.2 - Add a rule to allow ping

nova secgroup-add-rule NameOfSecurityGroup icmp -1 -1 0.0.0.0/0

6.3.1.3 - Add a rule to allow ssh

nova secgroup-add-rule NameOfSecurityGroup tcp 22 22 0.0.0.0/0
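The three commands above can be gathered into one small script. The group name web-sg and the extra web ports (80, 443) are illustrative additions, not part of the guide; the icmp and ssh rules are the same as the ones shown above.

```shell
# Sketch: create a security group allowing ping, SSH and (as an extra
# example) web traffic. "web-sg" is an example name.
SG=web-sg
nova secgroup-create "$SG" "Allow ICMP, SSH and web traffic"
nova secgroup-add-rule "$SG" icmp -1 -1 0.0.0.0/0   # allow ping
for PORT in 22 80 443; do
  nova secgroup-add-rule "$SG" tcp "$PORT" "$PORT" 0.0.0.0/0
done
```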

6.3.2 - Dashboard

6.3.2.1 - Go to the “Access & Security” tab

Click on Project > Compute > Access & Security


6.3.2.2 - Click on “Create Security Group”

6.3.2.3 - Specify the name (no spaces) and the description, then click “Create Security Group”

6.3.2.4 - Click on “Manage Rules”

6.3.2.5 - Click on “Add Rule”

6.3.2.6 - Select the appropriate options for the service running on the instance:
- Rule: a list of well-known services such as SSH, HTTP and so on, plus custom rules where the user can specify the port number
- Direction: “Ingress” (from outside to the VM) or “Egress” (from the VM to outside)
- Remote: the type of remote source
- CIDR: the address range the rule applies to


6.4 - Force the MTU of the virtual machine

6.4.1 - Set up the DHCP agent

vi /etc/neutron/dhcp_agent.ini

[DEFAULT]

...

#add the following line

dnsmasq_config_file=/etc/neutron/dnsmasq/dnsmasq-neutron.conf

6.4.2 - Create the dnsmasq-neutron.conf

vi /etc/neutron/dnsmasq/dnsmasq-neutron.conf

dhcp-option-force=26,1400
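The two steps above can be sketched as a short script. The paths are the ones used in this guide; the restart command assumes an Ubuntu host using the `service` wrapper, as elsewhere in this document, and is needed for the DHCP agent to pick up the new dnsmasq configuration.

```shell
# Sketch: point the DHCP agent at a dnsmasq config that forces MTU 1400
# (DHCP option 26), then restart the agent so the change takes effect.
mkdir -p /etc/neutron/dnsmasq
cat > /etc/neutron/dnsmasq/dnsmasq-neutron.conf <<'EOF'
# DHCP option 26 = interface MTU; 1400 leaves room for tunnel overhead
dhcp-option-force=26,1400
EOF
service neutron-dhcp-agent restart
```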

7 - Table of figures

Figure 1 - OpenStack overview [1] ............................... 3
Figure 4 - Network topology ..................................... 6
Figure 4 - Dashboard login ..................................... 27

8 - Table of tables

Table 1 - Hardware requirement .................................. 4
Table 2 - List of password ...................................... 5


9 - References

1. OpenStack. [Online]. Available from: https://www.openstack.org/.
2. OpenStack Foundation. OpenStack documentation - Chapter 5. Scaling. [Online]. Available from: http://docs.openstack.org/openstack-ops/content/scaling.html.
3. rabbitmqctl(1) manual page. [Online]. Available from: http://www.rabbitmq.com/man/rabbitmqctl.1.man.html.
4. Verify the Image Service installation. [Online]. Available from: http://docs.openstack.org/icehouse/install-guide/install/apt/content/glance-verify.html.
