
Introductory Research on OpenStack

Alexander Chamberlain

May & June 2015

Submitted to: Dr. Emil Salib

1. Introduction

OpenStack is an open-source cloud computing platform. Commonly deployed in a data center, OpenStack consists of a multitude of parts that correspond to different services such as networking, storage, identity management, and the hypervisor. Each of these is a separate piece of OpenStack and can be run on a different server within the data center. [1] OpenStack is often compared with other virtualization technologies, such as VMware, Citrix, and Hyper-V; because of its open-source license, these technologies can be integrated into an OpenStack environment. This gives companies the ability to add another tool to their arsenal without having to invest in new specialized software and hardware. [2]

2. Deployment Options

When first researching how to deploy an OpenStack environment, one may become overwhelmed by the number of options available to the administrator. Almost every Linux distribution supports OpenStack, and each flavor offers its own nuances.

The easiest and quickest way to investigate OpenStack is DevStack, a program designed for development and testing of the platform by everyday users so that they can help further OpenStack. [3] DevStack bootstraps an OpenStack environment through a set of scripts to get the user up and running quickly (a minimal configuration sketch is given at the end of this section). This deployment option has its drawbacks. First, you are not able to administer the environment from the command line; you are only given access to the web API. Second, in order to customize the environment that is deployed, the administrator must have knowledge of Python to fully grasp what is occurring during the deployment. Finally, DevStack makes it difficult to launch a multi-node deployment and thus to build a production-like environment.

The second deployment method investigated was the Ubuntu OpenStack Installer, an Ubuntu-based program that creates a separate virtual environment running on top of a Ubuntu Desktop OS. [5] This second environment is called a container and is administered through a web-based console. The container is given an almost NAT-like connection to eth0 of the host OS, placing the container on an internal network separate from that of the host. The bridged connection appears on the host as the interface lxcbr0, which can be seen after running ifconfig or ip a. This is an all-in-one environment where, like DevStack, all of the OpenStack components run on the same machine. It differs from DevStack in that the administrator can see and interact with the different OpenStack components through the CLI by SSHing into their respective virtual machines. This environment was good for learning the basics of administering an OpenStack deployment, as the user can see the results of configuration changes made through the web interface reflected on the CLI of the OpenStack components. Perhaps the most important learning point in this environment was the location of both the Nova configuration files and the OpenStack log files.

Finally, the last deployment option investigated was the Mirantis OpenStack deployment. This deployment ran on four VirtualBox VMs, and its installation was made possible through scripts created by the Mirantis team. [6] This option is the most production-like, in that the different OpenStack components are separated onto three different VMs: the Controller, the Compute node, and the Storage node.
The VirtualBox networking environment created by these scripts is also closer to what would be found in an enterprise configuration, as each VM has three separate interfaces. These interfaces correspond to the External, Management, and Storage networks. The only issue encountered in this environment was the lack of a VNC console, meaning all created instances could only be interacted with through SSH.
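For readers who want to try the DevStack route described above, the following is a minimal, hedged sketch of a single-node DevStack bootstrap based on the project documentation [3][4]. The repository location reflects its 2015 home, and the passwords are illustrative placeholders, not values from our deployments.

    # Clone DevStack (repository location per the DevStack documentation [3][4])
    git clone https://github.com/openstack-dev/devstack
    cd devstack

    # Minimal local.conf; the passwords below are placeholders
    cat > local.conf <<'EOF'
    [[local|localrc]]
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    EOF

    # Bootstrap the all-in-one environment; ./unstack.sh tears it down
    ./stack.sh

The resulting Horizon dashboard is then reachable on the host's IP address, which matches the observation above that DevStack is administered primarily through the web interface.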

3. OpenStack Components

3.1. Horizon

Horizon is the web-based OpenStack administrator interface; it allows the administrator to make changes to the environment without having to go through the command line. From here we have access to the stored images, the configuration of the virtual machine network, and the ability to create and launch instances. We can also change our security settings so that we can interface with our created instances.

3.2. Controller

The controller node hosts the Horizon interface and sends configuration changes to the other components. [10]

OpenStack controller nodes contain: all OpenStack API services, all OpenStack schedulers, and the memcached service.
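As a quick sanity check, one hedged way to confirm that these services are actually present on a deployed controller node is to look for their processes from the controller's shell; the exact service names can vary by distribution and release.

    # List controller-side OpenStack processes; names may differ slightly per release
    ps aux | grep -E 'nova-api|nova-scheduler|memcached' | grep -v grep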

3.3. Compute

The compute node does just as it sounds: it handles all of the computing for the virtual environment. In our configurations this is where our hypervisor ran, and on that hypervisor our virtual instances ran as well. In our deployments the compute node also handled the networking component of the environment; it housed the virtual routers and switches that make communication with our created instances possible. [11]

3.4. Neutron

Neutron is the name given to the networking portion of the OpenStack environment; as previously mentioned, in our deployments it ran on the compute node. Neutron differs from other virtual networks in that it allows layer 3 routing on the virtual networks created (a brief command-line sketch of this appears at the end of this section). The project is one of the first SDN (Software Defined Networking) efforts created for virtual machines. [13]

3.5. Glance

The final OpenStack node type is the storage node, which runs Glance, the image service. This node is responsible for keeping track of the images used for launching instances. Not only does it store these images, it can also convert other image formats into a format suitable for the OpenStack environment. [12]

3.6. MySQL

A MySQL database is created on each of the nodes in order to track all of the components. These databases hold the records for the OS images, the virtual hard disks of the instances, and the login information for the OpenStack environment. [14]

3.7. RabbitMQ

RabbitMQ is a messaging technology that brokers the connections between the different components, allowing the different nodes to communicate with each other. RabbitMQ sits between the components and controls the flow of information between them. [15]

3.8. Keystone

The Keystone component is the identity service provider for the OpenStack environment. Each component is verified by Keystone, ensuring that no component has falsely joined. It also acts as the administrator's security layer, because it issues the admin credential file that allows OpenStack to be administered through the CLI; without the proper credential file the CLI cannot be used. [16]

3.9. Instances - Virtual Machines

An instance is a virtual machine that has been created in the OpenStack environment. It runs on the hypervisor on the compute node, and its performance settings are controlled through the flavor chosen during its creation. Instances are subject to networking rules, which must be configured to allow outside access.
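To illustrate the layer 3 capability described in Section 3.4, the following is a hedged sketch using the neutron CLI of that era, run after sourcing the admin credentials (see Section 4.4). The network, subnet, and router names are made up for the example, and the external network name would need to match the one in your deployment.

    # Create a tenant network and subnet (names and CIDR are illustrative)
    neutron net-create demo-net
    neutron subnet-create demo-net 192.168.50.0/24 --name demo-subnet

    # Create a virtual router, attach the subnet, and set the external gateway
    neutron router-create demo-router
    neutron router-interface-add demo-router demo-subnet
    neutron router-gateway-set demo-router <external-network-name>

Instances attached to demo-net can then reach the external network through the virtual router, which is the layer 3 behaviour that distinguishes Neutron from a plain layer 2 virtual switch.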

4. Methodology

This section covers the set-up and configuration of the second and third environments investigated, as we spent the most time with these. We will first go over deploying the two environments and then explain administering them so that the created instances can be interacted with.

4.1. OpenStack Installer [7]

It is important to note that you should only use the stable release of the program; when attempting this with the latest release we ran into problems when leaving the environment running overnight. When we returned in the morning, things would be broken and an uninstall/reinstall was needed to get up and running again.

4.1.1. First, add the OpenStack Installer repository to your host machine.
4.1.1.1. sudo apt-add-repository ppa:cloud-installer/stable
4.1.2. Then update the software repositories.
4.1.2.1. sudo apt-get update
4.1.3. After installing the deployment software, we want to install some monitoring tools that will allow us to track the progress of the installation and investigate any potential crashes or hangs. These tools include nload (a CLI network speed analyzer), htop (a CLI task manager), and the LXC web panel for web-based container administration.
4.1.3.1. sudo apt-get install nload
4.1.3.2. sudo apt-get install htop
4.1.3.3. wget http://lxc-webpanel.github.io/tools/install.sh -O - | bash
4.1.3.3.1. This command must be run as root.
4.1.3.3.2. UN: admin Pass: admin
4.1.4. Once these monitoring tools are in place, launch both nload and htop so that we can monitor the next step when we install OpenStack.
4.1.5. Once that is complete we can begin the installation of the deployment software.
4.1.5.1. sudo openstack-install
4.1.6. After the installation is complete, the LXC web panel will be running in the background. We can now navigate to http://localhost:5000 to view our container statistics. Our OpenStack environment is not up and running yet, as all of the different components must still be deployed.
4.1.6.1. We can view the status of our deployment by using the command openstack-status, which opens a terminal view showing the IP addresses of the different OpenStack components.
4.1.7. It is important to note the commands to kill and uninstall the OpenStack deployment, so I have included them here:
4.1.7.1. sudo openstack-install -k
4.1.7.2. sudo openstack-install -u

4.2. Mirantis OpenStack on VirtualBox [6]

4.2.1. First we installed VirtualBox. It is important to get the latest version of VirtualBox, as the scripts we will use later contain commands that earlier versions of VirtualBox do not support.
4.2.1.1. sudo apt-get install virtualbox
4.2.2. Once completed we could then install the VirtualBox Extension Pack. Make sure the extension pack you download matches the version number of your VirtualBox installation.
4.2.2.1. https://www.virtualbox.org/wiki/Downloads [8]
4.2.3. While the extension pack is downloading we can also download the VirtualBox scripts and the Mirantis Fuel Master .iso.
4.2.3.1. https://docs.mirantis.com/openstack/fuel/fuel-master/#downloads [9]
4.2.4. Once both are downloaded, extract the VirtualBox scripts using your preferred extraction method. Once extracted, copy the Mirantis .iso into the ISO folder found inside the folder containing the VirtualBox scripts.
4.2.4.1. Once the copying is complete we can launch the script that prepares our VirtualBox networking environment and begins installing our Fuel Master node. We accomplish this by changing to the directory of the VirtualBox scripts and running ./launch.sh. It is important to note that the VirtualBox scripts folder contains additional deployment options.
These options were created for users with more RAM available and allow for an HA (High Availability) OpenStack deployment, which is what one would expect to find in an enterprise environment.
4.2.4.2. DO NOT EXIT OUT OF THE SCRIPT DURING THE INSTALL! It may appear to have frozen; just give it some time.
4.2.5. Once the Fuel Master node has finished installing (this can take 2-3 hours depending on the machine), we can access the Fuel web UI to begin deploying our different nodes. The slave nodes should simply appear, as they have been created in the background and configured through the scripts to boot over the network (PXE) option.
4.2.5.1. Navigate to http://10.20.0.2:8000/ and log in with UN: admin Pass: admin. [10]
4.2.5.2. You will then be brought to the main page, where you will click New OpenStack Environment.
Figure 1: Main webpage of the Fuel Master node.
4.2.5.3. Click through the pop-up asking to send diagnostic information back to Mirantis, with your choices.
4.2.5.4. We can then give a name to our OpenStack environment and configure the different deployment options.
Figure 2: Naming and selecting the OpenStack distribution release.
4.2.5.5. Then we decide whether we are launching an HA environment; select Non-HA.
Figure 3: Selecting HA or Non-HA for the OpenStack deployment.
4.2.5.6. Next we select the type of hypervisor we will be using; in our case we use QEMU because we are running on VirtualBox.
Figure 4: Choosing the proper hypervisor for our environment.
4.2.5.7. Then we choose our networking environment. We chose Neutron, as this was the latest networking release and would allow us to investigate the different layer 3 networking options.
Figure 5: Choosing our networking environment.
4.2.5.8. Next we choose our storage options; these we left at their default settings.
Figure 6: Choosing the storage options.
4.2.5.9. Finally, we were given the option to install additional services, which we chose not to.
Figure 7: Choosing additional services to be installed.
4.2.5.10. Launch your environment.
Figure 8: Finish deploying your environment.
4.2.5.11. If all went to plan, the script should by now have finished creating and configuring the nodes for deployment; there should be three. We can check this by looking at the top of our OpenStack administration screen. We can then begin adding nodes to our environment using the green Add Nodes button.
Figure 9: Created OpenStack environment node deployment interface.
4.2.5.12. In order to have a properly configured OpenStack environment we must have three total nodes, with the roles Controller, Compute, and Storage.
Figure 10: Selecting the role for the nodes in the environment.

Figure 11: One node selected to be the controller, the other two awaiting assignment of their roles.
4.2.5.13. Once the roles have been assigned we can deploy the nodes by clicking the blue Deploy Changes button. Before deploying, it is important to give the nodes names, or they will default to Untitled.
Figure 12: Deploying roles to nodes.

Figure 13: Confirming deployment of nodes.
4.2.5.14. Once deployment has begun we can track its status in the OpenStack environment administration screen. It occurs in this order: first, all nodes are installed with your chosen Linux distribution; next, the controller node has the OpenStack software installed on it; finally, the last two nodes are installed with the software for their configured roles.
4.2.5.15. Once this is complete we can access the OpenStack administration console by navigating to http://172.16.0.2/horizon.
Figure 14: OpenStack web login page.

4.3. Configuring the OpenStack environment for host-to-instance access

4.3.1. First we must launch an instance from the preconfigured image. In the case of the Mirantis deployment we were given a lightweight Linux distribution, CirrOS. This image has a username and password preconfigured, contrary to the image given in the all-in-one deployment, where we needed to generate a public key in order to gain access to the virtual machine. Once that key was generated we could then configure the user with a password for easier web console access.
4.3.1.1. In order to launch an instance, navigate to the Instances web console found under the Project > Compute drop-down menu and click the Launch Instance button.
Figure 15: Compute instances web console.
4.3.1.2. Once the Launch Instance configuration box appears, fill out the details: give the instance a name, choose its flavor (the virtual machine specifications), and select which image to boot from.
Figure 16: Details pop-up for the instance about to be created.
4.3.1.3. Next, navigate to the Access & Security tab for the instance. Here we will create a key pair for the instance about to be launched. Using the first command shown in Figure 18, create a new public key to allow SSH access to the VM. Be sure to use the generated .key.pub file, which should be opened with gedit, as nano made copying and pasting difficult. Paste the public key into the space provided.
Figure 17: Access and Security tab for the instance about to be created.
Figure 18: Import Key Pair for the instance.
4.3.1.4. Finally, select the proper network for the instance. Be sure to choose the internal network or the instance will not be able to be accessed externally. Once ready, click the Launch button to start the creation process; in a minute or so the VM should be in a ready state.
Figure 19: Choosing the proper network under the Launch Instance tab.
Figure 20: Once the instance is ready we can manage it by clicking on the instance name. This table also shows the instance's status.
4.3.1.5. Under the Access & Security menu we can configure the firewall rules for the VM network. In order to communicate with the VMs we must add two rules: the first allowing ICMP and the second allowing SSH access (an equivalent command-line sketch is given after Figure 25 below).
Figure 21: Access and Security tab found under the Project and Compute drop-down menu options.

Figure 22: Adding the rules that allow the instance to be accessed from the external network.

Figure 23: Final outcome of the rules for the default security group.
4.3.1.6. Next we configure our internal network to properly route traffic. Navigate to the network name for the internal network (found under the Project > Network drop-down menu), expand the Subnet Detail section, and enter your desired DNS servers.
Figure 24: Network configuration page found under the Project > Network drop-down menu.
Figure 25: Proper configuration of the subnet DNS servers.
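For reference, the ICMP and SSH rules shown in Figures 21 through 23 can also be added from the command line once the admin credentials have been sourced (see Section 4.4). This is only a hedged sketch using the nova client of that era; "default" is the security group used above.

    # Allow ICMP (ping) and SSH (TCP 22) from any address into the default security group
    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
    nova secgroup-add-rule default tcp 22 22 0.0.0.0/0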

4.3.1.7. Once the previous configurations are in place, we can assign an external IP to the instance we created. We accomplish this by going to the Actions drop-down menu and selecting Associate Floating IP, then choosing an external IP from the pool of created floating IPs. Once this has been accomplished you will see that the instance now has two different IPs, as can be seen for the CirrOS instance.
Figure 26: The Actions drop-down menu for the instance gives options for interacting with the VM.
Figure 27: Configuration box for associating a floating IP.
Figure 28: Final configuration box for allocating a floating IP.
4.3.1.8. With this all in place we are now ready to ping and SSH into our created VM. First we ping the instance's floating IP to verify basic connectivity; if all of the previous steps were followed, this should result in an ICMP reply.
4.3.1.9. If the ping to the instance succeeded, we can then test SSH connectivity. Using the second command found in Figure 18, we can use the private key that was made alongside our public key to achieve SSH access. One must be sure to use the proper username, which in the case of the CirrOS image is cirros. In our case we entered ssh -i t2.key cirros@<floating IP>. (A consolidated command-line sketch of these verification steps is given under Section 4.4 below.)
4.3.1.10. We can then verify that the instance itself does not know it has been assigned a floating IP by running the command ip a inside the instance.
4.3.1.11. One thing not covered in any of the previous steps is the ability to access a running instance through the OpenStack administrator screen. This is useful once a password has been set for the user, eliminating the need to SSH into the instance every time an administrator needs access.
Figure 29: noVNC console access to the instance.

4.4. Administering through the CLI

4.4.1. As previously mentioned, OpenStack can be administered not only through the web console but also through the CLI.
4.4.1.1. In order to use any of the OpenStack commands we must first source the admin credentials into our terminal. We can acquire these credentials by going to the Access & Security drop-down menu and navigating to the API Access tab, as seen in Figure 30. Finally, click on the OpenStack RC file, which will launch a download of the source file needed, and then scp this file to a directory on the node you would like to administer from.
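As a bridge between the web-console workflow of Section 4.3 and the CLI administration described here, the floating-IP association and connectivity checks might look like the following hedged sketch. It assumes the downloaded RC file has already been sourced; the file name admin-openrc.sh, the external pool name, and the angle-bracket placeholders are illustrative, while t2.key and the cirros username come from the steps above.

    # Source the credentials downloaded from Horizon (file name is a placeholder)
    source admin-openrc.sh

    # Allocate a floating IP from the external pool and attach it to the instance
    nova floating-ip-create <external-pool-name>
    nova add-floating-ip <instance-name> <allocated-floating-ip>

    # Verify connectivity from the host, then log in with the private key
    ping -c 4 <allocated-floating-ip>
    ssh -i t2.key cirros@<allocated-floating-ip>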

Figure 30: API Access tab of the Access and Security drop-down menu.
4.4.1.2. Once the file is copied to the proper node, source it using the command source followed by the name of the downloaded RC file.
4.4.1.3. Once the file has been sourced we can begin to use nova commands. First we use the command nova list, which shows all of the running instances.
Figure 31: Output of the command nova list.
4.4.1.4. Next we use the command nova network-list, which outputs all of the configured networks for our OpenStack environment.
Figure 32: Output of the command nova network-list.
4.4.1.5. Finally, we can view the log files found under /var/log/nova/ using nano, and the configuration files for the environment under /etc/nova/.

5. Conclusion

Upon completing this research, we have gained a better understanding of the OpenStack environment. We initially began this project to compare it to the VMware vSphere and Horizon software suites. After the time spent with the OpenStack software, I can confidently say that they are two completely different beasts. OpenStack is software meant for creating and running thousands of VMs in a public cloud. These VMs are then leased out to developers, which allows for easy creation of public applications. vSphere, on the other hand, is a private cloud, meant to operate inside of a business and generally to be accessed locally or through a VPN. vSphere is also a more user-friendly experience, where getting up and running is as simple as installing the ESXi hypervisor and navigating to the assigned IP address. OpenStack, by contrast, gives you so many options to deploy its software that it is almost overwhelming: from having to choose which Linux distribution to use, to running it on one machine or across multiple nodes, the options are endless. Furthermore, now that vSphere and Horizon support access from Linux-based clients, OpenStack's advantage lessens. The saving grace of OpenStack is the fact that it is freely available to the public, whereas VMware charges an exorbitant amount of money for its software, giving it a high barrier to entry. I would recommend that anyone continuing this project delve deeper into the layer 3 routing of OpenStack, as this is another major difference from vSphere, which only supports layer 2 networking.

6. References

[1] https://en.wikipedia.org/wiki/OpenStack
[2] https://www.openstack.org/foundation/companies/
[3] https://wiki.openstack.org/wiki/DevStack
[4] http://docs.openstack.org/developer/devstack/
[5] https://github.com/Ubuntu-Solutions-Engineering/openstack-installer
[6] https://docs.mirantis.com/openstack/fuel/fuel-6.1/virtualbox.html
[7] http://ubuntu-cloud-installer.readthedocs.org/en/stable/
[8] https://www.virtualbox.org/wiki/Downloads
[9] https://docs.mirantis.com/openstack/fuel/fuel-master/#downloads
[10] http://docs.openstack.org/icehouse/training-guides/content/associate-controller-node.html
[11] http://docs.openstack.org/icehouse/training-guides/content/associate-network-node.html
[12] http://docs.openstack.org/icehouse/training-guides/content/associate-storage-node.html
[13] https://wiki.openstack.org/wiki/Neutron
[14] http://docs.openstack.org/havana/install-guide/install/apt/content/basics-database.html
[15] http://docs.openstack.org/developer/keystone/
[16] http://docs.openstack.org/developer/keystone/