
    TECHNICAL WHITE PAPER

Ubuntu OpenStack on VMware vSphere: A reference architecture for deploying OpenStack while limiting changes to existing infrastructure

    A collaboration between Canonical and VMware

    March 2014


    Summary

Canonical, the Ubuntu and OpenStack experts, and VMware, the virtualization experts, have combined their collective experience in customer-facing deployments. The team has built a series of reference architectures to help customers explore the benefits of OpenStack within their data centers.

The reference architecture in this paper is intended for organizations with existing VMware data center deployments or expertise who want to limit changes to their underlying VMware infrastructure, but see benefits in a common abstraction and orchestration layer, via OpenStack's open APIs and Dashboard, to control compute workloads.


UBUNTU OPENSTACK PLUS VMWARE: A PERFECT MATCH IN THE EVOLUTION OF THE DATA CENTER

Organizations strive for increased operational efficiency and agility within their data centers, which is why the old methodology of a single server per application evolved into server virtualization. This technology helped reduce data center hardware sprawl by consolidating multiple workloads per server, resulting in higher hardware utilization. Prior to virtualization, the procurement, racking, stacking, provisioning, and networking of hardware generated overhead and took time. VMware quickly established itself as the leader in this space with a range of solutions to suit different organizations' needs.

As the server virtualization footprint grew, organizations began to segregate and tailor deployments to various hypervisor stacks depending on cost, test and development workloads versus production deployments, compliance, etc. This led to virtual machine (VM) sprawl, and it became difficult to manage and scale applications running across the different hypervisors, servers, brands, etc.

    The advent of cloud computing, especially private clouds, promised to alleviate

    this problem by creating a coherent environment that is easier to manage and

    scale. The OpenStack project is one of the prime examples in this area.

OpenStack is an open source computing platform for public and private clouds. It is one of the largest and fastest growing open source projects to date. OpenStack takes a set of heterogeneous and isolated hypervisors (e.g. KVM, ESXi, Xen, LXC), storage and networks across a data center or multiple data centers and turns them into pools of resources, all managed and consumed via open APIs and a web-based dashboard. Ubuntu quickly established itself as the reference platform to develop and deploy OpenStack, and Canonical, the commercial sponsor of Ubuntu and platinum member of the OpenStack project, became the leader in helping organizations adopt and deploy Ubuntu OpenStack as their public or private cloud technology. VMware joined the OpenStack project as a gold member in 2012 and announced a collaborative partnership with Canonical in 2013. The goal of this partnership was to aid organizations in their adoption of OpenStack, especially in combining it with their existing VMware infrastructure.

OpenStack as a control layer above pools of resources in the data center has benefits; however, organizations have heavy investments, both financially and in staff technical competency, in established VMware technologies. So how can they reap the benefits of a next generation cloud platform in OpenStack while still getting the best out of their existing VMware hypervisor base? What is the best approach to educate their staff on OpenStack? OpenStack APIs allow users to customize and configure their environment down to the network level, and VMware NSX is one of the most advanced and feature-rich SDN solutions available today, working seamlessly with OpenStack, ESXi and KVM; but how can this be done without major disruptions? What changes are needed to applications to achieve an open cloud using multiple hypervisors, e.g. KVM for web tier apps and VMware ESXi for more heavyweight backend applications?


Given the above pressures and scenarios organizations face in their adoption of OpenStack, VMware and Canonical created a collection of OpenStack migration best practices based on our experiences together in the field. A high-level overview of the OpenStack migration options is given below, from the least to the most invasive:

1. Maintain the existing VMware vCenter technology stack and deploy OpenStack services as VMs running on top of VMware's ESXi hypervisor. To minimize changes to the established VMware infrastructure even further, deploy OpenStack nova-network rather than OpenStack Neutron with an SDN. This allows organizations to familiarize and educate themselves on OpenStack (APIs) while maintaining a consistent and known infrastructure. This environment is intended for proof of concept only.

2. Run OpenStack control services as hosts within VMware vCenter, but offer OpenStack compute options on multiple hypervisors, e.g. KVM and ESXi. Implement VMware NSX as the SDN for a richer network topology. Use OpenStack regions or host aggregates to allow users to choose which compute hypervisor to deploy their workload on (a brief CLI sketch follows this list). In this approach, developers learn to make their workloads/applications hypervisor-agnostic by moving from failover to fault-resistant, cloud-oriented designs. The data center infrastructure changes are minimally invasive.

3. Deploy OpenStack control services on bare-metal hardware or on an open source hypervisor such as KVM. Allow for multiple hypervisors (KVM, VMware ESXi, Xen, etc.) for OpenStack compute services and run VMware NSX as the SDN solution. This design encourages vendor diversity within the data center and turns a heterogeneous set of hypervisors, storage and network options into pools of resources available and configured on demand.
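As an illustration of the host-aggregate approach mentioned in option 2, the sketch below uses the Nova CLI of the Havana era to expose a KVM pool and a vSphere pool as separate availability zones. The aggregate names, host names, image and flavor are hypothetical and would need to match an actual deployment.

```
# Illustrative only: pool names, host names, image and flavor are placeholders.
nova aggregate-create kvm-pool kvm-az
nova aggregate-add-host kvm-pool kvm-compute-01

nova aggregate-create vsphere-pool vmware-az
nova aggregate-add-host vsphere-pool vsphere-compute-01

# A user then chooses the hypervisor pool when booting an instance:
nova boot --flavor m1.small --image precise-server \
    --availability-zone vmware-az backend-db-01
```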

In the next sections, we will outline the reference architecture specific to migration option number one. This migration option contains our recommended configuration, design, and implementation path, matching real-world deployments.

KEY ELEMENTS OF REFERENCE ARCHITECTURE

OpenStack Havana: The open source software for building private and public clouds

Ubuntu 12.04 Long Term Support: The reference operating system for OpenStack deployments and development

VMware vCenter version 5.1 or greater: The platform for managing VMware vSphere environments

    INTENDED AUDIENCE

    This paper assumes the reader is experienced with VMware vCenter and Ubuntu.

    The reader should be familiar with OpenStack services (Compute, Keystone, etc.)

    along with techniques to scale and segregate an OpenStack deployment.


    VMware vSphere Design

    OVERVIEW

The OpenStack components are installed as virtual machines in a vSphere cluster. This approach provides the following benefits:

• High availability via vSphere HA
• Better use of the hardware
• Flexibility to scale up and scale out easily as required
• Flexibility to adjust the specifications of each component (RAM, disk, vCPU, etc.)
• Faster deployment times

    OPENSTACK DESIGN

Logical Ubuntu OpenStack Cloud Design

[Figure: CLI and Horizon Dashboard clients access the OpenStack cloud through the Auth & API layer; Region One contains availability zones AZ1 and AZ2.]


Logical Ubuntu Cloud on vSphere Design

[Figure: A vSphere Management Cluster hosts the OpenStack control and support services as VMs (MAAS, Juju, Nagios, MySQL, RabbitMQ, Keystone, Nova Cloud Controller, Glance API, Cinder API, OpenStack Dashboard, Ceph RADOS Gateway, three Ceph nodes, and one nova-compute service per vSphere VM cluster). One or more vSphere VM clusters (VM Cluster 1 to N) run the OpenStack instances as vSphere VMs. All clusters attach to the virtual networks: the Management Network and the VM Network.]

Design Notes:

• A floating network (not shown) is optional.
• Each vSphere cluster is associated with its own nova-compute service (a minimal nova.conf sketch follows these notes). Multiple clusters should not be mapped to the same nova-compute, otherwise the clusters would be merged and appear as a single hypervisor, removing the option of placing the clusters in different OpenStack availability zones.
• Alternatively, one nova service and one nova.conf can serve both clusters, with each cluster represented as a separate nova-compute hypervisor instance to the OpenStack Nova scheduler. As of this writing, however, using one nova.conf for both clusters is not recommended, since there is no established method to assign the clusters to individual OpenStack availability zones.
• OpenStack component HA is achieved via Juju and Metal as a Service (MAAS).
• The OpenStack services shown in the Management Cluster can be distributed to other clusters depending on resource availability (not shown).
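The following is a minimal sketch of the per-cluster mapping, assuming the Havana VMwareVCDriver. The vCenter address, credentials and cluster name are placeholders; whether the values are set by hand or through deployment tooling is left open here.

```
# Minimal sketch (placeholder values): one nova-compute VM is pointed at one
# vSphere cluster through the [vmware] section of its nova.conf.
cat >> /etc/nova/nova.conf <<'EOF'
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
# vCenter endpoint and credentials (placeholders)
host_ip = 192.0.2.10
host_username = openstack@vsphere.local
host_password = changeme
# one vSphere cluster per nova-compute service in this design
cluster_name = VM-Cluster-1
EOF
sudo service nova-compute restart
```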


VMWARE ESXI HYPERVISORS

VM Specification:

VM Attribute             Specification
Number of CPUs           2
Memory                   4 GB
Number of vNIC ports     1 (Management network)
Disk 1                   20 GB
Disk 2                   20 GB

Network

OVERVIEW

Virtual networks exist to attach the VMs' vNICs to the right physical networks. These are the vSphere networks for the environment:

vSphere Network      Description
Management           Network for the Ubuntu Cloud components: SSH traffic to
                     access the Ubuntu Cloud components, internal traffic
                     between the Ubuntu Cloud components, PXE booting, iSCSI,
                     Ceph storage, and Ceph object storage
VM Network           Flat network for the OpenStack instances' traffic
VMware Management    Only for vSphere, not related to the OpenStack infrastructure

In this design, OpenStack Havana is implemented with nova-network. OpenStack Neutron plus VMware NSX would be a recommended next step, but was not selected in this design.

DHCP AND DNS FOR THE OPENSTACK COMPONENTS

MAAS dynamically manages DHCP and DNS for all the OpenStack nodes using the Management Network.

The MAAS node will also provide the Ubuntu Precise 12.04 LTS base images to the VMs in the Ubuntu Cloud via PXE boot through the same network.


    MANAGEMENT NETWORK ISOLATION

This design consists of one main network, called the Management Network. Depending on your network configuration, you can connect a cloud portal or clients to this network to access the OpenStack APIs from other networks via routing.

    For security reasons this network should be isolated and only accessible

    from trusted services like a portal or a management client machine.

Because this design runs entirely on top of VMware vSphere with nova-network, OpenStack security groups are not available. As of this writing, OpenStack Compute security group functionality is only achievable on vSphere when used in combination with the VMware NSX SDN solution.

Storage

Each availability zone should have a Tier 2 SAN with sufficient resources for the planned workload, available to be distributed via vSphere datastores to each vSphere cluster.

Notes:

• The vSphere datastores used for the instances should not be used for any other purpose.
• Disconnect from the ESXi hosts any datastores that will not be used for the instances (a datastore_regex sketch follows these notes): http://docs.openstack.org/havana/config-reference/content/vmware.html
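As a complementary, hedged sketch, nova-compute can also be restricted to the dedicated datastores by name; the datastore_regex option belongs to the Havana [vmware] section, and the pattern below is only an example.

```
# Illustrative only: the regular expression must match your own datastore names.
# Appended here for brevity; in practice add the option to the existing
# [vmware] section of nova.conf.
cat >> /etc/nova/nova.conf <<'EOF'

[vmware]
datastore_regex = openstack-.*
EOF
sudo service nova-compute restart
```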

    OPENSTACK INSTANCES STORAGE

    The OpenStack Instances are stored in a dedicated vSphere datastore.

BLOCK STORAGE WITH CINDER USING THE VMWARE DRIVER

OpenStack Cinder is handled using the VMware driver released with OpenStack Havana. Note: the current Cinder Juju charm needs manual configuration after deployment to set up the VMware driver.
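A minimal sketch of that manual step is shown below, using the Havana VMware VMDK driver options; the vCenter address and credentials are placeholders, and the exact procedure depends on the charm version in use.

```
# Illustrative only: placeholder vCenter address and credentials.
# Appended here for brevity; in practice set these in the existing [DEFAULT]
# section of cinder.conf.
cat >> /etc/cinder/cinder.conf <<'EOF'
[DEFAULT]
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = 192.0.2.10
vmware_host_username = openstack@vsphere.local
vmware_host_password = changeme
EOF
sudo service cinder-volume restart
```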

OBJECT STORAGE WITH CEPH RADOS GATEWAY

A minimal configuration of Object Storage is needed to deploy OpenStack instances via Juju. For that purpose, Ceph RADOS Gateway will be deployed with a default configuration in 3 VMs.
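A hedged sketch of such a deployment with the Juju charms of the period is shown below; the charm configuration (cluster fsid, monitor secret, OSD devices) is omitted and must be supplied per deployment.

```
# Illustrative only: ceph.yaml must carry the cluster fsid, monitor secret and
# OSD device settings for your environment.
juju deploy -n 3 ceph --config ceph.yaml
juju deploy ceph-radosgw
juju add-relation ceph-radosgw ceph        # gateway joins the Ceph cluster
juju add-relation ceph-radosgw keystone    # registers the object-store endpoint
```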

Ceph RADOS Gateway will serve as the frontend for the stored images, and OpenStack Glance will point to it.
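A minimal sketch of that wiring is given below, assuming Glance uses the Swift-compatible API exposed by the RADOS Gateway; the endpoint, account and key are placeholders, and in a Juju-driven deployment this is normally handled through charm relations.

```
# Illustrative only: placeholder gateway endpoint and credentials.
# Appended here for brevity; in practice set these in the existing [DEFAULT]
# section of glance-api.conf.
cat >> /etc/glance/glance-api.conf <<'EOF'
[DEFAULT]
default_store = swift
swift_store_auth_version = 1
swift_store_auth_address = http://radosgw.example.com/auth/v1.0/
swift_store_user = glance:swift
swift_store_key = changeme
swift_store_container = glance
swift_store_create_container_on_put = True
EOF
sudo service glance-api restart
```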


VM Specification

The recommended specification for the Ceph VMs:

VM Attribute             Specification
Number of CPUs           2
Memory                   4 GB
Number of vNIC ports     1 (Management network)
Disk 1                   20 GB
Disk 2                   20 GB

VM IMAGE STORAGE

The storage of the VM templates (images) is handled by OpenStack Glance. Glance provides multi-tenant image storage services for an OpenStack deployment.

In this design, to maximise availability of the images, Object Storage with Ceph RADOS Gateway will be used.


CONCLUSION

This OpenStack reference architecture provides a common abstraction and orchestration layer, via OpenStack open APIs and Dashboard, to control compute workloads while limiting changes to pre-existing VMware infrastructure. This approach allows organizations to extend the ROI of their infrastructure investment while developing and enhancing employees' skills around a next generation platform in OpenStack. The cost savings extend further as teams understand the OpenStack paradigm well enough to determine, early in the infrastructure migration process, which workloads/applications should remain legacy and which ones should be upgraded to cloud-centric, fault-tolerant designs.

About Canonical and Ubuntu OpenStack

Leading enterprises depend on Canonical to assist, guide and support them in making the most of their OpenStack-based production cloud offerings. Based on our experience of helping seven of the top 10 telcos and service providers, as well as numerous large organizations, deploy production clouds, we have created tightly integrated cloud technologies that minimise deployment risk and speed time to market.

    Ubuntu OpenStack pre-integrates all the infrastructure, software, tools and

    services that companies need to achieve cloud success. With a tried-and-tested

    reference architecture and deployment methodology, we can help enterprises

    deploy clouds faster, and ensure that cloud services meet user requirements

    for performance and availability.

As an integrated element of the Ubuntu OpenStack proposition, Canonical supports every stage of cloud deployment, from design and implementation to ongoing technical support. We provide companies with an efficient, production-ready and cost-effective route to the open-source cloud.

    For more information, and to get in touch, please visit: www.ubuntu.com/cloud


Canonical Limited 2014. Ubuntu, Kubuntu, Canonical and their associated logos are the registered trademarks of Canonical Limited. All other trademarks are the properties of their respective owners. Any information referred to in this document may change without notice and Canonical will not be held responsible for any such changes.

Canonical Limited, Registered in England and Wales, Company number 110334C. Registered Office: One Circular Road, Douglas, Isle of Man IM1 1SB. VAT Registration: GB 003 2322 47.