OpenStack@NBU (Meetup presentation: files.meetup.com/13577132/07.Nikolay_Milovanov_OpenStack_NBU.pdf)


OpenStack@NBU, or How Cloud Technologies Help Us Produce Better Students

Nikolay Milovanov, [email protected]

NBU

• First and largest private university in Bulgaria
• First to introduce a credit system
• Started out from two apartments
• Now has about 14,000 students
• Mostly a humanities university
• Clear separation between administration and academia
• Technology programs are in:

– Telecommunications
– Informatics

• OpenStack has been hosted by the Telecommunications department in Building 2, lab 701a

Our issues

• Various courses

• Some research

• Some labs

• Equipment coming and going, mostly as assets from certain “research” projects

• An ideology of promoting student-to-business collaboration through the so-called studio projects initiative

• All of these depend on compute, networking and storage resources in a rather “undefined”, or if you wish elastic, ad hoc way

Our current solution

• Create a common design that incorporates all our labs, student halls and current needs

• Interconnect it over BGP with the NBU IT infrastructure
• The design is just a set of rules covering connectivity between the various labs, the VLANs & IP ranges to be used for the purpose, plus certain core network rules

• Interconnect all that to an OpenStack setup so that:
– each of our labs can benefit from its compute, networking or storage resources
– students are typically divided into teams
– each team has a user and a tenant with a certain quota*
– researchers or academic staff can also have tenants if they wish

*In certain cases, depending on the course, each student has their own user and tenant
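The per-team setup described above can be sketched with the Juno-era CLI clients; the team name, password and quota numbers below are illustrative placeholders, not our actual values.

```shell
# Hypothetical per-team tenant + user + quota setup (keystone v2 / nova / neutron CLIs)
keystone tenant-create --name team01 --description "Studio project team 1"
TENANT_ID=$(keystone tenant-get team01 | awk '/ id /{print $4}')
keystone user-create --name team01 --tenant-id "$TENANT_ID" --pass "s3cret"
# Cap what one team can consume (example numbers):
nova quota-update --instances 10 --cores 20 --ram 40960 "$TENANT_ID"
neutron quota-update --tenant-id "$TENANT_ID" --floatingip 5
```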

Current diagram

About OpenStack

• The deployment is:
– on CentOS 7

– RDO based

– Started with Icehouse RC2

– Currently on Juno RC2

– Using RDO mostly to easily add hypervisors/network nodes and resolve dependencies

– Currently still no custom patches

– Despite that… OpenStack is a tricky mistress

OpenStack compute

• Based on a couple of servers that came in an ad hoc way

• Nothing fancy, just a bunch of crappy hardware donated by different people or that came from various research projects

• As a hypervisor we stick to libvirt/KVM
• Since servers are shared between compute, networking and storage, we don’t give all our resources to the nova scheduler

• We guard our precious with cgroups so others can’t steal it ;)
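A minimal sketch of that guarding, assuming cgroup v1 with the CentOS 7 libcgroup tools; the group name, the limits and glusterfsd as the protected daemon are illustrative assumptions, not the actual production setup.

```shell
# Illustrative only: reserve CPU weight and cap memory for the storage daemons
cgcreate -g cpu,memory:/storage
cgset -r cpu.shares=2048 storage              # twice the default weight of 1024
cgset -r memory.limit_in_bytes=8G storage     # hard memory cap for the group
cgclassify -g cpu,memory:/storage $(pidof glusterfsd)
```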

OpenStack networking

• Neutron ML2
• Open vSwitch (OVS)
• The internal overlay is VXLAN
• In CentOS, interfaces come with somewhat strange names

– If you have different hardware cards, as a practice make them bonds (even if you have a single port), and use different bonds for different things

– This will ease your deployment and allow you to scale up or down later if required

• Multiple VLAN-based external networks
– to interconnect our OpenStack with the various labs that we have
– plus external Internet
– IPv4 + IPv6 L3 networking

• OpenStack virtual routing is not distributed, but it is highly available (HA L3 networking)
– Using the native OpenStack HA router based on VRRP and keepalived

• OpenStack is able to do some of the more Network-as-a-Service stuff like VPNaaS, FWaaS or LBaaS
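The VLAN external networks and the HA router can be sketched with the Juno-era neutron CLI; the network names, physnet label, VLAN ID and prefix below are placeholders, not our real values.

```shell
# VLAN-backed external network (placeholders throughout)
neutron net-create lab-ext --router:external True \
  --provider:network_type vlan \
  --provider:physical_network physnet1 \
  --provider:segmentation_id 701
neutron subnet-create lab-ext 203.0.113.0/24 --name lab-ext-v4 \
  --disable-dhcp --gateway 203.0.113.1
# HA (VRRP/keepalived) virtual router rather than a distributed one
neutron router-create lab-router --ha True
neutron router-gateway-set lab-router lab-ext
```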

Storage

• Mostly local and ephemeral due to hardware constraints

• We have experimented with GlusterFS and Ceph as block storage

• Unfortunately, no resources for a 10G storage network
• Thus, for now, the solution will converge on GlusterFS being used as persistent block and object storage
– Our observation is that it is slow but stable in rugged conditions
– So block is used just for attaching storage to ephemeral VMs with primary drives on the current host
– No booting from block, no live migration
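The GlusterFS block backend described above can be sketched as a Juno-era cinder.conf fragment; the backend name and the shares file path are assumptions:

```ini
[DEFAULT]
enabled_backends = gluster

[gluster]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
volume_backend_name = GLUSTER
```

/etc/cinder/glusterfs_shares then lists one host:/volume share per line.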

Monitoring

• We use SNMP (able to monitor regular stuff + libvirt)

• We export NetFlow from OVS and some of our network devices

• We gather Ceilometer statistics* -> btw, this is quite a crappy piece of code…

• We gather logs
• Most of this lands in SevOne NMS/PLA with the xStats adapter for Ceilometer
– In my spare time I also work as a Cloud Solutions Architect for SevOne, so no surprise here ;)
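The NetFlow export from OVS mentioned above amounts to a single ovs-vsctl call; the bridge name and collector address here are placeholders:

```shell
# Attach a NetFlow exporter to the integration bridge (placeholder collector)
ovs-vsctl -- set Bridge br-int netflow=@nf \
  -- --id=@nf create NetFlow targets=\"192.0.2.10:2055\" active-timeout=60
```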

Operation considerations

• Upgrades are nice*
– However, we have not had any major issues there

• HA L3 networking was added in Juno

• The DB is all women
– MariaDB + Galera

• We still do not distribute the control plane due to lack of resources
– otherwise, we have tried and tested this
– it works well enough for our needs

Where we would like to go

• Overall goal – quality education matters
• Nowadays no student can really become an engineer without proper access to equipment, labs and resources
– Having a personal laptop helps, but it is not sufficient
– So there should simply be no student complaining that there are no resources to learn or study, do exercises or collaborate with other students
– It might sound strange, but sometimes infrastructure is the key to producing a better student
• Collaboration is crucial, and working as a team on well-defined long-term projects is important
– So students need common, fully accessible cloud computing resources on demand over a lifecycle of up to 6 years (Bachelor’s + Master’s)
– …and we should simply give them those
– It won’t be all roses, but hey ;) we can give it a try

Technology Wise

• OpenStack will move towards Liberty once RC2 is out and we prove to ourselves that it works

• We will grow our hypervisor and storage capacity
• Designate (DNSaaS) is also a candidate for rapid adoption
– Delayed mostly because of too much other work on both our side and the NBU IT department’s side
• Magnum (Containers as a Service) will be something useful for us
• In general we would like to move the whole setup towards OpenStack + OpenContrail or OpenStack + ONOS
– Somewhat seduced by the idea of using and showing service chaining in the proper way
– vSwitch is good, but vRouter is the “salt” in this business
– The ability to do exercises with the students on how to extend MPLS towards the DC in a scalable and reliable way

Questions

Nikolay Milovanov

[email protected]

EXAMPLE OPENSTACK-RELATED CLOUD LAB EXERCISES

Sample labs – Introduction to OpenStack

– Create a virtual network and a router, link them together, bridge to an external network, add some VMs, and assign them floating IP addresses

– Do this with IPv4 & IPv6

– Try some snapshots

– Attach some block storage

– Store stuff in object storage

– Do all that from the GUI and through the OpenStack CLI clients
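The steps above, as a Juno-era CLI sketch; the names, the CIDR and the external network ext-net are assumptions:

```shell
neutron net-create student-net
neutron subnet-create student-net 10.10.0.0/24 --name student-v4
neutron router-create student-rtr
neutron router-interface-add student-rtr student-v4
neutron router-gateway-set student-rtr ext-net     # assumes an existing external net
NET_ID=$(neutron net-show -F id -f value student-net)
nova boot --flavor m1.small --image cirros --nic net-id="$NET_ID" vm1
neutron floatingip-create ext-net                  # then associate it with vm1's port
```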

Sample labs – routing

• Instantiate 3 OpenWrt virtual machines
– Log in and deploy the Quagga virtual router into them
– Deploy a RIP/OSPF routing protocol
– Interconnect them over iBGP
– Grab an external BGP feed and enjoy having a full Internet BGP table
– Try to redistribute it into OSPF ;) See what happens
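One router's Quagga configuration for this lab might look like the fragment below; the AS number and neighbor addresses are placeholders, and the last line is exactly the part that goes spectacularly wrong with a full table:

```
! bgpd/ospfd fragment -- all numbers are placeholders
router bgp 65001
 neighbor 10.0.0.2 remote-as 65001
 neighbor 10.0.0.3 remote-as 65001
!
router ospf
 network 10.0.0.0/24 area 0
 redistribute bgp
```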

Sample labs – network management

• Continue using the 3 OpenWrt VMs from the previous exercise
– Add SNMP
– Add sFlow
– Instantiate a network monitoring VM
• Discover your devices over SNMP
• Start exporting flows towards it

– Does not work?
• Check your OpenStack security groups
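The usual culprit when the export “does not work”: the default security group drops the inbound UDP. A sketch, assuming the standard SNMP and sFlow ports and a placeholder source range:

```shell
neutron security-group-rule-create default --direction ingress \
  --protocol udp --port-range-min 161 --port-range-max 161 \
  --remote-ip-prefix 10.10.0.0/24     # SNMP
neutron security-group-rule-create default --direction ingress \
  --protocol udp --port-range-min 6343 --port-range-max 6343 \
  --remote-ip-prefix 10.10.0.0/24     # sFlow
```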

Sample labs – SDN: ephemeral traffic forwarding with ONOS

• Instantiate ONOS and Mininet VMs:
– Create a simple Mininet network:
• 4 OpenFlow switches
• 2 hosts
• Point your SDN controller towards ONOS
– Verify that you can see your topology in ONOS
– Play with the GUI options
– Try to ping host B from host A
• Does not work? Well…
– Add an intent allowing traffic from the MAC of host A to the MAC of host B and vice versa ;)
– Verify that you can monitor in ONOS how much traffic has been forwarded over the rules you just created
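A sketch of the Mininet side; the ONOS controller IP is a placeholder, and mn's canned linear topology attaches one host per switch, so treat h1 and h4 as host A and host B:

```shell
sudo mn --topo linear,4 --mac --controller remote,ip=192.0.2.20,port=6633
# mininet> h1 ping h4    # fails until a host-to-host intent exists in ONOS
```

In the ONOS CLI, add-host-intent between the two host IDs installs the forwarding rules, after which the ping succeeds.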

Sample labs – IPv6

• Instantiate an OpenStack Ubuntu VM
• Sign up at tunnelbroker.net
• Create a 6in4 tunnel between your VM and the Hurricane Electric Frankfurt PoP
• Ensure that you have two-way IPv6 connectivity from your VM
– ping6 ipv6.google.com
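The tunnel setup on the VM boils down to a few ip commands; all addresses below are placeholders (tunnelbroker.net shows the real ones for your tunnel, and behind a floating IP the local address must be the VM's fixed IP):

```shell
ip tunnel add he-ipv6 mode sit remote 216.66.80.30 local 10.10.0.5 ttl 255
ip link set he-ipv6 up
ip addr add 2001:db8:1f0a::2/64 dev he-ipv6
ip -6 route add default dev he-ipv6
ping6 -c 3 ipv6.google.com
```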