Presentation Robayet Nasim (IEEE CLOUD 2015)
NETWORK CENTRIC PERFORMANCE IMPROVEMENT FOR LIVE VM
MIGRATION
ROBAYET NASIM, ANDREAS KASSLER
KARLSTAD UNIVERSITY [email protected]
IEEE CLOUD 2015, NEW YORK, USA, JUNE 2015.
TRAFFIC COEXISTENCE IN DATACENTER NETWORKS
[Figure: fat-tree datacenter topology, K=4 pods with K switches each, aggregation switches, racks of servers hosting VMs; VM-to-VM traffic and migration traffic coexist on the same links.]
Migrate communicating VMs towards the same server to maximize performance
Latency depends on:
– VM placement
– Background traffic from other VMs
– Migration traffic
– Queuing strategies of datacenter switches
Implemented a public transport app based on Pub/Sub:
– Evaluate latency in a local OpenStack testbed
– Different queuing strategies
HOW TO CONTROL VM TO VM LATENCY?
Evaluate the impact of utilizing multiple paths (MPTCP) and different queuing strategies for live VM migration traffic.
Architecture for flexible prioritization of:
– VM-to-VM traffic
– Live VM migration traffic
Based on OpenStack Neutron and the OpenDaylight SDN controller.
Uses Open vSwitch with different queuing strategies configurable via ODL:
– HTB, CODEL, FQ_CODEL (fair queuing controlled delay, not supported by traditional datacenter switches)
• See: QoS Enabled WiFi MAC Layer Processing as an Example of a NFV Service, Jonathan Vestin, Andreas Kassler, in: Proceedings of IEEE NetSoft 2015, April 13-17 2015, London, UK.
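As an illustration (not from the slides): on a Linux host, switching to one of these queuing disciplines is a one-line tc command; the interface name em2 is an assumption matching the VM-to-VM path used later in the testbed.

```shell
# Replace the default qdisc on the VM-to-VM interface with fq_codel
# (fair queuing + CoDel AQM); the interface name em2 is an assumption.
tc qdisc replace dev em2 root fq_codel
# Inspect the active qdisc and its counters
tc -s qdisc show dev em2
```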
MAIN CONTRIBUTIONS
– Live VM migration over MPTCP (enable MPTCP on the migration path)
– Queuing strategies on datacenter switches: FQ_CODEL, HTB
– Architecture for flexible prioritization of migration versus VM-to-VM traffic
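As a sketch of what enabling MPTCP involved at the time: these experiments predate mainline Linux MPTCP, so the following assumes the out-of-tree multipath-tcp.org kernel, whose sysctl names differ from the upstream (5.6+) implementation.

```shell
# multipath-tcp.org kernel (MPTCP v0.8x/v0.9x era) -- these sysctl names
# are assumptions about that out-of-tree kernel, not upstream Linux MPTCP.
sysctl -w net.mptcp.mptcp_enabled=1
# Use all host addresses as subflows, so migration traffic can spread
# across both NICs (management and VM-to-VM paths)
sysctl -w net.mptcp.mptcp_path_manager=fullmesh
```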
Live VM migration traffic: KVM/QEMU uses port range 49152-49216, libvirt uses port 16509
Queue configuration for port 4 (c06df192-b6... ) on OVS using ODL Rest conf API
Configuring the Live VM Migration traffic Queue connected to QoS uuid 109b55a6-1e... :
OF Rule
EXAMPLE QUEUE CONFIGURATION AND OF RULES
{ "parent_uuid": "c06df192-b6bc-4f...", "row": { "QoS": { "other_config": [ "map", [[ "max-rate", "1000000000" ]]], "type": "linux-htb" }}}
{ "parent_uuid": "109b55a6-1ea6-42...", "row": { "Queue": { "other_config": [ "map", [[ "priority", "1" ]]] }}}
in_port=3,tp_src=16509,actions=enqueue:4:1
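An equivalent configuration could be applied directly on the switch with ovs-vsctl and ovs-ofctl instead of the ODL RESTCONF API shown above; a sketch, where the bridge name br-int, the interface name, and the port numbers are assumptions.

```shell
# Sketch of the same QoS/queue setup done locally on Open vSwitch
# (the slides use the OpenDaylight REST API; br-int and eth1 are assumed).
# Create an HTB QoS record capped at 1 Gbit/s with a priority-1 queue
# (queue id 1) and attach it to the port carrying migration traffic.
ovs-vsctl set port eth1 qos=@qos -- \
  --id=@qos create qos type=linux-htb other-config:max-rate=1000000000 \
    queues:1=@q1 -- \
  --id=@q1 create queue other-config:priority=1
# Steer libvirt control traffic (TCP source port 16509) into that queue;
# enqueue:<ofport>:<queue_id> sends matching packets to queue 1 on port 4.
ovs-ofctl add-flow br-int "in_port=3,tcp,tp_src=16509,actions=enqueue:4:1"
```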
EXPERIMENTAL SETUP
[Figure: testbed with an OpenStack controller and two OpenStack compute nodes connected through a Gigabit switch running Open vSwitch (bottleneck link); traffic generators inject cross traffic. An OpenDaylight controller (Neutron interface, ML2 driver) configures Open vSwitch, extended to support HTB, CoDel, FQ_CoDel, etc.]
– Each compute node has two paths: the management path (em1, OpenStack internal network) and the VM-to-VM path (em2, OpenStack VM network).
– Live migration traffic uses the management path: KVM-QEMU uses port range 49152-49216, libvirt uses port 16509.
– OpenFlow rules steer the traffic classes into the configured queues.
EVALUATION SCENARIOS
Case 1: Unloaded VM
Case 2: VM with memory-intensive and storage-heavy applications
Case 3: Impact of background load and queuing strategies
Case 4: Real-time ITS application scenario
Different flavors of VMs in OpenStack
CASE 1: UNLOADED VM
Using MPTCP for migration traffic: reduction in downtime
For medium VM (3 GB memory, 3.3 GB files on storage)
For large VM (6 GB memory, 5.8 GB files on storage)
For X-large VM (12 GB memory, 7.8 GB files on storage)
CASE 2: STRESS TEST
Using MPTCP for migration traffic: significant reduction in downtime
VM UNDER DISK AND MEMORY LOAD TOGETHER
Using MPTCP for migration traffic: significant reduction in downtime
CASE 3: DIFFERENT NETWORK LOADS ON THE NETWORK PATHS
Background load on the management path (em1) and the VM-to-VM path (em2):
– UDP flows at 500/800 Mbps
– netperf-wrapper (8 TCP flows in both directions)
CASE 3: IMPACT OF DIFFERENT NETWORK LOADS AND QUEUEING STRATEGIES
Two HTB queues: 200 Mbps for VM-to-VM traffic, 800 Mbps for migration traffic.
OpenDaylight REST API: create flow rules to enqueue all management traffic and VM-to-VM traffic into the proper queues.
Using MPTCP for migration traffic: significant reduction in downtime.
Prioritization of management traffic: better performance.
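The two-HTB-queue split described above could be created directly on Open vSwitch as follows; a sketch, since the slides configure it through the OpenDaylight REST API, and the interface name em2 and queue ids are assumptions.

```shell
# Sketch of the two-HTB-queue split on the Gigabit bottleneck link
# (interface name and queue ids assumed; slides use the ODL REST API).
ovs-vsctl set port em2 qos=@qos -- \
  --id=@qos create qos type=linux-htb other-config:max-rate=1000000000 \
    queues:0=@vmq queues:1=@migq -- \
  --id=@vmq  create queue other-config:max-rate=200000000 -- \
  --id=@migq create queue other-config:max-rate=800000000
```

Flow rules would then enqueue VM-to-VM traffic to queue 0 and migration traffic to queue 1, as in the OpenFlow rule example earlier.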
CASE 4: REAL-TIME PUBLIC TRANSIT TRACKING APPLICATION SCENARIO
FQ_CODEL helps to keep the latency of VM-to-VM traffic controlled
MPTCP for migration + FQ_CODEL: shortest application latency
CONCLUSIONS AND FUTURE WORK
Conclusions
– Combined different mechanisms at the network and transport layers to improve the performance of live VM migration.
– Flexible architecture with SDN and OpenDaylight.
– MPTCP for migration traffic reduces VM downtime significantly.
– FQ_CODEL makes it possible to control VM-to-VM latency even under congestion.
Future work
– Evaluation with larger topologies and diverse link capacities.
– Energy-efficient traffic routing of migration and VM-to-VM traffic.
Thank you for your attention!
Q/A