
SANTA CLARA UNIVERSITY

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Date: June 14, 2021

I HEREBY RECOMMEND THAT THE THESIS PREPARED UNDER MY SUPERVISION BY

Aditya Mohan
Jonathan Yezalaleul
Angeline Chen
Tamir Enkhjargal

ENTITLED

Seamless Container Migration Between Cloud and Edge

BE ACCEPTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

BACHELOR OF SCIENCE IN COMPUTER SCIENCE AND ENGINEERING

Thesis Advisor

Department Chair

Nam Ling (Jun 15, 2021 10:58 PDT)


Seamless Container Migration Between Cloud and Edge

by

Aditya Mohan
Jonathan Yezalaleul
Angeline Chen
Tamir Enkhjargal

Submitted in partial fulfillment of the requirements for the degree of

Bachelor of Science in Computer Science and Engineering

School of Engineering

Santa Clara University

Santa Clara, California

June 14, 2021


Seamless Container Migration Between Cloud and Edge

Aditya Mohan
Jonathan Yezalaleul
Angeline Chen
Tamir Enkhjargal

Department of Computer Science and Engineering

Santa Clara University

June 14, 2021

ABSTRACT

Considering the limited resources of edge devices, it is essential to monitor their current resource utilization and devise resource allocation strategies that assign containers to edge and cloud nodes based on their priority. Edge containers may need to be migrated to a cloud platform to reduce the load of edge devices and allow for running mission-critical applications. To this end, we propose a prioritization method to exchange containers between the edge and cloud, while trying to assign delay-sensitive containers to edge nodes. We evaluate the performance of running Docker container management systems on resource-constrained machines such as the Raspberry Pi, and propose methods to reduce the overhead of management and migration depending on the workload type.

ACKNOWLEDGMENTS

At this time, we would like to thank our advisors Dr. Behnam Dezfouli, Mr. Ted Kummert, and Mr. Chakrapani Chitnis for their continued support and insight throughout this process. We would also like to thank the Santa Clara Computer Science and Engineering department for helping provide the resources for us to get to this point, as well as thank our friends and family for their continued support throughout our academic endeavors.


Table of Contents

1 Introduction
  1.1 Problem Statement
  1.2 Background and Related Work
  1.3 Objectives

2 Proposed Solution
  2.1 System Architecture
  2.2 Approach
  2.3 Resource Allocation
  2.4 Resource Management
  2.5 Hypothesis
  2.6 Load Balancing Strategy
  2.7 Container Migration Strategy

3 Performance Evaluation
  3.1 Prototype
  3.2 Workload
  3.3 Kubernetes Structure and Orchestration
  3.4 Linux Checkpoint/Restore in Userspace

4 Societal Issues
  4.1 Ethical
  4.2 Social
  4.3 Political
  4.4 Economic
  4.5 Health and Safety
  4.6 Manufacturability
  4.7 Sustainability
  4.8 Environmental Impact
  4.9 Usability
  4.10 Lifelong Learning
  4.11 Compassion

5 Conclusion
  5.1 Advantages and Disadvantages
  5.2 Future Work

6 Appendices
  6.1 Load Balancer

List of Figures

2.1 System Architecture
2.2 Containers and Virtual Machines
2.3 Containerization
2.4 Predicted Trendline
2.5 Load Balancing Flowchart
2.6 Load Balancing Strategy Case 1
2.7 Load Balancing Strategy Case 1
2.8 Load Balancing Strategy Case 2
2.9 Load Balancing Strategy Case 2
2.10 Load Balancing Strategy Case 3
2.11 Load Balancing Strategy Case 3
2.12 Load Balancing Strategy Case 3
2.13 Post-Copy Migration

3.1 System Prototype
3.2 Real Time Comparisons Data
3.3 Real Time Comparisons Graph

Chapter 1

Introduction

For the project, our team implemented an integrated edge and cloud computing system using specific load balancing

and container migration methods. The system allocated resources based on the delay requirements of tasks and trans-

ferred metadata using containers between edge nodes and cloud nodes. In real-world mission-critical applications, it

is crucial for systems to make real-time decisions based on critical data, so we engineered our system in a manner that

ensured that the delay did not exceed the system deadline.

1.1 Problem Statement

As of 2020, 50% of all corporate data had shifted to the cloud. Great amounts of data are available on the cloud

platform, yet the transfer and analytics of this data had not fully been optimized until the introduction of edge com-

puting. Edge computing is an IT architecture that allows for computational processing of data away from centralized

nodes and closer to the local edge of a network. The benefits of using edge computing as opposed to a traditional

centralized approach include the following: lower latency, higher bandwidth, enhancement of user experiences, and

delivery of near real-time insights. Our focus in this project, as a group, is on a specific aspect of edge computing, in

which units of software called containers are placed at edge nodes to improve latency and migration in applications.

Since this is a fairly new, yet widely applicable idea, we are looking to evaluate the performance of containers on

edge nodes to optimize data storage and transfer, so that it can be used in improving applications in the manufacturing

industry, medical aid services, and smart home systems.


1.2 Background and Related Work

The main purpose of edge computing is to bring computation and data storage closer to users. Edge computing uses

on-premises resources to process critical data and relies on the cloud solely for offloaded data. The main difference

between edge and cloud computing is that cloud computing processes data in centralized cloud servers, whereas edge

computing processes data closer to users in edge devices and transports the results of the data processing over local

networks. There are many prevalent advantages of edge computing, which has led to its exponential growth over the

past decade. Some advantages of edge computing include lower latency, higher bandwidth, and improved reliability,

which essentially means more quantities of data can be processed even faster than ever before. On the other hand,

some drawbacks of edge computing include some deployment issues. Since edge devices are often less dependable,

less powerful, and less robust than centralized servers, edge devices have limited resources, which was a driving force

in our group’s choice of resource-constrained devices, such as Raspberry Pis, for our project.

As stated earlier, edge computing saves bandwidth and reduces latency, which is important for mission critical ap-

plications, such as medical monitoring and industrial control, that need to make real-time decisions based on critical

data. Data can be processed faster by sending more critical data to the edge and sending less critical data to the cloud.

Furthermore, edge computing enhances security and privacy. Edge computing reduces the amount of data sent over a

network. Edge computing also enhances privacy by uploading less data to the cloud and processing more data on the

device, which allows users to have more control over their data. As a result, edge computing solves many problems

by reducing the overhead of data communication with the cloud and keeping data at the edge instead of sharing with

the cloud.

An important aspect of our project was the deployment of containers, which allowed us to run various workloads.

Containers are standard units of software that package up code and all their dependencies. Containers abstract appli-

cations from the infrastructure, which allows applications to run in different environments. In our implementation,

some of the various container technologies that we used were Docker, Docker Swarm, and Kubernetes. The purpose

of Docker is to automate the deployment of applications by instantiating containers that can run in different computing

environments. The purpose of Docker Swarm and Kubernetes is to automate deployment, scaling, and management of

containerized applications. Kubernetes is more useful for larger-scale projects and implementations, whereas Docker

is better when focusing on manipulating smaller projects or a specific container.

At the 2020 Fifth International Conference on Fog and Mobile Edge Computing (FMEC), a group of computer engi-

neers and data scientists focused on the field of industrial applications with certain real-time requirements for band-

width and latency that must be met [5]. They sought to implement a fog-layer architecture that was used to


monitor and manage these applications with respect to minimizing latency. By implementing Kubernetes, their system

was able to dynamically allocate resources in the most optimal manner. Then, they deployed these applications using

containers to ”automation system networks.” Their implementation is very similar to our proposed implementation,

and their research has helped us see the nuances of using Kubernetes with respect to minimizing latency.

The issue with the standard deployment of container management frameworks, such as Docker Swarm and Kubernetes,

is that container management systems run in data centers instead of at the edge of the network. Since the deployment

strategies were designed for container management systems running in data centers, they do not improve locality.

For edge computing, locality is critical because location selection impacts the latency to end users at the edge. For

cloud computing, locality is not critical because all containers run at the same data center site. Furthermore, the

requests are not classified or prioritized. The main task placement strategies are the following: binpack, spread, and

random [1]. Binpack task placement strategy minimizes costs by selecting the most loaded node and placing tasks

on as few instances as possible. Spread task placement strategy maximizes availability by selecting the node with

the least number of replicas of the container and placing tasks on as many different instances as possible. Random

task placement strategy randomly selects a node and randomly places tasks on instances. In order to implement an

integrated edge and cloud computing system, we designed a deployment strategy for container management systems

running at the edge of the network.

1.3 Objectives

The main objectives that our project aimed to accomplish were the following:

• Build an edge computing system to allocate resources to IoT devices.

• Evaluate the performance of running container management systems (Kubernetes and Docker) on resource-

constrained machines.

• Propose methods to reduce the overhead of management and migration depending on the workload type and

resource availability.

• Demonstrate advantages of adopting edge computing for faster and more reliable data processing.


Chapter 2

Proposed Solution

When designing an implementation to handle the computing needs of an edge to cloud migration system, there are

many facets that need to be considered. We considered the actual composition of the system itself and what specific

components must be acquired to build the system architecture. When considering how to process the data from

the edge nodes most efficiently, we decided between using a virtualized platform, such as a VM, or a containerized

platform, such as Docker. We decided that containers were more applicable to our implementation. When considering

how to manage and migrate workloads as efficiently as possible, we decided between two container orchestration

tools: Kubernetes and Docker Swarm. We also considered how we would determine when to migrate certain workloads based

on specific factors such as CPU usage, and we implemented a load balancing strategy to manage and distribute these

workloads.

2.1 System Architecture

We implemented a system that combines the advantages of both edge computing and cloud computing. Edge comput-

ing allows local data storage and instant data processing for technology that requires precise, quick actions. Data can

also be sent to the cloud for future planning and visualizing operations.

In order to properly implement this architecture, we utilized Kubernetes as a virtual containerization platform to run

and manage our containers due to its high availability policies and auto-scaling capabilities. We chose Kubernetes as

our virtual containerization platform as opposed to Docker Swarm because of Kubernetes’ higher functionality

and powerful service management. Kubernetes has built-in high availability, failover, and healing of failed nodes. It

detects unhealthy pods, replaces them with new ones, and performs seamless load balancing of traffic.

In order to implement a prioritization method, we created a controller that receives task requirements from IoT devices


and assigns IoT devices to containers running on nodes in a Kubernetes cluster. When a pod is created, the pod is

placed on the edge node with the most resources. When a request with the requested delay is sent to the controller, the

controller uses a table of CPU usage and estimated delay to send the request to any pods with the estimated delay less

than the requested delay. If the request is sent to the pod with the most CPU usage, there will be spare capacity for

running containers that require more CPU usage. Furthermore, if there are not enough resources on the edge nodes

for the request, containers are exchanged between edge and cloud. Using a prioritization method, a container running

on an edge node is migrated to a cloud node. In an attempt to account for offloaded data, our system specifies that

containers receiving requests that require longer delay are migrated from the edge nodes to the cloud nodes before

containers receiving requests that require shorter delay.

In the figure below, we demonstrate the scenario in which one of the containers is assigned to an edge node, but since

there are not enough resources to process the image data, it is sent to the cloud node to be stored and processed at

a later time, while all the other nodes continue running. In a brief summary of events, the following

occurs: an image or data file is received at IoT1, and IoT1 sends the task requirements to the controller. Then, the

controller assigns IoT1 to CC1, and the request is then sent. To further understand how this process is able to occur,

a few concepts are important to consider. The controller records the containers running on the nodes in a table.

When the controller assigns an IoT device to a container running on a node, the controller checks if there are enough

resources on the edge nodes for the request. If there are not enough resources on the edge nodes for the request,

containers are migrated from the edge nodes to the cloud nodes. EC1/CC2 is migrated from an edge node to a cloud

node.


Figure 2.1: System Architecture

2.2 Approach

When we implemented the system, we addressed the following questions:

1. How to allocate processing resources in edge and cloud nodes?

2. How to manage resources?

3. How to balance the load of nodes?

4. How to migrate workload between edge and cloud node?

2.3 Resource Allocation

Containerization is used to allocate processing resources for edge and cloud nodes. Containerization has become an

alternative to virtualization since containers are flexible, lightweight, portable, loosely coupled, scalable, and secure.

A similarity between containers and virtual machines is that they both isolate and allocate resources. Furthermore,


in containerization, multiple containers can run on the same machine, while in virtualization, the hypervisor allows

multiple virtual machines to run on the same machine. While containers virtualize the operating system, virtual

machines virtualize the hardware. As a result, containers are more lightweight than virtual machines. A key reason

containers are lightweight is that multiple containers can share the operating system

kernel. In virtualization, each virtual machine runs its own respective operating system. For this reason, containers

take up less space and less time to start than virtual machines, which allows more containers to run on the same

machine.

Figure 2.2: Containers and Virtual Machines[4]

2.4 Resource Management

Because multiple containers can run on the same machine and share the operating system kernel, containers share

resources, such as CPU and RAM. Although a container has no resource constraints by default, resource constraints

can be set by limiting the amount of resources used by the container. As shown in the figure, resources are managed

by allocating CPU to containers. C1 through C5 are containers that run on the same machine and share the operating

system kernel. As shown in the figure below, different percentages of CPU resources are allocated to the

different containers.


Figure 2.3: Containerization
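
For instance, a minimal sketch of setting such a limit with the Docker SDK for Python is shown below; the image name is hypothetical, and the 35 percent cap mirrors one of the allocations we tested:

import docker

client = docker.from_env()

# Run a container capped at 35% of one CPU core: the quota is the share of
# each 100 ms scheduling period the container may use (35 ms of every 100 ms).
container = client.containers.run(
    "object-detection:latest",   # hypothetical image name
    detach=True,
    cpu_period=100000,
    cpu_quota=35000,
    mem_limit="512m",            # memory can be capped the same way
)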

2.5 Hypothesis

In order to model the relationship between CPU usage and delay, we needed to determine how CPU usage affects

delay by controlling the amount of resources allocated to each task. We tested our hypothesis by limiting the amount

of CPU used by the containers and measuring the amount of time needed to process different workloads for different

CPU usages.

Our hypothesis was simply the following: as delay decreases, CPU usage increases.

Our predicted trendline for the CPU percentage allocated over a certain period of time is shown on the figure below.

Figure 2.4: Predicted Trendline
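
The sketch below condenses the exponential fitting step used by the Workload class in Appendix 6.1; the sample (CPU usage, delay) pairs and the initial guess are hypothetical stand-ins for measured data:

import numpy as np
from scipy.optimize import curve_fit

# hypothetical measurements: CPU share allocated vs. observed delay (seconds)
usage = np.array([0.10, 0.35, 0.75])
delay = np.array([620.0, 150.0, 45.0])

def exponential(x, a, b):
    # delay falls off exponentially as the allocated CPU share x grows
    return a * np.exp(b * x)

# p0 is a rough initial guess (a near the low-CPU delay, b negative for decay)
param, param_cov = curve_fit(exponential, usage, delay, p0=(600.0, -3.0))
print(param)  # fitted (a, b) for delay = a * exp(b * usage)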


2.6 Load Balancing Strategy

Although Kubernetes does come with various built-in load balancing algorithms, we wanted to develop a load bal-

ancing algorithm that classifies and prioritizes requests. Unlike the typical L4 and L7 round robin load balancing

algorithms, our load balancing algorithm leverages the benefits of Kubernetes and IoT devices to optimize the migra-

tion of containers between cloud nodes and edge nodes.

In our load balancing algorithm, when IoT devices are added to the system, the workloads running on the IoT devices

are added to the table stored in the controller. Then, an IoT device sends a request with the workload and the requested

delay to the controller, and the controller sends the request to a container running the workload. A table of CPU usage

and estimated delay is used to estimate the CPU usage for the requested delay. The table is created by limiting the

amount of CPU used by the containers and measuring the amount of time needed to process different workloads for

different CPU usages. Essentially, if the estimated delay is less than the requested delay, the request can be sent to the

respective node.

Figure 2.5: Load Balancing Flowchart

There are three cases:

Case 1: There are enough resources on the edge nodes for the new request. The new request is sent to the edge node

with the most CPU usage.


The controller finds an edge node that can receive the request. If there are enough resources on the edge nodes for the

request, the controller sends the request to the edge node with the most CPU usage. The table that stores requests sent

to the edge nodes is used to find an edge node with the estimated delay less than the requested delay. The new request

is sent to the edge node with the most CPU usage. The table that stores requests sent to the edge nodes is updated.

Let E be the edge nodes. The following pseudocode finds an edge node that can receive the request:

for e in E do
    if delay_r > delay_e then
        send the request r to the edge node e
    end
end

Let R be the requests sent to the edge nodes. Let f be the function that estimates the delay for the given CPU usage.

The following equation estimates the delay if the request is sent to the edge node:

delay_e = f(1 - Σ_{r∈R} usage_r)
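
In code, this check is brief; the following sketch (with illustrative names) mirrors how the controller compares the estimated delay against the requested delay:

# R is the list of requests already on the node; f maps the node's spare
# CPU share to an estimated delay (the fitted curve from Section 2.5)
def can_receive(f, R, requested_delay):
    spare = 1 - sum(r.usage for r in R)   # CPU share still unallocated
    return f(spare) < requested_delay     # estimate must beat the request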

For example, two requests are sent to the controller.

There are enough resources on the edge nodes for the request because usage_1 < 1. The first request is sent to the edge

node. IoT1 sends task requirements to the controller, the controller assigns IoT1 to EC1, and IoT1 sends the request

to EC1.

Figure 2.6: Load Balancing Strategy Case 1


There are enough resources on the edge nodes for both requests because usage_1 + usage_2 < 1. The second request

is sent to the edge node. IoT2 sends task requirements to the controller, the controller assigns IoT2 to EC2, and IoT2

sends the request to EC2.

Figure 2.7: Load Balancing Strategy Case 1

Case 2: There are not enough resources on the edge nodes for the new request. An old request sent to an edge node

cannot be sent to a cloud node. The new request is sent to the cloud node with the most CPU usage.

The controller finds an edge node that can receive the request. If there are not enough resources on the edge nodes for

the request, the controller finds a request sent to an edge node that can be sent to a cloud node. A table of CPU usage

and estimated delay is used to estimate the CPU usage for the requested delay if the old requests are sent to the cloud

nodes. If the requests sent to the edge nodes cannot be sent to the cloud nodes, the controller sends the request to the

cloud node with the most CPU usage. The table that stores requests sent to the cloud nodes is used to find a cloud

node with the estimated delay less than the requested delay. The new request is sent to the cloud node with the most

CPU usage. The table that stores requests sent to the cloud nodes is updated.

Let C be the cloud nodes. The following pseudocode finds a cloud node that can receive the request:

for c in C do
    if delay_r > delay_c then
        send the request r to the cloud node c
    end
end


Let R be the requests sent to the cloud nodes. Let f be the function that estimates the delay for the given CPU usage.

The following equation estimates the delay if the request is sent to the cloud node:

delay_c = f(1 - Σ_{r∈R} usage_r)

For example, two requests are sent to the controller.

There are enough resources on the edge nodes for the request because usage_1 < 1. The first request is sent to the edge

node. IoT1 sends task requirements to the controller, the controller assigns IoT1 to EC1, and IoT1 sends the request

to EC1.

Figure 2.8: Load Balancing Strategy Case 2

There are not enough resources on the edge nodes for both requests because usage_1 + usage_2 > 1. The first request

sent to the edge node cannot be sent to the cloud node because delay_1 < delay_cloud = delay_edge + latency. The second

request is sent to the cloud node. IoT2 sends task requirements to the controller, the controller assigns IoT2 to CC1,

and IoT2 sends the request to CC1.


Figure 2.9: Load Balancing Strategy Case 2

Case 3: There are not enough resources on the edge nodes for the new request. An old request sent to an edge node

can be sent to a cloud node. The container receiving the old request is migrated from the edge node to the cloud node.

The new request is sent to the edge node with the most CPU usage.

The controller finds an edge node that can receive the request. If there are not enough resources on the edge nodes for

the request, the controller finds a request sent to an edge node that can be sent to a cloud node. A table of CPU usage

and estimated delay is used to estimate the CPU usage for the requested delay if the old requests are sent to the cloud

nodes. If the requests sent to the edge nodes can be sent to the cloud nodes, containers are migrated from the edge

nodes to the cloud nodes, and the controller sends the request to the edge node with the most CPU usage. The table

that stores requests sent to the edge nodes is used to find an edge node with the estimated delay less than the requested

delay. The new request is sent to the edge node with the most CPU usage. The table that stores requests sent to the

edge nodes is updated.

Let E be the edge nodes. Let R be the requests sent to the edge nodes. Let C be the cloud nodes. The following


pseudocode finds a request sent to an edge node that can be sent to a cloud node:

for e in E do
    for r in R do
        for c in C do
            if delay_r > delay_c then
                migrate the request r from the edge node e to the cloud node c
            end
        end
    end
end

Let R be the requests sent to the cloud nodes. Let f be the function that estimates the delay for the given CPU usage.

The following equation estimates the delay if the request is sent to the cloud node:

delay_c = f(1 - Σ_{r∈R} usage_r)
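
The corresponding eligibility test can be sketched as follows (illustrative names): an old request r may move to cloud node c only if the cloud-side estimate plus the edge-to-cloud latency still meets its requested delay:

# R_c is the list of requests already on cloud node c; f maps spare CPU
# share to estimated delay; latency is the extra edge-to-cloud round trip
def can_migrate(r, R_c, f, latency):
    spare = 1 - sum(req.usage for req in R_c)
    return f(spare) + latency < r.delay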

For example, two requests are sent to the controller.

There are enough resources on the edge nodes for the request because usage_1 < 1. The first request is sent to the edge

node. IoT1 sends task requirements to the controller, the controller assigns IoT1 to EC1, and IoT1 sends the request

to EC1.

Figure 2.10: Load Balancing Strategy Case 3

There are not enough resources on the edge nodes for both requests because usage_1 + usage_2 > 1. The first request


sent to the edge node can be sent to the cloud node because delay_1 > delay_cloud = delay_edge + latency. The container

receiving the first request is migrated from the edge node to the cloud node. The controller assigns IoT1 to CC1. The

second request is sent to the edge node. IoT2 sends task requirements to the controller, the controller assigns IoT2 to

EC1, and IoT2 sends the request to EC1.

Figure 2.11: Load Balancing Strategy Case 3

Figure 2.12: Load Balancing Strategy Case 3


2.7 Container Migration Strategy

Figure 2.13: Post-Copy Migration

A crucial aspect of our implementation was choosing the correct container migration strategy to optimize the data

migration between edge nodes themselves and between edge and cloud nodes. Because of the nature of our workloads,

we decided to focus on implementing a live migration strategy. Live container migration is the process of moving

applications between different physical machines or clouds without disconnecting the client.

Live migration captures the container’s state, copies it, and restores it during the frozen period. The two main live migration

methods are pre-copy memory and post-copy memory [6].

In pre-copy memory, first, most of the memory pages are copied from the source node to the destination node, and the

source node freezes the container. Then, in the frozen time, the source node gets the state of the container, the rest of

the memory pages are copied from the source node to the destination node, and the destination node restores the state

of the container. Finally, the destination node unfreezes the container, and the container has been migrated.

As shown in the figure, in post-copy memory, first, the source node freezes the container. Then, in the frozen time, the

source node gets the state of the container, the fastest changing memory pages are copied from the source node to the

destination node, and the destination node restores the state of the container. Finally, the destination node unfreezes

the container, and the container has been migrated.


Chapter 3

Performance Evaluation

After setting up our testbed, we began to run tests on our prototype. Our prototype consisted of Raspberry Pis, which

are affordable components that can easily be acquired. We ran tests on our prototype by setting up containers that run

different-sized workloads at various allocations of CPU percentage. These tests were run under different constraints to

test our original hypothesis: as delay decreases, CPU usage increases. From these tests, we constructed a graphical

representation of the data to visualize the trend lines of the respective workloads and reach a conclusive answer.

3.1 Prototype

Figure 3.1: System Prototype

Our system prototype was the following: Raspberry Pis with 8 Gigabytes of RAM as the cloud nodes and Raspberry


Pis with 4 Gigabytes of RAM as the edge nodes. For the implementation of our container migration, we used both

Docker and Kubernetes, as we could leverage the benefits of both systems for an optimized final system.

Our group chose Raspberry Pis to run tests because resource-constrained machines simulate edge devices in real world

applications that have limited resources, which forced our team to find a solution to optimize container migration and

resource allocation. Because cloud nodes usually have more resources than edge nodes, we used Raspberry Pis with

more RAM as the cloud nodes to simulate a cloud environment, and we used Raspberry Pis with less RAM as the edge

nodes to simulate IoT devices.

As shown in Figure 3.1, our testbed setup included two edge node Raspberry Pis with 4 GB RAM and one cloud node

Raspberry Pi with 8 GB RAM. The Raspberry Pis were connected to a monitor via a 3-port HDMI switch to toggle

among the different terminals in which the nodes were running containers in a Kubernetes cluster.

Initially, we ran tests on single Raspberry Pis and familiarized ourselves with the Raspbian OS as well as the function-

alities of the Pis themselves. Then, we used Amazon Elastic Kubernetes Service (EKS) as the cloud nodes to simulate

the delay of communication with the cloud. After we were familiar with the functionalities, we developed our load

balancing algorithm, and we ran tests on multiple Raspberry Pis with different amounts of RAM and various testbed

setups.

3.2 Workload

Our hypothesis and goal led us to test resource management on containers. We focused on process-intensive workloads.

We used the TensorFlow Object Detection API, which is invoked via Flask, a micro web framework written in Python,

to process images. To containerize this application, we used the Docker container management system.

The workloads used to run tests were three images of a dog, a girl, and a cat that had different resolutions and sizes.

For reference, the cat image was the largest of the three images. We limited the amount of CPU usage when setting

up the containers, and we ran our object detection application with the three test images. We tested at various CPU

allocations, checked the delays by timing the requests to the API, and measured how much time the request took to

process.
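
A minimal sketch of this timing step is shown below; the endpoint URL and image file names are hypothetical stand-ins for our Flask object-detection service:

import time
import requests

# hypothetical Flask endpoint exposed by the detection container
URL = "http://raspberrypi.local:5000/detect"

for image_path in ["dog.jpg", "girl.jpg", "cat.jpg"]:
    with open(image_path, "rb") as f:
        start = time.perf_counter()
        response = requests.post(URL, files={"image": f})
        delay = time.perf_counter() - start
    print(image_path, round(delay, 2), "seconds")  # one row of the delay table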


Figure 3.2: Real Time Comparisons Data

As shown in Figure 3.2, we initially set up each container at various allocations of CPU percentage, such as 10, 35,

and 75 percent. Then, we gave these images to the object detection algorithm to run, and we recorded how much time

it took for the algorithm to return a result, which is displayed in the delay column in the table above.

As shown in the results in the table above, for a 720 pixel resolution image, if the CPU usage was increased by 25

percent, the delay time was decreased by approximately 4 times. For a 1080 pixel resolution image, if the CPU usage

was increased by 25 percent, the delay time was decreased by more than 5 times. This supports our hypothesis that the

larger images take more time to process, and decreasing the CPU usage increases the amount of time to process the

images.

We expanded the test bench with more CPU allocations to find a stronger correlation in the data. At 10, 20, 35, 50,

75, and 90 percent CPU allocation, each image was tested, and the time it took to process the image was plotted. The

graph shows that an exponential regression line models the relationship between CPU usage and delay, which further

supported our hypothesis.


Figure 3.3: Real Time Comparisons Graph

As shown in Figure 3.3, we fitted the data from the various CPU allocations to an exponential regression line. This

equation allows us to answer the following questions: If we wanted to process a cat image within 150 seconds, what

CPU percent do we need to allocate to our container? If we had 30 percent CPU allocated on a container, how long

will it take to process a cat image?

This process of using test images can be adapted to other systems by collecting data for different workloads and

modeling the relationship between CPU usage and delay.
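
The Workload class in Appendix 6.1 packages exactly this model; the sketch below shows how its fitted curve can answer both questions above, using hypothetical profiling numbers:

# profile the cat workload at three CPU caps (hypothetical measurements),
# then query the fitted exponential in both directions
cat = Workload("cat", delay=[620, 150, 45], usage=[0.10, 0.35, 0.75])

print(cat.fit_inverse(150))  # CPU share needed to finish within 150 seconds
print(cat.fit(0.30))         # expected delay with 30 percent CPU allocated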

3.3 Kubernetes Structure and Orchestration

We used Kubernetes to implement our system [3]. In Kubernetes, the highest level is called a cluster, and a cluster

consists of a set of worker machines called nodes. These nodes host pods, components of the application workload

that run containers. The control plane manages the worker nodes and the pods in the cluster.

The following are the Kubernetes node components:

1) kubelet - Kubelet runs on each node in the cluster, monitors work requests from the API server, and makes sure the

requested unit of work is running and healthy.

2) kube-proxy - Kube-proxy runs on each node and maintains network rules on the node. It

implements rules to handle routing and load balancing of traffic by using iptables and IPVS.


3) container runtime - Container runtime runs containers on a Kubernetes cluster and fetches, starts, and stops

container images.

The features of Kubernetes were extremely critical in our implementation, as they allowed us to manipulate specific pods

or nodes. Selectors constrain pods to nodes with particular labels, which allowed us to assign an IoT device to a

container running on a node. Furthermore, CPU and memory resources can be sliced, which allowed us to allocate

resources to containers. If the node where a pod is running has enough available resource, it allows the container to

use more resource than its request for that specific resource. However, a container is not allowed to use more than its

resource limit.
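
As a minimal sketch of both features (the node label, image name, and quantities are illustrative), a pod manifest can pin a container to labeled edge nodes and slice its CPU and memory, submitted here through the official Kubernetes Python client:

from kubernetes import client, config

config.load_kube_config()

# pin the pod to nodes labeled as edge nodes and cap its CPU/memory slice
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "detector"},
    "spec": {
        "nodeSelector": {"role": "edge"},          # hypothetical node label
        "containers": [{
            "name": "detector",
            "image": "object-detection:latest",    # hypothetical image
            "resources": {
                "requests": {"cpu": "250m", "memory": "256Mi"},
                "limits": {"cpu": "500m", "memory": "512Mi"},
            },
        }],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)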

3.4 Linux Checkpoint/Restore in Userspace

We implemented a post-copy live container migration strategy, which optimizes the transfer of containers and metadata

across di↵erent nodes. Because we were using a Linux operating system, our group used Linux Checkpoint/Restore

in Userspace (Linux CRIU) to implement the post-copy migration on our system [2].

Specifically, our group used the following commands to implement the Linux Checkpoint/Restore in Userspace:

1) Dump - take the tasks to migrate and dump them into a chosen directory (done while the container is frozen to save

its state)

2) Copy - copy the images to the destination node

3) Restore - go to the destination node and restore the applications from the images; the containers are then unfrozen
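
A sketch of these three steps driven from Python is shown below; the process ID, directory, and destination host are hypothetical, while the dump and restore flags are standard CRIU options:

import subprocess

PID = "1234"                      # hypothetical PID of the containerized task
IMG = "/tmp/ckpt"                 # checkpoint image directory
DEST = "pi@edge2.local"           # hypothetical destination node

# 1) dump the frozen task's state into the image directory
subprocess.run(["criu", "dump", "-t", PID, "-D", IMG, "--shell-job"], check=True)

# 2) copy the images to the destination node
subprocess.run(["scp", "-r", IMG, f"{DEST}:{IMG}"], check=True)

# 3) restore the task from the images on the destination node
subprocess.run(["ssh", DEST, f"criu restore -D {IMG} --shell-job"], check=True)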

Using post-copy live migration with Kubernetes yielded good results for the overall system, although it was more of a

proof-of-concept implementation. We implemented a post-copy migration strategy because post-copy memory sends

each memory page only once across the network, whereas pre-copy may send the same pages multiple times across the

network as they are dirtied at the source node.


Chapter 4

Societal Issues

Containerized edge migration is a concept that has endless practical applications and can be utilized in almost every

industry to allow for safer and more efficient project implementations. The implementation of containers on edge

nodes not only improves computing but also has the potential to make a revolutionary impact on many industries

through numerous usages and applications. This will make global data transfer more efficient and help advance our

society in numerous ways.

4.1 Ethical

An issue pertinent to the heavy Internet traffic that our society contributes to today is the lack of data privacy for users,

which can have severe ramifications if not handled properly. Users have concerns regarding their privacy when their

data are continuously collected by cloud providers. Users also have concerns regarding how cloud providers use their

data to improve the customer experience. Edge computing can be used to process and respond to users’ requests at

edge nodes and store users’ data locally, which allows users to have more control over their data. This is done through

eliminating the middleman, the cloud, where data is far out of reach of users. With the data being transferred

over vast distances, it becomes vulnerable to cyber attacks. With edge computing, because users do not need to trust

hidden algorithms that collect and anonymize their data, they can be far less concerned about data collection.

4.2 Social

Edge computing can provide an infrastructure for blockchain nodes, which build trust by recording transactions in a

distributed ledger. Each blockchain node is a compute unit, and high processing power is needed to process blockchain


transactions. Because communication requires high bandwidth and low latency, edge computing can be used to process

and store nodes locally. Mobile edge computing can provide an improved user experience when sharing data with other

nearby mobile users. This could be used in a variety of applications for different social settings, such as

ridesharing, dating, and location-based fitness apps. This better connects users to the world and people around them

faster and smarter than ever.

4.3 Political

Many households now use smart home systems, such as Amazon Alexa, yet are very skeptical of fully converting to

these kinds of technologies because of privacy and security issues. There are a variety of challenges when dealing

with smart-home devices, such as sending the huge amounts of data generated from connected devices to a remote

server causing latency issues and delays in the transfer of data. This is a major issue specifically in real-time analytics

and time-critical actions. Another challenge is security concerns of sending data across long distances to be saved in

remote servers and the security of those servers themselves. A simple solution to these privacy concerns, regarding

devices such as Alexa, would be to place containers at edge nodes, which would allow smart speakers to respond to

users’ requests without sending their data to a central server or giving companies the opportunity to collect their data.

Since only a subset of data is transmitted to a remote server, sensitive information can be processed locally at the edge.

The future of smart home systems is widely anticipated and the switch to this technology seems inevitable, yet the progress has

been stalled due to the concern of privacy and security violations. Implementing containers at edge nodes to store and

transfer data would usher in a new age of smart-home technology and make it widely adopted as a common household

item in the future without the fear of big tech’s data collection methods leading to invasion of privacy.

4.4 Economic

The system is affordable to implement. When we prototyped the system, we used single-board computers

for cloud nodes and edge nodes and used open source software for data migration. We used Raspberry Pis, single-

board computers that can be used to build a computer system, for cloud nodes and edge nodes. We used Docker,

an open source containerization platform, and Kubernetes, an open source container orchestration engine, for data

migration. If one wanted to scale up the system, additional components to simulate more edge or cloud nodes and

handle more data would still be affordable. Considering that implementing our system at such a scale would

require a significant amount of physical space helped us choose the current sizing of the system.

From an economic standpoint, we do not recommend scaling higher than 5 times the scale of the prototype because at


that point the pricing would require somewhere over ten thousand dollars.

4.5 Health and Safety

An important advantage that edge computing provides is that it saves bandwidth and reduces latency, which is ideal for

mission critical applications, such as medical monitoring and industrial control that need to make real-time decisions

based on critical data. Even though this does not seem very significant, the improvements in data transfer and storage

can have major impacts. There are several real-time scenarios that require immediate actions, such as wearable health

monitors using edge technology to trigger precautionary measures based on the user’s vitals. When it comes to

user health, life and death can be decided in a matter of seconds, and edge computing saves those seconds. In the

medical industry, using edge containers to store data in hospital sensors allows these sensors to continue operating

independently even during a widespread network or cloud failure. In the manufacturing industry, using edge containers

to process data allows a robot that requires a certain delay to communicate with the controller through a control loop, which

can lead to severe system issues if the delay exceeds the system deadline. Edge computing eliminates unnecessary

communication and processing time lags, which prevents mission critical failures.

4.6 Manufacturability

The system can be built by using single-board computers for cloud nodes and edge nodes and by using open source

software for data migration. Cloud nodes and edge nodes can be upgraded to powerful computers for real world

applications. If we had a faster internet connection and better processing power on our devices, we could download

packages faster when running the Raspberry Pis, and more cloud nodes and edge nodes could be added to the system.

Manufacturing the system instead of initializing all of the components of the system would help limit development

time, but depending on scale, as mentioned earlier in the economic impacts section, cost may or may not become an

issue.

4.7 Sustainability

When we prototyped the system, we used Raspberry Pis, energy-efficient computers powered by a low-voltage micro

USB power supply, for cloud nodes and edge nodes. The mobile, decentralized nature of the system can greatly reduce

startup costs and time and help in creating a minimal data infrastructure footprint. Edge computing’s ability to provide

around the clock connectivity also limits the likelihood of system downtime. With edge computing, we can reduce


the unnecessary use of bandwidth and server capacity, which comes down to infrastructure, electricity, and physical

space, while simultaneously taking advantage of underused device resources.

4.8 Environmental Impact

Edge computing saves resources by eliminating unnecessary data transmission. For example, machine learning tech-

niques used by autonomous vehicles, such as reinforcement learning, do not rely on training large models with large

data sets. Because less central storage is required, edge computing can be used to run inferences in the car or by

nearby facilities. Edge computing also increases the efficiency of various “smart” technologies, such as a network of

traffic lights that reduces intersection wait times and emissions by logging traffic and adapting sequencing to minimize

congestion.

4.9 Usability

Because the controller is an API that sends and receives HTTP requests, which are human-readable messages, we

consider our system to be usable. Once the user selects the files they want to be handled by the system and starts the

initiation protocol, the system takes care of the rest without the need of user intervention. CPU usage can be monitored

by using container monitoring tools to manage resources effectively.

4.10 Lifelong Learning

In the process of implementing a new system, we brought ideas from conception to production and gained real world

R&D experience. We researched edge computing, a cutting-edge technology that will make an impact in the future.

The use cases of edge computing are nearly limitless as it can be applied to any form of technology that involves

transfer of data, which is nearly everything in our lives today from cars to cellphones to televisions. We used con-

tainerization technologies that we have not used before, which prepared us for lifelong learning.

4.11 Compassion

Implementing containers at edge nodes can make a difference in the lives of medical patients who critically need it. In

hospitals, wearable Real-Time Locating System (RTLS) badges utilize edge computing to track the badges

with sensors all around the hospital. When a patient calls, and the first responder steps through the doorway, the sensor


is triggered, and the call is nullified. This saves hospitals millions of dollars in unnecessarily used resources and, most

importantly, the time of their staff. This technology can also be implemented in autonomous vehicles to process data

faster locally and detect dangerous situations faster than if sensory data had to be sent to the cloud. The delay of

communication with the cloud could be the difference in preventing an accident and protecting users from harm.


Chapter 5

Conclusion

We built an edge computing system to allocate resources to IoT devices simulated by Raspberry Pis. This system

simulates the interaction between a central controller device and numerous edge devices in connection with a cloud

device. We evaluated the performance of running container management systems, specifically Kubernetes and Docker,

on resource-constrained machines. We proposed methods to reduce the overhead of management and migration de-

pending on the workload type and resource availability by developing a load balancing algorithm. The load balancing

algorithm finds a node that can receive the request by monitoring the available resources that the node can allocate.

If there are enough resources on the edge nodes for the request, the request is sent to the edge node with the most

CPU usage. If there are not enough resources on the edge nodes for the request, a request is sent to a cloud node.

The load balancing algorithm ensures that all workloads can be processed by their respective deadlines. We clearly

demonstrated the advantages of adopting edge computing for faster and more reliable data processing.

5.1 Advantages and Disadvantages

Our system has many advantages. Our system can classify and properly prioritize requests in an efficient manner as

well as address the requirements and deadlines of several mission critical applications, such as autonomous vehicles

or health monitors for patients with pre-existing conditions, which can have potentially fatal consequences if data is not

processed quickly enough. However, our system has some potential drawbacks, such as not implementing a live

migration scheme that does not freeze containers and not automating the process of migrating containers between

cloud nodes and edge nodes.


5.2 Future Work

Our recommendations for future work are developing a graphical user interface (GUI) to monitor and control the

system for better user friendliness, identifying various types of applications that would implement this developed

system, and testing those applications to determine the system’s efficiency. Performing an extensive performance

evaluation of the system would identify which components should be modified to increase productivity of the system.

Implementing a live migration scheme that does not freeze containers would reduce downtime. Automating the process

of migrating containers between cloud nodes and edge nodes would make the system easier to use.


Bibliography

[1] Amazon ECS task placement. https://aws.amazon.com/blogs/compute/amazon-ecs-task-placement/.

[2] CRIU. https://www.criu.org/Main_Page.

[3] Kubernetes components. https://kubernetes.io/docs/concepts/overview/components/.

[4] What is a container? https://www.docker.com/resources/what-container.

[5] Eidenbenz, R., Pignolet, Y. A., and Ryser, A. Latency-aware industrial fog application orchestration with Kubernetes. 2020 Fifth International Conference on Fog and Mobile Edge Computing (FMEC) (2020), 164–171.

[6] Puliafito, C., Vallati, C., Mingozzi, E., Merlino, G., Longo, F., and Puliafito, A. Container migration in the fog: A performance evaluation. Sensors 19, 7 (2019), 1488.


Chapter 6

Appendices

6.1 Load Balancer

import math
import datetime

import numpy as np
from scipy.optimize import curve_fit
from pynverse import inversefunc


class Request:
    def __init__(self, id, workload, delay):
        self.id = id
        self.workload = workload
        self.start_time = datetime.datetime.now()
        self.end_time = self.start_time + datetime.timedelta(seconds=delay)
        self.delay = delay
        self.usage = self.workload.fit_inverse(self.delay)

    def __repr__(self):
        return "{'id': %s, 'delay': %s, 'usage': %s}" % (
            str(self.id), str(self.delay), str(round(self.usage, 2)))


class Workload:
    def __init__(self, id, delay, usage):
        self.id = id
        self.delay = np.array(delay)
        self.usage = np.array(usage)

        def exponential(x, a, b):
            return a * np.exp(b * x)

        self.param, self.param_cov = curve_fit(exponential, self.usage, self.delay)

    # estimate the delay for a given CPU usage
    def fit(self, x):
        return self.param[0] * math.exp(self.param[1] * x)

    # estimate the usage for a given delay
    def fit_inverse(self, y):
        inverse = inversefunc(lambda x: self.param[0] * math.exp(self.param[1] * x))
        return inverse(y).item()


class Node:
    def __init__(self, id):
        self.id = id
        self.usage = 0
        self.containers = []

    def __repr__(self):
        return "{'usage': %s, 'containers': %s}" % (
            str(round(self.usage, 2)), str(self.containers))


class LoadBalancer:
    def __init__(self, latency, migration_time):
        self.request_id = 0
        self.workloads = {}
        self.edge_nodes = {}
        self.cloud_nodes = {}
        self.latency = latency
        self.migration_time = migration_time

    # find a node that can receive the request
    def find_node(self, new_request):
        # drop expired requests from every node
        for edge_value in self.edge_nodes.values():
            edge_value.containers = [c for c in edge_value.containers
                                     if c.end_time > datetime.datetime.now()]
        for cloud_value in self.cloud_nodes.values():
            cloud_value.containers = [c for c in cloud_value.containers
                                      if c.end_time > datetime.datetime.now()]

        # find the most loaded edge node whose estimated delay is less than
        # the required delay
        node = ""
        for key, value in self.edge_nodes.items():
            if new_request.workload.fit(1 - value.usage) < new_request.delay:
                if node == "" or value.usage > self.edge_nodes[node].usage:
                    node = key

        # if there are not enough resources on the edge nodes for the request
        if node == "":
            # find a request sent to an edge node that can be sent to a cloud node
            edge_node = ""
            cloud_node = ""
            request = None
            maximum = 0
            for edge_key, edge_value in self.edge_nodes.items():
                for container in edge_value.containers:
                    for cloud_key, cloud_value in self.cloud_nodes.items():
                        # the old container frees enough CPU for the new request,
                        # still meets its delay on the cloud node (plus latency),
                        # and will not finish before the migration completes
                        if ((1 - edge_value.usage) + container.usage > new_request.usage
                                and container.usage > maximum
                                and container.workload.fit(1 - cloud_value.usage)
                                    + self.latency < container.delay
                                and datetime.datetime.now()
                                    + datetime.timedelta(seconds=self.migration_time)
                                    < container.end_time):
                            edge_node = edge_key
                            cloud_node = cloud_key
                            request = container
                            maximum = container.usage
            # if no request sent to an edge node can be sent to a cloud node
            if cloud_node == "":
                # send the new request to the cloud node with the most CPU usage
                for key, value in self.cloud_nodes.items():
                    if cloud_node == "" or value.usage > self.cloud_nodes[cloud_node].usage:
                        cloud_node = key
                self.add_request(self.cloud_nodes, cloud_node, new_request)
            # otherwise migrate the chosen container to the cloud node found above
            else:
                self.remove_request(request)
                self.add_request(self.cloud_nodes, cloud_node, request)
                # send the new request to the freed edge node
                self.add_request(self.edge_nodes, edge_node, new_request)
        # if there are enough resources on the edge nodes for the request
        else:
            # send the request to the edge node with the most CPU usage
            self.add_request(self.edge_nodes, node, new_request)

    # add the request to the table
    def add_request(self, nodes, node, request):
        nodes[node].containers.append(request)
        nodes[node].usage += request.usage

    # remove the request from the table
    def remove_request(self, request):
        for value in list(self.edge_nodes.values()) + list(self.cloud_nodes.values()):
            for container in list(value.containers):
                if container.id == request.id:
                    value.containers.remove(container)
                    value.usage -= container.usage

    # add the workload to the table
    def add_workload(self, workload_id, delay, usage):
        self.workloads[workload_id] = Workload(workload_id, delay, usage)

    # remove the workload from the table
    def remove_workload(self, workload_id):
        del self.workloads[workload_id]

    # add the edge node to the table
    def add_edge_node(self, edge_node_id):
        self.edge_nodes[edge_node_id] = Node(edge_node_id)

    # remove the edge node from the table
    def remove_edge_node(self, edge_node_id):
        del self.edge_nodes[edge_node_id]

    # add the cloud node to the table
    def add_cloud_node(self, cloud_node_id):
        self.cloud_nodes[cloud_node_id] = Node(cloud_node_id)

    # remove the cloud node from the table
    def remove_cloud_node(self, cloud_node_id):
        del self.cloud_nodes[cloud_node_id]

    # send a request for a registered workload to a node
    def send_request(self, workload, delay):
        if workload in self.workloads:
            self.request_id += 1
            request = Request(self.request_id, self.workloads[workload], delay)
            self.find_node(request)
            print(self.edge_nodes)
            print(self.cloud_nodes)
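
As a usage sketch (with hypothetical profiling numbers and node names), the load balancer above can be driven as follows:

# register one workload profiled at three CPU caps, two edge nodes, and one
# cloud node, then send two requests
balancer = LoadBalancer(latency=0.05, migration_time=2)
balancer.add_workload("cat", delay=[620, 150, 45], usage=[0.10, 0.35, 0.75])
balancer.add_edge_node("EC1")
balancer.add_edge_node("EC2")
balancer.add_cloud_node("CC1")
balancer.send_request("cat", 200)   # placed on an edge node
balancer.send_request("cat", 60)    # placed on the other edge node, or, if no
                                    # edge node fits, may trigger a migration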