SmartEdge: Fog Computing Cloud Extensions to Support Latency … · 2017-11-04 · Our...


FEDERAL UNIVERSITY OF RIO GRANDE DO NORTE

CENTER OF EXACT AND EARTH SCIENCES

DEPARTMENT OF INFORMATICS AND APPLIED MATHEMATICS

POST-GRADUATION PROGRAM IN SYSTEMS AND COMPUTING

MASTER’S IN SYSTEMS AND COMPUTING

SmartEdge: Fog Computing Cloud Extensions to Support Latency-Sensitive IoT Applications

Flávio de Sousa Ramalho

Natal-RN

December, 2016


Cataloging of the publication at the source. UFRN / SISBI / Sectional Library of the Center of Exact and Earth Sciences – CCET.

Ramalho, Flávio de Sousa. SmartEdge: fog computing cloud extensions to support latency-sensitive IoT applications / Flávio de Sousa Ramalho. – Natal, 2016. 110 pp.: ill.

Supervisor: Prof. Dr. Augusto José Venâncio Neto. Dissertation (Master's) – Universidade Federal do Rio Grande do Norte. Centro de Ciências Exatas e da Terra. Programa de Pós-Graduação em Sistemas e Computação.

1. Computação em névoa. 2. Computação em nuvem. 3. Internet das coisas. 4. Virtualização por containers. 5. Redes definidas por software. 6. Fog Computing. 7. Cloud Computing. 8. Internet of Things. 9. Container-based virtualization. 10. Software-defined networking. I. Venâncio Neto, Augusto José. II. Título.

RN/UF/BSE-CCET CDU: 004.2


Flávio de Sousa Ramalho

SmartEdge: Fog Computing Cloud Extensions to Support Latency-Sensitive IoT Applications

Master's Defense Examination submitted to the Graduate Program in Systems and Computing, Department of Computer Science and Applied Mathematics at the Federal University of Rio Grande do Norte, as a requirement for a Master's Degree in Computer Systems.

Research Area: Integrated and Distributed Systems

Supervisor

Prof. Dr. Augusto José Venâncio Neto

PPGSC – GRADUATE PROGRAM IN COMPUTER SYSTEMS

DIMAP – DEPARTMENT OF INFORMATICS AND APPLIED MATHEMATICS

CCET – CENTER OF EXACT AND EARTH SCIENCES

UFRN – FEDERAL UNIVERSITY OF RIO GRANDE DO NORTE

Natal-RN

December, 2016


Acknowledgments

This dissertation was supervised by Professor Augusto Neto, whom I thank for the many lessons, the hours and hours of meetings, the demands, and the enormous patience. I also thank my colleagues at the Ubiquitous and Pervasive Systems Laboratory (UPLab) for the countless conversations and wise advice that were essential to the conception of this work.

I am deeply grateful to my family, especially my parents, Francisco de Sousa Ramalho and Deolinda Maria de Sousa Ramalho.

I thank my friends, even though their help was not directly related to the scientific content of this work. They were always by my side, celebrating the moments of victory and also giving me support in the difficult and sad moments.

This work is dedicated to my parents, who awakened in me the ambition that led me to pursue a graduate degree and who provided for my studies through much love and hard work. They are the ones most responsible for who I am and for the person I dream of one day becoming.


SmartEdge: Fog Computing Cloud Extensions to SupportLatency-Sensitive IoT Applications

Author: Flávio de Sousa Ramalho

Supervisor: Prof. Dr. Augusto José Venâncio Neto

ABSTRACT

The rapid growth in the number of Internet-connected devices, combined with the increasing popularity of and demand for real-time, latency-constrained cloud application services, makes it challenging for traditional cloud computing frameworks to support such an environment. More specifically, the centralized approach traditionally adopted by current Data Centers (DCs) poses performance issues when serving a high density of cloud applications, mainly in terms of responsiveness and scalability. Our deep-rooted dependency on cloud computing demands DC infrastructures that are always available while keeping, at the same time, enough performance to respond to a huge number of cloud application requests. In this work, the applicability of the emerging fog computing paradigm is exploited to enhance the performance of latency-sensitive cloud applications tailored for the Internet of Things (IoT). With this goal in mind, we introduce a new service model named Edge Infrastructure as a Service (EIaaS), which seeks to offer an edge-computing-tailored cloud service delivery model that efficiently suits the requirements of real-time, latency-sensitive IoT applications. With the EIaaS approach, cloud providers can dynamically deploy IoT applications/services on edge computing infrastructures and manage cloud/network resources at run time, so as to keep IoT applications always best connected and best served. The resulting approach is modeled in a modular architecture, leveraging container and Software-Defined Networking technologies to handle edge computing resources (CPU, memory, etc.) and network resources (paths, bandwidth, etc.), respectively. Preliminary results show how the virtualization technique affects the performance of applications on the network edge infrastructure. Container-based virtualization has the advantage over the hypervisor-based technique for deploying applications on the edge computing infrastructure, as it offers a great deal of flexibility in the presence of resource constraints.

Keywords: Fog Computing, Cloud Computing, Internet of Things, Container-based Virtualization, Software-Defined Networking.


SmartEdge: Cloud Extensions for Fog Computing to Support Latency-Sensitive IoT Applications

Author: Flávio de Sousa Ramalho

Supervisor: Prof. Dr. Augusto José Venâncio Neto

RESUMO

The rapid growth in the number of devices connected to the Internet, combined with the increasing popularity of and demand for real-time, latency-constrained cloud applications and services, makes it very difficult for traditional cloud computing frameworks to accommodate them efficiently. More specifically, the centralized approach traditionally adopted by current Data Centers (DCs) presents performance problems in serving cloud applications at high density, mainly regarding responsiveness and scalability. Our deep-rooted dependency on cloud computing demands DC infrastructures that are always available while keeping, at the same time, enough performance to respond to an enormous number of cloud application requests. In this work, the applicability of the emerging fog computing paradigm is explored to improve the performance of supporting latency-sensitive cloud applications aimed at the Internet of Things (IoT). With this goal, we present a new model named Edge Infrastructure as a Service (EIaaS), which seeks to offer a new edge-computing-based cloud service delivery model aimed at efficiently meeting the requirements of real-time, latency-sensitive IoT applications. With the EIaaS approach, cloud providers can dynamically deploy IoT applications/services directly on edge computing infrastructures, as well as manage their cloud/network resources at run time, as a way to keep IoT applications always best connected and best served. The resulting approach is architected in a modular structure, building on container and Software-Defined Networking (SDN) technologies to handle edge computing resources (CPU, memory, etc.) and network resources (paths, bandwidth, etc.), respectively. Preliminary results show how the main virtualization techniques used in the scope of this work affect the performance of applications on the network edge infrastructure. Container-based virtualization has the advantage over virtual-machine-based virtualization for deploying applications at the network edge, since it offers great flexibility even in the presence of resource constraints.

Keywords: Fog Computing, Cloud Computing, Internet of Things, Container-based Virtualization, Software-Defined Networking.


List of Figures

1 Cloud-computing service models and responsibilities. . . . . . . . . . . . . . p. 27

2 Cloud-computing deployment models representation. . . . . . . . . . . . . . p. 29

3 Fog-computing Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . p. 32

4 (a) Virtual Machine and (b) Container isolation layers. . . . . . . . . . . . . p. 36

5 Docker app scheduling steps . . . . . . . . . . . . . . . . . . . . . . . . . . p. 41

6 Software-Defined Networking APIs . . . . . . . . . . . . . . . . . . . . . . p. 43

7 SmartEdge Stack deployed on a OpenStack Cloud Infrastructure . . . . . . . p. 55

8 SmartEdge Modular Architecture . . . . . . . . . . . . . . . . . . . . . . . . p. 58

9 SmartEdge Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 61

10 SmartEdge usage workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 69

11 Sequence diagram of adding a node to the SmartEdge cluster . . . . . . . . . p. 71

12 Sequence diagram of deploying an application on SmartEdge . . . . . . . . . p. 72

13 Linpack results on each platform over 15 runs, with N=2000 . . . . . . . . . p. 78

14 Disk throughput results from running Bonnie++ using a file size of 3 GiB . . p. 79

15 Disk rnrw from SysBench using a file size of 3 GiB . . . . . . . . . . . . . . p. 80

16 Disk throughput from DD using a file size of 3 GiB and a block of 1024b . . p. 81

17 Network throughput results from running netperf during 600 seconds . . . . . p. 82

18 Network request/response results from running netperf during 600 seconds . . p. 83

19 SmartEdge Evaluation Testbed . . . . . . . . . . . . . . . . . . . . . . . . . p. 85

20 Impact of latency on a simple Request/Response of 1 byte . . . . . . . . . . p. 90

21 Impact of latency on the application FPS by its provisioning platform. Only the application deployed using SmartEdge presented a high QoE . . . . . . p. 92


22 CPU usage over the different deployment scenarios. . . . . . . . . . . . . . . p. 93

23 Impact of CPU allocation on the application performance. Confidence interval for the mean of the values, with a confidence level of 95% . . . . . . . p. 95


List of Tables

1 Docker Swarm API (Container Operations) . . . . . . . . . . . . . . . . . . p. 39

2 IoT Platforms Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 52

3 SmartEdge Authentication API . . . . . . . . . . . . . . . . . . . . . . . . . p. 66

4 SmartEdge’s authentication method format . . . . . . . . . . . . . . . . . . . p. 66

5 SmartEdge’s account management API . . . . . . . . . . . . . . . . . . . . . p. 66

6 SmartEdge’s network API . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 67

7 SmartEdge’s node management API . . . . . . . . . . . . . . . . . . . . . . p. 67

8 SmartEdge’s registry management API . . . . . . . . . . . . . . . . . . . . . p. 67

9 SmartEdge’s event management API . . . . . . . . . . . . . . . . . . . . . . p. 68

10 CPU Benchmark: NBench . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 76

11 CPU/Scheduler Benchmark: SysBench . . . . . . . . . . . . . . . . . . . . . p. 77

12 Disk rnrw Benchmark: Bonnie++ . . . . . . . . . . . . . . . . . . . . . . . . p. 79

13 Memory Benchmark: STREAM . . . . . . . . . . . . . . . . . . . . . . . . p. 81

14 Application Provisioning Time . . . . . . . . . . . . . . . . . . . . . . . . . p. 89


List of Abbreviations and Acronyms

3G – Third Generation

4G – Fourth Generation

API – Application Programming Interface

AWS – Amazon Web Service

BLE – Bluetooth Low Energy

CapEx – Capital Expenses

CDN – Content Delivery Networks

CM – Configuration Management

CSP – Cloud Service Provider

DC – Data Center

DCN – Data Center Networks

EC2 – Elastic Compute Cloud

EIaaS – Edge Infrastructure as a Service

EPA – Environmental Protection Agency

HOT – Heat Orchestration Template

HTTP – Hypertext Transfer Protocol

IaaS – Infrastructure as a Service

ICT – Information and Communications Technology

IoT – Internet of Things

IP – Internet Protocol

IT – Information Technology


ITU – International Telecommunication Union

KVM – Kernel-based Virtual Machine

LTE – Long-Term Evolution

LXC – Linux Containers

M2M – Machine to Machine

MPLS – Multiprotocol Label Switching

NAT – Network Address Translation

NIC – National Intelligence Council

ODL – OpenDaylight

ONF – Open Networking Foundation

OpEx – Operating Expenses

OS – Operating System

OSI – Open Source Initiative

PaaS – Platform as a Service

QoS – Quality of Service

REST – Representational State Transfer

RFID – Radio Frequency IDentification

SaaS – Software as a Service

SDN – Software-Defined Networking

SLA – Service Level Agreement

SNS – Simple Notification Service

SOA – Service Oriented Architecture

SoC – System on Chip

TCO – Total Cost of Ownership

U.S. – United States

VM – Virtual Machine


VPC – Virtual Private Cloud

VPN – Virtual Private Network

WiFi – Wireless Fidelity

WSN – Wireless Sensor Networks


Contents

1 Introduction p. 15

1.1 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 17

1.1.1 Specific Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 18

1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 19

1.3 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 20

1.4 Work Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 20

2 Theoretical Background p. 21

2.1 Smart Cities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 21

2.2 Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 22

2.3 Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 24

2.3.1 Service Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 26

2.3.2 Deployment Models . . . . . . . . . . . . . . . . . . . . . . . . . . p. 27

2.3.3 Data Center Bottleneck . . . . . . . . . . . . . . . . . . . . . . . . . p. 28

2.4 Fog Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 29

2.4.1 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 31

2.4.2 Applicability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 32

2.5 Virtualization at the Network Edge . . . . . . . . . . . . . . . . . . . . . . . p. 33

2.5.1 Hypervisor-Based Virtualization . . . . . . . . . . . . . . . . . . . . p. 33

2.5.2 Container-Based Virtualization . . . . . . . . . . . . . . . . . . . . . p. 34

2.5.3 Virtual Machines vs. Containers . . . . . . . . . . . . . . . . . . . . p. 35


2.5.4 Resource Allocation in Virtual Machines and Containers . . . . . . . p. 36

2.5.5 Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 37

2.5.5.1 Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . p. 38

2.5.5.2 Networking . . . . . . . . . . . . . . . . . . . . . . . . . p. 40

2.5.5.3 Scheduling, Cluster Management, and Orchestration . . . . p. 40

2.6 Software-Defined Networking . . . . . . . . . . . . . . . . . . . . . . . . . p. 42

2.6.1 API Standardization . . . . . . . . . . . . . . . . . . . . . . . . . . p. 43

2.6.2 Flow-Based Control . . . . . . . . . . . . . . . . . . . . . . . . . . p. 44

2.6.3 Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 44

2.6.4 Software-Defined Networking and Internet of Things . . . . . . . . . p. 45

3 Related Works p. 47

3.1 Container-based Softwarized Control Plane . . . . . . . . . . . . . . . . . . p. 47

3.2 Network-based Softwarized Control Plane . . . . . . . . . . . . . . . . . . . p. 49

3.3 Key Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 50

4 Work Proposal p. 54

4.1 SmartEdge Key Design Principles . . . . . . . . . . . . . . . . . . . . . . . p. 54

4.1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 54

4.1.2 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 57

4.2 Design and Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 59

4.2.1 Application Programming Interface . . . . . . . . . . . . . . . . . . p. 65

4.2.1.1 Authentication . . . . . . . . . . . . . . . . . . . . . . . . p. 65

4.2.1.2 Accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 66

4.2.1.3 Network . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 66

4.2.1.4 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 67

4.2.1.5 Registries . . . . . . . . . . . . . . . . . . . . . . . . . . p. 67


4.2.1.6 Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 67

4.2.2 Usage Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 68

4.3 Edge-Infrastructure-as-a-Service . . . . . . . . . . . . . . . . . . . . . . . . p. 72

5 Evaluation p. 74

5.1 Preliminary Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 74

5.1.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 74

5.1.2 Benchmark Results . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 75

5.1.2.1 CPU Benchmark . . . . . . . . . . . . . . . . . . . . . . . p. 75

5.1.2.2 Disk I/O Benchmark . . . . . . . . . . . . . . . . . . . . . p. 78

5.1.2.3 Memory Benchmark . . . . . . . . . . . . . . . . . . . . . p. 81

5.1.2.4 Network Benchmark . . . . . . . . . . . . . . . . . . . . . p. 82

5.1.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 83

5.2 SmartEdge Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 84

5.2.1 Evaluation Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . p. 84

5.2.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 87

5.2.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 89

5.2.3.1 Provisioning time . . . . . . . . . . . . . . . . . . . . . . p. 89

5.2.3.2 Latency Impact on Request/Response . . . . . . . . . . . . p. 90

5.2.3.3 Impact of latency on the application QoE and server CPU . p. 91

5.2.3.4 Impact of resource allocation on the client application QoE p. 93

6 Conclusion and Future Work p. 96

References p. 99


Chapter 1

Introduction

The coming era of computing intends to change the way humans interact with technology. The technological revolution of recent decades, driven by advances in Information and Communication Technologies (ICT), has transformed the way people communicate, work, travel, and live. Cities are evolving towards intelligent, dynamic infrastructures that serve citizens while fulfilling criteria of energy efficiency and sustainability [1]. In these highly populated towns and new urban areas, society needs to address new challenges in order to minimize the consumption of natural energy resources, promote renewable energy, and reduce CO2 emissions to the atmosphere. The Smart City [2] concept is a powerful tool to address this urban change: a smart city must be able to efficiently manage its infrastructure and services while meeting the needs of the city and its citizens.

Focusing on the technology needed to build smart cities, developments in the field of ICT play a key role in their creation and growth. The Forrester Research analysis in [3] describes a smart city as a system that "uses information and communication technologies in order to make critical components of the infrastructure and services of a city more interactive, accessible and effective". Therefore, creating a smart city is not restricted to providing services independently and individually; it requires deploying a whole infrastructure for efficient collection, transmission, storage, and analysis of city data in order to supply services to citizens. That is where the Internet of Things paradigm, together with Cloud Computing, comes into play.

The Internet of Things (IoT) paradigm [4] offers an environment where many of the objects around us will be seamlessly present on the network. Technologies like Wireless Sensor Networks (WSN) [5] and Radio Frequency IDentification (RFID) [6] are already rising to meet these challenges, in which information and communication systems are invisibly embedded in the environment around us. The enormous amounts of data generated by these devices need to be stored, processed, and presented efficiently, and in an easily interpretable way. To accommodate this type of computing, Cloud Computing [7] is an asset: a well-known and widely used virtual infrastructure that provides the sensation of infinite resource availability and a varied hub of services. For instance, a traditional cloud computing service infrastructure is likely to provision monitoring devices, storage devices, analytic tools, visualization platforms, and client delivery. The cost-based model that cloud computing offers enables end-to-end service provisioning for businesses and users to access applications on demand from anywhere [8].

The general model of cloud computing is based on centralized Data Center (DC) architectures, which are treated as a monopolized hub of services concentrating all computation and storage capabilities. In current cloud-based frameworks, all application requests and resource demands are processed and handled inside central server farms. However, the increasing popularity and penetration of Internet-connected devices, driven by the innovation of the IoT paradigm, has imposed many challenges on the performance of cloud DCs in handling the huge amount of information that is expected to be generated. In 2012, the global commercialization of IoT-based application systems generated a revenue of $4.8 trillion [9]; Cisco estimates that global corporate profits will increase by approximately 21% because of the adoption of IoT alone [10]; and it is estimated that, by 2020, around 80 billion devices will be connected to the Internet [11]. Thus, to provide computing and storage to these devices, cloud DCs need to adopt new approaches to manage such heterogeneous scenarios.

The innovation of IoT depends on advances in cloud computing. The data produced by billions of Internet-connected devices are voluminous and need to be processed and stored within cloud DCs. However, many IoT applications, such as smart vehicular traffic management systems, smart driving and car parking systems, and smart grids, demand real-time, low-latency services from their providers [12]. Since the traditional cloud computing model carries out both processing and storage of data in a centralized way within the DC infrastructure, the increasing volume of IoT data traffic is likely to severely degrade the performance of cloud service applications, mainly in the form of high latency and poor Quality of Service (QoS) caused by network bottlenecks.

The aforementioned performance issues of centralized cloud computing infrastructures in supporting latency-sensitive IoT service applications have motivated the research community to search for alternative solutions. In this scope, the Fog Computing [12] concept arises as an asset, by enabling applications on the billions of devices already connected to the Internet of Things (IoT) to run directly at the network edge [13]. As fog computing is implemented at the edge of the network, it provides low latency and location awareness, and improves Quality of Service (QoS) for streaming and real-time applications. Typical examples include transportation and networks of sensors and actuators. Moreover, this new infrastructure supports heterogeneity, as fog devices include end-user devices, access points, edge routers, and switches. The fog paradigm is well positioned for real-time big data analytics, supports densely distributed data collection points, and provides advantages in entertainment, advertising, personal computing, and other applications [14].

Despite the high capability of fog computing to provide a low-latency infrastructure, its deployment is not trivial. Most works on fog computing have primarily focused on its principles, basic notions, and doctrines; few have contributed to the technical aspects of the paradigm from an implementation perspective.

In this work, we propose a platform for provisioning resources at the network edge, named SmartEdge. The platform provides a new service model, named Edge Infrastructure as a Service (EIaaS), based on the recent fog computing paradigm, to meet the demands of real-time, latency-sensitive applications in the context of IoT. Fog computing was first introduced in [12] as a new paradigm focused on the infrastructure for Internet of Things applications. The idea of fog computing emerged to enable data distribution and placement closer to the end user, thus reducing service latency, improving Quality of Service, and removing other possible obstacles between the user and the service.

For the sake of completeness, it is important to state that the logic that decides when an application should use cloud- or fog-provided resources is out of the scope of this work. The scope of SmartEdge is the infrastructure level only; such decision-making functionality is expected to be handled at the application logic level.

1.1 Objectives

The main objective of the SmartEdge platform is to offer on-demand access to resources provided at the network edge, to serve real-time applications demanding low response times (e.g., live streaming, smart traffic monitoring, smart parking). For requests that demand semi-permanent or permanent storage, or that require extensive analytics over historical data sets (e.g., social media data, photos, videos, medical history, data backups) and thus still require the standard IaaS provided by the cloud, the edge devices act as routers or gateways that redirect the requests to the core cloud computing framework.
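The split described above, between requests served at the edge and requests redirected to the core cloud, can be sketched as a simple dispatch rule. This is an illustrative sketch only: the request classes and the decision logic are hypothetical (the dissertation explicitly leaves this kind of decision to the application layer), not part of SmartEdge itself.

```python
# Hypothetical sketch of the edge-vs-cloud dispatch described above.
# Class names are illustrative examples taken from the text, not a real API.

EDGE_CLASSES = {"live-streaming", "traffic-monitoring", "smart-parking"}
CLOUD_CLASSES = {"backup", "media-archive", "historical-analytics"}

def dispatch(request_class: str) -> str:
    """Return which tier should handle a request of the given class."""
    if request_class in EDGE_CLASSES:
        return "edge"        # low-latency request: serve it at the network edge (EIaaS)
    if request_class in CLOUD_CLASSES:
        return "core-cloud"  # storage/analytics request: redirect to the core IaaS
    return "core-cloud"      # default: fall back to the core cloud
```

In SmartEdge's design, logic of this kind would live in the application, with the platform merely exposing both tiers.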


1.1.1 Specific Objectives

On the basis of the aforementioned main objective of this dissertation, the list of specific

objectives required for fully carrying out this work are provided in the following:

1. Propose new strategies to deal with the deployment of latency-sensitive applications.

2. Implement a Heat Orchestration Template (HOT) to enable the easy deployment of the

proposed solution on a cloud provider;

3. Design and implement a mechanism to enable the deployment of applications, in the

form of containers, on any compatible device, typically network edge devices, in order to

extend the cloud services to the edge of the network;

4. Design and implement a mechanism to provide network functions to interconnect in a

controlled and efficient way the deployed containers and the resources available on the

cloud;

5. Propose the EIaaS model, a new service model that takes advantage of the SmartEdge

platform by enabling operators to use their network edge devices as an extension to their

cloud infrastructure;

6. Integrate all the hereinabove mechanisms to feature the SmartEdge framework;

7. Design and develop a reference software architecture that instantiates the SmartEdge

framework proposal;

8. Implement a prototype in a real testbed featuring the SmartEdge reference architecture and its EIaaS model, in order to benchmark its suitability and performance in the context of IoT applications. The benchmarking analyzes runtime statistics collected while running IoT applications in the testbed, over typical cloud-enabled frameworks.
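Specific objective 3 concerns deploying applications as containers on edge devices. As a minimal sketch of what such a mechanism must produce (not the actual SmartEdge implementation, whose design is presented later), the following composes a `docker run` invocation from an application description; the field names of the description are assumptions for this example.

```python
# Illustrative sketch: composing a `docker run` command that deploys an
# application container onto a compatible edge device. The application
# spec fields ("name", "image", "ports", "env") are assumed for the example.

def container_run_command(app: dict) -> str:
    """Build a docker CLI invocation for an edge application container."""
    parts = ["docker", "run", "-d", "--name", app["name"]]
    for host_port, container_port in app.get("ports", []):
        parts += ["-p", f"{host_port}:{container_port}"]   # expose service ports
    for key, value in app.get("env", {}).items():
        parts += ["-e", f"{key}={value}"]                  # runtime configuration
    parts.append(app["image"])
    return " ".join(parts)

app = {"name": "smart-parking", "image": "registry.example/parking:1.0",
       "ports": [(8080, 80)], "env": {"REGION": "edge-01"}}
print(container_run_command(app))
```

A real deployment mechanism would additionally handle image distribution to the edge device and lifecycle management, which the later chapters address.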

Given the different types of application constraints in the IoT environment, and considering that fog computing is a subfield of cloud computing, it is important to highlight that EIaaS interworks with IaaS: the two models complement, rather than replace, one another. Together, EIaaS and IaaS offer end-users a new service delivery model able to fulfill the requirements of real-time, low-latency IoT applications running at the network edges, while complex analytics and long-term data storage can still be afforded at the core cloud framework.


1.2 Motivation

The main sources of storage and computing in the cloud computing architecture are the dispersed DCs, which communicate among themselves through their Data Center Networks (DCNs). In this way, most Internet traffic is centralized in these DCNs, creating a huge bottleneck for latency-sensitive applications. Motivated by the concepts of fog computing proposed by Bonomi et al. [12], in this work we propose and develop a platform to provide EIaaS for the fog paradigm and assess its performance while supporting IoT requirements. With the increase in the number of IoT devices demanding real-time services from providers, the traditional cloud computing framework is expected to face the following key challenges:

1. Nowadays, connected devices have already reached 9 billion and are expected to grow even more rapidly, reaching 80 billion by 2020 [11]. With this tremendous increase in the number of envisioned IoT devices, the DCNs will be subjected to heavy network traffic demand, affecting their capability to meet low-latency application requirements. As a consequence, real-time applications will experience quality degradation in network service transport, and thus in QoE, especially over wireless communications.

2. According to a report [15] from the U.S. Environmental Protection Agency (EPA), DCs were identified as one of the fastest growing consumers of energy: in 2006, U.S. DCs consumed about 61 billion kilowatt-hours of power, representing 1.5% of all power consumed in the U.S., at a total financial expenditure of $4.5 billion. It was also observed that, in 2007, 30 million servers worldwide accounted for 100 TWh of the world's energy consumption, at a cost of $9 billion, which is expected to rise to 200 TWh in the next few years [15], [16]. Therefore, it is important to relieve the cloud DCs from being bombarded with service requests by serving a part of those requests from the network edge. This would relax the load experienced by the DCs and would also serve latency-sensitive application requests better, with increased QoS.

3. Many works have contributed to establishing the principles, basic notions, and doctrines of fog computing. However, its deployment, suitability, and technical aspects still lack experimental evaluation.
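As a quick consistency check of the EPA figures cited in item 2, the stated consumption and cost imply an average electricity price of roughly 7.4 cents per kWh, a plausible 2006 U.S. rate:

```python
# Sanity check of the EPA figures cited above: 61 billion kWh at a total
# cost of $4.5 billion implies an average price of about $0.074/kWh.
us_dc_kwh = 61e9          # 2006 U.S. data-center consumption (kWh)
us_dc_cost = 4.5e9        # corresponding expenditure (USD)
price_per_kwh = us_dc_cost / us_dc_kwh
print(f"${price_per_kwh:.3f}/kWh")  # $0.074/kWh
```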

Considering the motivations listed in this section, the next one highlights the main contributions envisioned by carrying out this work.


1.3 Contribution

As mentioned before, EIaaS is an extension to the standard IaaS offered by cloud computing providers. In this work, we analyze the development and suitability of EIaaS combined with the traditional IaaS in supporting the ever-increasing demands of latency-hungry IoT-based applications. The expected contributions of this work are listed below.

1. Initially, this work constructs the architectural model of the proposed SmartEdge plat-

form, based on the concepts of fog computing – one of the first attempts of its kind in this

direction. We define the different modules and links within the cloud computing architec-

ture and explain the communication exchange pattern between them.

2. Based on this model, a prototype will be developed and deployed on a real testbed, using devices such as set-top boxes, desktops, and notebooks as edge devices.

3. The prototype characterizes the performance of the proposed EIaaS in terms of service latency, performance, and scalability. It also provides a technical view of the challenges involved in deploying the fog computing paradigm.

4. The work also performs a fair and equitable comparative study for both IaaS and EIaaS

models. We analyze the suitability of the EIaaS service type to support the demands of

IoT devices while serving latency-sensitive applications.

1.4 Work Organization

This work is organized as follows. Chapter 2 provides a theoretical background on the technologies concerned in the proposed work, as well as on how they relate to each other and to this work. Chapter 3 presents related works, where each solution is described and compared according to a set of elicited features. Chapter 4 introduces the proposed work and its architecture, and describes the new EIaaS service type. Preliminary results on virtualization techniques, as well as the benchmarking results and analysis of the solution, are presented in Chapter 5. Finally, Chapter 6 provides the conclusions and future work.


Chapter 2

Theoretical Background

This chapter aims at providing the concepts related to the main fields of research considered in this dissertation, with emphasis on foundations, architectures, and their relation to the proposed work.

2.1 Smart Cities

A Smart City [2] is an urban system that uses information and communication technology

(ICT) to make both its infrastructure and its public services more interactive, more accessible

and more efficient.

A Smart City is a city committed to its surroundings, both environmentally and in terms of its cultural and historical elements, and one whose infrastructure is equipped with the most advanced technological solutions to facilitate citizen interaction with urban elements.

The origin of Smart Cities is mainly based on two factors. Firstly, the increase in world population and its growing migration from rural areas to urban centers: the urban population is forecast to reach 70% of the total by 2050 [17]. Secondly, there is concern about the shortage of natural resources, which may compromise supply to the world population in the coming years, along with concerns about the environment and climate change [18].

Society needs to address new challenges in order to minimize the consumption of natural energy resources, promote renewable energy, and reduce CO2 emissions to the atmosphere in these highly populated towns and new urban areas [19]. The Smart City concept is a powerful tool to address this urban change, one that must be able to efficiently manage infrastructure and services while meeting the needs of the city and its citizens.

Focusing on the technological scenario, ICTs, together with local governments and private companies, play a key role in implementing innovative solutions, services, and applications to make smart cities a reality. The Internet of Things paradigm is playing a primary role as an enabler of a broad range of applications, both for industries and the general population [20]. The increasing popularity of the IoT concept is also due to the constantly growing number of very powerful devices, such as smartphones, tablets, and laptops, as well as lower-powered devices such as sensors, that are able to join the Internet.

In the context of Smart Cities, it makes sense to consider a scenario of various heterogeneous devices and Wireless Sensor Networks (WSNs) interconnected with each other, and to exploit these "interconnections" to enable new types of services.

ICT trends suggest that sensing and actuation resources can be incorporated into the Cloud, and solutions for the convergence and evolution of IoT and cloud computing infrastructures are arising [20]. Nevertheless, some challenges need to be faced, such as: 1) interoperability among different ICT systems; 2) the huge amount of data to be processed and provided in real time by the IoT devices deployed in smart systems; 3) the significant fragmentation deriving from the multitude of IoT devices; 4) heterogeneous resource mashup, i.e., how to orchestrate resources in this heterogeneous environment. Concerning these items, the proposed work is a valid starting point to overcome these challenges.

2.2 Internet of Things

The concept of the Internet of Things (IoT) was first introduced in 1999 by Kevin Ashton [21], who referred to IoT as uniquely identifiable, interoperable connected objects based on radio-frequency identification technology. IoT was generally defined as a "dynamic global network infrastructure with self-configuring capabilities based on standards and interoperable communication protocols. In an IoT environment, physical and virtual 'things' have identities and attributes and are capable of using intelligent interfaces and being integrated as an information network" [22].

Basically, the IoT can be treated as a superset of connected devices that are uniquely identifiable through existing communication technologies. The words "Internet" and "Things" together denote an interconnected worldwide network based on sensory, communication, networking, and information processing technologies, which might be seen as the new generation of information and communications technology.

According to [4], the introduction of IoT will affect users in both the domestic and working fields. Examples of this domestic influence include application scenarios in assisted living, e-health, and enhanced learning, where the new paradigm will play a leading role in the near future. In the working field, the most apparent consequences will be equally visible in scenarios such as automation and industrial manufacturing, logistics, business/process management, and the intelligent transportation of people and goods.

The U.S. National Intelligence Council (NIC) included IoT in the list of six "Disruptive

Civil Technologies" with potential impacts on the U.S. national power [23]. NIC also foresees

that "by 2025 Internet nodes may reside in everyday things – food packages, furniture, paper

documents, and more". It highlights future opportunities that will arise, starting from the idea

that "popular demand combined with technology advances could drive a widespread diffusion

of an Internet of Things that could, like the present Internet, contribute invaluably to economic

development". The International Telecommunication Union (ITU) also discussed the enabling

technologies, potential markets, and emerging challenges and the implications of the IoT [24].

The IoT describes the next generation of the Internet, where physical things can be accessed and identified through the Internet. The definition of IoT varies depending on the technologies used in its implementation. However, the fundamentals of IoT imply that the objects composing an IoT environment can be uniquely identified by their virtual representations. Within an IoT, all things are able to exchange data and, if needed, process it according to predefined schemes.

Recent research has spawned the concept of IoT [25] [26] that connects billions of things

across the globe to the Internet and enables Machine to Machine (M2M) communication [27]

among these devices. Contemporary devices and Internet-based systems are gradually converging towards IoT [28]. According to [11], by 2020, around 80 billion devices will be connected to the Internet. Thus, it is estimated that by 2020 a large number of applications will need to be processed and served through IoT technology [29] [30].

Analyzing contemporary data trends of large volume, heavy heterogeneity, and high velocity (Big Data), it is also anticipated that a vast majority of these applications will be highly latency-sensitive and require real-time processing [31] [32] [33]. Therefore, to provision resource management and heavy computational power for these applications, IoT leans heavily on cloud computing [34] [35]. Consequently, the performance of IoT profoundly depends on the ability of cloud platforms to serve billions of devices and their applications in real time [36].


2.3 Cloud Computing

According to [37] [38], Cloud computing is neither a completely new concept nor a new technology. It is rather a new business operational model that originated from other existing technologies such as Virtualization, Service-Oriented Architecture (SOA), and Web 2.0. There are already several definitions of Cloud computing in the academic and commercial worlds [39] [40] [7] [41]; these definitions can be summed up as follows:

“Cloud Computing is a parallel and distributed system consisting of a shared pool of vir-

tualized resources (e.g. network, server, storage, application, and service) in large-scale data

centers. These resources can be dynamically provisioned, reconfigured, and exploited through a pay-per-use economic model, in which the consumer is charged according to the quantity of cloud services used and the provider guarantees Service Level Agreements (SLAs) through negotiations with consumers. In addition, resources can be rapidly leased and released with minimal management

effort or service provider interaction. Hardware management is highly abstracted from the user

and infrastructure capacity is highly elastic.”

The aim is to concentrate computation and storage in data centers, where high-performance

machines are linked by high-bandwidth connections, and all of these resources are carefully

managed. The end-users make the requests that initiate computations and receive the results. In

spite of the existing differences among definitions of Cloud computing, some common characteristics are described as follows.

• Virtualization: Hardware Virtualization mediates access to the physical resources, de-

couples applications from the underlying hardware and creates a virtualized hardware

platform using a software layer (the hypervisor). The hypervisor creates and runs virtual machines (VMs). A virtual machine behaves like a real computer, except that it uses virtual resources, which enables isolation and independence from the particular hardware. It also permits assigning virtual resources to other physical hardware in case of capacity constraints or hardware failures. Through virtualization, the underlying architecture is abstracted from the user while still providing flexible and rapid access to it.

• Multitenancy: Allows several customers (tenants) to share the DC infrastructure without

being aware of it and without compromising the privacy and security of each customer’s

data (through isolation). Even though multitenancy is cost-effective, it may affect perfor-

mance when simultaneously accessing shared services (multi-tenant interference).

• Service-oriented architecture: Everything is expressed and exposed as a service, which delivers an integrated and orchestrated suite of functions to an end-user through the composition of both loosely and tightly coupled functions.

• On-demand self-service: Cloud computing allows self-service access so that customers

can request, customize, pay, and use services, as needed, automatically, without requiring

interaction with providers or any intervention of human operators [7].

• Elasticity: To provide the illusion of infinite resources, more virtual machines (on two or

more physical machines) can be quickly provisioned (scale out), in the case of peak de-

mand, and rapidly released (scale in), to keep up with the demand. These scaling methods

can automatically be done according to the user’s predefined conditions (Auto Scaling).

• Network access: Services are available over the network and accessed through standard

mechanisms that can be achieved by heterogeneous thin or thick client platforms such as

mobile phones and laptops.

• Resource pooling: The Cloud provider offers a pool of computing resources to serve mul-

tiple consumers using a multi-tenant model, with different physical and virtual resources.

Location transparency, which hides a resource's physical location from the customer (it may be exposed at a higher level of abstraction, such as the country), gives Cloud providers more flexibility in managing their own resource pools.

• Measured service: Cloud systems can transparently monitor, control and measure service

usages for both the provider and the consumer by leveraging a metering capability at

some level of abstraction appropriate to the type of service, similar to what is being done

for utilities such as Electricity, Gas, Water, Telecommunication, etc.
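The elasticity characteristic above can be illustrated with a minimal threshold-based auto-scaling rule; the thresholds and VM counts here are illustrative assumptions, not values from any specific provider:

```python
# Minimal illustration of elasticity: a threshold-based auto-scaling rule
# that scales out under peak demand and scales in when load drops.
# The 80%/20% thresholds are illustrative assumptions.

def autoscale(vm_count: int, avg_cpu: float,
              scale_out_at: float = 0.80, scale_in_at: float = 0.20,
              min_vms: int = 1) -> int:
    """Return the new VM count given average CPU utilization in [0.0, 1.0]."""
    if avg_cpu > scale_out_at:
        return vm_count + 1                # peak demand: provision one more VM
    if avg_cpu < scale_in_at and vm_count > min_vms:
        return vm_count - 1                # low demand: release an idle VM
    return vm_count                        # demand within bounds: no change

print(autoscale(3, 0.92))  # 4 (scale out)
print(autoscale(3, 0.10))  # 2 (scale in)
print(autoscale(1, 0.10))  # 1 (respect the minimum)
```

Real Auto Scaling services evaluate such rules periodically against monitored metrics, which is exactly the "user's predefined conditions" mentioned in the elasticity item.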

In addition to these outstanding characteristics, Cloud computing brought cost savings for consumers by removing capital expenses (CapEx) and reducing operating expenses (OpEx). CapEx savings are achieved by eliminating the total cost of the entire infrastructure. OpEx savings are achieved by sharing the cost of electricity, system administrators, hardware engineers, network engineers, facilities management, fire protection, insurance, and local and state taxes on facilities. There are other hidden OpEx costs that a Cloud instance can eliminate, such as purchasing and acquisition overhead, asset insurance, and business interruption planning and software.

At least three well established delivery models exist on the Cloud: Software as a Service

(SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) deployed as public,


private, community, and hybrid Clouds. These service types and boundaries are elaborated in

the following sections.

2.3.1 Service Models

Consumers purchase Cloud services in the form of infrastructure, platform, or software.

Infrastructure services are considered to be the bottom layer of Cloud computing systems. In-

frastructure as a Service offers virtualized resources (such as computation, storage, and commu-

nication) on-demand to the infrastructure specialists (IaaS consumers) who are able to deploy

and run arbitrary operating systems and customized applications. IaaS Cloud providers often

provide virtual machines with a software stack that allows them to be customized, similar to physical servers. They grant users privileges to perform some operations on their virtual servers (such as starting and stopping them). Therefore, an infrastructure specialist does not manage or control the underlying Cloud infrastructure, while having control over operating systems, storage, deployed applications, and possibly limited control of some networking components, e.g. host firewalls. This type of service is particularly useful for start-ups and small and medium businesses with rapidly expanding or dynamically changing demands that do not want to invest in infrastructure [40].

Platform as a Service is another model in the Cloud that offers a higher level of abstraction

to make a Cloud easily programmable. PaaS Cloud providers provide a scalable platform with

a software stack containing all tools and programming languages supported by the provider.

They allow developers (PaaS consumers) to create and deploy applications without the hassle

of managing infrastructure, and regardless of the concerns about processors and memory ca-

pacity. Therefore, the developer does not manage or control the underlying Cloud infrastructure

including network, servers, operating systems or storage while having control over the deployed

applications and possibly application-hosting environment configurations [7].

Delivering applications supplied by service providers at the highest level of abstraction in

the Cloud to the end-users (SaaS consumers) through a thin client interface such as a web portal

is known as Software as a Service. SaaS Cloud providers supply a software stack containing an

operating system, middlewares such as database or web servers, and an instance of the Cloud

application, all in a virtual machine. Therefore, the end-user does not manage or control the

underlying Cloud infrastructure including network, servers, operating systems, storage, or even

individual application capabilities, with the possible exception of limited user-specific applica-

tion configuration settings. SaaS alleviates the burden of software maintenance for customers

and simplifies development and testing for providers [42].

Figure 1 (adapted from [43]) summarizes the service models and their responsibilities.


Figure 1: Cloud-computing service models and responsibilities.
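The division of management responsibility across the three service models can also be captured as a simple lookup table. This is a sketch following the usual stack diagram; the layer names are the conventional ones, not a normative list from the source:

```python
# Sketch of the responsibility split across IaaS, PaaS, and SaaS: which
# stack layers the CONSUMER manages; everything else is provider-managed.
# Layer names follow the common service-model stack diagram.

STACK = ["networking", "storage", "servers", "virtualization",
         "operating system", "middleware", "runtime", "data", "application"]

CONSUMER_MANAGED = {
    "IaaS": {"operating system", "middleware", "runtime", "data", "application"},
    "PaaS": {"data", "application"},
    "SaaS": set(),   # everything is managed by the provider
}

def provider_managed(model: str) -> list:
    """List the layers the provider manages under the given service model."""
    return [layer for layer in STACK if layer not in CONSUMER_MANAGED[model]]

print(provider_managed("PaaS"))
```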

2.3.2 Deployment Models

There are four general Cloud deployment models known as private, community, public, and

hybrid Cloud.

In Private Cloud, the infrastructure is owned and exclusively used by a single organiza-

tion, and managed by the organization or a third-party and may exist on or off the premises

of the organization. Many organizations, particularly governmental or very large organizations,

embraced this model to exploit the Cloud benefits like flexibility, reliability, cost reduction,

sustainability, elasticity, and so on.

In Community Cloud, the infrastructure is shared by several organizations and supports a

specific community with shared concerns such as mission, security requirements, policy, and

compliance considerations. It may be owned and managed by the organizations or by a third-

party and may exist on premises or off-premises.

In Public Cloud, the infrastructure exists on the premises of the Cloud provider and is avail-

able to the general public or a large industry group, and is owned by an organization selling

Cloud services. There are many Public Cloud computing styles based on the underlying resource

abstraction technologies, for example Amazon Web Services [44], Google Cloud Platform [45]

and Rackspace Public Cloud [46]. However, a Public Cloud can use the same hardware infrastructure (at large scale) as a Private one. In contrast with the Private model, the Public Cloud lacks fine-grained control over data, network, and security settings, which hampers its effectiveness in many business scenarios [47]. This model is suitable for small and medium businesses seeking to support their growing business without huge investments in infrastructure.

Sometimes the best infrastructure to fit an organization’s specific needs requires both Cloud

and on-premises environments. In Hybrid Cloud, the services within the same organization are

a composition of two or more Clouds (Private, Community, or Public) to address the limitations

of each model with more scalability and flexibility whilst both saving money and providing ad-

ditional security. On the down side, Hybrid Cloud environments involve complex management

and governance challenges.

Some other deployment models, such as Virtual Private Cloud and Managed Cloud are well

known but not widely used. Virtual Private Cloud is a Private Cloud that leverages a Public

Cloud infrastructure using advanced network capabilities (such as VPN) in an isolated and

secured manner. Managed Cloud is a type of private Cloud that is managed by a team of experts

in a third-party company. Managed Cloud includes access to a dedicated, 24 x 7 x 365 support

team via phone, chat, online support, and so on to support Cloud servers from the OS up through

the application stack. Amazon VPC [48] and Rackspace Managed Cloud [49] are examples of

Virtual Private Cloud and Managed Cloud, respectively. There are also Managed Virtual Private

Cloud such as HP Helion Managed Virtual Private Cloud [50].

Figure 2 (adapted from [51]) depicts the Cloud Computing deployment models.

2.3.3 Data Center Bottleneck

Over the last few years, several studies [52] [53] [54] on cloud computing have illustrated the detailed underlying process behind the provisioning of cloud services. Generally, the complete process of provisioning virtualized cloud services involves several cloud DCs, dispersed across multiple

service provisioning involves one or more DCNs. In [55], Xiao et al. addressed the problem

of design and optimal positioning of DCs to improve the QoS in terms of service latency and

cost efficiency. However, the work is strongly affected by the efficiency of the DCNs. In another

work, Chen et al. [56] focused on the problem of latency for video streaming services. The work

suggests the usage of a single DC under a single Cloud Service Provider (CSP). However, the

situation might be hypothetical as in real-life scenarios of IoT, a single DC under a single global

CSP may hinder the overall service efficiency due to lack of proper management and shortage

of cloud storage. Tziritas et al. [57] addressed process migration to improve the performance

Page 31: SmartEdge: Fog Computing Cloud Extensions to Support Latency … · 2017-11-04 · Our irreplaceable dependency on cloud computing, demands DC infrastructures always available while

29

Figure 2: Cloud-computing deployment models representation.

of cloud systems and demonstrated experimental results with 1000 processes. However, IoT

concerns billions of processes and in such a scenario, process migration within DCs might be

of overhead degrading the performance. Other scheduling techniques that focus on real-time

workload scheduling [58] or energy-efficient scheduling [59] have also worked with low scale

scenarios.

For each of the above works, the DCs form the computing resources hub and the DCNs are

invoked every time an application makes a service request. Therefore, with the increase in the number of IoT consumers, and with every request required to be processed within the DCs, it is likely that the cloud DCNs will encounter serious difficulty in serving IoT applications in real time. Additionally, with the increase in the number of latency-sensitive applications, the efficiency of service provisioning will also decrease to a significant extent.

2.4 Fog Computing

The contemporary trends in data volume, velocity, and variety, together with the limitations of cloud computing, make it easy to anticipate the need for new techniques of data management


and administration. In this context, Cisco proposed the revolutionary concept of fog computing

[12] [60].

Fog computing is defined as a distributed computing infrastructure that is able to handle

billions of Internet-connected devices. It is a model in which data, processing and applications

are concentrated in devices at the network edge, rather than existing almost entirely in the Cloud; the aim of fog computing is to isolate them from the Cloud systems and place them closer to the end-user. The Fog is organizationally located below the Cloud and serves as

an optimized transfer medium for services and data within the Cloud.

Fog computing happens outside the Cloud and ensures that Cloud services, such as

compute, storage, workloads, applications and big data, can be provided at any edge of the

network (Internet) in a truly distributed way.

By controlling data at various edge points, Fog computing integrates with core Cloud services, turning the data center into a distributed Cloud platform for users. In other words, the Fog brings computing from the core to the edge of the network.

In this context, it may be just another name for Edge computing [61]. Edge Computing is

pushing the frontier of computing applications, data, and services away from centralized nodes

to the logical extremes of a network. It enables analytics and knowledge generation to run at the

source of the data. This approach requires leveraging resources that may not be continuously

connected to a network such as laptops, smart phones, tablets and sensors [62]. Two systems

that can provide resources for computing near the edge of the network are the MediaBroker [63]

for live sensor stream analysis, and Cloudlets [64] for interactive applications. However, neither

currently supports widely distributed geospatial applications [65].

The idea of Fog computing has emerged to distribute data and place it closer to the end-user, reduce service latency, improve QoS, and remove other possible obstacles connected

with data transfer. Because of its wide geographical distribution, the Fog paradigm is well posi-

tioned for big data and real time analytics and it supports mobile computing and data streaming.

Fog computing is not a replacement for Cloud computing; it is an addition that extends the concept of Cloud services. Services are hosted at the network edge or even on end devices such as set-top boxes or access points. Conceptually, Fog computing builds upon existing and common technologies like Content Delivery Networks (CDNs) [66], but, being based on Cloud technologies, it should ensure the delivery of more complex services. However, developing applications using fog computing resources is more complex [65].

The everything-as-a-service model has already been applied by the industry. This means that, in order to successfully emerge, future computing paradigms must support the idea of the Internet of Things, wherein sensors and actuators blend seamlessly with the environment around us and information is shared across platforms in order to develop a common operating picture. Fog computing supports emerging Internet of Things applications that demand real-time or predictable latency, such as industrial automation, transportation, and networks of sensors and actuators.

The concept of Fog computing is not something to be developed in the future. It is already

here and a number of distributed computing and storage start-ups are adopting the phrase [67].

A lot of companies have already introduced it, while other companies are ready for it [68].

Actually, any company which delivers content can start using Fog computing. A good example

is Netflix, a provider of media content, who is able to reach its numerous globally distributed

customers. If data were managed in only one or two central data centers, the delivery of the video-on-demand service would not be efficient enough. Fog computing thus allows very large amounts of streamed data to be served by delivering the data directly into the vicinity of the customer.

2.4.1 Architecture

The Fog Computing architecture is highly based on a virtualized platform that provides

compute, storage, and networking services between end devices and traditional Cloud Com-

puting data centers, typically, but not exclusively, located at the edge of the network [69]. Figure 3

presents the architecture and illustrates an implementation of Fog Computing.

At the bottom layer, millions of connected devices (smart things, vehicles, wireless or wired machines) can take advantage of the network edge for processing and storage; this layer also enables real-time M2M communication between these devices.

The next layer represents the network edge, where tens of thousands of devices compose the infrastructure for deploying the Fog nodes. These devices provide connectivity to the bottom layer through many technologies, such as 3G, 4G, LTE, and WiFi. This layer also provides the connectivity between the Cloud and the Fog nodes: thousands of devices using different technologies (IP/MPLS, Multicast, QoS) improve the communication between the Cloud and the network edge while also securing it.

The top layer represents the Cloud, hosted at dedicated data centers. It manages the Fog nodes, hosts the applications, and stores and processes data from sources that do not require real-time handling.


Figure 3: Fog-computing Architecture

2.4.2 Applicability

Recent research has revealed some of the important aspects of fog computing. In [70],

the authors considered various computing paradigms inclusive of cloud computing, and inves-

tigated the feasibility of building up a reliable and fault-tolerant fog computing platform. Do et

al. [71] and Aazam and Huh [72] have inspected the different intricacies of resource allocation

in a fog computing framework. Research into security for this paradigm has explored various

theoretical vulnerabilities [73] [74]. However, most works on fog computing have primarily focused on its principles, basic notions, and doctrines. Few works have contributed to the technical aspects of the paradigm from an implementation perspective.

Cirani et al. [75] explored one such implementation of Fog computing, creating a Fog node

named “IoT Hub”.


2.5 Virtualization at the Network Edge

Internet of Things specific applications may require the deployment of gateways at the

network edge to enable its interaction with physical sensors, pre-processing data from these

sensors, and synchronizing it with the cloud. The orchestration, deployment, and maintenance

of the software running on the gateways in large-scale deployments is known to be challenging. Due to the limited resources available at the network edge, the evaluation of virtualization techniques is fundamental to making better use of these resources.

Recent advances in the virtualization field have improved the existing technology (hypervisor-based) and also created a new virtualization class, container-based virtualization, also referred to as lightweight virtualization. In contrast to VMs (created by hypervisor-based virtualization), containers can be seen as more flexible tools for packaging, delivering, and orchestrating both software infrastructure services and applications, i.e., tasks that are typically a PaaS focus. Containers offer greater portability, aiming at more interoperability [76], while still building on operating system (OS) virtualization principles. VMs, on the other hand, are about hardware allocation and management (machines turned on/off and provisioned); with them, there is an IaaS (Infrastructure-as-a-Service) focus on hardware virtualization.

2.5.1 Hypervisor-Based Virtualization

The core of hypervisor-based virtualization is a software layer called the hypervisor, which allows several operating systems to run side by side on a given piece of hardware. Unlike conventional virtual-computing programs, a (Type-1) hypervisor runs directly on the target hardware's "bare metal", instead of as a program inside another operating system. This allows both the guest OS and the hypervisor to perform more efficiently.

The current crop of hypervisors runs on commodity hardware (the x86/x64 processor family) rather than on specialized server hardware. This is partly because the processor architecture makes virtualization easier, and partly because various hypervisor technologies were developed and improved in the open source domain, making it much simpler for them to be adopted broadly.

During the last decade, hypervisor-based virtualization has been widely used for implementing virtualization and isolation. As hypervisors operate at the hardware level, they allow standalone virtual machines to remain independent of and isolated from the host system, making it possible to virtualize different guest OSes, e.g., Windows-based VMs on top of Linux. However, the trade-off is that a full operating system is installed into each virtual machine,


which means that the image will be substantially larger. In addition, the emulation of the virtual

hardware device incurs more overhead.

According to [77] hypervisors are classified in two different types:

• Type-1: hypervisor that runs directly on the hardware and that hosts guest operating sys-

tems (operate on top of the host’s hardware).

• Type-2: hypervisor that runs within a host OS and that hosts guest OSes inside of it

(operate on top of the host’s operating system).

However, this categorization is steadily being eroded by advances in hardware and operating-

system technology. For example, Linux Kernel-based Virtual Machine (KVM) has characteris-

tics of both types [78].

2.5.2 Container-Based Virtualization

Containers are a lightweight approach to virtualization that can be used to rapidly develop,

test, deploy, and update IoT applications at scale. Many web and mobile app developers make

use of hypervisors, such as VirtualBox [79], to run virtual machines that virtualize physical

hardware as part of a cross-platform development, testing, and deployment. Such workflow can

be greatly improved by the use of containers. Container-based virtualization (sometimes called lightweight virtualization) imposes far less overhead: each container runs as an isolated user-space instance on top of a shared host operating system kernel. Although the operating system is

shared, individual containers have independent virtual network interfaces, independent process

spaces, and separate file systems. These containers can be allocated with system resources like

RAM, CPU, and network bandwidth by using control groups, which implement resource isolation. Compared to hypervisor-based virtualization, where each VM runs its own operating system and thereby increases its use of system resources, containers consume far less disk and memory.

Container technology is not new: it has been built into Linux in the form of Linux Containers (LXC) [80] for almost ten years, and similar operating-system-level virtualization has also been offered by FreeBSD jails, AIX Workload Partitions, and Solaris Containers. However, Docker [81] exploded onto the scene a couple of years ago and has been causing excitement in IT circles ever since. The application container technology provided by Docker promises to change the way IT operations are carried out, just as virtualization technology did a few years earlier.


2.5.3 Virtual Machines vs. Containers

Containers and virtual machines both allow multiple applications to run on the same phys-

ical systems. They differ in the degree to which they meet different kinds of business and IT

requirements.

The main objective of virtual machines is to offer abstraction from the underlying hardware, and they do it very well. This enables cost reduction and automation when provisioning a complete software stack, including the operating system, the application, and its dependencies. By automating Infrastructure-as-a-Service and Platform-as-a-Service solutions, it is possible to reduce the overall data center total cost of ownership (TCO). These savings come from server consolidation and simplified system administration, as different operating systems can run on the same hardware.

However, there are cases where virtual machines do not fit very well. Virtual machines need minutes to boot up, which gives attackers time to exploit known vulnerabilities during boot and can degrade the user experience. Also, since every virtualized application involves at least two operating systems for operators to manage and secure (the hypervisor and the guest OS), patching and life-cycle management for virtual machines requires significant effort. Even the simplest OS process needs its own virtual machine; this increases flexibility, but it also makes virtual machines inefficient for micro-services architectures with hundreds or thousands of processes. Moreover, when each physical server is simply replaced by one virtual machine, physical resource utilization tends to remain low. Businesses, academia, and government can all gain from a more efficient way to build, ship, deploy, and run applications, and that is where Linux containers come in.

In this context, a Linux container is basically a set of processes, together with its dependencies, isolated from the rest of the machine. For example, if a web server relies on a particular Python library, the container can encapsulate that library. In that way, containers make it possible to have multiple versions of the same library, or of any other dependency, co-existing in the same environment, without the administrative overhead of a complete software stack, including the OS kernel [77].

As can be seen in Figure 4(a), virtual machines include the application, the necessary binaries and libraries, and an entire guest operating system, all of which can amount to tens of GBs. Containers (Figure 4(b)), in contrast, include the application and all of its dependencies but share the kernel with other containers, running as isolated processes in user space on the host operating system.


Figure 4: (a) Virtual Machine and (b) Container isolation layers.

Regarding performance, containerized applications perform about as well as applications deployed on bare metal. Containers run in isolation while sharing an operating system instance, unlike hypervisors, which provide a logical abstraction at the hardware level. The container approach can also improve application deployment in IoT: for example, it can lower costs, speed up application development, simplify security, and offer an easier way to adopt new IT models such as hybrid clouds and micro-services architectures. In a containerized environment there are fewer operating systems to manage, as each virtual machine can be carved into multiple containers, all sharing the same operating system kernel.

Regarding mobility, it is easier to move workloads between private and public clouds, since with containers there is far less data to move. A virtualized application running on ten virtual machines involves eleven operating systems (the hypervisor plus each guest operating system), and each needs patching. In contrast, a containerized server running ten different applications has only one operating system to maintain.

For application patching, Docker container images are composed of layers, so an image can easily be patched by just adding a new layer; new layers do not affect the existing ones. For example, a web application image might consist of three different layers: an Apache web server [82], a Python runtime system [83], and a MariaDB database [84].
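This layering behaviour can be sketched in a few lines of Python; the `Image` class below is an illustrative model of layered images, not Docker's actual API:

```python
# Minimal model of a layered image: patching appends a new layer,
# while the existing layers are left untouched (illustrative sketch).
class Image:
    def __init__(self, layers=None):
        self.layers = list(layers or [])

    def add_layer(self, name):
        # A patch is just a new layer on top; lower layers are immutable.
        return Image(self.layers + [name])

base = Image(["apache", "python-runtime", "mariadb"])
patched = base.add_layer("security-fix")

print(base.layers)     # the original image is unchanged
print(patched.layers)  # the patched image has one extra layer on top
```

Because the lower layers are shared and immutable, distributing a patch only requires shipping the new layer.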

2.5.4 Resource Allocation in Virtual Machines and Containers

A key benefit of virtualization is the ability to consolidate multiple workloads onto a single

computer system. This consolidation yields savings in power consumption, capital expense,


and administration costs. The degree of savings depends on the ability to allocate hardware

resources such as memory, CPU cycles, I/O, and network bandwidth [85].

There are many technologies for resource allocation, depending on the virtualization tech-

nology used. For example, KVM (virtual machine) [86] and Docker (container) [81] make use of

the Linux "Control Groups" (cgroups) [87] facility for applying resource management to their

virtual machines and containers.

Cgroups are organized hierarchically, like processes, and child cgroups inherit some of the

attributes of their parents. However, there are differences between the two models.

All processes on a Linux system are child processes of a common parent: the init process,

which is executed by the kernel at boot time and starts other processes (which may in turn

start child processes of their own). Because all processes descend from a single parent, the

Linux process model is a single hierarchy, or tree. Additionally, every Linux process except init

inherits the environment and certain other attributes of its parent process.

Cgroups are similar to processes in that 1) they are hierarchical and 2) child cgroups inherit

certain attributes from their parent cgroup. The fundamental difference is that many different

hierarchies of cgroups can exist simultaneously on a system. If the Linux process model is a

single tree of processes, then the cgroup model is one or more separate, unconnected trees of

tasks (i.e. processes).

Multiple separate hierarchies of cgroups are necessary because each hierarchy is attached to one or more controllers. A controller represents a single resource, such as CPU time or memory.

For KVM and Docker, resource allocation is applied when the virtual machine or container starts, by specifying the number of CPUs, the amount of memory, etc. For example, when starting a Docker container, the "-c" parameter makes it possible to specify a relative weight, which is a numeric value used to allocate a relative CPU share. The resources are allocated dynamically, meaning they are only used when needed. It is also possible to resize (add or remove resources from) an already running virtual machine or container.
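The arithmetic behind these relative weights can be sketched as follows: under cgroups, a cgroup's share of a fully contended CPU is its weight divided by the sum of the weights of all runnable siblings (1024 is Docker's default weight; the function name below is illustrative):

```python
def cpu_fraction(weights):
    """Return each cgroup's fraction of CPU time under full contention,
    given its relative weight (e.g. the value passed to docker run -c)."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Two containers: one with the default weight, one with half of it.
shares = cpu_fraction({"web": 1024, "batch": 512})
print(shares["web"])    # 2/3 of the CPU under contention
print(shares["batch"])  # 1/3 of the CPU under contention
```

Note that these fractions only matter when the CPU is contended; an idle system lets any container use spare cycles regardless of its weight.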

2.5.5 Docker

Containerization is the process of distributing and deploying applications in a portable and

predictable way. It accomplishes this by packaging components and their dependencies into

standardized, isolated, lightweight process environments called containers. Many organizations

are now interested in designing applications and services that can be easily deployed to distributed systems, which allows systems to scale easily and survive machine and application

failures. Docker, a containerization platform developed to simplify and standardize deployment

in various environments, was largely instrumental in spurring the adoption of this style of ser-

vice design and management. A large amount of software has been created to build on this

ecosystem of distributed container management.

Docker came along in March, 2013, when the code, invented by Solomon Hykes, was re-

leased as open source. It is also the name of a company founded by Hykes that supports and

develops Docker code. Both the Docker open source container and company’s approach have a

lot of appeal, especially for cloud applications and agile development. Because many different

Docker applications can run on top of a single OS instance, this can be a more efficient way to

run applications. The company’s approach also speeds up applications development and testing,

because software developers don’t have to worry about shipping special versions of the code for

different operating systems. Because of the lightweight nature of its containers, the approach

can also improve the portability of applications. Docker and containers are an efficient and fast

way to move pieces of software around in the cloud.

Nowadays, Docker is the most common containerization software in use. While other con-

tainerizing systems exist, Docker makes container creation and management simple and inte-

grates with many open source projects.

The Docker’s main advantages are:

• Lightweight resource utilization: instead of virtualizing an entire operating system, con-

tainers isolate at the process level and use the host’s kernel.

• Portability: all of the dependencies for a containerized application are bundled inside of

the container, allowing it to run on any Docker host.

• Predictability: The host does not care about what is running inside of the container and the

container does not care about which host it is running on. The interfaces are standardized

and the interactions are predictable.

2.5.5.1 Clustering

Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single,

virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that

already communicates with a Docker daemon can use Swarm to transparently scale to multiple

hosts.


To dynamically configure and manage the services in the containers, a discovery backend needs to be configured for use with Docker Swarm. There are many backends available, some of which are:

• Etcd [88]: service discovery / globally distributed key-value store

• Consul [89]: service discovery / globally distributed key-value store

• Zookeeper [90]: service discovery / globally distributed key-value store

• Crypt [91]: project to encrypt etcd entries

• Confd [92]: watches key-value store for changes and triggers reconfiguration of services

with new values

The Docker Swarm API is compatible with the Docker remote API, and extends it with

some new endpoints. Table 1 lists the available API endpoints for container operations.

Table 1: Docker Swarm API (Container Operations)

Function                                   Description
GET /containers/json                       List containers
POST /containers/create                    Create container
GET /containers/(id or name)/json          Return low-level information on the container id
GET /containers/(id or name)/top           List processes running inside the container id
GET /containers/(id or name)/logs          Get stdout and stderr logs from the container id
GET /containers/(id or name)/changes       Inspect changes on container id's filesystem
GET /containers/(id or name)/export        Export the contents of container id
GET /containers/(id or name)/stats         Return a stream of the container's statistics
POST /containers/(id or name)/resize       Resize the TTY for the container id
POST /containers/(id or name)/start        Start the container id
POST /containers/(id or name)/stop         Stop the container id
POST /containers/(id or name)/restart      Restart the container id
POST /containers/(id or name)/kill         Kill the container id
POST /containers/(id or name)/update       Update configuration of one or more containers
POST /containers/(id or name)/rename       Rename the container id to a new_name
POST /containers/(id or name)/pause        Pause the container id
POST /containers/(id or name)/unpause      Unpause the container id
POST /containers/(id or name)/attach       Attach to the container id
POST /containers/(id or name)/wait         Block until the container id stops
DELETE /containers/(id or name)            Remove the container id from the filesystem
GET /containers/(id or name)/archive       Get a resource in the filesystem of container id
PUT /containers/(id or name)/archive       Upload to a path in the filesystem of container id
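Since Swarm serves the standard Docker remote API, each operation in Table 1 corresponds to an HTTP method and path. A minimal sketch that only builds the request tuple for a few of these endpoints (the helper is illustrative and contacts no daemon):

```python
def container_op(op, container=None):
    """Map a container operation name to an (HTTP method, path) pair,
    following a subset of the endpoints listed in Table 1."""
    ops = {
        "list":   ("GET",    "/containers/json"),
        "create": ("POST",   "/containers/create"),
        "start":  ("POST",   "/containers/{id}/start"),
        "stop":   ("POST",   "/containers/{id}/stop"),
        "logs":   ("GET",    "/containers/{id}/logs"),
        "remove": ("DELETE", "/containers/{id}"),
    }
    method, path = ops[op]
    if container is not None:
        path = path.format(id=container)
    return method, path

print(container_op("start", "web1"))  # ('POST', '/containers/web1/start')
print(container_op("list"))           # ('GET', '/containers/json')
```

Any existing Docker client that issues these requests can therefore talk to a Swarm manager unchanged.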


2.5.5.2 Networking

Docker’s native networking capabilities provide two mechanisms for hooking containers

together. The first is to expose a container’s ports and optionally map them to the host system for

external routing. In that way, it is possible to select the host port to map to or allow Docker to

randomly choose a high, unused port. This is a generic way of providing access to a container

that works well for most purposes.

The other method is to allow containers to communicate by using Docker "links". A linked container receives connection information about its counterpart through environment variables, allowing it to connect automatically if it is configured to read those variables. This allows contact between containers on the same host without knowing beforehand the port or address where the service will be located.
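With links, Docker injects the counterpart's address as environment variables named after the link alias and port (e.g. DB_PORT_5432_TCP_ADDR). A minimal sketch of how an application might read them; the helper name and the simulated environment are illustrative:

```python
import os

def linked_service(alias, port, env=None):
    """Read the address of a linked container from the environment
    variables Docker links inject (e.g. DB_PORT_5432_TCP_ADDR)."""
    env = env if env is not None else os.environ
    prefix = "%s_PORT_%d_TCP" % (alias.upper(), port)
    return env.get(prefix + "_ADDR"), env.get(prefix + "_PORT")

# Simulated environment, as Docker would populate it for a link named "db".
fake_env = {"DB_PORT_5432_TCP_ADDR": "172.17.0.5",
            "DB_PORT_5432_TCP_PORT": "5432"}
print(linked_service("db", 5432, fake_env))  # ('172.17.0.5', '5432')
```

The application stays decoupled from concrete addresses: only the link alias is agreed upon in advance.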

This basic level of networking is suitable for single-host or closely managed environments. However, more advanced features needed by the Fog/IoT environment, such as the ones listed below, are not available. This work seeks to fill this gap by providing most of these features through a Software-Defined Networking controller, together with the Neutron module of OpenStack.

• Overlay networking to simplify and unify the address space across multiple hosts.

• Virtual private networks adapted to provide secure communication between various com-

ponents.

• Assigning per-host or per-application subnetting.

2.5.5.3 Scheduling, Cluster Management, and Orchestration

Another component needed when building a clustered container environment is a scheduler.

Schedulers are responsible for starting containers on the available hosts.

Figure 5 demonstrates a simplified scheduling decision. The request is given through an

API or management tool. To decide where to place the application, the scheduler evaluates the

conditions of the request and the state of the available hosts. In this example, it pulls information

about container density from a distributed data store / discovery service (as discussed above) so

that it can place the new application on the least busy host.

The host selection process is one of the core responsibilities of the scheduler. Usually, it has

functions that automate this process, providing options to the administrator to specify certain


Figure 5: Docker app scheduling steps

constraints. Some of these constraints may be:

• Schedule the container on the same host as another given container.

• Make sure that the container is not placed on the same host as another given container.

• Place the container on a host with a matching label or metadata.

• Place the container on the least busy host.

• Run the container on every host in the cluster.
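A scheduler honoring constraints of this kind can be sketched as a simple filter-then-rank procedure (an illustrative model, not the algorithm of any particular scheduler):

```python
def schedule(app, hosts, same_host_as=None, not_with=None):
    """Pick a host for `app`: first filter by affinity/anti-affinity
    constraints, then choose the least busy candidate (fewest containers)."""
    candidates = []
    for name, running in hosts.items():
        if same_host_as and same_host_as not in running:
            continue  # affinity: must be co-located with a given container
        if not_with and not_with in running:
            continue  # anti-affinity: must avoid a given container
        candidates.append(name)
    # Least busy host wins; ties are broken by name for determinism.
    return min(candidates, key=lambda n: (len(hosts[n]), n))

hosts = {"A": ["app1", "app2"], "B": ["app3"], "C": ["app4", "app5"]}
print(schedule("app6", hosts))                   # B: the least busy host
print(schedule("app6", hosts, not_with="app3"))  # A: B is excluded
```

Real schedulers rank candidates with richer signals (CPU, memory, labels), but the filter-then-rank structure is the same.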

The scheduler's responsibilities include loading containers onto the relevant hosts and starting, stopping, and managing the life cycle of the process.

Because the scheduler must interact with each host in the group, cluster management func-

tions are also typically included. These allow the scheduler to get information about the mem-

bers and perform administration tasks. Orchestration in this context generally refers to the com-

bination of container scheduling and managing hosts.

Some popular projects that function as schedulers and fleet management tools are:

• Fleet [93]: scheduler and cluster management tool.


• Marathon [94]: scheduler and service management tool.

• Swarm [95]: scheduler and service management tool.

• Mesos [96]: host abstraction service that consolidates host resources for the scheduler.

• Kubernetes [97]: advanced scheduler capable of managing container groups.

• Compose [98]: container orchestration tool for creating container groups.

In this work, Swarm is used as the scheduler/container-orchestration solution; however, it could be replaced by any other solution without much effort.

2.6 Software-Defined Networking

Software Defined Networking (SDN) [99] has become one of the most popular subjects in

the ICT domain. SDN, often referred to as a "radical new idea in networking", promises to dra-

matically simplify network management and enable innovation through network programma-

bility [100].

The Open Networking Foundation (ONF) [101] is a non-profit consortium dedicated to the development, standardization, and commercialization of SDN. The ONF has provided the most explicit and well-received definition of SDN, as follows:

"Software-Defined Networking is an emerging network architecture where network control

is decoupled from forwarding and is directly programmable."

Per this definition, SDN is defined by two characteristics, namely decoupling of control

and data planes, and programmability on the control plane. In fact, SDN offers simple pro-

grammable network devices rather than making networking devices more complex as in the

case of active networking. Moreover, SDN proposes separation of control and data planes in

the network architectural design. With this design, network control can be done separately on

the control plane without affecting data flows. As such, network intelligence can be taken out

of switching devices and placed on controllers. At the same time, switching devices can now

be externally controlled by software without on-board intelligence. The decoupling of control

plane from data plane offers not only a simpler programmable environment but also a greater

freedom for external software to define the behavior of a network.


2.6.1 API Standardization

As depicted in Figure 6, SDN consists of a centralized control plane with a southbound

API for communication with the hardware infrastructure and a northbound API for communi-

cation with the network applications. The control plane can be further subdivided into a hyper-

visor layer and a control system layer. A number of controllers are already available. Floodlight

[102] is one example. OpenDaylight [103] is a multi-company effort to develop an open source

controller.

The main southbound API is OpenFlow [104], which is being standardized by the Open

Networking Foundation. A number of proprietary southbound APIs also exist, such as onePK

[105] from Cisco, which is especially suitable for legacy equipment from respective vendors.

Figure 6: Software-Defined Networking APIs

The northbound APIs, in contrast, have not been standardized yet; each controller may offer a different programming interface. The development of network applications for SDN will remain limited until this API is standardized. There is also a need for an east-west API that allows different controllers, in neighboring domains or within the same domain, to communicate with each other [106].


2.6.2 Flow-Based Control

Over the last years, disk and memory sizes have grown exponentially, following Moore's law, and so have file sizes. The packet size, however, has remained the same (approximately 1518-byte Ethernet frames). Therefore, much of today's traffic consists of long sequences of packets rather than single packets; for example, a large file may require the transmission of hundreds of packets.

Streaming media generally consists of a stream of packets exchanged over a long period of time. In such cases, if a control decision is made for the first packet of a flow, it can be reused

for all subsequent packets. Thus, flow-based control significantly reduces the traffic between the

controller and the forwarding element. The control information is requested by the forwarding

element when the first packet of a flow is received and is used for all subsequent packets of

the flow. A flow can be defined by any mask on the packet headers and the input port from

which the packet was received. The control table entry specifies how to handle the packets with

the matching header. It also contains instructions about which statistics to collect about the

matching flows.
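The reuse of a control decision across a flow can be sketched as a tiny flow table; only the first packet of each flow reaches the controller (an illustrative model, not the OpenFlow wire protocol):

```python
class FlowSwitch:
    """Forwarding element that consults the controller only on a
    flow-table miss; later packets of the flow reuse the cached entry."""
    def __init__(self, controller):
        self.controller = controller   # callable: flow match -> action
        self.table = {}                # flow match -> cached action
        self.controller_hits = 0       # how often the controller was asked

    def handle(self, header):
        if header not in self.table:
            self.controller_hits += 1
            self.table[header] = self.controller(header)
        return self.table[header]

# Controller policy: TCP port 80 flows go out port 2, everything else port 1.
sw = FlowSwitch(lambda h: "out:2" if h[1] == 80 else "out:1")
for _ in range(100):                   # 100 packets of the same flow
    sw.handle(("10.0.0.1", 80))
print(sw.controller_hits)              # 1: only the first packet went up
```

Here the flow match is a simplified (address, port) tuple; in OpenFlow it would be a mask over the packet headers and the input port, and the entry would also carry statistics counters.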

2.6.3 Benefits

SDN, with its inherent decoupling of the control plane from the data plane, offers greater control of a network through programming. This combination brings the potential benefits of enhanced configuration, improved performance, and encouraged innovation in network architecture and operations. Moreover, with the ability to acquire instantaneous network status, SDN

permits a real-time centralized control of a network based on both instantaneous network status

and user defined policies. This further leads to benefits in optimizing network configurations

and improving network performance. The potential benefit of SDN is further evidenced by the

fact that SDN offers a convenient platform for experimentation of new techniques and encour-

ages new network designs, attributed to its network programmability and the ability to define

isolated virtual networks via the control plane. Those benefits are better described as follows.

In network management, configuration is one of the most important functions. Specifically, when new equipment is added to an existing network, proper configuration is required to achieve coherent operation of the network as a whole. However, owing to the heterogeneity of network device manufacturers and configuration interfaces, current network configuration typically involves a certain level of manual processing. This manual configuration procedure is tedious and error-prone. At the same time, significant effort is also required to troubleshoot a


network with configuration errors. It is generally accepted that, with the current network design, automatic and dynamic reconfiguration of a network remains a major challenge. SDN helps to remedy this situation. In SDN, the unification of the control plane over all kinds of network devices, including switches, routers, Network Address Translators (NATs), firewalls, and load balancers, makes it possible to configure network devices from a single point, automatically, via controlling software. As such, an entire network can be programmatically configured and dynamically optimized based on network status.
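The single-point-of-configuration idea can be illustrated with a small, hypothetical sketch: one high-level policy translated by per-device adapters into each device's native form, so no box is configured by hand (the rule syntaxes below are invented for the example, not real vendor CLIs).

```python
# Hypothetical sketch of single-point configuration: one policy pushed
# to heterogeneous devices through a uniform controller interface,
# instead of manual, per-vendor configuration. Rule formats are made up.

policy = {"block": {"dst_port": 23}}   # e.g. drop Telnet network-wide

def to_switch_rule(p):
    return f"flow drop tcp dport {p['block']['dst_port']}"

def to_firewall_rule(p):
    return f"deny tcp any any eq {p['block']['dst_port']}"

ADAPTERS = {"switch": to_switch_rule, "firewall": to_firewall_rule}

def configure_all(devices, policy):
    # The controller translates the single high-level policy into each
    # device's native form -- the operator touches only one point.
    return {name: ADAPTERS[kind](policy) for name, kind in devices.items()}

rendered = configure_all({"sw1": "switch", "fw1": "firewall"}, policy)
print(rendered["sw1"])   # flow drop tcp dport 23
```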

In network operations, one of the key objectives is to maximize utilization of the invested network infrastructure. However, with various technologies coexisting in a single network, optimizing the performance of the network as a whole is not simple. Current approaches often focus on optimizing the performance of a subset of the network or the quality of user experience for some network services. Obviously, these approaches, based on local information without cross-layer consideration, can lead to suboptimal performance, if not outright conflicts with network operations. The introduction of SDN offers an opportunity to improve network performance as a whole. Specifically, SDN allows for centralized control with a global network view and feedback control with information exchanged between different layers of the network architecture. As such, many challenging performance optimization problems become manageable with properly designed centralized algorithms. It follows that new solutions to classical problems, such as data traffic scheduling [107], end-to-end congestion control [108], load-balanced packet routing [109], energy-efficient operation [110], and Quality of Service (QoS) support [111], can be developed on demand and easily deployed to verify their effectiveness.
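As a concrete, minimal example of what a global view enables, the sketch below (with a made-up topology and utilization figures) picks the least-loaded path using Dijkstra's algorithm with link utilization as the cost, something a purely local, hop-count-based scheme cannot do.

```python
import heapq

# Sketch of one centralized-control benefit: with a global view of link
# utilization, a controller can route around congestion. Topology and
# load values are illustrative only.

def least_loaded_path(links, src, dst):
    # links: {node: [(neighbor, utilization in 0..1), ...]}
    # Path cost = sum of link utilizations (lower = less loaded).
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, util in links.get(u, []):
            nd = d + util
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

links = {
    "A": [("B", 0.9), ("C", 0.1)],   # A-B is congested
    "B": [("D", 0.1)],
    "C": [("D", 0.2)],
}
print(least_loaded_path(links, "A", "D"))   # ['A', 'C', 'D']
```

With hop counts alone, A-B-D and A-C-D tie; the controller's global utilization view breaks the tie in favor of the uncongested path.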

SDN also encourages innovation by providing a programmable network platform on which to implement, experiment with, and deploy new ideas, new applications, and new revenue-earning services conveniently and flexibly. The high configurability of SDN offers clear separation among virtual networks, permitting experimentation in a real environment. Progressive deployment of new ideas can be performed through a seamless transition from an experimental phase to an operational phase.

2.6.4 Software-Defined Networking and Internet of Things

The benefits of employing SDN techniques in IoT environments are becoming recognized in multiple domains beyond smart transportation, which is the one most discussed by researchers and industry practitioners. For example, [112] developed a robust control and communication platform using SDN in a smart grid setting. Similar efforts have been explored in the smart home domain, where IoT devices are extremely heterogeneous, ranging from traditional smartphones and tablets to home equipment and appliances with enhanced capabilities. There are also efforts toward a home network slicing mechanism [113] that enables multiple service providers to share a common infrastructure, and that supports verifying policies and business models for cost sharing in the smart home environment.

At a lower, device level, [114] employs SDN techniques to support policies for managing WSNs. In summary, while there is significant interest in managing IoT environments, many of the efforts in this direction are isolated to specific domains or to a specific system layer. The proposed work employs a layered SDN methodology to bridge the semantic gap between the resources provided by the cloud computing infrastructure and the low-level network/device specifications at the network edge.


Chapter 3

Related Works

In this chapter, the most relevant related works in the field of research of this dissertation are surveyed. The main objective of this study is to provide a better understanding of the context in which the proposal of this dissertation is inserted, as well as of its corresponding contributions.

3.1 Container-based Softwarized Control Plane

Many solutions already use Linux containers for software deployment on affordable pocket-sized System on Chip (SoC) computers [115]. For example, resin.io [116], a platform that encompasses client, server, and device software, offers an infrastructure for building software in the cloud and deploying it on remote devices automatically through Linux containers [80] and Docker [81]. Although not focused on offering an IoT platform, this solution provides ways of deploying and orchestrating containers at the network edge.

AWS IoT [117] is the Amazon solution for the Internet of Things. It is a managed cloud platform that lets connected devices (such as sensors, actuators, embedded devices, or smart appliances) easily and securely interact with cloud applications and other devices; this is achieved through the MQTT [118] message broker. The platform facilitates the required connections between people and things. Real-time data collection, analysis and processing of position information, data visualization, and transmission of messages using the Amazon Simple Notification Service (SNS) [119] module are the main features of AWS IoT. It helps in transferring data from embedded devices such as the Arduino [120] and Raspberry Pi [121] to the cloud.
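The MQTT broker at the heart of such platforms routes device data by topic, and subscribers select data with topic filters using the standard MQTT wildcards '+' (exactly one level) and '#' (all remaining levels, valid only as the last level). A minimal, illustrative matcher for that semantics (topic names are invented):

```python
# Minimal matcher for standard MQTT topic-filter wildcards:
#   '+' matches exactly one topic level, '#' matches all remaining levels.
# Topic names below are illustrative, not from any real deployment.

def topic_matches(filter_, topic):
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                  # '#' matches this level and all below
            return True
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("sensors/+/temperature", "sensors/device42/temperature"))  # True
print(topic_matches("sensors/#", "sensors/device42/humidity"))                 # True
print(topic_matches("sensors/+/temperature", "sensors/device42/humidity"))     # False
```

This topic hierarchy is what lets one cloud application subscribe to, say, the temperature of every device with a single filter.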

Cisco IOx [122] is Cisco's first software-only version of IOS (the Cisco network OS), wrapped in a Linux distribution. IOx acts as an application framework that enables computing and open connectivity at the edge of the network. It is essentially an intermediate layer between the hub and the edge that addresses the network bandwidth challenge. IOx enables smart applications at the edge of the network that can process information collected from smart, sensor-enabled devices. The edge apps process raw data and trigger responsive actions that are either machine-generated or require human intervention. If the system is running smoothly and no unexpected signals are sensed, the captured data does not need to be transmitted to the cloud over expensive network bandwidth. However, when an unexpected event is sensed, meaning something is out of line, data is transmitted to the hub in order to take real-time action. Critical updates can be transmitted over 3G networks, while routine updates and non-critical data can be shared when a WiFi network becomes available. Such a distributed network aims to resolve data latency issues and improve system reliability, resulting in faster response times.

CloudOne [123] provides a platform for Internet of Things services and solutions. It also offers a set of tools that enables stakeholder collaboration, requirements traceability, requirements reuse, and configuration management; a DevOps toolset for system upgrades, regulation updates, and system integrations; an analytics platform, which allows users to connect and monetize their data; Hybrid Cloud, a logical environment that serves as an organization's resource pool; and Virtual Private Cloud, which enables users to grant access to elements of their environment to contractors, partners, etc.

Thing+ Cloud [124] is an IoT platform that can be deployed on different cloud computing infrastructures, such as Amazon Web Services (AWS) [44] and Azure [125]. It also permits creating virtual sensors from external data sources that can be used together with physical sensors, along with a rule engine for Trigger/Condition/Action rules based on the data generated by the sensors. Thing+ Embedded is the middleware used to connect sensors and actuators to Thing+ and needs to be installed on the devices. Its main features include a web user interface for secure gateway configuration and an always-connected approach with offline DB sync and modem connection management.

FIWARE [126] provides an IoT Stack that allows users to connect devices and receive data, integrating all device protocols and connectivity methods and understanding and interpreting the relevant information. It isolates the data processing and application service layers from the device and network complexity, in terms of access, security, and network protocols. The features provided by the IoT Stack include simple sensor data integration; device-independent APIs for quick app development and lock-in prevention; and a modular, scalable, and highly available architecture.

For the sake of completeness, it is important to state that there are many other solutions already on the market that are not described here due to their similar characteristics. For example, IBM Watson IoT [127] uses the same approach as AWS IoT: it offers a platform for connecting the cloud and IoT devices through MQTT, and there is no significant difference between their approaches at the architectural level. The solutions presented in this work were selected for their intrinsic characteristics, to allow a better comparison with each other and with the proposed work.

3.2 Network-based Softwarized Control Plane

As discussed earlier, Software Defined Networking (SDN) simplifies network management by separating the control plane from the data plane. In network communication, the messages created by users, which constitute the data plane, need to be transferred to an appropriate destination. The network management component is responsible for finding the best path on which to send each message, by using control messages, and for maintaining information related to the network, such as traffic and congestion. In SDN, the data plane and control plane are separated: the data plane uses the forwarding tables prepared by the control plane in the controller to forward the packets of each flow [128].

The control plane in SDN is centralized, while the data plane remains distributed. Maintaining a centralized control plane speeds up the decision-making process, since policy controls can be changed dynamically without the need to go through a long path to make a decision. With a programmable control plane, the control program can easily be changed when needed. Also, the network can be divided into several virtual networks located on the same shared physical hardware, and every virtual network can be configured with a policy different from that of the other virtual networks [128].

A number of research works and surveys have been published on SDN. Many SDN aspects, and how this paradigm can support Software Defined Environments (SDE), are discussed in [129]. In addition, that work illustrates IBM's vision for consolidating the SDN idea by integrating its IBM SDN Virtual Environments (SDN-VE) product with Neutron, the network module of the OpenStack platform [130], to extend the SDN-VE features.

In other published works [131] [132], the authors covered most SDN/OpenFlow aspects, ranging from common concepts to solution deployment. The authors emphasize the motivations behind SDN by showing its importance in supporting organizations, facilitating control and management operations, and enhancing performance at lower cost. Besides that, they also showed how performance can be improved by distributing workload across virtualized hardware components. Furthermore, they discussed how cloud computing providers can exploit SDN benefits to overcome issues related to the heterogeneity of the switch/router infrastructure. Different vendors have different switch characteristics; so, instead of managing and customizing each switch separately, SDN provides the ability to manage all switches from a single enforcement point, the SDN control layer. It also gives cloud users the ability to use cloud resources efficiently by creating slices/slivers and letting data flow transparently.

There are other factors relating SDN to network performance, such as Quality of Service (QoS). Many solutions have been proposed to handle QoS issues in SDN/OpenFlow systems, and OpenQoS [133] [134] is considered a typical one. Its authors propose an optimization framework for controller design, with two different problem formulations, in order to provide QoS support in OpenFlow networks. The solutions to these problems are based on new routing paths for QoS flows, which differ from shortest-path best-effort flows. Regarding Quality of Experience (QoE), in [135] the authors make use of SDN to develop the architecture of a system that allows service quality negotiation between the user and the ISP, and apply it to use cases related to streaming, browsing, and downloading. Another work [136] proposes a device-to-device-communication-based algorithm for enhancing QoE in both the downlink and uplink streams of software-defined multi-tier networks.

In the IoT scenario, deployments are often derived from the integration of independently deployed IoT sub-networks, characterized by very heterogeneous devices and connectivity capabilities. Based on this scenario, the work in [137] compares SDN techniques in traditional Data Center Networks (DCNs) and in IoT environments. It also presents a vision of a layered SDN controller in IoT settings and proposes a novel multiple-QoS-constraints flow scheduling algorithm that can be applied in the SDN controller.

The integration of SDN, virtualization, and fog computing has many benefits [138]. Besides providing the ability to manage the network from a single enforcement point, the centralized control layer in software-defined networking opens the network to innovations in scheduling and traffic management, which is essential for offering services through latency-sensitive applications.

3.3 Key Requirements

As the proposed solution focuses on IaaS, the following characteristics were considered key requirements that, we claim, must be met for the efficient design, development, and usage of an IoT platform at the IaaS level.


• Open: The platform is based on Open Source technologies and licensed under OSI-approved licenses [139]. This allows the community to use the platform, develop new functionalities, and improve it overall, free of charge. This is also important for enabling device/sensor companies to contribute to the platform, as they can add functionalities for new devices/sensors under development.

• Multi-platform: The platform should support multiple architectures and operating systems; for example, it should be possible to work with a Raspberry Pi board running Debian as well as a desktop PC running Ubuntu. Given the heterogeneity of the IoT environment, this can be considered a fundamental feature.

• Edge Multitenancy: Allows several customers (tenants) to share the edge infrastructure without being aware of it and without compromising the privacy and security of each customer's data (through isolation). For example, company A owns a device X connected to some sensors, and company B is interested in the sensors on device X. Company A can share its device while still keeping its processes/data isolated from company B.

• Fog-enabled: Permits an edge device to act as an IoT gateway and, at the same time, process data from the sensors according to the business logic. Besides that, the platform also supports deploying applications directly on edge devices.

• Independent of cloud computing infrastructure: The platform can be easily deployed in

different Cloud Computing solutions such as OpenStack, Amazon Web Service, Azure,

etc.

• Device management: The platform provides means for device registration, configuration

and software updates.

• Sensor management: The platform provides means for sensor registration and dynamic

configuration.

• Service Model: Edge computing can be offered in the same ways as cloud computing. If an IoT application has specific constraints (latency sensitivity, for example), it can be hosted on an edge device and users will consume it from the edge; hence it can be classified as SaaS. In the same way, PaaS and IaaS can be offered directly on the edge devices.

• Deployment Model: As with the service model, IoT platforms can be deployed in the same ways as cloud computing, so the same models apply here. However, for simplicity, only the Public and Private models will be considered.

• Software-based control plane: To dynamically allocate resources in an autonomous way, a centralized, softwarized mechanism must be provided. It paves the way for applying more robust resource allocation mechanisms on both the network and container planes.

Table 2 compares the solutions introduced in this work according to the characteristics described above.

Table 2: IoT Platforms Comparison

Features AWS IoT Cisco IOx CloudOne Thing+ C. resin.io FIWARE SmartEdge

Open ✓ ✓

Multi-platf. ✓ ✓ ✓

Multitenancy ✓

Fog-enabled ✓ ✓

C.C. Indep. ✓

Device mgm. ✓ ✓ ✓ ✓ ✓ ✓ ✓

Sensor mgm. ✓ ✓ ✓ ✓ ✓

Soft. C. Plane ✓

Service mod. PaaS IaaS PaaS PaaS IaaS IaaS IaaS

Deploy mod. Public Priv/Pub Private Private Private Priv/Pub Priv/Pub

As shown in Table 2, several works concentrate their efforts on the device and sensor management side, which is indeed important. However, to provide an efficient and reliable service, it is also important to focus on how the data generated by the sensors will be processed and served. In this regard, most of the solutions push all the data to the cloud, where it is then processed. As stated previously, with the increase in the number of IoT consumers and with every request required to be processed within the cloud DCs, it is likely that cloud DCNs will encounter serious difficulty in serving IoT applications in real time.

Most of the works adopt the same subscription model implemented in cloud computing, with the exception of FIWARE, which is Open Source. FIWARE is based on the OpenStack [130] platform and can be deployed by anyone on their own hardware; it is also open to contributions from the community.


To the best of our knowledge, no related work supports Edge Multitenancy. This feature is an important factor in reducing OpEx, as sharing infrastructure means sharing the cost of electricity, system administrators, hardware engineers, network engineers, facilities management, fire protection, insurance, and local and state taxes on facilities. Besides that, sharing the infrastructure reduces the network traffic generated by the edge devices, since the same device can serve multiple users.

SmartEdge does not support independence of the cloud computing infrastructure (it currently supports OpenStack) or sensor management. However, as an open source solution, it can easily be ported to different cloud computing solutions, and features for sensor management are expected to be added by the sensor companies.

Regarding the ability to allocate resources on demand, to the best of our knowledge SmartEdge is the only solution providing it. SmartEdge follows a coupled approach: through its API it is possible to specify the amount of resources to be provided to a given application on both the container and network planes. This is achieved by using a container orchestration solution together with the SDN controller, which makes this type of control possible programmatically through a single API. It is important to state that our work does not focus on solutions for programmatically allocating resources; rather, it provides the means for applying such techniques in a centralized controller.
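To make the coupled approach concrete, the sketch below builds a single request describing resources on both planes. The field names and schema are invented for illustration; the dissertation does not prescribe this exact API format.

```python
import json

# Hypothetical illustration of the coupled approach: ONE request carries
# both the container-plane resources (for the orchestrator) and the
# network-plane resources (for the SDN controller). Schema is assumed.

def deployment_request(app, image, cpus, mem_mb, bw_mbps):
    return {
        "application": app,
        "container_plane": {            # handled by the container orchestrator
            "image": image,
            "cpus": cpus,
            "memory_mb": mem_mb,
        },
        "network_plane": {              # enforced via the SDN controller
            "guaranteed_bw_mbps": bw_mbps,
        },
    }

req = deployment_request("smart-parking", "acme/parking:1.0", 0.5, 256, 10)
print(json.dumps(req, indent=2))
```

The point of the single payload is that compute and connectivity guarantees are granted atomically, rather than through two uncoordinated control systems.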

Thus, the contributions of this work are clear: it is designed with a set of features important for IoT, mainly focused on latency-sensitive IoT applications; it makes it possible to apply mechanisms to dynamically control resource allocation on both the network and container planes; and it provides means for the future development of new features and the improvement of current ones.


Chapter 4

Work Proposal

In this chapter the proposed platform is introduced, including an overview and a detailed de-

scription of the architecture. Also, preliminary results are presented regarding the performance

of different virtualization techniques applied in a typical network edge device.

4.1 SmartEdge Key Design Principles

SmartEdge is a set of software tools for offering and managing computing resources at the network edge, in conjunction with a cloud infrastructure. Backed by the Fog computing [60] concept, the platform seeks to provide a new service model, named Edge Infrastructure as a Service, to serve the demands of real-time, latency-sensitive applications in the context of IoT.

4.1.1 Overview

SmartEdge is an IoT platform that enables users to deploy applications, in the form of Docker containers, at the network edge. It offers on-demand access to resources provided by devices at the network edge, to serve specific IoT applications that demand real-time, low-latency response (e.g., live streaming, smart traffic monitoring, smart parking, etc.). For applications that are not latency-sensitive, these devices act as a gateway, sending the data generated by the sensors connected to the edge device to the cloud, where it can be stored and/or processed in a broader scope.

One important reason Docker [81] was adopted for application deployment is that it delivers on the promise of "develop once, run anywhere". Docker offers a simple way to package an application and its runtime dependencies into a single container. It also provides a runtime abstraction that enables the container to run across different versions of the Linux kernel. Using Docker, a developer can build a containerized application on a workstation and then easily deploy the container to any Docker-enabled server. There is no need to retest or retune the container for the server environment, whether in the cloud or on premises. Such features are a must when dealing with the heterogeneous environment provided by IoT.
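The packaging step described above amounts to writing a short Dockerfile. The fragment below is a hypothetical example (base image, file names, and the sensor-reader application are all illustrative, not taken from this work):

```dockerfile
# Hypothetical minimal image for an edge sensor-reader application.
FROM python:3-slim
WORKDIR /app
# Install the application's runtime dependencies inside the image,
# so the container carries everything it needs to run anywhere.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY reader.py .
CMD ["python", "reader.py"]
```

Once built, the resulting image runs unchanged on any Docker-enabled host, whether a cloud VM or an edge SoC board with a matching architecture.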

Figure 7: SmartEdge Stack deployed on an OpenStack Cloud Infrastructure

The SmartEdge platform, depicted in Figure 7, is deployed on an OpenStack cloud infrastructure, preferably with the Orchestration service (Heat), where it can be deployed with a single push of a button by using the developed Heat Orchestration Template (HOT). This template creates five virtual machines (swarm-master, consul, database, registry, and the SmartEdgeController (se-dashboard and se-api)) responsible for the services infrastructure; it also configures the service groups and the volumes (using OpenStack Cinder) for storing data safely, and creates the network (using OpenStack Neutron) to which the containers are attached and through which they can be accessed from the cloud virtual machines.
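A HOT template of this kind declares the servers, networks, and volumes as Heat resources. The fragment below is written in the spirit of the description above; the resource names, image, and flavor are assumptions, not the template actually developed in this work:

```yaml
# Illustrative HOT fragment (names and properties are assumptions).
heat_template_version: 2016-04-08

resources:
  se_network:
    type: OS::Neutron::Net        # network shared by VMs and edge containers
    properties:
      name: smartedge-net

  swarm_master:
    type: OS::Nova::Server        # one of the five service VMs
    properties:
      name: swarm-master
      image: ubuntu-16.04
      flavor: m1.small
      networks:
        - network: { get_resource: se_network }

  se_volume:
    type: OS::Cinder::Volume      # persistent storage for service data
    properties:
      size: 10
```

Handing such a template to Heat stands up the whole service infrastructure in one operation, which is what makes the "single push of a button" deployment possible.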

With the platform deployed, the user has access to the SmartEdge API, which is based on the Docker Remote API and is listed in Table 1. It provides functions for managing edge devices and containers. The SmartEdgeController also embeds a web interface where an authenticated user can access all the provided API functions through a graphical interface. Thus, the user can easily add a new device, deploy applications (in Docker container format) to that edge device, and access/manage the device/application using either the graphical interface or the API, which is compatible with the Docker Remote API. It is also important to state that all containers deployed at the edge device can be attached to a network (the one created by the HOT) that is accessible from within the cloud, to ensure their function as a gateway when data needs to be sent to the cloud.

Docker does not provide any mechanism for multitenancy. To achieve it, a Docker daemon is created on the device for each entity (identified by its SmartEdge ID) interested in it; this daemon listens on a unique port and is responsible for all interactions between the entity and its containers running on the device. In this way, the containers of each entity are isolated, with their own bridge, runtime, and variables.
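The per-tenant daemon scheme can be sketched as follows. The port scheme, bridge names, and directory layout below are assumptions made for the example; only the dockerd flags themselves (-H, --bridge, --data-root, --exec-root, --pidfile) are standard:

```python
# Sketch of per-tenant isolation: each SmartEdge ID gets its own Docker
# daemon on a unique port, with its own bridge and runtime directories.
# Port numbering and paths are hypothetical.

BASE_PORT = 2376

def tenant_daemon_cmd(smartedge_id, index):
    port = BASE_PORT + index
    return (
        f"dockerd -H tcp://0.0.0.0:{port} "        # per-tenant API endpoint
        f"--bridge se-br-{smartedge_id} "          # per-tenant network bridge
        f"--data-root /var/lib/smartedge/{smartedge_id} "
        f"--exec-root /var/run/smartedge/{smartedge_id} "
        f"--pidfile /var/run/smartedge/{smartedge_id}.pid"
    )

# Tenant A and tenant B end up with fully separate daemons and ports:
print(tenant_daemon_cmd("tenantA", 1))
print(tenant_daemon_cmd("tenantB", 2))
```

Each tenant then points its Docker client (or the SmartEdge API) at its own port, so one tenant can never see or manage another tenant's containers on the shared device.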

It is expected that the complementary functions provided by the SmartEdge API offer cloud end-users a new service delivery model that serves the requirements of real-time, low-latency IoT applications, which can run at the network edge, while also supporting complex analysis and long-term storage of data in the core cloud computing framework. The platform also seeks to offer an automated, on-demand infrastructure from the cloud to the edge, where developers are free to use their own technologies, protocols, and approaches.

Regarding the edge devices, the diversity of IoT applications results in a variety of use cases that need to be supported on different hardware and software configurations. Furthermore, many applications require the management of large-scale deployments, which in turn requires automated deployment and configuration infrastructure up to the network edge. Taking these considerations into account, the following requirements were considered for the IoT gateways of the proposed platform:

• Modularity: for supporting heterogeneous IoT applications with varying and changing requirements on the gateway functionality, where functional modules can be installed, upgraded, and retired over time;

• Automation: for provisioning of large-scale IoT deployments, including initial deploy-

ment and configuration as well as future updates and maintenance;

• Vendor and technology independence: for avoiding vendor lock-in and simultaneous de-

ployment of software from different vendors and/or implemented using different tech-

nologies;


• Dependencies management: for portable deployment of the functional modules with all

of their dependencies, independent of the dependency management support of the vendor

and technology they use;

• Security: for accommodating varying security requirements of applications and end-users

to the deployment infrastructure;

• Low performance overhead: for efficient utilization of the resources of the network and of resource-constrained devices;

• Usability: for achieving wider adoption of the platform and toolchain by a larger developer community.

Considering the diversity of IoT applications and use cases, together with the growing capabilities of affordable SoC computers, it is tempting to adopt more resource-demanding deployment approaches. Accordingly, SmartEdge was developed based on technologies already in use by the industry, offering more functionality and higher usability at the cost of increased performance overhead; this provides more versatility and lowers the entry barrier for developers. We believe that, with such characteristics, the proposed platform has the potential to become the foundation for application deployment approaches in IoT.

4.1.2 Architecture

To better understand how the system works, Figure 8 presents a simple modular architecture. This architecture is completely decoupled from the technologies used in our prototype implementation and clearly shows the function of each component, its provided and consumed APIs, and the communication between modules.

As can be seen, the SmartEdge controller functions as an interface between the centralized Cloud Infrastructure and the distributed Edge Infrastructure; this is achieved by using the Container Orchestrator and the SDN controller to provide resources and interconnect the network edge and the centralized Data Center. Besides that, there is a need to orchestrate the edge devices, as they tend to be numerous and of different architectures. By orchestrating them, it is possible to treat them as a single entity, enabling management to be done in a scalable way even with a diversity of devices spread over the network.

SmartEdge's container module seeks to provide an API to orchestrate and deploy applications using resources located at the edge of the network. There are already many technologies that can be used as middleware to provide functionalities for this module, such as Docker


Figure 8: SmartEdge Modular Architecture

Swarm [95], Kubernates [97], and Mesos [96]. Generally, these middleware also provides mech-

anisms for container scheduling and cluster management.

The network module provides network functions for connecting the resources located at the edge of the network with the centralized Data Center. This is achieved by using an SDN controller, the same controller used in the Cloud Infrastructure network. The SDN controller is the application that acts as the strategic control point in the network, managing flow control to the switches/routers (the virtual switch on the edge node) "below" (via southbound APIs) and the applications and business logic "above" (via northbound APIs) to deploy intelligent networks. Also, extensions can be inserted into the SDN controller to enhance its functionality and support more advanced capabilities, such as running analytics algorithms and orchestrating new rules throughout the network. However, such advanced functionalities are not the focus of this work.

The Cloud Infrastructure represents any cloud provider solution, such as OpenStack [130], CloudStack [140], or OpenNebula [141], as long as its network function is provided through an SDN controller. This is required to enable seamless communication between the resources located at the network edge (containers) and the resources provided by the cloud (virtual machines). It is achieved by using the cloud API to create a network; since this network is controlled by an SDN controller, the same controller used by the virtual switches on the edge nodes, it is possible to create a port on the cloud network and attach it to a container deployed on an Edge node. In that way, the virtual machine and the container will be on the same network, provided by the cloud and controlled by the SDN controller.
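The port-creation step described above can be sketched against the OpenStack Networking (Neutron) API, whose port resource is created via POST /v2.0/ports. The helper below only builds the request body; the identifiers and the device_owner tag are illustrative assumptions, not the platform's actual code.

```python
import json

# Sketch of the port-creation step: the platform asks the cloud's networking
# API (Neutron) for a new port on the cloud network, and the returned port is
# later bound to the container's interface on the edge node's virtual switch.
# The network/container IDs and the device_owner tag are placeholders.
def build_port_request(network_id: str, container_id: str) -> dict:
    """Build the JSON body for POST /v2.0/ports (OpenStack Networking API)."""
    return {
        "port": {
            "network_id": network_id,             # cloud network created via the cloud API
            "device_id": container_id,            # the edge container that will use the port
            "device_owner": "compute:container",  # assumed tag for container-attached ports
            "admin_state_up": True,
        }
    }

body = build_port_request("net-1234", "container-abcd")
print(json.dumps(body, indent=2))
```

The cloud then allocates an IP address on that network for the port, which is why the container ends up directly reachable from the virtual machines.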


The Edge Node is any compatible device on which applications can be deployed; for example, it could be a set-top box, switch, router, or PC. The only requirement is that the device be supported by the SmartEdge controller, so that it can be orchestrated. The edge node orchestration can be provided by solutions such as Puppet [142], Ansible [143], or Chef [144]. Orchestration is required in order to dynamically manage these devices.

All the functionalities cited above, such as container deployment, network creation, network interface attachment, and edge node management, are provided via the SmartEdge API, which will be described in the following sections.

4.2 Design and Implementation

Based on the architecture described above, the design and implementation of the SmartEdge platform follows the microservices architectural style for its deployment. The microservices architecture is a cloud-native architecture that aims to realize software systems as a package of small services. In the microservices architecture, each service is independently deployable on a potentially different platform and technological stack. It can run in its own process while communicating through lightweight mechanisms such as RESTful or RPC-based APIs. In this setting, each service is a business capability that can utilize various programming languages and data stores and is developed by a small team [145].

The microservices architecture provides many benefits, in particular adaptability to technological changes, which avoids technology lock-in, and, more importantly, reduced time-to-market and better structuring of development teams around services. The microservices architectural style is a first realization of Service Oriented Architecture (SOA) that emerged after the introduction of DevOps, and it is becoming the standard for building continuously deployed systems [145].

The designed SmartEdge stack, shown in Figure 7, is composed of a set of cloud instances, each independently running a service; the processes communicate with each other over the network in order to fulfill client requests. This stack can be split into four different layers: the cloud layer, the network layer, the edge layer, and the container layer. Each layer is described as follows.

The cloud layer is composed of DCs offering a Cloud Computing infrastructure, where the platform will be deployed. This layer is also responsible for offering permanent storage of huge, voluminous data chunks within its powerful DCs. The DCs are equipped with massive computational ability. However, unlike the conventional cloud architecture, the core cloud DCs are not bombarded with every single query: fog computing enables the cloud layer to be accessed and utilized in an efficient and controlled manner. This represents the Cloud Infrastructure on


the Modular Architecture.

The Cloud Infrastructure represents any cloud provider solution, such as OpenStack [130], CloudStack [140], or OpenNebula [141], as long as it provides the capacity to interwork with an SDN controller, enabling network function programmability. This is required for seamless communication between the resources located at the network edge (containers) and the resources provided by the cloud (virtual machines). It is achieved by using the cloud API to create a network; since this network is controlled by an SDN controller, the same controller used by the virtual switches on the edge nodes, it is possible to create a port on the cloud network and attach it to a container deployed on an Edge node. In that way, the virtual machine and the container will be on the same network, provided by the cloud and controlled by the SDN controller.

Using public cloud solutions such as Amazon Web Services [44] may be challenging, due to the inability to configure the network infrastructure. However, this can be overcome by a solution such as a proxy providing an SDN infrastructure that offers network functions between the virtual machine network and the edge infrastructure.

The network layer is responsible for the communication between the cloud layer and the applications running on the container layer. At this layer, a Software Defined Networking (SDN) solution is applied. The SDN paradigm offers many benefits compared to the traditional distributed approaches. Firstly, it simplifies networking in both the development and the deployment of new protocols and applications: with a software-based controller, network operators can much more easily program, modify, manipulate, and configure a protocol in a centralized way, without independently accessing and configuring individual network hardware devices scattered across the whole network. Secondly, SDN-based architectures provide a centralized controller with global knowledge of the network state, capable of controlling the network infrastructure in a vendor-independent manner. The network devices simply accept policies from the controller, without having to understand and implement various network protocol standards, making it possible to directly control, program, orchestrate, and manage network resources at the SDN controller. This feature thus saves a lot of workforce and resources. On the Modular Architecture, the SDN controller and its APIs, together with the virtual switch, represent this layer.

The edge layer, composed of the Edge node and its orchestration functionalities on the Modular Architecture, is responsible for the management and orchestration of edge devices. Configuring fog devices and keeping them updated and secure in a large-scale infrastructure is challenging [146]. These tasks are labor intensive and error prone. For instance, in a cloud environment, well-known Internet companies claim that a single admin handles thousands of machines running a single service type. Taking that into the Fog infrastructure, configuring and maintaining

many different types of services running on billions of heterogeneous devices will only exacerbate current management problems. The fog needs heterogeneous devices and their running services to be handled in a more homogeneous manner, ideally fully automated by software. To achieve that, Configuration Management (CM) tools such as Ansible [143], Chef [144], Puppet [142], etc., can be applied to manage these edge devices. CM deals with maintaining the hardware and software of a business. It involves making a detailed record of the information about the computer system and updating it as needed. This includes listing all of the installed software, the network addresses of the computers, and the configuration of different pieces of hardware. It also means creating updates or ideal models that can be used to quickly update computers or restore them to a predefined baseline. Configuration management software makes it easy for a system administrator to see what programs are installed and when upgrades might be necessary. For example, Ansible, an open source IT configuration management tool, provides large productivity gains for a wide variety of automation challenges; it seeks to solve major unsolved IT challenges such as clear orchestration of complex multi-tier workflows and cleanly unifying OS configuration and application software deployment.

Figure 9: SmartEdge Implementation

The last layer, representing the Container Orchestrator, its APIs, and the container on the Modular Architecture, has the responsibility of provisioning resources at the network edge. At

the edge device, applications are deployed in the form of containers which, as stated earlier, deliver the promise of "develop once, run anywhere". In that way, the application can be developed locally and deployed on any edge device, independent of its architecture or platform. The container layer is composed of a large cluster of edge devices, controlled by a single entry point in the SmartEdge Stack. This is done by using Docker Swarm, a native clustering solution for Docker. It turns a pool of Docker hosts (edge devices) into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts. This enables developers to deploy their applications directly at the network edge, closer to their users, without dealing with compatibility problems. Other solutions, such as Kubernetes [97] or Apache Mesos [96], can also be applied at this layer.
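Because Swarm serves the standard Docker Remote API, a container-create request against the cluster's single entry point looks like an ordinary Docker API call. The sketch below only builds the request body for POST /containers/create; the image name and port are illustrative, not the platform's actual code.

```python
import json

# Since Docker Swarm exposes the standard Docker Remote API, deploying an
# application at the edge reduces to an ordinary container-create request
# sent to the single Swarm endpoint. Image name and port are illustrative.
def build_container_create(image: str, exposed_port: int) -> dict:
    """Body for POST /containers/create (Docker Remote API)."""
    port_key = f"{exposed_port}/tcp"
    return {
        "Image": image,
        "ExposedPorts": {port_key: {}},
        "HostConfig": {
            "PortBindings": {port_key: [{"HostPort": str(exposed_port)}]},
        },
    }

req = build_container_create("myapp:latest", 8080)
print(json.dumps(req))
```

Swarm then schedules this request onto one of the edge devices, exactly as a standalone Docker daemon would handle it locally.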

Figure 9 presents an implementation of the architecture of the proposed platform.

In Figure 9, OpenStack [130] represents the Cloud Infrastructure; it is a cloud infrastructure platform hosted on a central DC. OpenStack is a free and open-source software platform for cloud computing, mostly deployed as infrastructure-as-a-service. The software platform consists of interrelated components that control hardware pools of processing, storage, and networking resources throughout a data center. Through its Orchestration service (Heat), it is possible to deploy the SmartEdge platform with a single push of a button, by using the developed Heat Orchestration Template (HOT). The template uses other OpenStack services to create the needed resources, such as virtual machines, volumes, and networks, as shown in Figure 7. As the resources at the network edge are limited, the Cloud Infrastructure also has the responsibility of providing the resources needed to process large chunks of data when the resources at the network edge are not capable of processing them.

The network layer is represented by the OpenDaylight SDN controller. Hosted by the Linux

Foundation, OpenDaylight Project is an open source software-defined networking project aimed

at enhancing SDN by offering a community-led and industry-supported framework for the

OpenDaylight Controller, which has been renamed the OpenDaylight Platform. It is open to

anyone, including end users and customers, and it provides a shared platform for those with

SDN goals to work together to find new solutions. Under the Linux Foundation, OpenDaylight

includes support for the OpenFlow protocol, but can also support other open SDN standards.

The OpenFlow protocol, considered the first SDN standard, defines the open communications

protocol that allows the SDN Controller to work with the forwarding plane and make changes

to the network. This gives businesses the ability to better adapt to their changing needs and to have greater control over their networks. The OpenDaylight Controller can be deployed in a

variety of production network environments. It supports a modular controller framework and can provide support for other SDN standards and upcoming protocols. Also, it exposes

open northbound APIs, which are used by applications. These applications use the Controller

to collect information about the network, run algorithms to conduct analytics, and then use the

OpenDaylight Controller to create new rules throughout the network. In the proposed implementation, OpenDaylight works in conjunction with OpenStack Neutron (the OpenStack networking service) to create, over the Internet infrastructure, an overlay network that interconnects cloud resources (virtual machines) directly with the applications deployed at the network edge (containers). Thus, each container has an interface connected to a network through which it can access the resources provided by the cloud.

Ansible is the solution adopted for the orchestration of edge nodes. Ansible is an open

source IT configuration management, deployment, and orchestration tool. It differs from other management tools in many respects, aiming to provide large productivity gains for a wide

variety of automation challenges as a more productive drop-in replacement for many core ca-

pabilities in other automation solutions. Furthermore, Ansible seeks to solve major unsolved IT

challenges such as clear orchestration of complex multi-tier workflows and cleanly unifying OS

configuration and application software deployment under a single banner. Ansible is designed

to be minimal in nature, consistent, secure, and highly reliable, with an extremely low learning

curve for administrators, developers, and IT managers. Ansible seeks to keep descriptions of IT

easy to build, and easy to understand - such that new users can be quickly brought into new IT

projects, and longstanding automation content is easily understood even after months of being

away from a project.

Ansible performs automation and orchestration of IT environments via playbooks. Playbooks are YAML definitions of automation tasks that describe how a particular piece of automation should be done. Like their namesake, Ansible playbooks are prescriptive yet responsive descriptions of how to perform an operation (in this case, IT automation): they clearly state what each individual component of the IT infrastructure needs to do, while still allowing components to react to discovered information and to operate in concert with each other. Ansible also

used to easily apply common configurations in different scenarios, such as having a docker con-

figuration role that may be used in development, test, and production automation. The Ansible

Galaxy [147] community site contains thousands of roles that can be used and customized to

build playbooks.


In the proposed architecture, Ansible is used for orchestrating and managing the configuration of the edge devices. An Ansible role was developed for setting up Docker and Open vSwitch on the edge devices, so that when a new edge device is added to the platform, Ansible is capable of configuring the device regardless of its platform, and within minutes the device has joined the cluster and is available for use. If a device has specific configuration needs, device vendors (or anyone else) can use Ansible Galaxy to provide specific configurations for such devices as soon as the device is launched, and the SmartEdge platform can incorporate these changes with minimal labor.
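The developed role itself is not reproduced in this document. As a minimal sketch, the list below mirrors, in Python data equivalent to the playbook YAML, the kind of tasks such a role could contain: apt, service, and command are standard Ansible modules, while the package names and the OVS controller address are illustrative assumptions.

```python
# Sketch of the task list an edge-node role could contain, written as the
# Python equivalent of the playbook YAML. Package names and the controller
# address are illustrative assumptions, not the actual developed role.
edge_node_role = [
    {"name": "Install Docker",
     "apt": {"name": "docker.io", "state": "present"}},
    {"name": "Install Open vSwitch",
     "apt": {"name": "openvswitch-switch", "state": "present"}},
    {"name": "Start the Docker daemon",
     "service": {"name": "docker", "state": "started"}},
    {"name": "Point the virtual switch at the SDN controller",
     "command": "ovs-vsctl set-controller br-int tcp:203.0.113.10:6633"},
]

for task in edge_node_role:
    print(task["name"])
```

Packaging these tasks as a role is what allows the same configuration to be reused across development, test, and production inventories, as described above.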

Docker Swarm is applied as the container orchestration solution. Docker Swarm [95] is a

clustering and scheduling tool for Docker containers (the applications deployed at the network

edge). With Swarm, IT administrators and developers can establish and manage a cluster of

Docker nodes as a single virtual system. Clustering is an important feature for container tech-

nology because it creates a cooperative group of systems that can provide redundancy if one

or more nodes fail. Clustering also provides administrators and developers with the ability to

add or subtract container iterations as computing demands change. Swarm uses scheduling ca-

pabilities to ensure that there are sufficient resources for distributed containers. Swarm assigns

containers to underlying nodes and optimizes resources by automatically scheduling container

workloads to run on the most appropriate host. This provides basic workload balancing for con-

tainerized applications, ensuring containers are launched on systems with adequate resources

while maintaining necessary performance levels. Docker Swarm uses the standard Docker API.

An IT administrator controls Swarm through a swarm manager (part of the SmartEdge stack),

which orchestrates and schedules containers. The swarm manager allows a user to create a pri-

mary manager instance and multiple replica instances in case the primary instance fails. Swarm

also has five filters for scheduling containers:

• Constraint: Key/value pairs associated with particular nodes. A user can select a subset of nodes when building a container and specify one or multiple key/value pairs.

• Affinity: Tells one container to run next to another based on an identifier, image or label.

• Port: When a container tries to run on a port that’s already occupied, it will move to the

next node in the cluster.

• Dependency: This filter schedules dependent containers to run on the same node.

• Health: Prevents scheduling containers on a node that is not functioning properly.


In the proposed solution, the user is free to use any combination of the above filters to better schedule an application, for example based on the edge device location. If a developer needs an application to run on an edge device located in Natal, this can be achieved by setting a node tag "location" with the value "Natal" on the edge device closest to Natal; when the application is deployed, the constraint filter with the corresponding value should be used.
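Under classic Docker Swarm, the "Natal" example above would be expressed by labeling the edge device's Docker engine with location=Natal and then passing the constraint in the container's environment. A minimal sketch follows; the image name and label value are illustrative.

```python
# Sketch of the constraint filter example: classic Docker Swarm accepts
# constraints as "constraint:<label>==<value>" entries in the container's
# environment, matching engine labels set on the target node. The image
# and location are illustrative values.
def with_location_constraint(image: str, location: str) -> dict:
    """Container-create body targeting nodes labeled with the given location."""
    return {
        "Image": image,
        "Env": [f"constraint:location=={location}"],  # classic Swarm constraint syntax
    }

req = with_location_constraint("myapp:latest", "Natal")
print(req["Env"][0])
```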

By using these available technologies, the proposed architecture is one of the first attempts at applying the concepts of fog computing in a real scenario. The architecture seeks to extend the services provided by the cloud by offering on-demand access to resources provided at the network edge, serving applications that demand real-time, low-latency response times. This is done in a simple and customizable way, where developers are free to use the protocols, approaches, and technologies of their choice, without having to deal with infrastructure constraints.

4.2.1 Application Programming Interface

Everything in SmartEdge is controlled through its Application Programming Interface (API). This section explains how to use the SmartEdge API for custom integrations.

The SmartEdge API is a REST API and uses an open schema model; in this model, unknown properties in incoming messages are ignored. It extends the API provided by the Shipyard Project [148], which also provides full compatibility with the Docker Remote API. In that way, it is possible to use the Docker Remote API for container operations such as create, start, stop, and more: SmartEdge passes any Docker Remote API request on to Swarm. See the Docker Remote API documentation [149] for the full reference.

For better organization, the full API provided by the SmartEdge platform is classified into six different sections according to its functions. Each section and its functions are described as follows.

4.2.1.1 Authentication

To access the SmartEdge API, it is necessary to be authenticated. The recommended way is to use service keys; however, it is also possible to use authorization tokens. All requests must have a header including either the service key or the authorization token. The authentication API is described in Table 3.

Table 3: SmartEdge Authentication API

    Function                   Description
    POST /auth/login           Receives a username/password and returns an authorization token
    GET /api/servicekeys       List service keys
    POST /api/servicekeys      Create a service key
    DELETE /api/servicekeys    Delete a service key

For requests, it is necessary to include either an authorization token or a service key in the request header, as shown in Table 4. Also, the data sent on the requests is expected to be posted as JSON content.

Table 4: SmartEdge's authentication method format

    Auth Method    Header            Example
    Service Key    X-Service-Key     X-Service-Key:01234
    Auth Token     X-Access-Token    X-Access-Token:user88:#06...#x
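A request carrying the service-key header from Table 4 could be assembled as follows. The key value is illustrative, and the snippet only builds the headers rather than contacting a real deployment.

```python
# Sketch of an authenticated SmartEdge API call: every request carries
# either a service key or an auth token header (Table 4), and bodies are
# posted as JSON. The key value here is an illustrative placeholder.
def service_key_headers(key: str) -> dict:
    """Headers for a SmartEdge API request authenticated with a service key."""
    return {
        "X-Service-Key": key,
        "Content-Type": "application/json",  # request data is posted as JSON
    }

headers = service_key_headers("01234")
print(headers)
```

With these headers, any HTTP client (curl, http.client, etc.) can call endpoints such as GET /api/nodes on a running deployment.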

4.2.1.2 Accounts

SmartEdge provides authentication and access control. The accounts API, described in Table 5, enables the creation and modification of accounts.

Table 5: SmartEdge's account management API

    Function                 Description
    GET /api/accounts        List all accounts along with their roles
    POST /api/accounts       Create a new account
    DELETE /api/accounts     Delete an existing account
    GET /api/roles           List roles and access privileges
    GET /api/roles/(name)    Return a role named 'name'
    POST /api/roles          Create a new role
    DELETE /api/roles        Delete an existing role

4.2.1.3 Network

The API described in Table 6 provides functions for managing the network that connects the application (container) running on the edge node to the Cloud Infrastructure.

Table 6: SmartEdge's network API

    Function             Description
    GET /api/ports       List all ports and the container each one is attached to
    POST /api/ports      Create a new port and attach it to a container
    DELETE /api/ports    Delete a port on a specified container

4.2.1.4 Nodes

The SmartEdge API provides functions for managing the nodes that compose the Swarm cluster. These functions are described in Table 7.

Table 7: SmartEdge's node management API

    Function                 Description
    GET /api/nodes           List all node devices
    GET /api/nodes/(name)    Return a node named 'name'
    POST /api/nodes          Add a new node to the cluster
    DELETE /api/nodes        Delete a node from the cluster

4.2.1.5 Registries

SmartEdge also provides an API for managing the Docker Registries attached to the cluster. This API is described in Table 8.

Table 8: SmartEdge's registry management API

    Function                                             Description
    GET /api/registries                                  List registries
    GET /api/registries/(name)                           Return a registry named 'name'
    GET /api/registries/(name)/repositories              List repositories
    GET /api/registries/(name)/repositories/(repo)       Return 'repo' from registry 'name'
    DELETE /api/registries/(name)/repositories/(repo)    Delete 'repo' from registry 'name'

4.2.1.6 Events

Events provide auditing and information for all actions in SmartEdge. For example, when a user creates a container, a new event is created storing information such as who created the container and when it was created. To manage these events, SmartEdge provides the API described in Table 9.

Table 9: SmartEdge's event management API

    Function              Description
    GET /api/events       List events
    DELETE /api/events    Delete all events

4.2.2 Usage Workflow

To provide the API described above, SmartEdge first needs to be deployed. This section describes the utilization of the platform, from the platform deployment to the deployment of an application.

There are some requirements for the deployment and usage of the platform. These requirements are:

• Cloud Infrastructure: As described earlier, the SmartEdge platform depends on an existing cloud infrastructure to be deployed on and to provide a pool of resources;

• SDN Controller: An SDN controller is required to provide network functions between

the cloud and the edge of the network. This controller will be used by both the cloud and

the edge devices through a virtual switch.

• Edge device: At least one edge device must be available as it is where the application will

be deployed.

By fulfilling these requirements, the next step is to deploy the SmartEdge platform. Figure 10 depicts a basic use case, from the platform deployment to the deployment of an application. The deployment of the platform can be done manually, which involves configuring each service; however, it is recommended to use the developed Heat Orchestration Template (HOT), as it automates the installation and configuration of the services.

Heat is an orchestration service provided by OpenStack that enables the orchestration of multiple composite cloud applications. The service supports both the Amazon Web Services (AWS) CloudFormation template format, through a Query API that is compatible with CloudFormation, and the native OpenStack Heat Orchestration Template (HOT) format, through a REST API. These flexible template languages enable application developers to describe and automate the deployment of infrastructure, services, and applications. The templates enable the creation of most OpenStack resource types, such as instances, floating IP addresses, volumes, security groups, and users. The resources, once created, are referred to as stacks.


Figure 10: SmartEdge usage workflow. The figure shows the actors involved (Operator/Dev/DevOps, the OpenStack cloud provider, the SmartEdge Stack, and the Edge Node), an excerpt of the developed HOT (heat_template_version 2013-05-23, a description stating that the template deploys the SmartEdge platform and its dependencies, and parameters such as server_image with default ubuntu-16.04-Nov-16), and the numbered workflow steps: (1) deploy SmartEdge using the HOT; (2) OpenStack creates the required resources and configures them; (3) add a device into the cluster; (4) Ansible installs the requirements and configures the device to join the cluster; (5) deploy an application; (6) the application is deployed on the device; (7) a network port is created on OpenStack and attached to the container.

In Figure 10, the first step is the deployment of SmartEdge using the HOT. The developed HOT expects the following parameters:

• Server Image: The O.S. image to be used to boot the virtual machines;

• Server Flavor: Flavor to use when booting the virtual machines; the minimum recommended is 2 VCPUs, 4 GB of RAM, and 15 GB of storage;

• Internal Network: Network to use for internal communication;

• External Network: Network to use for floating IP addresses; at least two floating IPs need to be available, one for the SmartEdge API and another for the node registry service;

• DNS Nameserver: Address of a DNS nameserver reachable at the environment, defaults

to 8.8.8.8 (Google’s DNS);

• SSH Key Name: SSH key to be provisioned on the virtual machines, as the virtual ma-

chines are only accessible by providing this SSH key.
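Launching the template through Heat's native REST API amounts to a POST /v1/{tenant_id}/stacks request whose parameters section mirrors the inputs listed above. The sketch below builds such a request body; the parameter keys, the template URL, and all values are assumptions for illustration, not the actual developed HOT.

```python
import json

# Sketch of launching the SmartEdge stack via Heat's REST API
# (POST /v1/{tenant_id}/stacks). The parameter keys mirror the HOT inputs
# listed above, but their exact names in the developed template are assumed;
# the template URL and all values are illustrative.
stack_request = {
    "stack_name": "smartedge",
    "template_url": "http://repo.example/smartedge-hot.yaml",  # hypothetical location
    "parameters": {
        "server_image": "ubuntu-16.04-Nov-16",
        "server_flavor": "m1.medium",      # >= 2 VCPUs, 4 GB RAM, 15 GB storage
        "internal_network": "internal-net",
        "external_network": "external-net",
        "dns_nameserver": "8.8.8.8",
        "ssh_key_name": "smartedge-key",
    },
}

print(json.dumps(stack_request, indent=2))
```

The same parameters can equally be passed through the Horizon dashboard or the OpenStack CLI when launching the stack.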

Launching the template, represented by step 2 in Figure 10, creates a stack (the SmartEdge Stack) composed of:


• 5 Virtual Machines/Instances: The virtual machines where the services (SmartEdge Controller, SmartEdge DB, Docker Swarm, Consul, and Registry) will be provisioned. Each service has a dedicated virtual machine;

• 4 Security Groups: Security groups are sets of IP filter rules that are applied to instances,

which controls networking access to the instance. This is required to permit the services

communication through its ports;

• 2 Floating IPs: Floating IPs, or public addresses, are used for communication with networks outside the cloud, including the Internet. They are required to provide access to the SmartEdge API and to Consul (for node registering);

• 1 Network: The network that will be used by the containers to communicate with the

cloud. All ports created through the SmartEdge API will be created on this network.

• 1 Volume/Object Storage: This volume is attached to the SmartEdge DB instance, and all data from the database is stored on it. This enables better control over data storage, as volumes can be dynamically resized and provide backup functionality.

After the stack is successfully created, the SmartEdge API and its web interface are ready for use. As the stack was just created, there is no node available for application deployment; thus, the next step (step 3 in Figure 10) is adding a node.

Adding a node to the cluster can be easily done through the API or the web interface. As our implementation uses Ansible, the node must be accessible via SSH, and the SSH user must have administrator permissions, necessary for the installation and configuration of Docker and Open vSwitch. To add a node, the username/password (used for SSH) and the SSH port must be provided. Figure 11 depicts the process of adding a node, which consists in letting Ansible access the node, install Docker and Open vSwitch, and configure it to join the Docker Swarm cluster. After a node is added to the cluster, information about it, such as OS and kernel version, can be retrieved through the API or through the web interface on the nodes tab.
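The add-node call described above could look as follows. The exact field names accepted by the SmartEdge API are not given in this document, so the ones below are illustrative assumptions.

```python
import json

# Sketch of adding an edge node through the SmartEdge API: the request
# supplies the SSH coordinates Ansible needs to reach and configure the
# device. Field names and values are illustrative assumptions.
def build_add_node(addr: str, user: str, password: str, ssh_port: int = 22) -> dict:
    """Body for POST /api/nodes."""
    return {
        "address": addr,
        "ssh_user": user,         # must have administrator permissions
        "ssh_password": password,
        "ssh_port": ssh_port,
    }

req = build_add_node("192.0.2.50", "admin", "secret")
print(json.dumps(req))
```

On receiving such a request, the controller hands the credentials to Ansible, which performs the installation and configuration steps shown in Figure 11.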

Now that there is a node in the cluster, it is possible to start deploying applications (containers) using the platform. Deploying an application using SmartEdge is as simple as deploying a Docker container, and it can be done through the API or the web interface. If the application is available on Docker Hub [150], which provides a free-to-use, hosted Registry plus additional features (organization accounts, automated builds, etc.), the application deployment can be done right away. If not, it is necessary to send the application to the Registry server. The

Registry is a stateless, highly scalable server side application that stores and enables sharing


Figure 11: Sequence diagram of adding a node to the SmartEdge cluster

Docker images. The Registry is open-source, under the permissive Apache license. The parameters required for deploying a container are the name of the container, the Docker image, and the ports that should be exposed; optionally, many other options are available to customize the deployment.
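The required and optional parameters map naturally onto a request payload; a sketch follows, where the field names and the example image are assumptions rather than the actual SmartEdge schema (the path matches Figure 12):

```python
import json

def build_deploy_request(name, image, ports, **options):
    """Assemble a container deployment payload (field names are assumed).

    `options` carries the optional customization parameters mentioned
    in the text (e.g. environment variables, resource limits)."""
    body = {"name": name, "image": image, "ports": ports}
    body.update(options)
    return {"method": "POST", "path": "/containers/create",
            "body": json.dumps(body)}

# e.g. deploying a hypothetical MQTT broker exposing port 1883
req = build_deploy_request("mqtt-broker", "eclipse-mosquitto", [1883])
```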

The process of deploying an application (steps 5 and 6 in Figure 10) is described in Figure 12. As the container being deployed will use resources from the cloud, before creating the container a port must be created to connect the container to the cloud network. This is done through the Neutron (OpenStack networking service) API; since Neutron is configured to use an SDN controller (OpenDaylight in our implementation), creating a port means configuring/adding flows to the table on the SDN controller. The port creation process also triggers the virtual switch on the edge node to create a port for the container. After the port is created, the request to create the container is redirected to Docker Swarm, which will schedule and launch the container on the available node. Also, the container will be configured with another


Figure 12: Sequence diagram of deploying an application on SmartEdge

network interface, connected to the port on the virtual switch, for joining the network provided

by the cloud.

The whole workflow of deploying and using the SmartEdge platform is depicted in Figure 10. It is important to state that the workflow shown here does not cover the utilization of all functions provided by the SmartEdge API. Moreover, considering that the implementation is just a prototype, some of the functions are still to be implemented.

4.3 Edge-Infrastructure-as-a-Service

The service models offered by clouds are attractive for developing large-scale applications due to their scalability and to high-level programming models that simplify the development of large-scale web services. However, the existing service models are designed for traditional web applications rather than for future Internet applications running on various mobile and sensor devices. Furthermore, public clouds, as they exist in practice today, are far from the idealized utility computing model. Applications are developed for a particular provider's platform and run in data centers that exist at singular points in space, which makes their network distance too great from many users to support highly latency-sensitive applications.


In contrast to the cloud, putting intelligence in the network allows fog computing resources

to perform low-latency processing near the edge while latency-tolerant, large-scope aggregation

can still be efficiently performed on powerful resources in the core of the network. Data center

resources may still be used with fog computing, but they do not constitute the entire picture.

Based on the concept of fog computing, the future needs of IoT applications, and the contributions of the proposed architecture, a new service model can be offered by the cloud provider to serve applications that demand real-time, low-latency response times.

Named Edge-Infrastructure-as-a-Service (EIaaS), it is a form of cloud computing that provides virtualized computing resources from the edge of the network. EIaaS is expected to become the de facto delivery model for those IoT applications that depend on low-latency response times to function.

In an EIaaS model, a third-party provider can host hardware, software, servers, storage, and other infrastructure components geographically distributed at the network edge on behalf of its users. In the same way as IaaS, EIaaS providers also host users' applications and handle tasks including system maintenance, backup, and resiliency planning. Furthermore, an EIaaS provider can connect sensors to its edge devices and offer this as an additional service, so that developers can make their applications interact with these sensors directly at the data source and take action in real time.

As for the payment model, EIaaS can use the same model adopted by IaaS vendors, where customers pay on a per-use basis, typically by the hour, week, or month. Some providers may also charge customers based on the amount of container space they use or on the sensors available. The pay-as-you-go model eliminates the capital expense of deploying in-house hardware and software.
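A minimal sketch of such a pay-as-you-go charge, combining usage hours with the optional container-space and sensor add-ons described above (all rates are made-up examples, not a proposed price list):

```python
def eiaas_charge(hours, hourly_rate, container_gb=0.0, gb_rate=0.0,
                 sensors=0, sensor_rate=0.0):
    """Pay-as-you-go charge: usage hours plus optional container-space
    and attached-sensor add-ons (rates are hypothetical)."""
    return round(hours * hourly_rate
                 + container_gb * gb_rate
                 + sensors * sensor_rate, 2)

# e.g. one week of usage at $0.05/h plus two attached sensors at $1.00 each
bill = eiaas_charge(hours=7 * 24, hourly_rate=0.05, sensors=2, sensor_rate=1.0)
```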

It is important to state that EIaaS is not a replacement for IaaS; rather, the two technologies complement one another. Combined, the complementary functions of EIaaS and IaaS offer end-users a new service delivery model that serves the requirements of real-time, low-latency IoT applications running at the network edge, while also supporting complex analysis and long-term storage of data in the core cloud computing framework.


Chapter 5

Evaluation

5.1 Preliminary Results

To make a better decision on which virtualization approach best fits the proposed architecture, considering the IoT scenario, we performed several synthetic benchmarks of two different virtualization technologies (KVM and Docker containers) on a Cubieboard2 SoC platform, which is representative of IoT gateway hardware. The methodology adopted and the results found are described in the following sections.

5.1.1 Methodology

Native (non-virtualized) performance was used as the base case to compute the virtualization overhead of both solutions. Benchmark tools were used to measure (using generic workloads) CPU, memory, disk I/O, and network I/O performance.

Unless stated otherwise, each benchmark test was repeated 15 times and the results are the average of the repetitions. Different tools were used to verify the consistency of the obtained results, which are shown in the graphs presented below.
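The reduction applied to every benchmark below, averaging the repeated runs and checking their spread, can be sketched as:

```python
import statistics

def summarize(samples):
    """Average of the repeated benchmark runs, plus the (population)
    standard deviation used as a quick consistency check between runs."""
    return statistics.mean(samples), statistics.pstdev(samples)

# e.g. eight hypothetical throughput samples
mean, spread = summarize([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
```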

The following hardware was used for the empirical experimentation:

• Computer model: Cubieboard2;

• Processor: ARM Cortex-A7 Dual-Core;

• Memory: 1GB DDR3 @960MHz;

• Disk: SanDisk microSDHC Card 16GB, CLASS 4;

• Network: 100Mb/s interface;


• OS: Debian 8 (jessie), ARM EABI.

Cubieboard2 [151] is an affordable (US$ 60) and popular hardware platform. The device selected for evaluation belongs to a set of platforms representing the evolution of IoT gateway hardware over the last few years. It is important to note that the Cortex-A7 is 100% ISA-compatible with the Cortex-A15; this includes the new virtualization instructions, integer divide support, and 40-bit memory addressing. Any code running on a Cortex-A15 can run on a Cortex-A7, just slower. This is an important feature, as it enables SoC vendors to build chips with both Cortex-A7 and Cortex-A15 cores, switching between them depending on workload requirements [152].

Kernel-based Virtual Machine (KVM), together with the QEMU emulator version 2.5.0, was used as our hypervisor-based virtualization solution. KVM is managed using the standard Linux libvirt API and toolchain (virsh). When running QEMU with KVM, the emulated hardware is a Versatile Express A15, one of the reference platforms provided by ARM Holdings.

Docker 1.10.0 was used as the Linux container management tool. The image was provided by the Hypriot repository [115] and runs directly on the host OS. The container runs an image based on Debian 8 (jessie), the same OS as the host.

The benchmarks involving network communication were performed from an AMD A8-5500B APU PC running Ubuntu Linux 14.04 LTS, kernel version 3.19.0, with a NetXtreme BCM5761 Gigabit Ethernet card. The PC was directly connected to the NIC of the board under test, providing a 100BASE-TX Fast Ethernet connection.

5.1.2 Benchmark Results

The results are depicted in tables and column charts showing the benchmark results and the difference, in percentage, compared to the base case (native, non-virtualized) solution.

5.1.2.1 CPU Benchmark

To better measure and compare CPU performance, three different tools were used: NBench [153], SysBench [154] and High-Performance LINPACK [155]. Each has distinct characteristics in how it evaluates CPU performance, as described below.

NBench is a synthetic computing benchmark program developed during the 90s to measure CPU, FPU, and memory system speed. Despite its age, it can


still be used as a valid tool for performance measurements. The tool quantifies an upper limit for the mentioned performance characteristics. NBench is single-threaded, and its algorithm suite consists of ten different tasks, which generate three indexes: Integer Index, Floating-Point Index, and Memory Index.

Table 10: CPU Benchmark: NBench

Platform    Memory Index        Integer Index       Floating-Pt Index
Native      4.028 (–)           4.243 (–)           0.417 (–)
Docker      4.024 (-0.10%)      4.209 (-0.80%)      0.417 (0.00%)
KVM         3.856 (-4.27%)      4.121 (-2.88%)      0.411 (-1.44%)

The results of running NBench on the selected platforms are shown in Table 10. They demonstrate that the overhead introduced by the hypervisor-based virtualization approach is always higher than that of lightweight virtualization. These results also show that Docker containers have an almost negligible impact on CPU performance.
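The percentage columns of Table 10 (and of the later tables) are simply each result relative to the native base case, which can be checked directly from the table's own values:

```python
# Index values from Table 10
native = {"memory": 4.028, "integer": 4.243, "float": 0.417}
docker = {"memory": 4.024, "integer": 4.209, "float": 0.417}
kvm = {"memory": 3.856, "integer": 4.121, "float": 0.411}

def pct_vs_native(value, base):
    """Relative difference (%); negative means slower than native."""
    return round((value - base) / base * 100, 2)

kvm_memory_overhead = pct_vs_native(kvm["memory"], native["memory"])  # -4.27
```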

SysBench is a modular, cross-platform, multi-threaded benchmark tool for evaluating OS parameters that are important for a system running a database under intensive load. However, it can also be used for a variety of low-level system tests, from CPU to disk I/O. The actual workload produced by the requests depends on the specified test mode. It is possible to limit either the total number of requests, the total time for the benchmark, or both. Pre-defined test modes include cpu, threads, mutex, fileio, and oltp; each test mode may have additional (workload-specific) options. For this evaluation, both the cpu and threads test modes were performed, and the results are available in Table 11. The reported result is the total time taken to complete the test. In the cpu test mode, each request consists of calculating the prime numbers up to a specified value, 10000 in our case. All calculations are performed using 64-bit integers. We also specified the test to run on two threads; each thread executes requests concurrently until the total number of requests exceeds the specified limit.
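A minimal Python sketch of the cpu test mode's workload, counting primes up to the limit by trial division as described above (SysBench's actual implementation differs in detail):

```python
def count_primes(limit):
    """Count the primes up to `limit` by trial division, the kind of
    CPU-bound request SysBench's cpu mode issues (with limit=10000)."""
    count = 0
    for n in range(2, limit + 1):
        is_prime = True
        d = 2
        while d * d <= n:      # only need divisors up to sqrt(n)
            if n % d == 0:
                is_prime = False
                break
            d += 1
        if is_prime:
            count += 1
    return count
```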

The threads test mode evaluates scheduler performance, more specifically the case where the scheduler has a large number of threads competing for a set of mutexes. We specified SysBench to create two threads and eight mutexes. Each thread runs requests consisting of locking a mutex, yielding the CPU so that the thread is placed back in the run queue by the scheduler, then unlocking the mutex when the thread is rescheduled for execution. For each request, the above actions are run several times in a loop, so the more iterations are performed, the more concurrency is placed on each mutex.


Table 11: CPU/Scheduler Benchmark: SysBench

Platform    CPU (s)             Threads (s)
Native      145.547 (–)         7.194 (–)
Docker      145.918 (+0.25%)    9.695 (+34.77%)
KVM         147.250 (+1.17%)    78.529 (+991.65%)

The results depicted in Table 11, like the NBench results, confirm that the overhead introduced by the hypervisor-based virtualization approach is always higher than that of lightweight virtualization. In the threads test, the KVM run time is almost ten times the native execution time and more than seven times the Docker container execution time (78.529 s vs. 9.695 s).

The last tool used for benchmarking CPU performance was the High-Performance LINPACK (HPL) benchmark, provided by the hpcc package [156]. The Linpack benchmark measures a computer's floating-point rate of execution, determined by running a program that solves a dense system of linear equations. In particular, the algorithm uses a random matrix A (of size N) and a right-hand side vector B, defined by A * X = B. The benchmark tool executes two steps to solve the algebra problem:

1. LU ('Lower-Upper') factorization of A.

2. The LU factorization is used to solve the linear system A * X = B.

Linpack reports its results in Mflop/s, the rate of execution in millions of floating-point operations per second.
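HPL derives the Mflop/s figure from the conventional operation count for an LU solve, 2N³/3 + 2N² floating-point operations, divided by the time to solution; a small sketch (the example timing is hypothetical):

```python
def hpl_mflops(n, seconds):
    """Mflop/s for an N x N Linpack solve, using the conventional
    2/3*N^3 + 2*N^2 floating-point operation count for LU + solve."""
    flops = (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2
    return flops / seconds / 1e6

# e.g. the N=2000 case of Figure 13 is ~5.34e9 floating-point operations;
# a hypothetical 10-second solve would therefore be ~534 Mflop/s
rate = hpl_mflops(2000, 10.0)
```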

The Linpack benchmark results, presented in Figure 13, show a specific use case with N=2000, the default configuration of the hpcc package. Although the differences between the values are not substantial, they follow the same pattern as the previous results. From this point of view, all CPU benchmarks showed a negligible overhead introduced by lightweight virtualization compared to the overhead of hypervisor-based virtualization. It should also be noted that, although the numerical differences between the platforms are relatively small, they can make a huge difference when the available resources are limited, as is the case for network edge devices.


Figure 13: Linpack results on each platform over 15 runs, with N=2000

5.1.2.2 Disk I/O Benchmark

For the disk benchmark, as for the CPU benchmark, three different tools were used to cross-check the results: Bonnie++ [157], SysBench, and DD [158]. Bonnie++ is an open-source benchmark tool suited to performing a number of simple hard drive and file system tests to characterize disk performance. The tests measured data read and write speed, the number of seeks, and file metadata operations per second. It should be stated that Bonnie++ was configured as recommended by its developer, that is, using a test file of at least twice the size of system memory; in our case, a 3GiB test file was used. Figure 14 depicts the Bonnie++ results for sequential write (Block Output) and sequential read (Block Input) speed. The native and container-based platforms offer very similar performance in both cases. However, KVM write throughput is roughly a third, and read throughput almost a fifth, of the native one.


Figure 14: Disk throughput results from running Bonnie++ using a file size of 3 GiB

We also used Bonnie++ to measure the speed at which a file is read, written, and then flushed to the disk (Random Write test), and the number of random seeks per second. The results are presented in Table 12.

Table 12: Disk Random Write/Seek Benchmark: Bonnie++

Platform    Random Write Speed (MB/s)    Random Seeks
Native      4.55 (–)                     1165 (–)
Docker      4.46 (-1.98%)                1006 (-13.65%)
KVM         1.38 (-69.67%)               311.9 (-73.23%)

The results in Table 12 show that the random write speeds are well aligned with the results presented in Figure 14. In the random seeks measurements, there is a considerable loss on both virtualized solutions, though it is more pronounced for KVM.

The second tool used to measure disk performance was SysBench, the same tool used previously to benchmark the CPU, this time with the fileio test mode. This test mode can produce various kinds of file I/O workloads. At the prepare stage, SysBench creates a specified number of files with a specified total size (3GiB in our case); then, at the run stage, each thread performs the specified I/O operations (we used rndrw, random read/write) on this set of files. The obtained results are presented in Figure 15, and they clearly


show a huge mismatch between the raw results of Bonnie++ and SysBench. This mismatch is also evidenced in other works [159], suggesting that disk I/O performance measurement can be intricate. However, considering the percentage values, both tools presented quite similar results.

Figure 15: Disk rndrw results from SysBench using a file size of 3 GiB

The last tool used for the disk benchmark was DD, a command-line utility for Unix-like operating systems commonly used for several operations, such as recovering data from hard disks, creating disk images, data conversion, and low-level formatting. In this work, DD was used to test the hypervisor's and the container's capacity to read and write from special device files such as /dev/zero, a virtual file that returns null (0x00) characters when read. This test was performed using a 1024-byte block size and a test file size of 3 GiB. Figure 16 shows the obtained results for this test.
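The DD invocation for this test can be reconstructed as below (assembled as a string to keep a single language in this chapter; the output file name and exact flag set used in the experiment are assumptions):

```python
FILE_SIZE = 3 * 1024 ** 3   # 3 GiB test file
BLOCK_SIZE = 1024           # 1024-byte blocks, as in the experiment

# Number of BLOCK_SIZE-byte blocks dd must copy to produce FILE_SIZE bytes
count = FILE_SIZE // BLOCK_SIZE

# Read null bytes from /dev/zero and write them to a test file
dd_command = f"dd if=/dev/zero of=testfile bs={BLOCK_SIZE} count={count}"
```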

Unlike the SysBench results, the results in Figure 16 confirm what was obtained with Bonnie++ (Figure 14): with native, Docker, and KVM we obtained roughly the same result, without any significant deviation. For the sake of completeness, it has to be stated that for native execution we used the default file format for disk images, Docker utilizes AUFS (Another Union File System), which supports layering and enables versioning of images, while KVM uses an image in the raw format. Also, regarding the divergent results obtained with the SysBench tool, we did not find any material evaluating the reliability of different disk benchmarking tools.


Figure 16: Disk throughput from DD using a file size of 3 GiB and a block of 1024b

5.1.2.3 Memory Benchmark

STREAM [160] was used to benchmark memory I/O performance. STREAM is a simple synthetic benchmark that measures sustainable memory bandwidth (in MB/s) and the corresponding computation rate using very simple kernel operations. Results are produced for four different operations: Copy, Scale, Add, and Triad. The STREAM author states that CPU cache size greatly influences the measured performance; for this reason, it is advised that the size of the "stream array" be set according to the following rule: "each array must be at least four times the size of the available cache memory". Following the recommended configuration, the obtained results are presented in Table 13. Although the differences between the results are quite small, Docker outperforms KVM in every case when compared to the native execution.

Table 13: Memory Benchmark: STREAM

Platform    Copy                Scale               Add                 Triad
Native      1759.4 (–)          806.8 (–)           654.6 (–)           534.7 (–)
Docker      1754.5 (-0.28%)     804.8 (-0.25%)      652.8 (-0.27%)      533.2 (-0.28%)
KVM         1723.4 (-2.05%)     786.3 (-2.54%)      641.2 (-2.05%)      523.7 (-1.78%)
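The four STREAM kernels are simple element-wise array operations; a Python sketch over small lists (STREAM itself runs these in C over large double arrays sized per the rule above):

```python
def stream_kernels(a, b, c, q):
    """One iteration of the four STREAM operations over equal-length arrays."""
    n = len(a)
    copy = [a[i] for i in range(n)]               # Copy:  c[i] = a[i]
    scale = [q * c[i] for i in range(n)]          # Scale: b[i] = q * c[i]
    add = [a[i] + b[i] for i in range(n)]         # Add:   c[i] = a[i] + b[i]
    triad = [b[i] + q * c[i] for i in range(n)]   # Triad: a[i] = b[i] + q * c[i]
    return copy, scale, add, triad

results = stream_kernels([1.0, 2.0], [3.0, 4.0], [5.0, 6.0], 2.0)
```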


5.1.2.4 Network Benchmark

We used Netperf [161] to measure network I/O performance. Netperf is a benchmark tool embedding several tests that can be used to measure the performance of different types of network between two hosts. It makes it possible to benchmark both unidirectional throughput and request/response data transfer with the TCP and UDP protocols. The tests were performed under the following configuration:

• As stated previously, a PC was directly connected to the NIC of the Cubieboard2 under test, providing a 100BASE-TX Fast Ethernet connection;

• The Cubieboard2 ran the netperf server and the PC the netperf client;

• Default values for socket and message sizes;

• Test duration: 600 seconds;

• IPv4 addressing.

First we ran the TCP_STREAM and UDP_STREAM tests, the default tests provided by netperf. This type of test simply consists of transmitting TCP or UDP data between the netperf client and the netperf server. Figure 17 presents the results obtained from these tests. They show that the Docker container achieves almost the same performance as the native platform on both protocols, while KVM is 22.9% and 18% slower on the TCP and UDP protocols, respectively. All platforms offer lower throughput with the UDP protocol.

Figure 17: Network throughput results from running netperf during 600 seconds


The netperf TCP_RR and UDP_RR tests measure the number of TCP and UDP transactions per second. Each transaction is composed of the following events:

1. The netperf client sends a request to the netperf server.

2. The netperf server sends a response to the netperf client.

The results obtained from these tests are depicted in Figure 18.

Figure 18: Network request/response results from running netperf during 600 seconds

These results clearly show that the virtualized platforms are not as responsive as the native one. In the TCP and UDP request/response tests, Docker introduced a considerable overhead (44.2%) compared to native execution, while KVM introduced an even higher overhead on both protocols (85% on average).
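Because each *_RR transaction is synchronous (one request, then one response, with a single transaction outstanding), the transaction rate maps directly onto mean round-trip time, and the relative drop versus native follows as below; the example rates are made-up numbers, not measurements from Figure 18:

```python
def mean_rtt_ms(transactions_per_second):
    """One outstanding transaction at a time, so mean RTT ~= 1 / rate."""
    return 1000.0 / transactions_per_second

def responsiveness_drop_pct(platform_tps, native_tps):
    """Relative drop (%) in transaction rate versus the native platform."""
    return round((native_tps - platform_tps) / native_tps * 100, 1)

# e.g. a hypothetical platform handling 558 transactions/s vs. 1000 native
drop = responsiveness_drop_pct(558.0, 1000.0)
```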

5.1.3 Conclusion

While the results of the hypervisor-based solution showed a significant overhead that cannot be easily mitigated, the results of the Docker platform are promising.

Containerized deployment clearly introduces many benefits for cloud applications. However, this deployment approach has not yet been deeply investigated in the IoT domain, and on IoT gateways in particular.


To make a better decision on which virtualization approach best fits a specific IoT application, more understanding of the overall IoT landscape and a case-by-case analysis are needed. Such an analysis might include the application workload, different virtualization solutions, and also different SoC hardware models. A more generic benchmark may become feasible over time, as IoT matures and more first-hand experience with different applications becomes available. Until then, Linux containers seem to hold an advantage over hypervisor-based virtualization for deploying applications at the network edge, offering a great deal of flexibility under relaxed resource constraints.

5.2 SmartEdge Evaluation

In this section the evaluation results of the SmartEdge platform and its services are analyzed. The evaluation considers its suitability for deploying applications closer to the end user in order to support latency-sensitive applications. To this end, a comparative analysis was conducted, comparing results from an application deployed using the typical cloud computing framework against one deployed using the proposed platform.

The SmartEdge evaluation scenario, methodology, and results are presented in the following sections.

5.2.1 Evaluation Scenario

Considering that the typical cloud computing framework is based on centralized data centers around the world, the testbed built to compare the centralized cloud and the SmartEdge fog computing approach was composed of three real production clouds, each located on a different continent.

As can be seen in Figure 19, the clouds composing the testbed were located in Brazil (Cloud-BR), the nearest one; the United States (Cloud-USA); and Germany (Cloud-GE). Each has its own characteristics regarding network paths, and all clouds are based on the OpenStack platform [130].

Cloud-BR [162] was provided by the Federal University of Campina Grande (UFCG) and the Distributed Systems Laboratory (LSD). It is the nearest one, located in the city of Campina Grande, about 300 kilometers from where the tests were executed (Natal/RN); the latency from the experiment location to an instance of this cloud was about 50 milliseconds.


Figure 19: SmartEdge Evaluation Testbed

For the cloud located in the United States (Cloud-USA), TryStack [163] was used. TryStack is a public deployment of OpenStack that highlights a set of common reference architectures and enables users to try out the OpenStack APIs for managing cloud computing resources. However, it sets a few sensible limits for the good of the project; for instance, launched server instances are only available for 24 hours before the hardware is reclaimed for new instances. It is possible to select from several available images (CentOS, Fedora, Ubuntu, and OpenSUSE) or to upload your own. The TryStack hardware is housed in a data center in Las Vegas (USA) [164], about 9,000 kilometers from where the experiment was executed, yielding about 115 milliseconds of latency.

Our third cloud is another OpenStack cloud, hosted in the data center of the Dresden University of Technology (TU Dresden) [165]. It is the farthest one: as it is located in Europe, the path to reach its instances goes through North America. In total, this gives about 24,500 kilometers of network distance from the location where the experiments were executed, and about 400 milliseconds of latency.


The SmartEdge platform was deployed on Cloud-BR, as it was the only cloud infrastructure

meeting the platform requirements. The deployment was performed using the developed Heat

Orchestration Template (HOT).

As described in the previous section, using the platform requires at least one node to join the cluster. We used the Raspberry Pi 3 Model B [121] to represent the edge node. The Raspberry Pi 3 Model B is an affordable and popular hardware platform, and the device corresponds to a set of platforms representing the evolution of IoT gateway hardware over the last few years. The specifications of the Raspberry Pi are as follows:

• A 1.2GHz 64-bit quad-core ARMv8 CPU

• 802.11n Wireless LAN

• Bluetooth 4.1

• Bluetooth Low Energy (BLE)

• 1GB RAM

• 4 USB ports

• 40 GPIO pins

• Full HDMI port

• Ethernet port

• Combined 3.5mm audio jack and composite video

• Camera interface (CSI)

• Display interface (DSI)

• Micro SD card slot

• VideoCore IV 3D graphics core

The Raspberry Pi runs Ubuntu Server Minimal 16.04 as its operating system (OS). The OS image was provided by the Ubuntu Pi Flavor Maker [166]; these images are built from the regular Ubuntu armhf base, not Snappy Ubuntu, which means they are closer to the Ubuntu images used on production cloud servers. Also, the node was connected to


the network through a 300Mbps 2.4GHz IEEE 802.11b/g/n wireless router, and it was added to the SmartEdge cluster through the SmartEdge API.

To sum up, the testbed is composed of three real clouds distributed across three different continents, the SmartEdge platform deployed on Cloud-BR, and an edge node device running Ubuntu, representing the IoT gateway hardware. This scenario gives many options for service provisioning, which are explored in our evaluation.

5.2.2 Methodology

To evaluate the SmartEdge platform and the impact of latency on both the application and server sides, the following experiments were conducted:

1. Provisioning time: A comparison was conducted to evaluate the provisioning time of a service through SmartEdge and through a cloud computing platform. This experiment was performed 30 times on each platform, and the results represent the average time spent to provision the application.

2. Latency impact on request/response: This analyzes the impact of latency on the performance of simply replying to a request, comparing results from a server deployed by SmartEdge and by the cloud computing platforms. The experiment was conducted using a tool that performs requests and waits for responses, executed over a period of fifteen minutes; the results represent the rate of transactions (request/response) per second the tool was capable of handling from a server running in a container deployed by SmartEdge, compared to a server running in a cloud instance deployed by OpenStack.

3. Impact of latency on application QoE and server CPU: In this experiment, a real latency-sensitive application is deployed using the SmartEdge platform and also on the three available clouds using OpenStack. The evaluation compares the client application's FPS, as a QoE metric, and the server resource usage across the different deployment solutions.

4. Impact of resource allocation on the client application QoE: In this evaluation, three instances of the same application used in item 3 run on the same machine. At first, the deployment uses the SmartEdge platform to reserve resources for one of the application instances, while the other two compete for resources. Then, in a second deployment made manually, directly on the machine and without any type of resource allocation, all applications compete for resources. This evaluation seeks to analyze the importance of a softwarized control plane, for example for performing resource allocation, when providing latency-sensitive services.

For all experiments running on the clouds, an instance was created to host the application. This instance runs Ubuntu Server 16.04 as its operating system, and the image was provided by the Ubuntu Cloud Images repository [167]. The instance was also configured to use a flavor close to the configuration of the Raspberry Pi used in the SmartEdge platform. The flavor has the following configuration:

• 4 VCPUs

• 1 GB RAM

• 40 GB disk

To analyze the provisioning time, the experiment collected the time (in seconds) a service takes to become available, from the moment the call to launch the instance (cloud) or container (SmartEdge) is executed until the service port is available. For the request/response measurement, a tool was used to measure how many request/response pairs (transactions) the system can handle per second.

The latter experiments use a client-server application to measure how latency impacts its use. To achieve that, the following metrics were measured:

• Frames per second (FPS): This metric was considered as it directly impacts the QoE of the application. It is also highly affected by the latency between the application and the server.

• Latency (ms): It is necessary to keep track of the latency to analyze how it affects both the client and the server application.

• CPU (%): On the server side, the percentage of CPU utilization was collected to make it possible to analyze how it is affected by the latency when communicating with the client application.

The experiments involving the client-server application were conducted over a period of fifteen minutes, collecting the above metrics. FPS and latency were collected on the client side of the application, while CPU utilization was measured on the server side.


5.2.3 Results

The results are presented in tables and graphs organized according to the collected metrics, comparing the cloud computing framework with the SmartEdge approach.

5.2.3.1 Provisioning time

To measure the application provisioning time, a script was developed to automate the process. The script uses curl [168] to make a call directly to the SmartEdge (container) or OpenStack (virtual machine) API to provision the application. In both scenarios, the application consists of an image with the application already installed and configured, so that as soon as the instance (or container) boots, the application starts. To check if the application is available, the script keeps trying to connect to the application port until it is available. The application used in this experiment is a simple server application listening on a specific port. The OpenStack provisioning time is the average of provisioning the application on the three clouds (Cloud-BR, Cloud-USA and Cloud-GE). Table 14 presents the obtained results.
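The availability check described above can be sketched as follows (a minimal Python sketch of the polling logic, not the original script; in the experiment the clock starts at the curl call to the SmartEdge or OpenStack API, and the host, port, and timeout values here are placeholders):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0,
                  interval: float = 0.1) -> float:
    """Poll host:port until a TCP connection succeeds; return elapsed seconds."""
    start = time.monotonic()
    while True:
        try:
            # A successful connect means the service port is open.
            with socket.create_connection((host, port), timeout=1.0):
                return time.monotonic() - start
        except OSError:
            if time.monotonic() - start > timeout:
                raise TimeoutError(f"{host}:{port} not open after {timeout}s")
            time.sleep(interval)
```

The provisioning time reported for each run corresponds to this elapsed value, averaged over the 30 executions.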

Table 14: Application Provisioning Time (seconds)

Platform     Average    Difference    Variance    Confidence Interval
SmartEdge     2.412        --          0.004       [2.391, 2.434]
OpenStack    23.558      +976.7%       8.428       [22.519, 24.596]

As can be seen in Table 14, the time taken for an application to become available using the cloud computing solution is almost ten times higher than when using SmartEdge. The confidence intervals (95% confidence level) also confirm this, as the lowest time necessary to launch an application using the cloud computing solution is almost ten times higher than the highest value when using the SmartEdge solution.

It is important to note that most of this difference is due to the SmartEdge platform being based on container technology to host its applications, while OpenStack uses virtual machines. As explained earlier, the process of booting a virtual machine is much more laborious, as the whole operating system needs to be virtualized. A container, by contrast, does not need to virtualize the whole operating system; part of the host operating system, for example its kernel, is reused by the container. This makes the booting process much simpler and consequently faster.


5.2.3.2 Latency Impact on Request/Response

To analyze how latency affects the request/response performance, we used the same tool employed earlier in the preliminary results to benchmark the network (Netperf [161]). However, this time the benchmarks were performed on servers provisioned by SmartEdge and by OpenStack (on the three available clouds), and the results consist of their comparison.

As explained earlier, Netperf is composed of a client and a server application. The server, responsible for replying to requests, was deployed using the different platforms, while the client, responsible for sending requests, was installed on a desktop PC connected to the network through a Gigabit Ethernet card. The benchmarks were performed over a period of fifteen minutes using the system default socket buffer sizes, a request size of 1 byte, and a response size of 1 byte. The results are depicted in Figure 20.

Figure 20: Impact of latency on a simple Request/Response of 1 byte (log-log plot of transactions/s versus latency in ms, for Cloud-GE, Cloud-USA, Cloud-BR, and Local/SmartEdge).

The graph presented in Figure 20 is logarithmic, for better visualization of the average values of latency and transactions per second. A transaction consists of simply sending a request and receiving a response from the server. Figure 20 clearly shows that the throughput of transactions is highly impacted by latency; they are inversely proportional. As the SmartEdge platform deploys the server application closer to the client, its much better performance in latency-sensitive scenarios is expected. Likewise, as Cloud-GE is the furthest server, it presents the worst performance.
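This inverse relation is what a back-of-the-envelope model predicts: in a synchronous 1-byte benchmark, each transaction must wait a full round trip before the next begins, so the rate is bounded by the reciprocal of the RTT (an illustrative sketch; the RTT figures below are placeholders, not measured values):

```python
def max_transactions_per_s(rtt_ms: float) -> float:
    """Upper bound on synchronous request/response transactions per second.

    Each 1-byte transaction waits one full round trip, so the rate is capped
    at 1000 / RTT(ms) regardless of available bandwidth.
    """
    return 1000.0 / rtt_ms

# Sub-millisecond edge RTTs allow thousands of transactions per second,
# while an intercontinental RTT of 200 ms caps the rate at just 5.
print(max_transactions_per_s(0.5))    # 2000.0
print(max_transactions_per_s(200.0))  # 5.0
```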

These results will also serve as a reference for the following experiments. As we will be using an application that requires the server to process data from the client application before replying, these results make it possible to compare how CPU processing also impacts the latency.

5.2.3.3 Impact of latency on the application QoE and server CPU

A sample application, named Fluid Simulation, was used to analyze how latency impacts the application performance on both the client and server sides. Fluid Simulation is an interactive fluid dynamics simulation that renders a liquid sloshing in a container on the screen of a phone based on accelerometer inputs. The application back-end runs on Linux and performs a smoothed particle hydrodynamics physics simulation using 2218 particles, generating up to 60 frames per second. The structure of this application is representative of real-time (i.e., not turn-based) games. Its binary and code are available at [169].

Fluid Simulation is composed of a front-end Android application and a back-end server. For this evaluation, the back-end was deployed using the different platforms, running Ubuntu Server 16.04 as its operating system. The front-end ran on a Nexus 6P Android smartphone. To collect the metrics, the front-end application needed some modifications for logging them.

The application Quality of Experience (QoE) was classified according to its frame rate

(FPS) as follows:

• High: 48-60 FPS (the application runs smoothly)

• Medium: 35-47 FPS (runs smoothly, slowing down for some milliseconds)

• Low: 15-34 FPS (runs slowly)

• Unusable: 0-14 FPS (runs slowly, freezing for seconds)
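These thresholds can be encoded directly as a small classifier (a sketch; the function name is ours, not part of the evaluation tooling):

```python
def qoe_level(fps: float) -> str:
    """Map a measured frame rate to the QoE classes used in this evaluation."""
    if fps >= 48:
        return "High"      # 48-60 FPS: the application runs smoothly
    if fps >= 35:
        return "Medium"    # 35-47 FPS: brief millisecond-scale slowdowns
    if fps >= 15:
        return "Low"       # 15-34 FPS: runs slowly
    return "Unusable"      # 0-14 FPS: slow, freezing for seconds
```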

For the application back-end, we measured its CPU consumption. The latency between the application front-end and back-end was also measured during the evaluation. The results regarding the application front-end are depicted in Figure 21.


Figure 21: Impact of latency (ms) on the application frames per second (FPS) by provisioning platform (Host, SmartEdge, Cloud-BR, Cloud-USA, Cloud-GE). Only the application deployed using SmartEdge presented a high QoE.

As can be seen in Figure 21, the only scenario meeting a high QoE is the one where SmartEdge hosts the application back-end. This was expected, since the application server is closer to its front-end. We can also notice that even Cloud-BR, the nearest cloud, does not support providing this sample application with high QoE, and almost half of the results from Cloud-BR do not meet even a medium level of QoE.
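This outcome is consistent with a simple model: if every rendered frame requires a round trip to the back-end, the achievable frame rate in a synchronous loop is bounded by min(60, 1000/RTT). This is an illustrative model, not the application's exact pipeline, and the RTT values below are placeholders:

```python
def max_fps(rtt_ms: float, cap: float = 60.0) -> float:
    """Frame-rate bound when every frame waits one round trip to the back-end."""
    return min(cap, 1000.0 / rtt_ms)

# An edge RTT of a few ms leaves the 60 FPS cap intact (High QoE),
# while a 50 ms RTT already limits the loop to 20 FPS (Low QoE).
print(max_fps(5.0))   # 60.0
print(max_fps(50.0))  # 20.0
```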

While running the front-end application, we also measured the CPU utilization of the application back-end on the different deployment hosts. We expected that, as the communication between the front-end and back-end is more intensive (lower latency) in the SmartEdge scenario, the back-end would require more CPU to serve the front-end application. Likewise, we expected the Cloud-GE scenario to have the lowest CPU utilization. Figure 22 shows the results for each scenario.

Figure 22: CPU usage (%) across the different deployment scenarios (SmartEdge, Cloud-BR, Cloud-USA, Cloud-GE).

From Figure 22 it is possible to see that, although the difference is small, the CPU utilization follows the expected pattern of lower latency resulting in higher CPU utilization. In the SmartEdge scenario, the third and fourth quartiles are always higher than 43%; the third quartile of Cloud-BR also falls in this interval, while in the other scenarios they are always lower.

We can also notice from the results presented in Figure 22 that there is no significant difference in CPU utilization between the Cloud-BR and Cloud-USA scenarios. This can be explained by the variation of latency between them. From the SmartEdge scenario to the Cloud-BR scenario there is an increase of about 270% in the average latency, while from Cloud-BR to Cloud-USA the average increase in latency is about 178%, and from Cloud-USA to Cloud-GE the average increase is about 230%. From Cloud-BR to Cloud-USA, the increase in latency was not sufficient to produce a noticeable difference in CPU utilization.
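Chaining the reported relative increases gives the approximate latency ratios between the scenarios, normalized to the SmartEdge latency (a back-of-the-envelope check of the figures above, not additional measurements):

```python
smartedge = 1.0                      # baseline (normalized)
cloud_br  = smartedge * (1 + 2.70)   # +270% from SmartEdge to Cloud-BR
cloud_usa = cloud_br  * (1 + 1.78)   # +178% from Cloud-BR to Cloud-USA
cloud_ge  = cloud_usa * (1 + 2.30)   # +230% from Cloud-USA to Cloud-GE
# Roughly: Cloud-BR ~3.7x, Cloud-USA ~10.3x, Cloud-GE ~34x the edge latency.
```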

5.2.3.4 Impact of resource allocation on the client application QoE

To demonstrate the importance of a mechanism capable of providing resource allocation, this experiment seeks to show how the application is affected by the competition for resources. For this evaluation, two different scenarios were built. In the first, the SmartEdge platform was used to deploy three servers of the Fluid Simulation application (App 1, App 2 and App 3), in the form of Docker containers, on the same node. These containers were deployed allocating 50% of the CPUs (2 CPUs) to the first application (App 1), 25% (1 CPU) to the second (App 2) and the other 25% (1 CPU) to the third application (App 3). In the second scenario, three servers of the Fluid Simulation application were also deployed; however, they were deployed manually on the physical node, without any type of virtualization/isolation, sharing and competing for the node's resources. In summary, the two analyzed scenarios are:

• Scenario 1: With CPU allocation

– Deployment of applications in the form of containers using SmartEdge to provide CPU allocation;

– 2/4 CPUs for App 1;

– 1/4 CPUs for App 2;

– 1/4 CPUs for App 3.

• Scenario 2: Without CPU allocation

– Manual deployment of applications on the physical node;

– No virtualization or isolation for the applications;

– App 1, App 2 and App 3 share the same resources.
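Since Docker is the container runtime (as stated above), the CPU split of Scenario 1 can be expressed with Docker's --cpus flag. The sketch below only builds the command lines; SmartEdge performs this allocation through its own API, and the image name fluidsim-backend is a hypothetical placeholder:

```python
def docker_run_cmd(name: str, image: str, cpus: int) -> list:
    """Build a `docker run` command that caps the container at `cpus` CPUs."""
    return ["docker", "run", "-d", "--name", name, f"--cpus={cpus}", image]

# Scenario 1: 50% / 25% / 25% of the node's 4 CPUs.
commands = [
    docker_run_cmd("app1", "fluidsim-backend", 2),  # 2/4 CPUs for App 1
    docker_run_cmd("app2", "fluidsim-backend", 1),  # 1/4 CPUs for App 2
    docker_run_cmd("app3", "fluidsim-backend", 1),  # 1/4 CPUs for App 3
]
```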

The results for this evaluation are depicted in Figure 23, which presents the confidence interval for the mean, with a confidence level of 95%.

Regarding CPU utilization, it can be seen in Figure 23 that in the first scenario (with CPU allocation) App 1 uses about 5% more CPU than App 2 and 10% more than App 3, which is expected, as App 2 and App 3 have only 1 CPU each while App 1 has 2 CPUs available. In the second scenario, without CPU allocation, all three apps have almost the same CPU utilization, which is also expected, as all three share the same 4 CPUs.

Figure 23: Impact of CPU allocation on the application performance (CPU %, FPS, and latency in ms for App 1, App 2 and App 3). Confidence intervals for the mean, with a confidence level of 95%.

The FPS results presented in Figure 23 demonstrate how important resource allocation is to guarantee a certain level of QoE. Using the QoE levels described earlier, it can be seen that the only scenario where it is possible to offer the service with a high level of QoE is the one where resource allocation is applied. Just by allocating 50% of the available CPUs to App 1, it is possible to guarantee a high level of QoE for this application. Another important observation from the FPS results is that even allocating 1 CPU seems better than allowing all applications to compete for resources. This may be justified by the fact that, when there are many processes competing for a resource, the OS spends some time acquiring the resource for a process and later freeing it; in other words, there is an overhead to manage these shared resources. When 1 CPU is allocated to 1 process, there is no need to manage the resource, so its utilization is much simpler.

The latency results are just a confirmation of the FPS analysis; they are directly related. As App 1 has 2 CPUs available, it can process the input faster, so the reply to the client is faster; consequently, the latency is lower and the FPS is higher. It is important to note that, comparing the CPU utilization of App 3 in both scenarios, although it uses more CPU in the scenario without CPU allocation, it also presents a higher latency. This is another confirmation of the overhead introduced by the management of shared resources: as App 3 spends some time waiting for the resource to become available, this time is accounted for in its latency, and consequently in the FPS of the front-end application.


Chapter 6

Conclusion and Future Work

This work proposes SmartEdge, a platform for provisioning resources at the network edge. The platform seeks to provide a new service model, named Edge Infrastructure as a Service (EIaaS), based on the recent fog computing paradigm, to meet the demands of real-time latency-sensitive applications in the context of IoT.

The main benefit provided by SmartEdge is that it can alleviate the burden on centralized data centers by allowing applications to be deployed at the edge of the network. Consequently, it also gives operators new options for application deployment that support latency-sensitive applications. Moreover, SmartEdge offers a coupled softwarized control plane, allowing operators to apply mechanisms for dynamic allocation of resources on both the network and container planes.

This work also performs a comparative performance evaluation of two virtualization approaches: lightweight virtualization (containers) and hypervisor virtualization (virtual machines). The results showed that containers seem to have an advantage over hypervisor-based virtualization for deploying applications at the network edge, offering a great deal of flexibility under relaxed resource constraints. Another study presented in this work evaluates the proposed solution and compares it with the typical cloud computing model of application deployment. Using a latency-sensitive application, the results demonstrated the importance of the platform for provisioning real-time services. Furthermore, the results clearly depict the enhanced performance of both the fog computing approach and the coupled softwarized control plane in terms of the provisioned QoE. We ultimately justify the fog paradigm as an improved computing platform that can support IoT better than the existing cloud computing paradigm.

By following a coupled-plane approach, the proposed SmartEdge platform makes it possible, through its API, to programmatically specify the amount of resources to be provided for a certain application on both the container and network planes. It also enables applying different resource allocation techniques on demand.

The proposed Edge Infrastructure as a Service (EIaaS) service model seeks to offer a new, edge-computing-tailored cloud service delivery model to efficiently suit the requirements of real-time latency-sensitive IoT applications. With the EIaaS approach, cloud providers can dynamically deploy IoT applications/services on edge computing infrastructures and manage cloud/network resources at run time, as a means to keep IoT applications always best connected and best served.

A generic modular architecture is also provided. This architecture is completely decoupled from the technologies used in our prototype implementation and clearly shows the function of each component, its provided and consumed APIs, and the communication between modules. It seeks to serve as a reference software architecture for developing platforms based on the fog computing approach to support latency-sensitive IoT applications.

Future work involves the safety and reliability of the platform. The same security concerns that apply to current virtualized environments can be foreseen to affect the edge nodes hosting applications. The need for secure sandboxes for the deployment of applications poses new interesting challenges: trust and privacy. Before using other devices or mini-clouds in the network to run some software, isolation and sandboxing mechanisms must be in place to ensure bidirectional trust among cooperating parties. The fog will allow applications to process users' data on third-party hardware/software, which naturally introduces strong concerns about data privacy and its visibility to those third parties.

Another direction for future work is programmability. Controlling the application lifecycle is already a challenge in cloud environments [170]. The presence of small functional units in more locations (devices) calls for the right abstractions to be in place, so that programmers do not need to deal with these difficult issues [171]. Although SmartEdge provides easy-to-use APIs that allow programmers to control the container and network planes, mechanisms that provide the right abstractions to hide the massive complexity of the fog and automate functions like resource allocation are envisioned.

The proposed platform also enables the development of techniques for dynamic resource allocation at the edge, on both the container and network planes. The development of such techniques is important due to the heterogeneity of the environment, the different application requirements, and the limited hardware resources available at the edge of the network. Using these resources intelligently is key to application performance.

Finally, accountability is also an important field to be studied. Enabling users to share their spare resources to host applications is crucial to enabling new business models around the concept of the fog, such as the introduced EIaaS. A proper system of incentives needs to be created; these incentives can be financial or technical (e.g., unlimited free data rates). This would ensure the availability of edge nodes in a variety of locations.

Page 101: SmartEdge: Fog Computing Cloud Extensions to Support Latency … · 2017-11-04 · Our irreplaceable dependency on cloud computing, demands DC infrastructures always available while

99

References

[1] H. Schaffers, N. Komninos, M. Pallot, B. Trousse, M. Nilsson, and A. Oliveira, SmartCities and the Future Internet: Towards Cooperation Frameworks for Open Innovation,pp. 431–446. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.

[2] R. G. Hollands, “Will the real smart city please stand up?,” City, vol. 12, no. 3, pp. 303–320, 2008.

[3] J. Belissent, “Getting clever about smart cities: New opportunities require new businessmodels,” Forrester Research, 2010.

[4] L. Atzori, A. Iera, and G. Morabito, “The internet of things: A survey,” Computer net-works, vol. 54, no. 15, pp. 2787–2805, 2010.

[5] F. L. Lewis et al., “Wireless sensor networks,” Smart environments: technologies, proto-cols, and applications, pp. 11–46, 2004.

[6] T. Yucek and H. Arslan, “A survey of spectrum sensing algorithms for cognitive radioapplications,” IEEE communications surveys & tutorials, vol. 11, no. 1, pp. 116–130,2009.

[7] P. Mell and T. Grance, “The nist definition of cloud computing,” 2011.

[8] R. Buyya, C. S. Yeo, and S. Venugopal, “Market-oriented cloud computing: Vision, hype,and reality for delivering it services as computing utilities,” in High Performance Com-puting and Communications, 2008. HPCC’08. 10th IEEE International Conference on,pp. 5–13, Ieee, 2008.

[9] L. Piras, “A brief history of the internet of things.” http://www.psfk.com/2014/03/internet-ofthings-infographic.html, 2014. [Infographic] Accessed: 2016-05-26.

[10] D. H. Joseph Bradley, Joel Barbier, “Internet of things market forecast: Cisco.” http://postscapes.com/internet-of-things-market-size, 2013. [Online] Accessed:2016-07-01.

[11] F. S. D. Silva, A. Neto, D. Maciel, J. Castillo-Lema, F. Silva, P. Frosi, and E. Cerqueira,“An innovative software-defined winemo architecture for advanced qos-guaranteed mo-bile service transport,” Computer Networks, 2016.

[12] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, “Fog computing and its role in the internetof things,” in Proceedings of the first edition of the MCC workshop on Mobile cloudcomputing, pp. 13–16, ACM, 2012.

[13] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: Vision and challenges,”IEEE Internet of Things Journal, no. 99, 2016.

Page 102: SmartEdge: Fog Computing Cloud Extensions to Support Latency … · 2017-11-04 · Our irreplaceable dependency on cloud computing, demands DC infrastructures always available while

100

[14] M. Yannuzzi, R. Milito, R. Serral-Gracià, D. Montero, and M. Nemirovsky, “Key ingre-dients in an iot recipe: Fog computing, cloud computing, and more fog computing,” in2014 IEEE 19th International Workshop on Computer Aided Modeling and Design ofCommunication Links and Networks (CAMAD), pp. 325–329, IEEE, 2014.

[15] D. Bouley, “Estimating a data center’s electrical carbon footprint,” white paper, vol. 66,2010.

[16] R. Brown et al., “Report to congress on server and data center energy efficiency: Publiclaw 109-431,” Lawrence Berkeley National Laboratory, 2008.

[17] L. Shen, S. Cheng, A. J. Gunson, and H. Wan, “Urbanization, sustainability and theutilization of energy and mineral resources in china,” Cities, vol. 22, no. 4, pp. 287–302,2005.

[18] I. P. on Climate Change, Climate Change 2014–Impacts, Adaptation and Vulnerability:Regional Aspects. Cambridge University Press, 2014.

[19] G. A. Meehl, T. F. Stocker, W. D. Collins, P. Friedlingstein, A. T. Gaye, J. M. Gregory,A. Kitoh, R. Knutti, J. M. Murphy, A. Noda, et al., “Global climate projections,” Climatechange, vol. 3495, pp. 747–845, 2007.

[20] C. Perera, A. Zaslavsky, P. Christen, and D. Georgakopoulos, “Sensing as a service modelfor smart cities supported by internet of things,” Transactions on Emerging Telecommu-nications Technologies, vol. 25, no. 1, pp. 81–93, 2014.

[21] K. Ashton, “That ’internet of things’ thing,” RFiD Journal, vol. 22, no. 7, pp. 97–114,2009.

[22] A. Whitmore, A. Agarwal, and L. Da Xu, “The internet of things—a survey of topics andtrends,” Information Systems Frontiers, vol. 17, no. 2, pp. 261–274, 2015.

[23] S. C. B. Intelligence, “Disruptive civil technologies–six technologies with potential im-pacts on us interests out to 2025 cr 2008-07.–34 s., 1 abb., 6 tab., 6 anh,” Washington(National Intelligence Council), 2008.

[24] ITU, “Overview of the internet of things,” Telecommunication Standardization Sector OfITU, 2012.

[25] D. He and S. Zeadally, “An analysis of rfid authentication schemes for internet of thingsin healthcare environment using elliptic curve cryptography,” IEEE internet of thingsjournal, vol. 2, no. 1, pp. 72–83, 2015.

[26] I. Akyildiz, M. Pierobon, S. Balasubramaniam, and Y. Koucheryavy, “The internet ofbio-nano things,” IEEE Communications Magazine, vol. 53, no. 3, pp. 32–40, 2015.

[27] A. Aijaz and A. H. Aghvami, “Cognitive machine-to-machine communications forinternet-of-things: a protocol stack perspective,” IEEE Internet of Things Journal, vol. 2,no. 2, pp. 103–112, 2015.

[28] J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami, “Internet of things (iot): A vi-sion, architectural elements, and future directions,” Future Generation Computer Sys-tems, vol. 29, no. 7, pp. 1645–1660, 2013.

Page 103: SmartEdge: Fog Computing Cloud Extensions to Support Latency … · 2017-11-04 · Our irreplaceable dependency on cloud computing, demands DC infrastructures always available while

101

[29] J. Jin, J. Gubbi, S. Marusic, and M. Palaniswami, “An information framework for creatinga smart city through internet of things,” IEEE Internet of Things Journal, vol. 1, no. 2,pp. 112–121, 2014.

[30] C. Perera, C. H. Liu, S. Jayawardena, and M. Chen, “A survey on internet of things fromindustrial market perspective,” IEEE Access, vol. 2, pp. 1660–1679, 2014.

[31] J. Wei, “How wearables intersect with the cloud and the internet of things: Considerationsfor the developers of wearables.,” IEEE Consumer Electronics Magazine, vol. 3, no. 3,pp. 53–56, 2014.

[32] L. Wang and R. Ranjan, “Processing distributed internet of things data in clouds.,” IEEECloud Computing, vol. 2, no. 1, pp. 76–80, 2015.

[33] X. Zheng, P. Martin, K. Brohman, and L. Da Xu, “Cloudqual: a quality model for cloudservices,” IEEE Transactions on Industrial Informatics, vol. 10, no. 2, pp. 1527–1536,2014.

[34] X. Zheng, P. Martin, K. Brohman, and L. Da Xu, “Cloud service negotiation in internetof things environment: A mixed approach,” IEEE Transactions on Industrial Informatics,vol. 10, no. 2, pp. 1506–1515, 2014.

[35] S. U. Khan, P. Bouvry, and T. Engel, “Energy-efficient high-performance parallel anddistributed computing,” The Journal of Supercomputing, vol. 60, no. 2, pp. 163–164,2012.

[36] K. Bilal, S. U. R. Malik, S. U. Khan, and A. Y. Zomaya, “Trends and challenges in clouddatacenters,” Growth, vol. 10, p. 5, 2014.

[37] N. Antonopoulos and L. Gillam, Cloud computing: Principles, systems and applications.Springer Science & Business Media, 2010.

[38] B. Halpert, Auditing cloud computing: A security and privacy guide, vol. 21. John Wiley& Sons, 2011.

[39] W. Forrest and C. Barthold, “Clearing the air on cloud computing,” Discussion Documentfrom McKinsey and Company, 2009.

[40] R. Buyya, J. Broberg, and A. M. Goscinski, Cloud computing: Principles and paradigms,vol. 87. John Wiley & Sons, 2010.

[41] L. M. Vaquero, L. Rodero-Merino, J. Caceres, and M. Lindner, “A break in the clouds:towards a cloud definition,” ACM SIGCOMM Computer Communication Review, vol. 39,no. 1, pp. 50–55, 2008.

[42] D. C. Marinescu, Cloud computing: theory and practice. Newnes, 2013.

[43] “Cloud offering: Comparison between iaas, paas,saas, baas.” https://assist-software.net/blog/cloud-offering-comparison-between-iaas-paas-saas-baas, 2015. [Online]Accessed: 2016-12-04.

[44] “Amazon Web Services.” https://aws.amazon.com/. [Online] Accessed: 2016-07-01.

Page 104: SmartEdge: Fog Computing Cloud Extensions to Support Latency … · 2017-11-04 · Our irreplaceable dependency on cloud computing, demands DC infrastructures always available while

102

[45] “Google Cloud Platform.” https://cloud.google.com/. [Online] Accessed: 2016-07-01.

[46] “Rackspace Public Cloud.” https://www.rackspace.com/cloud. [Online] Accessed:2016-07-01.

[47] Q. Zhang, L. Cheng, and R. Boutaba, “Cloud computing: state-of-the-art and researchchallenges,” Journal of internet services and applications, vol. 1, no. 1, pp. 7–18, 2010.

[48] “Amazon Virtual Private Cloud.” https://aws.amazon.com/vpc/. [Online] Accessed:2016-07-01.

[49] “Rackspace Managed Cloud.” https://www.rackspace.com/managed-cloud. [On-line] Accessed: 2016-07-01.

[50] “HP Helion Managed Virtual Private Cloud.” http://www8.hp.com/us/en/business-solutions/solution.html?compURI=1762950#.V3asX3UrKkA. [Online]Accessed: 2016-07-01.

[51] “Hybrid clouds: The best of both worlds.” http://www.nskinc.com/hybrid-clouds-the-best-of-both-worlds, 2013. [Online] Accessed: 2016-12-04.

[52] Q. Duan, Y. Yan, and A. V. Vasilakos, “A survey on service-oriented network virtual-ization toward convergence of networking and cloud computing,” IEEE Transactions onNetwork and Service Management, vol. 9, no. 4, pp. 373–392, 2012.

[53] T. H. Noor, Q. Z. Sheng, A. H. Ngu, and S. Dustdar, “Analysis of web-scale cloud ser-vices,” IEEE Internet Computing, vol. 18, no. 4, pp. 55–61, 2014.

[54] A. V. Dastjerdi and R. Buyya, “Compatibility-aware cloud service composition under fuzzy preferences of users,” IEEE Transactions on Cloud Computing, vol. 2, no. 1, pp. 1–13, 2014.

[55] J. Xiao, H. Wen, B. Wu, X. Jiang, P.-H. Ho, and L. Zhang, “Joint design on dcn placement and survivable cloud service provision over all-optical mesh networks,” IEEE Transactions on Communications, vol. 62, no. 1, pp. 235–245, 2014.

[56] W. Chen, J. Cao, and Y. Wan, “Qos-aware virtual machine scheduling for video streaming services in multi-cloud,” Tsinghua Science and Technology, vol. 18, no. 3, pp. 308–317, 2013.

[57] N. Tziritas, S. U. Khan, C.-Z. Xu, T. Loukopoulos, and S. Lalis, “On minimizing the resource consumption of cloud applications using process migrations,” Journal of Parallel and Distributed Computing, vol. 73, no. 12, pp. 1690–1704, 2013.

[58] F. Zhang, J. Cao, W. Tan, S. U. Khan, K. Li, and A. Y. Zomaya, “Evolutionary scheduling of dynamic multitasking workloads for big-data analytics in elastic cloud,” IEEE Transactions on Emerging Topics in Computing, vol. 2, no. 3, pp. 338–351, 2014.

[59] D. Kliazovich, S. T. Arzo, F. Granelli, P. Bouvry, and S. U. Khan, “e-stab: Energy-efficient scheduling for cloud computing applications with traffic load balancing,” in 2013 IEEE International Conference on Green Computing and Communications (GreenCom) and Internet of Things (iThings/CPSCom) and IEEE Cyber, Physical and Social Computing, pp. 7–13, IEEE, 2013.

[60] F. Bonomi, R. Milito, P. Natarajan, and J. Zhu, “Fog computing: A platform for internet of things and analytics,” in Big Data and Internet of Things: A Roadmap for Smart Environments, pp. 169–186, Springer, 2014.

[61] F. Bonomi, “The smart and connected vehicle and the internet of things,” in Workshop on Synchronization in Telecommunication Systems (WSTS), 2013.

[62] R. LaMothe, “Edge computing,” 2013.

[63] D. J. Lillethun, D. Hilley, S. Horrigan, and U. Ramachandran, “Mb++: An integrated architecture for pervasive computing and high-performance computing,” in RTCSA, vol. 7, pp. 241–248, 2007.

[64] M. Satyanarayanan, P. Bahl, R. Caceres, and N. Davies, “The case for vm-based cloudlets in mobile computing,” IEEE Pervasive Computing, vol. 8, no. 4, pp. 14–23, 2009.

[65] K. Hong, D. Lillethun, U. Ramachandran, B. Ottenwälder, and B. Koldehofe, “Mobile fog: A programming model for large-scale applications on the internet of things,” in Proceedings of the Second ACM SIGCOMM Workshop on Mobile Cloud Computing, pp. 15–20, ACM, 2013.

[66] C. Papagianni, A. Leivadeas, and S. Papavassiliou, “A cloud-oriented content delivery network paradigm: Modeling and assessment,” IEEE Transactions on Dependable and Secure Computing, vol. 10, no. 5, pp. 287–300, 2013.

[67] B. Kleyman, “Welcome to fog computing: Extending the cloud to the edge,” URL: http://www.cisco.com, 2013.

[68] E. Rudenko, “Fog computing is a new concept of data distribution.” http://cloudtweaks.com/2013/12/fog-computing-is-a-new-concept-of-data-distribution/, 2013. [Online] Accessed: 2016-07-01.

[69] J. Zhu, D. S. Chan, M. S. Prabhu, P. Natarajan, H. Hu, and F. Bonomi, “Improving web sites performance using edge servers in fog computing architecture,” in Service Oriented System Engineering (SOSE), 2013 IEEE 7th International Symposium on, pp. 320–323, IEEE, 2013.

[70] H. Madsen, B. Burtschy, G. Albeanu, and F. Popentiu-Vladicescu, “Reliability in the utility computing era: Towards reliable fog computing,” in 2013 20th International Conference on Systems, Signals and Image Processing (IWSSIP), pp. 43–46, IEEE, 2013.

[71] C. T. Do, N. H. Tran, C. Pham, M. G. R. Alam, J. H. Son, and C. S. Hong, “A proximal algorithm for joint resource allocation and minimizing carbon footprint in geo-distributed fog computing,” in 2015 International Conference on Information Networking (ICOIN), pp. 324–329, IEEE, 2015.

[72] M. Aazam and E.-N. Huh, “Fog computing micro datacenter based dynamic resource estimation and pricing model for iot,” in 2015 IEEE 29th International Conference on Advanced Information Networking and Applications, pp. 687–694, IEEE, 2015.

[73] I. Stojmenovic and S. Wen, “The fog computing paradigm: Scenarios and security issues,” in Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 1–8, IEEE, 2014.

[74] S. J. Stolfo, M. B. Salem, and A. D. Keromytis, “Fog computing: Mitigating insider data theft attacks in the cloud,” in Security and Privacy Workshops (SPW), 2012 IEEE Symposium on, pp. 125–128, IEEE, 2012.

[75] S. Cirani, G. Ferrari, N. Iotti, and M. Picone, “The iot hub: a fog node for seamless management of heterogeneous connected smart objects,” in Sensing, Communication, and Networking-Workshops (SECON Workshops), 2015 12th Annual IEEE International Conference on, pp. 1–6, IEEE, 2015.

[76] R. Ranjan, “The cloud interoperability challenge,” IEEE Cloud Computing, vol. 1, no. 2, pp. 20–24, 2014.

[77] D. Merkel, “Docker: lightweight linux containers for consistent development and deployment,” Linux Journal, vol. 2014, no. 239, p. 2, 2014.

[78] B. Pariseau, “Kvm reignites type 1 vs. type 2 hypervisor debate.” http://searchservervirtualization.techtarget.com/news/2240034817/KVM-reignites-Type-1-vs-Type-2-hypervisor-debate/, 2011. [Online] Accessed: 2016-07-01.

[79] “Virtual Box.” https://www.virtualbox.org/. [Online] Accessed: 2016-07-01.

[80] “Linux Containers.” https://linuxcontainers.org/. [Online] Accessed: 2016-07-01.

[81] “Docker.” https://www.docker.com/. [Online] Accessed: 2016-07-01.

[82] “Apache Web Server.” https://httpd.apache.org/. [Online] Accessed: 2016-07-01.

[83] “Python.” https://www.python.org/. [Online] Accessed: 2016-07-01.

[84] “MariaDB.” https://mariadb.org/. [Online] Accessed: 2016-07-01.

[85] R. Jain and S. Paul, “Network virtualization and software defined networking for cloud computing: a survey,” IEEE Communications Magazine, vol. 51, no. 11, pp. 24–31, 2013.

[86] “KVM.” http://www.linux-kvm.org/page/Main_Page. [Online] Accessed: 2016-07-01.

[87] “Linux Control Groups.” https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt. [Online] Accessed: 2016-07-01.

[88] “Etcd.” https://coreos.com/etcd/. [Online] Accessed: 2016-11-06.

[89] “Consul.” https://www.consul.io/. [Online] Accessed: 2016-11-06.

[90] “Zookeeper.” https://zookeeper.apache.org/. [Online] Accessed: 2016-11-06.

[91] “Crypt.” http://xordataexchange.github.io/crypt/. [Online] Accessed: 2016-11-06.

[92] “ConfD.” http://www.tail-f.com/confd-netconf-server/. [Online] Accessed: 2016-11-06.

[93] “Fleet.” https://coreos.com/fleet. [Online] Accessed: 2016-11-06.

[94] “Marathon.” https://mesosphere.github.io/marathon/. [Online] Accessed: 2016-11-06.

[95] “Docker Swarm Container Orchestrator.” https://docs.docker.com/swarm/. [Online] Accessed: 2016-07-01.

[96] “Apache Mesos.” http://mesos.apache.org/. [Online] Accessed: 2016-07-01.

[97] “Kubernetes Container Orchestration.” http://kubernetes.io/. [Online] Accessed: 2016-07-01.

[98] “Compose.” https://www.docker.com/products/docker-compose. [Online] Accessed: 2016-11-06.

[99] H. Kim and N. Feamster, “Improving network management with software defined networking,” IEEE Communications Magazine, vol. 51, no. 2, pp. 114–119, 2013.

[100] T. A. Limoncelli, “Openflow: a radical new idea in networking,” Queue, vol. 10, no. 6, p. 40, 2012.

[101] “Open Networking Foundation.” https://www.opennetworking.org/index.php. [Online] Accessed: 2016-07-01.

[102] “Floodlight SDN Controller.” http://www.projectfloodlight.org/floodlight/. [Online] Accessed: 2016-07-01.

[103] “OpenDaylight SDN Platform.” https://www.opendaylight.org/. [Online] Accessed: 2016-07-01.

[104] “OpenFlow Protocol.” https://www.opennetworking.org/sdn-resources/openflow. [Online] Accessed: 2016-07-01.

[105] “onePK.” https://developer.cisco.com/site/onepk/. [Online] Accessed: 2016-07-01.

[106] Z. Qin, G. Denker, C. Giannelli, P. Bellavista, and N. Venkatasubramanian, “A software defined networking architecture for the internet-of-things,” in 2014 IEEE Network Operations and Management Symposium (NOMS), pp. 1–9, IEEE, 2014.

[107] M. Al-Fares, S. Radhakrishnan, B. Raghavan, N. Huang, and A. Vahdat, “Hedera: Dynamic flow scheduling for data center networks,” in NSDI, vol. 10, pp. 19–19, 2010.

[108] M. Ghobadi, S. H. Yeganeh, and Y. Ganjali, “Rethinking end-to-end congestion control in software-defined networks,” in Proceedings of the 11th ACM Workshop on Hot Topics in Networks, pp. 61–66, ACM, 2012.

[109] N. Handigol, M. Flajslik, S. Seetharaman, N. McKeown, and R. Johari, “Aster*x: Load-balancing as a network primitive,” in 9th GENI Engineering Conference (Plenary), pp. 1–2, 2010.

[110] B. Heller, S. Seetharaman, P. Mahadevan, Y. Yiakoumis, P. Sharma, S. Banerjee, and N. McKeown, “Elastictree: Saving energy in data center networks,” in NSDI, vol. 10, pp. 249–264, 2010.

[111] A. D. Ferguson, A. Guha, C. Liang, R. Fonseca, and S. Krishnamurthi, “Participatory networking: An api for application control of sdns,” in ACM SIGCOMM Computer Communication Review, vol. 43, pp. 327–338, ACM, 2013.

[112] A. Sydney, The evaluation of software defined networking for communication and control of cyber physical systems. Kansas State University, 2013.

[113] Y. Yiakoumis, K.-K. Yap, S. Katti, G. Parulkar, and N. McKeown, “Slicing home networks,” in Proceedings of the 2nd ACM SIGCOMM Workshop on Home Networks, pp. 1–6, ACM, 2013.

[114] T. Luo, H.-P. Tan, and T. Q. Quek, “Sensor openflow: Enabling software-defined wireless sensor networks,” IEEE Communications Letters, vol. 16, no. 11, pp. 1896–1899, 2014.

[115] “Hypriot.” http://blog.hypriot.com/. [Online] Accessed: 2016-07-01.

[116] “Resin.io.” https://resin.io/. [Online] Accessed: 2016-07-01.

[117] “AWS IoT.” https://aws.amazon.com/iot/. [Online] Accessed: 2016-07-01.

[118] A. Banks and R. Gupta, “Mqtt version 3.1. 1,” OASIS standard, 2014.

[119] “Amazon Simple Notification Service.” https://aws.amazon.com/pt/sns/. [Online] Accessed: 2016-07-01.

[120] “Arduino.” https://www.arduino.cc/. [Online] Accessed: 2016-07-01.

[121] “Raspberry Pi.” https://www.raspberrypi.org/. [Online] Accessed: 2016-07-01.

[122] “Cisco IOx.” http://www.cisco.com/c/en/us/products/collateral/cloud-systems-management/iox/datasheet-c78-736767.html. [Online] Accessed: 2016-07-01.

[123] “CloudOne.” http://oncloudone.com/iot-tools/. [Online] Accessed: 2016-07-01.

[124] “Thing+ Cloud.” https://thingplus.net/en/platform-en/. [Online] Accessed: 2016-07-01.

[125] “Microsoft Azure Cloud Platform.” https://azure.microsoft.com/. [Online] Accessed: 2016-07-01.

[126] “FIWARE.” https://www.fiware.org/. [Online] Accessed: 2016-07-01.

[127] “IBM Watson IoT.” http://www.ibm.com/internet-of-things/. [Online] Accessed: 2016-07-01.

[128] R. Jain and S. Paul, “Network virtualization and software defined networking for cloud computing: a survey,” IEEE Communications Magazine, vol. 51, pp. 24–31, November 2013.

[129] C. Dixon, D. Olshefski, V. Jain, C. DeCusatis, W. Felter, J. Carter, M. Banikazemi, V. Mann, J. M. Tracey, and R. Recio, “Software defined networking to support the software defined environment,” IBM Journal of Research and Development, vol. 58, pp. 3:1–3:14, March 2014.

[130] “OpenStack Cloud Computing Platform.” http://www.openstack.org/. [Online] Accessed: 2016-07-01.

[131] B. A. A. Nunes, M. Mendonca, X. N. Nguyen, K. Obraczka, and T. Turletti, “A survey of software-defined networking: Past, present, and future of programmable networks,” IEEE Communications Surveys Tutorials, vol. 16, pp. 1617–1634, Thirdquarter 2014.

[132] F. Hu, Q. Hao, and K. Bao, “A survey on software-defined network and openflow: From concept to implementation,” IEEE Communications Surveys Tutorials, vol. 16, pp. 2181–2206, Fourthquarter 2014.

[133] H. E. Egilmez, B. Gorkemli, A. M. Tekalp, and S. Civanlar, “Scalable video streaming over openflow networks: An optimization framework for qos routing,” in 2011 18th IEEE International Conference on Image Processing, pp. 2241–2244, Sept 2011.

[134] H. E. Egilmez, S. T. Dane, K. T. Bagci, and A. M. Tekalp, “Openqos: An openflow controller design for multimedia delivery with end-to-end quality of service over software-defined networks,” in Proceedings of The 2012 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, pp. 1–8, Dec 2012.

[135] H. Kumar, H. H. Gharakheili, and V. Sivaraman, “User control of quality of experience in home networks using sdn,” in 2013 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), pp. 1–6, Dec 2013.

[136] J. Liu, S. Zhang, N. Kato, H. Ujikawa, and K. Suzuki, “Device-to-device communications for enhancing quality of experience in software defined multi-tier lte-a networks,” IEEE Network, vol. 29, pp. 46–52, July 2015.

[137] Z. Qin, G. Denker, C. Giannelli, P. Bellavista, and N. Venkatasubramanian, “A software defined networking architecture for the internet-of-things,” in 2014 IEEE Network Operations and Management Symposium (NOMS), pp. 1–9, May 2014.

[138] K. Liang, L. Zhao, X. Chu, and H. Chen, “An integrated architecture for software defined and virtualized radio access networks with fog computing,” pp. 1–17, November 2016.

[139] “Open Source Initiative Licenses & Standards.” https://opensource.org/licenses. [Online] Accessed: 2016-07-01.

[140] “CloudStack.” https://cloudstack.apache.org/. [Online] Accessed: 2016-11-12.

[141] “OpenNebula.” http://opennebula.org/. [Online] Accessed: 2016-11-12.

[142] “Puppet.” https://puppet.com/. [Online] Accessed: 2016-07-01.

[143] “Ansible IT Automation.” https://www.ansible.com/. [Online] Accessed: 2016-07-01.

[144] “Chef IT Automation.” https://www.chef.io/chef/. [Online] Accessed: 2016-07-01.

[145] A. Balalaie, A. Heydarnoori, and P. Jamshidi, “Microservices architecture enables devops: Migration to a cloud-native architecture,” IEEE Software, vol. 33, no. 3, pp. 42–52, 2016.

[146] A. Krylovskiy, “Internet of things gateways meet linux containers: Performance evaluation and discussion,” in Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, pp. 222–227, IEEE, 2015.

[147] “Ansible Galaxy.” https://galaxy.ansible.com/. [Online] Accessed: 2016-07-01.

[148] “Shipyard.” https://shipyard-project.com/. [Online] Accessed: 2016-11-12.

[149] “Docker Remote API.” https://docs.docker.com/reference/api/docker_remote_api/#docker-remote-api. [Online] Accessed: 2016-11-12.

[150] “Docker Hub.” https://hub.docker.com/. [Online] Accessed: 2016-11-12.

[151] “Cubieboard2 is here.” http://cubieboard.org/2013/06/19/cubieboard2-is-here/. [Online] Accessed: 2016-07-01.

[152] “Allwinner A20.” http://linux-sunxi.org/A20. [Online] Accessed: 2016-07-01.

[153] “NBench.” http://www.tux.org/mayer/linux/bmark.html. [Online] Accessed: 2016-07-01.

[154] “SysBench.” https://github.com/akopytov/sysbench. [Online] Accessed: 2016-07-01.

[155] “High-Performance Linpack Benchmark.” http://www.netlib.org/benchmark/hpl/. [Online] Accessed: 2016-07-01.

[156] “HPC Challenge Benchmark.” https://packages.debian.org/jessie/hpcc. [Online] Accessed: 2016-07-01.

[157] “Bonnie++.” https://www.coker.com.au/bonnie++/. [Online] Accessed: 2016-07-01.

[158] “DD.” http://www.orangetide.com/Unix/V7/usr/man/man1/dd.1. [Online] Accessed: 2016-07-01.

[159] R. Morabito, J. Kjällman, and M. Komu, “Hypervisors vs. lightweight virtualization: a performance comparison,” in Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 386–393, IEEE, 2015.

[160] “STREAM.” https://www.cs.virginia.edu/stream/ref.html. [Online] Accessed: 2016-07-01.

[161] R. Jones, “Netperf.” http://www.netperf.org/netperf/. [Online] Accessed: 2016-07-01.

[162] “Cloud LSD.” https://cloud.lsd.ufcg.edu.br/. [Online] Accessed: 2016-11-21.

[163] “TryStack.” http://trystack.org/. [Online] Accessed: 2016-07-01.

[164] “TryStack.org – A Sandbox for OpenStack.” http://www.openstack.org/blog/2012/02/trystack-org-a-sandbox-for-openstack/. [Online] Accessed: 2016-11-21.

[165] “Dresden University of Technology.” https://tu-dresden.de/. [Online] Accessed: 2016-11-23.

[166] “Ubuntu Cloud Images.” https://cloud-images.ubuntu.com/. [Online] Accessed: 2016-11-23.

[167] “Ubuntu Pi Flavor Maker.” https://ubuntu-pi-flavour-maker.org/download/. [Online] Accessed: 2016-11-23.

[168] “Curl: command line tool and library.” https://curl.haxx.se/. [Online] Accessed: 2016-11-23.

[169] “Fluid Simulation.” https://github.com/cmusatyalab/elijah-provisioning. [Online] Accessed: 2016-11-23.

[170] L. M. Vaquero, D. Morán, F. Galán, and J. M. Alcaraz-Calero, “Towards runtime reconfiguration of application control policies in the cloud,” Journal of Network and Systems Management, vol. 20, no. 4, pp. 489–512, 2012.

[171] K. Hong, D. Lillethun, U. Ramachandran, B. Ottenwälder, and B. Koldehofe, “Mobile fog: A programming model for large-scale applications on the internet of things,” in Proceedings of the Second ACM SIGCOMM Workshop on Mobile Cloud Computing, MCC ’13, (New York, NY, USA), pp. 15–20, ACM, 2013.