
Docker Swarm secrets for creating great FIWARE platforms

Federico M. Facca

email: [email protected]

twitter: @chicco785


What will you learn?

How to deploy enablers using cloud architecture patterns

How to apply cloud patterns using Docker

How to deploy on multiple hardware architectures

Where to find examples for some enablers


In case you want to run the demo at the end of the talk

If the internet is good enough...

Install VirtualBox

• https://www.virtualbox.org/wiki/Downloads

Install Docker

• https://docs.docker.com/engine/installation/

Install Docker Machine

• https://docs.docker.com/machine/install-machine/

Create a Swarm Cluster (https://github.com/aelsabbahy/miniswarm)

• curl -sSL https://raw.githubusercontent.com/aelsabbahy/miniswarm/master/miniswarm -o /usr/local/bin/miniswarm

• chmod +rx /usr/local/bin/miniswarm # As root

• miniswarm start 3 # 1 manager 2 workers

Clone the recipes

• git clone https://github.com/smartsdk/smartsdk-recipes
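Once the cluster is up, a quick sanity check (a sketch: miniswarm names the manager ms-manager0 by default, but verify with docker-machine ls):

# point your Docker client at the Swarm manager created by miniswarm
eval $(docker-machine env ms-manager0)
# should list 1 manager and 2 workers, all "Ready"
docker node ls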


Does this work only with Docker Swarm?

The code you will find in the repository is for Docker Swarm.

The principles are generic and can be applied to different containerized (or not) platforms.


Why Docker Swarm instead of Kubernetes?

K8S is more production ready

• More advanced features (e.g. autoscaling).

Swarm is simpler

• It is included in Docker (N.B. K8S will be soon)

• It is more suitable for “educational” purposes

• It runs better on a Raspberry Pi Zero

Learn and understand the basics


What are cloud patterns? Why is it important to master them?

Virtualization <> Cloudification


Cattle vs Pets

[Figure: Cattle (Cloud Native Applications) vs Pets (Legacy Applications)]


Monolith vs Modern SOA (aka microservices)

Monolith architectures run all their services in a single process.

Monolith architectures may scale by replicating the whole “monolith” on different servers.

Microservice architectures run each functionality in a separate (possibly stateless) process.

Microservices scale individually by distributing instances on different servers.


Cloud Architecture Patterns are the path to move from Pets to Cattle

... i.e. achieve

• Service resiliency

• Flexible scalability

• Lower latency


High Availability


Scalability

[Figure: Horizontal Scaling vs Vertical Scaling]


Multisite


Queue Centric Workflow

[Figure: Producers → Message Queue → Consumers]


Stateless vs Stateful services

Stateless services

• The output of the service depends only on the input

• Easy to scale and distribute

Stateful

• The output of the service depends on the input and on a set of information stored by the service itself

• Not so easy to scale and distribute (maintaining a consistent state)


CAP Theorem

The CAP theorem states that it is impossible for a distributed computer system to simultaneously provide all three of the following guarantees:

• Consistency: every read receives the most recent write or an error

• Availability: every request receives a response, without the guarantee that it contains the most recent version of the information

• Partition tolerance: the system continues to operate despite an arbitrary number of messages being dropped by the network between nodes

I.e. when you implement HA in a stateful service, you can choose between being CA, AP, or CP. In general you strive for AP and eventual consistency.

From concepts to practice


Context Broker

Context Broker is perhaps the most used GE.

It includes two components:

• The API

• The Backend

The API is HTTP based.

The Backend is based on MongoDB.

How to make it highly available?

• An easy crossover mechanism for HTTP APIs is a Load Balancer

• MongoDB has its own HA mechanism (replica sets)

[Figure: Context Broker in front of MongoDB]


Context Broker: Target architecture

[Figure: three Context Broker instances behind load balancers (LB) sharing a Virtual IP, each backed by a MongoDB instance; the MongoDB instances form a replica set]

MongoDB replica set:

1. Provides highly available and partition tolerant distributed data

2. Eventually consistent

3. MongoDB HA uses a quorum mechanism to evaluate consistency, so the number of replicas has to be odd (the max is actually 7)

Load Balancer:

1. Provides the reliable crossover (i.e. transparent access to the different instances)

2. Provides transparent failure detection

3. Relies on a virtual IP mechanism

Context Broker:

1. N instances of Context Broker, removing the single point of failure
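For intuition, this is what forming the replica set boils down to if done by hand (a minimal sketch assuming three reachable mongod instances named mongo1..mongo3; in the recipes a sidecar controller, shown later, automates this):

# run once against any one of the mongod instances; hostnames are assumptions
mongo --host mongo1 --eval '
  rs.initiate({
    _id: "rs",
    members: [
      { _id: 0, host: "mongo1:27017" },
      { _id: 1, host: "mongo2:27017" },
      { _id: 2, host: "mongo3:27017" }
    ]
  })'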


Context Broker: How to implement that in Docker Swarm?

The Load Balancer

• It is the easy part: Docker Swarm implements a simple load balancing mechanism

Context Broker API HA

• Context Broker is stateless, so we don’t have to worry about data

• We create a service (using replicated mode to scale it up and down)

• We leverage health checks to evaluate the health of each instance

MongoDB

• Now things get complex... Recall the CAP theorem

version: '3'
services:
  orion:
    image: fiware/orion:${ORION_VERSION:-1.7.0}
    ports:
      - "1026:1026"
    command: -logLevel DEBUG -dbhost ${MONGO_SERVICE_URI:-"mongo-rs_mongo"} -rplSet ${REPLICASET_NAME:-rs} -dbTimeout 10000
    deploy:
      replicas: 2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://0.0.0.0:1026/version"]
      interval: 1m
      timeout: 10s
      retries: 3
    networks:
      ...
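Deploying the stack above is then a one-liner (a sketch: the compose file name and the stack name are assumptions):

# deploy the compose file as a Swarm stack
docker stack deploy -c docker-compose.yml orion
# watch the replicas come up and the health checks turn "healthy"
docker service ls
docker service ps orion_orion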


Data Layer HA Management

Your distributed data layer has some level of self discovery

• You can rely on it to automatically create the “data service cluster”.

• In some cases, you need to pass service names... Luckily you can leverage tricks (e.g. the DNSRR mode of Docker Swarm – VIP being the default); see the sketch below.

• E.g. elasticsearch / hadoop

Your distributed data layer has no self discovery

• You need a sidecar service that implements the data cluster management logic.

• E.g. mongodb / mysql

[Figure: a ReplicaSet Controller sidecar on Docker Swarm wiring three MongoDB instances into a replica set]
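For the self-discovery case, a minimal sketch with Elasticsearch and the DNSRR endpoint mode (the image version, the overlay network name and the 5.x discovery flag are assumptions):

# with --endpoint-mode dnsrr the service name resolves to the IPs of all
# tasks instead of a single VIP, so each instance can discover its peers
docker service create --name es --network backend \
  --endpoint-mode dnsrr --replicas 3 \
  elasticsearch:5.6 -Ediscovery.zen.ping.unicast.hosts=es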


Context Broker: How to implement that in Docker Swarm?

MongoDB

• We create a service for mongo (using global mode, and volumes if we want persistency)

• We create a service for the sidecar microservice

• We leverage health checks to evaluate the health of each instance

Why global?

• If you want to leverage volumes for data persistency, you need to deal with the fact that there can be only one volume with a given name per swarm node.

• How can I scale up / down then?

□ Using placement constraints!

version: '3.2'
services:
  mongo:
    image: mongo:${MONGO_VERSION:-3.2}
    entrypoint: ["/usr/bin/mongod", "--replSet", "${REPLICASET_NAME:-rs}", "--journal", "--smallfiles"]
    volumes:
      - mongodata:/data/db
    secrets:
      - mongo-healthcheck
    healthcheck:
      test: ["CMD", "bash", "/run/secrets/mongo-healthcheck"]
      interval: 1m
      timeout: 10s
      retries: 3
    deploy:
      mode: global
    ...
  controller:
    image: martel/mongo-replica-ctrl:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role==manager]
    ...
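The mongo service above mounts a mongo-healthcheck secret, so it must exist in the swarm before you deploy (a sketch: the probe script body is an assumption, the recipes ship their own):

# a trivial probe: ping the local mongod, exit non-zero on failure
echo 'mongo --eval "db.adminCommand({ ping: 1 })" --quiet' | \
  docker secret create mongo-healthcheck -
docker stack deploy -c docker-compose.yml mongo-rs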


Scaling Up and Down

docker service scale orion_orion=3

docker service scale orion_orion=2

Global mode does not support scale up / down: with global mode you get as many mongo instances as cluster nodes.

Add a placement constraint to the mongo service:

• placement:
    constraints: [node.labels.mongo == yes]

Add/remove the label on the nodes to be used (or not) for MongoDB:

• docker node update --label-add mongo=yes NODE
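Put together, “scaling” the global mongo service is a matter of labels (a sketch; the service and node names are assumptions):

# restrict the global service to nodes carrying the label
docker service update \
  --constraint-add 'node.labels.mongo == yes' mongo-rs_mongo
# bring a node into the mongo pool...
docker node update --label-add mongo=yes node-3
# ...or take it out again (its mongo task is shut down)
docker node update --label-rm mongo node-3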


Multi Site (for replicated mode services)

In each site, have at least one Docker Swarm manager.

• The number of managers should always be odd.

Add a “site” label to all the nodes that are part of a given site.

• docker node update --label-add region=us NODE

• docker node update --label-add region=eu NODE

Add a placement preference to the service (not supported in compose files!)

• docker service update --placement-pref-add 'spread=node.labels.region' SERVICE
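The same preference can be given at creation time (a sketch; the replica count and image tag are illustrative):

# spread the replicas evenly across the regions defined by the node labels
docker service create --name orion --replicas 6 \
  --placement-pref 'spread=node.labels.region' \
  fiware/orion:1.7.0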

Quick Demo


cd tools/

sh create_networks.sh

cd ../data-management/context-broker/ha/

sh deploy_back.sh

docker service ls

docker service logs -f orion-backend_controller

sh deploy_front.sh

docker service ls

curl http://192.168.99.101:1026/version

Advanced topics


Multi Site and Edge

Edge devices may not have a public IP.

Can we create a cluster connecting such devices?

OpenVPN is your friend!

• Configure an OpenVPN server on all the master nodes in the cloud using a multipoint configuration.

• Configure OpenVPN clients on all the edge nodes.

• Unfortunately, because docker service does not support privileged mode, you cannot run OpenVPN as a container to create the Docker Swarm cluster.

What if my edge nodes are based on a different architecture (e.g. ARM)?

• Develop image manifests that implement the v2.2 spec; this allows redirecting an image tag to a specific image per hardware platform, as in the spec below.

image: myprivreg:5000/someimage:latest
manifests:
  - image: myprivreg:5000/someimage:ppc64le
    platform:
      architecture: ppc64le
      os: linux
  - image: myprivreg:5000/someimage:amd64
    platform:
      architecture: amd64
      features:
        - sse
      os: linux
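This spec is in the format used by manifest-tool (https://github.com/estesp/manifest-tool); pushing the multi-arch manifest is then one command (the spec file name is an assumption):

# assemble and push the manifest list described in the spec file
manifest-tool push from-spec manifest.yml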

Træfik: advanced load balancing

The Docker Swarm proxy is not configurable; for example, it does not support sticky sessions.

Træfik listens to the backend / orchestrator APIs and detects and applies any change:

• Routes are dynamically managed

• You can create / update / destroy routes at any time

Træfik reads service metadata on Docker / Kubernetes / etcd / etc.

• Hosts, ports, load balancing algorithm, etc.

You can configure SSL certificates:

• Let’s Encrypt integration requires a key-value storage

• Let’s Encrypt integration requires a public IP
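As a sketch of how Træfik picks configuration up from service metadata (label names follow the Træfik 1.x Docker provider; the hostname, network and image tag are assumptions):

# Træfik routes requests for the Host to the service and, with the
# sticky flag, pins each client to one backend task (sticky sessions)
docker service create --name orion --network traefik-net \
  --label traefik.port=1026 \
  --label 'traefik.frontend.rule=Host:orion.example.com' \
  --label traefik.backend.loadbalancer.sticky=true \
  fiware/orion:1.7.0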


Testing your dockerized platform

Learn from the gurus of microservice architectures!

Chaos Monkey (Netflix)

Pumba: https://github.com/gaia-adm/pumba
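A taste of chaos testing with Pumba (a sketch following its README; the interval and the container name pattern are assumptions):

# every 30s kill one random container whose name matches the regex,
# then check that Swarm reschedules it and the API stays responsive
pumba --random --interval 30s kill --signal SIGKILL 're2:^orion'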

Ongoing and future activities in FIWARE


Did it look complex? I hope not


Smart Security

• Common architecture patterns: e.g. scalability pattern

• Common generic enablers: e.g. Orion Context Broker

• Common data models: e.g. geo-location

• Specific architecture patterns: e.g. secured data access pattern

• Specific and customised generic enablers: e.g. security risk detection filters for Kurento Media Server

• Specific data models: e.g. security events

[Figure: Smart Security application “recipe”]

1. Analyse HA architectures for the different Data and IoT Management enablers

2. Create Docker compose recipes to allow easy deployment of HA enablers

3. Make them available in FIWARE Lab to experimenters

Do you have questions?

Do you want to contribute?


Contact Us

www.martel-innovate.com

Federico M. Facca

Head of Martel Lab

[email protected]

Dorfstrasse 73 – 3073

Gümligen (Switzerland)

0041 78 807 58 38

Thank you!

http://fiware.org

Follow @FIWARE on Twitter
