Post on 15-Jul-2020
Continuous Delivery &
Infrastructure
Introduction
Benny Cornelissen
• Infrastructure Architect @ Xebia
• Platform Architect @ TNT
New demands for Infrastructure
On-demand
• Experimentation
• Demos
• Deploy every feature branch
• No human intervention to get something done
Ownership
• Promote to staging / production
• Define how your application should run
• Inspect application logs & metrics
• You build it, you run it
• Power and Responsibility
Pets vs Cattle
• Specific → Generic
• Fix → Replace
• Easily scalable
Building a Platform
Basic Pillars
• Infrastructure as Code
• Design for Failure
• Abstract applications from the OS, and vice versa
• Automate All The Things
Infrastructure as Code
Infrastructure as Code
• Repeatable
• Testable
• Predictable
• Self-Documenting
Terraform
• Declarative language for creating infrastructure across providers
• Codify the required end-state, not the way to get there
• Create cross-provider dependencies
resource "aws_instance" "myinstance" {
  instance_type = "t2.small"
  ami           = "ami-cda312be"

  root_block_device {
    delete_on_termination = true
    volume_size           = 20
  }

  key_name        = "${aws_key_pair.data-team.key_name}"
  security_groups = ["${aws_security_group.data-team.name}"]
}
myinstance.tf
Terraform
resource "aws_instance" "myinstance" {
  count         = 1000
  instance_type = "t2.small"
  ami           = "ami-cda312be"

  root_block_device {
    delete_on_termination = true
    volume_size           = 20
  }

  key_name        = "${aws_key_pair.data-team.key_name}"
  security_groups = ["${aws_security_group.data-team.name}"]
}
myinstance.tf
Terraform
• Develop
• Review / Plan
• Apply
• Destroy
$ vim myplatform.tf
$ terraform plan
$ terraform apply
$ terraform destroy
Flexibility with Terraform
• Create multiple isolated instances of the platform
• The production platform
• A platform for testing our own stuff
• …
• Create a custom-sized platform
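One way to get that flexibility is to parameterize the Terraform code with variables. A minimal sketch in the 0.x-era interpolation syntax the deck uses; the variable and resource names are illustrative, not from the talk:

```hcl
variable "platform_name" { default = "lab" }
variable "node_count"    { default = 3 }

resource "aws_instance" "node" {
  count         = "${var.node_count}"
  instance_type = "t2.small"
  ami           = "ami-cda312be"

  tags {
    Name = "${var.platform_name}-node-${count.index}"
  }
}
```

Applying this with different variable values (and a separate state file) stands up an isolated, custom-sized copy of the platform.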
Design for Failure
Because everything that can fail, will fail at some point
Design for Graceful Failure
Because nobody likes to get paged at 3AM…
Graceful Failure
• Networking: Elastic Load Balancing, multiple Nginx proxies
• Compute: Auto Scaling Groups for various instance types
• Applications: HA deployments, automatic rescheduling
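The compute layer of such a setup might be expressed in Terraform roughly as follows; a hedged sketch, with resource names, AMI, and sizes invented for illustration:

```hcl
resource "aws_launch_configuration" "coreos" {
  image_id      = "ami-cda312be"
  instance_type = "t2.small"
}

resource "aws_autoscaling_group" "coreos" {
  min_size             = 3
  max_size             = 6
  desired_capacity     = 4
  launch_configuration = "${aws_launch_configuration.coreos.name}"
  availability_zones   = ["eu-west-1a", "eu-west-1b"]
}
```

When a node dies, the Auto Scaling Group launches a replacement, and the container scheduler moves the lost workloads onto the remaining and new nodes.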
Example
[Diagram, built up over several slides: an Elastic Load Balancer routes traffic to Proxy containers on CoreOS nodes in an Auto Scaling Group, with App and Backend containers spread across the nodes. A node fails; its App and Backend containers are rescheduled onto the remaining nodes, and the Auto Scaling Group launches a fresh CoreOS node running a Proxy.]
Abstract App ↔ OS
OS-agnostic apps, App-agnostic OSes
[Diagram: a single CoreOS host running App 1 (v1.1), App 2, and App 1 (v1.2) side by side in containers]
Abstract OS / Apps
• Run everything in a container
• Your runnable artifact is the container
• build once, run anywhere:
• On your laptop
• In an on-premises datacenter
• On a cloud-based platform
• your artifact only requires a 64-bit Linux machine with a Docker daemon
Containerize everything
• you can:
• upgrade a library (or OpenSSL) without breaking 10 other things
• run apps that require conflicting libraries on the same machine
• run 2 versions of the same app
Containers and DevOps
• identical dev/tst/acc/prd environments, because we use the same container
• ownership and responsibility move to the product team
Building Containers
• Repeatable
• Testable
• Predictable
• Self-documenting
Dockerfile
• Describes how a container should be configured
• Describes a configuration process starting from a known starting-point
FROM docker.example.com/example/java:master
ENTRYPOINT ["bin/myapp"]

USER root
WORKDIR /home/docker/app
ADD target/universal/myapp-*.tgz /home/docker/app/
RUN ["chown", "-R", "docker:docker", "."]
USER docker

EXPOSE 9000
ENV SERVICE_9000_TAGS="app,http,private"
Dockerfile
Automate All The Things
Wiring, glue, and magic tricks
Automate!
• Proxy (Nginx) configuration
• Application Dashboard
• CI/CD (Jenkins) configuration
Proxy configuration
• Which services are running?
• Where are they running?
• How many containers are running?
• Do they have special properties? (HTTPS-redirects, Proxy endpoints, BasicAuth, Auth exceptions)
Consul
Service Registry, Service Discovery, Key-Value Store, Health Checking
Service Discovery
• Each application is registered in the Service Registry
• Metadata (key/value) is registered in the KV-tree
• When an application is stopped, it is removed from the Service Registry
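In this deck, registration appears to be handled automatically (the `registrator:` prefix in the ServiceIDs later on suggests Registrator), but registering a service by hand against Consul's agent API is a single call. A hedged sketch of the payload, with values mirroring the example service:

```json
{
  "ID": "registrator:dev-mytnt-master:3000",
  "Name": "dev-mytnt-master",
  "Tags": ["app", "http", "https-redirect", "dev"],
  "Address": "10.9.8.95",
  "Port": 57753
}
```

PUT this to `http://consul:8500/v1/agent/service/register`; deregistration works the same way via `/v1/agent/service/deregister/<ID>`.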
Service Lookup
• Lookup through:
• DNS
• REST API
$ dig srv +short dev-mytnt-master.service.consul
1 1 57753 infra-prod-dev-10.9.8.95.node.dc1.consul.

$ curl http://consul:8500/v1/catalog/service/dev-mytnt-master
[{"Node":"infra-prod-dev-10.9.8.95","Address":"10.9.8.95","ServiceID":"registrator:dev-mytnt-master:3000","ServiceName":"dev-mytnt-master","ServiceTags":["app","http","https-redirect","dev"],"ServiceAddress":"10.9.8.95","ServicePort":57753,"ServiceEnableTagOverride":false,"CreateIndex":15689987,"ModifyIndex":15699641}]
Consul to Config
A bit of magic…
Consul-template
• Create a configuration template
• Generate configuration files based on a template and data in consul
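A sketch of what such a template could look like, assuming consul-template's standard `services`, `service`, and `contains` functions; the tag filter and values are illustrative, not taken from the talk:

```
{{range services}}{{if .Tags | contains "http"}}
upstream {{.Name}} {
  least_conn;
{{range service .Name}}  server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60;
{{end}}}
{{end}}{{end}}
```

consul-template watches Consul, re-renders the file whenever services change, and can run a command (e.g. an Nginx reload) after each render.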
Configuring a Proxy
• Get all services tagged with ‘http’ and ‘public’
• Render basic Nginx config
• Based on additional tags and K/V data, render additional config blocks
Example
$ curl http://consul:8500/v1/catalog/service/dev-mytnt-master | jq .
[
  {
    "Node": "infra-prod-dev-10.9.8.95",
    "Address": "10.9.8.95",
    "ServiceID": "registrator:dev-mytnt-master:3000",
    "ServiceName": "dev-mytnt-master",
    "ServiceTags": [
      "app",
      "http",
      "https-redirect",
      "dev"
    ],
    "ServiceAddress": "10.9.8.95",
    "ServicePort": 57753,
    "ServiceEnableTagOverride": false,
    "CreateIndex": 15689987,
    "ModifyIndex": 15699641
  }
]
$ curl http://consul:8500/v1/kv/services/dev-mytnt-master/?keys | jq .
[
  "services/dev-mytnt-master/ci_build_number",
  "services/dev-mytnt-master/creation_time",
  "services/dev-mytnt-master/git_branch_sanitized",
  "services/dev-mytnt-master/git_commit",
  "services/dev-mytnt-master/git_repository",
  "services/dev-mytnt-master/git_url",
  "services/dev-mytnt-master/noauth",
  "services/dev-mytnt-master/server_aliases"
]
$ curl http://consul:8500/v1/kv/services/dev-mytnt-master/noauth | jq .
[
  {
    "LockIndex": 0,
    "Key": "services/dev-mytnt-master/noauth",
    "Flags": 0,
    "Value": "fiogLyhwdWJsaWNhcGlzfGFwaSk=",
    "CreateIndex": 15690001,
    "ModifyIndex": 15787949
  }
]
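Consul's KV API base64-encodes values; decoding the `Value` field above recovers the stored Nginx location match:

```shell
# The KV endpoint returns values base64-encoded; decode locally
echo "fiogLyhwdWJsaWNhcGlzfGFwaSk=" | base64 -d
# → ~* /(publicapis|api)
```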
$ curl http://consul:8500/v1/kv/services/dev-mytnt-master/noauth?raw
~* /(publicapis|api)
upstream dev-mytnt-master {
  least_conn;
  server 10.9.8.95:57753 max_fails=3 fail_timeout=60 weight=1;
}

server {
  listen 80;
  server_name mytnt-master.* mytnt.*;
  client_max_body_size 0;

  location / {
    include includes/basic-auth.conf;
    proxy_pass http://dev-mytnt-master;
    include includes/proxy-headers.conf;
    include includes/cors-headers.conf;
  }

  location ~* /(publicapis|api) {
    auth_basic off;
    proxy_pass http://dev-mytnt-master;
    include includes/proxy-headers.conf;
    include includes/cors-headers.conf;
  }
}
nginx.conf
FROM docker.tntdigital.io/tnt/node:master

COPY package.json /home/docker/app/
COPY .npmrc /home/docker/app/
RUN npm install > /dev/null

COPY . /home/docker/app/
RUN gosu root chown -R docker:docker /home/docker/app/

EXPOSE 3000
ENV SERVICE_3000_TAGS="app,http,https-redirect"
ENV SERVICE_NOAUTH="~* /(publicapis|api)"
ENV SERVICE_SERVER_ALIASES="mytnt.net mytnt.com *.mytnt.net *.mytnt.com"

ENTRYPOINT ["gulp"]
CMD ["serve:ci"]
Dockerfile
Platform Development
• Low-level infra: deploy a separate platform using Terraform
• Platform components:
• local dev setup (Vagrant, Docker Compose, Docker)
• Automated Docker builds
• lab platform
Thank You
$ while true; do echo "Hello, World!"; done