Immutable Infrastructure Security


Immutable Security

Featuring…

“The story of Sisyphus the Security guy”

“Stop hugging your VMs, because they don’t hug back”

- Werner Vogels, CTO of Amazon

About the Author - #whoami

Ricky L. Sanders is an Enterprise Security Architect at a leading Automotive Manufacturer, representing and advising the organization on the security design and impact of Cryptographic Management, Virtualization, Software Defined Networking, Cloud Computing, Application Security, and Emerging Technologies. He is a member of ISACA and ISC2.

However, he has not always been an IT security practitioner. Before Ricky started working in IT he dreamt of economics, political science, and explaining complicated financial marketplaces. His interest led him to graduate with a Bachelor's in Business Economics & Finance from the Kelley School of Business, Indiana (#1 Business School among all public Universities).

As the U.S. exited the 2009 financial collapse he gazed towards Information Technology. Ricky found a passion for IT Risk Management and analyzing complicated system security designs. He was fascinated by the complexity of security across heterogeneous technologies. He felt energized to focus on a subject area riddled with political nuance. He decided to pursue a Master's in Information Systems (MIS), with a security and risk focus, from his alma mater, Indiana University. His combined experience and education equip him for complicated business analysis of IT security design and strategies.

Ricky lives for his beautiful wife Jacquie and his dog. He truly believes in the quote etched into the ceiling at Rockefeller Center, NY: “Knowledge will be the stability of our times”. With that, it is with great and equal pleasure that he extends this work to you.

About this Deck

Purpose

This deck outlines the author's opinion regarding Security Strategy for Immutable Infrastructure, Code Pipelines, PaaS, and Micro-service Architectures.

Disclaimer

The content is provided as-is. The author takes no responsibility for the content. It is up to the reader to use their own judgment.

This deck refers to a few open source tools and technologies as examples. However, there are many competing tools and technologies that can be leveraged. I'm using these as examples only, and their use is not meant as an endorsement.

For specific details contact: Rick Sanders rickyleesanders88@gmail.com

Common Security Problems in IT

Production servers are never consistent

Production servers are misconfigured

Production servers are hard to patch because of technology drift

Privileged user management is impossible at scale

Unauthorized changes keep occurring, e.g. remote access and SSH

New vulnerabilities emerge every day and we need to know about them

Auditing is hard and complex

Security patches break stuff and cause planned downtime, therefore no one wants to approve security changes

Why is Security so hard?

Before

Deploying servers was once a manual process which took a lot of work

Servers cost money, capacity, storage

I.T. is bad at decommissioning stuff because we might break something

Getting code to servers was really hard so we gave developers access to boxes

Secrets, SSH Keys, Certs, Connection Strings are tied to the host OS

Now

OS’s are automatically deployed and redeployed from catalogs

Container Images can be automatically deployed from catalogs

Code is more portable when in Containers

Libraries can be automatically replaced from catalogs

Secrets, connection strings, etc. are no longer on the host, and instead move with the Container

So everything can be automatically built and deployed from a Gold Source these days..

The new tools deploy the old and new stacks in parallel, then fail over to the new stack automatically and decommission the old stack with near-zero downtime

“I don’t believe you. VM vendors promised all this before”

- Every Security Guy I’ve spoken to

So do we keep doing Security the same way as before and expect different results?

“VMware capabilities focused on the hosts. PaaS is automating further up the stack”

“Deploy your Infrastructure-as-code. Security fixes are just new OSes, Containers, and Libraries from a versioned Repo”

The Old Way – Design to Deploy

• Many hands “Touch the box”
• Requires remote access and other nasty security stuff
• It’s how Technology Drift occurs
• Host OS Engineering teams release new builds with patches that get “bolted on during enhancements”
• Old builds are being patched “live in production”

Anything wrong with this picture?

Immutability

Code drives Immutability

You start with a brownfield stack

New code/security issues force automatic provisioning of a greenfield stack

Auto failover and auto rollback if errors occur


The Old Way – Monolithic application architectures are heavy and not easily deployable or scalable in clouds

• VMs are heavy “for now” and take time to provision
• Custom cloud-init scripts are required to add path variables, connection strings, and host-specific data. Scaling gets hard!
• Tightly coupled methods and functions. What if we only want to scale a single service? Do we replicate the entire box and waste capacity? Crazy!
• If a library is bad and needs to be replaced, the entire system is affected. Bad!

The New Way – Micro-service application architectures built on RESTful services

“Security teams will need to build out their web scanning and RESTful service scanning capability sets to enable these new designs at scale”

Each service is decoupled from the system and hosted “typically” in a container technology

[Diagram: service instances running in containers on Linux OS guests, on top of a hypervisor or physical host]

And if you go hypervisor, you get to automate the replacement of vulnerable OS kernels, so… that’s cool

[Diagram: the same container stack on a hypervisor, where vulnerable guest OS kernels can be swapped out]

What I’m saying is that NEW application architectures, code management, VM management, and container management are strategic enablers for a successful Immutable Infrastructure strategy… Seems hard.

All orgs must unite!

“If your security processes are bad now, you will be happy to know that not much will change if teams don’t come together”

Container Basics

What are Containers?

“Containers are not new, but recent advances in Linux security have made them more viable”

Think…
• FreeBSD Jails
• OpenVZ
• Solaris Zones
• LXC, for example

This is enabled primarily by two Linux kernel capabilities, cgroups and namespaces, backed by additional Linux security mechanisms:

• Cgroups: a resource management solution providing a generic process-grouping framework which limits and prioritizes system resources (CPU, memory, I/O, network, etc.)

• Namespaces: allow for lightweight process virtualization and enable processes to have different views of the system (mnt, pid, net, ipc, uts, user)

• SELinux (Red Hat): provides secure separation of containers by applying SELinux policy and labels. It integrates with virtual devices by using the sVirt technology.

• AppArmor: a Linux kernel security module that allows the system administrator to restrict programs' capabilities with per-program profiles. Profiles can allow capabilities like network access, raw socket access, and the permission to read, write, or execute files on matching paths.

• Seccomp: a computer security facility that provides an application sandboxing mechanism in the Linux kernel.
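To make the first two building blocks a little more concrete, here is a minimal sketch, assuming a Linux host with a cgroup-v1 style layout (paths vary by distribution and runtime), that inspects the namespaces and cgroups the kernel has assigned to a process:

```python
import os

pid = os.getpid()  # inspect this process; substitute a container's PID to inspect it instead

# Namespaces: each entry under /proc/<pid>/ns is one "view" of the system
ns_dir = "/proc/%d/ns" % pid
for name in sorted(os.listdir(ns_dir)):
    # e.g. mnt, pid, net, ipc, uts, user; the link target identifies the namespace
    print(name, os.readlink(os.path.join(ns_dir, name)))

# Cgroups: which resource-controller groups this process belongs to
with open("/proc/%d/cgroup" % pid) as f:
    print(f.read())

# Example cgroup-v1 memory limit (path layout varies by distro and runtime)
limit_path = "/sys/fs/cgroup/memory/memory.limit_in_bytes"
if os.path.exists(limit_path):
    with open(limit_path) as f:
        print("memory limit (bytes):", f.read().strip())
```

Container runtimes combine exactly these primitives: a new set of namespaces for isolation, and cgroups for resource limits.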

Docker, aka the de facto standard today. We’ll use it as an example.

The Docker project provides the means of packaging applications in lightweight containers. Running applications within Docker containers offers the following advantages:

• Smaller than virtual machines: because Docker images contain only the content needed to run an application, saving and sharing is much more efficient with Docker containers than it is with virtual machines (which include entire operating systems).

• Improved performance: likewise, since you are not running an entirely separate operating system, a container will typically run faster than an application that carries with it the overhead of a whole new virtual machine.

• Secure: because a Docker container typically has its own network interfaces, file system, and memory, the application running in that container can in theory be isolated and secured from other activities on a host computer.

• Flexible: with an application's runtime requirements included with the application in the container, a Docker container is capable of being run in multiple environments.
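As a hedged illustration of those isolation properties, the sketch below shells out to the Docker CLI to start a container with explicit resource limits and a reduced privilege set. The image, container name, and limit values are assumptions chosen for the example, not recommendations:

```python
import subprocess

# Illustrative only: image, name, and limits are assumptions.
# Each flag maps onto a kernel feature described above:
#   --memory / --cpus / --pids-limit  -> cgroup resource limits
#   --read-only / --cap-drop          -> shrink what the process can do
#   --security-opt no-new-privileges  -> block privilege escalation
#   (seccomp and AppArmor profiles can be attached via --security-opt as well)
cmd = [
    "docker", "run", "--rm", "-d",
    "--name", "hardened-sleeper",
    "--memory", "128m",
    "--cpus", "0.5",
    "--pids-limit", "64",
    "--read-only",
    "--cap-drop", "ALL",
    "--security-opt", "no-new-privileges",
    "alpine:3.6", "sleep", "3600",
]
subprocess.run(cmd, check=True)
```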

Docker Container Architecture Example

“Host hardening becomes even more important as we vertically scale processes on a single host. If you’re getting host hardening wrong now, then get ready for a single point of failure”

A container is just an image. A lot like an OS image… but different, because it shares the underlying host kernel’s capabilities and resources.

There are many types of container images. Apache, Ubuntu, Node.js, etc.

Standard container images should be hardened and reviewed by Security before being released into the repository as “Production Ready”.

These different Containers are just deployed from Registries to the Linux runtime environment

“Over 30% of Official Images in Docker Hub Contain High Priority Security Vulnerabilities”

- BanyanOps

“Enterprise corporations should deploy their own container images and never allow images from public repositories.”

“Even corporate-approved container images provided by vendors can be vulnerable, so you need an operational process and dedicated resources to monitor, engineer, and deploy new stuff.”

- BanyanOps

Container Images can be vulnerable too…

Container Management

Containers can scale RESTful services to 6,000 services per host, which is a nightmare to manage

Large Enterprises need a way to automate the deployment, scaling, and management of containerized applications

As an example: Kubernetes, aka the de facto open source standard

• Automatically places containers based on their resource requirements

• Scale your application services up and down automatically based on CPU usage

• Automate rollouts and rollbacks during change events

• Self-healing

• Secret and configuration management

• Automated Virtual Network Security (Virtual Switching, Virtual Firewalls, Security Policies)

An open source container cluster manager originally designed by Google and donated to the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”.
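A minimal sketch of how a team might drive such a rollout with the standard kubectl CLI follows. The deployment name, image, registry, and replica count are assumptions for illustration; a real pipeline would typically call this from a CI/CD system rather than an ad hoc script:

```python
import subprocess

# Illustrative manifest: names, image tag, and replica count are assumptions.
DEPLOYMENT = """
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 3
  selector:
    matchLabels: {app: payments-api}
  template:
    metadata:
      labels: {app: payments-api}
    spec:
      containers:
      - name: payments-api
        image: registry.example.com/payments-api:1.4.2   # hardened "gold" image
"""

def kubectl(*args, stdin=None):
    return subprocess.run(["kubectl", *args], input=stdin, text=True, check=True)

# Declarative rollout: Kubernetes replaces old pods with pods built from the new image
kubectl("apply", "-f", "-", stdin=DEPLOYMENT)

# Watch the rollout, and roll back if it never becomes healthy
try:
    kubectl("rollout", "status", "deployment/payments-api", "--timeout=120s")
except subprocess.CalledProcessError:
    kubectl("rollout", "undo", "deployment/payments-api")
```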

Security Organizations can leverage this technology to automatically redeploy “gold image” containers and new secure code

Security Organizations should consider the governance and operational process around leveraging Container Management capabilities and repositories.

Security Organizations can leverage this (and similar) technology to automatically template L3 virtual networks that isolate tenants and applications, without manual configuration of the physical underlay network.

Infrastructure as a Service is a Strategic Enabler

If there is a hole in the boat, maybe we just use automation to move our containers to a safer boat that won’t sink?

e.g. Use your automated VM stack as a strategic enabler

Leveraging Infrastructure as a Service for immutable OS replacements

You can leverage existing VMware and OpenStack capabilities to auto-provision OSes from service catalogs.

After a vulnerable OS image has been patched, it should be replaced in the catalog.

All containers leveraging that OS should be rebuilt on top of the newly secured OS.

This will be difficult in a physical world unless you’ve deployed PXE bootstrap technologies for automated physical OS installs.

Release your Virtual Machines

VM Team: “I think we’re over-provisioning”

“If you’re over-provisioning now, then you’ll be pleasantly surprised by your consistency when you try immutable infrastructure”

So you don’t want to release your VMs because of sprawl and capacity issues?

“So, let’s keep doing the same thing we’ve been doing and expect different results?”

Minimizing disruption when we rip and replace containers

Etcd is a key/value store typically leveraged in container deployments. It is based on a distributed architecture and features a hierarchical configuration system that can be used to build service discovery. So when containers are spinning up and down, they can rely on etcd to define and connect to each other.

Registrator automatically registers and deregisters services by inspecting containers as they are brought online or stopped. When this feature is combined with etcd, we can bring up a container and all of its data will be stored in etcd and propagated to all nodes in the cluster. What we do with that information is up to us.

Think about path variables, service accounts, connection strings, etc. These are decoupled from the VM itself and stored on a server… Yay for Security! Now we can reduce the attack surface of secrets to a single point of failure… at least we can protect it… right?

Etcd – it’s like a Linux /etc folder running as a distributed daemon that containers can call upon, e.g. highly decoupled configs.
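A hedged sketch of the idea, using etcd's v2 HTTP keys API (the endpoint, key path, and connection string below are assumptions; newer etcd v3 clusters expose a different, gRPC-based API):

```python
import requests

ETCD = "http://127.0.0.1:2379"  # assumed local etcd endpoint (v2 keys API)

# Publish a connection string once, decoupled from any particular VM or container.
resp = requests.put(
    ETCD + "/v2/keys/config/payments/db_connection",
    data={"value": "postgres://payments:censored@db.internal:5432/payments"},
)
resp.raise_for_status()

# Any container, on any host, can look it up at start-up instead of baking it into the image.
resp = requests.get(ETCD + "/v2/keys/config/payments/db_connection")
resp.raise_for_status()
print(resp.json()["node"]["value"])
```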

So when we replace a VM because of a security issue, the lightweight containers (our applications) can quickly move to a new machine and be recognized, because etcd and service discovery are working hard behind the scenes.

[Diagram: Development → Production]

Technology Drift

“Unauthorized changes always happen, therefore we must scan all the time just to be sure”

Are we treating the symptom or finding the cause of the security issue?

Maybe a better Question…

How can we be sure that the base OS and containers have not been changed from the catalog image?

Configuration Management is like a File Integrity Monitor… a new way to do an old security thing

Chef, Puppet, etc..

• A Chef server acts as a hub for configuration data. The Chef server stores cookbooks, the policies that are applied to nodes, and metadata that describes each registered node that is being managed by the chef-client.

• Nodes use the chef-client to ask the Chef server for configuration details, such as recipes, templates, and file distributions.

• The chef-client then does as much of the configuration work as possible on the nodes themselves (and not on the Chef server).

• This scalable approach distributes the configuration effort throughout the organization.

Configuration Management example with Chef

You can leverage a tool like Chef to push run-lists down to your nodes running containers.

A run-list defines all of the information necessary for Chef to configure a node into the desired state.

A run-list is:

• An ordered list of roles and/or recipes that are run in the exact order defined in the run-list; if a recipe appears more than once in the run-list, the chef-client will not run it twice.

• Always specific to the node on which it runs; nodes may have a run-list that is identical to the run-list used by other nodes.

Your run-lists can ensure no one can access the box:

• Remove SSH/Telnet
• Lock down the system admin group to daemons running as root, etc.
• Enforce the Linux-based Mandatory Access Controls required for Containers
• And if one of these items mysteriously changes…
• Chef will revert the configuration back to the “desired state”

Configuration Management is a File Integrity Monitor that has the capability to correct itself back to the “Desired State”.
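To make the file-integrity idea concrete, here is a minimal sketch. The baseline location and watch list are assumptions, and a real deployment would rely on Chef/Puppet or a dedicated FIM/compliance tool to re-converge drifted files rather than merely report them:

```python
import hashlib
import json
import os

BASELINE = "/var/lib/fim/baseline.json"   # assumed location for the recorded "desired state"
WATCHED = ["/etc/ssh/sshd_config", "/etc/passwd", "/etc/sudoers"]  # assumed watch list

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_baseline():
    os.makedirs(os.path.dirname(BASELINE), exist_ok=True)
    with open(BASELINE, "w") as f:
        json.dump({p: sha256(p) for p in WATCHED}, f, indent=2)

def check_drift():
    with open(BASELINE) as f:
        baseline = json.load(f)
    return [p for p in WATCHED if sha256(p) != baseline.get(p)]

if __name__ == "__main__":
    if not os.path.exists(BASELINE):
        record_baseline()          # first run: capture the desired state
    else:
        drifted = check_drift()
        if drifted:
            # A real CM tool would re-converge these files; here we just report them.
            print("Drift detected:", drifted)
```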

If you’re a paranoid security professional, you could leverage Chef InSpec or your chosen flavor of agent-based security compliance tool to measure the effectiveness of the immutable infrastructure strategy.

Enabling Immutable Infrastructure Security

[Diagram: Development → Production]

Maybe security issues measure the effectiveness of the immutable infrastructure strategy?

Code Pipelines & Immutable Infrastructure

Code Pipelines Wild-Wild West

VS

“Code Pipeline capabilities are a strategic enabler for a successful Immutable Infrastructure and Security Strategy”

“If your code management processes are bad now, you’ll be pleasantly surprised by how consistently you fail when you move to PaaS and Immutable Strategies”

“Building your pipeline based on infrastructure and engineering steps is like solving the wrong problem precisely”

“It’s all about the Code”

Stop letting code follow the Infrastructure

Anything wrong with this picture?

It’s like a moving target that leads to technology drift

When you have new code/containers in the pipeline, deploy them to a new container and VM (if there is no capacity on existing Linux VMs).

Code Pipeline: Commit code → Build / compile → Security scan → Grab a fresh container image → Grab a VM if you need it → Write binaries to a container → Check the container w/ code into the repo for versioning → Deploy the container to the VM we have provisioned

Container Pipeline: Scan and build a compliant immutable container image → Add the gold container to the repo → Make the container available for auto-provisioning → Auto-deploy the container when new code is deployed → Version the new immutable container w/ code → Dynamic security scans → Feedback loop to teams for security remediation

Host OS (VM) Pipeline: Scan and build a compliant immutable OS image → Add the gold OS image to the repo → Make the OS image available for auto-provisioning → Auto-deploy the OS when new code is deployed → File integrity monitoring with configuration management → Feedback loop to teams for security remediation
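To make the container-pipeline gate concrete, here is a minimal sketch of the “scan before it reaches the repo” step: build the candidate image, scan it, and only push it to the registry when the scan passes. The image tag, registry, and the image-scanner command are assumptions; substitute whichever scanner your organization uses.

```python
import subprocess
import sys

IMAGE = "registry.example.com/payments-api:1.4.3"  # assumed image tag for this build

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Build the candidate container image from the committed code.
run("docker", "build", "-t", IMAGE, ".")

# 2. Gate: scan the image before it ever reaches the versioned repo.
#    "image-scanner" is a placeholder command, not a real tool.
scan = subprocess.run(["image-scanner", "--fail-on", "high", IMAGE])
if scan.returncode != 0:
    sys.exit("Scan failed: feed the findings back to the team, do not publish the image")

# 3. Only scanned, compliant images become the new gold version in the registry.
run("docker", "push", IMAGE)
```

In practice this would run as a stage in your CI system rather than as a standalone script.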

Do we need to scan production systems in this model?

Here’s the disruptive part ….

Remember this?

[Diagram: Development → Production]

Maybe security issues measure the effectiveness of the immutable infrastructure strategy?

Effective immutable strategies should give us this:

Versioned catalog with
• Base OS
• Base Container
• Container w/ Code

Simple Math ….

You’re selling me unicorns. I still don’t believe you, even with Configuration Management…

Okay… then store cryptographic hashes of your immutable images in your repo for auditing purposes.

[Diagram: identical stacks in development and production, each with Containers w/ Code on a Base Linux OS]

And therefore a vulnerability, is a vulnerability, is a vulnerability…
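As a minimal sketch of that simple math (the image tag and catalog file below are assumptions): Docker already content-addresses images, so recording the image ID when the gold image is published and comparing it later gives auditors a cryptographic fingerprint to verify.

```python
import json
import subprocess

IMAGE = "registry.example.com/base-linux:2017.04"  # assumed gold image tag
CATALOG = "image-hashes.json"                      # assumed audit record committed to the repo

# The image ID is a SHA-256 over the image config, which pins the layer digests,
# so recording it gives an auditable fingerprint of the immutable image.
image_id = subprocess.check_output(
    ["docker", "inspect", "--format", "{{.Id}}", IMAGE]
).decode().strip()

with open(CATALOG) as f:
    expected = json.load(f)[IMAGE]

print("match" if image_id == expected else "MISMATCH: %s != %s" % (image_id, expected))
```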

Benefits?

• Reduced security licensing for tools
• Fewer security agents with root privileges running on boxes
• Scan times reduced because we’re scanning less stuff
• Frequency increases because we’re scanning less stuff
• Easier on audit staff
• Mathematically provable, not just unicorns

So should we stop scanning the production systems in this model?

Probably not immediately…..

Not until you can validate that your Configuration Management and Hash capabilities are mature…

But it’s a nice Target …

Or we can keep doing what we’ve been doing?

Other areas you could dedicate a book to

Security Scanning integration points in the pipeline

Decoupled libraries and immutable code libraries

Immutable Virtual Network Security Design Patterns

Container Image and Host OS Trust tied to TPM chips

Agentless Container technologies

Emerging JavaScript micro-service application architectures and the gaps in JavaScript security tool capabilities

The End

Questions

rickyleesanders88@gmail.com