Let’s Talk Container Networking: Packet and Project Calico
Adam Rothschild <[email protected]>
Co-Founder and SVP, Infrastructure
New York, NY - April 16, 2015
AGENDA
● Packet and Container Networking Use Cases
● Container Networking 101
● Leading Projects and Architectures
● To Overlay or Not to Overlay
● Review and Calico Intro
ABOUT PACKET
And what’s with the robot...
PACKET’S MANIFESTO
● Provide high-performance bare metal compute nodes, available globally in <5 minutes.
● Ensure high-performance networking at all layers:
○ Bonded NICs at 2x1G or 2x10G
○ Dense 10G/40G on the backbone (soon: 100G)
○ Backbone network designed with copious capacity and eyeball delivery in mind
○ High availability: no SPOF on the top of rack or core network!
● Design for all of the above with “Internet scale” in mind:
○ Support many thousands of hosts inside a datacenter
○ Support many thousands of disparate customers
○ Support backend networking between datacenters
○ Support IP mobility and global load balancing between datacenters
○ Legacy-free layer-3 datacenter network to meet these needs; zero VLANs and STP throughout our entire production network (* some on our parallel management network, in full disclosure :-)
OUR INTERNETS; L3 EVERYWHERE
[Diagram: Packet’s L3 topology — BBRs link the Packet datacenter to carrier hotel POPs over optical (n*10G) paths; DSRs aggregate ESRs, which connect servers over 2*1G bonded copper; CSRs at the carrier hotel POPs face NSP/peering.]
CUSTOMER NEEDS
Stripped down to the essentials, Packet’s ecosystem and “Internet scale” customers want:
● Simple node discovery, ability to move a service/container between hosts without provider coordination
● Secure and free/cheap back-end transfer
● IP takeover (e.g. heartbeat) for HA/DR use cases
● Security zoning between server tiers
● VPN between diverse providers or cloud environments (e.g. mix and match AWS, DO, SL compute)
CONTAINER NETWORKING 101
Defaults and Concepts
DOCKER NETWORKING
The Problem:
DOCKER NETWORKING
● Remember, containers != small VMs
● Environments can be expected to have rapid creation / deletion of containers
● Containers are expected to move around a lot
● Finally something that is cross-provider -- but what about the network?
● Security? What security?
DOCKER NETWORKING
To recap, if you’re running containers today, your networking options include:
● Run Docker separately on each box, exposing ports on public or private interfaces so containers can talk to each other.
● Run in-between / overlay solutions like Weave to fully abstract the networking.
● Run "ready-to-go" multi-host platforms for Docker like Deis, Flynn, or Rancher.
● Build a shared bridge on a meshed network among your boxes and get your Docker services to spawn containers there.
LEADING PROJECTS
Open Source Projects and Architectures
THE LANDSCAPE
● Route based
● Overlay / tunnel based: Flannel, Weave
● Rocket networking spec (proposed)
WEAVE
RANCHER
1. Creates a full-mesh network between containers using IPSEC tunnels
2. Rancher server orchestrates IPSEC setup and key management. Fully automatic; no manual steps required
3. IPSEC tunnels are created on demand to reduce overhead
4. Rancher server manages the DHCP server and IP address allocation. Allocates 10.42/16 addresses globally by default.
5. Rancher takes over container linking from Docker
6. Works with the native Docker API and command line. Rancher listens for Docker events and sets up networking in the background.
7. Optionally disable encryption to improve performance
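The full mesh described above scales quadratically: every pair of hosts needs its own IPSEC tunnel, which is why on-demand tunnel setup matters. A quick back-of-the-envelope sketch in Python (the function name is illustrative):

```python
def mesh_tunnels(hosts: int) -> int:
    """Point-to-point tunnels needed for a full mesh of `hosts` nodes."""
    return hosts * (hosts - 1) // 2

for n in (3, 10, 100, 1000):
    print(n, "hosts ->", mesh_tunnels(n), "tunnels")
```

At 1,000 hosts that is already ~500,000 tunnels to negotiate and re-key, which is the scaling concern behind creating tunnels lazily rather than up front.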
THE APPROACHES: OVS
...seriously?!
TO OVERLAY OR NOT TO OVERLAY
That’s the Question
OVERLAY CONCERNS
I am an infrastructure operator, not a software engineer, by trade, and I see many operational challenges:
● Performance bottleneck as additional encapsulations (e.g. VXLAN, GRE, MPLS) and crypto (IPSec, SSL/TLS) are handled by the host
○ Not a problem with VMs, as the network is constrained before compute
○ Very much a problem with powerful bare metal
● Overlay agents are required on the host/hypervisor layer
● Topology is extremely complex, difficult to understand and troubleshoot; is the connectivity problem with the overlay or the underlay?
● IP multicast required for some implementations; “BUM” flooding
● Annoying defaults: [double] NAT, hardcoded subnets (which could overlap)
● Scaling to large host counts is still an issue with many implementations!
● All of this complexity for questionable gain, as we are about to see...
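The per-packet cost of the encapsulations listed above is easy to quantify. A back-of-the-envelope sketch in Python, assuming VXLAN over IPv4 with a standard 1500-byte underlay MTU and no crypto:

```python
# Headers VXLAN adds inside the underlay's 1500-byte MTU budget:
# the inner Ethernet frame plus outer IPv4/UDP/VXLAN headers.
INNER_ETH, OUTER_IPV4, OUTER_UDP, VXLAN = 14, 20, 8, 8
overhead = INNER_ETH + OUTER_IPV4 + OUTER_UDP + VXLAN  # bytes per packet

underlay_mtu = 1500                   # typical, without jumbo frames
inner_mtu = underlay_mtu - overhead   # what the containers actually get

print(f"{overhead} bytes of overhead per packet")
print(f"effective inner MTU: {inner_mtu}")
```

50 bytes stolen from every packet (1500 down to 1450), plus the CPU cycles to encapsulate, is negligible for a throttled VM but very visible on a bare metal host pushing line-rate 10G.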
NEEDS REVISITED
Do we actually need an overlay for these?
● Secure and free/cheap back-end transfer
● Simple node discovery, ability to move a service/container between hosts without provider coordination
● IP takeover (e.g. heartbeat) for HA/DR use cases
● Security zoning between server tiers
● VPN between diverse providers or cloud environments (e.g. mix and match AWS, DO, SL compute)
VPN is the only real use case today, and even that is becoming less relevant as the cloud interconnection market takes off. See: Equinix Cloud Exchange, SoftLayer and Amazon Direct Connect, et al.
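The route-based alternative this deck is building toward (the Calico intro) drops encapsulation entirely: each container gets a /32 from a per-host pool, and the host advertises or withdraws that route upstream (e.g. via BGP). A minimal sketch of the allocation logic using Python’s ipaddress module — the class and method names are illustrative, not Calico’s actual API:

```python
import ipaddress

class HostRouter:
    """Hands out /32s from a per-host pool and tracks the host routes
    that would be advertised upstream (e.g. via BGP)."""

    def __init__(self, pool: str):
        self._free = ipaddress.ip_network(pool).hosts()
        self.routes = {}  # container name -> advertised /32

    def attach(self, name: str) -> ipaddress.IPv4Address:
        ip = next(self._free)          # next free address in the pool
        self.routes[name] = f"{ip}/32"
        return ip

    def detach(self, name: str) -> str:
        # Moving a container is just withdrawing its /32 here
        # and re-announcing it from the new host.
        return self.routes.pop(name)

router = HostRouter("10.65.0.0/26")
router.attach("web-1")
router.attach("web-2")
print(router.routes)  # each container reachable via a plain host route
```

No tunnels, no agents in the data path, and troubleshooting collapses to ordinary L3 tooling (traceroute, route tables) — which is the point of the preceding comparison.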
GET INVOLVED!
● Rocket Networking Spec: https://github.com/coreos/rkt/issues/273
● Docker Networking Spec: https://github.com/docker/docker/issues/8951