Kubernetes as Orchestrator for A10 Lightning Controller
Confidential | ©A10 Networks, Inc.
Using Kubernetes as Orchestrator for A10 Lightning Controller
Akshay Mathur, Manu Dilip Shah
A10 Lightning Application Delivery Service
[Diagram: clients reach Application Services through the LADC cluster (data plane); the A10 Lightning Controller provides the control plane, with REST API access for API clients, Analytics, and the Admin Portal.]
Lightning Controller
• A microservices-based application
• Configuration, Visibility, Analytics
• Multi-tenant portal
• Programmability with REST APIs
Lightning ADC
• Scale-out
• Traffic Management
• App Security
Controller Architecture
Why we thought of Kubernetes
• On failure, K8s brings up the pod automatically
• Rolling upgrades of code can be done easily
• A scaling policy can be set up to scale each microservice as needed
• Pod health can be monitored easily and acted upon
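The auto-restart and health-monitoring points above map onto a Deployment with a liveness probe. A minimal sketch; the service name, image, and health-check path are hypothetical:

```yaml
# Hypothetical Deployment: K8s restarts the pod on failure and
# probes its health endpoint, acting on failed checks automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: config-service
spec:
  replicas: 2                  # a scaling policy can raise this per microservice
  selector:
    matchLabels:
      app: config-service
  template:
    metadata:
      labels:
        app: config-service
    spec:
      containers:
      - name: config-service
        image: registry.example.com/config-service:1.0
        livenessProbe:         # repeated probe failures trigger an automatic restart
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 10
```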
What we achieved at a high level
Before:
• Controller was only available as SaaS
• Launch and scaling were manual
• Installation was dependent on the underlying infrastructure platform

After:
• Controller is available for on-premise deployment
• It can be scaled from one VM to multiple VMs depending on the use case
• Launch and scaling are automated
• Installation is independent of the underlying infrastructure platform
From AWS VMs to K8s Containers in Multiple Environments
Current Environment for Controller
• Kubernetes core components
• Kube-dns – internal DNS service
• Flannel – overlay networking
• Heapster – monitoring of pods
• Kubernetes Dashboard – helps in monitoring the pods
• jq – programmatically editing JSONs for K8s objects
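The jq workflow mentioned above can be sketched as follows. The object here is an inline stand-in; in practice it would come from `kubectl get deployment <name> -o json` and be fed back to `kubectl apply -f -`:

```shell
# Edit a K8s object's JSON programmatically with jq:
# bump .spec.replicas from 1 to 3 on a stand-in Deployment object.
patched=$(echo '{"kind":"Deployment","spec":{"replicas":1}}' | jq '.spec.replicas = 3')
echo "$patched" | jq '.spec.replicas'   # prints 3
```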
The Journey: From AWS VMs to K8s Containers
• Everything was manual to start with:
  • Selecting master and minion nodes
  • Mapping node ports to container ports
  • Configuring cross-node communication
• Limitations realized:
  • Can't run two pods of the same type on one node
  • Packaging and distribution issues, e.g. build-process automation
  • Data loss when a node stops
The Journey: From AWS VMs to K8s Containers (contd.)
• Second-level issues, after some simplification:
  • Cumbersome overlay-network configuration
  • Passing environment info to pods – startup-script env variables are not scalable
  • Installation still took too many steps
• Thoughts for the future – solved now:
  • Adding a node to the K8s cluster when more capacity is needed
  • Migrating the node's static IP to another node when a node is replaced
  • Adding components in the future with minimal change to existing components
Design Choices
• Keep all microservices as is
• One K8s service per microservice
• One pod per K8s deployment
• Multiple services exposed externally
• Continue to use a third-party registry service
  • The Kubernetes registry service could be used instead of a third-party one
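The "one Service per microservice, one pod per Deployment" choice can be sketched as a pair of manifests. The name, image, and ports below are hypothetical:

```yaml
# Hypothetical manifest pair for one Controller microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analytics
spec:
  replicas: 1                  # one pod per deployment
  selector:
    matchLabels:
      app: analytics
  template:
    metadata:
      labels:
        app: analytics
    spec:
      containers:
      - name: analytics
        image: registry.example.com/analytics:1.0   # third-party registry
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: analytics              # other components resolve this name via kube-dns
spec:
  selector:
    app: analytics
  ports:
  - port: 8080
    targetPort: 8080
```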
Accessing Micro Services
• Multiple microservices of the Controller need to be accessible from outside
• Microservices accessing each other also can't depend on IP addresses
• Kubernetes Services and kube-dns provide a fixed name as well as a fixed IP address for each service
• All internal access (between components) uses the service name
• The service IP is mapped to the node IP for all external access
• A public static IP is assigned to the node for external access
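External access through the node's public IP, as described above, maps naturally onto a NodePort-type Service. A minimal sketch; the name and port numbers are hypothetical:

```yaml
# Hypothetical Service exposing a microservice on the node's public IP.
apiVersion: v1
kind: Service
metadata:
  name: portal
spec:
  type: NodePort               # reachable at <node public IP>:<nodePort>
  selector:
    app: portal
  ports:
  - port: 443                  # cluster-internal service port
    targetPort: 8443           # container port
    nodePort: 30443            # port opened on the node itself
```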
Simplifying Networking
• Each pod gets an IP address that is internal to the node
• Overlay networking facilitates communication between pods across nodes
• Flannel creates an overlay network that spans the nodes
• Each pod gets an IP address from the same subnet
• This subnet is internal to the K8s cluster
• This provides seamless communication between pods across nodes
• A private subnet for service IPs is configured in the K8s configuration
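Flannel's cluster-internal pod subnet is typically set through its net-conf.json, commonly delivered as a ConfigMap. A sketch, assuming the CIDR below is the one chosen for the cluster:

```yaml
# Hypothetical flannel network config; the Network CIDR is the
# pod subnet internal to the K8s cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": { "Type": "vxlan" }
    }
```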
Overlay Network
Persisting Data
• Pods may come and go, or can be spawned across nodes
• Persistence is required to maintain state across reboots or across clusters
• NFS, AWS EBS, GCE Persistent Disk, or Azure Disk can be used as a K8s Persistent Volume (PV)
• In a K8s Deployment object, a 'PV Claim' can be made by each pod, as needed
• K8s provides a PV matching the claim to the pod
• This mounts the PV's file system into the container's file system
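The claim-and-mount flow above can be sketched as a PersistentVolumeClaim plus the pod-spec fragment that mounts it; the claim name, size, and mount path are hypothetical:

```yaml
# Hypothetical PV Claim: K8s binds a matching PV to it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datastore-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# Fragment of the Deployment's pod template that uses the claim:
#   volumes:
#   - name: data
#     persistentVolumeClaim:
#       claimName: datastore-pvc
#   containers[].volumeMounts:
#   - name: data
#     mountPath: /var/lib/data   # the PV file system appears here in the container
```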
Storage Objects in Kubernetes
Deploying Clustered Applications
• In a clustered application (e.g. a datastore), each pod needs to know about the other pods running the same application
• Such applications need to be deployed using a K8s StatefulSet
• A K8s StatefulSet provides a fixed name for each instance/pod
• PV claims in each instance of a StatefulSet also have fixed names
• Having fixed names helps a lot in the configuration and functioning of clustered applications
• When the application requires more capacity, it is easy to add instances
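The fixed pod names and fixed-name PV claims described above come from a StatefulSet with volumeClaimTemplates. A sketch for a hypothetical three-node datastore:

```yaml
# Hypothetical StatefulSet: pods get fixed names datastore-0, -1, -2,
# and each gets its own fixed-name PVC (data-datastore-0, ...).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: datastore
spec:
  serviceName: datastore       # headless Service giving each pod a stable DNS name
  replicas: 3                  # raise this to add instances when capacity is needed
  selector:
    matchLabels:
      app: datastore
  template:
    metadata:
      labels:
        app: datastore
    spec:
      containers:
      - name: datastore
        image: registry.example.com/datastore:1.0
        volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumeClaimTemplates:        # one fixed-name PV claim per pod
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```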