Kubernetes architecture
Overview of Kubernetes components
A K8s cluster is composed of multiple nodes of two types:
- Master node: it runs the control plane that manages the whole k8s system.
- Worker nodes: they run the applications deployed to k8s.
A master node is composed of 4 components:
- API server: The API that you talk to, and that the other components/nodes talk to as well.
- Scheduler: Assigns worker nodes to the deployable components of your application.
- Controller manager: Performs cluster-level functions such as replicating components, keeping track of worker nodes, handling failures, etc.
- etcd: Distributed datastore that stores cluster config.
Worker nodes are composed of 3 components:
- Container runtime: Docker, rkt or another container runtime.
- kubelet: Manages the containers on the node and talks to the API server on the master node.
- kube-proxy: A load balancer that manages network traffic between applications.
Running your application on Kubernetes
- You package your app into one or more container images.
- Push the images to a registry (e.g. the public Docker registry).
- Write a description of your application: how the containers relate to each other, how many copies to run, which containers should run together, which ones offer a service to internal/external clients and therefore need a single IP address, etc. (a minimal example manifest follows this list).
- Push the description files to the k8s API server.
- The API server processes the description.
- The scheduler will allocate a node for your app to run on, based on the resources available on each node and the resources your app requires.
- The kubelet on each of those nodes will instruct the container runtime to pull the images and run the containers.
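As a minimal sketch of such a description file (the app name, image, and port are placeholders, not taken from the notes above), a single-pod manifest could look like this:

```yaml
# Hypothetical single-pod description; names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
  - name: my-app
    image: myregistry/my-app:1.0   # image previously pushed to the registry
    ports:
    - containerPort: 8080          # port the application listens on
```

Pushing such a file to the API server is typically done with `kubectl create -f my-app.yaml` (or `kubectl apply -f my-app.yaml`); from there the scheduler picks a node and that node's kubelet starts the container.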
Pods
These are co-located containers that run on the same worker node and share the same network and Linux namespaces. Each pod is like a separate logical machine with its own IP address, processes, and hostname, running a single application.
Each worker node can have one or more pods running. Each pod can run one or more containers.
When a pod is created, it is allocated an IP address that is only accessible from within the cluster. To allow public access you need a Service object of type LoadBalancer.
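A minimal sketch of such a Service, assuming the pod carries an `app: my-app` label as in the example above (name and ports are placeholders):

```yaml
# Hypothetical Service exposing the pods to clients outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  type: LoadBalancer       # asks the cloud provider for an externally reachable IP
  selector:
    app: my-app            # traffic is routed to pods carrying this label
  ports:
  - port: 80               # port clients connect to
    targetPort: 8080       # port the container listens on
```

On clusters without a cloud load balancer, a NodePort Service is the usual fallback for reaching pods from outside.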
Pods vs ReplicationController vs Service
A Pod is where your container(s) run, and it gets a private IP address. It usually hosts one container but can hold as many as you want.
A ReplicationController takes care of spawning a new pod when an existing one is deleted or dies, or when you simply need another copy of the pod.
A Service has a static IP that clients can connect to. When a ReplicationController brings up a new pod, that pod is assigned a new private IP address. The Service exposes these pods as a single service to clients.
Labelling Pods
Pod labelling allows us to organize the apps/services better (by release stage, app/service group, etc.). It also allows querying, filtering, and deleting pods that carry a specific set of labels.
K8s treats your nodes as one deployment platform, so it deploys your pods to any node that provides the resources the pod requires (CPU, disk space, etc.). You can still tell k8s what type of node you want your pods deployed to via node labels.
Scheduling a pod to one specific node can leave the pod undeployable when that node is offline (see the sketch below).
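As a hedged sketch (the `gpu: "true"` node label and pod name are made up for illustration), a node is first labelled, e.g. with `kubectl label node <node-name> gpu=true`, and the pod then asks for that label via a nodeSelector:

```yaml
# Hypothetical pod that should only run on nodes labelled gpu=true.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-worker
spec:
  nodeSelector:
    gpu: "true"            # only schedule onto nodes carrying this label
  containers:
  - name: gpu-worker
    image: myregistry/gpu-worker:1.0
```

Selecting a label shared by several nodes avoids the single-node problem described above: if the label exists on only one node and that node goes offline, the pod stays unschedulable.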