Kubernetes Cluster: 11 Key Components
Discover the key components of the Kubernetes cluster control plane and worker nodes, and learn best practices for operating clusters successfully.
What is a Kubernetes Cluster?
Kubernetes is a container orchestrator used to deploy, automate, and manage large-scale containerized workloads. A Kubernetes cluster is a collection of nodes on which workloads can run: physical (bare metal) machines, virtual machines (VMs), or serverless compute services such as Amazon Fargate. The cluster is managed by a control plane, which is responsible for orchestrating container activity on the nodes and maintaining the desired cluster state.
The architecture of a Kubernetes cluster enables you to schedule and run containers across a collection of nodes, regardless of where the nodes are physically deployed. Kubernetes clusters offer powerful capabilities for resource provisioning and utilization, to ensure the desired levels of performance and uptime for applications.
What are the Components of a Kubernetes Cluster?
A Kubernetes cluster consists of nodes and a control plane.
The control plane architecture is composed of an API server, a scheduler, a controller manager, and a key-value store called etcd.
Nodes running in the cluster are typically worker nodes, which run pods. Several other components are involved in the process, including container runtimes, kubelet, and kube-proxy.
Related content: read our guide to Kubernetes architecture ›
Control Plane
The control plane is in charge of maintaining the desired state of the Kubernetes cluster. It holds the configuration and state data used to reconcile the cluster's actual state with that desired state.
To keep workloads running as declared, the control plane remains in constant communication with the nodes.
1. kube-apiserver
The API server serves as the front end of the control plane. It is responsible for exposing the Kubernetes API, which ensures the control plane can handle external and internal requests.
The API server accepts requests, determines whether they are valid, and processes them. Users typically access the API through command line tools such as kubectl or kubeadm, which issue REST calls on their behalf, while internal cluster components communicate with the API server directly.
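For example, applying a manifest with kubectl is translated into a REST request to kube-apiserver, which validates the object and persists it in etcd. A minimal sketch (the ConfigMap name and data are illustrative):

```yaml
# kubectl apply -f demo-config.yaml
# kubectl sends this object as a REST call (POST /api/v1/namespaces/default/configmaps)
# to kube-apiserver, which validates it and stores it in etcd.
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config        # hypothetical name
  namespace: default
data:
  greeting: "hello"
```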
2. kube-scheduler
The Kubernetes scheduler watches for newly created pods that have not yet been assigned to a node, and finds a place to run them. It assigns each pod to a compute node by evaluating the pod's resource requirements and identifying which cluster nodes are healthy and able to provide those resources.
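The scheduler's decision is driven largely by what the pod spec declares. A minimal sketch of a pod with resource requests and a node selector (the names, image, and label are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                 # hypothetical pod name
spec:
  nodeSelector:
    disktype: ssd           # only consider nodes labeled disktype=ssd
  containers:
    - name: web
      image: nginx:1.25     # example image
      resources:
        requests:
          cpu: "250m"       # the scheduler looks for a node with this much unreserved CPU
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
```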
3. kube-controller-manager
Kubernetes clusters use controllers to keep the actual cluster state consistent with the desired state. Some of the controllers operated by kube-controller-manager are:
- Node controller – detects failed nodes and responds accordingly
- Job controller – creates pods to run Jobs (one-off tasks) and tracks them until they complete; a minimal Job manifest is sketched after this list
- Service account and token controllers – create default service accounts and API access tokens when a new namespace is created
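As a sketch of what the Job controller acts on, here is a minimal Job manifest (the name, image, and command are illustrative); the controller creates a pod for it and watches that pod until it completes:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-task       # hypothetical Job name
spec:
  backoffLimit: 3           # retry the pod up to 3 times on failure
  template:
    spec:
      restartPolicy: Never  # Jobs require Never or OnFailure
      containers:
        - name: task
          image: busybox:1.36                # example image
          command: ["sh", "-c", "echo done"]
```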
4. etcd
etcd is a key-value store that holds information about configuration data and cluster status. It is a critical component of a Kubernetes cluster, and is therefore distributed and fault-tolerant. It is important to secure etcd in your Kubernetes clusters, because attackers who take control of etcd have effective control over the entire cluster.
5. cloud-controller-manager
This control plane component embeds cloud-specific control logic. The cloud controller manager links a Kubernetes cluster to your cloud provider's API, allowing the cluster to consume cloud resources such as load balancers, routes, and additional nodes for horizontal scaling. To keep concerns separate, cloud-controller-manager runs only the controllers that interact with the cloud platform, decoupling them from components that interact solely with the cluster.
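For example, when you create a Service of type LoadBalancer, the service controller inside cloud-controller-manager calls the cloud provider's API to provision an external load balancer for it. A minimal sketch (the names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb              # hypothetical Service name
spec:
  type: LoadBalancer        # triggers the cloud provider's load balancer integration
  selector:
    app: web                # route traffic to pods labeled app=web
  ports:
    - port: 80              # port exposed by the cloud load balancer
      targetPort: 8080      # port the pods listen on
```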
Worker Nodes
6. Nodes
A Kubernetes node is a physical or virtual machine located in the cloud or on-premises. Nodes are responsible for running applications.
An application is deployed inside containers, but containers do not run alone. Containers are deployed inside pods, and the pods run inside nodes.
Together, the nodes provide the cluster's overall capacity. If you need to scale up the capacity of the cluster, you can provision more nodes.
7. Pods
Pods are the smallest deployable units in Kubernetes, and each works as a single instance of an application. A pod can host a single container or a set of tightly coupled containers.
Pods are usually managed by higher-level controllers such as Deployments and StatefulSets. Deployments are mainly useful for stateless workloads, while StatefulSets are designed for stateful workloads and make it easier to work with persistent storage.
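In practice you rarely create bare pods; a controller such as a Deployment manages a set of identical pods for you. A minimal sketch (the name, image, and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical Deployment name
spec:
  replicas: 3               # desired number of identical pods
  selector:
    matchLabels:
      app: web
  template:                 # pod template used to create each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25 # example image
          ports:
            - containerPort: 80
```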
Learn more in our detailed guide to Kubernetes pods ›
8. Container Runtime
A container runtime is the software responsible for running containers. To enable pods to run on a node, a container runtime must be installed on that node.
There are several container runtimes you can use with Kubernetes, including containerd and CRI-O, which implement the Kubernetes Container Runtime Interface (CRI) and run Open Container Initiative (OCI)-compliant containers. Docker Engine can also be used through an adapter (cri-dockerd), although Kubernetes removed its built-in Docker integration in version 1.24.
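If a node has more than one runtime handler configured, Kubernetes can choose between them with a RuntimeClass object. A minimal sketch, assuming a node whose runtime is configured with a gVisor handler named runsc (the names are illustrative and depend on your runtime configuration):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor              # hypothetical RuntimeClass name
handler: runsc              # must match a handler configured in the node's CRI runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod       # hypothetical pod name
spec:
  runtimeClassName: gvisor  # run this pod's containers with the gvisor handler
  containers:
    - name: app
      image: nginx:1.25     # example image
```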
9. kubelet
Each compute node runs a small application called a kubelet, which communicates with the control plane, and makes sure containers are running correctly in each pod. If the control plane needs to do something on the node, it does so via the kubelet.
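The kubelet itself is configured through a KubeletConfiguration file passed to it on the node, rather than through objects you apply to the cluster. A minimal sketch with a few common fields (the values are illustrative and depend on your environment):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd       # must match the container runtime's cgroup driver
maxPods: 110                # upper bound on pods the kubelet will run on this node
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10              # cluster DNS service IP (illustrative)
evictionHard:
  memory.available: "100Mi" # start evicting pods when available node memory drops below this
```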
10. kube-proxy
Each compute node also includes a kube-proxy, which runs Kubernetes network services. kube-proxy handles network communication both within and outside the cluster. It can either use the operating system’s packet filtering layer, or forward traffic independently.
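For example, when a ClusterIP Service like the one below is created, kube-proxy on every node programs packet-filtering rules (iptables or IPVS, depending on its mode) so that traffic sent to the Service's virtual IP reaches one of the backing pods. A minimal sketch (the names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend             # hypothetical Service name
spec:
  type: ClusterIP           # virtual IP reachable only from inside the cluster
  selector:
    app: backend            # pods labeled app=backend receive the traffic
  ports:
    - port: 80              # port exposed on the cluster IP
      targetPort: 8080      # port the backend containers listen on
```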
11. Container Networking
Container networking allows containers, and the applications running within them, to communicate with other containers or hosts.
A commonly used standard for container networking is the Container Network Interface (CNI), a Cloud Native Computing Foundation (CNCF) project adopted by Kubernetes, Red Hat OpenShift, Cloud Foundry, and others. The CNI specification is deliberately minimal: it covers only a container's network connectivity and the release of allocated resources when the container is deleted.
A few commonly used CNI plugins are:
- Project Calico – provides virtual networking and network policy for Kubernetes
- Weave Net – multi-host container network management
- Contiv – enables policy-based networking
- Infoblox – IP address management (IPAM) for containerized architectures
- Cilium – provides network services at layers 3, 4, and 7, built on eBPF
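Many CNI plugins, for example Calico and Cilium, also enforce Kubernetes NetworkPolicy objects. A minimal sketch that allows ingress to backend pods only from pods labeled app=frontend (the labels and port are illustrative, and enforcement requires a plugin that supports NetworkPolicy):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical policy name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend                  # the policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```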
Related content: read our guide to Kubernetes networking ›
Kubernetes Cluster Best Practices
Here are several best practices you can use to effectively run your Kubernetes cluster:
- Use namespaces – to prevent unauthorized access to and overuse of cluster resources. Namespaces create logical partitions of the cluster, so multiple teams or applications can safely share the same cluster.
- Set up resource requests and limits – to ensure a single user or application cannot drain cluster resources. You can enforce this at the namespace level with ResourceQuota and LimitRange objects, and define requests and limits in each container spec (see the sketch after this list).
- Autoscale clusters – use the Horizontal Pod Autoscaler, which is built into Kubernetes, together with the Cluster Autoscaler add-on to automatically adjust pod replicas and node capacity, maintain overall health, and avoid downtime.
- Secure clusters with role-based access control (RBAC) – RBAC lets you implement access policies that limit user access and privileges. Use Roles to grant permissions within a single namespace and ClusterRoles for cluster-wide or non-namespaced resources.
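The sketch below combines several of these practices: a dedicated namespace with a ResourceQuota, a namespace-scoped RBAC Role and RoleBinding, and a HorizontalPodAutoscaler. All names, limits, and the referenced Deployment are illustrative assumptions, not values from this article:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                      # hypothetical team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"               # total CPU all pods in the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"                      # cap the number of pods in the namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-developer
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]         # "" is the core API group
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-developer-binding
  namespace: team-a
subjects:
  - kind: User
    name: jane@example.com          # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-developer
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
  namespace: team-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                       # assumes a Deployment named "web" exists in team-a
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # add replicas when average CPU usage exceeds 70%
```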