Kubernetes Nodes: Components and Basic Operations
Learn about the main components of a Kubernetes node, how to perform basic operations, and steps that can help you secure Kubernetes nodes.
What Are Kubernetes Nodes?
A Kubernetes node is a worker machine that runs Kubernetes workloads. It can be a physical (bare metal) machine or a virtual machine (VM). Each node can host one or more pods. Kubernetes nodes are managed by a control plane, which automatically handles the deployment and scheduling of pods across nodes in a Kubernetes cluster. When scheduling pods, the control plane assesses the resources available on each node.
Each node runs two main components—a kubelet and a container runtime. The kubelet is in charge of facilitating communication between the control plane and the node. The container runtime is in charge of pulling the relevant container image from a registry, unpacking containers, running them on the node, and communicating with the operating system kernel.
Kubernetes Node Components
Here are three main Kubernetes node components:
kubelet
The kubelet is responsible for managing the deployment of pods to Kubernetes nodes. It receives commands from the API server and instructs the container runtime to start or stop containers as needed.
kube-proxy
A network proxy that runs on each Kubernetes node and maintains network rules on the node. These rules allow network traffic to reach pods from sessions inside or outside the cluster. Kube-proxy can forward traffic itself or use the operating system's packet filtering layer.
Container runtime
The software layer responsible for running containers. There are several container runtimes supported by Kubernetes, including Containerd, CRI-O, Docker, and other Kubernetes Container Runtime Interface (CRI) implementations.
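You can quickly see which kubelet version and container runtime each node is using with kubectl's wide output (a standard kubectl command; column names may vary slightly between versions):
kubectl get nodes -o wide
The output includes columns such as STATUS, VERSION (the kubelet version), OS-IMAGE, and CONTAINER-RUNTIME for every node in the cluster.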
Working with Kubernetes Nodes: 4 Basic Operations
Here is how to perform common operations on a Kubernetes node.
1. Adding Node to a Cluster
You can manually add nodes to a Kubernetes cluster, or let the kubelet on that node self-register to the control plane. Once a node object is created manually or by the kubelet, the control plane validates the new node object.
Adding nodes automatically
The example below is a JSON manifest that creates a node object. After the object is created, Kubernetes checks whether a kubelet has registered with the API server under a name matching the node's metadata.name field. Only a healthy node running all necessary services is eligible to run pods; if the check fails, the node is ignored until it becomes healthy.
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "<node-ip-address>",
    "labels": {
      "name": "<node-logical-name>"
    }
  }
}
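If you save a manifest like this to a file, you can create the node object with kubectl. The file name node.json below is only a placeholder:
kubectl create -f node.json
Kubernetes will then wait for a kubelet with a matching name to register before considering the node healthy and eligible for scheduling.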
Defining node capacity
Nodes that self-register with the API server report their CPU and memory capacity after the node object is created. When creating a node manually, however, administrators need to define its capacity themselves. Once this information is available, the Kubernetes scheduler takes it into account when assigning pods to the node, ensuring that the combined resource requests of the pods on a node do not exceed its capacity.
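To inspect the capacity the scheduler works with, you can query the node object directly (my-node is a placeholder for the node name; the exact fields depend on your cluster):
kubectl get node my-node -o jsonpath='{.status.capacity}'
This typically returns values such as cpu, memory, and pods; the corresponding allocatable amounts are available under .status.allocatable.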
2. Modifying Node Objects
You can use kubectl to manually create or modify node objects, overriding the settings applied during self-registration (the --register-node flag). You can, for example:
- Use labels on nodes together with node selectors to control scheduling, for example to limit a pod so it is only eligible to run on a subset of the available nodes (see the example after this list).
- Mark a node as unschedulable to prevent the scheduler from placing new pods on it. This does not affect pods already running on the node, and is useful in preparation for maintenance tasks such as a node reboot. To mark a node as unschedulable, run:
kubectl cordon $NODENAME
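As an example of the first option, you might label a node and then reference that label from a pod spec through a node selector. The label disktype=ssd and the pod below are hypothetical:
kubectl label nodes <node-name> disktype=ssd
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "nginx"
  },
  "spec": {
    "nodeSelector": {
      "disktype": "ssd"
    },
    "containers": [
      {
        "name": "nginx",
        "image": "nginx"
      }
    ]
  }
}
To reverse a cordon once maintenance is complete, run kubectl uncordon $NODENAME.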
3. Checking Node Status
You can use the following commands to determine the status of a node and the resources running on it.
kubectl describe nodes
Run the command kubectl describe nodes my-node to get node information including:
- HostName—reported by the node's operating system kernel. You can report a different value for HostName using the kubelet flag --hostname-override.
- InternalIP—an IP address that allows traffic to be routed to the node from within the cluster.
- ExternalIP—an IP address that can be used to access the node from outside the cluster.
- Conditions—system resource issues including CPU and memory utilization. This section shows error conditions like OutOfDisk, MemoryPressure, and DiskPressure.
- Events—this section shows issues occurring in the environment, such as eviction of pods.
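To check only the conditions reported for a specific node, you can use a jsonpath query (my-node is a placeholder):
kubectl get node my-node -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
On a healthy node this typically prints lines such as MemoryPressure=False, DiskPressure=False, and Ready=True.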
kubectl describe pods
You can use this command to get information about pods running on a node:
- Pod information—labels, resource requirements, and containers running in the pod
- Pod ready state—if a pod appears as READY, it means it passed the last readiness check.
- Container state—can be Waiting, Running, or Terminated.
- Restart count—how often a container has been restarted.
- Log events—recent activity on the pod, indicating which component logged each event, the object it applies to, and a Reason and Message explaining what happened.
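To find the pods scheduled on a particular node before describing them, you can filter by node name with a field selector (my-node is a placeholder):
kubectl get pods --all-namespaces --field-selector spec.nodeName=my-node
You can then run kubectl describe pods <pod-name> -n <namespace> on any pod of interest.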
4. Understanding the Node Controller
The node controller is the control plane component responsible for managing several aspects of the node’s lifecycle. Here are the three main roles of the node controller:
Assigning CIDR addresses
When a node is registered, the node controller assigns it a Classless Inter-Domain Routing (CIDR) block (if CIDR assignment is enabled).
Updating internal node lists
The node controller maintains an internal list of nodes, which must be kept up to date with the cloud provider's list of available machines. This list enables the node controller to ensure the cluster has the capacity it needs.
When a node is unhealthy, the node controller checks whether the host machine for that node is still available. If the VM is not available, the node controller deletes the node from its internal list. If Kubernetes is running on a public or private cloud, the node controller can request a new node in order to maintain cluster capacity.
Monitoring the health of nodes
Here are several tasks the node controller is responsible for:
- Checking the state of all nodes periodically, with the period determined by the --node-monitor-period flag.
- Updating the NodeReady condition to ConditionUnknown when a node becomes unreachable and the node controller stops receiving heartbeats.
- Evicting all pods from an unreachable node using graceful termination. By default, the node controller waits 40 seconds before reporting ConditionUnknown and starts evicting pods five minutes after that.
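These intervals are set on the kube-controller-manager. As a sketch, the relevant flags with their default values (matching the timings described above) look like this; note that newer Kubernetes releases may deprecate or rename some of them:
kube-controller-manager \
  --node-monitor-period=5s \
  --node-monitor-grace-period=40s \
  --pod-eviction-timeout=5m0s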
Kubernetes Node Security
kubelet
This component is the main node agent, managing the individual containers that run in a pod. Vulnerabilities in the kubelet are discovered regularly, so you need to keep your kubelet version up to date and apply the latest patches. Access to the kubelet is not authenticated by default, so you should implement strong authentication measures to restrict access.
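For example, you can disable anonymous access and require authorization by starting the kubelet with flags along these lines (the certificate path is a placeholder, and newer clusters typically set the equivalent options in the kubelet configuration file):
kubelet \
  --anonymous-auth=false \
  --authorization-mode=Webhook \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --read-only-port=0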
kube-proxy
This component handles request forwarding according to network rules. It is a network proxy that supports various protocols (e.g., TCP and UDP) and allows Kubernetes services to be exposed. There are two ways to secure kube-proxy:
- If proxy configuration is maintained via the kubeconfig file, restrict the file's permissions to ensure unauthorized parties cannot tamper with proxy settings (see the example after this list).
- Ensure that communication with the API server is only done over a secured port, and always require authentication and authorization.
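For the first point, a minimal hardening step is to tighten ownership and permissions on the kube-proxy kubeconfig file. The path below is a common default but varies by distribution:
chown root:root /var/lib/kube-proxy/kubeconfig
chmod 600 /var/lib/kube-proxy/kubeconfig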
Hardened Node Security
You can harden your node security by following these steps:
- Ensure the host is properly configured and secure—check your configuration to ensure it meets the CIS Benchmarks standards.
- Control access to sensitive ports—ensure the network blocks access to the ports that kubelet uses (see the example after this list), and limit Kubernetes API server access to trusted networks.
- Limit administrative access to nodes—ensure access to your Kubernetes nodes is restricted. Tasks like debugging can usually be handled without direct access to the node.
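For example, to block untrusted access to the kubelet API port (10250 by default) on a node, you might add firewall rules along these lines; the trusted range 10.0.0.0/8 is a placeholder for your cluster network:
iptables -A INPUT -p tcp --dport 10250 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 10250 -j DROP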
Isolation of Sensitive Workloads
You should run sensitive workloads on dedicated machines to minimize the impact of a breach. Isolating workloads prevents an attacker from reaching sensitive applications through lower-priority applications that share the same host or container runtime. An attacker who compromises a node's kubelet credentials can only access secrets mounted on that node, so keeping sensitive workloads on dedicated nodes limits exposure. You can use controls such as node pools, namespaces, and taints and tolerations to isolate workloads, as shown in the example below.
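As a sketch, you could reserve a node for sensitive workloads with a taint and then allow only those workloads to tolerate it. The key dedicated=sensitive, the node name, and the pod below are all hypothetical:
kubectl taint nodes <node-name> dedicated=sensitive:NoSchedule
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "sensitive-app"
  },
  "spec": {
    "tolerations": [
      {
        "key": "dedicated",
        "operator": "Equal",
        "value": "sensitive",
        "effect": "NoSchedule"
      }
    ],
    "containers": [
      {
        "name": "app",
        "image": "<sensitive-app-image>"
      }
    ]
  }
}
Note that the taint only keeps other workloads off the node; to pin the sensitive pod to the dedicated nodes, combine the toleration with a node selector or node affinity.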