Upgrading Self-Managed Kubernetes Clusters with kubeadm: Step-by-Step

Upgrading a kubeadm-managed cluster is a multi-step procedure that must be executed in a precise order. The control plane upgrades first, then worker nodes one at a time. Skipping steps or upgrading in the wrong order causes version skew violations that can break cluster communication.

This article provides the complete operational sequence. Execute each step in order. Do not skip ahead.
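For orientation before the detailed steps, the whole flow condenses to the sketch below. This is a rough outline only, assuming a single control-plane node and Debian/Ubuntu hosts with apt-managed packages; the target version (1.29.x) and node names are placeholders, and the package commands differ on other distributions.

```bash
# --- First control-plane node ---
sudo apt-mark unhold kubeadm && \
  sudo apt-get update && sudo apt-get install -y kubeadm='1.29.x-*' && \
  sudo apt-mark hold kubeadm
sudo kubeadm upgrade plan                 # review available versions and preflight checks
sudo kubeadm upgrade apply v1.29.x        # upgrade the control-plane components

kubectl drain <node-name> --ignore-daemonsets
sudo apt-mark unhold kubelet kubectl && \
  sudo apt-get install -y kubelet='1.29.x-*' kubectl='1.29.x-*' && \
  sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload && sudo systemctl restart kubelet
kubectl uncordon <node-name>

# --- Each worker node, one at a time ---
kubectl drain <node-name> --ignore-daemonsets   # run from a machine with cluster access
sudo apt-mark unhold kubeadm && \
  sudo apt-get install -y kubeadm='1.29.x-*' && sudo apt-mark hold kubeadm
sudo kubeadm upgrade node                        # on the worker: updates the local kubelet config
sudo apt-mark unhold kubelet && \
  sudo apt-get install -y kubelet='1.29.x-*' && sudo apt-mark hold kubelet
sudo systemctl daemon-reload && sudo systemctl restart kubelet
kubectl uncordon <node-name>
```

Draining before the kubelet upgrade keeps workloads off a node while its kubelet restarts, which is why the drain/uncordon pair brackets every node-level step.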

Version Skew Policy

Kubernetes enforces strict version compatibility rules between components: most importantly, the kubelet must never be newer than the kube-apiserver it talks to, and may lag it by only a limited number of minor versions. Violating these rules results in undefined behavior – sometimes things work, sometimes the API server rejects requests, sometimes components fail silently.
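Before planning an upgrade, it helps to see exactly where each component currently sits. A quick check, assuming kubectl access to the cluster and shell access to a control-plane node:

```bash
kubectl version            # client and API server versions
kubectl get nodes          # VERSION column shows each node's kubelet version
sudo kubeadm upgrade plan  # current cluster version vs. versions kubeadm can upgrade to
```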

Managed Kubernetes vs Self-Managed: EKS/AKS/GKE vs kubeadm vs k3s vs RKE

The fundamental tradeoff is straightforward: managed Kubernetes trades control for reduced operational burden, while self-managed Kubernetes gives you full control at the cost of owning everything – etcd, certificates, upgrades, high availability, and recovery.

This decision has cascading effects on team structure, hiring, on-call burden, and long-term maintenance cost. Choose deliberately.

Managed Kubernetes (EKS, AKS, GKE)

The cloud provider runs the control plane: API server, etcd, controller manager, scheduler. They handle patching, scaling, and high availability for these components. You manage worker nodes and workloads.
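For contrast with the kubeadm procedure above, a managed control-plane upgrade is typically a single API call. A sketch using the AWS CLI against EKS, where the cluster name and target version are placeholders; worker node groups are still upgraded in a separate step, which stays on your side of the boundary:

```bash
# Ask EKS to upgrade the control plane; AWS handles the API server, etcd,
# scheduler, and controller manager behind this call.
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.29

# Confirm the control-plane version once the update finishes
aws eks describe-cluster --name my-cluster --query 'cluster.version' --output text
```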