Upgrading Self-Managed Kubernetes Clusters with kubeadm: Step-by-Step

Upgrading a kubeadm-managed cluster is a multi-step procedure that must be executed in a precise order. The control plane upgrades first, then worker nodes one at a time. Skipping steps or upgrading in the wrong order causes version skew violations that can break cluster communication.

This article provides the complete operational sequence. Execute each step in order. Do not skip ahead.
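
As a rough sketch of the command sequence (the target version is a placeholder; pin the kubeadm, kubelet, and kubectl packages that match your distribution and target release):

# Control plane, first node: upgrade the kubeadm package itself, then plan and apply
kubeadm version
kubeadm upgrade plan
sudo kubeadm upgrade apply v1.29.4        # placeholder target version

# Remaining control plane nodes
sudo kubeadm upgrade node

# On every control plane node afterwards: upgrade the kubelet and kubectl packages, then
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# Worker nodes, one at a time: drain, upgrade, restart kubelet, uncordon
kubectl drain <node> --ignore-daemonsets --delete-emptydir-data
sudo kubeadm upgrade node                 # run on the worker itself
sudo systemctl daemon-reload && sudo systemctl restart kubelet
kubectl uncordon <node>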

Version Skew Policy

Kubernetes enforces strict version compatibility rules between components. Violating these rules results in undefined behavior – sometimes things work, sometimes the API server rejects requests, sometimes components silently fail.
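
Before planning an upgrade, confirm what each component is actually running; a quick check might look like this:

# Client and API server versions
kubectl version
kubeadm version

# kubelet version reported per node; kubelets must stay within the supported skew of the API server
kubectl get nodes -o wide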

Velero Backup and Restore: Disaster Recovery for Kubernetes

Velero backs up Kubernetes resources and persistent volume data to object storage. It handles scheduled backups, on-demand snapshots, and restores to the same or a different cluster. It is the standard tool for Kubernetes disaster recovery.

Velero captures two things: Kubernetes API objects (stored as JSON) and persistent volume data (via cloud volume snapshots or file-level backup with Kopia).
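
As a minimal illustration (the backup name and namespace are placeholders), the two halves look like this on the CLI:

# Back up one namespace, including volume data where snapshots or Kopia are configured
velero backup create nightly-prod --include-namespaces prod

# Inspect the backup, then restore it into the same cluster or another one pointed at the same bucket
velero backup describe nightly-prod
velero restore create --from-backup nightly-prod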

Installation

You need an object storage bucket (S3, GCS, Azure Blob, or MinIO) and write credentials.
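
A typical install against S3-compatible storage looks roughly like this (bucket, region, plugin version, and credentials file are placeholders for your environment):

velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0 \
  --bucket my-velero-backups \
  --backup-location-config region=us-east-1 \
  --snapshot-location-config region=us-east-1 \
  --secret-file ./credentials-velero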

Admission Controllers and Webhooks: Intercepting and Enforcing Kubernetes API Requests

Every request to the Kubernetes API server passes through a chain: authentication, authorization, and then admission control. Admission controllers are plugins that intercept requests after a user is authenticated and authorized but before the object is persisted to etcd. They can validate requests, reject them, or mutate objects on the fly. This is where you enforce organizational policy, inject sidecar containers, set defaults, and block dangerous configurations.
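
A quick way to see what admission machinery is already active in a cluster (the API server manifest path assumes a kubeadm-style static pod setup):

# Dynamic admission webhooks registered with the API server
kubectl get validatingwebhookconfigurations
kubectl get mutatingwebhookconfigurations

# Built-in admission plugins enabled on the API server
grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml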

Advanced Kubernetes Debugging: CrashLoopBackOff, ImagePullBackOff, OOMKilled, and Stuck Pods

Every Kubernetes failure follows a pattern, and every pattern has a diagnostic sequence. This guide covers the most common failure modes you will encounter in production, with the exact commands and thought process to move from symptom to resolution.

Systematic Debugging Methodology

Before diving into specific scenarios, internalize this sequence. It applies to nearly every pod issue:

# Step 1: What state is the pod in?
kubectl get pod <pod> -n <ns> -o wide

# Step 2: What does the full pod spec and event history show?
kubectl describe pod <pod> -n <ns>

# Step 3: What did the application log before it failed?
kubectl logs <pod> -n <ns> --previous --all-containers

# Step 4: Can you get inside the container?
kubectl exec -it <pod> -n <ns> -- /bin/sh

# Step 5: Is the node healthy?
kubectl describe node <node-name>
kubectl top node <node-name>

Each failure mode below follows this pattern, with specific things to look for at each step.
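
For example, a quick way to confirm an OOMKilled container (one of the failure modes covered below) without scrolling through describe output:

# Last termination reason for the pod's containers; "OOMKilled" confirms a memory limit kill
kubectl get pod <pod> -n <ns> -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'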

ARM64 Kubernetes: The QEMU Problem with Go Binaries

If you run Kubernetes on Apple Silicon (M1/M2/M3/M4) via minikube with the Docker driver, you will eventually try to run an amd64-only container image. For most software this works through QEMU emulation. For Go binaries, it crashes hard.

The Problem

Go’s garbage collector uses a lock-free stack (lfstack) that packs pointers with counter bits in the high bits of a 64-bit integer. QEMU’s user-mode address translation changes the effective address space layout, which breaks this packing assumption. The result is a fatal runtime error rather than a graceful fallback.
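
Two practical checks before running an image on an arm64 node (the image name is a placeholder):

# Does the image publish an arm64 variant at all?
docker manifest inspect registry.example.com/myapp:latest | grep -i architecture

# Run the amd64 variant explicitly under emulation to reproduce the failure
docker run --rm --platform linux/amd64 registry.example.com/myapp:latest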

Choosing a Service Mesh: Istio vs Linkerd vs Consul Connect vs No Mesh

A service mesh moves networking concerns – mutual TLS, traffic routing, retries, circuit breaking, observability – out of application code and into the infrastructure layer. Every pod gets a proxy that handles these concerns transparently. The control plane configures all proxies across the cluster.
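
As one concrete illustration of the model (Istio shown here, with a placeholder namespace), enabling sidecar injection on a namespace causes every pod scheduled there to carry an extra proxy container:

# Mark a namespace for automatic sidecar injection (Istio's convention)
kubectl label namespace demo istio-injection=enabled

# After workloads restart, the injected proxy shows up alongside the application container
kubectl get pod <pod> -n demo -o jsonpath='{.spec.containers[*].name}'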

The decision is not just “which mesh” but “do you need one at all.”

Do You Actually Need a Service Mesh?

Many teams adopt a mesh prematurely. Before evaluating options, identify the specific problems you are trying to solve.

Choosing Kubernetes Storage: Local vs Network vs Cloud CSI Drivers

Storage decisions in Kubernetes are harder to change than almost any other architectural choice. Migrating data between storage backends in production involves downtime, risk, and careful planning. Understand the tradeoffs before provisioning your first PersistentVolumeClaim.

The decision comes down to five criteria: performance (IOPS and latency), durability (can you survive node failure), portability (can you move the workload), cost, and access mode (single pod or shared).

Storage Categories

Block Storage (ReadWriteOnce)

Block storage provides a raw disk attached to a single node. Only one pod on that node can mount it at a time (ReadWriteOnce). This is the most common storage type for databases, caches, and any workload that needs fast, consistent disk I/O.
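
A minimal ReadWriteOnce claim as a sketch (the storage class name is an assumption; list yours with kubectl get storageclass):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: gp3    # placeholder; substitute a class that exists in your cluster
EOF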

Cilium Deep Dive: eBPF Networking, L7 Policies, Hubble Observability, and Cluster Mesh

Cilium replaces the traditional Kubernetes networking stack with eBPF programs that run directly in the Linux kernel. Instead of kube-proxy translating Service definitions into iptables rules and a traditional CNI plugin managing pod networking through bridge interfaces and routing tables, Cilium attaches eBPF programs to kernel hooks that process packets at wire speed. The result is a networking layer that is faster at scale, capable of Layer 7 policy enforcement, and provides built-in observability without application instrumentation.
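
A few health and visibility checks, assuming the Cilium and Hubble CLIs are installed and Hubble is enabled:

# Agent and operator health across the cluster
cilium status

# End-to-end datapath validation (deploys temporary test workloads)
cilium connectivity test

# Live flow visibility without touching application code
hubble observe --namespace default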

Custom Resource Definitions (CRDs): Extending the Kubernetes API

CRDs extend the Kubernetes API with your own resource types. Once you create a CRD, you can kubectl get, kubectl apply, and kubectl delete instances of your custom type just like built-in resources. The custom resources are stored in etcd alongside native Kubernetes objects, benefit from the same RBAC, and participate in the same API machinery.
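
As a minimal sketch (the group, kind, and field names are invented for illustration), registering a CRD and using the new type looks like this:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
EOF

# The new type immediately works with the standard kubectl verbs
kubectl get backups.example.com -A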

When to Use CRDs

CRDs make sense when you need to represent application-specific concepts inside Kubernetes.

DaemonSets: Node-Level Workloads, System Agents, and Update Strategies

A DaemonSet ensures that a copy of a pod runs on every node in the cluster – or on a selected subset of nodes. When a new node joins the cluster, the DaemonSet controller automatically schedules a pod on it. When a node is removed, the pod is garbage collected.

This is the right abstraction for infrastructure that needs to run everywhere: log collectors, monitoring agents, network plugins, storage drivers, and security tooling.
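
A quick way to see this behavior in an existing cluster (kube-proxy, which kubeadm deploys as a DaemonSet, serves as the example):

# DaemonSets across all namespaces, with desired vs. ready pod counts
kubectl get daemonsets -A

# One kube-proxy pod per node, scheduled automatically as nodes join
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide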