Scenario: Migrating Workloads Between Kubernetes Clusters

This scenario starts when someone says: “we need to move workloads from cluster A to cluster B.” The reasons vary – Kubernetes version upgrade, cloud provider migration, region change, architecture consolidation, or a move from self-managed to a managed service. The complexity ranges from trivial (stateless services with GitOps) to significant (stateful workloads with zero-downtime requirements).

The core risk in any cluster migration is data loss for stateful workloads and downtime during the traffic cutover. Every decision in this plan aims to minimize both.

Velero Backup and Restore

Velero backs up Kubernetes resources and persistent volume data to object storage. It handles scheduled backups, on-demand snapshots, and restores to the same or a different cluster. It is the standard tool for Kubernetes disaster recovery.

Velero captures two things: Kubernetes API objects (stored as JSON) and persistent volume data (via cloud volume snapshots or file-level backup with Kopia).
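In practice, a cluster-to-cluster migration with Velero is a backup on the source followed by a restore on the target. A minimal sketch – the backup and namespace names here are illustrative, and both clusters must be configured against the same backup storage location:

```bash
# On cluster A: back up one namespace, including its volume data.
velero backup create app-migration \
  --include-namespaces my-app \
  --wait

# Confirm the backup completed and inspect what it captured.
velero backup describe app-migration --details

# On cluster B (pointed at the same object storage bucket):
# restore the namespace from that backup.
velero restore create --from-backup app-migration --wait

# Check restore status and any warnings or errors.
velero restore get
```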

Installation

You need an object storage bucket (S3, GCS, Azure Blob, or MinIO) and credentials that can write to it.
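As a sketch, an AWS-flavored install might look like the following – the bucket name, region, and plugin version are placeholders, and other providers swap in their own plugin and location config:

```bash
# Deploy Velero into the current cluster.
# --use-node-agent enables file-level (Kopia) backup for volumes
# without snapshot support.
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0 \
  --bucket my-velero-backups \
  --backup-location-config region=us-east-1 \
  --snapshot-location-config region=us-east-1 \
  --secret-file ./credentials-velero \
  --use-node-agent

# Verify the server is running and the storage location is Available.
kubectl get pods -n velero
velero backup-location get
```

Run the same install on both clusters so a backup written by cluster A is visible to cluster B.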

Choosing a Kubernetes Backup Strategy

Kubernetes clusters contain two fundamentally different types of state: cluster state (the Kubernetes objects themselves – Deployments, Services, ConfigMaps, Secrets, CRDs) and application data (the contents of Persistent Volumes). A complete backup strategy must address both. Most backup failures happen because teams back up one but not the other, or because they never test the restore process.

What Needs Backing Up

Before choosing tools, inventory what your cluster contains – both the API objects and the Persistent Volumes behind them.

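A minimal first pass with kubectl, assuming nothing beyond read access to the cluster:

```bash
# Cluster state: every namespaced resource type the API server knows
# about, so custom resources are not missed.
kubectl api-resources --verbs=list --namespaced -o name

# The CRDs themselves are cluster-scoped and easy to forget.
kubectl get crds

# Application data: every claim and the volumes behind them.
kubectl get pvc --all-namespaces
kubectl get pv
```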
Kubernetes Disaster Recovery Runbooks

These runbooks cover the incidents you are most likely to encounter in production Kubernetes environments. Each follows the same structure: detection, diagnosis, recovery, and prevention. Print these out, bookmark them, put them in your on-call wiki. When the alert fires at 2 AM, you want a checklist, not a tutorial.

Incident Response Framework

Every incident follows the same cycle:

  1. Detect – monitoring alert, user report, or kubectl showing unhealthy state
  2. Assess – determine scope and severity. Is it one pod, one node, or the entire cluster? (see the triage sketch after this list)
  3. Contain – stop the bleeding. Prevent the issue from spreading
  4. Recover – restore normal operation
  5. Post-mortem – document what happened, why, and how to prevent it
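For the detect and assess steps, a rough first-five-minutes triage with kubectl might look like this:

```bash
# Nodes first, then any pod that is not Running.
kubectl get nodes
kubectl get pods --all-namespaces --field-selector=status.phase!=Running

# Recent warnings often point straight at the failing component.
kubectl get events --all-namespaces --sort-by=.lastTimestamp | tail -20

# Control plane health: answers "one pod, one node, or the cluster?"
kubectl get --raw='/readyz?verbose'
```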

Runbook 1: Node Goes NotReady

Detection: Node condition changes to Ready=False or Unknown. Pods on the node are evicted and rescheduled after the eviction timeout (five minutes by default) if a controller such as a Deployment manages them. Monitoring alerts on node status.
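A sketch of the detection and first-look diagnosis commands – the node name is illustrative:

```bash
# Spot the NotReady node.
kubectl get nodes

# Read its conditions: Ready=False/Unknown plus MemoryPressure,
# DiskPressure, and PIDPressure narrow down the likely cause.
kubectl describe node worker-3

# Just the Ready condition, useful for scripting an alert check.
kubectl get node worker-3 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```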