Kubernetes Cluster Disaster Recovery: etcd Backup, Velero, and GitOps Recovery


Your cluster will fail. The question is whether you can rebuild it in hours or weeks. Kubernetes DR is not a single tool – it is a layered strategy combining etcd snapshots, resource-level backups, GitOps state, and tested recovery procedures.

The three layers of Kubernetes DR: etcd gives you raw cluster state, Velero gives you portable resource and volume backups, and GitOps gives you declarative rebuild capability. You need at least two of these.
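The etcd layer is the fastest to set up. A minimal sketch of a snapshot-and-verify routine, assuming kubeadm-style certificate paths (adjust for your distribution, and note that newer etcd releases move `snapshot status` into the separate `etcdutl` tool):

```shell
# Take an etcd snapshot on a control-plane node.
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-$(date +%Y%m%d-%H%M%S).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot is readable before shipping it off-node.
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-*.db --write-out=table
```

A snapshot that never leaves the control-plane node is not a backup; copy it to object storage or another host as the final step.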

Stateful Workload Disaster Recovery: Storage Replication, Database Operators, and Restore Ordering


Stateless workloads are easy to recover – redeploy from Git and they are running. Stateful workloads carry data that cannot be regenerated. Databases, message queues, object stores, and anything with a PersistentVolume need a deliberate DR strategy that goes beyond “we have Velero.”

The fundamental challenge: you must capture data at a point in time where the application state is consistent, replicate that data to a recovery site, and restore it in the correct order. Get any of these wrong and you recover corrupted data or a broken dependency chain.
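Point-in-time consistency is the part Velero cannot do for you automatically. One way to get it is Velero's backup hook annotations, which run a command in a container before and after the volume snapshot. A hypothetical sketch for a PostgreSQL pod (the container name, mount path, and use of `fsfreeze` are assumptions; many databases prefer a logical dump or a native operator backup instead):

```yaml
# Freeze the data volume around the snapshot so the files on disk
# are crash-consistent when Velero captures them.
apiVersion: v1
kind: Pod
metadata:
  name: postgres-0
  annotations:
    pre.hook.backup.velero.io/container: postgres
    pre.hook.backup.velero.io/command: '["/sbin/fsfreeze", "--freeze", "/var/lib/postgresql/data"]'
    post.hook.backup.velero.io/container: postgres
    post.hook.backup.velero.io/command: '["/sbin/fsfreeze", "--unfreeze", "/var/lib/postgresql/data"]'
```

The post hook is the dangerous half: if it fails to run, the volume stays frozen and the database stalls, so always alert on hook failures.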

From Empty Cluster to Production-Ready: The Complete Setup Sequence


This is the definitive operational plan for taking a fresh Kubernetes cluster and making it production-ready. Each phase builds on the previous one, with verification steps between phases and rollback notes where applicable. An agent should be able to follow this sequence end-to-end.

Estimated timeline: 5 days for a single operator. Phases 1-2 are blocking prerequisites. Phases 3-6 can partially overlap.


Phase 1 – Foundation (Day 1)

Everything else depends on a healthy cluster with proper namespacing and storage. Do not proceed until every verification step passes.
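A minimal verification pass for this phase might look like the following (the `smoke-test` namespace name is an assumption, used only to prove API writes work):

```shell
kubectl get nodes                        # every node should report Ready
kubectl get pods -n kube-system          # core components should be Running
kubectl get storageclass                 # confirm a default StorageClass exists
kubectl create namespace smoke-test \
  && kubectl delete namespace smoke-test # confirm API writes succeed
```

If any of these fail, fix the cluster before layering anything on top; every later phase assumes they pass.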

Scenario: Migrating Workloads Between Kubernetes Clusters


This scenario applies when someone says: “we need to move workloads from cluster A to cluster B.” The reasons vary – Kubernetes version upgrade, cloud provider migration, region change, architecture consolidation, or moving from self-managed to a managed service. The complexity ranges from trivial (stateless services with GitOps) to significant (stateful workloads with zero-downtime requirements).

The core risk in any cluster migration is data loss for stateful workloads and downtime during the traffic cutover. Every decision in this plan aims to minimize both.
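For the data-movement half, a common pattern is Velero backup on cluster A and restore on cluster B against the same object-storage bucket. A sketch, assuming both clusters already have Velero installed (the namespace and backup names are hypothetical, and the restore name suffix is generated by Velero):

```shell
# On cluster A: capture the namespaces being migrated.
velero backup create migrate-app --include-namespaces app-prod --wait

# On cluster B, pointed at the same bucket (ideally with the backup
# location set to read-only so cluster B cannot alter A's backups):
velero restore create --from-backup migrate-app --wait
velero restore describe <generated-restore-name>   # check phase, warnings, errors
```

This moves the data; the traffic cutover (DNS, load balancers, client configuration) is a separate step with its own rollback plan.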

Velero Backup and Restore: Disaster Recovery for Kubernetes


Velero backs up Kubernetes resources and persistent volume data to object storage. It handles scheduled backups, on-demand snapshots, and restores to the same or a different cluster. It is the standard tool for Kubernetes disaster recovery.

Velero captures two things: Kubernetes API objects (stored as JSON) and persistent volume data (via cloud volume snapshots or file-level backup with Kopia).

Installation

You need an object storage bucket (S3, GCS, Azure Blob, or MinIO) and write credentials.
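A typical install against an S3-compatible bucket looks like the following sketch (the bucket name, region, and plugin version are assumptions; check the Velero releases page for the plugin version matching your Velero version):

```shell
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0 \
  --bucket my-velero-backups \
  --secret-file ./credentials-velero \
  --backup-location-config region=us-east-1 \
  --use-node-agent \
  --default-volumes-to-fs-backup
```

`--use-node-agent` and `--default-volumes-to-fs-backup` enable the Kopia-based file-level path for volumes that lack cloud snapshot support; omit them if you rely entirely on native volume snapshots.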

Choosing a Kubernetes Backup Strategy: Velero vs Native Snapshots vs Application-Level Backups


Kubernetes clusters contain two fundamentally different types of state: cluster state (the Kubernetes objects themselves – Deployments, Services, ConfigMaps, Secrets, CRDs) and application data (the contents of Persistent Volumes). A complete backup strategy must address both. Most backup failures happen because teams back up one but not the other, or because they never test the restore process.

What Needs Backing Up

Before choosing tools, inventory what your cluster contains:
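A quick first pass at that inventory can be gathered with kubectl:

```shell
kubectl get crds                                  # custom resources in play
kubectl get pv,pvc --all-namespaces               # persistent volumes and claims
kubectl get secrets,configmaps --all-namespaces   # configuration and credentials
kubectl api-resources --verbs=list --namespaced -o name   # every namespaced resource type
```

The PV/PVC list in particular tells you how much of your backup problem is application data rather than cluster state.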

Kubernetes Disaster Recovery: Runbooks for Common Incidents


These runbooks cover the incidents you will encounter in production Kubernetes environments. Each follows the same structure: detection, diagnosis, recovery, and prevention. Print these out, bookmark them, put them in your on-call wiki. When the alert fires at 2 AM, you want a checklist, not a tutorial.

Incident Response Framework

Every incident follows the same cycle:

  1. Detect – monitoring alert, user report, or kubectl showing unhealthy state
  2. Assess – determine scope and severity. Is it one pod, one node, or the entire cluster?
  3. Contain – stop the bleeding. Prevent the issue from spreading
  4. Recover – restore normal operation
  5. Post-mortem – document what happened, why, and how to prevent it

Runbook 1: Node Goes NotReady

Detection: Node condition changes to Ready=False. Pods on the node are rescheduled (if using Deployments). Monitoring alerts on node status.
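A first-pass diagnosis might look like this (`<node-name>` is a placeholder for the affected node):

```shell
kubectl get nodes -o wide                 # which node, and how long it has been NotReady
kubectl describe node <node-name>         # Conditions: MemoryPressure, DiskPressure, PIDPressure, kubelet status
kubectl get events --field-selector involvedObject.name=<node-name>   # recent node events
```

The Conditions block in `describe node` usually narrows the cause to kubelet down, resource pressure, or a network partition before you ever need to SSH to the node.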