PostgreSQL Disaster Recovery#

A DR plan for PostgreSQL has three layers: streaming replication for fast failover, WAL archiving for point-in-time recovery, and a backup tool like pgBackRest for managing retention. Each layer covers a different failure mode – replication for server crashes, WAL archiving for data corruption that replicates, full backups for when everything goes wrong.
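
The WAL-archiving layer, for reference, is two settings on the primary. A minimal sketch assuming pgBackRest with a stanza named main (the tool choice and stanza name are illustrative):

# archive_mode requires a server restart; archive_command follows the pgBackRest archive-push pattern
psql -U postgres -c "ALTER SYSTEM SET archive_mode = 'on';"
psql -U postgres -c "ALTER SYSTEM SET archive_command = 'pgbackrest --stanza=main archive-push %p';"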

Streaming Replication for DR#

Synchronous vs Asynchronous – The Core Tradeoff#

Asynchronous replication is the default. The primary streams WAL to the standby but does not wait for confirmation before acknowledging commits to clients. This keeps the primary fast, but the standby can be seconds behind. If the primary dies, transactions that committed on the primary but never reached the standby are lost.
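
Synchronous replication closes that gap at the cost of commit latency: the primary waits for the standby to confirm before acknowledging the commit. A minimal sketch of switching one standby to synchronous, assuming its application_name is standby1 (an illustrative name):

# Run on the primary; 'standby1' must match the standby's application_name
psql -U postgres -c "ALTER SYSTEM SET synchronous_standby_names = 'standby1';"
psql -U postgres -c "SELECT pg_reload_conf();"

# Verify – sync_state should now show 'sync' for that standby
psql -U postgres -c "SELECT application_name, sync_state FROM pg_stat_replication;"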

Backup Verification and Restore Testing#

An untested backup is not a backup. It is a file that might contain your data and might be restorable. Teams discover the difference during an actual incident, when the database backup turns out to be corrupted, the restore takes 6 hours instead of the expected 30 minutes, or the backup process silently stopped running three weeks ago.

Backup verification is the practice of regularly proving that your backups contain valid data and can be restored within your required RTO.
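
One concrete shape this can take is a scheduled job that restores the latest dump into a scratch database and runs a sanity query; the dump file, database, and table names below are illustrative:

# Restore into a throwaway database and prove the data is readable
createdb -U postgres restore_test
pg_restore -U postgres -d restore_test --no-owner myapp.dump
psql -U postgres -d restore_test -c "SELECT count(*) FROM orders;"
dropdb -U postgres restore_test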

Kubernetes Cluster Disaster Recovery#

Your cluster will fail. The question is whether you can rebuild it in hours or weeks. Kubernetes DR is not a single tool – it is a layered strategy combining etcd snapshots, resource-level backups, GitOps state, and tested recovery procedures.

The three layers of Kubernetes DR: etcd gives you raw cluster state, Velero gives you portable resource and volume backups, and GitOps gives you declarative rebuild capability. You need at least two of these.
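
To make that concrete, the first two layers each come down to a single command in routine use; the snapshot path and backup name are illustrative, and the etcd command assumes TLS credentials are already exported in the environment:

# Layer 1: raw cluster state – an etcd snapshot
etcdctl snapshot save /backup/etcd-$(date +%F).db

# Layer 2: portable resource and volume backups with Velero
velero backup create nightly-full --wait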

CockroachDB Day-2 Operations#

Adding and Removing Nodes#

Adding a node: start a new cockroach process with --join pointing to existing nodes. CockroachDB automatically rebalances ranges to the new node.

cockroach start --insecure --store=node4-data \
  --advertise-addr=node4:26257 \
  --join=node1:26257,node2:26257,node3:26257

Watch rebalancing in the DB Console under Metrics > Replication, or query directly:

SELECT node_id, range_count, lease_count FROM crdb_internal.kv_store_status;

Decommissioning a node moves all range replicas off before shutdown, preventing under-replication:

cockroach node decommission 4 --insecure --host=node1:26257

# Monitor progress
cockroach node status --insecure --host=node1:26257 --decommission

Do not simply kill a node. Without decommissioning, CockroachDB treats it as a failure and waits 5 minutes before re-replicating. On Kubernetes with the operator, scale by changing spec.nodes in the CrdbCluster resource.
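
A sketch of that operator-based scale-out, assuming the CrdbCluster resource is named cockroachdb (adjust for your deployment):

# Scale to 4 nodes by changing spec.nodes on the CrdbCluster resource
kubectl patch crdbcluster cockroachdb --type=merge -p '{"spec":{"nodes":4}}'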

etcd Maintenance for Self-Managed Clusters#

etcd is the backing store for all Kubernetes cluster state. Every object – pods, services, secrets, configmaps – lives in etcd. If etcd is unhealthy, your cluster is unhealthy. If etcd data is lost, your cluster is gone. Managed Kubernetes services (EKS, GKE, AKS) handle etcd for you, but self-managed clusters require you to operate it directly.

All etcdctl commands below require TLS flags. Set these as environment variables to avoid repeating them:
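
# Typical values on a kubeadm control-plane node; the certificate paths are the kubeadm defaults – adjust for your install
export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key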

Infrastructure Disaster Recovery with Terraform#

Application disaster recovery is well-understood: replicate data, failover traffic, restore from backups. Infrastructure disaster recovery is different — you are recovering the platform that applications run on. If your Terraform state is lost, your VPC is deleted, or an entire region goes down, how do you rebuild?

This article covers the DR patterns specific to Terraform-managed infrastructure: protecting state, recovering from state loss, designing infrastructure for regional failover, and the runbooks that agents and operators need when things go wrong.
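
As a first taste of the state-protection pattern, two habits cover most of the risk: versioning on the remote state store and an ad-hoc local snapshot before risky changes. The sketch below assumes an S3 backend with an illustrative bucket name:

# Enable object versioning so every state revision is recoverable
aws s3api put-bucket-versioning --bucket my-terraform-state \
  --versioning-configuration Status=Enabled

# Pull a local copy of current state before a risky operation
terraform state pull > state-backup-$(date +%F).json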

MySQL Backup and Recovery#

A backup that has never been restored is not a backup. This guide covers the tools, when to use each, and how to verify your backups work.

Logical vs Physical Backups#

Logical backups export SQL statements. Portable across versions but slow for large databases. Physical backups copy raw InnoDB data files. Fast but tied to the same MySQL version. Physical backups are essential once your database exceeds a few hundred gigabytes.
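
A minimal sketch of each approach, assuming mysqldump for the logical path and Percona XtraBackup for the physical one (database name and target directory are illustrative):

# Logical: consistent InnoDB dump without blocking writes
mysqldump --single-transaction --routines --triggers myapp > myapp.sql

# Physical: copy raw data files while the server keeps running
xtrabackup --backup --target-dir=/backups/full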

PostgreSQL Backup and Recovery#

A backup you have never tested restoring is not a backup. This covers the main backup tools, when to use each, point-in-time recovery, and automation.

Logical Backups: pg_dump and pg_dumpall#

pg_dump exports a single database as SQL or a compressed binary format. It takes a consistent snapshot without blocking writes.

# Custom format (compressed, supports parallel restore)
pg_dump -U postgres -Fc -d myapp -f myapp.dump

# Directory format (parallel dump)
pg_dump -U postgres -Fd -j 4 -d myapp -f myapp_dir/

pg_dumpall exports every database plus cluster-wide objects. In practice, dump roles separately and per-database for flexibility:
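
# Cluster-wide objects (roles, tablespaces) only
pg_dumpall -U postgres --globals-only -f globals.sql

# One custom-format dump per database (loop is illustrative)
for db in $(psql -U postgres -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
  pg_dump -U postgres -Fc -d "$db" -f "$db.dump"
done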

SQLite for Production Use#

SQLite is not a toy database. It handles more read traffic than any other database engine in the world – every Android phone, iOS device, and major web browser runs SQLite. The question is whether your workload fits its concurrency model: single-writer, multiple-reader. If it does, SQLite eliminates an entire class of operational overhead with no server process, no network protocol, and no connection authentication.

WAL Mode#

Write-Ahead Logging (WAL) mode is the single most important configuration for production SQLite. In the default rollback journal mode, writers block readers and readers block writers. WAL removes this limitation.
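
Enabling it is a one-line change, and the setting is stored in the database file, so it persists across connections (the filename below is illustrative):

# Switch the database file to WAL mode (persists once set)
sqlite3 app.db 'PRAGMA journal_mode=WAL;'

# Confirm – should print "wal"
sqlite3 app.db 'PRAGMA journal_mode;'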

Upgrading Self-Managed Kubernetes Clusters with kubeadm#

Upgrading a kubeadm-managed cluster is a multi-step procedure that must be executed in a precise order. The control plane upgrades first, then worker nodes one at a time. Skipping steps or upgrading in the wrong order causes version skew violations that can break cluster communication.

This article provides the complete operational sequence. Execute each step in order. Do not skip ahead.

Version Skew Policy#

Kubernetes enforces strict version compatibility rules between components. Violating these rules results in undefined behavior – sometimes things work, sometimes the API server rejects requests, sometimes components silently fail.
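
Before planning an upgrade, check where every component currently sits; the commands below only read state:

# Client and API server versions
kubectl version

# kubelet version on every node
kubectl get nodes

# On a control-plane node: shows current versions and valid upgrade targets
kubeadm upgrade plan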