CockroachDB Debugging and Troubleshooting

Node Liveness Issues#

Every node must heartbeat its liveness record roughly every 4.5 seconds. If the record expires, the node is treated as suspect; after it has been unresponsive for server.time_until_store_dead (5 minutes by default) it is marked dead, and its ranges are re-replicated onto the remaining nodes.

cockroach node status --insecure --host=localhost:26257

Look at is_live. If a node shows false, check in order:

Process crashed. Check cockroach-data/logs/ for fatal or panic entries. OOM kills are the most common cause – check dmesg | grep -i oom on the host.

Network partition. The node runs but cannot reach peers. If cockroach node status succeeds locally but fails from other nodes, the problem is network-level (firewalls, security groups, DNS).
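A quick triage sketch combining these checks – paths assume the default --store=cockroach-data layout, and suspect-node is a placeholder hostname:

# On the suspect node: look for crash evidence in the CockroachDB log and the kernel log
grep -E 'fatal|panic' cockroach-data/logs/cockroach.log | tail -20
dmesg -T | grep -iE 'out of memory|oom-kill' | tail -5

# From a healthy node: confirm the suspect node's port 26257 is reachable at all
nc -zv suspect-node 26257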

CockroachDB Setup and Architecture

Architecture: What CockroachDB Actually Does Under the Hood#

CockroachDB is a distributed SQL database that stores data across multiple nodes while presenting a single logical database to clients. Understanding three concepts is essential before deploying it.

Ranges. All data is stored in key-value pairs, sorted by key. CockroachDB splits this sorted keyspace into contiguous chunks called ranges, each targeting 512 MiB by default. Every SQL table, index, and system table maps to one or more ranges. When a range grows beyond the threshold, it splits automatically.
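You can observe this mapping directly from SQL. A minimal sketch, assuming a table named orders exists:

-- List the ranges backing a single table
SHOW RANGES FROM TABLE orders;

-- The 512 MiB split threshold is the range_max_bytes field of the default zone configuration
SHOW ZONE CONFIGURATION FROM RANGE default;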

Database Performance Investigation Runbook#

When a database is slow, resist the urge to immediately tune configuration parameters. Follow this sequence: identify what is slow, understand why, then fix the specific bottleneck. Most performance problems are caused by missing indexes or a single bad query, not global configuration issues.

Phase 1 – Identify Slow Queries#

The first step is always finding which queries are consuming the most time.

PostgreSQL: pg_stat_statements#

Enable the extension if not already loaded:
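A typical sequence – pg_stat_statements must be preloaded via shared_preload_libraries, which requires a restart:

# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

Once it has collected data, pull the top consumers by total time (column names assume PostgreSQL 13 or later; older releases use total_time and mean_time):

SELECT calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       left(query, 80)                    AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;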

Docker Compose Validation Stacks: Templates for Multi-Service Testing

Docker Compose Validation Stacks#

Docker Compose validates multi-service architectures without Kubernetes overhead. It answers the question: do these services actually work together? Containers start, connect, and communicate – or they fail, giving you fast feedback before you push to a cluster.

This article provides complete Compose stacks for four common validation scenarios. Each includes the full docker-compose.yml, health check scripts, and teardown procedures. The pattern for using them is always the same: clone the template, customize for your services, bring it up, validate, capture results, bring it down.
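The loop itself is a few commands. A sketch – file and script names here are illustrative, not part of any particular template:

# Bring the stack up and wait until every service's healthcheck reports healthy
docker compose -f docker-compose.validation.yml up -d --wait

# Confirm state, run the template's checks, and capture the output
docker compose -f docker-compose.validation.yml ps
./healthcheck.sh | tee validation-results.txt

# Tear down, including named volumes, so the next run starts clean
docker compose -f docker-compose.validation.yml down -v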

Planning and Executing Database Migrations: Schema Changes, Data Migrations, and Zero-Downtime Patterns

Planning and Executing Database Migrations#

Database migrations are the highest-risk routine operations most teams perform. A bad migration can cause downtime, data loss, or application errors that cascade across every service that touches the affected tables. This operational sequence walks through the assessment, planning, execution, and rollback of database migrations from simple column additions to full platform changes.

Phase 1 – Assessment#

Step 1: Classify the Migration#

Every migration falls into one of three categories, each with a different risk profile:

PostgreSQL Debugging#

When PostgreSQL breaks, it usually falls into a handful of patterns. This is a reference for diagnosing each one with specific queries and commands.

Connection Refused#

Work through these in order:

1. Is PostgreSQL running?

sudo systemctl status postgresql-16

2. Is it listening on the right address?

ss -tlnp | grep 5432

If it shows 127.0.0.1:5432 but you need remote access, set listen_addresses = '*' in postgresql.conf and restart – this setting is not picked up by a reload.

3. Does pg_hba.conf allow the connection? A rejected client shows up in the server log as no pg_hba.conf entry for host.
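As a sketch, an entry that would admit an application subnet – the database, role, and CIDR here are placeholders, and order matters because the first matching line wins:

# pg_hba.conf
host  appdb  app_user  10.0.0.0/24  scram-sha-256

pg_hba.conf is re-read on reload, so no restart is needed:

SELECT pg_reload_conf();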

PostgreSQL Performance Tuning#

Most PostgreSQL performance problems come from missing indexes, bad query plans, connection overhead, or table bloat. This covers how to diagnose each one.

Reading EXPLAIN ANALYZE#

EXPLAIN shows the query plan. EXPLAIN ANALYZE actually executes the query and shows real timings.

EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42 AND status = 'pending';
Index Scan using idx_orders_customer on orders  (cost=0.43..8.45 rows=1 width=120) (actual time=0.023..0.025 rows=3 loops=1)
  Index Cond: (customer_id = 42)
  Filter: (status = 'pending'::text)
  Rows Removed by Filter: 12
Planning Time: 0.152 ms
Execution Time: 0.048 ms

What to look for: a Seq Scan on a large table usually means a missing index. Rows Removed by Filter means the index fetched rows that a composite index could have excluded. An actual row count far from the planner's estimate means stale statistics – run ANALYZE tablename. A Nested Loop with a high loops count on a large inner relation often means the planner should be using a hash join; check the inner table's indexes and statistics.
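For the plan above, the likely fix is a composite index so the status predicate is resolved inside the index instead of by the follow-up filter. A minimal sketch – the index name is illustrative:

-- CONCURRENTLY builds the index without blocking writes (it cannot run inside a transaction)
CREATE INDEX CONCURRENTLY idx_orders_customer_status ON orders (customer_id, status);

-- Re-run the query; Rows Removed by Filter should drop to zero
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42 AND status = 'pending';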

PostgreSQL Replication#

Streaming replication gives you a full binary copy for high availability and read scaling. Logical replication gives you selective table-level syncing between databases that can run different PostgreSQL versions.

Streaming Replication Setup#

Configure the Primary#

# postgresql.conf
wal_level = replica
max_wal_senders = 5
wal_keep_size = 1GB

Create a replication role and allow connections:

CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'repl-secret';
# pg_hba.conf
host  replication  replicator  10.0.0.0/8  scram-sha-256

Initialize the Standby#

sudo systemctl stop postgresql-16
sudo rm -rf /var/lib/postgresql/16/main/*
pg_basebackup -h primary-host -U replicator -D /var/lib/postgresql/16/main \
  --checkpoint=fast --wal-method=stream -R -P
sudo chown -R postgres:postgres /var/lib/postgresql/16/main
sudo systemctl start postgresql-16

The -R flag creates standby.signal and writes connection info to postgresql.auto.conf. The standby now continuously receives and replays WAL from the primary, accepting read-only queries by default.
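To confirm replication is healthy, as a sketch – run the first query on the primary and the second on the standby:

-- Primary: one row per connected standby, with how far behind replay is
SELECT client_addr, state,
       pg_wal_lsn_diff(sent_lsn, replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;

-- Standby: true means it is in recovery and applying WAL
SELECT pg_is_in_recovery();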

PostgreSQL Setup and Configuration#

Every PostgreSQL deployment boils down to three things: get the binary running, configure who can connect, and tune the memory settings.

Installation Methods#

Package Managers#

On Debian/Ubuntu, use the official PostgreSQL APT repository:

sudo apt install -y postgresql-common
sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh
sudo apt install -y postgresql-16

On macOS: brew install postgresql@16 && brew services start postgresql@16

On RHEL/Fedora:

sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo dnf install -y postgresql16-server
sudo /usr/pgsql-16/bin/postgresql-16-setup initdb
sudo systemctl enable --now postgresql-16

Config files live at /etc/postgresql/16/main/ (Debian) or /var/lib/pgsql/16/data/ (RHEL).
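A quick post-install sanity check, as a sketch – the postgres OS user is created by the packages on both distro families:

# Confirm the server answers and report where its config and data actually live
sudo -u postgres psql -c 'SHOW server_version;'
sudo -u postgres psql -c 'SHOW config_file;'
sudo -u postgres psql -c 'SHOW data_directory;'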

Time-Series Database Selection and Operations#

Time-series databases optimize for a specific access pattern: high-volume writes of timestamped data points, queries that aggregate over time ranges, and automatic expiration of old data. Choosing the right one depends on your data model, query patterns, retention requirements, and operational constraints.

When You Need a Time-Series Database#

A dedicated time-series database is justified when you have high write throughput (thousands to millions of data points per second), queries that are predominantly time-range aggregations, and data that has a defined retention period. Common use cases: infrastructure metrics, application performance monitoring, IoT sensor data, financial tick data, and log analytics.