CockroachDB Debugging and Troubleshooting

Node Liveness Issues#

Every node must renew its liveness record every 4.5 seconds. A node that fails to renew is marked suspect; after server.time_until_store_dead (default 5 minutes) it is marked dead, triggering re-replication of its ranges.

cockroach node status --insecure --host=localhost:26257

Look at is_live. If a node shows false, check in order:

Process crashed. Check cockroach-data/logs/ for fatal or panic entries. OOM kills are the most common cause – check dmesg | grep -i oom on the host.

Network partition. The node runs but cannot reach peers. If cockroach node status succeeds locally but fails from other nodes, the problem is network-level (firewalls, security groups, DNS).
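
To tell the two apart, probe the suspect node from a healthy peer. A sketch, assuming the default RPC port 26257 and an illustrative hostname:

# From a healthy peer: is the suspect node's port reachable at all?
nc -vz -w 3 suspect-node.internal 26257

# If the port is open, ask the node for its own view of the cluster
cockroach node status --insecure --host=suspect-node.internal:26257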

Database Performance Investigation Runbook

Database Performance Investigation Runbook#

When a database is slow, resist the urge to immediately tune configuration parameters. Follow this sequence: identify what is slow, understand why, then fix the specific bottleneck. Most performance problems are caused by missing indexes or a single bad query, not global configuration issues.

Phase 1 – Identify Slow Queries#

The first step is always finding which queries are consuming the most time.

PostgreSQL: pg_stat_statements#

Enable the extension if not already loaded:
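
A minimal sketch, assuming PostgreSQL 13 or later – older versions name the timing columns total_time and mean_time instead of total_exec_time and mean_exec_time:

# postgresql.conf -- loading the module requires a restart
shared_preload_libraries = 'pg_stat_statements'

-- then, in each database you want to inspect:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- top 10 statements by total execution time
SELECT calls, total_exec_time, mean_exec_time, query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;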

Debugging GitHub Actions: Triggers, Failures, Secrets, Caching, and Performance

Debugging GitHub Actions#

When a GitHub Actions workflow fails or does not behave as expected, the problem falls into a few predictable categories. This guide covers each one with the diagnostic steps and fixes.

Workflow Not Triggering#

The most common GitHub Actions “bug” is a workflow that never runs.

Check the event and branch filter. A push trigger with branches: [main] will not fire for pushes to feature/xyz. For a pull_request trigger, the branches filter matches the PR’s base branch, not the head branch:
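
A sketch of both filters, with illustrative branch names:

on:
  push:
    branches: [main]      # fires only for pushes to main
  pull_request:
    branches: [main]      # fires for PRs whose base is main, whatever the head branch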

Jenkins Debugging: Diagnosing Stuck Builds, Pipeline Failures, Performance Issues, and Kubernetes Agent Problems

Jenkins Debugging#

Jenkins failures fall into a few categories: builds stuck waiting, cryptic pipeline errors, performance degradation, and Kubernetes agent pods that refuse to launch.

Builds Stuck in Queue#

When a build sits in the queue and never starts, check the queue tooltip in the UI – it tells you why. Common causes:

No agents with matching labels. The pipeline requests agent { label 'docker-arm64' } but no agent has that label. Check Manage Jenkins > Nodes to see available labels.
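
To enumerate labels without opening every node's page, the Script Console (Manage Jenkins > Script Console) can list them; a sketch, assuming Overall/Administer permission:

// Print each node's name and its assigned labels
Jenkins.instance.computers.each { c ->
  println "${c.name ?: 'built-in'}: ${c.node?.labelString}"
}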

Linux Debugging Essentials for Infrastructure

Debugging Workflow#

Start broad, narrow down. Most problems fall into five categories: service not running, resource exhaustion, full disk, network failure, or kernel issue. Work through them in order: service, resources, disk, network, kernel logs.

Services: systemctl and journalctl#

When a service is misbehaving, start with its status:

systemctl status nginx

This shows whether the service is active, its PID, its last few log lines, and how long it has been running. If the service keeps restarting, the uptime will be suspiciously short.
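
The status output only shows the last few log lines; journalctl holds the unit's full history. The unit name and filters here are examples:

# Last 50 lines for the unit, without the pager
journalctl -u nginx -n 50 --no-pager

# Only priority warning and above since the current boot
journalctl -u nginx -p warning -b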

Load Testing Strategies: Tools, Patterns, and CI Integration

Why Load Test#

Performance problems discovered in production are expensive. A service that handles 100 requests per second in dev might collapse at 500 in production because connection pools are exhausted, garbage collection pauses compound, or a downstream service starts throttling. Load testing reveals these limits before users do.

Load testing answers specific questions: What is the maximum throughput before errors start? At what concurrency does latency degrade beyond acceptable limits? Can the system sustain expected traffic for hours without resource leaks? Will a traffic spike cause cascading failures?

MySQL Performance Tuning

MySQL Performance Tuning#

Performance tuning comes down to three things: making queries touch fewer rows (indexes), keeping hot data in memory (buffer pool), and finding the slow queries (slow query log, Performance Schema).

Reading EXPLAIN Output#

EXPLAIN shows MySQL’s query execution plan. Always use EXPLAIN ANALYZE (MySQL 8.0.18+) for actual runtime stats, not just estimates.

EXPLAIN ANALYZE
SELECT u.name, COUNT(o.id) as order_count
FROM users u
JOIN orders o ON o.user_id = u.id
WHERE u.created_at > '2025-01-01'
GROUP BY u.id;

Key columns in the tabular EXPLAIN output:

type – the access type. ALL means a full table scan; range, ref, and eq_ref use an index; const is a single-row lookup by primary or unique key.

key – the index MySQL actually chose; possible_keys lists the candidates it considered.

rows – the estimated number of rows examined. For joins, multiply across tables to gauge total work.

Extra – watch for Using filesort and Using temporary, both signs that an index could handle the sorting or grouping instead.

PostgreSQL Performance Tuning

PostgreSQL Performance Tuning#

Most PostgreSQL performance problems come from missing indexes, bad query plans, connection overhead, or table bloat. This covers how to diagnose each one.

Reading EXPLAIN ANALYZE#

EXPLAIN shows the query plan. EXPLAIN ANALYZE actually executes the query and shows real timings.

EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42 AND status = 'pending';
Index Scan using idx_orders_customer on orders  (cost=0.43..8.45 rows=1 width=120) (actual time=0.023..0.025 rows=3 loops=1)
  Index Cond: (customer_id = 42)
  Filter: (status = 'pending'::text)
  Rows Removed by Filter: 12
Planning Time: 0.152 ms
Execution Time: 0.048 ms

What to look for:

Seq Scan on a large table means a missing index.

Rows Removed by Filter means the index fetched extra rows that a composite index would eliminate (see the sketch below).

actual rows far from estimated rows means stale statistics – run ANALYZE tablename;.

Nested Loop with a high loops count usually wants a hash join; check the inner table’s indexes.
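
For the plan above, a composite index covers both predicates so the filter step disappears entirely; a sketch (the index name is illustrative):

CREATE INDEX idx_orders_customer_status ON orders (customer_id, status);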

Advanced PromQL: Performance, Cardinality, and Complex Query Patterns

Cardinality Explosion#

Cardinality is the number of unique time series Prometheus tracks. Every unique combination of metric name and label key-value pairs creates a separate series. A metric with 3 labels, each having 100 possible values, generates up to 1,000,000 series. In practice, cardinality explosions are the single most common way to kill a Prometheus instance.

The usual culprits are labels containing user IDs, request paths with embedded IDs (like /api/users/a3f7b2c1), session tokens, trace IDs, or any unbounded value set. A seemingly innocent label like path on an HTTP metric becomes catastrophic when your API has RESTful routes with UUIDs in the path.
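
Prometheus can report its own worst offenders. Two common inspection queries, sketched here with an illustrative metric name and label (the first touches every series, so run it sparingly):

# Top 10 metric names by series count
topk(10, count by (__name__)({__name__=~".+"}))

# How many series one suspect label contributes to a single metric
count(count by (path) (http_requests_total))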

Linux Performance Tuning: sysctl, ulimits, I/O Schedulers, and Kernel Parameters

sysctl: Kernel Parameter Tuning#

The sysctl interface exposes kernel parameters that control how Linux manages memory, networking, file systems, and processes. Changes take effect immediately but are lost on reboot unless persisted.

Memory Parameters#

# Reduce swap aggressiveness (default is 60, range 0-100)
# Lower values make the kernel prefer reclaiming page cache over swapping
# Set to 10 for database servers -- swapping destroys database performance
sysctl -w vm.swappiness=10

# Overcommit behavior
# 0 = heuristic overcommit (default, kernel estimates if there is enough memory)
# 1 = always overcommit (never refuse malloc -- dangerous but used by Redis)
# 2 = strict overcommit (never allocate more than swap + ratio*physical)
sysctl -w vm.overcommit_memory=0

The vm.swappiness parameter is one of the most impactful settings for database servers. The default of 60 means the kernel will fairly aggressively swap application memory to disk in favor of filesystem cache. For databases that manage their own caching (PostgreSQL shared_buffers, MySQL innodb_buffer_pool), this is counterproductive – the database’s carefully managed cache gets swapped out to make room for OS-level cache the database does not use.
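
To persist these settings across reboots, write them to a file under /etc/sysctl.d/ and reload. A sketch; the file name is illustrative:

# /etc/sysctl.d/90-database.conf
vm.swappiness = 10
vm.overcommit_memory = 0

# Apply every sysctl configuration file without rebooting
sysctl --system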