On-Call Rotation Design

On-Call Is a System, Not a Schedule#

On-call done wrong burns out engineers and degrades reliability simultaneously. Exhausted responders make worse decisions, and teams that dread on-call avoid owning production systems. Done right, on-call is sustainable, well-compensated, and generates signal that drives real reliability improvements.

Rotation Schedule Types#

Weekly Rotation#

Each engineer is primary on-call for one full week, Monday to Monday. This is the simplest model and works for teams of 5 or more in a single timezone.
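
A rotation this simple maps directly onto the calendar. The sketch below picks the week's primary from a fixed roster using the ISO week number; the engineer names are placeholders, and in practice the schedule usually lives in PagerDuty or Opsgenie, but the underlying logic is the same.

# Pick this week's primary from a fixed roster using the ISO week number.
# Names are placeholders; handoff falls on Monday because ISO weeks start Monday.
roster=(alice bob carol dana eve)
week=$(date +%V)                                   # ISO week number, 01-53
primary=${roster[$(( 10#$week % ${#roster[@]} ))]}
echo "Primary on-call for week $week: $primary"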

Game Day and Tabletop Exercise Planning

Why Run Exercises#

Runbooks that have never been tested are fiction. Failover procedures that have never been executed are hopes. Game days and tabletop exercises convert assumptions about system resilience into verified facts – or reveal that those assumptions were wrong before a real incident does.

The value is not just finding technical gaps. Exercises expose process gaps: unclear escalation paths, missing permissions, outdated contact lists, communication breakdowns between teams. These are invisible until a simulated failure forces people to actually follow the documented procedure.
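
A game day does not need elaborate tooling to surface these gaps. A minimal fault injection is often enough to test whether alerts fire, whether the runbook can actually be followed, and whether escalation works; the namespace, label selector, and deployment name below are assumptions for this sketch.

# Simulate sudden loss of a service's pods, then observe detection and recovery.
# Namespace, label selector, and deployment name are placeholders.
kubectl -n staging delete pod -l app=checkout --wait=false

# Did an alert fire? Does the runbook's recovery procedure match reality?
kubectl -n staging rollout status deployment/checkout --timeout=5m
kubectl -n staging get events --sort-by=.lastTimestamp | tail -n 20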

ArgoCD Notifications: Slack, Teams, Webhooks, and Custom Triggers

ArgoCD Notifications#

ArgoCD Notifications is a built-in component (bundled with ArgoCD since version 2.3) that monitors applications and sends alerts when specific events occur – sync succeeded, sync failed, health degraded, new version deployed. Before notifications existed, teams polled the ArgoCD UI or built custom watchers. The built-in controller eliminates that.

Architecture#

ArgoCD Notifications runs as a controller alongside the ArgoCD application controller. It watches Application resources for state changes and matches them against triggers. When a trigger fires, it renders a template and sends it through a configured service (Slack, Teams, webhook, email, etc.).
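
Applications opt in to a trigger/service pair through an annotation, while the triggers, templates, and service credentials live in the argocd-notifications-cm ConfigMap and its companion secret. As a minimal sketch (assuming a Slack service is already configured, and with placeholder application and channel names), subscribing an app to sync-failure alerts looks like this:

# Subscribe an Application to the built-in on-sync-failed trigger via Slack.
# "my-app" and "deploy-alerts" are placeholders for this sketch.
kubectl -n argocd annotate applications.argoproj.io my-app \
  notifications.argoproj.io/subscribe.on-sync-failed.slack=deploy-alerts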

Blameless Post-Mortem Practices: Incident Timelines, Root Cause Analysis, and Organizational Learning

What a Post-Mortem Is and Is Not#

A post-mortem is a structured analysis of an incident conducted after the incident is resolved. Its purpose is to understand what happened, why it happened, and what changes will prevent it from happening again. It is not a blame assignment exercise. It is not a performance review. It is not a formality to check a compliance box.

The output of a good post-mortem is a set of concrete action items that improve the system. Not the humans – the system. If your post-mortem concludes with “engineer X should have been more careful,” you have failed at the process. Humans make mistakes. Systems should be designed so that human mistakes do not cause outages, and when they do, the blast radius is contained.

Change Management for Infrastructure

Why Change Management Matters#

Most production incidents trace back to a change. Code deployments, configuration updates, infrastructure modifications, database migrations – each introduces risk. Change management reduces that risk through structure, visibility, and accountability. The goal is not to prevent change but to make change safe, visible, and reversible.

Change Request Process#

Every infrastructure change flows through a structured request. The formality scales with risk, but the basic elements remain constant.
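
One lightweight implementation, assuming the infrastructure is managed in a Git repository, is to make the pull request the change request: the description carries the risk assessment, rollback plan, and change window, and review approval is the sign-off. The gh invocation below is a sketch; the labels, reviewer, and template file name are assumptions.

# Open a change request as a pull request, with risk, rollback plan, and
# change window captured in the body. Labels and file names are placeholders.
gh pr create \
  --title "Change: resize production database instance" \
  --body-file change-request.md \
  --label change-request --label risk-medium \
  --reviewer platform-oncall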

Incident Management Lifecycle

Incident Lifecycle Overview#

An incident is an unplanned disruption to a service requiring coordinated response. The lifecycle has six phases: detection, triage, communication, mitigation, resolution, and review. Each has defined actions, owners, and exit criteria.

Phase 1: Detection#

Incidents are detected through three channels. Automated monitoring is best – alerts fire on SLO violations or error-rate thresholds before users notice. Internal reports come from other teams noticing issues with dependencies. Customer reports are the worst case – if users detect your incidents first, your observability has gaps.
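
Automated detection usually comes down to an alert rule evaluating an SLO or error-rate expression. The same query can be run by hand to confirm what an alert saw; the sketch below assumes a Prometheus server, a conventional http_requests_total metric, and a placeholder job label.

# Ratio of 5xx responses to all responses over the last 5 minutes.
# Server URL, metric name, job label, and threshold are placeholders.
curl -sG 'http://prometheus.internal:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(http_requests_total{job="api",code=~"5.."}[5m])) / sum(rate(http_requests_total{job="api"}[5m]))' \
  | jq -r '.data.result[0].value[1]'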

Post-Mortem Action Item Tracking

The Action Item Problem#

Post-mortem reviews produce action items. Teams agree on what needs to change. Then weeks pass, priorities shift, and items quietly decay into a backlog nobody checks. The next incident hits the same root cause, and the post-mortem produces the same action items again.

Reviews of recurring incidents consistently show the same thing: the root cause was identified in a previous post-mortem, and the corresponding action item was never completed. Action item tracking is the mechanism by which incidents make systems more reliable instead of just more documented.
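
Tracking only works when action items live in the same system as the rest of the team's work and can be queried on a schedule. A minimal sketch with the GitHub CLI (repository, labels, and assignee are assumptions) creates one issue per action item and makes the open set easy to review:

# One issue per action item, labeled so it cannot hide in the general backlog.
# Repository, label, title, and assignee are placeholders for this sketch.
gh issue create --repo example-org/platform \
  --title "Post-mortem action: add circuit breaker to payments client" \
  --label postmortem-action \
  --assignee alice

# Weekly review: what is still open from past post-mortems?
gh issue list --repo example-org/platform --label postmortem-action --state open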

Status Page Setup and Management

Purpose of a Status Page#

A status page is the single source of truth for service health. It communicates current status, provides historical reliability data, and sets expectations during incidents through regular updates. A well-maintained status page reduces support tickets during incidents, builds customer trust, and gives teams a structured communication channel.

Platform Options#

Statuspage.io (Atlassian)#

The most widely adopted hosted solution. Integrates with the Atlassian ecosystem.

# Create a component
curl -X POST https://api.statuspage.io/v1/pages/${PAGE_ID}/components \
  -H "Authorization: OAuth ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"component": {"name": "API", "status": "operational", "showcase": true}}'

# Create an incident
curl -X POST https://api.statuspage.io/v1/pages/${PAGE_ID}/incidents \
  -H "Authorization: OAuth ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"incident": {"name": "Elevated Error Rates", "status": "investigating",
       "impact_override": "minor", "component_ids": ["id"]}}'

Strengths: Highly reliable, subscriber notifications built-in, custom domains, API-first. Weaknesses: Expensive ($399+/month business plan), limited customization, component limits on lower tiers.