Setting Up Multi-Environment Infrastructure: Dev, Staging, and Production

Setting Up Multi-Environment Infrastructure: Dev, Staging, and Production#

Running a single environment is straightforward. Running three that drift apart silently is where teams lose weeks debugging “it works in dev.” This operational sequence walks through setting up dev, staging, and production environments that stay consistent where it matters and diverge only where intended.

Phase 1 – Environment Strategy#

Step 1: Define Environments#

Each environment serves a distinct purpose:

  • Dev: Rapid iteration. Developers deploy frequently, break things, and recover quickly. Data is disposable. Resources are minimal.
  • Staging: Production mirror. Same Kubernetes version, same network policies, same resource quotas. External services point to staging endpoints. Used for integration testing and pre-release validation.
  • Production: Real users, real data. Changes go through approval gates. Monitoring is comprehensive and alerting reaches on-call engineers.

Step 2: Isolation Model#

Decision point: Separate clusters per environment versus namespaces in a shared cluster.
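As a sketch of the shared-cluster option, one namespace per environment with its own resource quota (namespace names and quota values are illustrative; separate clusters trade cost for stronger blast-radius isolation):

# Shared-cluster option: one namespace per environment, each with its own quota.
for env in dev staging prod; do
  kubectl create namespace "$env" --dry-run=client -o yaml | kubectl apply -f -
done

# Example quota for the dev namespace; staging and production get their own values.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
EOF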

SIEM and Security Log Correlation

SIEM and Security Log Correlation#

A SIEM collects logs from across your infrastructure, normalizes them, and applies correlation rules to detect threats that no single log source would reveal. A brute force attempt is visible in auth logs. Lateral movement after successful brute force requires correlating auth logs with network flow data and process execution logs. The SIEM makes that correlation possible.
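As a toy illustration of the idea only (log format and threshold are assumptions, and a real SIEM expresses this as a correlation rule joining auth, network flow, and process logs rather than a shell pipeline), the simplest single-source case looks like this:

# Toy sketch: source IPs with many failed SSH logins that later logged in successfully.
grep "Failed password" /var/log/auth.log \
  | awk '{print $(NF-3)}' | sort | uniq -c \
  | awk '$1 > 20 {print $2}' > /tmp/bruteforce_ips.txt

grep "Accepted " /var/log/auth.log \
  | awk '{print $(NF-3)}' | sort -u \
  | grep -xFf /tmp/bruteforce_ips.txt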

Log Sources#

The value of a SIEM depends entirely on the logs you feed it. Missing a log source means missing the attacks that source would reveal.

Software Bill of Materials and Vulnerability Management

What Is an SBOM#

A Software Bill of Materials is a machine-readable inventory of every component in a software artifact. It lists packages, libraries, versions, licenses, and dependency relationships. An SBOM answers the question: what exactly is inside this container image, binary, or repository?

When a new CVE drops, organizations without SBOMs scramble to determine which systems are affected. Organizations with SBOMs query a database and have the answer in seconds.
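A sketch of that query, assuming SBOMs were generated at build time and stored per artifact (the file name and CVE identifier below are placeholders); grype is one scanner that can match a stored SBOM against the current vulnerability database:

# Does this artifact contain the newly published CVE? Seconds, not a rebuild.
grype sbom:./api-service-1.4.2.sbom.json -o json \
  | jq -r '.matches[].vulnerability.id' \
  | grep -c 'CVE-2025-12345'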

Software Supply Chain Security

Why Supply Chain Security Matters#

Your container image contains hundreds of dependencies you did not write. A compromised base image or malicious package runs with your application’s full permissions. Supply chain attacks target the build process because it is often less guarded than runtime.

The goal is to answer three questions for every artifact: what is in it (SBOM), who built it (signing), and how was it built (provenance).
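As a sketch of the second and third questions using cosign (image reference and key paths are placeholders; keyless signing via OIDC is also possible):

# Who built it: sign the image and verify the signature before deploy.
cosign sign --key cosign.key registry.example.com/team/app:1.4.2
cosign verify --key cosign.pub registry.example.com/team/app:1.4.2

# How it was built: attach a provenance attestation produced by the build system.
cosign attest --key cosign.key --type slsaprovenance \
  --predicate provenance.json registry.example.com/team/app:1.4.2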

SBOM Generation#

A Software Bill of Materials lists every dependency in an artifact. Two tools dominate.
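A generation sketch with syft, one widely used option, offered here as an illustration (image reference and output paths are placeholders):

# Generate an SBOM for a container image in CycloneDX JSON; SPDX JSON is the other common format.
syft registry.example.com/team/app:1.4.2 -o cyclonedx-json > app-1.4.2.sbom.json

# Same tool, different target: scan a source directory instead of an image.
syft dir:. -o spdx-json > repo.sbom.json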

SQLite for Production Use

SQLite for Production Use#

SQLite is not a toy database. It handles more read traffic than any other database engine in the world – every Android phone, iOS device, and major web browser runs SQLite. The question is whether your workload fits its concurrency model: single-writer, multiple-reader. If it does, SQLite eliminates an entire class of operational overhead with no server process, no network protocol, and no connection authentication.

WAL Mode#

Write-Ahead Logging (WAL) mode is the single most important configuration for production SQLite. In the default rollback journal mode, writers block readers and readers block writers. WAL removes this limitation.
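A minimal sketch from the sqlite3 CLI (the database path is a placeholder). Note that journal_mode=WAL persists in the database file, while busy_timeout and synchronous are per-connection settings the application must set on each connection:

sqlite3 app.db <<'SQL'
PRAGMA journal_mode=WAL;      -- persists; readers and writers no longer block each other
PRAGMA busy_timeout=5000;     -- wait up to 5s for a lock instead of failing immediately
PRAGMA synchronous=NORMAL;    -- common pairing with WAL; trades some durability for throughput
SQL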

SRE Fundamentals: SLOs, Error Budgets, and Reliability Practices

The SRE Model#

Site Reliability Engineering treats operations as a software engineering problem. Instead of a wall between developers who ship features and operators who keep things running, SRE defines reliability as a feature – one that can be measured, budgeted, and traded against velocity. The core insight is that 100% reliability is the wrong target. Users cannot tell the difference between 99.99% and 100%, but the engineering cost to close that gap is enormous. SRE makes this tradeoff explicit through service level objectives.
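The arithmetic behind an error budget makes the tradeoff concrete. Over a 30-day window, a few example targets translate to the following allowed downtime (targets are illustrative):

# Allowed downtime per 30-day window for example SLO targets.
for slo in 0.99 0.999 0.9999; do
  awk -v s="$slo" 'BEGIN { printf "SLO %.2f%% -> %.1f minutes of error budget\n", s*100, (1-s)*30*24*60 }'
done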

StatefulSets and Persistent Storage: Stable Identity, PVCs, and StorageClasses

StatefulSets and Persistent Storage#

Deployments treat pods as interchangeable. StatefulSets do not – each pod gets a stable hostname, a persistent volume, and an ordered startup sequence. This is what you need for databases, message queues, and any workload where identity matters.

StatefulSet vs Deployment#

Feature                  | Deployment                      | StatefulSet
Pod names                | Random suffix (web-api-6d4f8)   | Ordinal index (postgres-0, postgres-1)
Startup order            | All at once                     | Sequential (0, then 1, then 2)
Stable network identity  | No                              | Yes, via headless Service
Persistent storage       | Shared or none                  | Per-pod via volumeClaimTemplates
Scaling down             | Removes random pods             | Removes highest ordinal first

Use StatefulSets when your application needs any of: stable hostnames, ordered deployment/scaling, or per-pod persistent storage. Common examples: PostgreSQL, MySQL, Redis Sentinel, Kafka, ZooKeeper, Elasticsearch.
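A minimal sketch of the pattern (image, storage class, and sizes are illustrative; this shows the StatefulSet mechanics only and does not configure database replication):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None            # headless Service: stable DNS names (postgres-0.postgres, ...)
  selector:
    app: postgres
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: changeme    # placeholder; use a Secret in practice
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one PVC per pod: data-postgres-0, data-postgres-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard
        resources:
          requests:
            storage: 10Gi
EOF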

Static Validation Patterns: Infrastructure Validation Without a Cluster

Static Validation Patterns#

Static validation catches infrastructure errors before anything is deployed. No cluster needed, no cloud credentials needed, no cost incurred. These tools analyze configuration files – Helm charts, Kubernetes manifests, Terraform modules, Kustomize overlays – and report problems that would cause failures at deploy time.

Static validation does not replace integration testing. It cannot verify that a service starts successfully, that a pod can pull its image, or that a database accepts connections. What it catches are structural errors: malformed YAML, invalid API versions, missing required fields, policy violations, deprecated resources, and misconfigured values. In practice, this covers roughly 40% of infrastructure issues – the ones that are cheapest to find and cheapest to fix.
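A sketch of the typical CI commands (chart and directory paths are illustrative):

# No cluster, no credentials: everything below runs against files.
helm lint ./charts/api                                        # chart structure and template errors
helm template ./charts/api | kubeconform -strict -summary -   # schema-validate rendered manifests
kubeconform -strict -ignore-missing-schemas ./manifests       # raw Kubernetes manifests
kustomize build ./overlays/prod | kubeconform -strict -       # rendered Kustomize overlays
terraform -chdir=./infra validate                             # Terraform syntax and internal consistency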

Status Page Setup and Management

Purpose of a Status Page#

A status page is the single source of truth for service health. It communicates current status, provides historical reliability data, and sets expectations during incidents through regular updates. A well-maintained status page reduces support tickets during incidents, builds customer trust, and gives teams a structured communication channel.

Platform Options#

Statuspage.io (Atlassian)#

The most widely adopted hosted solution. Integrates with the Atlassian ecosystem.

# Create a component
curl -X POST https://api.statuspage.io/v1/pages/${PAGE_ID}/components \
  -H "Authorization: OAuth ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"component": {"name": "API", "status": "operational", "showcase": true}}'

# Create an incident
curl -X POST https://api.statuspage.io/v1/pages/${PAGE_ID}/incidents \
  -H "Authorization: OAuth ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"incident": {"name": "Elevated Error Rates", "status": "investigating",
       "impact_override": "minor", "component_ids": ["id"]}}'

Strengths: Highly reliable, subscriber notifications built-in, custom domains, API-first. Weaknesses: Expensive ($399+/month business plan), limited customization, component limits on lower tiers.

Structured Output from Small Local Models: JSON Mode, Extraction, Classification, and Token Runaway Fixes

Structured Output from Small Local Models#

Small models (2-7B parameters) produce structured output that is 85-95% as accurate as cloud APIs for well-defined extraction and classification tasks. The key is constraining the output space so the model’s limited reasoning capacity is focused on filling fields rather than deciding what to generate.

This is where local models genuinely compete with — and sometimes match — models 30x their size.

JSON Mode#

Ollama’s JSON mode forces the model to produce valid JSON:
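# Minimal sketch against a local Ollama server; the model name and prompt are placeholders.
# "format": "json" constrains decoding to valid JSON; "stream": false returns one response object.
curl http://localhost:11434/api/generate \
  -d '{
    "model": "llama3.2",
    "prompt": "Extract the name and email from: Reach Jan Novak at jan@example.com. Reply as JSON with keys name and email.",
    "format": "json",
    "stream": false
  }'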