Sandbox to Production: The Complete Workflow for Verified Infrastructure Deliverables

An agent that produces infrastructure deliverables works in a sandbox. It does not touch production. It does not reach into someone else’s cluster, database, or cloud account. It works in an isolated environment, tests its work, captures the results, and hands the human a verified deliverable they can execute on their own infrastructure.

This is not a limitation – it is a design choice. The output is always a deliverable, never a direct action on someone else’s systems. This boundary is what makes the approach safe enough for production infrastructure work and trustworthy enough for enterprise change management.

Scenario: Migrating Workloads Between Kubernetes Clusters

You are helping when someone says: “we need to move workloads from cluster A to cluster B.” The reasons vary – Kubernetes version upgrade, cloud provider migration, region change, architecture consolidation, or moving from self-managed to a managed service. The complexity ranges from trivial (stateless services with GitOps) to significant (stateful workloads with zero-downtime requirements).

The core risks in any cluster migration are data loss for stateful workloads and downtime during the traffic cutover. Every decision in this plan aims to minimize both.
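
A cheap first check is to render the same manifests that produced cluster A's workloads and do a server-side dry run against cluster B, which surfaces removed API versions, missing CRDs, and admission rejections before anything real moves. A minimal sketch, assuming a kubeconfig context named cluster-b, a payments namespace that already exists there, and a Helm chart as the source of truth:

# Nothing is created; the API server on cluster B only validates the request.
helm template payments ./charts/payments -f values/production.yaml \
  | kubectl --context cluster-b -n payments apply --dry-run=server -f -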

Scenario: Preparing for and Handling a Traffic Spike

You are helping when someone says: “we have a big launch next week,” “Black Friday is coming,” or “traffic is suddenly 3x normal and climbing.” These are two distinct problems – proactive preparation for a known event and reactive response to an unexpected surge – but they share the same infrastructure mechanics.

The key principle: Kubernetes autoscaling has latency. HPA takes 15-30 seconds to detect increased load and scale pods. Cluster Autoscaler takes 3-7 minutes to provision new nodes. If your traffic spike is faster than your scaling speed, users hit errors during the gap. Proactive preparation eliminates this gap. Reactive response minimizes it.
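
For a known event, the simplest fix is to pay the scaling cost before traffic arrives. A minimal sketch, assuming an HPA named web in the production namespace; revert the floor after the event:

# Raise the HPA floor so pods (and the nodes they need) exist ahead of the spike.
kubectl -n production patch hpa web --type merge -p '{"spec":{"minReplicas":10}}'

# Confirm the new floor and the current replica count.
kubectl -n production get hpa web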

Setting Up Full Observability from Scratch: Metrics, Logs, Traces, and Alerting

This operational sequence deploys a complete observability stack on Kubernetes: metrics (Prometheus + Grafana), logs (Loki + Promtail), traces (Tempo + OpenTelemetry), and alerting (Alertmanager). Each phase is self-contained with verification steps. Complete them in order – later phases depend on earlier infrastructure.

Prerequisite: a running Kubernetes cluster with Helm installed and a monitoring namespace created.

# Create the namespace idempotently, then register the chart repositories used in later phases.
kubectl create namespace monitoring --dry-run=client -o yaml | kubectl apply -f -
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

Phase 1 – Metrics (Prometheus + Grafana)#

Metrics are the foundation. Logging and tracing integrations all route through Grafana, so this phase must be solid before continuing.
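
A minimal install sketch using the kube-prometheus-stack chart with default values; retention, storage, and Grafana credentials are left as assumptions to tune for your cluster:

helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring

# Verify: Prometheus, Alertmanager, Grafana, and the exporters should reach Running.
kubectl -n monitoring get pods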

Setting Up Multi-Environment Infrastructure: Dev, Staging, and Production

Running a single environment is straightforward. Running three that drift apart silently is where teams lose weeks debugging “it works in dev.” This operational sequence walks through setting up dev, staging, and production environments that stay consistent where it matters and diverge only where intended.

Phase 1 – Environment Strategy#

Step 1: Define Environments#

Each environment serves a distinct purpose:

  • Dev: Rapid iteration. Developers deploy frequently, break things, and recover quickly. Data is disposable. Resources are minimal.
  • Staging: Production mirror. Same Kubernetes version, same network policies, same resource quotas. External services point to staging endpoints. Used for integration testing and pre-release validation.
  • Production: Real users, real data. Changes go through approval gates. Monitoring is comprehensive and alerting reaches on-call engineers.
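
One common way to keep these three environments consistent where it matters is a single chart with per-environment values files, so shared defaults live in the chart and only intended differences go in the overrides. A sketch, where the chart path and file names are assumptions:

helm template my-app ./charts/my-app -f values/dev.yaml         # minimal replicas, disposable storage
helm template my-app ./charts/my-app -f values/staging.yaml     # production-sized, staging endpoints
helm template my-app ./charts/my-app -f values/production.yaml  # full quotas, alerting enabled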

Step 2: Isolation Model#

Decision point: Separate clusters per environment versus namespaces in a shared cluster.
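
A minimal sketch of the shared-cluster option, where namespace names and quota numbers are assumptions:

kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace production

# A quota per environment keeps dev experiments from starving production.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
EOF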

Static Validation Patterns: Infrastructure Validation Without a Cluster

Static validation catches infrastructure errors before anything is deployed. No cluster needed, no cloud credentials needed, no cost incurred. These tools analyze configuration files – Helm charts, Kubernetes manifests, Terraform modules, Kustomize overlays – and report problems that would cause failures at deploy time.

Static validation does not replace integration testing. It cannot verify that a service starts successfully, that a pod can pull its image, or that a database accepts connections. What it catches are structural errors: malformed YAML, invalid API versions, missing required fields, policy violations, deprecated resources, and misconfigured values. In practice, this covers roughly 40% of infrastructure issues – the ones that are cheapest to find and cheapest to fix.
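
A hedged example of what such a pass can look like; chart, overlay, and module paths are assumptions:

# Helm: template rendering plus schema validation, no cluster required.
helm lint ./charts/my-app
helm template my-app ./charts/my-app | kubeconform -strict -summary

# Kustomize overlays go through the same schema check.
kustomize build overlays/production | kubeconform -strict -summary

# Terraform: syntax and internal consistency only; no provider credentials needed.
terraform -chdir=infra init -backend=false
terraform -chdir=infra validate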

Validation Playbook Format: Structuring Portable Validation Procedures

A validation playbook is a structured procedure that tells an agent exactly how to validate a specific type of infrastructure change. The key problem it solves: the same validation (for example, “verify this Helm chart works”) requires different commands depending on whether the agent has access to kind, minikube, a cloud cluster, or nothing but a linter. A playbook encodes all path variants in one document so the agent picks the right commands for its environment.
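
A sketch of how an agent might pick a path variant at runtime, assuming the playbook defines the four environments named above; the validation commands themselves come from the playbook:

if kubectl cluster-info >/dev/null 2>&1; then
  echo "path: live-cluster"     # full install plus smoke tests
elif command -v kind >/dev/null 2>&1; then
  echo "path: kind"             # ephemeral local cluster
elif command -v minikube >/dev/null 2>&1; then
  echo "path: minikube"
else
  echo "path: static-only"      # linting and schema checks only
fi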

Velero Backup and Restore: Disaster Recovery for Kubernetes

Velero backs up Kubernetes resources and persistent volume data to object storage. It handles scheduled backups, on-demand snapshots, and restores to the same or a different cluster. It is the standard tool for Kubernetes disaster recovery.

Velero captures two things: Kubernetes API objects (stored as JSON) and persistent volume data (via cloud volume snapshots or file-level backup with Kopia).
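
For example, once Velero is installed (installation follows below), one backup can cover both, falling back to file-level Kopia backup for volumes whose storage class has no snapshot support; the payments namespace is an assumption:

velero backup create payments-daily \
  --include-namespaces payments \
  --default-volumes-to-fs-backup

# Inspect what was captured, including per-volume details.
velero backup describe payments-daily --details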

Installation#

You need an object storage bucket (S3, GCS, Azure Blob, or MinIO) and write credentials.
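
A minimal install sketch for S3-compatible storage; the bucket name, region, plugin version, and credentials file are assumptions:

velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0 \
  --bucket my-velero-backups \
  --secret-file ./credentials-velero \
  --backup-location-config region=us-east-1 \
  --use-node-agent

# The backup location must report Available before you rely on scheduled backups.
velero backup-location get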

Choosing a GitOps Tool: ArgoCD vs Flux vs Jenkins vs GitHub Actions for Kubernetes Deployments

The term “GitOps” is applied to everything from a simple kubectl apply in a GitHub Actions workflow to a fully reconciled, pull-based deployment architecture with drift detection. These are fundamentally different approaches. Choosing between them depends on your team’s operational maturity, cluster count, and tolerance for running controllers in your cluster.

What GitOps Actually Means#

GitOps, as defined by the OpenGitOps principles (a CNCF sandbox project), has four requirements: declarative desired state, state versioned in git, changes applied automatically, and continuous reconciliation with drift detection. The last two are what separate true GitOps from “CI/CD that uses git.”
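
For contrast, the pull-based end of that spectrum looks like the sketch below: an ArgoCD Application with automated sync, pruning, and self-healing, which is what supplies the continuous reconciliation and drift correction. The repository URL and paths are assumptions.

kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infra.git
    targetRevision: main
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from git
      selfHeal: true   # revert manual drift in the cluster
    syncOptions:
      - CreateNamespace=true
EOF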

Cilium Deep Dive: eBPF Networking, L7 Policies, Hubble Observability, and Cluster Mesh

Cilium replaces the traditional Kubernetes networking stack with eBPF programs that run directly in the Linux kernel. Instead of kube-proxy translating Service definitions into iptables rules and a traditional CNI plugin managing pod networking through bridge interfaces and routing tables, Cilium attaches eBPF programs to kernel hooks that process packets at wire speed. The result is a networking layer that is faster at scale, capable of Layer 7 policy enforcement, and provides built-in observability without application instrumentation.
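
A minimal install sketch with kube-proxy replacement enabled; the API server address must be supplied, the exact flag values vary by Cilium version, and the cilium CLI is assumed to be installed:

helm repo add cilium https://helm.cilium.io/
helm upgrade --install cilium cilium/cilium \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost="${API_SERVER_IP}" \
  --set k8sServicePort=6443

# Waits until the agents, operator, and connectivity checks report healthy.
cilium status --wait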