Building a Kubernetes Deployment Pipeline: From Code Push to Production#

A deployment pipeline connects a code commit to a running container in your cluster. This operational sequence walks through building one end-to-end, with decision points at each phase and verification steps to confirm the pipeline works before moving on.

Phase 1 – Source Control and CI#

Step 1: Repository Structure#

Every deployable service needs three things alongside its application code: a Dockerfile, deployment manifests, and a CI pipeline definition.
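
A typical layout (directory and file names here are illustrative, not prescriptive):

my-service/
├── src/                    # application code
├── Dockerfile              # container image build definition
├── deploy/
│   ├── deployment.yaml     # Kubernetes Deployment manifest
│   └── service.yaml        # Kubernetes Service manifest
└── .github/workflows/
    └── ci.yaml             # CI pipeline definition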

Building an API with Cloudflare Workers and D1: From Zero to Production#

This tutorial walks through building a production API on Cloudflare Workers with a D1 database, KV caching, rate limiting, full-text search, and request logging. The patterns come from a real production deployment – not a toy example.

By the end you will have: a TypeScript Worker handling multiple API routes, a D1 database with FTS5 full-text search, KV-based caching and rate limiting, CORS support, request logging with IP hashing for privacy, and a deployment to Cloudflare’s global network.
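
As a taste of the privacy piece: rather than logging raw client IPs, the logger stores a salted hash, so requests can still be correlated per client but never joined back to an address. A minimal sketch of the idea in Python for illustration (the tutorial itself implements this inside the TypeScript Worker; the LOG_SALT variable name is an assumption):

import hashlib
import os

def hash_ip(ip: str, salt: str) -> str:
    """Return a truncated salted hash of a client IP for privacy-preserving logs."""
    digest = hashlib.sha256(f"{salt}:{ip}".encode()).hexdigest()
    return digest[:16]  # truncation is fine; we need correlation, not recovery

# LOG_SALT is an assumed environment variable; rotate it to limit linkability.
print(hash_ip("203.0.113.7", os.environ.get("LOG_SALT", "dev-salt")))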

Building LLM Harnesses: Orchestrating Local Models into Workflows with Scoring, Retries, and Parallel Execution#

A harness is the infrastructure that wraps LLM calls into a reliable, testable, and observable workflow. It handles the concerns that a raw API call does not: input preparation, output validation, error recovery, model routing, parallel execution, and quality scoring. Without a harness, you have a script. With one, you have a tool.

Harness Architecture#

Input
  │
  ├── Preprocessing (validate input, select model, prepare prompt)
  │
  ├── Execution (call Ollama with timeout, retry on failure)
  │
  ├── Post-processing (parse output, validate schema, score quality)
  │
  ├── Routing (if quality too low, escalate to larger model or flag)
  │
  └── Output (structured result + metadata)

Core Harness in Python#

import ollama
import json
import time
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class LLMResult:
    content: str
    model: str
    tokens_in: int
    tokens_out: int
    duration_ms: int
    ttft_ms: int
    success: bool
    retries: int = 0
    score: float | None = None
    metadata: dict = field(default_factory=dict)

@dataclass
class HarnessConfig:
    model: str = "qwen2.5-coder:7b"
    temperature: float = 0.0
    max_tokens: int = 1024
    json_mode: bool = False
    timeout_seconds: int = 120
    max_retries: int = 2
    retry_delay_seconds: float = 1.0

def call_llm(
    messages: list[dict],
    config: HarnessConfig,
) -> LLMResult:
    """Make a single LLM call with timing metadata."""
    start = time.monotonic()

    kwargs = {
        "model": config.model,
        "messages": messages,
        "options": {
            "temperature": config.temperature,
            "num_predict": config.max_tokens,
        },
        "stream": False,
    }
    if config.json_mode:
        kwargs["format"] = "json"

    try:
        response = ollama.chat(**kwargs)
        duration = int((time.monotonic() - start) * 1000)

        return LLMResult(
            content=response["message"]["content"],
            model=config.model,
            tokens_in=response.get("prompt_eval_count", 0),
            tokens_out=response.get("eval_count", 0),
            duration_ms=duration,
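            # Ollama reports durations in nanoseconds; prompt eval time stands in for TTFT.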
            ttft_ms=int(response.get("prompt_eval_duration", 0) / 1_000_000),
            success=True,
        )
    except Exception as e:
        duration = int((time.monotonic() - start) * 1000)
        return LLMResult(
            content=str(e),
            model=config.model,
            tokens_in=0,
            tokens_out=0,
            duration_ms=duration,
            ttft_ms=0,
            success=False,
        )

Retry with Validation#

Do not retry blindly. Retry only when the output fails validation:
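
One way to wire that on top of call_llm – the function name and validator signature below are illustrative, not a fixed API:

def call_llm_validated(
    messages: list[dict],
    config: HarnessConfig,
    validate: Callable[[str], bool],
) -> LLMResult:
    """Retry only when the call fails outright or the output fails validation."""
    result = call_llm(messages, config)
    attempts = 0
    while attempts < config.max_retries and not (result.success and validate(result.content)):
        time.sleep(config.retry_delay_seconds)
        result = call_llm(messages, config)
        attempts += 1
    result.retries = attempts
    return result

# Example validator: with json_mode on, accept only parseable JSON.
def is_json(text: str) -> bool:
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False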

Building Machine Images with Packer: Templates, Builders, Provisioners, and CI/CD#

Machine images (AMIs, Azure Managed Images, GCP Images) are the foundation of immutable infrastructure. Instead of provisioning a base OS and configuring it at boot, you build a pre-configured image and launch instances from it. Packer automates this process: it launches a temporary instance, runs provisioners to configure it, creates an image from the result, and destroys the temporary instance.

This operational sequence walks through building, testing, and managing machine images with Packer from template creation through CI/CD integration.

CDN and Edge Computing Patterns#

A CDN (Content Delivery Network) caches content at edge locations close to users, reducing latency and offloading traffic from origin servers. Edge computing extends this by running custom code at those edge locations, enabling request transformation, authentication, A/B testing, and dynamic content generation without round-tripping to an origin server.

CDN Cache Fundamentals#

Cache-Control Headers#

The origin server controls CDN caching behavior through HTTP headers. Getting these right is the single most impactful CDN optimization.
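
As a concrete example, an origin endpoint can let browsers cache a response for 60 seconds while allowing the CDN to keep it for five minutes. A minimal sketch using Flask – the framework and the values are illustrative; any origin server sets the same header:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/products")
def products():
    resp = jsonify({"products": []})
    # max-age governs browsers; s-maxage governs shared caches such as the CDN.
    resp.headers["Cache-Control"] = "public, max-age=60, s-maxage=300"
    return resp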

cert-manager and external-dns: Automatic TLS and DNS on Kubernetes#

These two controllers solve the two most tedious parts of exposing services on Kubernetes: obtaining TLS certificates and creating DNS records. Together, they make creating an Ingress resource enough to automatically provision a DNS record pointing to your cluster and a valid TLS certificate for the hostname.

cert-manager#

cert-manager watches for Certificate resources and Ingress annotations, then obtains and renews TLS certificates automatically.

Installation#

helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true

The crds.enabled=true flag installs the CRDs as part of the Helm release. Verify with kubectl get pods -n cert-manager – you should see cert-manager, cert-manager-cainjector, and cert-manager-webhook all Running.

Change Management for Infrastructure#

Why Change Management Matters#

Most production incidents trace back to a change. Code deployments, configuration updates, infrastructure modifications, database migrations – each introduces risk. Change management reduces that risk through structure, visibility, and accountability. The goal is not to prevent change but to make change safe, visible, and reversible.

Change Request Process#

Every infrastructure change flows through a structured request. The formality scales with risk, but the basic elements remain constant.

Chaos Engineering: From First Experiment to Mature Practice#

Why Break Things on Purpose#

Production systems fail in ways that testing environments never reveal. Database connection pool exhaustion under load, a cascading timeout across three services, a DNS cache that masks a routing change until it expires – these failures only surface when real conditions collide in ways nobody predicted. Chaos engineering is the discipline of deliberately injecting failures into a system to discover weaknesses before they cause outages.

Choosing a CNI Plugin: Calico vs Cilium vs Flannel vs Cloud-Native CNI#

The Container Network Interface (CNI) plugin is one of the most consequential infrastructure decisions in a Kubernetes cluster. It determines how pods get IP addresses, how traffic flows between them, whether network policies are enforced, and what observability you get into network behavior. Changing the CNI after deployment is painful – it typically requires draining and rebuilding nodes, or rebuilding the cluster entirely. Choose carefully up front.

Choosing a Deployment Platform for APIs and MVPs: Cloudflare vs AWS vs Vercel vs Fly.io#

Picking a deployment platform early in a project matters more than most teams realize. The platform determines your cost floor, your scaling ceiling, your deployment workflow, and how much operational overhead you carry. Switching later is possible but never free – you are always migrating data, rewriting config, and updating DNS.

This guide compares four platforms that cover the most common deployment scenarios: Cloudflare (Workers + D1 + Pages), AWS (Lambda + API Gateway + RDS + S3), Vercel (Pro + serverless functions), and Fly.io (Apps + Postgres). Each has a genuine sweet spot. None is best for everything.