Deploying Nginx on Kubernetes#

Nginx shows up in Kubernetes in two completely different roles. First, as a regular Deployment serving static content or acting as a reverse proxy for your application. Second, as an Ingress controller that watches Ingress resources and dynamically reconfigures itself. These are different deployments with different images and different configuration models. Knowing when to use which saves you from over-engineering or under-engineering your setup.

Nginx as a Web Server (Deployment + Service + ConfigMap)#

For serving static files or acting as a reverse proxy in front of your application pods, deploy nginx as a standard Deployment.
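
A minimal sketch of that trio, with the ConfigMap name, labels, and image tag as illustrative values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf               # illustrative name
data:
  default.conf: |
    server {
        listen 80;
        location / {
            root /usr/share/nginx/html;
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27        # pin a specific tag in production
        ports:
        - containerPort: 80
        volumeMounts:
        - name: conf
          mountPath: /etc/nginx/conf.d   # overrides the image's default server config
      volumes:
      - name: conf
        configMap:
          name: nginx-conf
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

Mounting the ConfigMap over /etc/nginx/conf.d means a configuration change needs only a rollout restart, not a new image build.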

FIPS 140 Compliance#

FIPS 140 (Federal Information Processing Standard 140) is a US and Canadian government standard for cryptographic modules. If you sell software to US federal agencies, process federal data, or operate under FedRAMP, you must use FIPS 140-validated cryptographic modules. Many regulated industries (finance, healthcare, defense) also require or strongly prefer FIPS compliance.

FIPS 140 does not tell you which algorithms to use — it validates that a specific implementation of those algorithms has been tested and certified by an accredited lab under the Cryptographic Module Validation Program (CMVP).

gRPC Security#

gRPC uses HTTP/2 as its transport, which means TLS is not just a security feature — it is a practical necessity. Many load balancers, proxies, and clients expect HTTP/2 over TLS (h2) rather than plaintext HTTP/2 (h2c). Securing gRPC means configuring TLS correctly, authenticating clients, authorizing RPCs, and handling the gRPC-specific gotchas that do not exist with REST APIs.

gRPC Over TLS#

Server-Side TLS in Go#

import (
    "crypto/tls"
    "log"
    "net"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials"
    // pb is your generated protobuf package.
)

func main() {
    // Load the server certificate and private key.
    cert, err := tls.LoadX509KeyPair("server-cert.pem", "server-key.pem")
    if err != nil {
        log.Fatal(err)
    }

    tlsConfig := &tls.Config{
        Certificates: []tls.Certificate{cert},
        MinVersion:   tls.VersionTLS13,
    }

    creds := credentials.NewTLS(tlsConfig)
    server := grpc.NewServer(grpc.Creds(creds))

    pb.RegisterMyServiceServer(server, &myService{})

    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatal(err)
    }
    if err := server.Serve(lis); err != nil {
        log.Fatal(err)
    }
}

Client-Side TLS in Go#

import (
    "crypto/tls"
    "crypto/x509"
    "log"
    "os"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials"
    // pb is your generated protobuf package.
)

func main() {
    // For public CAs (Let's Encrypt, etc.), the system cert pool is used automatically.
    creds := credentials.NewTLS(&tls.Config{
        MinVersion: tls.VersionTLS13,
    })

    // For internal CAs, load the CA cert explicitly instead.
    caCert, err := os.ReadFile("ca-cert.pem")
    if err != nil {
        log.Fatal(err)
    }
    certPool := x509.NewCertPool()
    if !certPool.AppendCertsFromPEM(caCert) {
        log.Fatal("failed to parse CA certificate")
    }
    creds = credentials.NewTLS(&tls.Config{
        RootCAs:    certPool,
        MinVersion: tls.VersionTLS13,
    })

    conn, err := grpc.NewClient("api.internal:50051",
        grpc.WithTransportCredentials(creds),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    client := pb.NewMyServiceClient(conn)
    _ = client // use client to make RPCs
}

TLS in Python#

from concurrent import futures

import grpc
# pb_grpc is your generated *_pb2_grpc module.

# Server
server_credentials = grpc.ssl_server_credentials(
    [(open("server-key.pem", "rb").read(), open("server-cert.pem", "rb").read())]
)
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
pb_grpc.add_MyServiceServicer_to_server(MyService(), server)
server.add_secure_port("[::]:50051", server_credentials)
server.start()
server.wait_for_termination()  # start() returns immediately; block here

# Client
ca_cert = open("ca-cert.pem", "rb").read()
channel_credentials = grpc.ssl_channel_credentials(root_certificates=ca_cert)
channel = grpc.secure_channel("api.internal:50051", channel_credentials)
client = pb_grpc.MyServiceStub(channel)

Mutual TLS for gRPC#

mTLS is the strongest authentication model for service-to-service gRPC. Each service has a certificate, and both sides verify each other.
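
A server-side sketch in Go, assuming the same certificate files as the earlier examples plus a ca-cert.pem that signs your client certificates:

import (
    "crypto/tls"
    "crypto/x509"
    "log"
    "os"

    "google.golang.org/grpc/credentials"
)

func mTLSServerCreds() credentials.TransportCredentials {
    cert, err := tls.LoadX509KeyPair("server-cert.pem", "server-key.pem")
    if err != nil {
        log.Fatal(err)
    }

    // Pool of CAs trusted to sign client certificates.
    caCert, err := os.ReadFile("ca-cert.pem")
    if err != nil {
        log.Fatal(err)
    }
    clientCAs := x509.NewCertPool()
    if !clientCAs.AppendCertsFromPEM(caCert) {
        log.Fatal("failed to parse CA certificate")
    }

    return credentials.NewTLS(&tls.Config{
        Certificates: []tls.Certificate{cert},
        ClientCAs:    clientCAs,
        // Reject any connection that does not present a valid client cert.
        ClientAuth: tls.RequireAndVerifyClientCert,
        MinVersion: tls.VersionTLS13,
    })
}

Pass the result to grpc.NewServer(grpc.Creds(...)) as before; on the client side, add Certificates to the tls.Config so the client presents its own certificate alongside RootCAs.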

HAProxy Configuration and Operations#

Configuration File Structure#

HAProxy uses a single configuration file, typically /etc/haproxy/haproxy.cfg. The configuration is divided into four sections: global (process-level settings), defaults (default values for all proxies), frontend (client-facing listeners), and backend (server pools).

global
    log /dev/log local0
    log /dev/log local1 notice
    maxconn 50000
    user haproxy
    group haproxy
    daemon
    stats socket /run/haproxy/admin.sock mode 660 level admin
    tune.ssl.default-dh-param 2048

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    timeout http-request 10s
    timeout http-keep-alive 10s
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http

The maxconn in the global section sets the hard limit on total simultaneous connections. The timeout values in defaults apply to all frontends and backends unless overridden. The stats socket directive enables the runtime API, which is essential for operational management.
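
The frontend and backend sections complete the picture: a client-facing listener routed to a pool of health-checked servers. A minimal sketch (names, addresses, and the health-check path are illustrative):

frontend fe_web
    bind *:80
    default_backend be_app

backend be_app
    balance roundrobin
    option httpchk GET /health
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check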

Ingress Controllers and Routing Patterns#

An Ingress resource defines HTTP routing rules – which hostnames and paths map to which backend Services. But an Ingress resource does nothing on its own. You need an Ingress controller running in the cluster to watch for Ingress resources and configure the actual reverse proxy.
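
A minimal example of such a rule, routing one hostname to a Service (the names are illustrative, and ingressClassName assumes the nginx controller installed below):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80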

Ingress Controllers#

The two most common controllers are nginx-ingress and Traefik.

nginx-ingress (ingress-nginx):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace

Note: there are two different nginx ingress projects: kubernetes/ingress-nginx (community) and nginxinc/kubernetes-ingress (NGINX Inc). The community version is far more common. Make sure you install from https://kubernetes.github.io/ingress-nginx, not the NGINX Inc chart.

Kubernetes API Server: Architecture, Authentication, Authorization, and Debugging#

The API server (kube-apiserver) is the front door to your Kubernetes cluster. Every interaction – kubectl commands, controller reconciliation loops, kubelet status updates, admission webhooks – goes through the API server. It is the only component that reads from and writes to etcd. If the API server is down, the cluster is unmanageable. Everything else (scheduler, controllers, kubelets) can tolerate brief API server outages because they cache state locally, but no mutations happen until the API server is back.
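
One concrete way to observe this from the outside: the API server exposes standard health endpoints, reachable through kubectl's raw mode.

kubectl get --raw='/livez'             # is the process alive?
kubectl get --raw='/readyz?verbose'    # per-subsystem readiness checks
kubectl get --raw='/healthz'           # legacy aggregate endpoint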

Nginx Configuration Patterns for Production#

Configuration File Structure#

Nginx configuration follows a hierarchical block structure. The main configuration file is typically /etc/nginx/nginx.conf, which includes files from /etc/nginx/conf.d/ or /etc/nginx/sites-enabled/.

# /etc/nginx/nginx.conf
user nginx;
worker_processes auto;           # Match CPU core count
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

events {
    worker_connections 1024;     # Per worker process
    multi_accept on;             # Accept multiple connections at once
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 16m;

    include /etc/nginx/conf.d/*.conf;
}

The worker_processes auto directive sets one worker per CPU core. Each worker handles worker_connections simultaneous connections, so total capacity is worker_processes * worker_connections. For most production deployments, auto with 1024 connections per worker is sufficient.
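
A typical file dropped into /etc/nginx/conf.d/ is a single server block. A reverse-proxy sketch (the upstream name, addresses, and hostname are illustrative):

# /etc/nginx/conf.d/app.conf
upstream app {
    server 127.0.0.1:8080;
    keepalive 32;                # reuse connections to the upstream
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://app;
        proxy_http_version 1.1;
        proxy_set_header Connection "";    # required for upstream keepalive
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}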

Securing Docker-Based Validation Templates#

Validation templates define the environment agents use to test infrastructure changes. If a template runs containers as root, mounts the Docker socket, or skips resource limits, every agent that copies it inherits those risks. This reference covers the security patterns every docker-compose validation template must follow.

1. Non-Root Execution#

Containers run as root by default. A vulnerability in a root-running process gives an attacker full control inside the container and a much larger attack surface for container escapes.
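
A compose service sketch applying the pattern: non-root user, dropped capabilities, and resource limits (the image name and UID/GID are illustrative):

services:
  app:
    image: myapp:1.0                 # illustrative image
    user: "10001:10001"              # non-root UID:GID; must be valid for the image
    read_only: true                  # immutable root filesystem
    tmpfs:
      - /tmp                         # writable scratch space only
    cap_drop:
      - ALL                          # drop every Linux capability
    security_opt:
      - no-new-privileges:true       # block setuid-based privilege escalation
    mem_limit: 512m
    cpus: "1.0"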

Securing etcd#

etcd is the single most critical component in a Kubernetes cluster. It stores everything: pod specs, secrets, configmaps, RBAC rules, service account tokens, and all cluster state. By default, Kubernetes secrets are stored in etcd as base64-encoded plaintext. Anyone with read access to etcd has read access to every secret in the cluster. Securing etcd is not optional.

Why etcd Is the Crown Jewel#

Run a read like the following against an unencrypted etcd and you will see why (the flags and paths assume a kubeadm-style control plane, and the secret name is illustrative):
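
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/my-secret

Without encryption at rest, the value comes back as a protobuf blob with the secret data readable inside it: encoded, not encrypted.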

TLS and mTLS Fundamentals#

TLS (Transport Layer Security) encrypts traffic between two endpoints. Mutual TLS (mTLS) adds a second layer: both sides prove their identity with certificates. Understanding these is not optional for anyone building distributed systems — nearly every production failure involving “certificate verify failed” or an opaque handshake error traces back to a TLS misconfiguration.

How TLS Works#

A TLS handshake establishes an encrypted channel before any application data is sent. The simplified flow:
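
1. The client sends a ClientHello: supported TLS versions, cipher suites, and (in TLS 1.3) a key share.
2. The server answers with a ServerHello carrying its chosen parameters and key share, plus its certificate chain.
3. The client validates the chain against its trust store and checks that the certificate matches the hostname it dialed.
4. Both sides derive the session keys, exchange Finished messages, and application data flows encrypted.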