Security Contexts, Seccomp, and AppArmor: Container Runtime Security#

Security contexts control what a container can do at the Linux kernel level: which user it runs as, which syscalls it can make, which files it can access, and whether it can escalate privileges. These settings are your last line of defense when a container is compromised. A properly configured security context limits the blast radius of a breach by preventing an attacker from escaping the container, accessing the host, or escalating to root.
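
As a sketch, a reasonably locked-down baseline looks like the following pod manifest (the pod, container, and image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true            # refuse to start if the image would run as root
    runAsUser: 10001              # arbitrary non-root UID
    seccompProfile:
      type: RuntimeDefault        # runtime's default seccomp profile blocks obscure syscalls
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false     # blocks setuid-based escalation paths
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]             # drop the default Linux capability set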

Spot Instances and Preemptible Nodes: Running Kubernetes on Discounted Compute#

Spot instances are unused cloud capacity sold at a steep discount – typically 60-90% off on-demand pricing. The trade-off: the cloud provider can reclaim them with minimal notice. AWS gives a 2-minute warning, GCP gives 30 seconds, and Azure varies. Running Kubernetes workloads on spot instances is one of the most effective cost reduction strategies available, but it requires architecture that tolerates sudden node loss.
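
One common pattern is to prefer spot capacity without requiring it, so pods fall back to on-demand nodes when spot is reclaimed. The label below is the one EKS managed node groups apply; other providers use different keys (for example cloud.google.com/gke-spot on GKE):

# Pod spec fragment: prefer spot nodes, fall back to on-demand
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
        - key: eks.amazonaws.com/capacityType
          operator: In
          values: ["SPOT"]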

Taints, Tolerations, and Node Affinity: Controlling Pod Placement#

Pod scheduling in Kubernetes defaults to “run anywhere there is room.” In production, that is rarely what you want. GPU workloads should land on GPU nodes. System components should not compete with application pods. Nodes being drained should stop accepting new work. Taints, tolerations, and node affinity give you control over where pods run and where they do not.

Taints: Repelling Pods from Nodes#

A taint is applied to a node and tells the scheduler “do not place pods here unless they explicitly tolerate this taint.” Taints have three parts: a key, an optional value, and an effect.
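
For example, tainting a GPU node and then tolerating that taint from a pod spec (the node name and key are illustrative):

kubectl taint nodes gpu-node-1 gpu=true:NoSchedule

# Pod spec fragment that tolerates the taint:
tolerations:
- key: "gpu"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"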

TLS Deep Dive: Certificate Chains, Handshake, Cipher Suites, and Debugging Connection Issues#

The TLS Handshake#

Every HTTPS connection starts with a TLS handshake that establishes encryption parameters and verifies the server’s identity. The simplified flow for TLS 1.2:

Client                              Server
  |── ClientHello ─────────────────>|   (supported versions, cipher suites, random)
  |<───────────────── ServerHello ──|   (chosen version, cipher suite, random)
  |<───────────────── Certificate ──|   (server's certificate chain)
  |<─────────── ServerKeyExchange ──|   (key exchange parameters)
  |<───────────── ServerHelloDone ──|
  |── ClientKeyExchange ───────────>|   (client's key exchange contribution)
  |── ChangeCipherSpec ────────────>|   (switching to encrypted communication)
  |── Finished ────────────────────>|   (encrypted verification)
  |<──────────── ChangeCipherSpec ──|
  |<──────────────────── Finished ──|
  |<══════ Encrypted traffic ══════>|

TLS 1.3 simplifies this significantly. The client sends its key share in the ClientHello, allowing the handshake to complete in a single round trip. TLS 1.3 also removed insecure cipher suites and compression, eliminating entire classes of vulnerabilities (BEAST, CRIME, POODLE).
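
Both variants are easy to observe from the command line with openssl s_client (substitute your own hostname):

# Inspect the certificate chain and the negotiated protocol and cipher
openssl s_client -connect example.com:443 -servername example.com -showcerts

# Force a specific version (-tls1_2 or -tls1_3) to compare handshakes
openssl s_client -connect example.com:443 -tls1_3 </dev/null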

Vertical Pod Autoscaler (VPA): Right-Sizing Resource Requests Automatically#

Horizontal scaling adds more pod replicas. Vertical scaling gives each pod more (or fewer) resources. VPA automates the vertical side by watching actual CPU and memory usage over time and adjusting resource requests to match reality. Without it, teams guess at resource requests during initial deployment and rarely revisit them, leading to either waste (over-provisioned) or instability (under-provisioned).

What VPA Does#

VPA monitors historical and current resource usage for pods in a target Deployment (or StatefulSet, DaemonSet, etc.) and produces recommendations for CPU and memory requests. Depending on the configured mode, it either reports these recommendations passively or actively applies them by evicting and recreating pods with updated requests.
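
A minimal VPA object targeting a Deployment might look like this, with a placeholder deployment name; updateMode: "Off" reports recommendations without evicting anything, while "Auto" applies them:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # placeholder deployment name
  updatePolicy:
    updateMode: "Off"        # report only; "Auto" evicts pods to apply new requests

The recommendations then appear in the output of kubectl describe vpa web-vpa.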

Minikube Add-ons for Production-Like Environments#

A bare minikube cluster runs workloads but lacks the infrastructure that production clusters rely on – metrics collection, ingress routing, TLS, monitoring, and load balancer support. Minikube’s addon system bridges this gap with one-command installs of production components.

Surveying Available Add-ons#

List everything minikube offers:

minikube addons list

This prints dozens of addons with their status. Most are disabled by default. The ones worth enabling depend on what you are testing, but a production-like setup typically needs five to seven of them.
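
As one example of such a baseline (these addon names are real; which ones you need will vary):

minikube addons enable metrics-server   # resource metrics for kubectl top and the HPA
minikube addons enable ingress          # NGINX ingress controller
minikube addons enable dashboard        # Kubernetes web UI
minikube addons enable registry         # local image registry
minikube addons enable metallb          # LoadBalancer support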

Minikube Storage: PersistentVolumes, StorageClasses, and Data Persistence Patterns#

Minikube ships with a built-in storage provisioner that handles PersistentVolumeClaims automatically. Understanding how it works – and where it differs from production storage – is essential for testing stateful workloads locally.

Default Storage: The hostPath Provisioner#

When you start minikube, it registers a default StorageClass called standard backed by the k8s.io/minikube-hostpath provisioner. This provisioner creates PersistentVolumes as directories on the minikube node’s filesystem.
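
A claim against this class needs nothing special; a minimal example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard     # minikube's default class; may be omitted
  resources:
    requests:
      storage: 1Gi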

Multi-Cluster Emulation with Minikube Profiles#

Production infrastructure rarely runs on a single cluster. You have staging, production, maybe a dedicated cluster for CI or data workloads. Minikube profiles let you run multiple independent Kubernetes clusters on one machine, each with its own version, resources, and addons. This is how you test multi-cluster workflows without cloud accounts.

What Profiles Are#

A minikube profile is a fully independent cluster. Each profile has its own:

- VM or container instance
- Kubernetes version
- CPU and memory allocation
- set of enabled addons
- kubeconfig context
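
Creating and switching between profiles takes a few commands; the profile names, versions, and sizes below are illustrative:

minikube start -p staging --kubernetes-version=v1.28.0 --memory=4096
minikube start -p prod --kubernetes-version=v1.29.0
minikube profile list              # show all profiles and their status
kubectl config use-context prod    # each profile gets a kubeconfig context of the same name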

Using Minikube for CI, Integration Testing, and Local Development Workflows#

Minikube gives you a real Kubernetes cluster wherever you need one – on a developer laptop, in a GitHub Actions runner, or in any CI environment that has Docker. The patterns differ between local development and CI, but the underlying approach is the same: stand up a cluster, deploy your workload and its dependencies, test against it, tear it down.
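
A sketch of that loop as a CI script – the manifest directory, deployment name, and test command are placeholders:

minikube start --driver=docker --wait=all    # stand up a throwaway cluster
kubectl apply -f k8s/                        # deploy the workload and its dependencies
kubectl wait --for=condition=available deployment/myapp --timeout=120s
./run-integration-tests.sh                   # placeholder for your test suite
minikube delete                              # tear it down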