HashiCorp Vault on Kubernetes: Secrets Management Done Right#

Vault centralizes secret management with dynamic credentials, encryption as a service, and fine-grained access control. On Kubernetes, workloads authenticate using service accounts and pull secrets without hardcoding anything.

Installation with Helm#

helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

Dev Mode (Single Pod, In-Memory)#

Dev mode is automatically initialized and unsealed, stores everything in memory, and loses all data on restart. The root token is root. Never use this in production.

helm upgrade --install vault hashicorp/vault \
  --namespace vault --create-namespace \
  --set server.dev.enabled=true \
  --set injector.enabled=true

Production Mode (HA with Integrated Raft Storage)#

Run Vault in HA mode with Raft consensus – a 3-node StatefulSet with persistent storage.
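
A minimal sketch of that install, using the hashicorp/vault chart's HA values (tune replica count and storage for your environment):

helm upgrade --install vault hashicorp/vault \
  --namespace vault --create-namespace \
  --set server.ha.enabled=true \
  --set server.ha.replicas=3 \
  --set server.ha.raft.enabled=true

Unlike dev mode, a production cluster starts sealed: run vault operator init once, then unseal each pod before it can serve requests.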

Kubernetes Audit Logging: Tracking API Activity for Security and Compliance#

Audit logging records every request to the Kubernetes API server. Every kubectl command, every controller reconciliation, every kubelet heartbeat, every admission webhook call – all of it can be captured with the requester’s identity, the target resource, the timestamp, and optionally the full request and response bodies. Without audit logging, you have no record of who did what in your cluster. With it, you can trace security incidents, satisfy compliance requirements, and debug access control issues.
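
What gets captured is controlled by an audit policy file passed to the API server via --audit-policy-file. A minimal sketch (rule order matters – the first matching rule wins):

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Skip high-volume noise such as kubelet lease renewals
  - level: None
    resources:
      - group: coordination.k8s.io
        resources: ["leases"]
  # Record who touched Secrets, but never log their contents
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Everything else: request metadata only
  - level: Metadata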

Kubernetes Production Readiness Checklist: Everything to Verify Before Going Live#

This checklist is designed for agents to audit a Kubernetes cluster before production workloads run on it. Every item includes the verification command and what a passing result looks like. Work through each category sequentially. A failing item in Cluster Health should be fixed before checking Workload Configuration.


Cluster Health#

These are non-negotiable. If any of these fail, stop and fix them before evaluating anything else.
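
A sketch of the kind of verification this category expects (the commands assume working kubectl access to the cluster):

# Every node should report Ready
kubectl get nodes
# Control-plane pods should all be Running and fully ready
kubectl get pods -n kube-system
# Recent warnings often point at failing components
kubectl get events -A --field-selector type=Warning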

Multi-Tenancy Patterns: Namespace Isolation, vCluster, and Dedicated Clusters#

Multi-tenancy in Kubernetes means running workloads for multiple teams, customers, or environments on shared infrastructure. The core tension is always the same: sharing reduces cost, but isolation limits blast radius. Choosing the wrong model creates security gaps or wastes money. This guide provides a framework for selecting the right approach and implementing it correctly.

The Three Models#

Every Kubernetes multi-tenancy approach falls into one of three categories, each with different isolation guarantees:
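
Namespace isolation – Tenants share one cluster and one control plane, separated by namespaces, RBAC, network policies, and resource quotas. Cheapest, weakest isolation.

vCluster – Each tenant gets a virtual control plane running inside a namespace of a shared host cluster. Stronger API-level isolation without the cost of separate clusters.

Dedicated clusters – Each tenant gets its own cluster. The strongest isolation, at the highest cost and operational overhead.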

MySQL 8.x Setup and Configuration#

MySQL 8.x is the current production series. It introduced caching_sha2_password as the default auth plugin, CTEs, window functions, and a redesigned data dictionary. Getting it installed is straightforward; getting it configured correctly for production takes more thought.

Installation#

Package Managers#

On Ubuntu/Debian, the MySQL APT repository gives you the latest 8.x:

# Add the MySQL APT repo
wget https://dev.mysql.com/get/mysql-apt-config_0.8.30-1_all.deb
sudo dpkg -i mysql-apt-config_0.8.30-1_all.deb
sudo apt update
sudo apt install mysql-server

On RHEL/Rocky/AlmaLinux, MySQL 8.x ships in the default AppStream repository, so no extra repo is needed:
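
# Install from AppStream (EL8/EL9) and start the service
sudo dnf install mysql-server
sudo systemctl enable --now mysqld
# Set a root password and drop anonymous/test accounts
sudo mysql_secure_installation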

Network Policies: Namespace Isolation and Pod-to-Pod Rules#

By default, every pod in a Kubernetes cluster can talk to every other pod. Network policies let you restrict that. They are namespace-scoped resources that select pods by label and define allowed ingress and egress rules.

Critical Prerequisite: CNI Support#

Network policies are only enforced if your CNI plugin supports them. Calico, Cilium, and Weave all support network policies. Flannel does not. If you are running Flannel, you can create NetworkPolicy resources without errors, but they will have absolutely no effect. This is a silent failure that wastes hours of debugging.
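
Assuming a policy-capable CNI, the canonical starting point is a default-deny ingress policy per namespace (the namespace name here is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}      # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress          # no ingress rules are listed, so all inbound traffic is denied

Policies are additive: traffic is allowed if any policy permits it, so allow rules are layered on top with further policies.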

RBAC Patterns: Practical Access Control for Kubernetes#

Kubernetes RBAC controls who can do what to which resources. It is built on four objects: Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings. Getting RBAC right means understanding how these four pieces compose and knowing the common patterns that cover 90% of real-world needs.

The Four RBAC Objects#

Role – Defines permissions within a single namespace. Lists API groups, resources, and allowed verbs.

ClusterRole – Defines permissions cluster-wide or for non-namespaced resources (nodes, persistent volumes, namespaces themselves).
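
RoleBinding – Grants the permissions in a Role (or a ClusterRole) to users, groups, or service accounts within a single namespace.

ClusterRoleBinding – Grants the permissions in a ClusterRole across the entire cluster.

As a sketch of how the pieces compose (the names and namespace are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]      # "" is the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader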

Security Hardening a Kubernetes Cluster: End-to-End Operational Sequence#

This operational sequence takes a default Kubernetes cluster and locks it down. Phases are ordered by impact and dependency: assessment first, then RBAC, pod security, networking, images, auditing, and finally data protection. Each phase includes the commands, policy YAML, and verification steps.

Do not skip the assessment phase. You need to know what you are fixing before you start fixing it.


Phase 1 – Assessment#

Before changing anything, establish a baseline. This phase produces a prioritized list of findings that drives the order of remediation in later phases.
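
A few illustrative baseline queries (assuming kubectl access; the second also assumes jq is installed):

# Who is bound to cluster-admin?
kubectl get clusterrolebindings -o wide | grep cluster-admin
# Pods sharing host namespaces – prime escalation paths
kubectl get pods -A -o json | jq -r '.items[] | select(.spec.hostNetwork or .spec.hostPID or .spec.hostIPC) | .metadata.namespace + "/" + .metadata.name'
# Namespaces with no NetworkPolicy objects at all
kubectl get networkpolicy -A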

Service Account Security: Tokens, RBAC Binding, and Workload Identity#

Every pod in Kubernetes runs as a service account. By default, that is the default service account in the pod’s namespace, with an auto-mounted API token that never expires. This default configuration is overly permissive for most workloads. Hardening service accounts is one of the highest-impact security improvements you can make in a Kubernetes cluster.

The Default Problem#

When a pod starts without specifying a service account, Kubernetes does three things:
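
Sets the pod's serviceAccountName to default.

Mounts that account's API token (along with the cluster CA certificate and namespace file) at /var/run/secrets/kubernetes.io/serviceaccount in every container.

Grants the pod every permission that RBAC bindings attach to that default account.

The usual first fix is to opt workloads out of token mounting unless they genuinely need API access. A minimal sketch (the pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: no-api-access
spec:
  automountServiceAccountToken: false   # no token volume is created
  containers:
    - name: app
      image: nginx:1.27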

TLS Certificate Lifecycle Management#

Certificate Basics#

A TLS certificate binds a public key to a domain name. The certificate is signed by a Certificate Authority (CA) that browsers and operating systems trust. The chain goes: your certificate, signed by an intermediate CA, signed by a root CA. All three must be present and valid for a client to trust the connection.
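
You can inspect the chain a server actually presents with openssl (the hostname is illustrative):

# Show every certificate the server sends, leaf through intermediates
openssl s_client -connect example.com:443 -servername example.com -showcerts </dev/null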

Self-Signed Certificates for Development#

For local development and testing, generate a self-signed certificate. Clients will not trust it by default, but you can add it to your local trust store.
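
A sketch using openssl (requires OpenSSL 1.1.1+ for -addext; the hostname is illustrative):

# One command: a new 2048-bit RSA key plus a self-signed cert valid for one year
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout dev.key -out dev.crt -days 365 \
  -subj "/CN=dev.local" \
  -addext "subjectAltName=DNS:dev.local"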