Custom Resource Definitions (CRDs): Extending the Kubernetes API


CRDs extend the Kubernetes API with your own resource types. Once you create a CRD, you can kubectl get, kubectl apply, and kubectl delete instances of your custom type just like built-in resources. The custom resources are stored in etcd alongside native Kubernetes objects, benefit from the same RBAC, and participate in the same API machinery.
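As an illustration, here is a minimal CRD for a hypothetical Backup resource (the group, names, and schema fields are all invented for the example):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:          # e.g. a cron expression
                  type: string
                retentionDays:
                  type: integer
```

Once this is applied, `kubectl get backups` and `kubectl apply -f my-backup.yaml` work like any built-in resource.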

When to Use CRDs#

CRDs make sense when you need to represent application-specific concepts inside Kubernetes:

Database Connection Pooling: PgBouncer, ProxySQL, and Application-Level Patterns


Database connections are expensive resources. PostgreSQL forks a new OS process for every connection. MySQL creates a thread. Both allocate memory for session state, query buffers, and sort areas. When your application scales horizontally in Kubernetes – 10 pods, then 20, then 50 – the connection count multiplies, and most databases buckle long before your application pods do.

Connection pooling solves this by maintaining a smaller set of persistent connections to the database and sharing them across many application clients. Understanding pooling options, deployment patterns, and sizing is essential for any production database workload on Kubernetes.
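As a sketch of the mechanics, a minimal PgBouncer configuration might look like this (hostname, pool sizes, and file paths are illustrative):

```ini
[databases]
; clients connect to "mattermost" on PgBouncer;
; PgBouncer forwards to the real PostgreSQL server
mattermost = host=dt-postgresql port=5432 dbname=mattermost

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling: a server connection is held only for the
; duration of one transaction, then returned to the pool
pool_mode = transaction
; 50 app pods x 20 client connections can share 20 server connections
default_pool_size = 20
max_client_conn = 1000
```

The key tradeoff is `pool_mode`: transaction pooling gives the highest connection reuse but breaks session-level features such as prepared statements and advisory locks, so verify your application tolerates it before enabling it.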

Helm Release Naming Gotchas: How Resource Names Actually Work


Helm charts derive Kubernetes resource names from the release name, but every chart does it differently. If you assume a consistent pattern, you will get bitten by DNS resolution failures, broken connection strings, and mysterious “service not found” errors.

Bitnami PostgreSQL: Names Are Not What You Expect#

The Bitnami PostgreSQL chart names resources using the release name directly, not {release-name}-postgresql. This catches nearly everyone.

# You deploy like this:
helm upgrade --install dt-postgresql bitnami/postgresql \
  --namespace dream-team \
  --set auth.database=mattermost \
  --set auth.username=mmuser

# You expect these resource names:
#   Pod:     dt-postgresql-postgresql-0   <-- WRONG
#   Service: dt-postgresql-postgresql     <-- WRONG

# Actual names:
#   Pod:     dt-postgresql-0
#   Service: dt-postgresql

This means your application connection string should reference dt-postgresql, not dt-postgresql-postgresql. If you chose release name postgresql, your service is just postgresql – which might collide with other things in your namespace.
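In practice that means the application's database host setting points at the release name itself. A pod spec fragment, assuming the dt-postgresql release above:

```yaml
# The DB host is the Service, whose name equals the Helm
# release name (not <release>-postgresql)
env:
  - name: DB_HOST
    value: dt-postgresql
  - name: DB_PORT
    value: "5432"
```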

Kubernetes Operator Development: Patterns, Frameworks, and Best Practices


Operators are custom controllers that manage CRDs. They encode operational knowledge – the kind of tasks a human operator would perform – into software that runs inside the cluster. An operator watches for changes to its custom resources and reconciles the actual state to match the desired state, creating, updating, or deleting child resources as needed.
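The reconcile loop reduces to "compare desired state with actual state, act on the difference." A minimal sketch of that decision logic in plain Go (no controller-runtime; the types and names are invented for illustration):

```go
package main

import "fmt"

// Action describes what the reconciler decided to do.
type Action string

const (
	ActionCreate Action = "create"
	ActionUpdate Action = "update"
	ActionDelete Action = "delete"
	ActionNone   Action = "none"
)

// reconcile compares desired state against actual state and returns
// the action needed to converge them. desiredExists models whether
// the custom resource is present; actualExists models whether its
// child resource is present.
func reconcile(desiredExists, actualExists bool, desiredSpec, actualSpec string) Action {
	switch {
	case desiredExists && !actualExists:
		return ActionCreate // child missing: create it
	case !desiredExists && actualExists:
		return ActionDelete // CR deleted: clean up the child
	case desiredExists && desiredSpec != actualSpec:
		return ActionUpdate // drift detected: patch the child
	default:
		return ActionNone // already converged
	}
}

func main() {
	fmt.Println(reconcile(true, false, "v2", ""))  // create
	fmt.Println(reconcile(true, true, "v2", "v1")) // update
	fmt.Println(reconcile(false, true, "", "v1"))  // delete
}
```

A real operator wraps this logic in a watch loop (typically via controller-runtime) so it re-runs on every change to the custom resource or its children, which is what makes the reconciliation continuous rather than one-shot.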

Operator Maturity Model#

The Operator Framework defines five maturity levels:

| Level | Capability        | Example                                                    |
|-------|-------------------|------------------------------------------------------------|
| 1     | Basic install     | Helm operator deploys the application                      |
| 2     | Seamless upgrades | Operator handles version migrations                        |
| 3     | Full lifecycle    | Backup, restore, failure recovery                          |
| 4     | Deep insights     | Exposes metrics, fires alerts, generates dashboards        |
| 5     | Auto-pilot        | Auto-scaling, auto-healing, auto-tuning without human input |

Most custom operators target Levels 2-3. Levels 4-5 are typically reached only by mature projects like the Prometheus Operator or Rook/Ceph.

Multi-Cluster Kubernetes: Architecture, Networking, and Management Patterns


A single Kubernetes cluster is a single blast radius. A bad deployment, a control plane failure, a misconfigured admission webhook – any of these can take down everything. Multi-cluster is not about complexity for its own sake. It is about isolation, resilience, and operating workloads that span regions, regulations, or teams.

Why Multi-Cluster#

Blast radius isolation. A cluster-wide failure (etcd corruption, bad admission webhook, API server overload) only affects one cluster. Critical workloads in another cluster are untouched.

OAuth2 and OIDC for Infrastructure

OAuth2 vs OIDC: What Actually Matters#

OAuth2 is an authorization framework. It answers the question “what is this client allowed to do?” by issuing access tokens. It does not tell you who the user is. OIDC (OpenID Connect) is a layer on top of OAuth2 that adds authentication. It answers “who is this user?” by adding an ID token – a signed JWT containing user identity claims like email, name, and group memberships.
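For instance, the decoded payload of an ID token might carry claims like these (all values are illustrative):

```json
{
  "iss": "https://idp.example.com",
  "sub": "user-123",
  "aud": "my-client-id",
  "email": "jane@example.com",
  "name": "Jane Doe",
  "groups": ["platform-team"],
  "iat": 1735686000,
  "exp": 1735689600
}
```

The `iss`, `sub`, `aud`, and `exp` claims are mandated by the OIDC spec; claims like `groups` depend on the identity provider's configuration. The access token, by contrast, may be opaque to the client: its contents are the authorization server's business.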

OPA Gatekeeper: Policy as Code for Kubernetes


Gatekeeper is a Kubernetes-native policy engine built on Open Policy Agent (OPA). It runs as a validating admission webhook and evaluates policies written in Rego against every matching API request. Instead of deploying raw Rego files to an OPA server, Gatekeeper uses Custom Resource Definitions: you define policies as ConstraintTemplates and instantiate them as Constraints. This makes policy management declarative, auditable, and version-controlled.
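A sketch of that pattern, based on the widely used required-labels example (the template name and the required label are illustrative):

```yaml
# ConstraintTemplate: defines the policy logic and its parameters
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        violation[{"msg": msg}] {
          provided := {l | input.review.object.metadata.labels[l]}
          required := {l | l := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("missing required labels: %v", [missing])
        }
---
# Constraint: instantiates the template against specific resources
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-team-label
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["team"]
```

The template is reusable: one ConstraintTemplate can back many Constraints, each targeting different resource kinds or requiring different labels.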

PostgreSQL 15+ Permissions: Why Your Helm Deployment Cannot Create Tables


Starting with PostgreSQL 15, only the database owner and superusers can create objects in the public schema by default. This breaks a common Helm pattern where you create a user, grant privileges, and expect it to create tables. The application connects fine but fails on its first CREATE TABLE.
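In psql terms, the failure and the two usual fixes look roughly like this (database and user names are illustrative; either fix alone is sufficient):

```sql
-- PostgreSQL 15+: this sequence is no longer enough
CREATE USER mmuser WITH PASSWORD 'secret';
GRANT ALL PRIVILEGES ON DATABASE mattermost TO mmuser;
-- Connecting as mmuser:
--   CREATE TABLE posts (id int);
--   ERROR: permission denied for schema public

-- Fix 1: make the application user the database owner
ALTER DATABASE mattermost OWNER TO mmuser;

-- Fix 2: grant CREATE on the public schema explicitly
GRANT CREATE ON SCHEMA public TO mmuser;
```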

The Symptom#

Your application pod logs show something like:

Error: permission denied for schema public

Or from an ORM like Mattermost’s:

Redis on Kubernetes: Deployment Patterns, Operators, and Production Configuration


Running Redis on Kubernetes requires more thought than deploying a stateless application. Redis is stateful, memory-sensitive, and its clustering model makes assumptions about network identity that conflict with Kubernetes defaults. This guide covers the deployment options from simplest to most complex, the configuration details that matter in production, and the mistakes that cause outages.
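On the network-identity point: Redis nodes expect stable addresses, which is why Redis deployments typically pair a StatefulSet with a headless Service so each pod keeps a predictable DNS name across restarts. A minimal sketch (persistence omitted; names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-headless
spec:
  clusterIP: None          # headless: DNS records per pod, no load balancing
  selector:
    app: redis
  ports:
    - port: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis-headless   # pods become redis-0.redis-headless, redis-1...
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
```

A production setup would add a PersistentVolumeClaim template and memory limits; the point here is only the stable per-pod identity.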

Deployment Options#

There are three main approaches to deploying Redis on Kubernetes, each with different tradeoffs.

Secrets Management in Minikube: From Basic to Production Patterns


Secrets in Kubernetes are simultaneously simple (just base64-encoded data in etcd) and complex (getting the workflow right for rotation, RBAC, and git-safe storage requires multiple tools). Setting up proper secrets management locally means you can validate the entire workflow – from creation through mounting through rotation – before touching production credentials.
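The "just base64" part is worth internalizing: base64 is reversible encoding, not encryption, and anyone who can read the Secret object can decode it. A quick illustration:

```shell
# Encode the way Kubernetes stores Secret data
echo -n 'supersecret' | base64
# -> c3VwZXJzZWNyZXQ=

# Decoding it back requires no key or password
echo 'c3VwZXJzZWNyZXQ=' | base64 --decode
# -> supersecret
```

This is why RBAC on Secret access and encryption at rest for etcd matter even in a local Minikube cluster: the encoding itself provides no protection.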

Kubernetes Secret Types#

Kubernetes has several built-in secret types, each with its own structure and validation: