Minikube to Cloud Migration: 10 Things That Change on EKS, GKE, and AKS

Minikube to Cloud Migration Guide#

Minikube is excellent for learning and local development. But almost everything that “just works” on minikube requires explicit configuration on a cloud cluster. Here are the 10 things that change.

1. Ingress Controller Becomes a Cloud Load Balancer#

On minikube: You enable the NGINX ingress addon with minikube addons enable ingress. Traffic reaches your services through minikube tunnel or minikube service.

On cloud: The ingress controller must be deployed explicitly, and it provisions a real cloud load balancer. On AWS, the AWS Load Balancer Controller creates ALBs or NLBs from Ingress resources. On GKE, the built-in GCE ingress controller creates Google Cloud Load Balancers. On AKS, the application routing add-on runs a managed NGINX ingress fronted by an Azure load balancer. In every case, you pay per load balancer.
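As a sketch of the EKS case, assuming the AWS Load Balancer Controller is already installed (the hostname and Service name below are placeholders), an Ingress that provisions an internet-facing ALB looks like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80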

Monolith to Microservices: When and How to Decompose

Monolith to Microservices#

The decision to break a monolith into microservices is one of the most consequential architectural choices a team makes. Get it right and you unlock independent deployment, team autonomy, and targeted scaling. Get it wrong and you trade a manageable monolith for a distributed monolith – all the complexity of microservices with none of the benefits.

When to Stay with a Monolith#

Microservices are not an upgrade from monoliths. They are a different set of tradeoffs. A well-structured monolith is the right choice in many situations.

Multi-Tenancy Patterns: Namespace Isolation, vCluster, and Dedicated Clusters#

Multi-tenancy in Kubernetes means running workloads for multiple teams, customers, or environments on shared infrastructure. The core tension is always the same: sharing reduces cost, while isolation limits blast radius. Choosing the wrong model creates security gaps or wastes money. This guide provides a framework for selecting the right approach and implementing it correctly.

The Three Models#

Every Kubernetes multi-tenancy approach falls into one of three categories, each with different isolation guarantees:

  • Namespace isolation: Tenants share a cluster, separated by namespaces, RBAC, resource quotas, and network policies. Cheapest, weakest isolation.
  • Virtual clusters (vCluster): Each tenant gets its own control plane running inside a namespace of a shared host cluster. Stronger API-level isolation at moderate cost.
  • Dedicated clusters: One cluster per tenant. Strongest isolation, highest cost and operational overhead.
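The namespace model is the lightest and worth a concrete sketch: a tenant is, at minimum, a Namespace plus a ResourceQuota. The names and numbers here are placeholders; a real setup adds RBAC bindings and a default-deny NetworkPolicy:

apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "50"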

MySQL 8.x Setup and Configuration#

MySQL 8.x is the current production series. It introduced caching_sha2_password as the default auth plugin, CTEs, window functions, and a redesigned data dictionary. Getting it installed is straightforward; getting it configured correctly for production takes more thought.

Installation#

Package Managers#

On Ubuntu/Debian, the MySQL APT repository gives you the latest 8.x:

# Add the MySQL APT repo
wget https://dev.mysql.com/get/mysql-apt-config_0.8.30-1_all.deb
sudo dpkg -i mysql-apt-config_0.8.30-1_all.deb
sudo apt update
sudo apt install mysql-server

On RHEL/Rocky/AlmaLinux:
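# Add the MySQL YUM repo (the release RPM name changes over time;
# check dev.mysql.com/downloads/repo/yum/ for the current file)
sudo dnf install -y https://dev.mysql.com/get/mysql80-community-release-el9-1.noarch.rpm
sudo dnf install -y mysql-community-server
sudo systemctl enable --now mysqld

The release RPM above is illustrative; MySQL bumps the version with each repo update, so verify the current name before running it.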

OpenTelemetry for Kubernetes

What OpenTelemetry Is#

OpenTelemetry (OTel) is a vendor-neutral framework for generating, collecting, and exporting telemetry data: traces, metrics, and logs. It provides APIs, SDKs, and the Collector – a standalone binary that receives, processes, and exports telemetry. OTel replaces the fragmented landscape of Jaeger client libraries, Zipkin instrumentation, Prometheus client libraries, and proprietary agents with a single standard.

The three signal types:

  • Traces: Record the path of a request through distributed services as a tree of spans. Each span has a name, duration, attributes, and parent reference.
  • Metrics: Numeric measurements (counters, gauges, histograms) emitted by applications and infrastructure. OTel metrics can be exported to Prometheus.
  • Logs: Structured log records correlated with trace context. OTel log support bridges existing logging libraries with trace correlation.

The OTel Collector Pipeline#

The Collector is the central hub. It has three pipeline stages:

  • Receivers: Ingest telemetry, via OTLP or protocol-specific receivers (Jaeger, Zipkin, Prometheus).
  • Processors: Transform data in flight: batching, filtering, attribute enrichment, sampling.
  • Exporters: Send the processed data to one or more backends.
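A minimal configuration wiring the three stages into a traces pipeline might look like the sketch below. The debug exporter (called logging in older Collector releases) just prints spans to stdout; swap in a real backend exporter for production:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch: {}
exporters:
  debug: {}
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]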

PostgreSQL Setup and Configuration#

Every PostgreSQL deployment boils down to three things: get the binary running, configure who can connect, and tune the memory settings.

Installation Methods#

Package Managers#

On Debian/Ubuntu, use the official PostgreSQL APT repository:

sudo apt install -y postgresql-common
sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh
sudo apt install -y postgresql-16

On macOS: brew install postgresql@16 && brew services start postgresql@16

On RHEL/Fedora:

sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo dnf -qy module disable postgresql   # stop the distro module from shadowing PGDG packages
sudo dnf install -y postgresql16-server
sudo /usr/pgsql-16/bin/postgresql-16-setup initdb
sudo systemctl enable --now postgresql-16

Config files live at /etc/postgresql/16/main/ (Debian) or /var/lib/pgsql/16/data/ (RHEL).
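Those directories hold the files behind "who can connect" and "memory settings": pg_hba.conf is the host-based access control list, and postgresql.conf carries listen_addresses plus the tuning knobs. Illustrative entries only; the subnet and sizes are placeholders:

# pg_hba.conf: allow the app subnet with SCRAM authentication
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   10.0.0.0/8   scram-sha-256

# postgresql.conf
listen_addresses = '*'    # default is 'localhost'; restrict to specific interfaces where possible
shared_buffers = '2GB'    # common starting point: roughly 25% of system RAM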

Prometheus and Grafana Monitoring Stack

Prometheus Architecture#

Prometheus pulls metrics from targets at regular intervals (scraping). Each target exposes an HTTP endpoint (typically /metrics) that returns metrics in a text format. Prometheus stores the scraped data in a local time-series database and evaluates alerting rules against it. Grafana connects to Prometheus as a data source and renders dashboards.

Scrape Configuration#

The core of Prometheus configuration is the scrape config. Each scrape_config block defines a set of targets and how to scrape them.
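A minimal sketch, with a placeholder job name and hardcoded targets:

scrape_configs:
  - job_name: node
    scrape_interval: 15s
    metrics_path: /metrics
    static_configs:
      - targets: ['10.0.0.5:9100', '10.0.0.6:9100']

static_configs works for fixed fleets; dynamic environments usually replace it with service discovery (kubernetes_sd_configs, ec2_sd_configs, and so on).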

Release Management Patterns: Versioning, Changelog Generation, Branching, Rollbacks, and Progressive Rollouts

Release Management Patterns#

Releasing software is more than merging to main and deploying. A disciplined release process ensures that every version is identifiable, every change is documented, every deployment is reversible, and failures are contained before they reach all users. This operational sequence walks through each phase of a production release workflow.

Phase 1 – Semantic Versioning#

Step 1: Adopt Semantic Versioning#

Semantic versioning (semver) communicates the impact of changes through the version number itself: MAJOR.MINOR.PATCH. Increment MAJOR for breaking changes, MINOR for backward-compatible features, and PATCH for backward-compatible bug fixes.
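A common way to make each version identifiable in the repository is an annotated git tag per release; the version below is a placeholder:

# Cut a release tag (annotated tags record author, date, and message)
git tag -a v1.4.2 -m "Release 1.4.2"
git push origin v1.4.2

# List existing versions, newest first
git tag --list 'v*' --sort=-version:refname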

Running Kafka on Kubernetes with Strimzi#

Running Kafka on Kubernetes without an operator is painful. You need StatefulSets, headless Services, init containers for broker ID assignment, and careful handling of storage and networking. Strimzi eliminates most of this by managing the entire Kafka lifecycle through Custom Resource Definitions.

Installing Strimzi#

# Option 1: Helm
helm repo add strimzi https://strimzi.io/charts
helm install strimzi strimzi/strimzi-kafka-operator \
  --namespace kafka \
  --create-namespace

# Option 2: Direct YAML install
kubectl create namespace kafka
kubectl apply -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka   # quote the URL so the shell does not glob the ?

Verify the operator is running:
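kubectl get pods -n kafka
# The strimzi-cluster-operator pod (default name for both install methods)
# should reach Running within a minute or so. To block until it is ready:
kubectl wait deployment/strimzi-cluster-operator -n kafka \
  --for=condition=Available --timeout=120s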

Running Redis on Kubernetes#

Redis on Kubernetes ranges from dead simple (single pod for caching) to operationally complex (Redis Cluster with persistence). The right choice depends on whether you need data durability, high availability, or just a fast throwaway cache.

Single-Instance Redis with Persistence#

For development or small workloads, a single Redis Deployment with a PVC is enough:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  strategy:
    type: Recreate   # a rolling update can deadlock on the ReadWriteOnce volume
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
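        # AOF persistence on; --maxmemory is kept below the container limit (see the note after the manifest)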
        command: ["redis-server", "--appendonly", "yes", "--maxmemory", "256mb", "--maxmemory-policy", "allkeys-lru"]
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-data
          mountPath: /data
        resources:
          requests:
            cpu: 100m
            memory: 300Mi
          limits:
            cpu: 500m
            memory: 350Mi
      volumes:
      - name: redis-data
        persistentVolumeClaim:
          claimName: redis-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379

Set the Redis memory cap (--maxmemory) lower than the container memory limit. If maxmemory equals the container limit, the kernel OOM-kills the process during background saves: when Redis forks for an RDB snapshot or AOF rewrite, copy-on-write can push total memory use toward double the dataset size under heavy writes. A safe ratio is 60-75% of the container limit; the manifest above pairs 256mb with a 350Mi limit, about 73%.